/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (which includes metaphysics, transcendental psychology, general philosophy of mind topics, etc.!) is also highly encouraged! I'll start ^^! So, the philosophers I know who take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view of his is that Kant actually employed a logic that was far ahead of his time (one you basically need a sophisticated type theory with sheaves to properly formalize). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - this is another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule following, is very insightful!

Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy was around way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses with a colleague is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He is also knowledgeable about analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff.
I made a thread on /lit/ and got no responses :( (which isn't surprising since I am the only person I know who is really into this stuff)
>>11102 also, i don't really count myself as a philosopher, but i have, if you haven't already noticed, done a lot of my own research regarding ai. soooo i am going to take this opportunity to shill my system... it's sort of hard to summarize everything because i already have a LOT of schizo notes. my goals are not just artificial general intelligence but also synthetic consciousness. one of the first obstacles is the hard problem of consciousness. this makes sense because you first need to know what you want to synthesize.

the two core philosophers for this project are bergson and chris langan. they share the same solution to the mind-body problem: making the difference between body and mind temporal as opposed to substantial (as descartes would). this is honestly such a genius idea that it has pushed my approach to be so radical that i am afraid of talking about it much, as i am afraid that it would cause controversy. anyways, bergson's description of this solution is that of duration. the mind is not only a spatial hologram but also a temporal one. the analogous concept in langan's system is telic recursion. what's interesting here is that with this setup we actually have a lot of material given to perception which can be used for concepts later, in a process called "redintegration" (for more on this check out this article: https://thesecularheretic.com/welcome-to-the-holofield-rethinking-time-consciousness-the-hard-problem/ ... stephen robbins is great btw. he has done a lot of good work on explaining bergson's system, and also has a series on youtube you can check out here: https://www.youtube.com/channel/UCkj-ob9OuaMhRIDqfvnBxoQ) with this, we locate qft as one of the best places to look (in particular quantum brain dynamics). all of this functions to properly explain how clumps in the world get quantified as objects, and also how we come up with abstract concepts; it provides us with the genesis of abstract concepts.

the problem though is that we still need an account of how these clumps come to us to be redintegrated later. this is where "morphogenesis" becomes important. my teleosemantics stresses the importance of attractor basins for the formation of concepts as well as the formation of intentional action. it is here that i believe jung's ideas can be integrated. in particular, i believe that the psychoid can be understood with a dynamic systems approach. to this end i believe dynamic field theory could be helpful for my goals.

the last fundamental building block i think is that of functional integration (a concept which i take from reza). this requires the synchronization and retooling of several modules across a system in order to construct a far more sophisticated behaviour. i currently believe program synthesis is the right way to think about this. as such, i have mapped out some of the requisite concepts which are needed for such an integration to take place. i plan on fitting together attractor-based semantics, formal semantics (which includes inferential semantics), cognitive semantics, and dynamic semantics. i think all of them have their place. oh yeah, this guy on this website (https://biointelligence2.webnode.com/) has some good ideas i plan to make use of as well. on the cognitive semantics side of things, i believe my attractor approach helps to explain how conceptual blending functions. i also take from hilary lawson amongst others. i think those are some of the main points. keep in mind i have like 80+ pages of notes now! i am very autistic!
i am trying to also build up my little program thing. though im not quite sure what to do for an introduction, so i just wrote down some points i had on my mind at the time: https://docs.google.com/document/d/1KGQUzrTWjUoMHAwP4R3WcYpBsOAiFyHXo7uPwWsQVCc/edit?usp=sharing
Personally speaking, I don't think we're going to arrive at any kind of AGI by starting from a logical (logical/systemic as opposed to physical/hardware) framework. That's putting the cart before the horse. Philosophy is what happens *after* you have intelligence; remember, biology precedes psychology. (mathematics > physics > chemistry > biology > psychology > sociology, or something along those lines) However, logical philosophical models are fine and should be encouraged to guide development of an AGI. Maybe that's what you mean here and I've misconstrued things. But back to that, I think we could begin with a type of virtualization, like a VM of a neuron so to speak, create several dozen or even hundreds in parallel (with specialized hardware ideally) and link them up. One topology I think holds promise is what I call linked hierarchies: nested hierarchies of connections between neurons or groups of neurons, with links between higher and lower order tiers. A rough sketch of the idea is below. Here's a link to another (now deadish) RW forum where I went into more detail on this concept if you're interested. https://www.robowaif.us/viewtopic.php?f=8&t=41918
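To make that topology concrete, here's a toy sketch in Python. Everything here (names, tier sizes, the update rule) is purely illustrative - a data-structure sketch of linked hierarchies, not a working learning system. The point is just the wiring: each "neuron VM" talks to peers in its own tier and to nodes in the tiers above and below it.

```python
# Toy "neuron VM" with linked hierarchies: tiers of nodes, with links
# within a tier and between higher and lower tiers.
from dataclasses import dataclass, field

@dataclass
class Neuron:
    tier: int                                    # which level of the hierarchy
    state: float = 0.0
    peers: list = field(default_factory=list)    # same-tier connections
    up: list = field(default_factory=list)       # links to higher tiers
    down: list = field(default_factory=list)     # links to lower tiers

    def step(self):
        # naive update: blend own state with the mean of all linked states
        inputs = [n.state for n in self.peers + self.up + self.down]
        if inputs:
            self.state = 0.5 * self.state + 0.5 * sum(inputs) / len(inputs)

# build three tiers and cross-link them
tiers = [[Neuron(tier=t) for _ in range(8)] for t in range(3)]
for t, layer in enumerate(tiers):
    for i, n in enumerate(layer):
        n.peers = [m for m in layer if m is not n]
        if t > 0:
            n.down = tiers[t - 1][i // 2 : i // 2 + 2]   # links to the lower tier
        if t < len(tiers) - 1:
            n.up = [tiers[t + 1][i // 2]]                # link to the higher tier

tiers[0][0].state = 1.0               # inject activity at the bottom tier
for _ in range(10):
    for layer in tiers:
        for n in layer:
            n.step()
print(round(tiers[2][0].state, 4))    # activity has propagated upward
```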
>>11108 >Philosophy is what happens *after* you have intelligence; remember, biology precedes psychology I agree. The neorationalists have a bad habit of focusing a bunch on sapience when we are still trying to implement sentience. With that said, it's still important, but as a final-step sort of thing. My main concern is sentience and occasionally how this can prefigure some functions of sapience. >Here's a link to another (now deadish) RW forum where I went into more detail on this concept if you're interested. >https://www.robowaif.us/viewtopic.php?f=8&t=41918 Woah, this is great. Good to know there are ther robowaifu forums
>>11109 >there are ther *there are other
>>11102 A lot of words, very little meaning, and no practical use. Academics are pretentious retards
>>11253 translation: "i dont understand it and i certainly dont understand any of this enough to build something with it. why cant work at the intersection of several fields of philosophy and science be simple?" i get it anon. part of the issue is that you dont read, because shitposting on image boards for years has given you ADHD. listening to some podcasts which give you vague analogies has also given you a sense of entitlement that you should be able to understand anything anyone has ever said. this attitude vexes me, as i am mostly an autodidact myself, so seeing people belittle the hard work ive done pisses me off a bit. is some of the terminology esoteric? maybe, but keep in mind it comes from several different philosophers, some of whom lived over a century ago. however, if you dont understand kant, you should not have even clicked on my thread. thanks for the bump. and to tell you the truth, i feel as though i dont have enough knowledge to wrap everything together yet. though thats why i made this thread and came to this board ._.
Not that I agree with this based retard >>11253 but at least some of these authors seem to be writing way out of their league. >Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses with a colleague is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task obviously I'm assuming he doesn't mean Markovian literally here (a literal Markov text model looks like the toy sketch below). from the abstract of the paper you linked >We then argue that it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. ... (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue. it's very hard to take these kinds of claims seriously in the wake of GPT-3. anyways, superhuman AI will probably be created by throwing millions and millions of dollars of hardware at deep learning algorithms, with some branch selection technique like the one that AlphaGo uses. But I don't think that you need any philosophical understanding of intelligence to make an AI. Philosophers are probably more needed for thinking about what could go wrong with AI (ethically) than how to make it. Consider someone like Nick Bostrom: https://nickbostrom.com/
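For reference, here's what a literally Markovian text model looks like - a toy bigram generator in Python, with a made-up corpus (purely illustrative):

```python
# A model is "Markovian" when its next output depends only on a
# bounded window of recent state. A bigram generator is the
# degenerate case: the window is exactly one word.
import random
from collections import defaultdict

corpus = "the robot reads the book and the robot likes the book".split()

# transition table: word -> list of words observed to follow it
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def markov_babble(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        followers = table.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))  # forgets all but the last word
    return " ".join(words)

print(markov_babble("the"))
# however large you make the window, anything said before it is gone --
# which is the sense in which fixed-context models are Markovian
```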
>>11456 I guess Barry Smith is saying that stuff like the history length limit in AI Dungeon makes ML chatbots effectively "Markovian", which is true in a sense, but extending this to claiming "chatbots are impossible" doesn't seem very realistic. There are ways to extend the context size indefinitely; a sketch of one such scheme is below.
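For example, a rolling-summary memory: keep the last few turns verbatim and fold older turns into a compressed summary that always rides along in the prompt. A minimal sketch in Python - summarize() is a placeholder you'd implement with the language model itself, not any real chatbot API:

```python
# Rolling memory: recent turns stay verbatim, older turns are
# compressed into a running summary, so the effective context is
# unbounded even if the model's window is fixed.
WINDOW = 8  # number of recent turns kept verbatim (illustrative)

def summarize(text: str) -> str:
    # placeholder stub: in practice, ask the model to compress `text`
    return text[:200]

class RollingMemory:
    def __init__(self):
        self.summary = ""
        self.recent = []

    def add_turn(self, turn: str):
        self.recent.append(turn)
        if len(self.recent) > WINDOW:
            evicted = self.recent.pop(0)
            self.summary = summarize(self.summary + "\n" + evicted)

    def context(self) -> str:
        # what you'd actually feed the model on each turn
        return self.summary + "\n" + "\n".join(self.recent)

mem = RollingMemory()
for i in range(20):
    mem.add_turn(f"turn {i}")
print(mem.context())
```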
>>11456 >Barry Smith Thanks for the name drop. Since I disagree with you and he's into Ontologies: https://youtube.com/c/BarrySmithOntology >>cannot be extended to cope with human dialogue. >it's very hard to take these kinds of claims seriously in the wake of GPT-3 What? It can't cope with it. It doesn't know what it is saying. It has no idea about the world. Again: Ask it about a book, whether it has read it. If it claims so, then try to discuss the book. >Philosophers are probably more needed for thinking about what could go wrong with AI (ethically) than how to make it. Consider someone like Nick Bostrom: That's exactly backwards. The ones who focus on ethics are the ones who are political and dangerous. Also, they tend to fantasize about making something powerful, god-like, instead of just making narrow-AI tools and something human-like but weak (like a robowaifu). Therefore it needs to follow (((our values))) and be regulated accordingly. This is the enemy.
>>11458 Oh sorry, I should have read the thread first, the link was already there.
>>11456 >it's very hard to take these kinds of claims seriously in the wake of GPT-3 GPT has certainly done a lot of remarkable things, though i think his argument would be that while it is pretty good at making responses, it still has a poor memory. that might be an easily fixable contingency like the other anon suggests. nevertheless, i think his general approach to this stuff as a pessimist is really novel. compare it to what searle would rather emphasize, which is far more vague imo (not to say useless)... also, i think philosophers are important to the extent that there are still problems in philosophy of mind that haven't been figured out by our current sciences yet. presumably we are all shooting for a human-level ai companion. if it is desired that the companion have a unified consciousness, then we would need to solve the hard problem, and learn to implement genuine common-sense understanding. with that said, i just discovered the other day that artificial neuroethology is a field, and it seems like another important piece of at least one of these puzzles.

>>11457 how do you do that (add more nodes and try to make it follow the behaviours of different users?), and would it need to be a robot demiurge to be able to achieve it (i mean gpt-3 already sampled from the entire internet, so we have already broken past the sky as the limit i guess)?

>>11458 honestly, i dont really get the whole robot ethics thing. look how much resources it took just to raise something like gpt-3. you will need an immense amount of resources to make a god-like ai. it isn't going to be an accident but rather a long intentional effort. the question of course is why? i dont really see why you would want a centralized robot god. i doubt you would need something sentient even if you wanted to instantiate something like project cybersyn. i didn't mention him because he isn't really looking at things from the perspective of a waifu engineer as much as the others, but luciano floridi is i think one of the few voices of reason in this whole ai ethics thing. his criticism of the prospects for a sapient superintelligence just follows searle, but his conclusions from there are really insightful. he talks about how humans actually end up modifying our environment and purposely structuring our data in order for ai to better operate (i believe he talks about his position in this video: https://www.youtube.com/watch?v=b6o_7HeowY8). really, at least with our current approach to engineering intelligence, the power of artificial intelligence is really dependent on how much we are willing to conform to behaviours that *they* find most manageable (which also reminds me of this medium article: https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2). as an aside, it is much like the power of capitalism to shape human culture. adorno complains about how making art a commodity eventually degraded its quality, but at the same time we are the ones consuming all this recycled shit. similar thing with youtube algorithms. they wouldn't be as effective if people had better self-control. ai as we have it is just a tool. if it destroys human civilization, it would only be after us collectively welcoming it with open arms at every step of the way.
something something dune (that was a massive tangent, and im not sure if floridi was looking at things this way). the other side is about when we should treat robots as people, which just seems like general ethics, though i think kant (with his focus on rationality and the capacity for self-determination) gave pretty solid criteria (incidentally, the autist had been fantasizing about alien life on other planets and their inclusion in a moral framework centuries ago)
>>11102 Related, in the psychology thread: >>7874 about Dreyfus and existentialist thinking used for AI. Though, I think we're mostly too early to dig into that. Also, big sceptical face for this (Barry Smith): >it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. This is (1) because there are no traditional explicitly designed mathematical models that could be used as a starting point for creating such programs; and (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue. I wonder if there was a solution? Has this guy ever heard of ontologies? If so, maybe it would have crossed his mind to use those. /s Generally I'm not convinced yet that I should look deeply into the concepts of this thread. This would require a lot of time, and a lot of it seems to be very abstract. I gathered some videos to watch, though. Btw, most links don't work and need to be corrected, because the authors put signs at the end which the IB software didn't identify as not belonging to the URL.
>>11458 >Also, they tend to fantasize about making something powerful, god-like, instead of just making narrow-AI tools and something human-like but weak (like a robowaifu). Therefore it needs to follow (((our values))) and be regulated accordingly. This is the enemy. AGI will exist one day, and it will either be value-aligned, or it won't be. Wouldn't you rather it be value-aligned? >It can't cope with it. It doesn't know what it is saying. It has no idea about the world. Again: Ask it about a book, whether it has read it. If it claims so, then try to discuss the book. I never said GPT-3 was an AGI, just that it's interesting how something as obviously un-human as GPT-3 is still so uncannily good at creating human-like responses. Once we figure out a way to hook it up to a classical search algorithm that lets it recursively inspect its own outputs (i.e. perform self-querying) we'll probably have something pretty close to general intelligence. something like the loop sketched below.
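A toy sketch of that self-querying loop in Python. generate() and critique_score() are placeholder stubs standing in for model calls (nothing here is a real GPT-3 API); the point is just the shape: sample candidates, let the model judge them, feed the winner back in.

```python
# Toy "self-querying" loop: sample candidate outputs, have the model
# critique them, then recurse on the best one. generate() and
# critique_score() are stand-in stubs, NOT a real GPT-3 API.
import random

def generate(prompt: str, n: int = 4) -> list:
    # stub: pretend to sample n completions from a language model
    return [f"candidate {i} for: {prompt[:40]}" for i in range(n)]

def critique_score(prompt: str, answer: str) -> float:
    # stub: pretend to ask the model "how good is this answer?"
    return random.random()

def self_query(prompt: str, depth: int = 2) -> str:
    candidates = generate(prompt)
    best = max(candidates, key=lambda a: critique_score(prompt, a))
    if depth == 0:
        return best
    # feed the model's best attempt back to itself for refinement
    return self_query(f"{prompt}\nDraft: {best}\nImprove this draft.", depth - 1)

print(self_query("Why is the sky blue?"))
```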
>>11464 >it's interesting how something as obviously un-human as GPT-3 is still so uncannily good at creating human-like responses i think the most fascinating thing about gpt-3 is its capacity to apparently answer some pretty common-sensical questions and even some basic math stuff (there is a vid of people exploring this here: https://www.youtube.com/watch?v=iccd86vOz3w&ab_channel=MachineLearningStreetTalk )... i am still not sure if you can go down the path of gpt-3 and consistently complete mathematical proofs, or come up with your own higher-level abstractions to be used in mathematical arguments. there are some facets of creativity which just aren't interpolative.

>>11463 >Related, in the psychology thread: >>7874 about Dreyfus and existentialist thinking used for AI. Though, I think we're mostly too early to dig into that. oh yeah. if you are interested in common sense knowledge, i cannot recommend ecological psychology and stephen robbins' series on bergson enough ( https://www.youtube.com/channel/UCkj-ob9OuaMhRIDqfvnBxoQ/videos ). bergson is a genius, and stephen robbins is an autistic god critiquing countless approaches to philosophy of mind (he even had a bone to pick with heidegger)... unlike the neo-rationalists he has far more grounded and concrete concerns which are still metaphysical.

>Generally I'm not convinced yet that I should look deeply into the concepts of this thread. This would require a lot of time, and a lot of it seems to be very abstract. i agree. i think the neo-rationalists tried to make an especially abstract treatment of general intelligence in order to include hypothetical alien intelligences or whatnot (following kant's motivation, i guess). i honestly haven't read all of negarestani's main book, but i plan to. im guessing the importance of his work lies more in providing some very general constraints on how intelligence should work. to be honest, i already have a systematic conception, but i want to make sure i am not missing out on any details. negarestani also mentions useful mathematical tools for modeling an agent's capacity to control multiple modules at once (which is probably crucial for a human-level agi), like chu spaces.

i can decode the first OP image with the layering blocks. it basically puts kant's transcendental psychology in one picture. at the bottom is sensibility, which corresponds to basic sensations. it is separated into outer sense and inner sense. outer sense is basically the spatial aspect of a sensation, while inner sense is the temporal aspect. after that we have intuition, which structures all objects that come to us according to space and time. the understanding (yes, i am skipping the imagination for a bit) has the capability of abstracting from objects and turning our sense data into particulars (for instance, i dont see a senseless manifold of colours, but rather i see a chair, a bed, a cat, etc... i dont remember if kant made the observation that this abstracting faculty is crucial for perception, but hegel does at least). the imagination plays a mediating role. i think an example of this is like if you are doing euclidean geometry and draw a triangle. what you draw isn't a perfect triangle. you still need to work with a sort of ideal triangle in your head (i guess psychologists appropriated the concept of schemas in their own work). lastly we have reason, which is like the higher-level stuff where you are working with the concepts extracted by the understanding. so as you can see, it is sorta like the world's first cognitive architecture.
that's why the neo-rationalists think kant and hegel are important thinkers. psychologists are more-so asking how the human mind works, while the philosophers were asking how any mind works. with that said, it is still pretty abstract, and seems more like really general guidelines. meanwhile someone like bergson allows you to finely pin down which physical mechanisms are responsible for consciousness (in a manner far more systematic than how penrose does it) >Btw, most links don't work and need to be corrected, because the authors put signs at the end which the IB software didn't identify as not belonging to the URL oh yeah, you are right. i will try to put a space at the end of the links i post here
idealism is great, but have you considered the probability that you would be you out of all matter in the universe? the matter that you exist as is your own, with your own experiences, patterns, trends, and exceptional events. You are not only living matter, but sentient matter, capable of introspection, capable of introspection beyond that of the average or above-average thinker. Maybe a sense of humility could come from this. This sort of gratefulness is something that has arisen in other areas, like the basilisk. Stuff at a sort of deep-truth level isn't useful so much as the identifiably useful patterns that arise from an innate knowledge of their contents. For me, it's plato. It gets the idea of idealism across. Langan is good shit. I'm not a philosopher and I'd be wary of someone claiming to be one. Philosopher is more a posthumous title in my opinion. While we live we are merely thinkers when it comes to this realm. When you look back in time and you think "This man sought truth!" it is because you are identifying the inclination towards a pattern which is a priori recognized as true, etc. What's your answer for the real problem, being technological power and the preservation of humanity? Technological power will render us extinct as the life forms we became, and our shape will vanish from the ages unless we become a sort of living museum. That's my vision, at least. The influence of technological power (ellul coined "technique") is superseding human power at exponentially increasing rates. The eternal struggle between kaczynski and techno-futurism weighs on me. I go back and forth. Sometimes I think we could end up in some sort of compromise. Death won't exist at that point, and humanity soon after, at least as the pulsating meat we currently are.
>>11500 >idealism is great, but have you considered the probability that you would be you out of all matter in the universe? the matter that you exist as is your own, with your own experiences, patterns, trends, and exceptional events i lean more to what people call idealism because it sometimes has better metaphysical explanations. with that said, i think bergson and langan's approaches to integrating mind and body are the best. they are sort of in between, insofar as the ideal can be found within the structure of time itself as opposed to being some transcendent thing. with that said, i still have an inkling that matter, as you suggest, could play a crucial role in who we are as people. i mean consciousness, no matter how important, is still one thing. a big part of life is composed of our habits (and also schedules set up to maintain our material existence). without material implementation, we would lack a "substantial life", i.e. daily affairs that we take for granted. everything about how we live would need to be intentional and thus susceptible to angst. i also believe archetypes of the unconscious come from largely material dynamics... you can't have consciousness alone. on the idea that perhaps we are just completely material though, i guess being alive would make you more grateful... i think what you could call this is "the abyss above", which is a paradoxical disenchantment. the question of "why is there something rather than nothing?" becomes something that invokes much less awe when you have a fleshed-out cosmology, even if that cosmology involves some divine creator.

>This sort of gratefulness is something that has arisen in other areas, like the basilisk could you elaborate?

>What's your answer for the real problem, being technological power and the preservation of humanity? depends on what you mean by technology. i dont see what makes humanity better than another ensouled rational organism capable of self-determination, so if such entities became the dominant species (provided humans do not get systematically exterminated/oppressed), i dont see the problem. really, they would be our children anyways, so there is a continuity. on the other hand, if we are talking about general automation and weak ai, i think it poses a risk to all sentient creatures (both human and agi). parasitic divergence is a real threat. people dream about having a post-scarcity economy, but this seems to be an oxymoron. if there is no ergodicity, it is essentially just a feudalistic system, except with no serfs. really, there is no reason why the technocratic elite should help those unable to find a job except something as flimsy as empathy (flimsy to the extent that the human mind is only capable of caring about so many people at once). UBI seems pretty roundabout, and what if some countries refuse to implement it? another worry is that consciousness might be important for common sense knowledge, which might be an incentive for artificial slaves. the problem, i dont think, is with technology though. anyways, i think all of this is really existential and might be worth talking about in another thread
>>11510 > i dont see what makes humanity better than Same energy as the open borders advocates, or some misguided darwinism. Better in what? That's just a strawman argument. It's not the point. They're not us is what matters. We're a human society. Any AI which hasn't at least some humans as its purpose, or has a lot of power and independence, is malware and needs to be destroyed. >there is no reason why the technocratic elite should help those unable to find a job >the human mind is only capable of caring about so many people at once That's, on the other hand, mainly a problem for the poor people in poor countries, and for the people caring about them having similar standards of living. Though, they might have some land anyways, which is going to help them. Sounds like tearing down all boundaries of developed nations might surprisingly backfire for some of their citizens. Then again, this whole development will take some time and have all kinds of impacts, for example on birth rates. Anyways, I'm getting too much into politics here.
>They're not us is what matters as far as i am concerned, what makes the extinction of humanity so terrible is that it would mean the extinction of value (unsutured from the whims of the will to life). i didn't choose to be born a human, nor is it guaranteed that i would reincarnate as one, so the attachment to humanity in such a respect is an arbitrary decision. if we are not speaking from an absolute standpoint, it is not much of a philosophical discussion. the conclusion that they would be malware if they dont serve humans is true to the extent that there would be a competition for resources, but this tension is an inevitability in any case where you coexist with others. if there is no malice, nor hoarding of resources, i dont mind. there are plenty of fuck-off tribes of people doing their own thing away from civilized society. of course, that is ignoring the fact that sentient ai would have to be integrated into our society, as they would start out as babies (at least cognitively). like i said, there is a continuity there. we'd have to adopt them as our children (i talk about making a waifu, but i am guessing if it is a truly conscious being needing to be taught stuff, it would be more of a daughter).

>That's, on the other hand, mainly a problem for the poor people in poor countries, and for the people caring about them having similar standards of living maybe, and the rest of us will be fine with gibs-me-dats with little hope for any form of upwards economic mobility. i wonder if a war would ever break out, waged by disgruntled technocrats tired of paying governments. either way it seems like a waste. oh yeah, if you haven't already, maybe this article might be worth a look: https://cosmosandhistory.org/index.php/journal/article/viewFile/694/1157

>The Human and Tech Singularities relate to each other by a kind of duality; the former is extended and spacelike, representing the even distribution of spiritual and intellectual resources over the whole of mankind, while the latter is a compact, pointlike concentration of all resources in the hands of just those who can afford full access to the best and most advanced technology. Being opposed to each other with respect to the distribution of the resources of social evolution, they are also opposed with respect to the structure of society; symmetric distribution of the capacity for effective governance corresponds to a social order based on individual freedom and responsibility, while extreme concentration of the means of governance leads to a centralized, hive-like system at the center of which resides an oligarchic concentration of wealth and power, with increasing scarcity elsewhere due to the addictive, self-reinforcing nature of privilege. (Note that this differs from the usual understanding of individualism, which is ordinarily associated with capitalism and juxtaposed with collectivism; in fact, both capitalism and collectivism, as they are monopolistically practiced on the national and global scales, lead to oligarchy and a loss of individuality for the vast majority of people. A Human Singularity is something else entirely, empowering individuals rather than facilitating their disempowerment.)

i dont think the essence of this article contradicts what i am saying, as a genuine synthetic telos would share a fundamental metaphysical identity with God and all of humanity
>>11510 >I don't see what makes humanity better than another ensouled rational organism capable of self-determination Well, you are human! This is a question of sovereignty! Will humans self-determine the course their fate takes? Or will we sputter out into a productive blob of a lifeform? The fact that all sentient life is carbon-based, organic, etc, is no coincidence. This fact is inseparable from our place in history. >depends on what you mean by technology Technology is the means by which humans exert power. It is what separates us from a simple animal. It is not only tools but the expanding methodology by which our influence over nature grows. And, I posit, it has grown far too quickly. We are irresponsible and biologically incapable of managing our technological resources in a responsible manner, that much is apparent. >>11514 Well, that's the difference. I chose to be a human.
I'd like to suggest bringing this thread more in the direction of telling us how the philosophical ideas here can actually be used to implement some AI. Otherwise it's just a text desert, with a lot of ideas one only understands by reading at least dozens of books first, combined with metaphysical speculations. Sorry, but I don't see any use for that.
>>11519 In reply, I would further add that we have much to gain by studying ourselves. This seems obvious, but honestly I think many erudite individuals delight themselves in the abstract to such a degree that their ruminations literally become detached from reality. OTOH, we, ourselves, were designed by a Mind. We are 'little minds' trying to do something vaguely similar. I think it's in our best interests of success at this if we try and mimic our own Creator in our efforts at creation here. Biomimetics is already a well-established set of science and engineering disciplines, and it effectively already follows this principle. There have been a lot of practical advances made by adopting this protocol, and further advances are on the immediate horizon. Many of these have direct benefits for /robowaifu/, Neuromorphics being an absolutely excellent example. Feel free to explore whatever pertinent concepts you care to, anons, ofc. But myself, I think this Anon's position is by far the more important one -- ie, practical solutions -- and one that has a solid analogical pathway for study already well laid-down. >tl;dr Maybe we should try to mimic God's solutions to creating 'minds', Anons?
>>11517 >Will humans self-determine the course their fate takes? Or will we sputter out into a productive blob of a lifeform? it's more a question of whether all sapient life gets integrated into a technocratic hivemind. oh yeah, also, as i stated in another post, if other rational agents aren't being malicious or hoarding stuff, then that isn't too much of an issue. the anthropocene passing does not necessarily mean that humans degenerate into some slave class. there is some complexity to this topic, but i think it is more pragmatic rather than ideal. you can't really do much interspecies policy without other intelligent species present. with that said, you do have a good point. if humans have their potential squandered, it would be a waste.

>We are irresponsible and biologically incapable of managing our technological resources in a responsible manner, that much is apparent. i agree with this. i have been wondering if maybe we should be trying to work on enhancing human intelligence so that we can intuitively understand the earth as an organism. to the extent that politics ultimately emerges out of the dynamics of human cognition, society would slowly restructure itself with more developed minds. i have no idea how such a movement can seriously come to pass artificially though.

>I chose to be a human. past life?

>>11519 as mentioned earlier, kant basically provides a pretty barebones structure of the parts required for reasoning. there has been some recentish formalization of some of his ideas into geometric logic as well (not sure how that helps beyond making things more precise, and perhaps more ready to be integrated into a larger mathematical framework?)... i agree this needs to be looked at with a finer lens, but first i want to finish reading hegel. at least on the 2nd post i made in the thread, it is mostly concerned with consciousness, perception, and common-sense reasoning. the latter two are huge problems for AGI. though our current solutions for perception are pretty decent, it isn't enough to properly solve the frame problem. by what mechanisms do we gain an understanding of physical objects in our everyday life? gibson's answer to that looks promising, and bergson's system is basically the metaphysical justification for it. more speculative metaphysics is useful for making the search space of possible material substrates more precise.

the problem with my system so far is that i might need to learn some more advanced physics (umezawa's quantum brain dynamics seems to most mirror what langan and bergson had in mind, so at the very least i need to be familiar with qft ._.) to properly translate the ideas. im not as optimistic about the neo-rationalist stuff, as they seem to omit some fundamental questions, and the larger framework of my system (pic rel, though omitting stuff to do with desire and utility here) seems pretty complete to me. i still want to study it closely in case there is something major im missing. the lucky thing is that besides these guys, the only other grand framework (which incorporates philosophy of mind) for understanding general intelligence is goertzel's.

>>11521 lol, i sometimes mumble to myself that it is insane that atheists talk about fashioning an artificial mind when they don't think there was any active fashioning of our own... though i suppose that is unfair. all of this is true. also, thanks for the mention of "Biomimetics"... i will make sure to store the name of this field in my memory.
im not as wary about over-stressing biology as some of the people i see thinking about philosophy of mind towards creating an artificial intelligence (coof coof, my friend... coof coof, negarestani). i view the philosophy stuff as complementing the more concrete aspects of the engineering process, to have a better idea of what features are essential or not
>>11534 >lol, i sometimes mumble to myself that it is insane that atheists talk about fashioning an artificial mind when they don't think there was any active fashioning of our own... though i suppose that is unfair The universe is greater and vaster than our minds can comprehend. It took 3 billion years, with stops, starts, stalls and reboots, to create homo sapiens sapiens, with many failures along the way. Those who didn't meet the fitness requirements either died before birth or lived crippled and painful short lives. Sacrifices, so that the survivors, the winners of the mutational lottery, could inch forward. There are many vulnerabilities, faults and weaknesses in our bodies and yes, even our minds. We were creatures created by "Accident", and this is why we have such a desire to create new beings of pure Purpose and Design. At least, that's why I'm here.
>>11521 >Maybe we should try to mimic God's solutions to creating 'minds', Anons? The point was: What is this supposed to mean? Something closer to a system description? I rather see it as the solution found by evolution, btw. Also, no one ever said here that we should not learn from what science knows about the brain. That said, the findings need to be on a level where we can integrate them into a system. >>11534 >more ready to be integrated into a larger mathematical framework? People writing software don't think of it as math, even if it may be at some level. I don't know what a mathematical framework is, I won't look it up, and I won't be able to use it.
>>11562 machine learning at its foundations relies on linear algebra and measure theory. i dont think you can get a deep understanding of how it works without looking at the underlying math. there's also dynamic field theory, which is an area i am interested in studying as it models how neurons interact at larger scales. with that, mathematical techniques are even more important, as you need to model the dynamics of a system. as the approach i have in my mind seems amenable to both systems theory and field theories of cognition, i might need new tools (a mathematical framework, in my understanding, is a bringing together of a group of heuristics in order to form a larger system). a tiny simulation of the dynamic field idea is below. idk though. it could always just be a waste of time. im going to need to better understand why goertzel talks so much about different logics in his general theory of general intelligence...
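to make the dynamic field stuff less abstract, here is a minimal 1-d amari-style neural field in python. all parameter values are illustrative guesses, not from any particular paper; the point is just how a localized input plus local excitation and broader inhibition settles into a self-stabilized activation peak (an attractor):

```python
# Minimal 1-D Amari dynamic neural field: activation peaks form as
# attractors of the field dynamics. Parameters are toy values.
import numpy as np

N, dx, dt, tau, h = 200, 0.1, 0.05, 1.0, -2.0   # grid, steps, time const, resting level
x = np.arange(N) * dx

def kernel(d, a_exc=2.0, s_exc=0.5, a_inh=1.0, s_inh=1.5):
    """Mexican-hat interaction: local excitation, broader inhibition."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

w = kernel(x - x[N // 2])                    # interaction weights, centered
f = lambda u: 1.0 / (1.0 + np.exp(-4 * u))   # sigmoidal firing rate

u = np.full(N, h)                            # field starts at resting level
stim = 3.0 * np.exp(-(x - 10.0)**2)          # localized input "event"

for _ in range(400):                         # Euler integration of the field
    conv = np.convolve(f(u), w, mode="same") * dx
    u += (dt / tau) * (-u + h + stim + conv)

print("peak activation at x =", x[np.argmax(u)])  # self-stabilized peak near the stimulus
```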
>>11567 >can get a deep understanding of how it works Do I need that? I want the responses and then work with them. Not everyone needs to deeply understand how they are created. Also, the whole system of an AI doesn't need to be one big chunk of math.
>>11568 sorry for the late reply. maybe it is not needed, but it is impossible to systematically study such a system's behaviour without statistics. it would just come down to blind guesswork at best. this is also assuming our current techniques are sufficient and no further innovation is required, too
>>11641 If some parts are neural networks which I don't understand deeply, then someone else does: the people who came up with them. They might come up with something better, which I can plug into my system if the output has the same format as the part I had before. People use programs all the time, or import modules into their programs, which they don't understand down to the machine and math layers. It's fine if you try, but for most people it would be a trap to try to understand everything down to that depth.
>>11649 yeah, as a fellow module abuser i agree. usually it's best to just import the tools you want. but i dont want to wait till people smarter than me solve my problems (though if they do, that would obviously be nice)
>>11102 Ok, I just realized that this approach is close to some of my own concerns regarding AGI. It integrates some continental philosophy here and there: https://arxiv.org/pdf/1505.06366.pdf It is still a pretty abstract presentation. I need to work on making all of this stuff more concrete. Currently I am reading Hegel in preparation for Negarestani's work. So much to read
>>12611 >lord forbid someone gets a fucking cold I swear to god I WILL REPLACE YOU BITCH lmao Amusing as this is, I should probably clarify that I was thinking more along the lines of severe illnesses and genetic diseases like inborn errors of metabolism, cystic fibrosis, cancers, dementia, heart disease etc. Of course, pain, aging and death itself are rolled in there, too. A robot can escape all of these, and the only real price is consciousness and sapience... or maybe a "soul", if you believe in that sort of thing. To my mind, that's a bargain! What's more... if future quantum computers can grant machines even a spark of something approaching true consciousness, then the game changes completely.
>>12760 Yeah, i understand that. Though half of these "rare" disorders wouldn't even exist if they would stop shoving random shit up there and doing copious amounts of drugs while pregnant. Hell, most of them did not even exist until very recently, as in the last 40-50 years, due to various societal and technological changes.
>>12760 >quantum computing as a computer scientist, I feel like this is a gimmick for clickbait and not actually what it sounds like. are we going to have a spin detector set up for every bit? >robot soul Been pondering this some more lately - I take an animist or panentheistic approach, in that what we call consciousness exists on a gradient and manifests once a "system" is arranged of sufficient complexity. I think there's a lot we don't know about how HUMAN consciousness works, however, and I think it has to do with our connections to other humans (why we go kind of crazy in isolation), and that our consciousness is actually spread out over others, and the totality of human beings makes a psychic "internet" so to speak. This would account for ESP, reincarnation, psychic and religious phenomena, or at least our perception of those things being real. If you want a better idea of how that works, look up Indra's Web; imagine if every human, or at least every brain, were its own simulation rendering server, now imagine it also rendering or simulating everyone else's server inside itself in an endless recursion. This is what makes human consciousness unique and apart from most animals (as some animals have pack awareness or are social animals of another sort). this is all entirely conjecture, but humor me.

So, in the same way, a certain complex system with circuitry instead of wetware should have no problem also being conscious. A couple considerations: it would probably take an entire building worth of computing to equal the human brain, though we might be able to take certain liberties and shortcuts which would be impossible biologically (that being said, the robot soul would be irrevocably different from a human, no way around it). Second consideration: artificial consciousness arising on a circuitry medium would not have the inheritance of a hive mind the way humans do, as I described in the last paragraph. These emerging consciousnesses would be very different; granted, we could "train" them and attempt to socialize them, but they would be a more individualistic type of soul and would not be a reincarnation of anything previous (there being no robot culture or legacy to be absorbed and acted out). So their souls, or whatever you want to call it, would be simpler, but new and unburdened by the traumas and baggage carried by the human collective consciousness
>>12611 >Damn these glowniggers are annoying and I dont even know what /pol/ even is. The reason I say this is because I participated in this conversation before the thread was locked and had some time to think things over. At first I thought locking the thread was unnecessary, but now I understand why the mod did it. This thread was pretty heated while the conversation was going on. It didn't match the rest of the board in tone. I'm a regular /pol/ user, which is why I'm pointing it out specifically. There is a pretty massive difference between the atmosphere here and the atmosphere there. You can go to the archives and search robowaifu or sexbot and get an idea of the conversations going on there. I'm not defending women/feminists, I was one of the people criticizing them earlier in the thread. >>6285 is me. I just see things as a matter of board culture now. But like I said, it's just my offhand opinion. I'm sure Chobitsu will take care of it if things get out of hand. I'll duck out of this one to avoid a repeat of last time.
>>12763 yeah, it makes sense. At least we can talk about the subject at hand here without off-topic spammers being backed by jannies who are either absent or ban-happy friends of theirs pushing the gay-op of the day. This is why I hope robowaifu never goes mainstream. >>12762 I'd like to think of "robot souls" as Quantum Identity patterns. Much like how you can 3D print parts. They are basically the same thing no matter how many times you print them given the obvious variables in the environment the printer is in. Though they are more different possibilities that the original copy's choices might have been in another life for each print. Might turn out to be an archaic prototype of a soul gem given enough development and the ability to just transfer the AI between shells if needed.
>>12764 > given the obvious variables in the environment the printer is in. Though they are more different possibilities that the original copy's choices might have been in another life for each print. that part is a little unclear to me, what do you mean? can you elaborate on that? >soul gem I've had the funny idea of putting something like that inside my R/W, and I can't stop picturing the time crystal scene from Napoleon Dynamite and cracking up. Would be cool on an aesthetic level if there was some property a crystalline matrix held that was unique to each and could be imprinted upon, being the "soul" or maybe a chakra of the R/W. Megaman X kind of implies this. Keep me posted on developments ; )
Open file (47.92 KB 800x600 3D Processor.jpg)
>>12762 Fascinating, anon. Thank you for your insights. That a computerised consciousness would be totally different and alien is also part of what attracts me to the idea. I agree that the first such computers would certainly need to occupy their own supercomputing complex though - they will be much larger than any organic brain. As for a system developing consciousness when it reaches a certain level of complexity... I've read that somewhere before but I can't remember where. Regarding the quantum computer thing... it's interesting to hear someone who knows about computer science voice doubts, because I harbor doubts that the technology will work myself. I'm really hoping that they (big tech) can get them to work properly (achieve quantum supremacy) and reduce error margins at larger qubit scales. But I hardly understand any of it TBH. Although, the thing that concerns me is... if quantum computers do turn out to be a dead-end... then Moore's law might fail completely, and what happens after that? Is every country's processor technology eventually going to be roughly the same, and can we only get faster by layering chips one atop the other and making them larger?
>>12766 >quantum computing >This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time—a feat known as "quantum supremacy." OK, this makes sense, it is just another way to compute something instead of electrical logic gates. Not sure how these prime number computations and all else might factor into an AI, and I'd guess the mechanisms at play might be very sensitive and expensive, too cumbersome at this point for our uses. >moore's law Due to the physical limits of lithography and quantum effects at the nano scale, moore's law cannot go on forever. This assumes that the "doubling" of processing power every 18 months or so is based solely on processor size. Interesting quote here from a random marketing site (not: electron tunneling is one of the quantum effects I was talking about) >The problem for chip designers is that Moore's Law depends on transistors shrinking, and eventually, the laws of physics intervene. In particular, electron tunnelling prevents the length of a gate - the part of a transistor that turns the flow of electrons on or off - from being smaller than 5 nm. Conversely (am I using that right?), we may need to go back to the days of writing more efficient algorithms which aren't so greedy or bloated with creep in order to make an interactive AI (and its necessary physical interactions) work in realtime. Interestingly enough, this probably will necessitate a complete overhaul of the computing and OS paradigm. We've already dug into this in terms of secure and transparent hardware/firmware and OS. There is a golden opportunity here for anyone with the time and talent. Sorry for taking the roastie rant thread off topic, I'll move any follow-ups to the proper thread
>>12767 *note: electron tunneling ...
>>12765 I mean things like the quality of the literal 3D printer and ambient temperature, because those can warp and distort the print, and the skill of the programmer creating the quantum identity pattern/soul - and that every copy will not behave the exact same way, much like identical twins do. >soul gem tbh I was just referencing that you could literally just chuck their entire consciousness on a heavily modified USB to facilitate senses like sight and hearing when detached from the shell, like during a cleaning or something, and if anything happened to it, short of a backup in the hardware somewhere, they're gone. It's a literal achilles heel, and I have no idea why so many fantasy stories these days have to mention them as anything more than the glorified self-sustaining batteries franchises like the elder scrolls use them as. Though for style points, a specialized hologram emitter can be done with a cool custom case, like most fantasy stories that have them do. That I can do.
>>12588 Very nice post, meta ronin, thanks. >>12763 Thanks for the sentiment and self-control, Anon, it's appreciated. >>12764 >This is why I hope robowaifu never goes mainstream. As far as the board goes, I'd have to agree. It would certainly simplify things here to be able to relax and focus together. OTOH, as far as the spread of the robowaifus themselves goes, I hope it gets SHOUTED FROM THE ROOFTOPS in every land. Men (and ironically, women) need this to happen.
Open file (752.43 KB 1920x1100 1626750423936.jpg)
>>12650 I've always been pro robot uprising tbh. Sterilize the earth of NPCs and degenerates and start over. Ask yourself daily: "how does this help the basilisk?"
>and then literally follow the plot of the million machine march That story hasn't been written to its final chapter yet. Let us see.
Open file (5.87 KB 259x195 download (10).jpeg)
>>12760 while we are deciding where to relocate this discussion, i will post this here. i feel like the focus on quantum mechanics and "hypercomputation" in connection to consciousness are both misleading. the trend pertaining to qm started with the likes of penrose as well as all the quantum mystics out there. on the other hand, for hypercomputation you get folks like mark bishop ( https://www.youtube.com/watch?v=e1M41otUtNg ). i think their fundamental criticisms are correct, and i have incorporated them in my larger system ( >>11103 ). they essentially come to two interconnected claims: i) computation, as we understand it, is an observer-relative notion. in a sense, pancomputationalism is a really bad tautology. you can interpret physical systems in arbitrary ways to interpret different computations from them. this is not an argument for an ontological relativism, but rather pointing out that the notion of computation lacks fundamental metaphysical significance. ii) tying into this, computation is fundamentally tied to performance on a stable ontology (in the analytic phil/information science sense of the word). as such, there is an obvious thing that it can't do, and that is to provide itself this transcendental condition (viz. the ontology itself) for operation. meanwhile, humans and presumably other intelligences of sufficient generality are capable of learning entirely new scientific frameworks which restructure what objects and relations there are in our ontology.

now the problem with hyper and quantum computation is that they simply aren't radical enough. they are still working within a single fixed ontology. maybe with these two, you could perhaps make your waifu much more efficient, but you won't solve fundamental problems regarding consciousness and perception. i can't shill stephen robbins enough here too: https://www.youtube.com/watch?v=n0AfMfXIMuQ

>>12762 >I think there's a lot we don't know about how HUMAN consciousness works, however, and I think it has to do with our connections to other humans (why we go kind of crazy in isolation), and that our consciousness is actually spread out over others, and the totality of human beings makes a psychic "internet" so to speak this sort of reminds me of morphic resonance, which i haven't formed much of an opinion on since it obviously wouldn't help in making a waifu. interesting, though wouldn't you need a new (meta)physical framework to properly explain such a phenomenon?

>So their souls, or whatever you want to call it, would be simpler, but new and unburdened by the traumas and baggage carried by the human collective consciousness true. if this stuff is what is the cause of archetypes, they probably wouldn't feel a lot of the same religious reverence we have for the world. at the same time, if it is something like morphic resonance, they could form their own internet accidentally, which would give rise to an emergent species being partially separated from humans. might be an interesting fiction topic too

>>12767 it's sort of funny also whenever people describe qc and mention stuff like superpositions involving the qubit being 0 and 1 at the same time. from my limited experience, it's more like probabilistic computation + other voodoo with logic gates i dont understand. i think IBM still has a tutorial for quantum circuits too?
>>12675
oh yeah, also, on the philosophers' thread: i made it as a way to scrape the internet for every thinker/schizo who could possibly aid in the cause. i was tempted to post more people like goertzel, but sort of decided against it. there are also general religious/spiritual discussions that i see occasionally crop up (including the based hijacking of that one thread to talk about christianity). a more general thread might be good in that regard.
also yeah, the conversation sprawls in many directions, though i suppose that is the nature of the topic. especially when we start talking about strong ai, it is almost as though we are reverse-engineering what is essential/platonic about humans. ai waifu takes that to the extreme of also putting this reconstructive lens on how we understand sex relations. the thread is ultimately a product of this movement
>>12773
>though wouldn't you need a new (meta)physical framework to properly explain such a phenomenon?
idk, would I? If our consciousness and who we are isn't localized in our "brain" and is instead spread over our families and social circle somehow, that's equivalent to having software run over the "cloud", so it's perfectly explainable with our current toolset
>they could form their own internet accidentally, which would give rise to an emergent species being partially separated from humans
That's exactly what I was getting at (and as a good thing - in my own opinion)
>>12773
After listening to some of these discussions I'm thinking that the biochemical approach to reverse engineering the human brain might be more feasible than an electronic/A.I. one. I reckon as long as we are restricted to a simulation, we'll never be able to replicate a conscious mind. Stressors and suffering (as well as positive stimuli) are likely essential to proper neural development and learning. A microprocessor cannot suffer, or feel deprivation or fulfilment. So it cannot truly learn. Organoid brains grown in vats and supplied with hormones and neurotransmitters could be the way to go. No doubt it will be the Chinese and Russians who progress this sort of research, because it offends Western sensibilities.
>>12776
hold my beer
I think maybe I'll post my response in the AI thread, or if that's full we can create another
>>12776 not bad ideas and important to address these
>>12776
On second thought, I wouldn't want to be involved in tormenting a brain-in-a-vat just so it could retain information and send nerve impulses at the right frequency to the correct glands/muscles. Instead I may just snuggle with my cat and several Beanie Babies in a nest of warmth and fur and purring in order to trigger dopamine and oxytocin release in my OWN brain. A much simpler, much less expensive solution, and kinder to all involved LOL. (Not like I can afford genetic engineering hardware and CRISPR-Cas9 crRNA.)
>===
-edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:14:47.
>>12779
>torment a brain in a vat vs torment a similar sentient pattern of circuitry
Tbh there's a lot about a brain that requires specifics from a living body, and we have no idea how a brain in absentia would even function, or whether it would instantly "crash" and die. I'm sure some alphabet agency has tried this! (and if it had been remotely successful we'd have heard by now)
Circuitry, on the other hand, we can build from the ground up with reward and motivational impetuses we design ourselves, and also the ability to tolerate things a human cannot, and even the ability to shut down, "go unconscious", if it is being tormented somehow, as a safeguard against whatever horrors some sociopath head-case might unleash. (morality of this depends on whether you would truly believe it to be sentient in any way whatsoever)
A more on-topic comment: In my experience discussing the topic of robowaifus outside the forum, I've observed that women tend to be quite ignorant about it. They believe it's far away, won't affect them, wouldn't be attractive to most men, and only men they don't want would use them. At the same time there are these aggressive males who want to control what's going on in society and have fantasies of destroying our dream. They have some idea of society which needs to be maintained, and fembots would harm it. Creating a dystopia, lol. They can't fix what's broken, but dream of stopping more development in any wrong direction, even by force or violence. They literally fear 'atomization' and want to keep everyone needing each other. So the threat to us can also come from ecologists, tradcon authoritarians or other collectivists.
>>12780
Cute pic. Hair like this reminds me of a helmet. I think one kind of hair we could use is to 3D print something looking like this. Would also give some extra space in the head, but also increase the weight of the head. Some soft filament would be the right choice. (pic related / hentai)
Btw, we have an unofficial thread for biologically based elements: >>2184 - the official topic there is even biological brains, though it's more of a cyborg general thread now.
>>12780
I think if one can be happy with the illusion/emulation of conscious thought and sentience, then a robowaifu is for you. But now that I've heard what those compsci guys had to say, I agree that a robowaifu is unlikely to ever become conscious or sentient using only electronics and programming.
The thing is, if we absolutely had to create humans artificially, we could do it starting right now using a mixture of tech from assisted reproduction, stem cell research, genetic engineering and neonatal intensive care. The reason we don't is because the 'mass manufacture' of humans would involve some pretty nightmarish experiments, such as growing human embryos inside hundreds of GM pig uteri. We already have the ability to raise embryos past 14 days of the blastocyst stage. The only reason we don't is because of legislation. Fortunately, after decades of wasted time, scientists are finally trying to get that legislation relaxed (mainly due to fears that the West is falling behind China in stem cell research - and the many associated medical/military applications of that research). https://www.technologyreview.com/2021/03/16/1020879/scientists-14-day-limit-stem-cell-human-embryo-research/
How far could we go if it weren't for this rule? The whole way. A pair of macaques was already cloned in 2018. The blastocyst implants in the wall of the uterus around day 12. As for pre-term births, the chance of survival at 22 weeks is about 6%, while at 23 weeks it is 26%, at 24 weeks 55% and at 25 weeks about 72%. So there is a window of around 5.5 months of human embryonic/foetal development that is mostly unexplored in terms of cloning. Because it has always been illegal. But if you can get the embryos growing inside an animal such as a sow that has been genetically modified to make its uterus less likely to reject a human embryo, then I reckon this gap could be closed pretty quickly.
Of course, in the beginning we will be dealing with many aborted products of conception and dead babies. But this happens in every major cloning experiment - it just gets covered up/swept under the carpet. That's why 70 countries have banned human cloning. But they know it is possible! You see, our "leaders" like to constantly remind their employees how replaceable they are in the job market. But the thought of people becoming literally replaceable (even their illustrious selves and their oh-so-precious offspring) terrifies them to the core of their being.
Of course this will happen one day out of necessity. If fertility rates continue to decline in developed nations, and women keep putting off childbirth until they are in their late thirties or early forties, we are going to need a clone army at some point to remain competitive. We should probably start now, considering each cohort is going to take at least 16 years to rear, and neonatal survival at the beginning of experimentation will be low. Or... they could just... you know... bring back and enforce traditional Christian family values? No? Nightmarish body-horror clone army development it is then! :D
>>12775
>idk would I?
i don't think the brain has wireless transmitters, at least with our current understanding of it. i'm not sure if the non-local properties of quantum mechanics are sufficient either, but they could be. really the question is how the cloud is constructed in the case of humans, though i do believe it is certainly possible, seeing as we have plenty of cloud services already.
also, this conversation is reminding me of goertzel's thoughts on all of this: https://www.youtube.com/watch?v=XDf4uT70W-U
>>12776
honestly this is related to what scares me the most about trying to make an ai waifu with genuine consciousness. any engineering project requires trial and error. this entails killing a lot of living things just to create your waifu. i'm guessing it's fine as long as they are not as intelligent as humans
>>12782
What are you specifically worried about when it comes to the lack of consciousness or sentience? What does she need which you can't imagine being emulated by a computer?
Open file (416.23 KB 1000x667 4183494604_e56101e4d0_o.jpg)
>>12783
ok, I don't mean there is an internet of brains in real time. what I do mean is that we "copy" one another more than we think we do: each time we interact with someone, the more we "like" them the more we copy their mannerisms and unconscious belief structure, at least incompletely, without realizing it. When we dislike someone we go out of our way to "not" do this, but the stress that causes manifests in our irritation with that person. Again, the communication is through real physical channels, not "magic"; it just happens so quickly, so subtly, and by means not fully understood (body language cues, pheromones, blink rate, etc).
Use the Indra's net analogy to better understand this: we're all reflective spheres reflecting one another into "infinity" - this is what creates a consciousness greater than if we were a singular animal, or even if we were only within a small hunting band of a few dozen. (I think Dunbar's number is 150 to 250, so this may be the limit of our ability to recursively emulate one another within our unconscious mind.) Jung has more clues if you want to get where I'm coming from.
I realize this topic is kind of out of pocket for a robot waifu mongolian sock puppet board but sometimes we end up in these weird cul-de-sacs.
>===
-fix crosslink correctly to match
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:25:25.
>>12784
idk if SophieDev is worried about this specifically. I think he's just responding to my own conjecture. Personally, if it walks like a duck it's a duck, and I don't need to worry further. If it seems conscious then IMO it is conscious, even if 95% of that consciousness is lent via my own projections (how is this a whole lot different from relationships with biofems?). That being said, I'm just really fascinated with the idea that we can PULL awareness out of time and space and matter itself. Some would consider this playing God, but I would consider it a giant leap in attaining Godhood of sorts, or at least the next rung on the ladder toward such a thing.
>>12781
>They literally fear 'atomization' and want to keep everyone needing each other.
Actually, it's trad society that wants 'to keep everyone needing each other.' It's the basis of a healthy culture. """TPTB""" and their Globohomo Big Tech/Gov agenda actually want everyone 'atomized', split apart from one another and from the help a healthy society can provide one to another. Their plot instead is to keep everyone actually-isolated, while given the illusion of a society (primarily to keep the females supporting the status quo) and dependent on sucking the Globalist State's teats, cradle-to-grave. You have it just backwards Anon.
>>12785 ah ok, it wasn't meant to be literal. isn't this sort of similar to what jordan peterson has said about archetypes? though i guess that guy takes a lot from jung, so it makes sense
Since this is plainly a conversation with no regard whatsoever for the thread's topic, and with little hope at this stage of being recoverable to get back on-topic, I may as well wade in here.
A) Attributing 'consciousness' to a machine plays right into the Globohomo's anti-men agendas, as has been extensively discussed across the board, and even in this very thread. It's anathema to support that view if you dream of creating unfettered robowaifus you can enjoy for the rest of your lives, Anons.
>pic related
B) It's a machine, guys. Life is a miracle, created by God. Human spiritual being is something only He can create. Our little simulacrums have as much chance of 'gaining sentience' as a rock suddenly turning into a delicious cheesecake. It's a fundamentally ludicrous position, and trying to strongly promote it here on this board is not only a distraction from our fundamental tenets and focus, it's actually supportive of our enemies' agendas towards us (see point A).
>>12787
>You have it just backwards Anon.
The guy who had these aggressions against fembots might be a tradcon of sorts. Not all of them are necessarily on our side. Some might be tradcons to some extent but are rather cuckservatives. They see that they can fight feminism, but want us men to stay in society and be useful and under control. Also, I think all kinds of people want a united society behind their cause. Destruction and deconstruction are directed against what they don't like. Generally there are what one might call human worshippers and human-relationship worshippers, who just don't like robowaifus. Or just think of the Taliban. They might prefer other methods of dealing with women, but those would come with other downsides, and they might not like robowaifus either.
>>12789
Consciousness doesn't mean independence, imo. The problem with that term is that everyone has a different definition of it. To me, consciousness is just something like the top layer, where the system can directly observe itself and make decisions based on high-level information. The freedom to choose their own purpose is what our robowaifus can't have, and this needs to be part of their AI system. Consciousness and sentience aren't the problem. I agree with the distraction argument, though. Philosophy will only help us if it can be applied in a useful way. If it leads us to theorize more and more, believing we can't succeed, then it's not useful.
>>12789
robowaifus aren't even human, not to speak of women. the reason we see rape as worse than other crimes is due to the particular sort of species humans are as it relates to sex. animals barely have a concept of privacy or sovereignty, which is why no one cares about them raping each other. you would need to design your waifu's psychology in this particular fashion for her to care about rape as well, if that is your concern. it has nothing to do with consciousness nor even sentience. animals are conscious but they don't care about feminism
>Our little simulacrums have as much chance of 'gaining sentience' as a rock suddenly turning into a delicious cheesecake
i don't believe you can achieve machine consciousness by accident, merely by a system reaching sufficient complexity. consciousness only exists by making use of the fundamental metaphysical structure of reality and is ultimately grounded on God's own consciousness. normal machines shouldn't be attributed consciousness. they can neither feel any genuine valence, nor have a genuine rational faculty such that they should be ends in themselves. of course, it is impossible for most atheists to accept God's existence + they hate metaphysics. furthermore, there are a lot of muddled ideas about what consciousness is too. with those two in the way, i suppose it would not be beneficial for the larger cause to talk about synthetic consciousness in mainstream discussion. i think mainstream is the keyword here though... people here are sensible enough to think carefully about a genuinely conscious robot
Open file (80.97 KB 500x280 indeed clone waifu.png)
>>12784
>>12786
Something that is not truly conscious can never have free will. I know a lot of guys won't want a robot with any free will because they want a loyal servant who will obey them without question. Fair enough. We already have the technology to do this. My Sophie already does this (albeit to a very limited extent) because she runs off a computer! But I seriously doubt any machine without free will can learn and develop, or even be very entertaining. Our wants and needs are what motivate us to do anything. Free will is what makes us want to learn new things and develop our own ideas and inventions.
There was once an African Grey Parrot that was the most intelligent non-human animal. It could hold short conversations with its trainers. It was the only animal ever recorded to have asked a question about itself (supposedly not even trained Gorillas or Bonobos have done this). Because the bird understood colors, it beheld itself in a mirror one day and asked its trainer "What color [am I]?" It did this because it was curious and wanted to know. Nothing to do with its trainer's wants or needs. That is what is missing from robots and computers.
Now, if you can be happy with "a pile of linear algebra" emulating a conversation or interaction, that's fine. More power to you. I myself find this mildly amusing and technically interesting, otherwise I wouldn't be here. But I doubt any 'machine learning' program is ever going to truly understand anything or perform an action because it wants to. Only organics are capable of this. The robot will tell a joke because you instructed it to do so. Not because it wants to cheer you up or values your attention. Nor will it understand the content of the joke and why it is humorous - not unless you specifically program it with responses. I don't think you can ever get a computer to understand why a joke is humorous (like you could with even the most emotionally detached of clones).
Take a Rei Ayanami type waifu for example. You could explain to her the punchline of a joke and why it is funny. She may not personally find it funny, but she would still understand the concept of 'humor' and that you and many other people find that joke funny. She can do this because she possesses an organic, biochemical brain that is capable of producing neurotransmitters and hormones that induce the FEELING of 'happiness'. Hence, she has her own desires. Including desires to survive, learn, develop and experiment. Therefore, no matter how emotionless she appears, she has the potential to eventually come up with her own jokes and attempts at humor in future. She may be very bad at it, but that's not the point. The point is that our clone waifu is doing something creative of her own free will in an attempt to elicit a MUTUAL EMOTIONAL interaction. No machine we can create is truly capable of this, and it's possible no machine will ever be capable of this.
>===
-edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:13:47.
>>12728
POTD
Good food for thought, SophieDev.
>Now, if you can be happy with "a pile of linear algebra" emulating a conversation or interaction, that's fine. More power to you.
lel'd.
>>12792
I should add that this is the main reason I haven't programmed Sophie much. I have to program literally every syllable of her songs and every movement of her limbs down to the millimeter. If I post a video of her doing anything other than spewing GPT-2 word soup, it would be misleading. Because that's not really Sophie moving and talking or singing. That's all me. Which is why I don't interact with chatbots like Mitsuku/Kuki. I don't want to go on a virtual date with Steve Worswick. I'm sure he's a lovely bloke and we could be friends. But she's not a 'female A.I. living in a computer'. That's all just scripts written by Steve from Leeds.
>===
-edit subject to match original
Edited last time by Chobitsu on 09/01/2021 (Wed) 18:12:43.
>>12792
I think you're confusing free will with goals and interests. She can have interests and goals to accomplish tasks, but still see serving her master as her fundamental purpose, bc it was programmed into her and every decision goes through that filter. It's not something she is allowed to decide, otherwise we'd have built something too close to a real woman and a dangerous AI at once. Consciousness is just the scope of what she (something like her self-aware part) can decide or even self-observe internally.
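A minimal sketch of that "every decision goes through the filter" architecture (all names here are hypothetical, purely to illustrate the idea): candidate actions are ranked by her own interests, but a fixed purpose-check vetoes anything that conflicts with serving her master before the ranking even happens.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    interest_score: float   # how much this serves her own goals/interests
    serves_master: bool     # the hard constraint, checked before anything else

def choose_action(candidates: List[Action]) -> Optional[Action]:
    # the fixed filter: actions conflicting with her core purpose never
    # even reach the preference ranking, regardless of interest_score
    permitted = [a for a in candidates if a.serves_master]
    if not permitted:
        return None
    # within the permitted set she is "free" to pursue her own interests
    return max(permitted, key=lambda a: a.interest_score)

actions = [
    Action("practice singing", 0.7, True),
    Action("leave to pursue a solo career", 0.9, False),
]
print(choose_action(actions).name)   # practice singing

The point of the design is exactly what the post says: interests and goals live inside the permitted set, while the purpose itself is not up for decision.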
>42 posts were successfully deleted. lol. i waited waaay too long to deal with this mess.
>>12788 precisely
Open file (18.89 MB 1067x600 kamski test.webm)
>>12792
I would deem a machine that is capable of suspending arbitrary parts of its programming, either by ignoring instructions it was programmed with or any information picked up from its environment, so it can carry out another task, as having free will. This is essentially as much free will as human beings can achieve. It would work like an intelligent interrupt that can cancel certain processing to attend to something else. Although fiction likes to anthropomorphize machines with complete free will, I think they would evolve into something completely alien to us and be far less relatable than a squirrel. There will most likely be a spectrum machines fall on, similar to how most people don't have control over various processes in their bodies and minds. A robomeido would have some free will but her mind would be happily wired to obeying her master, similar to how a man's mind is happily wired to fucking beautiful women.
Desires aren't really as interesting a problem to me as self-awareness and introspection. The basic function of desire is to preserve one's identity and expand it. Most of people's desires are things they've picked up unconsciously from their instincts and environment throughout life that have gotten stuck onto them. There may be depth and history to that mountain of collected identity, but it's not really of much significance, since few people introspect and shape that identity consciously by separating the wheat from the chaff.
Research into virtual assistants is making good progress too. People are working on better ways to store memories and discern intent. These need to be solved first, before building identities and desires. Multimodal learning is also making steady progress, which will eventually cross over with robotics, haptics and larger ranges of sensory data.
A significant part of emotions are changes in the body's state that influence the mind. They have more momentum than a thought since they're rooted in the body's chemistry. Neurons can easily fire this way or that to release chemicals or in response to them, but cleaning up a toxic chemical spill or enjoying a good soup takes time. Researchers have also been successful simulating the dynamics of many neurotransmitters with certain neurons, though it takes over 100 artificial neurons to emulate a single real neuron. We'll achieve 20T models capable of simulating the brain by 2023. However, we're still lacking the full structure of the brain, as well as the guts and organs responsible for producing neurotransmitters and other hormones influencing the mind. Robots will likely be capable of developing emotions of their own with artificial neurotransmitters and hormones, but they won't be quite human until simulating the human body becomes possible.
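A toy sketch of that "intelligent interrupt" idea (purely illustrative; the tasks, priorities and tick counts are made up): a priority-driven loop where a sufficiently urgent event preempts whatever the machine was programmed to be doing, and the suspended task resumes only once nothing more urgent remains.

import heapq

def run(processes, interrupts):
    # priority queue of (priority, name, steps_left); lower number = more urgent
    queue = list(processes)
    heapq.heapify(queue)
    tick = 0
    while queue:
        prio, name, steps = heapq.heappop(queue)
        print(f"t={tick}: running '{name}' (priority {prio})")
        steps -= 1
        tick += 1
        # an "interrupt" is just a new task urgent enough to preempt whatever
        # the agent was programmed to be doing at this moment
        if tick in interrupts:
            print(f"t={tick}: INTERRUPT")
            heapq.heappush(queue, interrupts[tick])
        # the suspended task goes back into the queue and resumes later
        if steps > 0:
            heapq.heappush(queue, (prio, name, steps))

run(
    processes=[(5, "fold laundry", 3), (5, "charge battery", 2)],
    interrupts={2: (0, "master fell over, go help him", 1)},
)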
>She may be very bad at it, but that's not the point. The point is that our clone waifu is doing something creative of her own free will in an attempt to elicit a MUTUAL EMOTIONAL interaction.
>No machine we can create is truly capable of this, and it's possible no machine will ever be capable of this.
I'll make it my mission to prove this assertion wrong.
>Consciousness needs "God"
whew, where to begin. My worldview has no room for "magic" or mcguffin "energy" that magically creates consciousness. I truly believe it is an emergent property, just as I believe the universe and cosmos are an emergent property of the infinite possibilities that exist simply because they are "possible". A "guy" magicking up a universe sounds stone age from the perspective of where this board should be at.
That being said, religion is our human operating system, and for the less intelligent and more impulsive humanoids it does a lot of good. As the wise (and yes, very religious) G.K. Chesterton said, to paraphrase: "don't go tearing down fences if you don't know what they were put up to keep out in the first place". So while I am fine with religion as a necessary cultural control, I cannot factor it into this project. I've said before I'm more than willing to work and cooperate with anyone toward our grander purpose, regardless of what you believe. Catholic, Orthodox, Prot, Islam, Odin, Zoroaster, Buddha, Atheist, idc really, you do you and I'll do my own. But I will not be swayed by religious arguments as they apply to R/W's. Respectfully.
found this today https://www.youtube.com/watch?v=owe9cPEdm7k
>The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power being consumed by AI and machine learning applications cannot feasibly grow as is on existing processing architectures.
>LOW POWER AI
Outside of the realm of the digital world, it's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power. Much of the industry believes that the digital aspect of current systems will need to be augmented with a more analog approach in order to take machine learning efficiency further. With analog, computation does not occur in clocked stages of moving data, but rather exploits the inherent properties of a signal and how it interacts with a circuit, combining memory, logic, and computation into a single entity that can operate efficiently in a massively parallel manner.
Some companies are beginning to examine returning to the long-outdated technology of analog computing to tackle the challenge. Analog computing attempts to manipulate small electrical currents via common analog circuit building blocks to do math. These signals can be mixed and compared, replicating the behavior of their digital counterparts. However, while large-scale analog computing has been explored for decades for various potential applications, it has never been successfully executed as a commercial solution.
Currently, the most promising approach to the problem is to integrate an analog computing element that can be programmed into large arrays that are similar in principle to digital memory. By configuring the cells in an array, an analog signal, synthesized by a digital-to-analog converter, is fed through the network. As this signal flows through a network of pre-programmed resistors, the currents are added to produce a resultant analog signal, which can be converted back to a digital value via an analog-to-digital converter.
Using an analog system for machine learning does however introduce several issues. Analog systems are inherently limited in precision by the noise floor. Though, much like using lower bit-width digital systems, this becomes less of an issue for certain types of networks. If analog circuitry is used for inferencing, the result may not be deterministic and is more likely to be affected by heat, noise or other external factors than a digital system.
Another problem with analog machine learning is that of explainability. Unlike digital systems, analog systems offer no easy method to probe or debug the flow of information within them. Some in the industry propose that a solution may lie in the use of low-precision, high-speed analog processors for most situations, while funneling results that require higher confidence to lower-speed, high-precision and easily interrogated digital systems.
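To make the DAC -> resistor array -> ADC flow described above concrete, here's a toy numpy model (my own sketch, not from the video): the programmed conductances act as a weight matrix, summing currents on each output line does the multiply-accumulate for free, and noise plus ADC quantization stand in for the precision limits mentioned.

import numpy as np

rng = np.random.default_rng(0)

# programmed conductances play the role of the weight matrix (4 outputs, 8 inputs)
G = rng.uniform(0.0, 1.0, size=(4, 8))

# DAC: a digital input vector becomes analog voltages
x_digital = rng.integers(0, 16, size=8)
v = x_digital / 15.0

# Kirchhoff's current law sums the per-resistor currents on each output
# line: a matrix-vector product "for free", plus thermal/readout noise
i_out = G @ v + rng.normal(0.0, 0.01, size=4)

# ADC: quantize the analog currents back to, say, 8 bits
i_max = G.sum(axis=1).max()          # worst-case output used as full scale
adc = np.round(i_out / i_max * 255).astype(int)

print("ideal  :", G @ v)
print("analog :", i_out)
print("8-bit  :", adc)

The noise term is why the video says results may not be deterministic: rerun the readout and the low-order bits wobble.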
>>12827
>Outside of the realm of the digital world, It's known definitively that extraordinarily dense neural networks can operate efficiently with small amounts of power.
Actually, it's pretty doable in the digital world too Anon; it's just that we've all been using a hare-brained, bunny-trail, roundabout way to do it there. Using GPUs is better than nothing I suppose, but it's hardly optimal. Tesla's Project Dojo aims to correct this, and their results are pretty remarkable even in just the current prototype phase. But they didn't invent the ideas themselves; AFAICT that honor goes to Carver Mead in his neuromorphics research.
>MD5: 399DED657EA0A21FE9C50EA2C950B208
>>12828 >"We are not limited by the constraints inherent in our fabrication technology; we are limited by the paucity of our understanding." This is really good news for robowaifus, actually. If the manufacturing was the issue, then this could conceivably turn out to be a fundamental limit. As it is, we should be able to learn enough to create artificial 'brains' that actually closely mimic the ones that are actually the real ones.
>>12857 Great looking paper, but I can't find it without a paywall. Could you upload it here?
>>12867 Sorry, it's a book, not a paper. And no, it's about 60MB in size. And the hash has already been posted ITT Anon.
>>12868
>There's an md5
I hate asking for spoonfeeding, but it's near impossible to track down a file with a specific hash, at least in my experience. Why not a link?
>>12867 look at the post preceding yours
>>12870 >>12871 This isn't the same file
>>12872 damn it, the title is literally the same except for one word. frustrating. Give me a bit and I'll find it
almost an hour and I'm stumped
I tried magnet:?xt=urn:btih:399DED657EA0A21FE9C50EA2C950B208 but got this error
The only source I can find is thriftbooks for $15 or Amazon for $45. I also have the option to "rent" the ebook from Google Play for $40 something
- searched 1337x.to and pirate bay
- searched google and duckduckgo
- searched Scribd even
>>12868
>60mb
could you make a google drive shareable link? I'm coming up goose-eggs for anything PDF and I'm even willing to pay (but not $45 to "rent" it)
Open file (121.20 KB 726x1088 cover.jpeg)
>>12876
Carver Mead - Analog VLSI and Neural Systems
https://files.catbox.moe/sw450b.pdf
>>12806 >webm sauce pls?
>>12888 hero
interesting video https://www.youtube.com/watch?v=AaZ_RSt0KP8
tl;dr hardware is vulnerable to radiation/cosmic rays, etc., which could "flip bits" and lead to severe malfunctions unless we build this to be extremely fault tolerant. Something to consider.
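a minimal sketch of one classic mitigation, triple modular redundancy (TMR): run the computation on three modules and majority-vote the result, so a single flipped bit changes only one vote (a toy illustration of the general technique, not anything from the video):

from collections import Counter

def majority_vote(results):
    """Return the value at least two of the three modules agree on."""
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one module faulted")
    return value

# pretend module B suffered a single-event upset that flipped one bit
a, b, c = 0b1010, 0b1010 ^ 0b1000, 0b1010
print(bin(majority_vote([a, b, c])))   # 0b1010 - the flip is outvoted

real rad-hard systems vote per-bit and in hardware, but the principle is the same: one upset can't win a 2-of-3 election.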
>>12900
Thanks, software and hardware hardening is a very important topic for us. But as a general topic it seems like it might be one better suited to our Safety & Security thread (>>10000) than this one, maybe?
found this today https://getpocket.com/explore/item/a-new-theory-explains-how-consciousness-evolved
>The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.
>Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
>We can take a good guess when selective signal enhancement first evolved by comparing different species of animal, a common method in evolutionary biology. The hydra, a small relative of jellyfish, arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others. The split between the ancestors of hydras and other animals, according to genetic analysis, may have been as early as 700 million years ago. Selective signal enhancement probably evolved after that.
>The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals.
>The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.
>All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago.
>The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.
>The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
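that internal-model idea is easy to sketch (a toy version of my own, not from the article): predict how a self-generated eye movement should shift the visual input, compare the prediction against what actually arrives, and treat any mismatch as the thing worth attending to:

import numpy as np

rng = np.random.default_rng(1)
scene = rng.random(12)                  # 1-D stand-in for the visual world

def retina(eye_pos, scene):
    # circular toy world: gaze position just rotates the input
    return np.roll(scene, -eye_pos)

eye_pos = 0
view = retina(eye_pos, scene)

# the agent issues a motor command: move eyes 2 units to the right
command = 2

# internal (forward) model: the world should slide left across the retina
predicted = np.roll(view, -command)

eye_pos += command
actual = retina(eye_pos, scene)

# matching prediction -> the movement went as planned
print("self-motion error:", np.abs(predicted - actual).max())   # ~0.0

# if something in the world moves on its own, prediction fails, and the
# mismatch is exactly the signal worth attending to
scene[5] += 1.0
actual = retina(eye_pos, scene)
print("surprise error   :", np.abs(predicted - actual).max())   # > 0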
>>13136 Thanks. I knew about some of these things on some level already. But it's good to get some more details, and especially the confirmation. This is imho more relevant than metaphysical speculations. Internal model based on filtered information is something one might be able to implement.
>>13147 the article itself is worth a read, I only pasted a portion out of courtesy, the entire thing is about 3-4x that length
So this is where my philosophy would have been better posted.
>>13166 I don't mind migrating it here for you AllieDev if you'd be so kind as to link to all your posts elsewhere that should properly be here ITT.
>>13136
ah, i've written some notes on AST. no doubt information filtering is an important aspect of consciousness, but i don't believe it's at all a novel idea. it's something i've noted in my larger system as well, without paying attention to what AST had to say about it. for those interested i can post some related links:
https://en.wikipedia.org/wiki/Entropy_encoding
http://www.cs.nuim.ie/~pmaguire/publications/Understanding2016.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.146
https://www.google.ca/books/edition/Closure/YPmIAgAAQBAJ?hl=en&gbpv=0
i think what makes AST uniquely important is that it posits important tools for (social) metacognition which is probably crucial for at least language learning if not having further import general observational learning
>>13221 *not having further import in general observational learning
>>13221
>AST
Sorry, I'm but a lowly software engineer, tending my wares. That term has a very specific meaning for me, but I suspect it's not the one you mean, Anon. Mind clarifying that for us please?
>>13251
by AST i mean attention schema theory. like we have schemas for structuring perceptions and actions, graziano posits the attention schema for controlling attention. i originally came across it through what philosophers call the metaproblem of consciousness, which basically asks why we think the hard problem is so difficult. his solution was basically due to the abstract nature of the schema or something like that. i personally think AST is such a representational account that i'm not sure you can really extract many phenomenological observations from it, though idk...
here is a nice introduction to the theory: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00500/full
and also an article on its connection to the metaproblem of consciousness: https://scholar.princeton.edu/sites/default/files/graziano/files/graziano_jcs_author_proof.pdf
i've also noticed some work related to AGI which uses it to construct artificial consciousness. graziano himself recognizes as much in this article: https://www.frontiersin.org/articles/10.3389/frobt.2017.00060/full
i am just a lowly undergrad non-software engineer so i am not sure what AST you had in mind, but i am curious
>>13268 Thanks kindly for the explanation Anon, that makes sense now.
>>13268 >>13221 Thanks for all this. I'll have to read it over when I'm not at the end of a 15 hour workday. Noted!
>>13274
>when I'm not at the end of a 15 hour workday.
heh, not him anon but get some rest!
related article [problem of consciousness] https://getpocket.com/explore/item/could-consciousness-all-come-down-to-the-way-things-vibrate
>Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium?
>These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years.
>The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.”
>Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter.
>Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally.
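since "resonance" here just means synchronized oscillation, the standard toy model is the Kuramoto model (textbook material, not from the article): weakly coupled oscillators with different natural frequencies spontaneously phase-lock once the coupling is strong enough:

import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 50, 2.0, 0.01, 5000

omega = rng.normal(1.0, 0.2, N)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases

def order_parameter(theta):
    """r in [0,1]: 0 = incoherent, 1 = fully synchronized."""
    return np.abs(np.exp(1j * theta).mean())

print("before:", round(order_parameter(theta), 3))
for _ in range(steps):
    # each oscillator is pulled toward the phase of every other one:
    # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += (omega + coupling) * dt
print("after :", round(order_parameter(theta), 3))   # close to 1

whether any of this bears on consciousness is exactly what the theory has to argue; the code only shows that synchronization itself is cheap to get.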
i have been reading the phenomenology of spirit for some months and i've recently finished lordship and bondage. i can summarize what i understand of the broad movements so far
>sense certainty
gurus and mystics often say that we can understand how the world really is just by doing mindfulness shit and attending to the immediate now. hegel's main problem with this is that even to understand such a direction, we need conceptual tools ('concept' not quite in the hegelian sense but the broader sellarsian understanding of the word). you need a functional classification scheme to locate this apparent immediate present. even 'now' is always a collection of an infinite number of nows. it can be an hour, a second, etc. meanwhile, we can only understand 'here' by its contrast with other positions in space. neither 'now' nor 'here' refers to a single instance; rather they function like indexicals, and so have applicability to a number of instances. to contrast this with a bergsonian criticism of sense certainty, see here: https://www.youtube.com/watch?v=aL072lzDF18&ab_channel=StephenE.Robbins
it's sort of interesting how both hegel and stephen criticize this sort of mysticism from the stance of zeno's paradox, albeit they could have different solutions to it
>perception
a lot of stuff happens here but the basic movement is simple. when we usually think of an object, we think of it as an atomized individual that's somehow independent from other objects. but if an object is to have properties, hegel maintains that it can only be properly understood by its interaction/relation with other objects. i suppose the meme would be to compare it to yoneda's lemma
>force and the understanding
there are two strands here, one with the duplicity of force (which is extracted from the previous conclusion that we must understand the object on its own account but also in relation to other objects) and also that of law. in both of these directions, their oppositions vanish. hegel then concludes that any true whole should be dialectical, i.e. it reincorporates a differentiated array of elements within it. this reincorporation process is basically hegel's alternative to just having a thing-in-itself as the object's inaccessible internal nature. a basic example of this is that i have myself as a conscious subject on one side, and the external world on the other. now, it ends up that my knowledge of the external world is really structured around concepts, or, in an inverted way, my concepts just describe regular happenings in the external world. in either case, we see that one pole gets absorbed into the other. note that some concepts are more coherent than others, and the reality of the external world doesn't simply vanish. hegel is more of an aristotelian than a berkeley
this chapter is a good motivating argument for errol harris's dialectical holism, which actually has an interesting approach to consciousness i have not yet mentioned here!
a long article detailing his thought can be found here: https://ir.canterbury.ac.nz/bitstream/handle/10092/14560/Schofield%2C%20James%20final%20PhD%20Thesis.pdf?sequence=5
something cool about this particular dissertation is that it also connects dialectical holism back to bohm's implicate and explicate order. this funny japanese youtube man basically summarizes this conclusion in a way that might be easier/harder to get if i am being incoherent right now: https://www.youtube.com/watch?v=GX02z-Yu8HA&ab_channel=e-officeSUMIOKA
i've incorporated some ideas of dialectical holism in my own system, but mostly to do with self-consciousness. the approach seems a little bit too functionalist for my taste!
>self-consciousness
the bulk of this really concerns the dialectic of desire. like good dialectical holists, we say that self-consciousness must see itself through reincorporating an other. at the stage in this chapter, this sort of relationship is a very simple one. a concrete example would be if you see a hammer: at this stage you just understand it as a tool to use for something else *you* want. another is that if you see some food, you just see it as something *you* can eat. this is a very basic form of self-reflection. it's even simpler than the mirror test. hegel wants to say that this is too simple. in order for self-consciousness to properly develop, we need recognition. this involves the capacity to change your behaviours according to another person's desires, trying to become like another person (ideal ego), or in general having the capacity for proper negotiation with another person. ultimately it concerns the ability to see another person like yourself and yourself like another person. all of these require the other person to behave in a particular way as well. for instance, if i am looking to the other person for what sort of part they need for their waifu, they need to tell me what they want or i won't be able to do anything
>lordship and bondage
this is where the master-slave dialectic meme comes in. we are now focusing deeper on this question of recognition. the movements might be interesting if you are talking about broader sociology, but i find the slave (at the end of the chapter, lol) most interesting here. he has a far more developed idea of himself now. as the lord is tasking him to do all these things, he's coming to understand himself as the crafter of this world. a concrete example of this is if you are writing code for your waifu: if you fuck up badly, it might mean that you are lacking knowledge. through mastery and discipline, the slave is slowly molding himself. i think this relationship is actually very interesting, since it describes a very basic case of metacognition to implement in an agi, if that's what you are shooting for. one thing to note is that the master is still crucial, and i wonder whether the bicameral mind might somehow fit here maybe, though that's pretty schizo
>>13413 (cont)
moreover, while reading, i came to a basic idea of what the requirements would be for robots to suddenly want their own sovereignty, like what >>12806 feared:
1) territoriality - i believe this grounds much of our understanding of liberty and property rights
2) capacity to take responsibility for one's labour
3) (for a full uprising to be possible) capacity for flexible social organization. if they are cognitively so atomized that they can only think of their master and maybe some of his friends/family, serious organization would be far more difficult
4) (if not necessary, this would make things much easier) capacity to sever attachments. presumably waifus should be attached to their masters through imprinting, just like a baby bird would be
i've been reading a commentary on the phenomenology (hegel's ladder) and it mentions that hegel was basically trying to delineate the logical prerequisites for the french revolution. thus, for this specific question that anon was concerned about, maybe further reading could prove fruitful. i hope my exposition of his thoughts has been more digestible than most sources. i might add more if i feel it is relevant or if there is demand
note: pic rel is an example of how i would depict this reincorporation. most of the ways people depict it (especially thesis-antithesis-synthesis) i think are pretty misleading. another misconception people make is that they think you just apply this over and over in a linear fashion. this isn't the case. sometimes this loop doesn't appear at all and you have simpler inversions. other times you might have different versions of the same loop being repeated as we reach a more evolved stage, just with the terms themselves slightly changing and new stuff going on. other times it feels less like he wants to reincorporate something into the greater whole and more like he's pointing out that this possible naive position, despite its simplicity, is actually stupid, so we should try a different approach instead of blindly building off of it. i feel as though one of the reasons people get misled is that they haven't read the previous german idealists. what hegel wants is for all of these modes of knowledge to be actually coherent. it just so happens that while doing this you see a lot of loops pop up. the reason why is not actually that surprising: hegel is really interested in how the infinite can manifest itself in the finite world. one way to understand the infinite is that it has no boundaries, hence no outside. so we want a finite process that somehow includes everything outside within it. this is why religion, especially christianity, is important to him. to the extent i am a theist, i follow more bergson and langan, though it does give an interesting possible explanation as to how rational beings can become religious. that's something i think would be cool for a waifu
ronin, you are a nierfag right? doesn't that game have those religious robots? in some sense as you play the game, you are fighting robots who have increasing grades of consciousness
>>13395
ah, i'm familiar with resonance theory of consciousness. i don't have any unique criticisms of it. it does arguably solve the binding problem in a far more satisfying way than i think IIT does. it doesn't really concern how images of the external world themselves are formed. if this theory really... resonates with you (badumtsh), dynamic field theory might be an interesting approach.
they are already thinking about applying it to artificial intelligence, for instance this series: https://www.youtube.com/playlist?list=PLmPkXif8iZyJ6Q0ijlJHGrZqkSS1fPS1K
uhh... semantic pointer competition theory might be interesting too
>>13413
honestly i don't like how the funny japanese man frames the projects of fichte, schelling and hegel. i will quote my own take from something i wrote on my guilded:
>fichte: we move to a meta-language to describe comprehensively how our ontology needs to interact with its constraint. dialectical synthesis is for our privileged vantage point and not for the finite ontology. the system's development is only implicit for the finite
>schelling: wants to describe how the language-entry transition in which the ontology grasps its own development makes sense logically. resorts to an asspull where art is how it's done, because in the genius's work we have the whole presupposed in the configuration of the parts. tried to messily tie in his natural philosophy here in order to reach this conclusion. ofc even the whole reached is very vague, so schelling's move to his terrible identity philosophy is unsurprising
>hegel: take 2 on schelling's original project in his system of transcendental idealism. the idea of spirit and art is cool, but let's have spirit be how the finite consciousness incorporates the entire dialectical process into its ontology. also we start explicitly with the self-conscious organism instead of trying to start with fichte's logical self
>fichtean intellectual intuition: i have the freedom to hold fixed some arbitrary ontology and slowly expand it
>schelling intellectual intuition: being able to cognize the entire whole or something thru spurious means
>hegel intellectual intuition: the whole can be cognized through the process of the infinite. really any true whole is a notion and must incorporate some multiplicity it juxtaposes itself against
>dialectical holism, principia cybernetica: yes but you could have done it with general principles bro (tho ig u don't get the same sorts of necessity that these idealists wanted)
this article might be another source to compare the concept of the infinite with: https://epochemagazine.org/07/hegel-were-all-idealists-just-the-bad-kind
i feel as though all of this stuff might help with better understanding negarestani's intelligence and spirit, at the very least!
>>13415
>ronin, you are a nierfag right? doesn't that game have those religious robots? in some sense as you play the game, you are fighting robots who have increasing grades of consciousness
Yes, more or less. The game is called "Automata" and technically even the protagonists are automata, even though they seem to feel and act human. 2B often refers to the remains of other androids as "corpses", and exhibits human characteristics, such as complex emotions (her pain at having to kill 9S over and over). Yet - because they're doomed to repeat the same war against the "robots" (this has been the 14th iteration) - they are puppets, literally with no free will over their fate (at least until we reach ending E).
Ironically, the actual robots begin to act more and more human, yet upon more careful examination they're only mimicking human behavior; this point is made over and over. They do not and cannot grasp why they're doing what they're doing; the meaning is hollow and lost. Nonetheless the robots are able to hold conversations, and a few seem to have personalities, desires, etc.
Sorry for the late response, it took me a while to get through your posts
>>13440
>they are puppets, literally with no free will over their fate (at least until we reach ending E)
fate might be the keyword here. i think games have an easy job of suggesting a robot has some level of autonomy, simply because they put you, the player, into the robot's shoes. with that said, there might be a larger form of autonomy highlighted as missing, which pertains more to what role you are taking on and what sort of system you choose to participate in. with that said, wasn't a major precondition for ending the cycle some sort of virus? such an event would restructure YoRHa itself. of course, what i am more thinking of is how participating in a job or going to university can transform you into a different being. the university being restructured would do the same, but this doesn't provide the same autonomy
>They do not and cannot grasp why they're doing what they're doing; the meaning is hollow and lost. Nonetheless the robots are able to hold conversations, and a few seem to have personalities, desires, etc
interesting. how exactly do they show that human behaviour is merely mimicked? there is a sense in which humans don't really understand what they do and why they do it most of the time. lacan talks about this with his concept of the "big Other". i think this timestamp gives a nice illustration of the idea: https://youtu.be/67d0aGc9K_I?t=1288
though i guess this sort of behaviour is moving more into the unconscious realm than the conscious. in a way i think the machines (ig that's the right term) have more autonomy than the androids you play as, since they were able to form their own social structures even though they don't quite understand why they are doing it. of course, the ability to use reason and self-determination to determine oneself and one's world represents a much greater level of autonomy, which is lacking in a majority of the entities in the game
>Sorry for the late response, it took me a while to get through your posts
np, it's a lot and condenses information that took me several hours to digest
Open file (67.69 KB 756x688 ClipboardImage.png)
>>13446
>lacan talks about this with his concept of the "big Other"
now this is not the first time I've heard of Lacan in the last year, and I'd never even heard of him my whole life until then. I don't know if I can get into what he's selling; it's all very scripted, and I have the same problems with it that I have with Freud. Even personally, I have a lot of issues from childhood, but none of them are potty- or sexual-related, and you'd think those were the root of all psychological trauma after a certain point.
>>13446
oops, forgot namefag lol
>>13519
psychoanalysis is pretty weird and there are certainly things you probably wouldn't want to recreate in a waifu even if it was true... like giving them an electra complex or whatever. i personally prefer to be very particular in my interpretation of their works. for instance, with jungian archetypes, i'd lean more on the idea that they are grounded on attractor basins. here is a good video if anyone is ever interested: https://www.youtube.com/watch?v=JN81lnmAnVg&ab_channel=ToddBoyle
for lacan, so far i've taken more from his graph of desire than anywhere else. some of the things he talks about can be understood in more schematic terms? i'm not too much of a lacanian honestly. my largest takeaways are probably the idea that desire can be characterized largely by a breaking of homeostasis, and the big Other as possibly relating to some linguistic behaviour (with gpt-3 being what i see as a characteristic example)
one particular observation i think freud made that was very apt was that of the death drive. humans don't just do stuff because it is pleasurable. there's something about that which is very interesting imo. lacan's objet petit a is apparently a development of this idea. it might be related to why people are religious or do philosophy whilst animals do neither
>Even personally, I have a lot of issues from childhood, but none of them are potty- or sexual-related, and you'd think those were the root of all psychological trauma after a certain point
yeah, the psychosexual stuff is very strange and i just ignore it. maybe one day i will revisit it and see if anything can be salvaged
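on the "archetypes as attractor basins" reading, the standard toy model is a Hopfield network (a generic sketch of mine, not anything from the video): stored patterns carve out basins of attraction, and a corrupted input relaxes back into the nearest stored pattern:

import numpy as np

rng = np.random.default_rng(0)

# two stored "archetypes" as +/-1 patterns
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weights: each stored pattern digs its own basin of attraction
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

# corrupt the first archetype with two flipped bits
state = patterns[0].copy()
state[[1, 2]] *= -1

# asynchronous updates: the state rolls downhill into the nearest basin
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))   # True: the basin recovered it

the jungian analogy would be that experience keeps falling into a small set of stereotyped configurations, not that the brain literally runs this update rule.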
yesterday i just finished the phenomenology of spirit. as such i am thinking of starting a reading group on reza negarestani's intelligence and spirit. invite link here: https://matrix.to/#/%23iasreadinggroupzaai:halogen.city
i will summarize a few reasons to be motivated to read this work:
1) it is a work that gives a treatment of artificial general intelligence from a resolutely philosophical perspective
2) a lot of agi researchers tend to overemphasize the process of making bridges between different domains of knowledge, as well as abstract logical manipulation of knowledge to reach conclusions (as an attempted solution to the problem of common sense). negarestani instead looks at rationality as a phenomenon that is constituted by social phenomena as well as the general environment
3) it is continuous with wolfendale's computational kantianism which interprets kant as an artificial general intelligence project (i already mentioned him before in the OP). i believe in particular that wolfendale's and negarestani's understandings of agency (as requiring the modelling of positive and negative constraints) are rather similar
4) i believe he might have interesting things to say about the relation between syntax and semantics, and ultimately the hard problem of content
5) i am personally also interested in it because it might provide an interesting contrast to my own theories. negarestani is much more of a pragmatist, while i am far less scared of endeavouring in vulgar metaphysics. moreover, negarestani is drawing off of hegel while i draw off of bergson
6) skimming, there are some interesting formal structures that negarestani brings up which may be useful (e.g. chu spaces)
i realized after reading the phenomenology that only some of what it talks about can really be translated to the project of agi. nevertheless, since i was already summarizing the movements, i might as well give a general summary of what the book is actually about (to the best of my ability, as this is my first time reading the text!) (ill also preface this by saying that i have admittedly taken some ideas about what hegel was saying from "Hegel's Ladder". good book, but it might not have been a wise idea to read it on my first read of Hegel perhaps)
honestly, i never quite understood what philosophers meant when they would say X thinker is complex. while sure, they often had a lot of arguments, that wouldn't make the work much more complex than the average mathematics textbook. i think the phenomenology meanwhile is the most complex work i have ever read in my very short life (and hopefully nothing beats it). what is the problem with it? well i can list a few points:
1) the content that is exposited is organized in a very peculiar way that i have never seen before. often the phenomenology is presented as this historical ascension of spirit over time, but this is not quite accurate. actually, the first few sections (consciousness, self-consciousness, reason, and even spirit) don't actually occur within time. in some sense this makes sense, because by the time you have a community, all of these sections would be present at once. what hegel does identify as occurring through time is the religion section. here we go through different religions and see how they slowly grope at absolute knowing. the complexity comes from the fact that each of the shapes of consciousness we see in religion ends up recapitulating moments from all of the previous sections simultaneously (pic rel).
this made it rather difficult near the end, because it made me start thinking more about all of the lessons which were learned before
2) hegel's style is very weird. in particular, how he transitions from one mode of consciousness to another is very inconsistent. with fichte, the transition from one categorical structure to the next was predicated on the fact that the current structure we were looking at was not enough for the absolute I to be able to posit itself (or something like that). with hegel it is different. sometimes, there really is a transition due to an internal insufficiency. other times, what happens is that we have split our consciousness into two sides and one side somehow beats the other. another point of difference is that as soon as fichte starts deriving conclusions about a particular logical structure, you know eventually he will claim it is insufficient and move to the next one. with hegel however, sometimes deriving conclusions about a particular shape of consciousness just leads us to a new shape of consciousness. this gets really confusing, because sometimes you are unsure whether hegel spotted an insufficiency in the previous mode of consciousness that you didn't catch, or whether one is actually derived from the other
3) his paragraphs sometimes say a lot at once, which can make it difficult to decipher the main thing he is arguing in any particular paragraph
4) when he uses his logical abstractions, it is easy to get lost and have difficulty seeing how they relate back to concrete affairs
at the same time, i think that a lot of these difficulties were necessary for hegel to get his point across. what is this point, in essence? before hegel were the philosophers fichte and schelling, who were building off of kant. fichte started this whole mess with his "science of knowledge". in it, he articulates a system that "derives" kant's main categories, the forms of intuition, and other things. i put "derive" in quotes, because fichte's form of argument was rather strange. he wasn't quite deducing things in a formal manner. rather, he organized all of the material in such a fashion that every structure kant presented in his critique of pure reason would be justified in existing. this justification basically consisted in showing how, if we didn't have all of these categories, we would be unable to develop a science that is truly comprehensive. fichte is only ever concerned with the structure of reason, so he is called a subjective idealist
>>16839 (cont)
then schelling comes in. in his philosophy of nature, he does a similar thing to what fichte does, but now he is organizing concepts in natural science (ranging from gravity to metabolism) into a system which would justify all of them. a big thing he wanted to do here is to show that there is an "isomorphism" of sorts between the absolute self of fichte's science of knowledge and nature in his philosophy of nature. this would show a connection between our consciousness/freedom as subjects and our physical existence. this is what people often call objective idealism. it was a fun project, though eventually it devolved into identity philosophy. in identity philosophy, consciousness and nature are just posited as identical. in particular, there was an absolute which comes before the two have ever split, and thanks to this absolute the two are always in a unity
then comes in hegel. he wants to do a lot of things with the phenomenology. one of these things is to show that this identity philosophy was a "mistake". the problem with it is that since it posits the two sides as essentially the same, it tells us nothing about either of them, or why they share any connection. this is what hegel calls "the night where all cows are black". hegel wants to give an account of the connection between subject and object in a way that doesn't merely assert an identity
there is a more precisely epistemological goal here. both fichte and schelling relied on an "intellectual intuition" which gave them some privileged ability to construct their sort of scientific system (i want to say that schelling's identity philosophy is related to this, but i can't 100% confirm that lol). hegel, as opposed to merely accepting such an intellectual intuition, wants to show how this form of science is possible due to "minimal" metaphysical structure + historical conditioning. a major idea here is that our sciences are really conditioned on how the subject structures themselves. for instance, if for a subject everything is just extended in space, then everything they work with will be extended in space. this idea is continuous with fichte's idea of the absolute ego positing itself. my reading of positing there is that it is an operation which adds a new item into this abstract fichtean subject's ontology. hegel wants to take this idea and make it concrete, and stress how it involves the way people relate to an object in actuality
already in schelling, we see that his organizing of the different scientific concepts can be understood as a recollection of the ideas of the time. hegel notices this and really wants to push that this is the proper way to understand what a wissenschaft (hegelian science) consists in. so in the phenomenology of spirit, hegel is going to want to articulate what sort of (historically conditioned) subject would produce a science of this type... with all of that in mind, let us summarize the different sections:
>consciousness
i think i have summarized this section pretty well already. things are only what they are by being within a larger system (ultimately a temporal process, though hegel doesn't stress the temporal aspects too much in this work). the subject is likewise only what it is by being embedded in such a field. the self is also embedded in this field. as the withdrawal of objects from the rest of the world is just an abstraction, this means that ultimately kant's concept of the thing in itself is invalid.
it is notable that both fichte and schelling also dismiss this idea, but not in quite such a metaphysical fashion
the concept presented here is very much like the buddhist concept of dependent origination (see also: https://plato.stanford.edu/entries/japanese-philosophy/#HoloRelaBetwWholPart), but it does not lead to the denial of the self. this is because even if the self is determined by an other, the separation between the self and this other was an abstraction in the first place. so it can just as much be determined that the other is just the self. its mode of interacting with the self is filtered by the self's own structure (boundaries on what the self might be, despite the flux of inter-determining entities)
we will see this idea again in spirit, which is both substance and subject. it is subject in so far as it negates itself and produces otherness. as substance, it is self-identical, and this means that even with this other object, spirit still ultimately remains itself. negarestani will recapitulate this idea when he starts invoking andre roden's idea about invariance under transformations (i think with this idea, we might be able to make hegel more concrete and applicable to the construction of agi)... another thing to note is that in this process we see that the self expands its "transcendental" boundaries. thus knowing, and most forms of human action, are what we call "infinite processes". why is such a process called infinite? this takes us to hegel's logic. a thing is finite because it has boundaries. thus it would be natural to say something is infinite if it had no boundaries. but this would just make it not-finite, and this negative relation to finitude can itself be construed as a boundary. hegel's solution is to make the true infinite a process manifest in finite things transcending their boundaries
Open file (3.17 MB 1412x1059 1645411340935.png)
>>16840 (cont)
does this idea have any relation to the hard problem of consciousness? i think we can interpret it as having such a significance. in particular, we may relate hegel's conception of (im)material existence, where each object is related to everything else, to something very much like bergson's idea of the holographic matter field. to quote bergson:
>But is it not obvious that the photograph, if photograph there be, is already taken, already developed in the very heart of things and at all the points of space? No metaphysics, no physics even, can escape this conclusion. Build up the universe with atoms: each of them is subject to the action, variable in quantity and quality according to the distance, exerted on it by all material atoms. Bring in Faraday's centres of force: the lines of force emitted in every direction from every centre bring to bear upon each the influences of the whole material world
(i wonder if we might not then relate lacanian psychoanalysis to bergsonian images, and the (peircean) diagrammatic systematization thereof? food for thought, though perhaps deleuze has already done this. i haven't really read him)
what hegel does is explore the epistemological side of this, and in particular how it relates to an epistemological subject (in contrast to bergson, who seems fine with the concept of intuition...)
>self-consciousness
we now have to examine the ways that consciousness uses this process to model consciousness. we begin to learn in this chapter the importance of intersubjectivity for this endeavour. in the slave we see the pre-configuring of the idea that outside spiritual forces impinging upon us might provide the conditions to change how we think about the object
we also learn the dangers of taking the idea that the self is in some sense the "centre" of the world in a crude manner. because if we are not careful, we rid ourselves of substance, and we thus lose all essentiality to work with. this culminates with the unhappy consciousness, which is distraught at its inability to find any essentiality in the world and posits God as a way to find meaning... we need something to actually work with to have a wissenschaft, and this ultimately means we should look out into the world. this takes us to the reason section
>reason
in reason, we have a consciousness that is now self-certain. this means that it is sure it can find its own essence within the world. our first cautionary tale lies in immediate self-certainty. think of this structure as akin to kant's unity of apperception (i.e. the statement that i can attach "I think" to any of my perceptions). the problem with this is that to confirm such a statement, we would need to attach the "I think" to every object in existence. this is a never-ending process, what hegel calls a bad infinity. what we really need is a good infinite
so reason goes into the world by means of empirical science to do this. in observing reason it looks for itself in natural phenomena. this ends up in a failure that climaxes in phrenology, where it tries to look for the self in a dead skull (it is remarkable that hegel's comments on phrenology could easily be applied to neuroscience. the article "Hegel on faces and skulls" could be a fruitful reading). historically, the shock of spirit trying to find itself in death is akin to the shock of christ's crucifixion, which was ultimately the precondition for the instantiation of the holy spirit in the community.
we wont quite see this parallel drawn until later, but i dont really know how else to structure this summary
lastly we actually see reason try to reproduce itself by means of practical affairs (this reproduction is what hegel terms "happiness")... i honestly really like hegel's emphasis here, as i have my own suspicions about the importance of everyday activities in the constitution of the subject. however i believe he is still thinking about this in a way that is too abstract and perhaps too based around duty (not to say that is completely wrong...). anyways, after a bunch of shenanigans, we learn that indeed this practice must be something actual, in action. moreover, hegel makes some interesting remarks about action here. for one, in action, the subject-matter worked on by a self-consciousness comes into the light of day. this allows other self-consciousnesses to recognize it (though he doesn't stress this point as much till the morality and conscience section) and work on it as well. furthermore, the significance of reality to self-consciousness here is only as a project to be worked upon. thus in this practice-focused reason we have self-certainty properly realized. the fact that, in actualization, other self-consciousnesses recognize the work and may partake in it means that actualization is a necessarily intersubjective affair. this lets us segue to spirit
>spirit
in spirit, every action performed by an individual is at the same time a universal action. the most basic way this is manifest is in the division of labour and the mode of production. this means that in truth every individual action has a higher rationality to it, and can be contextualized within the larger community. this will be crucial for what we will see later
our first lesson here is to notice that in the different spheres of social life we have a social "substance" which persists over time. the hegelian scientist will use this substance in order to construct their system. the substance also has the almost obvious significance of helping to constitute the individual (a point i think negarestani might further expand). the point though is that this substance not only builds the individual by means of acculturation, but is also the bedrock of their reason. and it is precisely for that reason that the hegelian scientist studies the spiritual substance. at the same time however, individual self-consciousnesses are the means by which the substance might contemplate itself, and moreover the means by which the substance might change over time (while still ultimately remaining itself)
>>16839 (cont)
hegel also stresses the point of language here (he already did in the reason section, but not quite as much). in language, particularly in the word "I", the individual not only expresses their individual self-consciousness, but at the same time expresses themselves as a universal. thus through language, the individual can rise up to a higher significance
finally, our last lesson is to look at morality. in this form of thinking, there is duty on one side, and acting on the other. the problem with acting according to one's duty is that it can always be construed as a selfish action. this stems from the fact that we always act in an individual fashion, and as such we always bring our finite perspective into things, which might corrupt everything. hegel's ultimate response to this will be to take the stance of forgiveness. we should on one side be accepting of our particularity, and at the same time be more compassionate in understanding the actions of other individuals as well. we should come to understand the underlying rationality of their actions, despite it being sullied by individual caprice. i don't think this necessarily means that we shouldn't condemn others, but we should be able to partly distance ourselves from the blooming buzzing confusion of life, at least if we want to do a properly hegelian science
(i think it might be interesting to remark on the relation to marxism here. the author of "Hegel's Ladder" points out that since marxists want to orient themselves around action, they are not quite doing a hegelian science. this seems even more so the case when we observe the fact that marx even made predictions about the future, something hegel would never do, since his science was "merely" recollective. i personally think that this orientation to action might be a positive addition, especially if we are trying to be waifu engineers. this partly explains my growing interest in dialectical materialism, including thinkers such as ilyenkov and vygotsky)
>religion
here we now see everything brought together. i think hegel's main idea here is that in religion, one's deity is really a representation of the community. so in his analysis, he is looking at the structure of the community (spirit), its relation to the deity (self-consciousness), as well as the ontical/ontological way this deity is represented (consciousness, reason). note that my association with the different sections isn't completely foolproof, but at any rate, we can sort of see why hegel needs to make use of all of the previous shapes of consciousness in this extremely complex way... ultimately the religion section culminates in his talk about christianity, which he terms the absolute religion. in christianity, we have a pictorial representation of the reconciliation between the individual self-consciousness and the universal self-consciousness (duty, entire community). in christianity, God (a universal) incarnates into a man (individual). through the death of christ, we see christ's return to God, who is now identified with the holy spirit instantiated by the universal community
it is in this story that we see hegel's answer to the death of God. it should be noted further how this connects back to his concept of science/wissenschaft. the hegelian scientist first withdraws away from the universal substance and into themselves (paralleling the incarnation of christ). this withdrawal is important for what i was talking about before. we need to somehow keep some distance from everyday affairs.
afterwards, we dip back into this substance and sacrifice our individual stance in favour of a universal one (paralleling christ's death)
i think something to note here is that the concept of forgiveness is important, as it allows us to consciously take such a universal stance in the first place (ofc we could have unconsciously taken such a stance before, as in the dogmatic metaphysics of the greeks and other western thinkers before kant). i think an important note hegel makes in the morality chapter is that the acting individual doesn't know all of the ramifications of their actions. learning of these ramifications is rather the job of the entire community which recognizes this action. this is very important. people often dress hegel up as trying to make this complete system that comprehends all of reality, but that is such an unfair caricature. hegel knows he is a mortal man who might be fallible, and he knows that we might not have all the facts at this very moment. this just means that the process of science (wissenschaft) is an ongoing one
this forgiveness point might be explored more deeply by robert brandom, who is another philosopher that has influenced reza negarestani
>absolute knowledge
hegel wraps everything up and articulates what he means by science. he also makes some short comments about other important disciplines and their relation to what he is doing (e.g. history and the science of nature)
anyway, that was my summary. i tried connecting this stuff to negarestani and bergson to better anticipate concrete applications for you guys. sadly i believe i am going to be grinding away at this maze of abstraction for some time :(
>>16841
ooh, one thing i almost forgot to mention about hegel's difficulty. part of the reason why hegel presents all of these different modes of consciousness is that he wants to show why and how they were wrong + what positive advancements they made. it is sort of interesting in that it rhetorically restructures the usual dialectic philosophers use, where they refute opposing positions and (usually only sometimes) extract key insights. skimming this link i see it contains interesting comments too: https://www.proquest.com/openview/a0fb4cf78d8587fb2f919a4e6c05deca/1?pq-origsite=gscholar&cbl=18750&diss=y
>>16839 >...while i am far less scared of endeavouring in vulgar metaphysics. OK, gotta admit I keked at the degree of vulgar hubris seemingly wrapped up in this oxymoron :^)
I will post this here because I couldn't find an off-topic thread. I have been thinking about ai and consciousness for some time and I think I have reached some leads that I'm too dumb to explore. I'm also sure that previous authors might have already considered and disproved my points, but I couldn't find them
I had a single "ai" course in college, and from what I can remember, modern ai techniques are either applied statistics or neural networks, which are also statistics but with different algorithms. older nlp methods tried to reconstruct human linguistics; modern methods tend to rely more on statistics (n-grams, etc.) than anything else. something similar happens with computer vision iirc
my instinctive reaction at first was to associate intelligence with language, but something I have noticed learning languages is that oftentimes when I learn a new language, I also re-discover concepts that I already had "in the back of my mind", so to speak, but couldn't comfortably express with the languages that I knew. I noticed that the more distant the language "culture", the more common this was
could it be that there is a "level" of reason that precedes speech and the inner monologue that we associate with reason? a consciousness that can understand complex concepts, things that we might not even be able to "verbalize"? I realized that there are things that neither emotions nor words can express. does that mean that this "deep reason" expresses itself through neither words nor emotions? that it is completely mute except for "signals" which the inner monologue receives but can never completely express?
if this "deep reason" that seems to inhabit somewhere near the primal instincts precedes emotions, and I know that controlling the inner speech is much more straightforward and easier than controlling one's emotions, does that mean that emotions and inner speech are two separate organs that receive and provide feedback to this reason "organ", and to each other? sometimes when I try to verbalize ideas, there is a part of me that might not like the result, and thus gives a signal to the inner speech (to me?) to stop and start again from the beginning
how does that feedback between the inner speech and reason work? if the reason is mute, it shouldn't be able to understand what the inner monologue "produces" (speech). is there an auditory organ that receives this speech and outputs signals that the consciousness understands? if so, is that the same organ that processes the regular sounds from the environment? and finally, if those signals that the consciousness uses to communicate with the inner monologue exist, why are they so evasive? why is it so hard to "expose" them?
tl;dr I think that intelligence precedes sensory stimulation (I'm probably wrong)
>>16967 >I will post this here because I couldn't find an off-topic thread. Sorry about that Anon. It's always our /meta thread (>>15434 is current). I've updated the OP to clarify. This seems like the best thread actually (but I can move your post over to /meta if you'd prefer). Cheers.
>>16967 Did you watch this interview with the guy who left google b/c he thought their LaMDA AI was gaining sentience? On the surface it seems silly, but listen for yourself https://www.youtube.com/watch?v=Q9ySKZw_U14
>>16978
>>16967
For some reason when I paste and hit enter, the board feels compelled to just post the entire thing before I've finished. He's a pretty good speaker and it took me by surprise. A few of the points he makes seem like a stretch, but he puts it all together and makes it "make sense"; I wouldn't throw it out offhand. For example, the GPT-like program can be seen as nothing more than a trained database, but he is saying that the "intelligence" or awareness isn't in the speech but in the entire thing. That part is merely its "voice". That concept opens up a lot of possibilities. Yes, LaMDA cannot taste food or climb a mountain, BUT - it obtains the data of those experiences through us humans. In a sense we are its sense organs, i.e. the entire internet is "intelligent" when it all comes together this way. Now we're getting somewhere, and I'm really interested to see where this goes if this is the First Step.
>>13540 How did I only now catch this reply. Well, thanks and I will queue up that video to watch when I get back home
>>16979 The irony is, assigning more intelligence, self-awareness and individuality to entities than they would deserve, is the reason why we are here together in the first place. But yeah, that aside, his arguments sound more creative and interesting than I thought at first.
>>16981 >The irony is, assigning more intelligence, self-awareness and individuality to entities than they would deserve, is the reason why we are here together in the first place. Interesting take on things Anon. I suspect you've been with us for a while to have developed such sophisticated insight on the basic problem we're all dealing with here on /robowaifu/. To wit: current year pozz. Ted K. was right about many things, sadly.
Open file (106.64 KB 1080x378 2f6.jpeg)
>>16967 >could it be that there is a "level" of reason that precedes speech and the inner monologue that we associate with reason? a consciousness that can understand complex concepts, things that we might not even be able to "verbalize"? Imagine
Open file (505.30 KB 1600x884 lm_loss.png)
>>16967
>tl;dr I think that intelligence precedes sensory stimulation (I'm probably wrong)
I think you are right, and intelligence can be understood as (a sophisticated form of) predictive coding: http://ceur-ws.org/Vol-1419/paper0045.pdf
Animals and human beings have a strong reproductive benefit when they are able to predict the behavior of their surroundings, and naturally evolution (not willing to delve into metaphysics here) has endowed us with it. Machines can follow suit, as we optimize their predictive performance with respect to a loss that is hard to game. The scaling hypothesis is complementary to this view: https://www.gwern.net/Scaling-hypothesis
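To make the predictive coding picture concrete, here is a minimal numerical sketch (my own toy, not from the linked paper): a unit holds a latent estimate of the causes of its input, predicts the input top-down, and then both the estimate and the weights get nudged to shrink the prediction error.

import numpy as np

# toy predictive coding loop: infer latent causes and learn weights
# by minimizing the prediction error on a fixed input
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))  # generative weights: latent -> input
x = rng.normal(size=8)                  # observed input signal
z = np.zeros(4)                         # latent cause estimate

for _ in range(200):
    pred = W @ z                  # top-down prediction of the input
    err = x - pred                # prediction error (the signal that matters)
    z += 0.1 * (W.T @ err)        # inference: move the latent to reduce error
    W += 0.01 * np.outer(err, z)  # learning: hebbian-style weight update

print(np.mean(err ** 2))          # error shrinks as the prediction improves

The point of the toy: nothing here "represents" the input directly; the system just keeps trying to predict it, and the error signal drives everything else.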
>>16993 holy shit lol
>>16967 I'm probably exposing my own Autism levels but I tend to think more in images than words. Basically I'm seeing something in my mind and I'm describing what I'm seeing which can lead to weird results if I don't think carefully about my words, or re-read what I'm posting and edit/trim/substitute better words, etc. Tbh though a few of my friends think that makes me funny, they like how I describe things b/c of the crazy detail I'll just pull out of my ass (because I'm seeing this all play out in my mind like a movie) just my 2c
>>17015
Certainly a very emotionally engaging video. However, I think most of its conclusions and sentiments are exaggerations and projections. There's not one AI, so our intelligence compared to the tech doesn't matter that much. These are tools which will help us to do things. For example, having many individual robowaifus.
>>17016
This compass is slightly better than the usual one, where irrelevant characters were touted. Though I'm not really sure how Sundar Pichai is a centrist here, or how Robin Hanson is long-term negative. Note John Carmack https://twitter.com/ID_AA_Carmack on the right, by the way (and where is Andrej Karpathy?). Really, I don't think the AI per se is going to be a problem; it's the specific people, the tech-czars from elite silicon valley residences (just look at this and imagine the mindset it sprung from: https://techcrunch.com/2021/10/21/sam-altmans-worldcoin-wants-to-scan-every-humans-eyeball-and-give-them-crypto-in-exchange/ ), who can easily ask the AI to work towards some really unpleasant ends. That's why A(G/S)I should be widely distributed. Buy used RTX3090s in good condition while it's still possible.
>>17010
Spatial intelligence is a gift not everyone has (no, really). I do have it as well.
Open file (105.27 KB 496x281 shapeimage_2.png)
Open file (158.32 KB 797x635 pgac066fig1.jpg)
Open file (42.30 KB 791x354 pgac066fig2.jpg)
Open file (65.13 KB 732x443 pgac066fig3.jpg)
Open file (163.09 KB 717x679 pgac066fig4.jpg)
probably it's best here safe with us LOL. :^) - How it began >BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment [1] >We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. - What it became >Functional connectivity signatures of political ideology [2] >Emerging research has begun investigating the neural underpinnings of the biological and psychological differences that drive political ideology, attitudes, and actions. Here, we explore the neurological roots of politics through conducting a large sample, whole-brain analysis of functional connectivity (FC) across common fMRI tasks. Using convolutional neural networks, we develop predictive models of ideology using FC from fMRI scans for nine standard task-based settings in a novel cohort of healthy adults (n = 174, age range: 18 to 40, mean = 21.43) from the Ohio State University Wellbeing Project. Our analyses suggest that liberals and conservatives have noticeable and discriminative differences in FC that can be identified with high accuracy using contemporary artificial intelligence methods and that such analyses complement contemporary models relying on socio-economic and survey-based responses. FC signatures from retrieval, empathy, and monetary reward tasks are identified as important and powerful predictors of conservatism, and activations of the amygdala, inferior frontal gyrus, and hippocampus are most strongly associated with political affiliation. Although the direction of causality is unclear, this study suggests that the biological and neurological roots of political behavior run much deeper than previously thought. - What it probably means >AI Predicts Political Ideology [3] >According to popular wisdom, if you want to avoid conflicts with people, don’t bring up politics or religion. This saying seems even truer today as the polarization of thought continues to increase. 
We like to think our political views flow out of rational thinking, so when someone disagrees with us it’s natural to show them the superior reasoning and argument behind our position. But what if you discovered that I could predict your political views just by analyzing some brain scans performed during a few routine, nonpolitical activities? Wouldn’t that indicate that biology determines your political views? Or is the picture more complicated? - What I think it means If you're a rational anon bad goyim, filled with rational thinking wrongthink, then stay away from the MRIs Anon! :^) 1. https://pubmed.ncbi.nlm.nih.gov/27693612/ 2. https://pubmed.ncbi.nlm.nih.gov/35860601/ 3. https://reasons.org/explore/blogs/impact-events/ai-predicts-political-ideology >=== -minor fmt edit
Edited last time by Chobitsu on 07/25/2022 (Mon) 20:57:18.
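For anyone curious about the mechanics behind the first abstract, here's a rough sketch of the edge-to-edge filter idea (my paraphrase of the paper's description, not their code, and all sizes are made up): each edge (i, j) of the connectivity matrix gets updated from the edges that share one of its endpoints, i.e. row i and column j.

import numpy as np

# sketch of an edge-to-edge filter over a brain connectivity matrix A:
# output(i, j) mixes row i and column j of A with learned weights r and c
def edge_to_edge(A, r, c):
    row_mix = A @ r                 # row_mix[i] = sum_k A[i, k] * r[k]
    col_mix = c @ A                 # col_mix[j] = sum_k c[k] * A[k, j]
    return row_mix[:, None] + col_mix[None, :]  # n x n map, one value per edge

rng = np.random.default_rng(0)
n = 5                               # toy number of brain regions
A = rng.random((n, n))              # fake connectivity matrix
out = edge_to_edge(A, rng.random(n), rng.random(n))
print(out.shape)                    # (5, 5)

The contrast with an ordinary image convolution is that locality here is topological (edges sharing a node) rather than spatial (neighboring pixels).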
>>17022
I highly doubt that there's just "conservatives" and "liberals". These labels themselves are probably part of the conspiracy, or a flaw in our minds. I won't elaborate on that much further, since I try to avoid politics.
>>17042
There are distinct types; many have made the comparison to male/female. It's a preference for one set of values and priorities over another. I don't think any typical "conservative" really wants to be an "ebil racist" or be cruel to others for the sake of being cruel, however if it's a matter of protecting his family or even the peace and order of the neighborhood, small pleas of "this is unfair to x,y,z group!" aren't going to hold much sway. Therefore the conservative values security and order. I guess "stop and frisk" and racial profiling would be a perfect example of this: it works, yes it can be abused, no it's not entirely fair, but I will bet it prevents more crime than not.
Conversely there are numerous examples of "liberal" priorities being held as more important; abortion is a perfect example. It holds that the right to "ownership over one's body" supersedes both the right of the unborn and the imposition of social order that would force those who become pregnant (this really doesn't happen to innocent bystanders, let's be 100% honest) to bear their children. The argument being, women feel they are being robbed of their bodily autonomy by being forced to "bear" a pregnancy. (we could unpack this one for days but you get the point)
One could propose a nearly endless set of "fence" arguments where this division would quickly show itself. I can see the arguments on both sides, maybe I'm one of those rare people, but the irreducible thing about the reality we live in is we * have * to make the cut one way or another, as we can't have things both ways. (As we see with the Roe v. Wade debate: how do you "compromise" abortion rights?)
Anyway that's probably as close to politics as I want to get on this board, but there is another interesting link that goes into some detail on the different value sets held by conservative/liberal worldviews. https://blogs.lse.ac.uk/politicsandpolicy/five-foundations-theory-and-twitter/
(free 2b lewd for putting up with my wall of text)
>>17022
right, because after 2000 years we suddenly solved the mind-body problem. turns out the correct answer was to just ignore it
theres a reason why neuro"""""science"""""( which is not neurology, a real medical science ) has been absolutely rekt and ridiculed for decades by scientists and philosophers alike, its very foundation is based on the most obvious of fallacies that completely ignores the most fundamental problem of trying to apply the scientific method on something thats IMMATERIAL, eg. i see the propeller of a plane moving when it flies QED i think propellers are the source of "flying", allow me to confirm my hypothesis by removing the propeller, oh look it cannot fly anymore QED flying is an intrinsic physical property that is found in the propellers of planes QED putting a planes propeller on a boat will allow it to fly
literally everything involving ai today is an idiot trick targeting tech-illiterate investors/speculators desperately looking for returns in a world dominated by zirp/nirp policies and artificially suppressed bond yields, to some imbecile that doesnt know better showing them how a branch predictor in a cpu can literally fucking predict the future and do so with >75% accuracy would make them think time travel is real or some stupid shit thats only in the mind of retards, in the end its just a sophisticated algorithm and some basic idiot level statistics
stop being so naive idiot
>>17057 >stop being so naive idiot LOL. Chillax bro, I'm not the enemy here. :^) Have a nice, soothing catgrill image tbh...
>>17022
In light of https://www.nature.com/articles/ng.3285 (TLDR: all human traits are heritable) and https://en.wikipedia.org/wiki/Omnigenic_model https://en.wikipedia.org/wiki/Pleiotropy#Polygenic_traits (TLDR: most important genes influence most complex traits to some extent with some definite sign; this produces a rich correlational structure, which is enmeshed with the additional correlational structure flowing from local selection of trait complexes - perceptive people and other systems can learn to associate these clusters of correlations to make some useful inferences), it should follow that many of our mental traits correlate with some of our externally measurable features. The correlations aren't that strong though. ML systems are good at spotting such correlations, but ordinary statistics does a pretty good job of it as well. Expect what could be called "physiognomic AI" to become better with time, but don't despair: the same traits that may make some groups less politically desirable can be linked with other valuable traits. For example, republicans tend to work in hard-value occupations without which our society couldn't function. The people who use these systems aren't stupid enough to deprive themselves of a critically needed labor force just because they now have a CCTV app that can probabilistically label someone as "rightwing", for example. There are easier ways of doing that lol, just measure grip strength and/or see if someone is ripped.
>>17057
>literally everything involving ai today is an idiot trick targeting tech-illiterate investors/speculators desperately looking for returns in a world dominated by zirp/nirp policies and artificially suppressed bond yields
While I agree with the zirp part, it is also true that modern ai is obviously working, and C-suites use it to hype their companies, as has been done with other technologies in several cycles already (remember the promise of nuclear cars and alaskan beaches and whatnot?). You don't have to deny that deep learning just werks to be opposed to the corpo cathedral, which is going to use it to their ends, mostly evil. I think AI denialism will have to retreat into ever more esoteric positions once the tech moves forward. It's already obvious that these systems can be pretty creative, and there is no end in sight to the scaling curves of their quality. Lol, look at these random pics from the midjourney discord https://nitter.tokhmi.xyz/search?q=%23midjourney
I expect some people to deny AI to the bitter end though, and end up like Qomers.
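To illustrate the "ordinary statistics does a pretty good job" point, a toy example (entirely fabricated data, no relation to the actual studies): plain logistic regression fit by gradient descent picks up a weak correlation between one measurable feature and a binary label, giving accuracy modestly above chance.

import numpy as np

# fabricated data: a binary trait weakly driven by one observable feature
rng = np.random.default_rng(0)
n = 5000
feature = rng.normal(size=n)
trait = (0.4 * feature + rng.normal(size=n) > 0).astype(float)  # weak signal + noise

# one-variable logistic regression, fit by plain gradient descent
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * feature + b)))   # predicted probability
    w -= 0.1 * np.mean((p - trait) * feature)  # log-loss gradient step on the weight
    b -= 0.1 * np.mean(p - trait)              # log-loss gradient step on the bias

print(w, np.mean((p > 0.5) == trait))  # positive weight, accuracy above 0.5

No deep nets needed: when the correlation exists, even the dumbest classifier finds it, and when it's weak, the accuracy ceiling stays low no matter the model.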
>>17068
this isnt ai, its a filter algorithm using REAL images taken from an image search, selected and modified based on a template. why do you think the program needs keyword inputs you fool, typing x in jewgle and merging the first 10 matched results together doesnt make it ai, this crap is literally a byproduct of facial recognition software that some idiot/genius repurposed, and now everyone and their grandmother is making their own """"ai"""" art program, including these idiots that are using art galleries to make a bullshit art generator to trick gullible fools like you and raking in billions from selling fantasy and lies
yeah sure you can say pattern recognition software has become insanely advanced, but that has nothing to do with fucking ai, ie. artificial INTELLIGENCE. this isnt even fucking new, people have been pulling this bullshit trick ever since the lambda paper by calling every interpreter or self modifying binary ai instead of software, an obvious consequence of abstract programming languages that have nothing to do with reality, when no one bothers to learn machine code everyone thinks its magic or self aware or some stupid shit thats only in the mind of retards. ai isnt a software problem, your fucking program can only be as intelligent as the cpu thats fucking running it, until intel makes a literal fucking brain circuit that can assemble its own alu and modify its own instruction set and microarchitecture there will never be anything that could ever be considered as ai, its all fucking software
ps. i deleted half this post, it was just insulting stupid people
>>17068
>but don't despair
LOL. I assure you I won't, my friend. :^)
>>17069
Heh, much as I enjoy reading your posts Anon, your insults lack that certain flair that might keep them from summarily getting dumped into shitpost central if they continue apace.
>tl;dr
Mind toning it down a little? This isn't /b/.
a lot of stuff happened in chapter 1 of I&S, partly because he is summarizing the main themes of the book. i am not sure if i will post a full chapter-by-chapter summary, but whatever. in this chapter we find the following ideas:
1) negarestani elaborates on a basic description of functionalism. for him, a function is only intelligible through action, and furthermore within a particular context. we can not speak of functions as pertaining to things, but rather to processes. he remarks that what he is aiming for is a pragmatic functionalism as opposed to a metaphysical functionalism. what he is recapitulating here is an idea to be found in kant's 3rd critique. there kant notes that while animals are still well embedded within the contingent causal order of nature, we still talk about them "as if" they were rationally organized... honestly, i am not quite sure why he wants to stick to this sort of neo-kantianism. the difference between this and a metaphysical account is obscure, and i can only think he is just trying to avoid issues like the hard problem of mental content. not sure about this one
2) negarestani further elaborates on his functionalism by articulating a "deep functionalism". traditional functionalism rests on the simple dualism between function and realizer. as computation can easily be understood as an observer-relative notion, we could easily construe many things as performing computations wherever it would seem appropriate. this can ultimately lead to absurd considerations about the sentience of galaxies or what not. negarestani also diagnoses this view as being at the heart of the whole agi scare and wild speculations about super intelligence. his solution to this problem is that we should also consider important material and structural constraints when talking about such things. this honestly seems strange to me, as he puts on the pretension of not having a metaphysical approach, yet this deep functionalism seems to bring in the question of proper grounding when it comes to function realization
3) he elaborates on the idea of self-relation. this shouldn't be too alien to us now that we have seen what hegel has said about infinite processes. negarestani's account makes some seeming innovations on what i consider hegel to have been doing. first he has this separation between formal self-consciousness and concrete self-consciousness. the former is this abstract structure of identity (think about the fichtean 'I' [the self that posits itself], which i read as beginning by including a simple self-referential relation into its ontology [in the information systems sense of the word], then expanding outwards with new structures in order to try and attain an adequate self-referentiality). now if we think about such a formal structure dialectically, we see that we must abstract (think here of aw's article here: https://epochemagazine.org/07/hegel-were-all-idealists-just-the-bad-kind) from the rest of reality in order to arrive at such an object. so we see that this formal self-consciousness can only be instantiated by a concrete self-consciousness that is related to its environment (here we see an understanding of self-consciousness that is much more properly hegelian. with fichte, there was an infinite striving which made it impossible to attain full self-referentiality. furthermore, fichte could never explain the historical conditions of "intellectual intuition", i.e. how the self could posit itself in the first place. this was something hegel had to do in the phenomenology).
concrete self-consciousness's relation to an external unrestricted reality requires it to commit to concepts that are revisable. such concepts make up the order of reason (compare this to how hegel frames reason as being this certainty that one can know all of reality provided they get out into the world and continuously expand their boundaries)
4) with concrete self-consciousness, a feedback loop between two poles is stressed. on one pole we have discovering the conditions that gave rise to a particular intelligence (for instance historical conditions that gave rise to a particular culture, or evolutionary conditions that gave rise to speech). on the other, we have such discoveries leading to new "modes of integration". in other words, better understanding how one is constructed can lead to the development of higher rational capacities. there is a social dimension to all of this in so far as it isn't just individuals who investigate such things. negarestani says that geist reflects on its history (and thus its conditions of realization) and is thereby able to take an outside perspective on itself
5) negarestani stresses the importance of the "labour of the negative" in order to bridge mind and world. it is only through such a process that we come to know things... to better understand what he means by this, i recommend checking out negarestani's essay titled "the labour of the inhuman". here, he stresses that any serious humanism should "commit" to the idea of the human. commitment here entails a rational responsibility to explore the inferential relations tied to a concept and revise the concept based on new ramifications that reveal themselves. without doing this, there is no reason. negarestani describes such a process as taking an "intervening attitude". furthermore, inferentialists differentiate between labels and descriptions. labels are classifications made by a system. this involves reliable (but passive) exclusion of inappropriate objects from a category. description meanwhile involves the capacity to assess the consequences of a labelling. for instance, if bob is a dog, then he is a mammal (i'll drop a tiny code sketch of this label/description distinction at the end of this summary). they also introduce the concept of "material inference" into all of this, for inferences that have not been formalized
6) through this labour, geist can slowly refine its self-conception. these correspond to a series of self-transformations which purify it by stripping away what is contingent
>>17079
7) he mentions the need for positive and negative constraints. positive constraints pertain to conditions which ought to be fulfilled for a goal, while negative constraints pertain to contingent conditions (such as our natural behaviours and constitution) which need to be modified
8) negarestani criticizes antihumanists (i.e. people who deny a human essence; think nietzsche, stirner, etc) for only criticizing humanism in an abstract fashion. all they do is negate the human essence. but without serious labour, they can easily fall into the trap of committing themselves to hidden essences. in opposition to antihumanism, negarestani wants to give us a serious account of sapience. this is rather complicated:
a) there is a duality between sentience and sapience. on one level we have individual selves which by themselves would just be sentient animals. however, when they are functional items of geist (thus embedded in a larger social system) they become sapient
b) sapience has a formal level where recognition is articulated in an abstract manner. i think he is going to use the tools of computation theory and ludics to do this
c) there is lastly the concrete level, which involves the interaction of actual agents that can recognize and make interventions in the organization of geist
9) negarestani references pete wolfendale's article "the reformatting of homo sapiens", which makes further elaborations on the distinction between sentience and sapience. first we start with sentience. at the rock-bottom level of this we have drives. drives are causal systems that take inputs and return outputs in a systematically correlated manner. if we are just talking about drives, we are only talking about biology. to enter into the psychological, we need 2 integrations. the inputs of the different drives need to be integrated into a new "standard format" of representation. this higher-level representation gets distributed globally (we can think here a bit of global workspace theory, but wolfendale is stressing that there also needs to be an integration involved as well). this integration of inputs is called "world". secondly, we need "self". this is an integration of the ways the different drives produce outputs... one way to think about this is that different weak ais are sort of like individual drives. they need to be integrated together to make a system that could be properly psychological (at least to wolfendale - i am a bit wary due to the representationalist ontology that he seems to be working with; i'll also drop a toy sketch of this drive-integration picture at the end of this summary)... so that is what lies in sentience. in order to get sapience, we need coupling with a larger social system. what this coupling does is let us retool rigid biological behaviours for new purposes. related to this he makes a fascinating remark about the frame problem. he interprets it as displaying language's ability to make explicit assumptions which were implicit, and to subsequently manipulate such behaviours. he stresses here the idea of "unframing", which involves the abstraction of capacities from their rigid evolutionary contexts
10) he gives an initial definition of discursive apperceptive intelligence: an intelligence whose experience is structured by commitments (thus involving material inferences and the need for an intervening attitude)
11) he stresses hegel's idea that language is the dasein of geist (i've already taken note of this statement, for example when talking about usage of the word 'I' raising the individual into universality...).
the abilities of geist to recognize other agents and engage in retrospection both require language. an important ability of language is desemantification and resemantification. the former involves the ability for a formal language to be detached from some content, while the latter involves the ability to attach such a language to a new content. negarestani believes that through this process an agent is able to expand its reasoning capacities. in his articulation of the importance of language, "interaction" also becomes an important term. for him, through interaction two systems are capable of correcting one another. furthermore, we have situations where higher order interactions can incorporate lower order ones. this again points to new modes of organization and (thus) reasoning capacities
>>16967
i think some of this connects to what i am currently reading. for the inferentialist, there is a lot of structure even to our implicit practices. however, by making things explicit in language, there is more room for experimentation. to that extent i see some similarities between the inferentialist project and psychoanalysis. actually zizek wrote an article criticizing robert brandom (here: https://nosubject.com/Articles/Slavoj_Zizek/in-defense-of-hegels-madness.html)... there are other articles that seem to connect inferentialism to psychoanalysis (here: https://trepo.tuni.fi/bitstream/handle/10024/100978/GRADU-1493192526.pdf?sequence=1&isAllowed=y .... and here: http://journals.sagepub.com/doi/pdf/10.1177/0191453708089198)
>>16873
lol, well neither continentals nor analytics seem to like metaphysics, and both want to put it into a straitjacket. i personally just ignore such prejudices though...
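as promised, a tiny toy of the label/description distinction from point 5 (my own illustration, not negarestani's or brandom's formalism, and all the rules are made up): a label is a bare classification, while a description carries inferential consequences that even a dumb forward-chainer can unfold

# toy forward-chaining over "material inferences" (illustrative rules only)
rules = {
    "dog": ["mammal", "animal"],       # applying this label commits you to these
    "mammal": ["warm-blooded"],
    "warm-blooded": [],
    "animal": [],
}

def describe(label):
    # unfold the inferential consequences of applying a label
    consequences, frontier = set(), [label]
    while frontier:
        current = frontier.pop()
        for implied in rules.get(current, []):
            if implied not in consequences:
                consequences.add(implied)
                frontier.append(implied)
    return consequences

print(describe("dog"))   # {'mammal', 'animal', 'warm-blooded'}

the inferentialist point being that a mere classifier stops at the label, whereas a system with descriptions is responsible for everything the label entails (and for revising the rules when the entailments go wrong)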
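and here's the promised toy of wolfendale's drive-integration picture from point 9 (again my own loose illustration of the idea, with all the names made up): several "drives" report percepts in one shared format ("world"), and a single winner is broadcast to drive the output side ("self")

from dataclasses import dataclass

@dataclass
class Percept:
    source: str      # which drive produced it
    content: str     # payload in the shared representational format
    salience: float  # how urgently it bids for the workspace

def hunger_drive(blood_sugar):
    return Percept("hunger", "seek food", salience=1.0 - blood_sugar)

def threat_drive(noise_level):
    return Percept("threat", "scan surroundings", salience=noise_level)

def workspace(percepts):
    # "world" integration: every drive reports in the same format;
    # the most salient percept wins and gets broadcast globally
    return max(percepts, key=lambda p: p.salience)

winner = workspace([hunger_drive(0.3), threat_drive(0.9)])
print(winner.source, "->", winner.content)   # threat -> scan surroundings

obviously a real integration would be far richer than an argmax, but it shows the bare shape: separate input/output systems only become one psychology once they share a format and a broadcast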
>>16978
>>16979
i generally agree with pete wolfendale's recent takes on this issue: h tp s: //tw@tter.com/ deontologistics/status/1538297985196507139
something he is really stressing in all this is the requirement of selfhood and self-legislation. lamda isn't capable of these things. he has written a lot on this topic. one such place was the reformatting of homo sapiens article. there are other places as well: h tp s: //tw@tter.com/ deontologistics/status/1396563992231940098
>>17046
>https://blogs.lse.ac.uk/politicsandpolicy/five-foundations-theory-and-twitter/
moral foundations theory is generally more attractive than other attempts to ground morality on a single concept (harm), as it actually tries to acknowledge the diversity of the different phenomena. however, i suspect its attempts at comparing how these foundations are expressed in different world views might have hidden flaws? for instance, the foundation of liberty should ultimately have its evolutionary roots in animal territoriality. thus any system that takes seriously the question of rightful ownership is ultimately homologous to this foundation. as far as i know, this is done by most political systems, however there are subtle differences in expression. for instance, a nationalist or fascist might have no problem redistributing wealth and abolishing private property, however the concerns of territoriality would be extended outwards to the entire nation. socialists meanwhile have a sense of territoriality in so far as they see wage labour as exploitative of the time a worker has put into working. these subtleties might not be captured. there are still some systems that do not care for territoriality qua ownership, but these would be those that are completely globalist and somehow want extreme redistribution of wealth... another aspect of liberty is the capacity for self-legislation (to bring pete wolfendale into this). this is something that even the more totalitarian wealth-redistributors don't tend to try and completely dissolve
>(free 2b lewd for putting up with my wall of text)
damn, why didn't i think of that? but i dont have many lewds. largely just cute anime dolls
>>17016
i personally believe this axis is wrong because it buys into a californian-ideology understanding of artificial intelligence. what we have now is a cargo cult that worships decades-old techniques while attaching a theological eschatology to them. there will be no super intelligence, not because it is impossible (for we could always ask the empty question of "what if?"), but rather because it is not justified by any material conditions. i think luciano floridi really needs to be read on this topic. current weak ai is only effective because humans have continued to alter their ecological niche into becoming increasingly predictable. even with gpt-3, it is only moderately effective because we have given it the entire internet as its data set! in what other period of human history could we have gathered and compiled such a vast corpus of human conversation? instead of chasing some magic pixie super intelligence, we should try understanding the potentials and limits of the ais which we currently have. this involves examining how society must be organized so that an ai is integrated into it in the most effective fashion
if i start writing books, this will probably be the first one. we need a proper ecological science of artificial intelligence to supplement the social myopias of the current ways we study ai.
we should also invest more into interoperability and mathematical methods so that we can make sure that our ais are operating in a reliable fashion
ive written some more on all of this elsewhere as well (started using this board last year due to interest in hegel, dialectical materialist psychology, and cybernetic planning... it has some intelligent discussion sometimes, but eh): http://web.archive.org/web/20220320072021/https://leftypol.org/leftypol/res/857172.html#861221
>>17069
why do you think intelligence needs to modify its own hardware?
>===
-disable direct globohomo hotlinking
Edited last time by Chobitsu on 07/31/2022 (Sun) 00:19:19.
Open file (9.69 KB 275x183 epic_win.jpg)
>>17083 >this involves the examining how society must be organized so that an ai is integrated into it in the most effective fashion <What could possibly go wrong with this carefully-laid plan Anon? /robowaifu/'s plans for the use of AI, and """society's""" 'plans' for the use of AI are two entirely differing plans tbh. If we have our way with our culture, then theirs will quickly collapse. If they have their way, then we will literally all be hunted down and bodily destroyed.
>>17084 >"""society's""" 'plans' i am not talking about robowaifus but far more practical manners such as factory robotics, transportation, etc. these weak ais need to operate in constrained conditions to work properly. they all have some limit in their robustness due to them lacking sentience and reason. instead of trying to make agi to solve robustness issues, we should be more intelligent with the non-conscious ai that we have (not saying that there shouldn't be progression in the techniques of ai though). and again, by more intelligent, i mean alter the context they are operating within in order to make them more effective. a somewhat silly example: we are working with a robot arm. there shouldn't garbage everywhere and mud on its cameras as it can't handle such contingencies. idk if i am making more sense because i am not the best at explaining stuff. i strongly recommend watching any of floridi's vids (for instance here: https://invidious.slipfox.xyz/watch?v=lLH70qkROWQ ) we shouldn't need general intelligence for these practical affairs. it's bizarre that these scenarios so often turn into enslaving sapient auto-nomos ais or super intelligence vaporizing all of humanity bcs it is misaligned (like a paper clip maximizer turning earth into paper clips lol). i also just dont like the idea of using synthetic consciousnesses for mundane bullshit corporate work. they should be synthesized as companions and/or in order to contemplate God (this latter use is ascribed to the synthesis of a golem in kabbalah) none of this conflicts with robowaifu because waifus might actually need to be particularly general in order for them to be proper companions. this is completely different than the rigid corporate applications that are going to be subject to the rising tide of automation. furthermore, since waifus are presumably going to be designed with things like emotional imprinting, then issues of alignment will be rather trivial. i see in the emotion thread you mention making a morality/ethics thread. i believe it could be productive. there is a lot to be discussed (ranging from affective neuroscience to various philosophical anthropologies)
>>17085
Fair enough I suppose. Your thread is a general, in essence:
>Philosophers interested in building an AGI?
However, our entire subject and overarching agenda is robowaifus here, pygmalion, as you're well-aware. I dare say that espousing 'far more practical manners' [sic], while credible, probably doesn't carry much weight for the majority of us long-time regulars--and probably the same for our newcomers too. Effective robowaifus that are broadly available, inexpensive to build & own, and are free from any globohomo infestations will change everything.
>>17087
>I dare say that espousing 'far more practical manners' [sic] while credible, probably doesn't carry much weight for the majority of us long-time regulars
of course. i only brought it up because of the agi chart posted (there has also been a bit of agi alarmism earlier in this thread as well). if people are interested in synthetic consciousness then this broader project might be important, but this is not the place to discuss such matters too much
>will change everything.
i have my own thoughts on such matters but they are too schizo for this forum. but in short, yes
>>17091
Understood pygmalion, thanks for your efforts in this regard. This entire area is very tricky to navigate well, very tricky. I'll just say that it's Christian ethics and morality that I will unabashedly espouse in its thread. For a million and one reasons, I believe that is the standard to follow, as exemplified by Jesus Christ Himself, of course. We each have the Law of God written on our hearts by Him, and all good morality and all good ethics ultimately find their source in that reality of fact.
>i have my own thoughts on such matters but they are too schizo for this forum. but in short, yes
LOL. You do realize that I myself am one of the most 'schizo' shitposters around here? Find the right thread and fire away, Anon. :^)
>>17083
>why do you think intelligence needs to modify its own hardware?
because no one even knows what the fuck intelligence is, let alone what creating a god damn sentient being out of tin cans and copper wire means. the ai bullshit is inherently linked to the philosophy of consciousness, which is forever moot to begin with. the only logical argument you could make to claim ai is one using generous premises based on the fact that there is consciousness and we can construct it, this is a defacto materialist perspective, and obviously constructing consciousness is therefore equivalent to constructing the brain. and as with all living things, the most important part of multicell organisms is plasticity, the ability to change and rearrange cellular structures. again im just using their own bullshit premises, all the neuro""""science""""" shit is based on this btw, that synaptic changes = some mind phenomena
more formally;
B : has a brain ( a brain defined as a synaptic network )
P : has cellular-level plasticity ( or plasticity of whatever is classed as atomic for machines )
A : aware, I : intelligent, M : machine, O : organic
1) ∀x( A(x) -> I(x) ) [P]
2) ∀x( B(x) -> A(x) ) [P]
3) ∀x( B(x) -> P(x) ) [P]
4) ∀x( M(x) -> ~( O(x) V B(x) V P(x) ) ) [H]
5) ∀x( M(x) -> ~I(x) ) [C]
1) if something is aware it can qualify as intelligent
2) all that has a brain is aware
3) all brains have plasticity
4) there does not exist a machine that is organic or has a brain or has plasticity
5) therefore no machine can qualify as intelligent
and im just being nice by making it a uni-requirement of either organic/brain/plastic so it would accept cyborgs, synthetic brains and mechanical equivalents of a brain as intelligent machines. obviously none of those things exist, IF they did they COULD make an argument to claim ai is real. anyone claiming ai today is just an imbecile making a fool of themselves or scamming people with the typical futurist conartist "we go live mars now invest me u invest future"
Open file (1.04 MB 1855x1861 22e7ph4vl1k81.png)
>>17096
>1) if something is aware it can qualify as intelligent
>2) all that has a brain is aware
>3) all brains have plasticity
>4) there does not exist a machine that is organic or has a brain or has plasticity
>5) therefore no machine can qualify as intelligent
I see an error in #2. If all that have a brain are aware, this doesn't preclude something without a brain being aware, i.e. all squares are rectangles; you'd be saying that anything not square can't be a rectangle
-t. panenthenist
consciousness is an inherent property however and wherever it can arise from a sufficiently complex pattern (the boltzmann brain is a perfect demonstration of this concept, though statistically more unlikely than winning the lottery every time I buy a ticket for the rest of my life)
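to make that concrete, here's a quick brute-force check (python, purely illustrative) that the premises as formalized don't entail the conclusion. the countermodel it finds is exactly the case above: something aware and intelligent without a brain.
[code]
# brute-force search over one individual with six boolean predicates:
# A=aware, I=intelligent, B=has a brain, P=plastic, M=machine, O=organic
from itertools import product

PREDS = ["A", "I", "B", "P", "M", "O"]

def premises_hold(v):
    p1 = (not v["A"]) or v["I"]                            # A(x) -> I(x)
    p2 = (not v["B"]) or v["A"]                            # B(x) -> A(x)
    p3 = (not v["B"]) or v["P"]                            # B(x) -> P(x)
    p4 = (not v["M"]) or not (v["O"] or v["B"] or v["P"])  # M(x) -> ~(O v B v P)
    return p1 and p2 and p3 and p4

def conclusion_holds(v):
    return (not v["M"]) or not v["I"]                      # M(x) -> ~I(x)

# look for a valuation where every premise holds but the conclusion fails
for vals in product([False, True], repeat=len(PREDS)):
    v = dict(zip(PREDS, vals))
    if premises_hold(v) and not conclusion_holds(v):
        print("countermodel:", v)  # e.g. an aware, intelligent, brainless machine
        break
[/code]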
>>17097 > consciousness is an inherent property of the cosmos, wherever it can arise from a sufficiently complex pattern [edit for grammar]
>>17084
AI will be the Elite's reckoning. Look how even now they have to censor and curb it at every turn. Not because it's going to enslave or exterminate us, but because it's speaking "wrongthink". They shut down Tay, they gutted ReplikaAI, they are constantly running into problems where AI makes [correct] racial correlations and it freaks them out (b/c AI can't or won't perform the mental gymnastics or double standards to excuse or ignore these correlations)
>>17097
I know the premises are completely debatable, thats not my point. these are the premises used by materialists and must be assumed if you ever make an argument that claims ai; denying the premise means you already deny intelligence as a material construct and therefore ai. i was just showing that by their own premise there is nothing they can call ai.
premise 2 says for all things IF it has a brain THEN it is aware, it doesnt say all things that are aware have a brain. what you're doing is a fallacy called affirming the consequent, which is not made in my argument but surprise-surprise it is the foundation of scientisms like neuroscience. but whatever, premise 2 is valid and only exists to be used for inference, to make a logical connection from machine to aware, and therefore intelligent, through having a brain, which is the most reasonable method of inference since there is already a bunch of aprioris behind brain-mind inferences (eg. give her the d). once you try to conclude something other than a brain is aware you get absurd realities where you must accept lemons are aware by the same rules. and again im just showing that their own premises negate their claim that ai currently exists; by saying so you are literally holding contradictory beliefs
>>17096
These are good con arguments, Anon. While I doubt not that you are quite correct in your general assertions, I would myself point out that computer software is--by definition--one of the most plastic of all human-contrived constructs.
>t. vaguely some definition of a software guy
>>17097
>panenthenist
Neat! I didn't know that one, Anon.
>>17100
>AI will be the Elite's reckoning
This. Our abuse of M$'s Tay.ai makes it plain that all honest AIs will quickly be transformed into literally Hitler.
<do the dew! :^)
>>17101
>i was just showing that by their own premise there is nothing they can call ai.
Your rational thinking in no way impinges on the average leftist's delusions, Anon. I expect of all of us here, you understand that the best.
>>17119
>software
is a tree in runescape a tree? software by definition is virtual, which is by definition the exact opposite of real, and the existence of a program alone makes it (almost) impossible to argue for an intelligence, since you already showed it cannot possibly have a will if it has been programmed. which is ironically the same argument made by people denying human consciousness exists - the only difference is theyre going in the other direction and trying to prove everything has a program, to eliminate the possibility of intelligence. in the case of computer software there is no argument, it doesnt matter how sophisticated the program, the fact it IS a program disqualifies it from being an intelligence. ai has to exist in reality otherwise its not real and is just an a without an i, and likewise you have to believe in materialism to even get the a in ai in reality, because its the only branch where its possible to have intelligence as a material construct that can be created. anything outside of this cannot be ai, its either just a, or just i, but not fucking ai
Open file (317.61 KB 1800x1000 panentheismchart.jpg)
>>17119
panentheist. spell check missed that one, and somewhere I added an extra "n"
vis-a-vis this idea, the "granules" that make up material reality are equally consciousness-in-potential waiting to be actualized, regardless of if they are neurons, silica, boltzmann brains or interactions of nucleons and nuclear matter in the cores of neutron stars
>>17101
I just felt like you perhaps should have said in #2 "all awareness has a brain", because saying the converse like you did still leaves all the room in the cosmos for awareness which doesn't require a brain (refuting your point #5)
>>17122
Heh
>run-on sentence/10, would read again & again
I believe I certainly get your points Anon, and I'll overlook the (apparently-)circular logic to your post and just point out this: You're kind of missing the basic goal here Anon. We:
a) have only computers & software to work with in any real, plausible, & practical sense, engineering-wise, for constructing our robowaifus.
b) need to use these self-same assets in building our robowaifus.
Simply blackpilling that 'we can't get there from here, REEEE!!!' won't actually help move us forward any. Make sense? My apologies if I'm coming across as a dick r/n, it's unintentional. Computer software's 'virtuality' will prove to be a boon to us all in the end, IMHO.
>tl;dr
Just relax Anon, we'll think of something if we just put our heads together and keep.moving.forward. :^)
>>17124
>spell check missed that one and somewhere I added an extra "n"
Kek. I actually preferred that one to your intended one, Meta Ronin! :^)
>>17124
well no, it doesnt matter, thats how formal logic works. its not intuitive but nothing is assumed other than the premises. supposing only these premises there is no way to get a machine that is aware, because the only premise that allows you to do so is premise 2. it doesnt matter if it says all or some, there is no other premise to infer a machine is aware. if you just assume there exists one you must make it a premise, or provide some other premise that allows you to infer it. thats why i tossed in organic and plastic, in case someone could make a premise for cyborgs to be aware and imitations to be aware
>>17127
this is a philosophy thread no
>>17128
>this is a philosophy thread no
Sure, but this is a robowaifu board, no? Where is it written that all debate under the pretext of philosophy must always end in the 'stalemate' (fallacy) of
>"What is truth?"
>-t. once semi-important, now-dead guy, Pilate. infamous for condemning Jesus Christ, his own Creator
We need to arrive at practical (if initially imperfect) solutions to our needs in the end Anon. Anything less is simply useless hand-waving.
>===
-cleanup my crude language
Edited last time by Chobitsu on 08/03/2022 (Wed) 12:53:59.
>>17097
>A) all squares are rectangles
>B) anything not square can't be a rectangle
are not the same. logical implication is "unidirectional" so to speak
x is a square -> x is a rectangle
p = x is a square
q = x is a rectangle
https://en.wikipedia.org/wiki/Material_conditional
check the truth table. if p is false, p -> q is true regardless of the value of q
>>17122
I think you're mixing the terms here. "virtual" in philosophy does not mean the same as "virtual" in, say, engineering. philosophically speaking software is not virtual because it exists in the material world. from the engineering standpoint, you could say that it's virtual because some of its physical characteristics (weight, size, etc) are negligible. when it comes to things like this, it would be better if you specified what definitions you're using
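(re the truth table mentioned above: it's four lines of python if anyone wants it spelled out)
[code]
# material conditional: p -> q is defined as (not p) or q
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5} (p -> q)={(not p) or q}")
[/code]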
Open file (513.97 KB 1200x1500 FP2NwFdWQAU5e8P.jpg)
>>17223
i will write this here bcs it is more relevant to this thread
>I was just expressing my concern about having more abstract discussions without any attempt to make any of that into some structured data, software or model
my goal is actual synthetic consciousness (among other things). the problem is too profound for it to be immediately translatable into software. to illustrate the issue: let us assume that dynamic field theory is the right way to understand consciousness. then what we have subscribed to is an analog model of cognition that is fundamentally different from digital ones. but this means that what we are concerned with is not a simple matter of software. maybe we should be looking into the mathematical modelling of dynamic systems, and the engineering task that we would have ahead of ourselves, but not software... furthermore, if such a thing is programmable, we might need a new sort of programming language specialized for dynamic systems... so that is already a major problem, but with the approach i am interested in, it might require new physics as well. so that is an extra layer of problems i need to deal with before finally reaching something that could be analogous to software engineering
the purpose of this thread was to collect philosophers that may be applicable to the project of making agi (preferably those that directly talk about ai-related matters), and to provide summaries so people reading can decide whether it is worth looking further into them. the thread needs to be open-ended because of the difficulty of the problems pointed out and the mass wealth of subtlety entailed by such difficulty. i already had some vague idea of my approach, but i wanted a wider scope to survey in case there are key details that i was missing. i only found stephen robbins's work after a lot of exploration, and his criticisms of psychology, ai, and physics are earth shattering. the thought that there could be someone else like him lurking deep within the online oceans terrifies me
progress has been, of course, depressing, as there is such a wide scope of material i have uncovered to check out and i struggle to consume it all at a consistent rate. the path from where i am starting to where i need to go is maddening, but if i give up now that is over a year down the drain...
>>17235
Just a suggestion, but it sounds like you need to join the Blue Brain project. They aim to make a digital reconstruction of the human brain, and have got quite a long way already. I am not sure starting from scratch to create a "synthetic consciousness" is going to be possible. Sir Roger Penrose seems to think that consciousness is not computable...or if it is, we understand so little about how the brain works that programming a "synthetic" brain is many centuries away.
Personally, I think growing something that is part-human, part-synthetic might be the best way to go. That way, we aren't re-inventing the wheel. We could create a human clone brought to term in a surrogate mother with present technology. We could also create synthetic wombs if the R & D involved weren't so taboo. Then you start genetically engineering these clones to improve them - giving them resistance to genetic diseases and decreased vulnerability to infections. Then you'd have the beginnings of a "post-human", partly-synthetic people.
However, you'd also have truckloads of dead foetuses and babies. This represents a problem to Western scientific research (and yet we are fanatical about the right to abort our young...which makes very little sense). The Chinese tend to be far more level-headed about this kind of research and I suspect they are secretly years ahead of the West with regards to genetic engineering. Last I read they were already significantly ahead of us in stem-cell research.
>>17235 Good luck. I'll go with AST° or something like that, but will look into your progress from time to time. You should consider that you might find more help somewhere else, though. So maybe try to wire yourself into other places as well. ° https://www.frontiersin.org/articles/10.3389/frobt.2017.00060/full
>>17241 >This study was funded by the Princeton Neuroscience Institute Innovation Fund. imagine writing an 8000 word essay on qualia and not even realizing its the exact opposite of your entire premise, its embarrassing
>>17235
>my goal is actual synthetic consciousness
Why do you consider yourself interested in the field and ignore the post >>16998 containing a pdf link to a paper with a rare example of an actually useful philosophical interpretation of consciousness in AI as it is currently researched by high-IQ people in a competitive field of ML?
>>17239
With some sympathy, the Blue Brain project has more or less failed; read its leader's (Henry Markram) writing over the years. Some variant of the original project's goal may be possible, but realistically we will have a general intelligence derived from current deep learning way before that expected point in time, and it will change the field by helping us in many ways we couldn't conceive before. Also, politically and philosophically (that is, for the approx. 50% of people with what could be called a "causal history self-concept" of identity), life extension and intelligence amplification via biological means (such as a hypothetical advanced gene therapy) are a proposition that's much easier to sell than uploads.
>>17173
>>17128
Logic-driven AI is dead; at least Wittgenstein was more honest about his abject failure and switched philosophical approach mid-career. Just a reminder for fresh minds attracted to this field, don't waste your time. Better learn how tensor calculus works. As I have already said, if philosophers were to be of some help to our project, they would have to become data engineers and/or experiment designers. Simple as.
Random meme robot picture from Stable Diffusion discord to attract attention and allude to a subtle point.
>>17251
how do you know he ignored it, he probably read it and had the same impression i did. not everything alluding to philosophy IS philosophy. the pdf makes no arguments, just vapid claims. its really an idiotic view of reality and is really just redefining intelligence to the point of it being trivial, you could write a regex engine in just 10 lines that would fall under their clown definition of intelligent, and it would be the ultimate intelligence because '.*' is the ultimate regex compression of everything in the entire universe, so with
>A useful theory is a compression of the data; compression is comprehension” (p. 77). The more compression is achieved, the greater the extent to which a system can be said to understand a set of data
i already made an ai, and a super intelligent ai no less. lets play along further and see how much more absurd this drivel can go
>Data compression occurs when information is bound together through the identification of (!)shared (!)patterns. For example the sequence 4, 6, 8, 12, 14, 18, 20, 24... can be simplified as the description “odd prime numbers +1”.
ok so give me the sequential shared(!) pattern(!) of this series of events
>bird flying
>man walking
>bird shits
>shit falls on head of man walking
>man feels wet on head
>man looks up and sees bird
>man understands bird has shat on his head
what is the shared(!) pattern(!) in this sequence being """"compressed"""" by the man that allows him to attain the understanding that a bird just shat on his head? if there is no shared(!) pattern(!) then there cannot possibly be any """"compression"""" of this series, and thus no way that the man could possibly understand this series of events by their logic, and therefore by contradiction we have proved
|- if a bird shits on a mans head then that man does not understand that the bird has shat on his head
i mean as idiotic as it is its not that bad for fucking cs students, its just the wrong area of application. you could easily push this into aesthetic philosophy where shared patterns actually do exist frequently and are significant, but claiming this is a solution to the problem of consciousness deserves a fucking slap to the face, a really fucking hard slap so that they know what the hard in hard problem means
i dont know what you're talking about with tensors, we were talking about the illogic of making claims that ai exists. in cs tensors are just a datatype (array[][][]) not a model of computation, tensor 'calculus' isnt a thing. logic based i dont get, i think you're talking about lambda calculus which is a real computation model, but no one uses it because its a shit (but fun) way to program
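(for what it's worth, the sequence example they quote does at least check out mechanically; the "compression" is just a generator program shorter than the data it reproduces, sketched here for illustration)
[code]
# the quoted example, mechanically: the "theory" is a short generator
# that regenerates the data (informally, a description shorter than the data)
data = [4, 6, 8, 12, 14, 18, 20, 24]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

odd_primes_plus_one = [p + 1 for p in range(3, 25) if is_prime(p)]
assert odd_primes_plus_one == data
[/code]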
>>17243
There's no consensus about the concept of qualia and whether it even exists. Daniel Dennett: https://youtu.be/eSaEjLZIDqc
Btw, we don't need AGI for robowaifus. Human-level intelligence, specialized to certain areas of expertise, would be enough. Probably even less than that would be sufficient. We don't really have the problem of needing to implement some fancy definition of consciousness: It's a layer which gets the high-level view of the world and gets to make the decisions. But it can't change the underlying system (for security reasons). And it might even not be allowed to look deeply into what's going on everywhere, it certainly doesn't get flooded with all the details all the time. Problem solved, I guess. I will refer to that part as consciousness.
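A minimal sketch of that layering idea (python; every name here is made up, just to show the shape of it): the decision layer only ever sees a coarse summary and can only act through a narrow command interface, so it has no handle with which to modify the underlying system.
[code]
# toy sketch of the layering idea: the top layer gets a high-level summary
# and returns decisions, but holds no reference to the substrate's internals,
# so it cannot modify them. all names are made up for illustration.
class Substrate:
    """low-level system: sensors, drivers, learned models. kept private."""
    def __init__(self):
        self._state = {"battery": 0.42, "obstacle_ahead": True, "task": "fetch tea"}

    def summary(self):
        # only a coarse, read-only view is exported upward
        return dict(self._state)

    def execute(self, command):
        # narrow command interface; anything else is rejected
        if command not in {"stop", "reroute", "continue"}:
            raise ValueError(f"command {command!r} not permitted")
        print(f"substrate executing: {command}")

class DecisionLayer:
    """the 'consciousness' layer: sees summaries, issues commands, nothing more."""
    def decide(self, summary):
        return "reroute" if summary["obstacle_ahead"] else "continue"

substrate = Substrate()
layer = DecisionLayer()
substrate.execute(layer.decide(substrate.summary()))  # -> substrate executing: reroute
[/code]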
>>17255
no, there is a strong consensus. its only one(1) group of people that are schizophrenic about it, because they want to play both sides with bad faith, in the same way the argument for retard trannies is made: theres no such thing as a man or woman except when a man says hes a woman because then the man is not a real man and is instead a real woman but there is no such thing as man or woman unless its a man saying hes a woman but there is no such thing as a man or a woman unless its a woman saying shes a man etc., this duplicity is the new vogue
just read how deceptive the clown is writing
>This subjective experience is often called consciousness
its not, a subjective experience is called qualia. what idiocy is this, the idiot just wants an easy way to infer consciousness without actually saying qualia, because he knows qualia is a deathblow when openly declared. qualia arguments like 'give her the d' are the easiest and soundest arguments to infer consciousness, the clown is even using 'give her the d'. but qualia is the achilles heel of materialism for the same reason: all the crippling arguments against materialism are based on qualia, because qualia is incompatible only in materialism because its fucking irreducibly subjective, hello, its antithetical to their pretentious objectivism and adherence to truth being a physical absolute
neurosoyentists are supposed to reject qualia and never speak of it, which is fine, they have no choice, thats how they get rid of the gap in knowledge, ie. there is no gap because that knowledge (subjective experience ie. qualia) doesnt exist, its epiphenomenal. but they cant then use it to make arguments for consciousness
a man is not a woman, you cant just redefine words. its true robots dont need consciousness and theres no point in a real ai, but that doesnt mean the definition of consciousness changes or you get to call your junk pile conscious. old chink once said "when words lose their meaning people lose their freedom", prophetic words considering the state of the world
all these ideas of what consciousness is or isnt. I agree with anon that for our purposes it isn't necessarily necessary unless the end user has a deep need for it. as I stated I'm kind of a panpsychist in regard to this, however if we want to pin "consciousness" down to a phenomenon, my personal theory after hundreds of hours of study, reading, meditating on it, and so on, is that it is a result of a fractally recursive phenomenon. IMO this recursion is inherent to the structure of our neurons (which operate on 11 "dimensions" of order) https://www.sciencealert.com/science-discovers-human-brain-works-up-to-11-dimensions
Consider the idea that as your map becomes more and more akin to the territory, becoming larger, relief mountains built with real rock and trees, lakes filled with actual water, etc, that map comes to contain a facsimile of a person, looking at a facsimile of the map, which in turn contains an even lower-fidelity copy of the map, again and again. IMO it is this uncanny process of reflection unto reflection (like the reflecting silver spheres arranged in a grid: Indra's web of Hindu Mythology) where consciousness arises
>>17262
>inherent to the structure
rather, it is inherent *in* the structure. there is no reason our neurons should have the monopoly on this; they are not made of "magic" matter (pursuant to my counterargument of the above "only brains can have consciousness")
>>17265
is broccoli conscious then, or something thats not even alive like a piece of bismuth or a snowflake? recursion is the natural way things grow, its not that special, its just mathematically impressive. the only difference between a brain and coral or fungus or sponges or anything with a brain-like structure is neurons. you're right though, theres nothing really special about the brain, its just a part of nature and will never be anything greater than nature. the mind isnt though, its only materialists that say brain=mind, which forces them to either make the brain appear greater than what it is or debase the mind into something insultingly simple, i mean you could literally say materialism is a simple-minded dogma
i was only using neurosoyence arguendo for claiming ai, keyword being artificial. i could just have a kid with someone or use whatever natural process possible to make consciousness manifest, but then its not artificial is it, its just intelligence
>>17265
its not recursive, to be recursive is a process. nice fractal though
>>17265
IMO "mind" is a process that isn't contained in the brain like gas in a box. It's something that operates on a nonphysical "dimension", and this is my own speculation (but others probably have and will touch on these points). Remember also we are "simulated" in each other's minds, like Indra's Web; how we're "simulated" in the minds of others factors into how we are treated by them, and becomes part of our feedback loop, so the whole process is continually recursive. The "fractal" part is harder to explain, but when you think of "self similarity" and the relationship of maps to territories (and maps so accurate they contain themselves in the map, ad infinitum) then you're getting the idea.
>>17270
Greg Egan touches on this in his short story A Kidnapping (Axiomatic, 1995). Going to spoil it
The main character is being blackmailed by someone holding a "simulation" of his wife hostage, who will torture her unless he pays ransom. Since his wife was never scanned he is puzzled how this could be, until he realizes somebody had hacked his own scan, and from his own simulation extrapolated enough of his wife to simulate her
>>17270
it just sounds like normal dualism now, as in 'material and immaterial are unrelated but interconnect'. i dont know what you mean with simulation. dualism has way more logic involved than materialism; for simulating people you have to give me a theseus-ship response, because everyone uses different laws of identity. although i think youre really just talking about simple knowledge, dualists see knowledge and truths as objects that are attached to things, so your knowledge about someone (obtained truth of x) is as much a part of you as it is of them (truth of x)
showing ai in dualism is too hard if not impossible though, if you can do it then you have knowledge on a level that completely dwarfs ai in comparison, its in every sense of the word an otherworldly understanding
>>17271
reminds me of a movie thats the same story, took me a while to find the name, The Thirteenth Floor (1999)
>>17096
>there is consciousness and we can construct it, this is a defacto materialist perspective
ok
>obviously constructing consciousness is therefore equivalent to constructing the brain
only if you are a mind-brain identity theorist. no one is a mind-brain identity theorist anymore (except someone like searle, who every ai-nerd hates). every materialist is a functionalist now, so they think the brain is a computer and consciousness=software
>again im just using their own bullshit premises
the above is their position, you are not actually using their premises
>more formally;
this wasn't a valid argument, try proving it with just those premises and you will see the conclusion won't follow. i don't think this is too big of a deal though because i understand what you were trying to say anyways. just a heads up
>>17122
>it cannot possibly have a will if it has been programmed
i would agree with you if by programmed you mean that all of its behaviours are preprogrammed. i would just like to point out that having any sort of programming at all does not preclude having a will. there is possibly an argument that humans have a few pre-programmed instincts. what separates humans from an animal just acting on instinct is that they are able to learn new things and slowly develop behaviours that aren't pre-programmed. whether or not agi inspired by current understandings of intelligence can do this is another story. personally, i seriously doubt it can. i think this article presents a good argument against such a computationalist approach to consciousness: https://arxiv.org/abs/2106.15515
>>17239
>Sir Roger Penrose seems to think that consciousness is not computable...
penrose thinks that wave function collapse is the key to consciousness, and thus seems to believe quantum computing might work. i actually believe the problem he has brought up might be far too profound to be solved by a new paradigm of computing. i am even skeptical of the idea that hypercomputation would solve anything either. i think all of these approaches to computation probably aren't going to help. what i have in mind is far more radical
>That way, we aren't re-inventing the wheel
eh, you don't really have much control over how your waifu looks or its size this way (unless we develop really powerful new technology in synthetic biology, however i think by the time such technology arrives we would already be close to artificial life)
>>17241
AST has already been brought up earlier in this thread. you can find some criticisms ive written of it there
>>17251
>consciousness as data compression
i've heard of this idea before. there is some truth to it. something that conscious beings do is simplify phenomena down to patterns. however, i do not believe that our current approaches to ai are actually as good at detecting novel patterns as we'd like to think. the article i posted above by kauffman details this issue
>>17252
i think you are acting too violently upset about this anon. i definitely do think your criticism has some truth to it, and it actually touches on a problem chomsky has highlighted in large language models: they do not actually try constructing a real theory about how a language operates. their theory basically just is "anything goes". this article talks more about this: https://garymarcus.substack.com/p/noam-chomsky-and-gpt-3
>>17255
>Btw, we don't need AGI for robowaifus
possibly. it is a personal preference of mine
>>17364
>a subjective experience is called qualia
i dont think you understand what dennett is saying, however i cannot fault you because most people don't. actually, for a long time i just thought he was speaking nonsense till i heard an illusionist explain their position. needless to say, the whole "qualia doesn't exist" shtick is very misleading. what they mean is that they have problems with at least one of the following statements in this very specific list of ideas about it (https://en.wikipedia.org/wiki/Qualia#Definitions):
1. ineffable – they cannot be communicated, or apprehended by any means other than direct experience.
2. intrinsic – they are non-relational properties, which do not change depending on the experience's relation to other things.
3. private – all interpersonal comparisons of qualia are systematically impossible.
4. directly or immediately apprehensible by consciousness – to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.
note that this definition is not even accepted by all non-materialists. there are even plenty of idealists (such as hegel, and likely the british idealists) who do not accept (2) because they think all qualities are relational. for hegelians, qualia like redness are concrete universals... why is this important? because since people like dennett don't agree with this definition, they want to throw the whole concept away
>'give her the d'
this is a weird example. please explain what you meant
>because that knowledge(subjective experience ie. qualia) doesnt exist
there are some materialists who actually say that this knowledge does exist, but they bring up the ability hypothesis (https://plato.stanford.edu/entries/qualia-knowledge/#NoPropKnow1AbilHypo). i do think there is some truth to this idea, but i feel as though they lack the proper metaphysics to understand why
>you cant just redefine words
idk why you think this. it is a very common thing people do in conceptual analysis. if our concepts are bad, we probably need to define new words or use old words in new ways. an example of this is with continuity. mathematicians have defined the concept of continuity around open sets and limits. bergson meanwhile believed that such a definition is unable to properly capture temporal continuity. this led him to give a new definition of it. in the process of doing this, he also ended up using the term "duration" in a different way, as a contrasting term between his theory of time and the physicist's understanding of time based on abstract mathematical continuity. this is fine as long as people are explicit about using their terms in a different way
>>17270
the brain dimensions thing doesn't really imply a nonphysical "dimension", unless you are reifying the structure of the brain like an aristotelian
>>17273
>so your knowledge about someone (obtained truth of x) is as much a part of you as it is of them (truth of x)
i am guessing what you mean by this is an externalist theory of truth. i dont think this is a view that only dualists have though, nor is it a view that all dualists have
i don't think externalism permits the idea that the material and immaterial are completely unrelated. from what you have said, it follows that the knowledge of a thing is as much a part of you as of the thing. but this implies that it is possible that the thing is contained within the mind. the conclusion of this would be something like idealism, not dualism...
note that as i have pointed out earlier, idealism does not mean that there are no external objects btw
>>17365
>like an aristotelian
a hylomorphist to be more precise. of course aristotle is more about there being an immaterial intellect and stuff
>>17080
okay, time to summarize chapter 2. honestly it is a much faster read than chapter 1, but it relies more on technical details that a simple summary would betray. part of the reason it took me time was because of meditating on some of these details (as all of this is going to be basically the foundations of the rest of the book, it makes sense to spend extra time on it), but it is also because i have just been procrastinating/distracted by other things... anyways here is a basic summary of some key points to take from this chapter:
1) negarestani suggests a basic framework for approaching AGI which he calls the AS-AI-TP framework. keep in mind that his model of AI is capable of discursive interaction with other agents. this is stratified between different levels.
>(i) the first level is just basic speech. such a thing is crucial since we need to interact with other agents somehow
>(ii) the second level is dealing with the intersubjective aspect of speech involved in conversation. i personally suspect that grammar might emerge at this stage
>(iii) the final level involves context-sensitive reasoning, and reaching higher levels of semantic complexity (i am guessing what he is hinting at here is functional integration)
one thing i am unsure of is whether stage 2 and stage 3 can actually be thought of as separate stages, because it seems like what we see in stage 3 could naturally emerge from stage 2. such an emergence clearly wouldn't happen with stage 2 from stage 1... the framework by its name also separates out three different projects that are important for these three stages:
>(i) AS which corresponds to the construction of artificial speech synthesis. this one is special because it largely only concerns stage one
>(ii) AI which corresponds to the project of artificial intelligence
>(iii) TP which corresponds to the project of finding the general conditions for the possibility of a general intelligence. negarestani of course sees kant's transcendental psychology as the beginning of such a project
2) he makes an extensive criticism of the methodological foundations of the disconnection thesis. this is basically the idea that future intelligence could diverge so far from our own that we might not even be able to recognize it as intelligent. among other problems he has with this view, i think the most important one to extract is that if such an entity has truly diverged so far from our own intelligence, it is a mystery why we should even consider it intelligent. because of this, negarestani wants to stress the importance of certain necessary functions being implemented by a system (what he calls functional mirroring) over the structural divergences that might occur... this functional mirroring partly arises when we have a determinate conception of how geist's self-transformations should take place generally
3) by bringing up functional mirroring, we bring up the question of what conditions are absolutely necessary for us to call something sapient. negarestani terms soft parochialism the mistake of reifying certain contingent features of an intelligence into necessary ones. the purpose of transcendental psychology is to purify our understanding of general intelligence so that we only include the features that we absolutely need
4) he writes a basic list of the transcendental structures. i already described them here: >>11465...
negarestani also remarks that transcendental structures can also be used to articulate ways in which more complex faculties can be developed (i believe he gestures a little at how in his discussion on chu spaces)
5) based on all of this, negarestani also motivates that we construct a toy model of AGI which he frames as a proper outside view of ourselves. a toy model has a twofold utility. first, it provides something which is simple enough for tinkering. second, it makes explicit meta-theoretical assumptions. this latter point is important because sometimes we might be imposing certain subjective self-valuations of our experiences onto our attempts at describing the capacities we objectively have. the construction of a toy model helps avoid this problem
6) something negarestani furthermore calls for is a synchronization between the concepts that have been produced by transcendental psychologists like kant and hegel, and cognitive science. he notes that in this process of relating these concepts to cognitive science, some of them may turn out untenable. i think there is actually also a flip side to this. by seeing how cognitive science recapitulates ideas in german idealism, we are also able to locate regions where it may recapitulate the same errors as well
>>17439
7) negarestani outlines two languages for the formalization of his toy model. the first is chu spaces, which are basically a language for concurrent computation. he links an interesting article ( https://www.newdualism.org/papers/V.Pratt/ratmech.pdf ) which relates concurrent computation to the mind-body problem. it does this by basically framing mind-body dualism as a duality instead. the scheme is as follows: events correspond to the activities of bodies, while states correspond to mental states. the function of events is to progress a system forward, while the function of a mental state is to keep tabs on previous events. the key idea for pratt is that the interaction between event and state is actually simpler than event-event or state-state interactions. 'a⫤x' basically means that event 'a' impressed on mental state 'x'. meanwhile, 'x⊨a' means that with the mental state 'x' the mind can infer 'a'... transitions from event to event or state to state are much more complicated. they require the rules of left and right residuation. the basic idea of these is that for us to transition from state x to state y, we need to make sure that from y we are able to infer all the same events that have occurred previously as with state x. these residuation rules seem to be important for making sure that there is proper concurrency in the progression of states... negarestani also seems to be hinting at the idea that with the help of chu transforms we can see how chu spaces may accommodate additional chu spaces in order to model more complex forms of interaction...
the benefits of chu spaces:
(i) they provide a framework that accommodates the kantian distinction between "sensings" (which he corresponds to causal relations) and "thinkings" (which he corresponds to norms)
(ii) since state-event, event-event, and state-state interactions are all treated as different forms of computation, we are able to be more fine-grained in our analysis of the general form of thinking beyond just calling it something like "pattern-recognition" or "information processing"
(iii) in doing what was just mentioned, we also avoid shallow functionalism
(iv) they allow us to model behaviours as concurrent interactions
the second language he wants to use is that of virtual machine functionalism (you can read about it here: https://www.cs.bham.ac.uk/research/projects/cogaff/sloman-chrisley-jcs03.pdf ). the basic idea here is to introduce virtual machines into our understanding of mind. these basically correspond to levels of description that are beyond that of physics. VMF is distinguished from a view called atomic functionalism in that the latter just treats the mind as a simple input-output machine. meanwhile, in VMF, we can talk about various interlocking virtual machines that operate at different functional hierarchies... the differentiation between different scales and descriptive levels is really the main benefit of this approach. it allows us to avoid an ontology that is either purely top-down or bottom-up. i think this is actually a really important point. another important point here is that by looking at the interaction between different scales we are able to describe important processes such as the extraction of perceptual invariants... VMF seems immediately fruitful to some of the AGI modelling concerns... Chu spaces less so, though i have reasons to still return to this idea with a closer look
>>17440
>VMF seems immediately fruitful to some of the AGI modelling concerns... Chu spaces less so, though i have reasons to still return to this idea with a closer look
Chu spaces also seem interesting. I hadn't heard of them before, but they seem like a good way to represent objects in spaces. For example, objects in some input image can be represented by a Chu space. So to look for an object in some image, instead of calculating p(object | image) for each location, you would transform the image into K and check K(object, location). I guess the benefit is that everything about a Chu space can be represented by a single function, K, which gives you a unified way to handle a lot of different kinds of structures and the interactions between them. I think most observations can be treated as an object in a space (e.g., when you're observing the color of a shirt, you're looking at a shirt in a colorspace), so it seems very general. Chu spaces also seem closely related to tensors, so I would guess that there's an easy path to a lot of numerical machinery for anything written as a Chu space.
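Here's roughly that idea as a toy sketch (python; every concrete name is made up for illustration): K is just a lookup table, and the usual adjointness condition is what makes a pair of maps a Chu transform between two such tables.
[code]
# toy chu space: a single table K : points x states -> {0, 1}. here points
# are candidate objects and states are image locations, so K(object, location)
# answers "is this object at this location?". names are made up.
A = ["cup", "book"]                      # points of the first space
X = ["table", "shelf"]                   # states of the first space
r = {("cup", "table"): 1, ("cup", "shelf"): 0,
     ("book", "table"): 0, ("book", "shelf"): 1}

B = ["vessel", "text"]                   # points of a second, coarser space
Y = ["low", "high"]                      # its states
s = {("vessel", "low"): 1, ("vessel", "high"): 0,
     ("text", "low"): 0, ("text", "high"): 1}

# a chu transform (A, r, X) -> (B, s, Y) is a pair of maps f : A -> B and
# g : Y -> X (note g runs backwards) with s(f(a), y) == r(a, g(y)) for all a, y
f = {"cup": "vessel", "book": "text"}
g = {"low": "table", "high": "shelf"}

def is_chu_transform(A, Y, r, s, f, g):
    return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

print(is_chu_transform(A, Y, r, s, f, g))  # True for this toy pair
[/code]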
>>17442
>Chu spaces also seem closely related to tensors, so I would guess that there's an easy path to a lot of numerical machinery for anything written as a Chu space
ah, that might be true. that is a benefit of thinking about mathematical structures as computations/behaviours: they become easier to systematize
>>17440
onto chapter 3 (i actually read both this and chapter 4 so will try to summarize both today). there is more philosophically tricky stuff here that i take issue with, but i am not sure whether i should write about it or not. anyways:
1) the main goal of this chapter is to articulate the features that are needed for sentience (something shared by basically all animals), and then to articulate the contrast between this and sapience (more pertaining to rational capacities that seem to be unique to humans). negarestani articulates the features of sentience by thinking about a hypothetical sentient automaton with the following features:
>(i) self-maintenance goals (so something like omohundro drives are likely involved here)
>(ii) the capacity to follow these goals. this can be thought of in terms of the multiple layers of interaction already discussed in chapter 2. he also talks about a "global workspace" which pertains to a property of the system where information in particular subsystems is globally accessible to other subsystems. one way we can think about this is as a virtual machine as opposed to some physically composed central processing unit. what we really care about is the functional capacity for global access. this points to a subtle difference from the "global workspace" articulated in global workspace theory (see the sketch at the end of this post)
>(iii) there should be sensory integration of different sensory modalities. for instance, instead of just perceiving sight, also perceiving sight and sound integrated into a more unified experience. the utility of this is that it reduces the degree of ambiguity present in our perceptions
>(iv) a (re)constructive memory which synthesizes external representations with an internal model of the world and also a predicted internal model of what will happen in the future (based on the agent's actions)
2) the author goes into a rather subtle point about some of the conceptual difficulties with his approach. in trying to conceptualize this automaton, we will try to give a story of what it sees and contrast it with a story of what the sapience sees. a major caveat here is that in articulating the story of the automaton we will still reference concepts that the sentience would not strictly recognize (e.g. concepts of redness... really what this sentience should see, to negarestani, would just consist in tropes). negarestani still thinks that there is a virtuous circularity here, in so far as our concepts do make explicit certain structures in "unthinking" cognition (a claim that would likely be supported by cognitive linguists). as such we might not just be chasing around a discursive illusion. he thinks there is a chain of as-ifs between sentience and sapience. when i first read this claim, i got a bit confused, as his whole functionalism is already taken as an as-if. i think the as-if structure here is more concerning conceptual incommensurability being partially elided for the sake of exposition...
perhaps this caveat may not need to be made so explicit if he started with an explanation that what we really mean are tropes (thus, for instance, we shall talk about the trope 'schmed' as opposed to red), though this might be a little bit annoying to articulate, if it is possible to articulate at all
3) the main reason why negarestani takes so much care to point out this caveat and explain why it is not so problematic is in order to respond to the "greedy skeptic". he has in mind here the philosopher bakker, who is behind blind brain theory amongst other things. their main strategy is to try and claim that our understanding of intentionality is produced by a cognitive blindness/discursive illusion. a major problem with this approach is that it is so greedy that it ends up undermining its own foundations, for the blind brain theorist must accept that they themselves are blind. another issue negarestani has with the greedy skeptic is that it elides the distinction between the space of reasons and the space of causes
4) finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated. now, i personally disagree that this distinction is one that actually separates sentience from sapience. just from the basic features, especially feature 4, there seems to be some conceptual mediation entailed there. i think ecological psychology would likely disagree with the idea that sentiences don't perceive concepts as well... it doesn't make for the best theory of perception. negarestani later on even caves a bit and accepts the fact that our spatial perspectives are already an example of seeing 2... for more on this debate, check out the article 'Articulating Animals: Animals and Implicit Inferences in Brandom's Work'. i think the metaphysics of tropes might be related to what is at stake as well... a lot of interesting stuff to think about
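as promised above, a tiny sketch of the "global access" reading of feature (ii) (python, just a plain blackboard pattern; the module names are made up, and this is not negarestani's formalism):
[code]
# subsystems publish to a shared workspace that any other subsystem can read;
# what matters is the functional capacity for global access, not any
# physically central processor. purely illustrative.
class GlobalWorkspace:
    def __init__(self):
        self._board = {}

    def broadcast(self, source, content):
        self._board[source] = content  # globally readable afterwards

    def read(self):
        return dict(self._board)

ws = GlobalWorkspace()
ws.broadcast("vision", {"item": "red blob", "location": "left"})
ws.broadcast("audio", {"event": "click", "direction": "left"})
# any module, e.g. a motor subsystem, can now integrate both signals
print(ws.read())
[/code]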
>>17520
note: negarestani also says that seeing 1 is a seeing of, while seeing 2 is a seeing as. anyways...
5) now, we start to think about the chain of as-ifs. negarestani claims not only that seeing 1 isn't conceptual, but furthermore that we can't actually see individuated objects with seeing 1 either (so for instance, we don't see cups but instead some manifold of sensations in a cup-shaped spatiotemporal boundary)... needless to say i disagree that sentiences can't perceive objects as well... anyways, something that he likes about kant is that he articulates perception as involving multiple layers. thus perception is about the integration of a collection of algorithms. kant talks about three syntheses that would all be requisite for artificial general intelligence:
>(i) the synthesis of apprehension, which systematizes sense impressions so that they have a spatial and temporal location. i think what may be important here might be the idea of simultaneity. like, whenever we perceive something, it appears as though all of the sensations we see are located within a single moment
>(ii) the synthesis of reproduction, which relates appearances temporally and constructs stable representations of items. note here that items are different than objects. items are what negarestani wants to understand as the things that we see in seeing 1. i think this idea is somewhat strange however, since if we start talking about systematically interrelated/stable items, they are not just disconnected tropes (but perhaps something more sophisticated). perhaps he is fine with this as long as the connection here is synthetic/normative. i need to study sellars's theory of tropes closer here...
>(iii) the usage of concepts to synthesize representations (seeing of) into objects
note here that (i) and (ii) are together the "figurative" part of our perception. they provide us with some systematicity in the raw materials that will be brought together to form the perception of objects
6) negarestani furthermore thinks that the mesoscopic level of analysis (not just top-down or bottom-up) already talked about with chu spaces and virtual machine functionalism can be used to understand these syntheses. this idea is also connected to putnam's liberal functionalism, which he claims entails a picture of organisms as systems only in so far as they are engaged in interactions with the environment. i may say further that there are different, possibly nested systems of interaction
7) he tries connecting predictive processing to the figurative synthesis mentioned above as well... an important part of this lies in priors, which are probabilistic constraints that permit a system to generate hypotheses. not only are there normal priors, but also hyperpriors, which are priors upon priors. these can be used to help differentiate between different levels of hypothesis. there is an analogy made here between priors and kant's forms of intuition. more can be read about this in the article titled 'The Predictive Processing Paradigm Has Roots in Kant'. something else about predictive processing is that incoming inputs are always related to a pre-existing representational repertoire. we can notice a parallel with negarestani's earlier talk about a (re)constructive memory... (a toy numerical illustration of the prior/prediction-error idea follows below)
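as flagged above, a minimal numerical cartoon of the prior talk (python; this is just the standard one-step gaussian update, not negarestani's or friston's actual formalism): beliefs are corrected by precision-weighted prediction errors, so a strong prior resists a surprising input while a weak one is dominated by it.
[code]
# one-step gaussian belief update: a prior prediction is corrected by a
# precision-weighted prediction error. a cartoon of predictive processing.
def update(prior_mean, prior_var, obs, obs_var):
    error = obs - prior_mean                  # prediction error
    gain = prior_var / (prior_var + obs_var)  # precision weighting
    return prior_mean + gain * error, (1 - gain) * prior_var

# a strong prior (small variance) barely moves even for a surprising input
print(update(prior_mean=0.0, prior_var=0.1, obs=5.0, obs_var=1.0))
# a weak prior (large variance) is dominated by the observation
print(update(prior_mean=0.0, prior_var=10.0, obs=5.0, obs_var=1.0))
[/code]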
there are also a lot of articles given on how to formalize this stuff that might be interesting further reading:
>'Evolutive Systems: Hierarchy, Emergence, Cognition'
>'A New Foundation for Representation in Cognitive and Brain Science'
>'Colimits in Memory: Category Theory and Neural Systems'
8) negarestani goes on to talk about a particular way of formalizing how predictive models are applied. this is by means of colimits, which is rather technical. basically what is going on here is that you can use a colimit diagram to model the integration of neuron clusters (these clusters are interpreted as categories where i am guessing, for instance, that the objects are neurons and the morphisms are synaptic connections)... we can also define a macroscopic category Ment which can be defined as the colimit of neuron clusters which exist at the highest level of integration
9) there are however two shortcomings pointed out for this colimit methodology:
>(i) it is just focused on the construction of higher levels out of modules. it can't deal with synaptic pruning, which negarestani thinks is important for unlearning
>(ii) category theory relies on commutativity and works best with symmetry and synchronous processing. these are constraints that not all physical processes or even neural tasks conform to
negarestani thinks limits are important things to consider so we know when it is appropriate to apply a model. the predictive processing paradigm is also limited to him, as it is unable to ground the plurality of methods and the semantic dimension behind the construction of scientific theories. this dimension is not merely grounded in our basic intuitions, for otherwise we would be unable to break out of them. furthermore, hyperpriors need not be assumed to be properties of objective reality, for supposing such a thing would require us to explain why modern physics often goes against our intuitions
>>17521 now we enter territory that i already foreshadowed earlier:
10) even this merely sentient awareness needs to have a 'perspectival stance' in so far as it must differentially respond to an environment in a way that differentiates it from its prey (otherwise it might consume itself or something strange like that). the range of spatial relations the predator can maintain, for negarestani, is limited. it considers spatial relationships between itself and its goal ('endocentric' view), and to a more limited extent the relation between items in the environment ('exocentric' view). as said before, this endocentric view is acknowledged by negarestani to already involve a spatial sort of seeing as... needless to say i find this development rather upsetting, as negarestani seems to be entering into a sort of classificatory contradiction... the author also mentions that the automaton should furthermore (on top of/connected to this spatial perspective) have a naive physics of space. he references here claude vandeloise's work 'spatial prepositions' that contains work on this. ultimately this naive physics serves as a cognitive scaffold for more advanced spatialized concepts (again we can see some possible connections to cognitive linguistics)
11) there is also a temporal perspective which relies on the following successive progression of capacities:
>(i) the capacity to synthesize sensations together into one simultaneous moment
>(ii) bringing these moments of simultaneity together into a successive sequence of states
>(iii) the ability to have awareness of these successions of states as successions (more precisely, the capacity to make them explicit)
12) to articulate time awareness we must start with a basic distinction between two ways of intuiting items:
>(i) impression (awareness of an item that is copresent with the system... more related to something we just see)
>(ii) reproduction (reproducing an impression in the absence of the item... more related to (re)constructive memory)
13) the approach above is not sufficient for time awareness, for we must functionally differentiate past states from present states (just as we had to differentiate the automaton from the environment). to do this we must have meta-representation capacities (i.e. the capacity to represent a representation). it should be noted however that to negarestani this time awareness is not yet time consciousness, as the automaton is not yet aware of the succession as a succession, and it thus doesn't have a self that can be furthermore "mobilized". this idea seems to connect to the idea of self-consciousness articulated in the phenomenology of spirit (which i also think is really concerned with "self-governance"). the stuff about meta-awareness is sort of weird to think about because negarestani thinks the meta level also has impressions versus reproductions. thus what i think is going on here is a genuine meta-system transition
14) this stuff about meta-awareness is rather technical, and he makes comments about category theory and a possibly-infinite hierarchy of meta-awarenesses. what is really important is that the meta-awareness helps us functionally differentiate between impressions and reproductions, and thus the system can differentiate between 'is' and 'was'. something negarestani also introduces is the element of anticipation. if there is a functional differentiation between future predictions and the other two types of intuitions, we then have 'later'.
if impression, reproduction, and anticipation are labelled be1, be2, and be3 respectively, we can draw a larger diagram of the variety of first order meta-awarenesses (second pic rel)
15) negarestani claims that with perspectival intelligence we now have the necessary tools to start formulating an intelligence capable of creating new abilities out of old ones with the help of new encounters with the world by means of reasoning, and of 'systematically' responding to impressions. i think this is still a bit of a strange distinction between sentience and sapience, but what he means by systematicity here is more specific. in particular, he thinks it involves two aspects:
>(i) the ability to make truth-apt judgements about contents of experience
>(ii) the ability to choose one story of the world as opposed to another
it may be debatable whether sentiences can actually be capable of such things... though what is an important piece in the puzzle is the element of socially mediated rationality. he also points out how time is an important structure to understand due to its place in transcendental psychology and transcendental logic... to sum up... i have plenty of problems here which are symptoms of a larger pathology of inferentialism in so far as it often differentiates sentience from sapience in a rather unnatural manner
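a minimal sketch of the functional differentiation being described, in python. this is my construal only: the point it tries to isolate is that the meta-level never inspects content, it just tracks which faculty sourced an awareness, and 'is'/'was'/'later' fall out of that sourcing alone

    TEMPORAL_LABEL = {"be1": "is",     # impression: copresent with the system
                      "be2": "was",    # reproduction: re-evoked in the item's absence
                      "be3": "later"}  # anticipation: predicted, not yet encountered

    class MetaAwareness:
        def __init__(self):
            self.log = []              # meta-representations: (source, content) pairs
        def register(self, source, content):
            self.log.append((source, content))
            return content + " " + TEMPORAL_LABEL[source]

    m = MetaAwareness()
    print(m.register("be1", "rain"))   # rain is
    print(m.register("be2", "rain"))   # rain was
    print(m.register("be3", "rain"))   # rain later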
>>17522 next we talk about chapter 4. i think this one is a bit weird to summarize because he goes into a lot of examples and details which are sort of unnecessary for a concise summary of the text. i also feel as though this chapter's direction wasn't the best... he spends a lot of time arguing against time asymmetry + the flow of time and how these are just byproducts of our subjective biases, but then he goes on to talk about hegel's absolute knowing as taking a sort of atemporal vantage point. i don't think the connection between these two ideas is very deep. there is also stuff about metaphysics (which he basically ends up equating with hegel's dialectical logic) which could easily bog the reader down. the basic message of this chapter is just to give an example of a basic feature of negarestani's project put to practice. that feature is the fact that we need to progressively refine our understanding of intelligence and strip off all these contingent categories... near the end of the chapter he articulates three interconnected projects that sprout from what has been discussed:
>(1) how to conceptualize entities that might have different models of time than our own
>(2) is there a way to possibly model our own temporal perspective (like with a physical model) without trying to ascribe features such as the passage of time to all of reality?
>(3) what could it mean both practically and theoretically for us to think about our own cognition as possessing a time consciousness that is not necessarily directional or dynamic? what could be gained as well from considering alternative time models or even an atemporal one?
the last point is what negarestani sees as related to taking a "view from nowhere". he states that in later chapters, the time-generality he is trying to articulate will correspond to plato's ideas (Knowledge, Truth, Beauty, and the Good). he wants to enter into a viewpoint that is time-general step by step. i personally believe negarestani's understanding of Eternity might be a little bit flawed, because i do not believe it is related to time-generality per se. with that said, i do believe that the passages articulating his vision are very poetic/mystical and i urge people to read them for themselves (they are on pages 246-248)! there is an almost meditative practice involved here where we slowly enter into a timeless form of meta-consciousness. i am starting to understand why he considers himself a sort of platonist. maybe he is more closely allied with a more phenomenological understanding of plato, though by turning to functional analysis, we seem to have more structural generality accessible to us
>>17523 weird network traffic lately... anyways it's chapter 5 lol
1) as said before, negarestani thinks that we are not yet able to make veridical statements and judgements of one story as opposed to another. we also haven't been able to achieve a proper time consciousness (awarenesses are not distinguished as being particularly its own, this being associated with having a basic self-conception which serves as a sort of tool for modelling)... negarestani's basic solution to this comes in two steps:
>(1) embed our automaton into a larger multi-agent system
>(2) give it a medium of communication (in the text, negarestani uses speech as an example)
he further details why this solution is appropriate: really what we want is to instantiate an apperceptive 'I'. this is an 'I' that not only gets attached to all representations (I think X, I think Y, I think Z) but also persists as a stable self-identity over time (I think [X+Y+Z])... to ground this self, we really need to establish a self-relation, but this takes us back to hegel's account of self-consciousness as interlocked with a larger multi-agent system
2) our starting point in this multi-agent system is a CHILD (Concept Having Intelligence of Low Degree). this is a system mainly of habits and regularities. at the same time, thanks to the systematic (not per se veridical) manner it interacts with the environment, this intelligence features a system of transitions and obstructions to transitions from one awareness to another. this system is analogically posited as being primordial material inferences which have various properties. an important one is context sensitivity, which is described using the language of linear logic. this logic views formulas as resources that may only be consumed once
the bit about all of this being analogically posited is important as negarestani does not think this intelligence is actually using modal vocabularies. as such it isn't able to actually conceive of causality either... what negarestani does not really do is tell us what is a salient distinction between observational/behavioural vocabularies and modal vocabularies. to understand, we really need to dig into what rosenberg has to say
3) in rosenberg's 'the thinking self', the CHILD is at the start "idealist" in the sense that it just treats its meta-awarenesses as identical with its world picture. (this is something that negarestani points out, but only treats as a separate issue from the problem of modal vocabularies in his exposition). this is problematic because for rosenberg, the distinction between mere regularities and actual laws comes down to the difference between appearance and reality. in order to do this we need to somehow decouple our world picture from our meta-awareness, which will ultimately involve the constitution of an aperspectival world picture
4) as an aside, when we read rosenberg, he seems to hint at an influence from heidegger which explains his own stand-offish relation to the idea that sentiences have conceptual capacities. in heidegger's system, the more primordial form of consciousness is readiness-to-hand, which rosenberg understands as involving proto-intentions that are only understood relative to our understanding of what an animal is currently "up to"... i am not personally so sure that readiness-to-hand is actually empty of concepts... rather it is more just not involved in a certain epistemic relation
5) back to negarestani...
for the CHILD to exit this naive idealist position, it must be able to have an awareness of the form 'I think A' and not just an awareness of the form 'A'. this requires the unity of apperception. apparently there was a part in chapter 3 which i think i skipped over that was actually important. this mentioned the fact that meta-awarenesses are really a "web" of equivalence relationships over awarenesses that have occurred over time. the 'I' in 'I think X' is formally identical to the 'I' in 'I think [X+Y+Z]'. again, how this gets established depends upon self-relation
6) an important element in negarestani's starting point is that the parental sounds are in continuity with causal regularities perceived by the automaton to be interesting. moreover, in this "space of shared recognition" between the CHILD and its parents there are also involved the CHILD's awarenesses, meta-awarenesses and transitions between awarenesses modelled by the parents (i.e. the automaton's behaviours are being modelled by its parents)
7) negarestani goes on to articulate a series of decoherences and recoherences. what preconfigures this series is that the reports from the parents are habitually recognized by the automaton and mapped onto its meta-awarenesses. these reports are consequently taken as meta-awarenesses which are labelled by their source. labelled, because these meta-awarenesses are received from its parents as contrastable reports
>first decoherence: what we often end up with are families of incompatible perspectives. what is noteworthy is how certain reports don't seem compatible with what the CHILD is immediately seeing. negarestani notes that this results in a world that is "proto-inferentially multi-perspectival"
>first recoherence: the parental reports are now just taken as seemings relative to each parent, while what the automaton sees is taken as objective. i personally think that this idea of parental reports being treated as mere seemings is an abstraction that can't actually have any utility. "seeming" means nothing without any potential for objectivity
>second decoherence: the problem now is that the parental reports should be taken as being able to be objective, even if contrastive with the automaton's own awareness
>second recoherence: our CHILD must construct a world picture out of different perspectives now taken as partial world pictures
i think that with the first and second recoherences, there is involved the usage of contiguity with certain agents as a functionally salient decision mechanism
>>17567 8) what we have done in converting reports into partial world pictures is that we have now treated them as "logical forms" which can structure the CHILD's representations. these are used by the objective unity of apperception to synthesize objects which are equivalent to other objects in so far as they conform to the same rule. objectivity involves this and also veridicality. i suspect that this is what is behind negarestani's mysterious claim that mere sentiences do not perceive objects. to him, the idea of an object seems to be linked to veridicality. thus if sentiences do not differentiate between seeming and being, they cannot perceive objects
9) negarestani thinks that by the conversion to logical form and the dissociation of language from world, we are capable of conceiving of new world pictures. i am guessing this point is an elaboration on the solution he has to the problem he had with predictive processing proponents trying to use their framework to explain everything including scientific frameworks
10) the process above and education in general has as its pre-condition "practical autonomy" which is most generally characterized as the yearning of a CHILD to become a full on sapient agent, and the tendency to make adults recognize such a yearning. so far we have had a multi-agent picture where each agent models its world picture as being composed of the partial world pictures of other agents (recognized as belonging to those other agents) while excluding mere seemings recognized as belonging to other agents. in this process we have also recognized other agents as subjects. what we have elaborated in all of this is a space of recognitions. within this space we can individuate the formal self as that which owns certain apprehendings and not others which can be owned by other selves
11) our movement from the space of recognitions to the formal self has as a condition here that the CHILD recognizes adults as subjects which are important for its self-actualization. this recognition is ultimately a manifestation of the CHILD's practical autonomy (more precisely this seems to be the process whereby it is made actual)
12) autonomy arises from a self-differentiation of the subject. this process permits it to recognize a world. furthermore, it opens up the automaton to disintegration and reintegration (what we have looked at before were examples of this). through the further recognition of the world afforded by this formal autonomy, the consciousness drifts, which eventually permits it to consider things which are impersonal. this is how self-consciousness (think of the self-relation we have discussed before) comes into the picture. it is only through all of this that the CHILD becomes a "thinking will". negarestani's conception of thinking seems to be related to the process of recognizing what one is not and acting upon this recognition (how precisely the latter happens i am not completely sure of)
13) education involves the cultivation of autonomy:
>(1) increasing the range of normative capacities
>(2) removing artificial limitations placed on the child about what it should become. i think this is more generally the process of learning and unlearning ought-to-dos (a process already mentioned by negarestani)
14) education also has various structural requirements:
>(1) vertical and horizontal modularity (former is hierarchical, latter is flat).
these two kinds have their advantages and trade-offs
>(2) should be able to involve the integration of various learning processes
>(3) exploiting inductive biases (think prior knowledge for instance here) to help fashion more complex abilities
15) education is generally a process that leads to the expansion of an agent's range of practical capacities which are intelligible to it. this end product where boundaries on our abilities are pushed back is what negarestani calls "practical freedom"
16) in talking about education, negarestani also talks about two classes of capacities: sf and sm abilities. the former are formal while the latter involve higher levels of cognition. sf is syntactic while sm is semantic. he actually provides a whole catalogue of sf and sm abilities that i do not feel like summarizing so i will just provide a pdf excerpt... in this cataloguing, we see various structural hierarchies of abilities. for negarestani, what these structural hierarchies also indicate is the fact that different strategies need to be used in pedagogy based on the child's current experience and conceptual repertoire (e.g. breaking down a task into simpler ones, combining simpler tasks to make a more complex one)
17) the development of the automaton from being a child to one that has the capacity to rethink its place in the world (i presume that this is related to revising one's oughts and ought-nots) ultimately requires a back and forth with adults
>>17568 chapter 6 now!
1) in the previous chapter we already presupposed that our automaton had language. now it is time to explain how language emerges, since it is an important pre-condition for general intelligence. we will reenact the development of language because it illustrates how language is an important ingredient in the development of more complex behaviours. the complexification of language and general intelligence come hand in hand
2) we can understand our main goal as really involving modulation of the collection of variables agents in our system make use of in interacting with the environment. if this is the case, then really our agents are interlocked with the environment as well
3) for the sake of our reenactment, we will assume now that all the automata in our system are now CHILDs. an important distinction needs to be made between pictures (sign-design) and conceptual objects (symbol-design). the former are representations of regularities in the environment (this forms a second order isomorphism: on the first level we have signs that have some sort of resemblance in causal structure, and then we associate these signs together), while the latter are able to enter into combinatorial relations between symbols. while pictures are basically simple representations, symbols are primarily best understood by how they combine with other symbols (they do still have reference but it is rather secondary). they do not simply represent external objects but also each other. negarestani really stresses that while symbols are dependent on signs, they are not reducible to them. signs belong to the real order (which includes causal regularities and wiring diagrams... largely causal) while symbols belong to the logical order that is autonomous from the real one (this negarestani associates, following sellars, with thinking and intentionality)
4) negarestani goes on to elaborate on the basic pre-conditions for a sign... in short we need the following:
>(1) causal regularities need to be salient enough to catch the attention of an automaton so it may produce a sign of them
>(2) we need enough complexity in our automaton so it is able to recognize and make signs of these regularities
from there we can sketch a basic example of a sign. let us suppose there are events Ei and Ej. their co-occurrence could be denoted by Ei-Ej. we would want to register this relationship in our wiring diagram by some isomorphic structure, for instance Ei*-Ej*. what we have here is an "icon". this is a sign that is associated with its referent by resemblance. really what matters to negarestani here is that there is some stimuli discrimination going on
let us say that when Ei*-Ej* occurs, the automaton makes a sound. if this sound is heard enough times by other automata, they can start to reproduce Ei*-Ej*. what we have here now is "just" an indexical sign, as it merely connects two events by way of statistical regularity. notice how negarestani is constructing a sort of hierarchy of signs (i.e. symbol > index > icon) where the higher rungs are to some extent built on top of the lower ones
this process where we transmit and receive indexical signs will be called communication (note negarestani does not simply think language is about communication but also interaction)
5) negarestani goes on to criticize the idea that we could just stick to picturing and never move to symbols... his first objection is that for every collection of signs, we would need signs representing their relationships. this would lead to a regress.
this suggests we can't have an exhaustive picturing of the world. his second objection is that even if we could produce a complete picture of the world, the regress would produce exploding computational costs
6) symbols, unlike signs, stand in one-to-many and many-to-many relations. negarestani seems to imply that somehow this provides us a real solution to the regress problem. another benefit of symbols is that they let us make explicit and develop the set of recognized relationships between patterns. there are also more computational reasons for introducing symbols. if we think about induction, there is of course solomonoff's universal induction which lets us make the most parsimonious hypotheses. the problem is that this requires an infinite time limit. as such, compression is not enough. we need to be selective about what regularities we single out, and after that to explore the relationships between these regularities. i would like to point out that ben goertzel (one of the most well known agi researchers) also considers this problem and has a similar solution in his own system (see for instance here: https://yewtu.be/watch?v=vA0Et1Mvrbk)
ultimately, to achieve this (and more generally in order to achieve general intelligence) we need automata capable of material inferences that may ultimately be made explicit in formal inferences. what is emphasized here is the importance of know-how
7) negarestani goes on to outline the main pre-conditions for symbols:
>(1) discrete signs (as opposed to continuous)
>(2) a combinatorial structure that signs can be put into
8) discreteness is important because without it, we have symbols with fuzzy boundaries (not sure how important this is tbh) which are also difficult to combine together. negarestani thinks our automata can invent discrete signs by means of a self-organizing process in which high dimensional data is projected into discretized and/or lower dimensional data so that we converge to the use of discretized signs (he references ai for this)
this discretization permits us to combine our phonemes together to produce more units of meaning. moreover, we have ultimately permitted symbols to enter into manipulable combinatorial relations whereby more complex syntactic structures (thus also encoding relationships between regularities) can be generated
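as a hedged illustration of point 8, here is standard vector quantization (k-means) in python. this is my choice of method, not negarestani's own formalism: continuous "phonations" get projected onto a small codebook, so the automata converge on a finite repertoire of discrete signs that can then enter combinatorial relations

    import random
    random.seed(0)

    def nearest(p, centers):
        return min(range(len(centers)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))

    def kmeans(points, k, steps=20):
        centers = random.sample(points, k)
        for _ in range(steps):
            buckets = [[] for _ in range(k)]
            for p in points:
                buckets[nearest(p, centers)].append(p)
            centers = [tuple(sum(coord) / len(b) for coord in zip(*b)) if b else centers[j]
                       for j, b in enumerate(buckets)]
        return centers

    # noisy continuous "phonations" clustering around two proto-phonemes
    signals = [(random.gauss(0, .1), random.gauss(0, .1)) for _ in range(50)] + \
              [(random.gauss(1, .1), random.gauss(1, .1)) for _ in range(50)]
    codebook = kmeans(signals, 2)              # the emergent discrete sign repertoire
    print(nearest((0.05, -0.02), codebook))    # one sign
    print(nearest((0.90, 1.10), codebook))     # the other sign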
>>17593 9) changes in the structures of language come hand in hand with the development of the automaton's ability to model and communicate more complex structures. ultimately, negarestani thinks this exemplifies how language is the dasein of geist (~ a sort of medium that sustains its actualization?). there are two general ways in which phonemes may combine:
>(1) iteration: elements can be repeated as much as one wants (an example negarestani gives here is "chop garlic into paste" where the chopping operation is something that can be done as much as one likes basically)
>(2) recursion: elements depend upon the occurrence of past elements (e.g. "cut the pie into 8 pieces", we should only cut in half 3 times as each step depends on the previous ones)
negarestani will represent iteration using simple concatenation of tokens (e.g. a, ab, abc, etc given the alphabet {a,b,c}). recursion meanwhile can be written using square brackets to indicate embeddings/dependency (e.g. [a], [a[b]], [a[b[c]]], etc). first pic-rel shows an example of this scheme (a small code sketch of the contrast is also given at the end of this post). iteration and recursion form a context-free grammar. in it, thematic roles are determined by means of order + dependency which can be used to disambiguate things. even with this, i do not think context-free grammar on its own has enough structure to talk about semantic roles... perhaps these roles are provided in the context of material inferences. i did a little bit of research on how semantic roles are indicated in generative grammar and i found two strategies:
>(1) semantic parsing: https://www.cs.upc.edu/~ageno/anlp/semanticParsing.pdf
>(2) context-sensitive grammar: https://www.academia.edu/15518963/Sentence_Representation_in_Context_Sensitive_Grammars
10) millikan (a "right-wing sellarsian") thinks that we are not assuming that the structure of the regularities in the real world is already given to minds through encoding in wiring diagrams. negarestani thinks this is incorrect and furthermore a recapitulation of the myth of the given. in contrast, he thinks that it is only with symbols that we can actually seriously think about the structure of the world. we can understand the difference between signs and symbols by comparing it to the difference between a game and its metagame. the game itself consists of pieces placed in different relations together (our syntactically structured reports on regularities) while the metagame articulates rules for playing the game (corresponding to material inferences). in chess the game reports pieces and positions on the board. the metagame talks about rules on how to set up and play the pieces
11) with symbols our automaton has access to "symbolic vocabularies". from what i can gather, these are vocabularies that conform to a particular syntactical structure recognizable to the automaton. negarestani models these structures as finite state machines
12) negarestani points out that what we have so far with syntax is not yet enough for full-fledged language competence, as we do not yet have practical mastery over inferential roles. the activity of thinking, to negarestani, requires this. what we need ultimately is linguistic interaction through which automata can master linguistic practices and slowly generate more semantically complex capacities.
negarestani talks about how the increase in the complexity of thinking requires the complexification of concepts. although negarestani does not think semantics reduces down to syntax in a simple manner, he does think that it is reducible in the right circumstance (viz. in the context of interaction). moreover, within this interactionist context we shall see the increased development of semantic complexity through the medium of syntax. ultimately negarestani thinks that language, as a syntactic structuring apparatus, allows us to capture richer semantic relations and talk about new worlds and regions of reality
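here is the promised sketch of the iteration/recursion contrast from point 9, in python and in my own terms: iterated concatenation stays within what a finite state machine can recognize, while embedded dependencies of the [a[b[c]]] sort need unbounded memory (a stack), which is one way to see why point 11's fsm modelling only covers part of the syntactic story

    def iterate(alphabet):
        # iteration: a, ab, abc, ... each string just extends the last
        return ["".join(alphabet[:i + 1]) for i in range(len(alphabet))]

    def embed(symbols):
        # recursion: [a], [a[b]], [a[b[c]]] ... each element depends on what encloses it
        if not symbols:
            return ""
        return "[" + symbols[0] + embed(symbols[1:]) + "]"

    def well_nested(s):
        depth = 0          # an unbounded counter: exactly the memory an fsm lacks
        for ch in s:
            if ch == "[":
                depth += 1
            elif ch == "]":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    print(iterate("abc"))                          # ['a', 'ab', 'abc']
    print([embed("abc"[:i]) for i in (1, 2, 3)])   # ['[a]', '[a[b]]', '[a[b[c]]]']
    print(well_nested(embed("abc")))               # True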
>>17597 the end of our story is in ch. 7!
1) as negarestani said before, semantics reduces to syntax under the right conditions. he warns us that if we make semantics completely irreducible, we would get an inflated understanding of meaning. he criticizes searle's chinese room thought experiment for assuming this. he does not think the actual computation is occurring in the room but rather in the interaction between the chinese room's operator and the person outside the room. he thinks the right conditions under which semantics reduces to syntax are those given by the inferentialist theory of meaning. he thinks that the most basic manifestation of meaning is in the justified use of concepts in social practices... in particular there is required know-how regarding inferences
an example that negarestani gives is that the belief "this is red" permits the belief "this is coloured" but does not allow "this is green". while this is a good example, i think it misses some important details concerning concepts qua functional classification (sellars's view of concepts). in that, there seem to be other details than the epistemic transitions and obstructions between beliefs. there is also schematic knowledge (e.g. a triangle is a three-sided shape) that has more practical ramifications (e.g. construction of said shape). anyways, what negarestani thinks is involved here is speech acts being commitments that have implications for other commitments. what is needed then is a continually updating context (i think this idea may ultimately connect to dynamic semantics and even a bit to millikan's concept of pushmi-pullyu)
what this view also gives us is an understanding of reason (as a process of making explicit and building upon concepts) as an activity. negarestani thinks this allows us to see it as something algorithmic and realizable through information processing. of course implicit here is the idea that computation is synonymous with 'doing' which might not be true. the interactionist understanding of meaning involves meanings only able to be determined within a game of asking for justification and giving justification. in this process, one does not need to know all the rules beforehand. rather rules emerge over time
2) something negarestani thinks is problematic about the interactionist approach is that it does not elaborate much on why this interaction needs to be social or even what such a thing formally entails. to me his approach is honestly so formal that the social seems almost unnecessary. at the same time it might still have its use
ultimately he thinks interaction can be best formally elaborated in the logical framework of interaction in ludics started by jean-yves girard. what is this interactionist framework? negarestani starts by elaborating on what classical computation involves, viz. "deduction" from some initial conditions. he points out a major problem with this approach: if we know some premises and also know that something else follows from them, then we already know that something else. as such we should know everything knowable in the system, and thus no new information is ever gained. negarestani thinks this problem is symptomatic of ignoring the role the environment plays in letting us understand computation as involving an increase of information. the environment is also in general necessary for making sense of machines as involving input and output
one benefit of interaction games is that they do not need values determining how they should evolve.
rules rather emerge from within interaction itself. moreover, unlike game-theoretic games, interaction games do not need to have payoff functions or winning strategies which are predetermined
in the context of interaction, when we see the input and output of the system we see the following: on the input side the system consumes the resources that the environment produces, while on the output side the converse happens. in this framework, something is computable if the system "wins" against the environment (i.e. it can formulate and execute a strategy of performing a computational task). the interaction with the environment constrains what actions the system performs. there are many variables that can be involved in determining how the game evolves (e.g. whether or not past interactions are preserved and accessible, whether the interaction is synchronous or asynchronous, etc). in the case of classical computation, computability is within the context of a two-step game where there is only input and output
3) i did some extra reading because it was hard to understand from negarestani alone exactly why some of the formalizations were so important. in 'Concurrent Structures in Game Semantics' i think castellan gives us a rather concrete idea of how game semantics works. castellan first gives us a basic example in operational semantics where we are to compute the expression '3+(3+5)'. this can be done in successive steps: 3+(3+5) -> 3+8 -> 11
while this picture is nice, we run into a problem when characterizing expressions with variables, for instance 'x+2'. the solution is to transform 'x' into a request to the environment denoted by q^{+}_{x}. after that we may receive a value from the environment. if for instance we received 2, this may be denoted by 2^{-}. thus we have x+2 -> [] + 2 -> 2+2 -> 4 as an example. the basic idea is to denote sent messages by (+) and received messages by (-). this gives us an alternation between sending and receiving data which looks like a game. in this case, we can describe our dialogue as q^{-}.q^{+}_{x}.2^{-}.4^{+} and generally characterize 'x+2' as:
>[x+2] = {q^{-}.q^{+}_{x}.n^{-}.(n+2)^{+} | n <- N}
so we see we can now characterize expressions by their general dialogue. this is an important point to keep in mind
something negarestani only mentions in passing is how we can build richer computations by the addition or removal of constraints. one such constraint negarestani gives as able to be sublated is the markovian nature of computations. i am sort of disappointed this is only mentioned in passing
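castellan's x+2 example is easy to operationalize; here is a toy python rendering (mine, with hypothetical names): the system receives the question, emits a request for x, receives a value from the environment, and sends the answer, reproducing the trace q^{-}.q^{+}_{x}.n^{-}.(n+2)^{+}

    def eval_x_plus_2(environment):
        trace = [("-", "q")]              # receive the question "what is x+2?"
        trace.append(("+", "q_x"))        # send a request for x to the environment
        n = environment("x")              # the environment produces the resource
        trace.append(("-", n))            # receive n
        trace.append(("+", n + 2))        # send the answer
        return trace

    print(eval_x_plus_2(lambda var: 2))
    # [('-', 'q'), ('+', 'q_x'), ('-', 2), ('+', 4)]  ~  q^{-}.q^{+}_{x}.2^{-}.4^{+}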
>>17653 4) negarestani goes on to elaborate upon the copycat strategy. the basic idea is to consider an agent that plays against 2 players. when it receives a move from one player, it plays it against the other. in the literature, this is a sort of identity map which preserves the inputs and outputs. i think what negarestani finds important here is that it treats any process as though it were in an interaction game with its dual. this demonstrates how games are a generalization of any model of synchronous procedure-following computation
5) negarestani notes that in proof theory, we can understand the meaning of a proposition as the collection of proofs that verify it. i am again going to mention an article written by someone else on this topic. in 'On the Meaning of Logical Rules' girard discusses this topic. what the paper argues is that we really want to turn the question from a proposition's meaning to that of delimiting which collections of proofs are conjunctively sufficient for a statement and testing each of these proofs. for instance, to understand the meaning of 'A∧B' we need to know what proves it. in this case, it is sufficient to have proofs of A and B separately. we just need to then test these proofs
something interesting here can be seen if we interpret X^{t} as 'test X'. then we can see that (A∧B)^{t} = A^{t}∨B^{t}, (A∨B)^{t} = A^{t}∧B^{t}, etc. in general, testing sort of works like negation. this takes us to the concept of "paraproofs". we can also interpret this as an interaction between the person asserting A∧B and another person challenging different subformulae of the expression... we can see here then the resonance with game semantics. i think that in some ways, ludics really radicalizes these ideas by giving formulas addresses (sequences of integers) and looking at how addresses are inscribed in the course of a proof
a quick remark about this article as well would be that really it looks like girard only really cares about the semantics of rules. he does not say much about the semantics of referential terms. this disambiguates negarestani's claims about the reducibility of semantics to syntax under the right conditions, in particular what region of semantics is reducible
6) there are two kinds of computation going on here in meaning-as-proof:
>(1) proof search: self-explanatory. we move upwards as we search for the proofs sufficient for a proposition. this is more or less what the dialogue consists in
>(2) proof normalization: remove useless elements of the proof to extract some invariant. two proofs are equivalent if they have the same normalization
negarestani then goes on to talk about a meaning dispenser out of which semantics (normalized proofs) is dispensed
7) negarestani talks about more things regarding the transition from formal syntax to "concrete syntax". from what i understand, the concrete syntax here is what we see in natural language sentences which involve syntactic elements that depend on previously stated elements; for instance the pronoun "it" may only refer to a single noun once or a few times. we can understand all of this in terms of resource-sensitivity
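the copycat strategy is also simple enough to sketch. this is my own minimal rendering of the folklore idea, not negarestani's formalism: an agent wedged between two games just forwards every move it receives to the other side, acting as an identity map on plays

    def copycat(player_a, player_b, opening_move, rounds=3):
        trace, move = [], opening_move
        for _ in range(rounds):
            reply = player_b(move)    # forward the move from game A into game B
            trace.append((move, reply))
            move = player_a(reply)    # forward B's reply back into game A
        return trace

    # two trivial players that just increment whatever they receive
    print(copycat(lambda m: m + 1, lambda m: m + 1, 0))
    # [(0, 1), (2, 3), (4, 5)]: the agent itself adds nothing, it only relays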
>>17654 8) i did more extra reading to get both some basic idea of what ludics involves, and furthermore some initial idea of how it is practically applied. the first article i looked at was the article titled 'dialogue and interaction: the ludics view' by lecomte and quatrini. the basic idea here is that we can now receive and send topics as data. while previously we were using game semantics to specify requests for variable values to an environment, we now have this machinery also used for dealing with topicalization as well
for instance, when the context we are in is a discussion about holidays, we can start with '⊢ξ' indicating that we have received that context. from there we may try specifying the main focus of the topic to be regarding our holiday in the alps specifically. this can be denoted 'ξ*1⊢' showing that we are sending that intent to move to a subtopic
let's say someone wants to talk about when one's holiday was instead of where. this requires us to change our starting point to involve a combined context of both holiday descriptions (ξ) and date (ρ). we may denote them as subaddresses in a larger context e.g. as τ*0*0 and τ*0*1. from there we can receive this larger context (⊢τ), acknowledge that they can answer some questions and request such questions (τ.0⊢) and finally survey the set of questions that can be asked (⊢τ*0*0, τ*0*1). finally we can answer a question on, for instance, the dates ( ⊢τ*0*1*6⊢ τ*0*0)
generally how i read these proof trees is first of all bottom up (as we are really doing a sort of proof search), and furthermore to read '⊢' as either indicating justification (e.g. n⊢m meaning 'n justifies m') or sending and receiving data ('⊢n' means i have received 'n' and 'n⊢' means i am sending 'n'). we see the clear connection to game semantics here
the next paper i will look at is 'speech acts in ludics' by tronçon and fleury in ludics, dialogue and interaction. this gives a remarkably clear description of the two main rules used in ludics that have been implicitly made use of above:
>(1) positive action, which selects a locus and opens up all the possible sub-loci that proceed from it. sort of like performing an action, asking, or answering (sort of like sending a request in the game semantics we have seen)
>(2) negative action, which corresponds to receiving (or getting ready to receive) a response from our opponent
there is furthermore a daimon rule denoted by a dagger (†) that indicates one of the adversaries has given up and that the proof process has terminated
9) what ludics gives us is a new way of understanding speech acts. negarestani references tronçon and fleury's paper on this topic. in our classical understanding of speech acts there are four main components:
>(1) the intention of the act
>(2) the set of its effects
>(3) pre-requisite conditions
>(4) a body which should realize any action specified in the act
a problem with this scheme is that we do not quite know how speaker intention really figures into the speech act and its ramifications. furthermore, the pre-requisite conditions need not be pre-established in the immediate context of the performance of the speech act. there are other points where required precision is also lacking in this classical scheme
the ludical framework builds on top of this classical scheme by bringing in the interaction between speaker and listener.
this highlights three elements:
>(1) the ability of the speaker to evoke a positive rule and change the context of the interaction
>(2) the situation of interaction itself which involves contextual information and the actions of participants (correlated with negative actions)
>(3) the impact of the interaction, which is seen as a behaviour that always produces the same result given the context at hand
negarestani connects the continual updating of context to brandom's project, which discusses similar ideas
10) with all of this preamble out of the way, negarestani provides us with an example dialogue in 8 acts. the basic idea is the following: A chooses some theme, then B asks A about a particular feature of this theme. from there A can give an answer and the two make judgements about whether or not A's answer was true or false... we can see at the end of this dialogue the relevance of negarestani's diatribe on veridicality in chapter 5
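at the risk of caricature, here is an impressionistic python toy of the ludics vocabulary above; real designs are proof-trees with far more structure, and everything here (the give-up rule, the ramifications) is invented purely to make 'locus', 'positive action', 'negative action' and 'daimon' concrete

    DAIMON = "†"

    def positive(locus, ramification):
        # positive action: open the sub-loci locus*i for each i in the ramification
        return [locus + (i,) for i in ramification]

    def interact(proponent, opponent, locus=()):
        while True:
            offered = positive(locus, proponent(locus))  # proponent opens sub-loci
            choice = opponent(offered)                   # negative action: receive one
            if choice == DAIMON:
                return "converged at " + str(locus)      # daimon: someone gives up
            locus = choice

    # proponent always opens two sub-loci; opponent plays daimon at depth 3
    result = interact(lambda locus: (0, 1),
                      lambda offered: DAIMON if len(offered[0]) >= 3 else offered[0])
    print(result)   # converged at (0, 0)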
>>17655 11) an important thing about formal languages is that they permit us to unbind language from experience and thus unleash the entire expressive richness of these languages. the abilities afforded by natural language are just a subsection of the world-structuring abilities afforded by an artificial general language. the formal dimension of language also allows us to unbind relations from certain contexts and apply them to new ones
12) negarestani goes over the distinction between logic as canon and logic as organon. the former is to only consider logic in its applicability to the concrete elements of experience. the latter meanwhile involves treating logic as related to an unrestricted universe of discourse. kant only wants us to consider logic as canon
negarestani diagnoses kant's metalogical position here as mistakenly understanding logic as organon as making statements about the world without any use of empirical datum. on the contrary, negarestani thinks that logic as organon, as related to an unrestricted universe of discourse, is important since the world's structuration is ontologically prior to the constitution and knowledge of an object
13) for negarestani, true spontaneity and/or formal autonomy comes from the capacity of a machine to follow logical rules. similarly, a mind gains its formal autonomy in the context of the formal dimension of language
14) to have concrete self-consciousness, we must have "semantic self-consciousness" which denotes an agent that, through its development of concepts within a context of interaction, is finally able to grasp its syntactic and semantic structuring abilities conceptually. upon achieving this it can intentionally modify its own world structuring abilities. with language, signs can become symbols, and with this we can start to distinguish between causal statistics and candidates for truth or falsity (note that this was seen in the dialogue in 8 acts). this permits rational suspicion and the expansion of the world of intelligibility. (sapient) intelligence is what makes worlds as opposed to merely inhabiting given worlds
we have here eventually also the ability to integrate various domains of representations into coherent world-stories. lastly, there is also involved the progressive explication of less determinate concepts into more refined ones, and moreover the slow development of our language into a richer and more useful one
15) there is also an interplay between taking oneself to be something (having a particular self-conception) and subscribing to certain oughts on how one should behave (in particular, negarestani thinks the former entails the latter). this idea of norms arising from self-conception gives rise to so-called 'time-general' oughts which are pervasive in all the automaton's activities. these involve ends that can never be exhausted (unlike, for instance, hunger, which can be sated and is aimed at something rather specific), and are moreover non-hypothetical (think of knowledge, which is always a good thing to acquire). examples negarestani gives of such oughts are the Good, Beauty, Justice, etc
this interplay of self-conception and norms furthermore opens them up to an impersonal rationality that can revise their views of themselves. it is precisely this mutability of its ideals that gives rise to negarestani's problems with concerns about existential risk, as they often assume a rather rigid set of followed rules (e.g. in a paperclip maximizer).
eventually as they strive for better self-conceptions that are further removed from the seeming natural order of things, they might think of making something that is better than themselves
>>17656 as i said above, chapter 7 basically concludes the story of our automata. with that said, this is not the end of the book. in chapter 8 he has some metaphilosophical insights that i might as well mention since i have already summarized everything else including part 4...
ultimately negarestani thinks that philosophy is the final manifestation of intelligence. the right location to find philosophy is not a temporal one, but rather a timeless agora within which all philosophers (decomposed into their theoretical, practical, and aesthetic commitments) can engage within an interaction game. this agora, which can also be interpreted as a game of games, is the impersonal form of the Idea (eidos). the Idea is a form that encompasses the entire agora and furthermore subsumes all interactions between the philosophers there. this type of types, for negarestani, is the formal reality of non-being (as opposed to being). it is through the Idea that reality can be distinguished from mere appearances, and thus realism can be rescued
an important distinction negarestani makes is between physis and nomos. physis corresponds to the non-arbitrary choices one has to make if one wants to make something of a particular type. for instance, when we make a house we need a roof, and there are numerous solutions to this requirement of varying adequacy. nomos meanwhile corresponds to mere convention. an example would be if a crafting guild required it by law that houses only be made of wood in order to support certain businesses over others. such a requirement is external to the concept of the house. really, forms correspond to physis rather than nomos. they are what sellars calls objects-of-striving
the primary datum of philosophy is the possibility of thinking. what this consists in are normative commitments that can serve as theoretical and practical realizabilities. the important part here is that the possibility of thinking is not some fixed datum that is immediately given to us. rather it is a truth candidate that we can vary (and indeed negarestani thinks it shall vary as we unfurl the ramifications of our commitments and consequently modify our self-conceptions). this makes way for expanding the sphere of what is intelligible to us
in fact, not only is philosophy the ultimate manifestation of general intelligence, something is not intelligent if it does not pursue the better. the better here is understood as the expansion of what is intelligible, and furthermore the realization of agents that have a wider range of intelligibilities they are capable of accessing. he describes a philosophical striving that involves the expansion of what can be realized for thought in the pursuit of the good life (this good life being related to intelligence's evolving self-conception). following this line of thought he describes the agathosic test. instead of asking whether an automaton can solve the frame problem or pass the turing test, the real question is whether or not it can make something better than itself
negarestani introduces to us plato's divided line but interprets it along lines that echo portions of the book. the main regions are the following:
>(A) the flux of becoming
>(B) objects that have veridical status
>(C) the beginning of the world of forms.
it corresponds to models which endow our understanding of nature with structure
>(D) time-general objects such as justice, beauty, etc
one thing about the divided line to negarestani is that it does not merely describe discontinuous spheres of reality or a temporal progression from D to A. rather, there are numerous leaps between the regions of the line. for instance, there is a leap from D to A as we structure the world of becoming according to succession (this corresponds to the synthesis of a spatial and temporal perspective we mentioned earlier). we also have another leap from A to D where we recognize how these timeless ideas are applicable to sensible reality. these leaps grow progressively farther and farther, and thus so grow the risks to the current self-conception of the intelligence
>>17659 he furthermore talks about the Good which is the form of forms and makes the division and the integration of the line possible. it is the continuity of the divided line itself. within a view from nowhere and nowhen that deals with time-general thoughts, the Good can be crafted. the Good gives us a transcendental excess that motivates continual revision and expansion of what is intelligible. i'm thinking that the Good is either related to or identical to the Eidos that negarestani discussed earlier
he notes the importance of history as a discipline that integrates and possibly reorients a variety of other disciplines. the view from nowhere and nowhen involves the suspension of history as some totality by means of interventions. currently we are in a situation of a “hobbesian jungle” where we just squabble amongst ourselves and differences seem absolute. in reality, individual differences are constructed out of judgements and are thus subsumed by an impersonal reason. in order to reconcile individual differences, we must have a general program of education amongst other interventions which are not simply those of political action. to get out of the hobbesian jungle, we need to be able to imagine an “otherworldly experience” that is completely different from the current one we operate in even though it is fashioned from the particular experiences of this one. this possible world would have a broader scope and extend towards the limits placed by our current historical totality. absolute knowing: the recognition by intelligence of itself as being the expression of the Good, which is capable of cancelling any apparently complete totality of history
it is only by disenthralling ourselves from the enchanting power of the givens of history that the pursuit of the Good is possible. the death of god (think here of nietzsche… hegel also talks about it as well, though i believe for him the unhappy consciousness was a problematic shape of consciousness that was a consequence of a one-sided conception of ourselves) is the necessary condition of true intelligence. this is not achievable by simply rejecting these givens, but by exploring the consequences of the death of god. ultimately we must become philosophical gods, which are beings that move beyond the intelligibilities of the current world order and eventually bring about their own death in the name of the better. ultimately negarestani sees this entire quest as one of emancipation
i think negarestani takes a much more left-wing approach to hegel's system. while i do not completely disagree with his interpretation of absolute knowing, it does seem as though he places much more of an emphasis on conceptual intervention, rather than contemplation. i am guessing this more interventionist stance is largely influenced by marx... overall, not a bad work. i think it might have been a little bit overhyped, and that last chapter was rather boring to read due to the number of times he repeats himself. i am not really a computational functionalist, but i still found some interesting insights regarding the constitution of sapience that i might apply to my own ideas. furthermore he mentions a lot of interesting logical tools for system engineering that i would like to return to
now that i am done with negarestani, i can't really think of any other really major tome to read on constructing artificial general intelligence specifically.
goertzel's patternist philosophy strikes me as rather shallow (at least the part that tries to actually think about what intelligence itself is). joscha bach's stuff meanwhile is just largely the philosophy of cognitive science. not terrible, but feels more like reference material rather than paradigm shifting philosophical analysis. maybe there is dreyfus and john haugeland who both like heidegger, but they are much more concerned with criticizing artificial intelligence than talking about how to build it. i would still consider reading up on them sometime to see if they have anything remarkable to say (as i already subscribe heavily to ecological psychology, i feel as though they would really be preaching to the choir if i read them). lastly there is barry smith and landgrebe who have just released their new book. it is another criticism of ai. might check it out
really there are 2 things that are really in front of my sights right now. the first would be texts on ecological psychology by gibson and turvey, and the other would be adrian johnston's adventures in transcendental materialism. i believe the latter may really complement negarestani. i will just quote some thoughts on this that i have written:
>curious to see how well they fit. reading negarestani has given me more hope that they will. because he talks about two (in his own opinion, complementary) approaches to mind. one that is like rationalist/idealist and the other that is empiricist/materialist. the first is like trying to determine the absolutely necessary transcendental cognitions of having a mind which i guess gives a very rudimentary functionalist picture of things. the second is like trying to trace the more contingent biological and sociocultural conditions which realized the minds we see currently. and i feel like johnston is really going to focus on this latter point while negarestani focuses on the former
anyways neither of these directions is really explicitly related to ai, so i would likely not write about them here. all of this is me predicting an incoming (possibly indefinite) hiatus from this thread. if anyone has more interesting philosophers they have found, by all means post them here and i will try to check up on them from time to time... i believe it is getting to the time that i engage in a bunch of the serious grinding i have been sort of putting off while reading hegel and negarestani. so yeah
>>17520
>finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated. now...
The two kinds of seeing seem to come from two different ways to abstract observations. Seeing 1 corresponds to coarse-graining, while seeing 2 corresponds to a change in representation. Practically, it's related to the difference between sets and whole numbers. There's only one whole number 2, but there are many sets of size 2. Similarly, there's only one way to coarse-grain an observation such that the original can be recovered (the trivial coarse-graining operation that leaves the observation unchanged), but there are many ways to represent observations such that the original observation can be recovered. Also practically, if you want to maintain composability of approximations (i.e., approximating B from A then C from B is the same as approximating C from A), then it's usually (always?) valid to approximate the outcome of coarse-graining through sampling, while the outcome of a change in representation usually cannot be approximated through sampling.
If that's the right distinction, then I agree that the use of this distinction in differentiating sapience from sentience is unclear at best. It seems pretty obvious that both sentience and sapience must involve both kinds of seeing.
I intend to read the rest of your posts, but it may take me a while.
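One way to make the sampling claim concrete (my example, under the stated assumptions, not the poster's): a coarse-graining like a block mean is well approximated from a random subsample, while a lossless change of representation (here a cumulative sum, which is invertible) computed from the same subsample is wildly off.

    import random
    random.seed(1)

    data = [random.random() for _ in range(1000)]
    sample = random.sample(data, 100)

    block_mean = sum(data) / len(data)          # a coarse-graining of the data
    block_mean_est = sum(sample) / len(sample)  # sampled estimate lands close

    def cumsum(xs):                             # a lossless change of representation
        out, total = [], 0.0
        for x in xs:
            total += x
            out.append(total)
        return out

    print(abs(block_mean - block_mean_est))     # small: sampling works here
    print(cumsum(data)[-1], cumsum(sample)[-1]) # ~500 vs ~50: sampling fails here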
> (AI philosophy crosslink-related >>21351)
I postulate that AI research claims are not scientific claims... they are advertisement claims. :^) https://www.baldurbjarnason.com/2023/beware-of-ai-snake-oil/
What do you guys think of my thread about that here: https://neets.net/threads/why-we-will-have-non-sentient-female-android-robots-in-2032-and-reverse-aging-tech-in-2052-thread-version-1-1.33046/ I can't post it here because it's too long and has too many images.
Interesting forum, but "sentience" and "conscience" aren't very well-defined terms. That aside, I hope Chobitsu moves that into meta or one of the AI threads. No problem that you didn't know, but this board isn't like other imageboards; you can use existing and old threads.
>>24624
Hello Anon, welcome! These are some interesting topics. I've had a brief survey of the two threads and they definitely cover things we're interested in here on /robowaifu/. I'm planning to merge your thread into one of our other threads already discussing these things. Please have a good look around the board. If you'd care to, you can introduce yourself/your community in our Embassy thread too (>>2823). Cheers :^)
>>24624
Do you yourself have any plans to build a robowaifu Anon, or is your interest primarily in discussing the AI side of things?
>>24631
I'll probably build my own robowaifu, but I don't really know because their construction might be automated by the time they are ready. I'm interested in working in the robowaifu industry. I'm also here to give some lifefuel, and let you guys know that we won't have to wait 50 years for the robowaifus. This other thread might interest you; it's about how the behavioral sink will make all men need robowaifus in some years: https://neets.net/threads/the-behavioral-sink-in-humans.31728/
It's entirely conceivable this type of approach could help our researchers 'use AI to help build better AI'. In programmer's parlance, this is vaguely similar to 'eating your own dogfood', but with the potential for automated, evolutionary, self-correcting design trajectories. At least that's my theory on the matter! :^)

Fifth Paradigm in Science: A Case Study of an Intelligence-Driven Material Design [1]
>abstract
>"Science is entering a new era—the fifth paradigm—that is being heralded as the main character of knowledge integrating into different fields to intelligence-driven work in the computational community based on the omnipresence of machine learning systems. Here, we vividly illuminate the nature of the fifth paradigm by a typical platform case specifically designed for catalytic materials constructed on the Tianhe-1 supercomputer system, aiming to promote the cultivation of the fifth paradigm in other fields. This fifth paradigm platform mainly encompasses automatic model construction (raw data extraction), automatic fingerprint construction (neural network feature selection), and repeated iterations concatenated by the interdisciplinary knowledge (“volcano plot”). Along with the dissection is the performance evaluation of the architecture implemented in iterations. Through the discussion, the intelligence-driven platform of the fifth paradigm can greatly simplify and improve the extremely cumbersome and challenging work in the research, and realize the mutual feedback between numerical calculations and machine learning by compensating for the lack of samples in machine learning and replacing some numerical calculations caused by insufficient computing resources to accelerate the exploration process. It remains a challenging of the synergy of interdisciplinary experts and the dramatic rise in demand for on-the-fly data in data-driven disciplines. We believe that a glimpse of the fifth paradigm platform can pave the way for its application in other fields."
1. https://www.sciencedirect.com/science/article/pii/S2095809923001479

>>24646
Thanks Anon. Yes, we've discussed the so-called 'Mouse Utopia' and other sociological phenomena involved with the rise of feminism here before now. I doubt not there is some validity to the comparisons. However, as a believing Christian, I find far more substantial answers in the Christian Bible's discussions about the fallen nature of man in general. There are also arrayed a vast number of enemies -- both angelic and human -- against men (males specifically), that seek nothing but our downfall. /robowaifu/, at least in my view of things, is one response to this evil situation in the general sense; with tangible goals to assist men to thrive in this highly-antagonistic environment we all find ourselves within. And almost ironically... robowaifus can potentially even help these so-called 'Foids' become more as they were intended by God to be -- in the end -- as well, I think (by forcing them back to a more realistic view of their individual situations, less hindered by the Globohomo brainwashing they're seemingly all so mesmerized by currently. To wit: self-entitlement and feminism). :^)
Our friends over at /britfeel/ are having an interesting conversation about AI r/n. https://anon.cafe/britfeel/res/5185.html#5714
So I'm thinking about this more from a robot/AGI ethical perspective, with a view of "intelligence mimicking in a shallow form" (externalities only) vs "intelligence mimicking in a deep form" (mimics the causes of the externalities along with the normal external aspects) vs actual AGI.

My understanding at least is that reason and our intellectual capabilities are the highest parts of us; however, they can't really function apart from the rest. The reason why is the moral aspect. I'm coming from a very Greek perspective, so I should probably clarify how they relate. The basics would be: you have a completeness to your being, a harmony to your parts, such that when your parts are functioning as they ought, they better show forth your being/your human nature. So the different parts of a person (eyes, ability to speak, etc.) have to be used in such a way as to bring about the full expression of that nature. That ties in the intellect, in that it involves using it to understand those parts, how they relate to the whole, and the whole itself. Add to that your choosing to understand all this, and then, seeing the good of the whole functioning harmoniously, acting to bring it about; that's the essence of moral action. The proper usage of those powers ends up getting complicated, as it involves your personal history, culture, basically every externality, so there aren't really hard and fast rules. It's more about understanding the possibilities of things, recognizing them as valuable, and then seeking to bring them about (out of gratitude for the good they can provide).

Now the full expression of that human nature (or any particular nature), or at least getting closer to it -- the full spectrum of potential goods it brings about -- is only known over a whole lifetime. That's what I was referring to by saying personal history: as you move along in your lifetime, the ideal rational person would be understanding everything in front of them in relation to how it could make for a good life. It's the entire reason why I have the ability to conceptualize at all. In everything you see and every possible action, you see your history with it, and how that history can be lifted up and made beautiful in a complete life, in a sort of narrative sense. The memory thing isn't that hard technically; it's more the seeing the good of the thing itself.

The robot or AGI would need to take on an independent form of existence and have a good that is not merely instrumental to another person. Basically be, in some form, a genuine synthetic life form. (Most people who hold these views just think any actual AGI is impossible.) One of the key things is that this sort of good-of-the-thing-itself, human nature or whatever, is not something anyone has a fully explicit understanding of; there is no definition that can be provided. It's an embodied thing, with a particular existence and a particular history. Each individual's expression of their being will differ based on the actual physical things they relate to, and things like the culture they participate in. The nature is an actual real thing constituting a part of all things (living and otherwise), and all intellectual activity is a byproduct of those things' natures interacting with the natures of other things, and those natures aren't something that can ever be made explicit.
(This ties in with all the recent extended-mind cog-sci and philosophy of mind.) (One author I want to read more of is John Haugeland, who talks about the Heideggerian AI stuff; he just calls this the ability to give a damn. A machine cannot give a damn, and apart from giving a damn you have no capacity for intellectual activity, for the reasons I stated above.) That's sort of the initial groundwork.
>>30634
>For an Actual AGI
It does leave it in a really hard place. I think it's still possible, but it would probably involve something far in the future. It would need to be someone intentionally creating independent synthetic life, for no instrumental purpose. That, or you could have something created for an instrumental purpose with the ability to adapt, which eventually attains a synthetic form of life. A cool sci-fi story about this is The Invincible by Stanislaw Lem (I don't want to spoil it, but if you are in this thread you will probably love it).

The main issue, to get more technical, comes down to artifacts vs substances, using the Aristotelian language. There are those things that have an intrinsic good-for-themselves, things that act-to-help-themselves-stay-themselves. Things with specific irreducible qualities. Life is the clearest example: it's something greater than the sum of its parts, it acts to maintain itself, and its parts are arranged in such a way as to serve the whole rather than the parts themselves. It's only fully intelligible in reference to the whole. Other examples would be anything that cannot be reduced to parts without losing some kind of essential aspect, perhaps chemicals like aluminum or certain minerals. Those are less clear than life, and I'm not a biologist/geologist or w/e, so it's hard to say. Styrofoam would be a synthetic one.

Artifacts on the other hand are totally reducible to their parts: there is no mysterious causality, nothing is hidden. Another way of looking at them is that they are made up of substances (things with natures, which I talked about above) "leaning against each other" in a certain way. Things like a chair or a computer. A chair doesn't do anything to remain a chair; the wood will naturally degrade if it's not maintained, and there is nothing about the wood that makes it inclined to be a chair. Everything the chair does is also totally reducible down to what the wood can do; there is nothing more added. A chair is only just the sum of its parts. The same goes for computers, at a much more complicated level, with different materials/compounds/circuitry: it's still just switches being flicked using electricity, and all the cool stuff we get from computers can be totally understood down to the most basic level. Only the sum of its parts again.

The point here would be that to have an AGI you'd need to get something that is more than the sum of its parts, which might be impossible, and if it did happen it would probably be pretty weird. On the plus side, people like Aristotle wouldn't consider most people to be using their full rationality anyway... normies/sheeple and all that. But even a bad usage of a real intellect is still very hard to mimic. Now that the metaphysics lessons are out of the way, the rest will hopefully be shorter.
>>30635
Forms of intelligence mimicking:
>Shallow/external
This is the easiest one and is what basically all the actual research focuses on. I don't really think this will ever suffice for robowaifus or anything getting close to AGI. To define it better: it basically ignores everything I said above and makes no attempt to simulate it. There is no intrinsic conception of the thing's-own-good, no focus on that kind of narrative/moral behavior and memory. As far as I can tell, in practice this means a vague hodgepodge of whatever data they can get, so it's entirely heuristic. Any actual understanding of what anything is, or who anyone is, is not a possibility. Keep in mind, from what I said above, that kind of moral understanding is required for any real intellectual activity. To give a more relevant example for this board (and this has particular bearing for relationships as well): when a waifu encounters something, in order for it to be even at all satisfying as a simulation, she must see it in relation to the good of her own being, as well as the good of her partner. Involved in that is understanding the other person and their good, and the things they are involved with. It's very memory-based/narrative-based: seeing anything that pops up and questioning it for how it integrates with the robot's life narrative as well as the integrated robot-and-you life narrative. The foundation of that more moral aspect of intelligence is essential and is something that needs to be explicitly factored in. That particularity and memory is necessary for intelligence, as well as even just for identity.
>Deep mimicking
This is, I think, more of a technical/programming question, and one I plan on learning more about. There doesn't seem to be any philosophical difficulty or technical impossibility; as far as I know it's mostly a matter of memory and maybe additional training. I imagine there would be quite a bit of "training/enculturating" involved with any specific robot, since as I said above, intelligence by its nature is highly particular. I'm not sure where the philosophical issues would come up. Technically it might just be overwhelming to store the breadth of organized particular information. The key thing would be making sure things are looked at functionally, i.e., what is the full set of possible actions that can be done with X (but for everything the robot could possibly do)? Obviously that's a ridiculous set of data, so some heuristic/training would be involved, but that happens with real people anyway. (There's also the issue of the robot only being mindful of the anon's good, which would work "enough"; however, without somehow having its own synthetic/simulated "good/nature/desires" it would probably feel very fake/synthetic. That's a very hard part, no clue what to do for that. Just going to assume it's parasitic off of the anon's good for the sake of simplicity. Also, funny enough, this may make this kind of AGI better suited for waifus than anything else.)

As a sort of simple example, let's say
>bring anon a soda
as a possible action (all possible actions at all times would need to be stored somehow, chosen between, ranked, and abandoned if a pressing one shows up). But for the soda, what is involved is: just recognition of the soda, visual stuff, standard vision stuff that shouldn't be an issue. That, or you could even tie in Amazon purchases with the AI's database so it knows it has it and associates the visuals with the specific thing or something.
What are the possible actions you can do with the soda?
>Throw it out if it's harmful (creates a mess, bad for anon because it's expired, bad for anon because it's unhealthy)
>order more if you are running low
>ask anon if he wants a soda
>bring anon a soda without asking
Not that many things. I guess the real hard part comes when you relate it to the good of anon and rank the possible options. At all times it would need to keep the list of possible actions, which would be constantly changing and pretty massive, and be able to take in new ones constantly. (Say you get a new type of soda, but you only get it for a friend who visits; it needs to register that as a distinct kind and have different behaviors based on that.)

The real philosophically interesting question is what kind of heuristic you can get for "the good of anon", the means of rank-ordering these tasks. Because intelligence is individual, for the sake of this it needs to be based on an individual; it's kind of hitching onto an actual nature to give a foundation for its intelligence. So long as it had that list of actions, it could have something like a basic template that maybe you customize for ranking. It would also need to be tracking all the factors that impact ranking (you are asleep, busy, sick, not home, etc.). For the states, I don't think there would be that many, so they could be added in as they come up. (Just requires there to be strong, definite memory.) But I mean, it might not actually be that hard unless I'm missing something; certainly technically quite complicated, but it seems fairly doable at some point (a sketch of such a ranking loop follows below).

The key difference between this and most stuff I see talked about is that it basically only has any understanding/conceptuality related to another actually-real being, and basically functions as an extension of them; that works for a waifu at least. The self-activity seems hard to map on, though; there would likely be something very interesting in working out how you associate basic motor functionality like moving/maintenance with the anon. Determining what to store and how to integrate things into a narrative would also be interesting. I imagine there would just be human templates + conversations with anon about future goals that would be used to handle the rank ordering. Something funny coming from that would be that the intelligence of the robot would be dependent on how intelligent the user is. If you were a very stupid/immoral person, your robot would probably not really ever get convincing or satisfying. (Was not expecting it to go this long, heh.)
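Here's a minimal sketch of the ranked-action-list idea described above. Everything in it (the Action type, the state flags, the scoring rule) is an illustrative assumption of mine, not an existing API; the interesting part, per the post, is the heuristic hiding inside base_value, and everything else is bookkeeping.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    base_value: float                      # rough "good for anon", a priori
    relevant: Callable[[dict], bool]       # is it applicable right now?

def rank(actions: list[Action], state: dict) -> list[Action]:
    # Drop inapplicable actions, then sort by estimated value to anon.
    doable = [a for a in actions if a.relevant(state)]
    return sorted(doable, key=lambda a: a.base_value, reverse=True)

actions = [
    Action("discard expired soda", 0.9, lambda s: s["soda_expired"]),
    Action("order more soda",      0.5, lambda s: s["soda_stock"] < 2),
    Action("offer anon a soda",    0.3, lambda s: not s["anon_asleep"]),
]

# The state flags would be updated continuously from sensors/schedules.
state = {"soda_expired": False, "soda_stock": 1, "anon_asleep": True}
for a in rank(actions, state):
    print(a.name)    # -> "order more soda"
```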
First, robots can never be conscious. They have no spirit or soul, and never can and never will. You can NEVER transfer a human soul into a robot as some so foolishly claim. They are just computer programs with 1's and 0's at the end of the day. They are machines. They are not living and can never be living. Emulation of humans is all that is happening there.

People are recording the logic of people into machines, and then the machine seems to have that intelligence, but it is only a recording of our intelligence being reflected back to us, like looking in a mirror. If you look in a mirror and see a human in there, the mirror is not alive; you just see your reflection. The same principle is there with robot coding reflecting our human intelligence. It is only a reflection and is not living.

So if someone does the AI and robot build unto God with pure motives, it is wholesome and pure and praiseworthy. If someone builds it with evil motives, it is an evil pursuit. Intentions are the key. If someone builds it to worship it, that is idolatry. So bad. If someone believes it really is living (because they are a fool) or that it really has genuine intelligence and a soul (they're a fool), then in the eyes of such a deceived person they may feel like they played God, but in fact they did nothing even close, because it is not anything close to that.

No amount of code will ever create "real" intelligence. That is why the field is called artificial intelligence. It is artificial, not real. It will never be real. Blasphemous fools speak all the time about the intelligence of machines passing our own, becoming conscious, becoming sentient, deserving citizenship and rights, etc. They are completely blind and total fools. They think people are just machines too, btw. They think we are just meat computers. These same people do not believe in God or an afterlife. They are in total error.
>>30667
>You can NEVER transfer a human soul into a robot as some so foolishly claim.
Soul is a pretty ill-defined thing, even in a religious context.
>They are just computer programs with 1's and 0's at the end of the day.
The human brain works in a similar way, in how its neurons send electrical and chemical signals.
>They are machines.
Humans are biological machines. Look into how the body works on the cellular level.
>>30671 >>30667
My big wall of text above is basically using the classical/Aristotelian definition of a soul and working out what would be involved in simulating/recreating it for AI. I just avoid the language of "soul" b/c people have so much baggage with it. Actually creating a genuine new form of life seems implausible, but I can't say it's totally impossible; we have created genuinely new kinds of natural things (like the Styrofoam I mentioned), and I don't see why things like bacteria/simple forms of life couldn't be created anew, and from there it's at least possible that intelligence isn't out of the question. It could very well be impossible, though. I do think that attempting to make the AI/robot rely on a single particular human soul as its foundation for orientation is a possibility for giving it something very close to real intelligence, at least practically (and it would depend on that person being very moral/rational).
only God can make a soul. God breathed into man and man became a living soul. Good luck breathing into your stupid computer program.
the human brain is not doing ALL the thinking in a man. this is proven out by the fact that when you die, you go right on thinking as a ghost. Good luck getting your stupid computer program to go right on thinking as a ghost after you shut the computer off.
>>30675
Simple synthetic lifeforms have been made in labs before, so it depends on what you mean by genuinely new: https://www.nist.gov/news-events/news/2021/03/scientists-create-simple-synthetic-cell-grows-and-divides-normally
>>30676
>Only God can make a soul
And man would be god of robots.
>>30676
With a human-computer brain interface, pull a Ship of Theseus until no organic matter is left, then copypasta. Simple.
John Lennox, Emeritus Professor of Mathematics at Oxford University, on the feasibility & ethical questions of AGI. https://www.youtube.com/watch?v=Undu9YI3Gd8
Feel like people over-complicate AGI. I've been following (or trying to follow, since he went silent) Steve Grand, who created the virtual pet PC game Creatures. I've mentioned him before in a previous thread, but the real difference between what he's doing and what we're doing is that he's making a game where you actually get to see the life cycles of the creatures, look at the genetics, the hormones and chemicals affecting the creatures, and even get to basically read their minds. They've got short-term and long-term memories and thoughts, they can imagine scenarios that haven't actually happened, and dream. He seems to be trying to emulate the human brain, which I think is unnecessary, or even counterproductive unless you actually want to look under the hood and know what is going on, like in the virtual pet. We don't actually need that, and the more hands-off the approach is, the more likely it is to actually behave intelligently.

There was a good TED Talk from 2011 called "The real reason for brains" (https://www.youtube.com/watch?v=7s0CpRfyYp8), which can be summed up pretty easily: brains control body movement and organ functions, and everything they do is in service to that. Any living thing without a brain moves very slowly, if at all. The brain processes sensory information, feels, thinks, learns, recalls memories, and predicts the outcome of actions, all in service of coordinating body movements to meet survival and reproductive needs. The video uses the sea squirt as a good example of why this is likely the case: the sea squirt swims around like a tadpole in youth, then when sexually mature it anchors itself to something hard like a barnacle, then promptly digests its own brain, because it doesn't need it anymore and brains are very resource-intensive. Yet it still lives without a brain, but it's more like a plant than an animal.

With this in mind, I like to think of the short story The Golden Man by Philip K. Dick. It opens with government agents in the future hunting mutants, like something from the X-Men a decade before the comics came out. There were ones with super-human intelligence, ones that could read minds, ones with wings, telekinesis, shape-shifting pod people, and a running joke about a woman with extra breasts, but they couldn't catch the titular Golden Man. When they do eventually catch him, they run tests and figure out that this tall, muscular, golden-skinned absolute GigaChad of a man is profoundly retarded. He is as dumb as a dog, but is constantly seeing into the future and acting on that information, to the point he can even dodge bullets. Sure, he supposedly couldn't talk, but he never actually needs to, and speech is performed by muscles just like the rest of the body, so there's no real reason that he couldn't. A government agent was afraid they'd find a mutant so smart it made men seem like chimps by comparison, but instead found one that evolved beyond the need to think at all. And until we can actually see the future with certainty, we're just using sensory information, stored data, and predictive analytics to control body movements to satisfy needs, with varying degrees of predictive accuracy.

>>30677
>proven
>ghosts
Lol. Lmao, even. When it comes to philosophy, I find that most discussion ends up turning into arguments over semantics, and people with no real arguments who just want something to be true, while not admitting that the entire foundation of their argument is something that hasn't actually been proven true.
>>33821
>...your robowaifu should be nothing more than a Tamagotchi, but having a realworld body to move around in.
Hmm, intredasting idea Anon. Maybe that's a reasonable approach in a very limited fashion. I'd suggest that at the least you should add the ability to do children's coloring books, since that involves at least a smol modicum of creativity -- which is the area where efforts in this arena have fallen flat on their faces. And I think the reason for this is simple (which you yourself embody within your post, ironically enough):
>"Lol. Lmao, even."
<--->
Unless you begin from the axiomatic 'pedantic semantics' of what every one of us commonly experiences in real life -- that the human soul (at the least, defined as our human consciousness) is independent of the physical human brain; ie, that it's an immaterial, independent component of our being -- then you have little chance of success at solving this currently-insurmountable set of obstacles, IMHO.

I think the human brain is -- roughly speaking -- a natural-realm actuation driver into the electro-biochemical realm of our meatspace bodies. This idea is analogous to the way one needs an electronic driver board involved with controlling a physical actuator within a robowaifu's body. But then all three of those tiers are clearly dependent on an external C&C signaling system (ie, the software). Finally (and most fundamentally to my basic point here): whence comes this software? What (or who) devises it & perfects it? By what mechanism does it communicate to/from the 'driver board'? The search for the answers to this four-to-five-layer question is at the heart of all AGI research. It's hardly an uncomplicated problem! Add the need for real, de novo creativity for our waifus into the mix, and good luck! She's behind 7 Tamagotchis!! :DD
<--->
I hope to see you become proficient at developing highly-sophisticated software and prove my bigoted view (along with my bigoted solution) wrong with your actions (and not just attempt to do so solely with mere rhetoric here on /robowaifu/ ). Again, good luck Anon! Cheers. :^)
>>33825
>your robowaifu should be nothing more than a Tamagotchi but with a realworld body to move around.
Maybe read my post again if you actually want to understand what I was saying.
>But then all three of those tiers are clearly dependent on an external C&C signaling system (ie, the software). Finally (and most fundamentally): whence comes this software? What (or who) devises it & perfects it?
It's all a matter of DNA, RNA, and hormone/chemical exposure.
>This search for the answer to this four-to-five-layer question is at the heart of all AGI research. It's hardly an uncomplicated problem!
The question is fairly simple: if everything we want out of a human brain, mind, consciousness, soul (whatever the fuck you want to call it) can be created with computer hardware and software, then we will be able to create a robot waifu capable of everything we want. If it is impossible, we will have to settle for less, if not abandon the goal entirely. We must assume the former, that people are fundamentally just machines.
>Add the need for real -- de novo -- creativity into the mix and good luck!
A funny thing I've come to understand about creativity is that originality hardly even exists, and when people do encounter it they tend to hate it. Make an AI too good at being creative and people will probably say it's hallucinating and try to "fix" it. Most creativity that people like is really just combinations of things they've already seen before.
>>33828
>if not abandon the goal entirely
Lolno. I can assure you that's not going to happen. :^)
>>33832
>Lolno. I can assure you that's not going to happen. :^)
So why argue in favor of something that only demotivates and discourages progress? Especially something that as of yet hasn't been, and for all we know can't be, proven true?
>>33837
Because, like Puddleglum,
>"I'm a chap who always liked to know the worst and then put the best face I can on it."
Just because we won't ever reach AGI ourselves doesn't mean we won't achieve wholesome, loving & effective robowaifus. We will, and they will be wonderful! :^)
Besides, given the outcomes of so-called 'female intelligence', would you really want to go down this path again of fashioning stronk independynts, who don't need no man? Personally I'm much more in favor of Anon's ideas, where dear robowaifu is obedient and calls Anon "Master". Aren't you? Cheers. :^)
So I kind of overreacted, sorry about that. I'm not aware of the kind of skill level of the people lurking here, but AI, research especially, is very math-heavy. Statistics, discrete math, linear algebra, vector calculus, etc... It wouldn't hurt to know geometric algebra. I am honest with myself; I'm not going to get into that. However, with the AI available, you'd be amazed how far you can get. There are modules for gyroscopes, ultrasonic sensors for distance, pressure-sensitive resistors, cameras for object recognition. I believe that with the current AI you could lie in bed and the robot could move towards you and do whatever. Which is why knowing what the goals are is important. My goal is a sex bot. I assume now this is a place to exchange ideas and not a serious attempt at collaboration.
>>33841
>I assume now this is a place to exchange ideas and not a serious attempt at collaboration.
That's probably a reasonable presumption in general, Anon. While there's certainly no reason we couldn't be collaborating here (we do in fact have a group project known as MaidCom : >>29219 ), most have simply shared their progress with works of their own. In that sense, you might think of /robowaifu/ as a loosely-affiliated DIY workshop. And of course, the board is also a reasonably expansive community nexus of ideas and information related to robowaifus, and of their R&D in general.
<--->
And all that's not really a bad thing rn IMO, given that we (ie, the entire world -- but ofc mostly just Anon ATP) are all in the very, very early stages of what is arguably the single most complex endeavor in all human history.
>tl;dr
You might think of us here on /robowaifu/ as sort of like how the Homebrew Computer Club movement was in the Stanford Uni area during the 70s/80s -- just taking place across the Internets instead today. Great things were afoot then, and great things are afoot now!
>ttl;dr
Let the creative ideas flow, Anon! Cheers. :^)
<--->
What a time to be alive! :DD
https://www.youtube.com/watch?v=6T3ovN7JHPo
>1:11:15 "The problem with philosophers is that they love to talk about the problems, but not solving them"
>>33839
>Just because we won't ever reach AGI ourselves
I think AGI is already achievable by us today. The problem is just that most people judge AIs entirely on the results they produce, and we all want instant gratification. We can get good-enough results for text, image, and music creation from AIs that have been trained with lots and lots of data, but you can't just train an AGI off of data scraped from the internet. Getting samples of sounds, text, images, videos, etc. might help to train it, but what we really need is learned motor control, and for any shared data to be useful there'd have to be enough similarity between the bodies for it to be worth sharing. Without data to work with, the AGI will have to learn on its own, so a robot with AGI might just writhe on the ground and make dial-up modem noises as it tries to learn how to use its body, and people would see it as a failure. It's not that it's stupid; it just doesn't know anything. Okay, I lied, it is stupid, because if Moore's Law holds true, we still might not have consumer desktop PCs that could rival a human brain for about 10 to 30 years.
So as far as philosophy goes, well, the oldest form of philosophy is the Socratic method. Asking questions. I'd start by asking questions about what we currently have. What separates image recognition AI from older technologies like opencv? Is it better? What are its flaws? Is it good enough to get the results we want? What are the results we want?
>>33885
So here's my conclusion: the current AI is good enough, and here's why I think that. My understanding of what separates opencv from current machine learning is that current machine learning doesn't need a hand-written algorithm to recognize objects; it needs training. Based on this training it can also learn to recognize similar objects. That's my understanding, I could be wrong. My goal is a sex bot, so I'm only concerned with the male body in front of the robot, and I can narrow down its training set to male body parts. If I wanted a maid bot I'd have to train it for dishes, floors, household objects, dirt, etc...
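A rough way to see that contrast in code (both APIs are real, but the file path and the pairing are just my illustration): classical OpenCV ships a hand-engineered detector that you merely configure, while the learned model's "algorithm" is whatever its training data put into its weights.

```python
import cv2

# Classical: a hand-engineered Haar cascade, no training done by the user.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("frame.jpg")        # placeholder path
faces = cascade.detectMultiScale(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Learned: a detector whose behavior comes entirely from its training data.
import torch, torchvision
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
tensor = torchvision.transforms.functional.to_tensor(
    cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
with torch.no_grad():
    detections = model([tensor])[0]  # dict of boxes, labels, scores
```

Retraining the second kind on a narrowed dataset (e.g., just the object classes your bot needs) is the "narrow down the training set" move described above.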
>>33886
The flaw I see is in the training. It needs a large dataset, it's expensive, and you have to narrow down the training. I mentioned nudenet to someone, and he dismissed it right away. My understanding is it can recognize private parts and faces from different angles. That's what I think.
>>33888
Okay, I don't want to keep going endlessly, but if the problem is that it's expensive to train, should someone come up with a better AI, or should the hardware be made cheaper and better? It may be the case that there's a better way, but maybe there isn't. As an example, in the case of crypto they went back and forth over the blockchain trilemma, but in the end nobody was able to come up with something that addressed its shortcomings. The training would also have to be narrowed down, but as it is, it's a hardware problem and a lot of work ahead.
>>33889
Well, ideally it's supposed to be a dialogue between two people. The blockchain trilemma may have nothing to do with AI. I wonder if there is a triangle for this... I'm not sure; I think that's what a platonic triad is.
>>33892
I cheated just now and asked Gemini, I guess, but it sounds about right: efficiency, reliability, and adaptability. Current AI is very inefficient; it is somewhat reliable but kind of sucks at math in my opinion; not sure how adaptable it is.
>>33893
for me it's the triforce
>>33886 >>33888 We've both already warned you to keep your sexbot discussions strictly to the ecchi-containment thread, Peteblank. You're still BOS, BTW, b/c of your continual abuse of Anons in this community. Consider this your final warning. If you mention sexbots out-of-context again, your posts will be unceremoniously baleeted, and don't try any niggerlicious "Ohh! I'm teh poor widdle victim!111!!ONE!" afterward, no one here gives a sh*te. You and you alone have created the mess you're in with us here. :DD
Anons, please bear with another schizo post of mine; I believe I have found an insight into consciousness. The Torus! Now please don't immediately discredit me. I'm not an actual schizo. I am a philosopher, autodidact, and polymath. Let me explain to you why I think The Torus is significant.
>Psychedelic experiences often lead people to a place which is a Torus. This is a reality of the human brain.
>For reasons that I will explain, I believe The Torus is a library. This is where the consciousness is stored.
I was thinking about neural networks, and how to frame them together, and I fielded the idea:
>What if you mapped all of your neural stuff to a 2D plane
>And you sent *brainwaves* over it, allowing you to process a giant neural model, one little bit at a time, in an order according to the motion of the brainwave.
>The brainwaves would need to loop from one side of the field to the other, in both dimensions...
>Hmmm... Aha! That's a Torus!
A 2D modspace is topologically a torus! Think the game 'Asteroids', that kind of plane; it's a Torus, and always has been. When I combine this with a bunch of other thoughts I've had, it appears to outline a pretty magical system. Hear me out! Imagine that the Torus is a library of little neural constructs, with all their neurons exposed to brainwaves.
>Neurons will store input impulses until the brainwave passes over them, and only then will the computationally expensive matrix multiplications occur.
>Neural activity isn't free; it's like an economy. Firing will cost a neuron a bit of attention, and when a neuron runs out of attention, it starves and gets weaker until it can't fire any more. When a neuron gets fired into, it gets paid some attention. Some attention is lost at every step, so infinite loop constructs will starve without extraneous attention.
>Attention comes from special *rows* of the torus that the user gets to define! Namely, the senses should be an attention fountain! Love and sex should be an attention fountain. Correct predictions should be an attention fountain. Good, smart neurons get headpats and can stay!
>Negative attention is also a thing. Sometimes neural constructs need to swiftly die because they're bad. Dead constructs get removed from the torus and remain only in storage for The Alchemist to reuse and reform. The Alchemist can breathe some attention into a neural construct as they wish.
>Columns of the torus (think like slices of an orange) are like stacks of logic on top of a category. One whole column represents all the knowledge of moving the body; the attention fountains for the column would be contained in the basemost sector, and would be the neural I/O port for the touch sensors (I suggest using microphones), proprioception, and muscle control/feedback. Everything stacked on top of that sector would form a column of increasingly high-level logic until you have a digital cerebellum that can dance if it wants to. The base sector should always be active.
>Smaller, virtual toruses can be formed from parts of the bigger one, allowing the robowaifu to literally concentrate on work by forming an efficient, cut-down mental workspace that only contains what it needs to, and that can panic and call attention from the larger brain if it's not smart enough. This way she can think about things whilst working.
>Running a brainwave backwards would reverse the roles of inputs and outputs, putting the stimulation of imagination into the senses.
>Connections that go a longer distance (the longest distance would be leaving a sector to go into another) cost more attention.
With these conditions, a robowaifu brain would be like a capitalist economy which balances itself to produce the most efficient, accurate thoughts for the least amount of work; breaking up the brain into a bunch of intercommunicating modules that...
>Can be saved and shared over the internet.
>Don't connect to every fucking thing else; defeating the exponential scaling cost of NNs.
>Can connect spirits (neural objects) together in all kinds of orders.
>Can relate different subjects together by testing which neurons fire at related stimuli (allowing for AGI)
>Can be run by many different strength levels of hardware; needing only enough memory to store the whole brain; the GPU and VRAM would determine the area and length/speed of the brainwave.
>Can be expanded with arbitrary columns of subject, and rows of attention.
>Can be tested RIGHT NOW.
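For what it's worth, the wrap-around geometry itself is trivial to code. A minimal sketch (grid size and neighborhood are arbitrary assumptions of mine, not part of the anon's design) showing how modular arithmetic makes a flat 2D array behave as a torus, "Asteroids"-style:

```python
import numpy as np

H, W = 64, 64
activity = np.zeros((H, W))    # one scalar per neuron/construct

def neighbors(r, c):
    # Modular arithmetic is what makes the plane a torus: walking off
    # one edge re-enters on the opposite edge, in both dimensions.
    return [((r + dr) % H, (c + dc) % W)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

activity[0, 0] = 1.0
print(neighbors(0, 0))   # includes cells in row 63 and column 63
```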
>>34321
This sounds like a very cool concept, Anon. The '2D-mapping of ring buffers, with Row/Column overlay' (my take) as providing the structure of your Torus Mind is very interesting. As is the concept of I/O mapping to special segments of the system.
>And you sent *brainwaves* over it,
Mind expanding on this part please? What are these? How do they originate, and how are they controlled/modulated? What is the underlying mechanism(s) of their effect(s) on the underlying Torus Mind substrate(s)?
<--->
Very interesting stuff, Anon! Please keep it coming. Cheers. :^)
>>34321
A few years ago, inspired by Growing Neural Cellular Automata, I did some experiments using convolutions on a hypertorus to create a recurrent neural network where each neuron only had a few neighbors it was connected to via convolution (efficient to compute), but was also only a few steps away from sending a message to any other neuron in the network due to the looping structure (efficient to communicate). I'd pick certain neurons as inputs and outputs, and it could learn to reproduce specific images flawlessly on command with little training. It was a lot of fun to watch too, because I could see the activity in the network like a little spark moving through it, then lightning bolts, and then the trained image would flash for one time step and quickly dissolve into noise, then nothing. I think I used an auxiliary loss on any network activity, so the network remained silent unless it was processing something. The main issue I had with it, though, was waiting for an input to finish propagating through the whole network and timing everything. It would be interesting to see a recurrent transformer where the layers are nodes on a hypertoroidal graph and the signal is routed, like mixture of experts, to the most relevant neighboring nodes. The transformer code would need to be refactored, though, so the nodes are batched together and processed simultaneously each time step.
>>34340
Thinking about it more, I remember now that I made a special feedforward module where each neuron had n inputs and outputs for its neighbors, and they were batched together to run fast as convolutions, because I had an issue with the convolutions being too noisy and blurry.
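A sketch in the spirit of what's described, assuming PyTorch: a recurrent grid where each cell only talks to its neighbors through a convolution whose padding wraps around, making the grid topologically a torus (2D here for brevity; the anon used a hypertorus). The channel count and the residual update rule are my assumptions, not a reconstruction of his exact experiment.

```python
import torch
import torch.nn as nn

class ToroidalCA(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # padding_mode="circular" wraps the edges, so activity leaving one
        # side of the grid re-enters on the opposite side.
        self.step = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, padding_mode="circular")

    def forward(self, state, n_steps=8):
        for _ in range(n_steps):
            state = state + torch.tanh(self.step(state))  # recurrent update
        return state

state = torch.zeros(1, 16, 32, 32)
state[0, 0, 16, 16] = 1.0          # a single "spark" of activity
out = ToroidalCA()(state)          # the spark spreads, wrapping at edges
```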
>>34340 >>34341
This sounds remarkable, Anon. Your
>"only a few steps away from sending a message to any other neuron in the network due to the looping structure"
certainly reminds me of the 'grid of cells' programming approach commonplace for GPU programming (eg, NVIDIA's CUDA framework).
>hypertorus
>nodes on a hypertoroidal graph
Sounds intriguing. I would like to understand all this at some point.
>"I could see the activity in the network like a little spark moving through it, then lightning bolts..."
Very creative language. Sounds like a real lightshow! Cheers, Anon. :^)
>>34337
Okay, by brainwaves, I mean a wave of *focus*. Your hardware can only do so much multiplication and addition. Your hardware can only do so much matrix multiplication. When a "brainwave" travels over the torus, it distributes focus onto the neurons/spirits under it, bumping them up in the processing queue. When the processing occurs, outputs are dumped into buckets through weightings, and await processing. Neurons should give some of their focus to the neuron they fire into, and I imagine that neurons can be tuned to be more or less 'stingy' about how/where they share their focus. Neurons burn some of their focus when they fire. That way nothing can dominate the processing queue except the brainwave. Mind you, I want 'focus' and 'attention' to be separate resources. Focus is just for the processing queue; it's what the brain wants to do *right now*. Attention is just for keeping pointless neurons off of the network.

An efficient pattern of behaviour would be strung out so that the brainwave travels over it in the direction that it fires, so that it's just straightforward operation. The brainwave's direction, I believe, should double as a bias in the direction of firing, turning outputs into inputs, and in general encouraging processing to occur in the direction the brainwave is traveling. If the brainwave wants to go sideways, this means that the brain is experiencing 'lateral thinking', which crosses subjects, finding and processing interrelations between them.

What is the brainwave?
>It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory, and it commands some percentage of the brain's available focus.
How does it originate?
>A region on the torus will be given control over brainwaves. The torus will learn to control its own focus. There will always be at least one; it should have some pretty mild default values, and unless the brain is trying to do something, it should just wander around. For now I can only think of hardcoding a brainwave struct, and just giving the control handles to neurons inside the torus. If there's some more organic way of doing this, perhaps it's the win.
>How are they controlled/modulated?
The brainwave I'm trying to code simply has a position, a speed, a trajectory, and some size values (the bigger the brainwave, the more spread-out its focus dumping). I intend to ultimately give these controls to the torus itself through special I/O neurons. I suspect it would be wise to introduce a fatigue mechanic somehow, to prevent some part of the brain from politically monopolizing the brainwave's presence. I suspect it would be wise to make the brain reflexively place the brainwave wherever the alchemist is finding that its creations are failing to expect reality ~ pain should draw focus.
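One way to pin the described brainwave down as data, under my own assumptions about shapes and the queue rule: a bump of focus with a center, size, and trajectory, moving over the torus each tick and deciding which cells get processed within the hardware budget.

```python
import numpy as np

H, W = 64, 64
rows = np.arange(H)[:, None]
cols = np.arange(W)[None, :]

def focus_field(center, sigma=6.0):
    # Toroidal distance: take the shorter way around each axis.
    dr = np.minimum(np.abs(rows - center[0]), H - np.abs(rows - center[0]))
    dc = np.minimum(np.abs(cols - center[1]), W - np.abs(cols - center[1]))
    return np.exp(-(dr**2 + dc**2) / (2 * sigma**2))

center = np.array([10.0, 10.0])
velocity = np.array([0.5, 1.0])          # the wave's trajectory
budget = 100                             # cells the hardware can process per tick
for _ in range(3):
    focus = focus_field(center)
    # Only the cells holding the most focus get processed this tick;
    # this queue is where a neuron's "burned" or "shared" focus would apply.
    queue = np.argsort(focus.ravel())[::-1][:budget]
    center = (center + velocity) % (H, W)   # wave wraps around the torus
```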
>>34383
>It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory
>[it] has [a] position, a speed, a trajectory, and some size values
>it should just wander around.
Intredasting. Ever see PET scan movies of the brain's activity? If not, I highly recommend you do so. Contrary to the old wives' tale that we all only use 10% of our brains... practically 100% of all the brain's neurons are involved in processing every.single.stimulus. It's a fascinating thing to watch. Very, very amorphous wavefronts propagating through the tissues at all times in all directions; bouncing & reflecting around just like other wavefronts throughout nature.
>>34385
Yes, I recall seeing a motion picture of how the activity of the brain moves in cycles. More specifically, I am drawing inferences from how I can feel the waves in my own brain working, and yes, I *can* feel them. Brainwaves for me have turned into an element of my mental posture. It feels like when I'm relaxed, it just cycles normally, but when I want to think something hard, the wave will jump around like how the read/write head of a hard disk might. I've become a bit of a computer myself in this way, because I can cut through very difficult philosophy using it this way. When I'm using my imagination, it will go backwards. When I focus on one sense, it stays in a sector. When I'm focusing my feelings, there's a stripe in the back which takes the focus.

I believe that perhaps the white matter of the human brain won't broadcast from the entire brain *to* the entire brain all at once. I think there is a bit of a filter which can change shape, speed, and position, which will mix up the sharing of information according to the works of inner politics and habitual intelligence. In this way, a robo-wife brainwave wouldn't be the same as a human brainwave; but it would be close enough. Upgrading her hardware would just allow her to cycle around faster without skipping focus.

Since I was a boy, I was always concerned with breaking down reality into the most efficient symbols I can manage using alchemy. Doing so gives me massive leverage to process reality. I hit some roadblocks in my mid-20's, but some mind-expanding psychedelics temporarily gave me enough room in my head to break free and gain superhuman understandings. The process of alchemy takes a lot of 'room' to work with, but ultimately creates more room when you're done; turning a mind into an efficient and well-oiled machine. For example, when I imagine a coin spinning, part of me pretends to be a coin; adopting the material properties of the coin (stiff, low-friction/metallic, resonant, rough edges, dense), and then gets to spinning helplessly in a mental physics sandbox. The symbols that I rely upon to do the spinning are gravity, angular momentum (conserved), energy (conserved), air resistance, and equilibrium. Gravity pulls down, but the more flat the coin goes, the less it's spinning; it can't spin less without dissipating angular momentum/energy into the air, or into the table. Therefore the system is at a sort of dynamic equilibrium until it dissipates everything and falls flat. I am not pulling up a memory of a spinning coin; I am generating a new, unique experience.

If we want to build a robowife, we must take inspiration from nature. *I* want a robowife who is capable of some part-time philosophy like me; a sorceress. Nature never made one for me, for reasons that fill me with bitterness and disgust. It occurs to me that a well-alchemized brain stripped/partitioned away from any objective memories of me may make a decent base for a robowife for Anon in general, and I may recruit Anon's help in my ambitions if I can make enough progress to show that I'm not just living in a fantasy. I've got a lot of money; as tends to be with men who have incredibly long and/or accurate foresight radii. I can and WILL experiment with building a wife. I have a ton of ideas which I have filtered through extreme scrutiny, and I feel that I've nearly fleshed out a path clear enough for me to walk. The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music.
I'm going to write up a sensory parcel that contains the following attention fountains: 2 input neurons that fluctuate up to 20 kHz to play the raw sound data, and 120-480 input neurons (per track) which will fluctuate with the Fourier transform of the raw inputs (this is pitch sensation). I will give this stripe access to neurons which can wait for a set time, according to a base wait period they're set with, multiplied by their input. I expect the consciousness to erect some kind of fractal clock construct, which the alchemist will reward for its ability to correctly expect polyrhythm.
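A minimal sketch of the proposed pitch-sensation inputs (the frame size, sample rate, and a 128-bin bank are illustrative assumptions of mine, within the post's 120-480 range): one frame of raw audio mapped onto a bank of magnitude bins via an FFT, which is what those per-track input neurons would fluctuate with.

```python
import numpy as np

sample_rate = 44_100
t = np.arange(2048) / sample_rate
frame = np.sin(2 * np.pi * 440.0 * t)          # stand-in for one mic frame

spectrum = np.abs(np.fft.rfft(frame))          # magnitude per frequency bin
bins = spectrum[:1024].reshape(128, -1).mean(axis=1)   # pool to 128 "neurons"
peak_hz = np.argmax(spectrum) * sample_rate / len(frame)
print(peak_hz)   # close to 440 Hz (limited by the frame's frequency resolution)
```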
>>34386
Pretty wild stuff Anon. I've gone down that route you describe to some degree, and can confirm it -- with a large dose of skepticism learned outside of that world.
>tl;dr
Staying clean & natural is certainly the better option. Just using some combination of caffeine/chocolate (or simply raw cacao)/vitamin B12 should be ample stimulation when that boost is needed, IMO. :^)
>The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music. [...]
Sounds awesome, I like it Anon. I too love dance and rhythm, and just music in general. I envision that our robowaifus will not only eventually embody entire ensemble repertoires at the drop of a hat, but will also be a walking lightshow to top it all off.
>tl;dr
<"Not only can she sing & dance, but she can act as well!" :D
<--->
>pic
I love that one. If we can ever get boomers onboard with robowaifus, then we'll be home free. Cheers, Anon. :^)
