/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? we need to change this, or at least collect relevant philosophers. discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc!) also highly encouraged! Ill start ^^! so the philosophers i know that take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. his main contribution here is computational Kantianism. just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. an interesting view of his is that Kant actually employed a logic that was far ahead of his time (you basically need a sophisticated type theory with sheaves to properly formalize it). other than that he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: he has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - another Neo-Rationalist. he has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". it's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. this guy draws from Kant as well, but he also builds on Hegel's ideas. his central thesis is that Hegel's Geist is basically a distributed intelligence. he also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom

Recc: Ray Brassier (recent focuses) - I dont think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule following, is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for truly strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses with a colleague is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius however is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff. I made a thread on /lit/ and got no responses :( (which isn't surprising since I am the only person I know who is really into this stuff)
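the Markovian point can be made concrete with a toy sketch (purely illustrative, not from Smith's paper; the canned replies and function names are made up): a Markov reply policy conditions only on the last utterance, so a question about an earlier turn is unanswerable in principle, while a policy over the full history can answer it.

```python
# Toy contrast between a Markov reply policy and a history-dependent one.
# Everything here is invented for illustration.

def markov_reply(last_utterance):
    # state = last utterance only; earlier turns are unrecoverable
    canned = {"hi": "hello", "hello": "how are you?"}
    return canned.get(last_utterance, "...")

def history_reply(history):
    # non-Markovian: the reply may depend on arbitrarily old turns
    if history and history[-1] == "what did I say first?":
        return history[0]
    return markov_reply(history[-1]) if history else "hi"

history = ["hi", "hello", "what did I say first?"]
print(history_reply(history))     # "hi" -- needs the full history
print(markov_reply(history[-1]))  # "..." -- last utterance alone can't answer
```

the point of the sketch is only that the information needed for a coherent reply can live arbitrarily far back in the dialogue, which a fixed-order Markov state cannot retain.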
>>17243 There's no consensus about the concept of qualia and whether it even exists. Daniel Dennett: https://youtu.be/eSaEjLZIDqc Btw, we don't need AGI for robowaifus. Human-level intelligence, specialized to certain areas of expertise, would be enough. Probably even less than that would be sufficient. We don't really have the problem of needing to implement some fancy definition of consciousness: It's a layer which gets the high-level view of the world and gets to make the decisions. But it can't change the underlying system (for security reasons). And it might not even be allowed to look deeply into what's going on everywhere; it certainly doesn't get flooded with all the details all the time. Problem solved, I guess. I will refer to that part as consciousness.
>>17255 no there is a strong consensus, its only one (1) group of people that are schizophrenic about it because they want to play both sides in bad faith, in the same way the argument for retard trannies is made: theres no such thing as a man or woman, except when a man says hes a woman, because then the man is not a real man and is instead a real woman, but there is no such thing as man or woman unless its a man saying hes a woman, but there is no such thing as a man or a woman unless its a woman saying shes a man, etc. this duplicity is the new vogue. just read how deceptive the clown is writing
>This subjective experience is often called consciousness
its not, a subjective experience is called qualia. what idiocy is this, the idiot just wants an easy way to infer consciousness without actually saying qualia, because he knows qualia is a deathblow when openly declared. qualia arguments like 'give her the d' are the easiest and soundest arguments to infer consciousness, the clown is even using 'give her the d'. but qualia is the achilles heel of materialism for the same reason: all the crippling arguments against materialism are based on qualia, because qualia is incompatible only with materialism, because its fucking irreducibly subjective, hello, its antithetical to their pretentious objectivism and adherence to truth being a physical absolute. neurosoyentists are supposed to reject qualia and never speak of it, which is fine, they have no choice, thats how they get rid of the gap in knowledge, ie. there is no gap because that knowledge (subjective experience, ie. qualia) doesnt exist, its epiphenomenal. but they cant then use it to make arguments for consciousness. a man is not a woman, you cant just redefine words. its true robots dont need consciousness and theres no point in a real ai, but that doesnt mean the definition of consciousness changes or you get to call your junk pile conscious. old chink once said "when words lose their meaning people lose their freedom", prophetic words considering the state of the world
all these ideas of what consciousness is or isnt. I agree with anon that for our purposes it isn't necessarily necessary unless the end user has a deep need for it. as I stated I'm kind of a panpsychist in regard to this, however if we want to pin "consciousness" down to a phenomenon, my personal theory after hundreds of hours of study, reading, meditating on it, and so on, is that it is the result of a fractally recursive phenomenon. IMO this recursion is inherent to the structure of our neurons (which operate on 11 "dimensions" of order) https://www.sciencealert.com/science-discovers-human-brain-works-up-to-11-dimensions
Consider the idea of your map becoming more and more akin to the territory: becoming larger, relief mountains built with real rock and trees, lakes filled with actual water, etc., and then that map containing a facsimile of a person looking at a facsimile of the map, which in turn contains an even lower fidelity copy of the map, again and again. IMO this uncanny process of reflection unto reflection (like the reflecting silver spheres arranged in a grid: Indra's web of Hindu Mythology) is where consciousness arises
>>17262 >inherent to the structure rather, is inherent *in* the structure there is no reason our neurons should have the monopoly on this they are not made of "magic" matter (pursuant to my counterargument of the above "only brains can have consciousness")
>>17262 is broccoli conscious then, or something thats not even alive like a piece of bismuth or a snowflake? recursion is the natural way things grow, its not that special, its just mathematically impressive. the only difference between a brain and coral or fungus or sponges or anything with a brain-like structure is neurons. youre right though, theres nothing really special about the brain, its just a part of nature and will never be anything greater than nature. the mind isnt though. its only materialists that say brain=mind, which forces them to either make the brain appear greater than what it is or debase the mind into something insultingly simple, i mean you could literally say materialism is a simple-minded dogma. i was only using neurosoyence as arguendo for claiming ai, keyword being artificial, i could just have a kid with someone or use whatever natural process possible to make consciousness manifest, but then its not artificial is it, its just intelligence
>>17265 its not recursive, recursion is a process. nice fractal though
>>17265 IMO "mind" is a process that isn't contained in the brain like gas in a box. It's something that operates on a nonphysical "dimension", and this is my own speculation (but others probably have and will touch on these points). Remember also that we are "simulated" in each other's minds, as in Indra's Web; how we're "simulated" in the minds of others factors into how we are treated by them, and becomes part of our feedback loop, so the whole process is continually recursive. The "fractal" part is harder to explain, but when you think of "self similarity" and the relationship of maps to territories (and maps so accurate they contain themselves in the map, ad infinitum) then you're getting the idea.
>>17270 Greg Egan touches on this in his short story A Kidnapping (Axiomatic, 1995). Going to spoil it: The main character is being blackmailed by someone holding a "simulation" of his wife hostage, who will torture her unless he pays ransom. Since his wife was never scanned he is puzzled how this could be, until he realizes somebody had hacked his own scan, and from his own simulation extrapolated enough of his wife to simulate her
>>17270 it just sounds like normal dualism now, as in 'material and immaterial are unrelated but interconnect'. i dont know what you mean by simulation, dualism has way more logic involved than materialism. for simulating people you have to give me a theseus ship response, because everyone uses different laws of identity, although i think youre really just talking about simple knowledge. dualists see knowledge and truths as objects that are attached to things, so your knowledge about someone (obtained truth of x) is as much a part of you as it is of them (truth of x). showing ai in dualism is too hard if not impossible though, if you can do it then you have knowledge on a level that completely dwarfs ai in comparison, its in every sense of the word an otherworldly understanding
>>17271 reminds me of a movie with the same story, took me a while to find the name, The Thirteenth Floor (1999)
>>17096
>there is consciousness and we can construct it, this is a defacto materialist perspective
ok
>obviously constructing consciousness is therefore equivalent to constructing the brain
only if you are a mind-brain identity theorist. no one is a mind-brain identity theorist anymore (except someone like searle, who every ai-nerd hates). every materialist is a functionalist now, so they think the brain is a computer and consciousness=software
>again im just using their own bullshit premises
the above is their position, you are not actually using their premises
>more formally;
this wasn't a valid argument. try proving it with just those premises and you will see the conclusion won't follow. i don't think this is too big of a deal though because i understand what you were trying to say anyways. just a heads up
>>17122
>it cannot possibly have a will if it has been programed
i would agree with you if by programmed you mean that all of its behaviours are preprogrammed. i would just like to point out that having any sort of programming at all does not preclude having a will. there is possibly some argument that humans have a few pre-programmed instincts. what separates humans from an animal just acting on instinct is that they are able to learn new things and slowly develop behaviours that aren't pre-programmed. whether an agi inspired by current understandings of intelligence can do this is another story. personally, i seriously doubt it can. i think this article presents a good argument against such a computationalist approach to consciousness: https://arxiv.org/abs/2106.15515
>>17239
>Sir Roger Penrose seems to think that consciousness is not computable...
penrose thinks that wave function collapse is the key to consciousness, and thus seems to believe quantum computing might work. i actually believe the problem he has brought up might be far too profound to be solved by a new paradigm of computing.
i am even skeptical of the idea that hypercomputation would solve anything either. i think all of these approaches to computation probably aren't going to help. what i have in mind is far more radical
>That way, we aren't re-inventing the wheel
eh, you don't really have much control over how your waifu looks or its size this way (unless we develop really powerful new technology in synthetic biology, however i think by the time such technology arrives we would already be close to artificial life)
>>17241 AST has already been brought up earlier in this thread. you can find some criticisms ive written of it there
>>17251
>consciousness as data compression
i've heard of this idea before. there is some truth to it. something that conscious beings do is simplify phenomena down to patterns. however, i do not believe that our current approaches to ai are actually as good at detecting novel patterns as we'd like to think. the article i posted above by kauffman details this issue
>>17252 i think you are acting too violently upset about this anon. i definitely do think your criticism has some truth to it, and it actually touches on a problem chomsky has highlighted in large language models: they do not actually try constructing a real theory about how a language operates. their theory basically just is "anything goes". this article talks more about this: https://garymarcus.substack.com/p/noam-chomsky-and-gpt-3
>>17255
>Btw, we don't need AGI for robowaifus
possibly. it is a personal preference of mine
>>17364
>a subjective experience is called qualia
i dont think you understand what dennett is saying, however i cannot fault you because most people don't. actually for a long time, i just thought he was speaking nonsense till i heard an illusionist explain their position. needless to say, the whole "qualia doesn't exist" shtick is very misleading. what they mean is that they have problems with at least one of the following statements in this very specific list of ideas about it (https://en.wikipedia.org/wiki/Qualia#Definitions):
1. ineffable – they cannot be communicated, or apprehended by any means other than direct experience.
2. intrinsic – they are non-relational properties, which do not change depending on the experience's relation to other things.
3. private – all interpersonal comparisons of qualia are systematically impossible.
4. directly or immediately apprehensible by consciousness – to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.
note that this definition is not even accepted by all non-materialists. there are even plenty of idealists (such as hegel, and likely the british idealists) who do not accept (2) because they think all qualities are relational. for hegelians, qualia like redness are concrete universals... why is this important? because since people like dennett don't agree with this definition, they want to throw the whole concept away
>'give her the d'
this is a weird example. please explain what you meant
>because that knowledge(subjective experience ie. qualia) doesnt exist
there are some materialists who actually say that this knowledge does exist, but they bring up the ability hypothesis (https://plato.stanford.edu/entries/qualia-knowledge/#NoPropKnow1AbilHypo). i do think there is some truth to this idea, but i feel as though they lack the proper metaphysics to understand why
>you cant just redefine words
idk why you think this.
it is a very common thing people do in conceptual analysis. if our concepts are bad, we probably need to define new words or use old words in new ways. an example of this is with continuity. mathematicians have defined the concept of continuity around open sets and limits. bergson meanwhile believed that such a definition is unable to properly capture temporal continuity. this led him to give a new definition of it. in the process of doing this, he also ended up using the term "duration" in a different way, as a contrasting term between his theory of time and the physicist's understanding of time based on abstract mathematical continuity. this is fine as long as people are explicit about using their terms in a different way
>>17270 the brain dimensions thing doesn't really imply a nonphysical "dimension", unless you are reifying the structure of the brain like an aristotelian
>>17273
>so your knowledge about someone(obtained truth of x) is as much a part of you as it is of them(truth of x)
i am guessing what you mean by this is an externalist theory of truth. i dont think this is a view that only dualists have though, nor is it a view that all dualists have. i don't think externalism permits the idea that the material and immaterial are completely unrelated. from what you have said, it follows that the knowledge of a thing is as much a part of you as of the thing. but this implies that it is possible that the thing is contained within the mind. the conclusion of this would be something like idealism, not dualism... note that as i have pointed out earlier, idealism does not mean that there are no external objects btw
>>17365
>like an aristotelian
a hylomorphist to be more precise. of course aristotle is more about there being an immaterial intellect and stuff
>>17080 okay, time to summarize chapter 2. honestly it is a much faster read than chapter 1, but it relies more on technical details that a simple summary would betray. part of the reason it took me time was because of meditating on some of these details (as all of this is going to be basically the foundations of the rest of the book, it makes sense to spend extra time on it), but it is also because i have just been procrastinating/distracted by other things... anyways here is a basic summary of some key points to take from this chapter:
1) negarestani suggests a basic framework for approaching AGI which he calls the AS-AI-TP framework. keep in mind that his model of AI is capable of discursive interaction with other agents. this is stratified between different levels.
>(i) the first level is just basic speech. such a thing is crucial since we need to interact with other agents somehow
>(ii) the second level is dealing with the intersubjective aspect of speech involved in conversation. i personally suspect that grammar might emerge at this stage
>(iii) the final level involves context-sensitive reasoning, and reaching higher levels of semantic complexity (i am guessing what he is hinting at here is functional integration)
one thing i am unsure of is whether stage 2 and stage 3 can actually be thought of as separate stages, because it seems like what we see in stage 3 could naturally emerge from stage 2. such an emergence clearly wouldn't happen with stage 2 from stage 1... the framework by its name also separates out three different projects that are important for these three stages:
>(i) AS, which corresponds to the construction of artificial speech synthesis. this one is special because it largely only concerns stage one
>(ii) AI, which corresponds to the project of artificial intelligence
>(iii) TP, which corresponds to the project of finding the general conditions for the possibility of a general intelligence.
negarestani of course sees kant's transcendental psychology as the beginning of such a project
2) he makes an extensive criticism of the methodological foundations of the disconnection thesis. this is basically the idea that future intelligence could diverge so far from our own that we might not even be able to recognize it as intelligent. among other problems he has with this view, i think the most important one to extract is that if such an entity has truly diverged so far from our own intelligence, it is a mystery why we should even consider it intelligent. because of this, negarestani wants to stress the importance of certain necessary functions being implemented by a system (what he calls functional mirroring) over the structural divergences that might occur... this functional mirroring partly arises when we have a determinate conception of how geist's self-transformations should take place generally
3) by bringing up functional mirroring, we bring up the question of what conditions are absolutely necessary for us to call something sapient. negarestani terms soft parochialism the mistake of reifying certain contingent features of an intelligence into necessary ones. the purpose of transcendental psychology is to purify our understanding of general intelligence so that we only include the features that we absolutely need
4) he writes a basic list of the transcendental structures. i already described them here: >>11465... negarestani also remarks that transcendental structures can also be used to articulate ways in which more complex faculties can be developed (i believe he gropes a little toward how in his discussion on chu spaces)
5) based on all of this, negarestani also motivates constructing a toy model of AGI which he frames as a proper outside view of ourselves. a toy model has a twofold utility. first, it provides something which is simple enough for tinkering. second, it makes explicit meta-theoretical assumptions.
this latter point is important because sometimes we might be imposing certain subjective self-valuations of our experiences onto our attempts at describing the capacities we objectively have. the construction of a toy model helps avoid this problem
6) something negarestani furthermore calls for is a synchronization between the concepts that have been produced by transcendental psychologists like kant and hegel, and cognitive science. he notes that in this process of relating these concepts to cognitive science, some of them may turn out untenable. i think there is actually also a flip side to this. by seeing how cognitive science recapitulates ideas in german idealism, we are also able to locate regions where it may recapitulate the same errors as well
>>17439 7) negarestani outlines two languages for the formalization of his toy model. the first is chu spaces, which are basically a language for concurrent computation. he links an interesting article ( https://www.newdualism.org/papers/V.Pratt/ratmech.pdf ) which relates concurrent computation to the mind-body problem. it does this by basically framing mind-body dualism as a duality instead. the scheme is as follows: events correspond to the activities of bodies, while states correspond to mental states. the function of events is to progress a system forward, while the function of a mental state is to keep tabs on previous events. the key idea for pratt is that the interaction between event and state is actually simpler than event-event or state-state interactions. 'a⫤x' basically means that event 'a' impressed on mental state 'x'. meanwhile, 'x⊨a' means that with the mental state 'x' the mind can infer 'a'... transitions from event to event or state to state are much more complicated. they require the rules of left and right residuation. the basic idea of these is that for us to transition from state x to state y, we need to make sure that from y we are able to infer all the same events that had occurred as of state x. these residuation rules seem to be important for making sure that there is proper concurrency in the progression of states... negarestani also seems to be hinting at the idea that with the help of chu transforms we can see how chu spaces may accommodate additional chu spaces in order to model more complex forms of interaction...
the benefits of chu spaces:
(i) provides a framework that accommodates the kantian distinction between "sensings" (which he corresponds to causal relations) and "thinkings" (which he corresponds to norms)
(ii) since state-event, event-event, and state-state interactions are all treated as different forms of computation, we are able to be more fine-grained in our analysis of the general form of thinking, beyond just calling it something like "pattern-recognition" or "information processing"
(iii) in doing what was just mentioned, we also avoid shallow functionalism
(iv) allows us to model behaviours as concurrent interactions
the second language he wants to use is that of virtual machine functionalism (you can read about it here: https://www.cs.bham.ac.uk/research/projects/cogaff/sloman-chrisley-jcs03.pdf ). the basic idea here is to introduce virtual machines into our understanding of mind. these basically correspond to levels of description that are beyond that of physics. VMF is distinguished from a view called atomic functionalism in that the latter just treats the mind as a simple input-output machine. meanwhile, in VMF, we can talk about various interlocking virtual machines that operate at different functional hierarchies... the differentiation between different scales and descriptive levels is really the main benefit of this approach. it allows us to avoid an ontology that is either purely top-down or bottom-up. i think this is actually a really important point. another important point here is that by looking at the interaction between different scales we are able to describe important processes such as the extraction of perceptual invariants... VMF seems immediately fruitful to some of the AGI modelling concerns... Chu spaces less so, though i have reasons to still return to this idea with a closer look
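pratt's "states keep tabs on events" intuition above can be caricatured in a few lines. this is not pratt's formalism, just a hedged sketch: the entailment table and state/event names are invented, and the residuation rules are reduced to the one condition described above (a transition x → y is allowed only if y still infers everything x did).

```python
# Illustrative sketch of the state/event picture described above.
# entails[state] = the set of events inferable ('⊨') from that state.
events = {"a", "b", "c"}
entails = {"x": {"a"}, "y": {"a", "b"}, "z": {"b"}}

def can_transition(x, y):
    # caricature of the residuation condition: from y we must be able to
    # infer all the same events that had occurred as of state x
    return entails[x] <= entails[y]

assert can_transition("x", "y")      # y still accounts for event 'a'
assert not can_transition("y", "z")  # z has "forgotten" event 'a'
```

the subset check is of course a drastic simplification of left/right residuation, but it shows the direction of the constraint: states may only progress in ways that preserve the record of past events.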
>>17440
>VMF seems immediately fruitful to some of the AGI modelling concerns... Chu spaces less so, though i have reasons to still return to this idea with a closer look
Chu spaces also seem interesting. I hadn't heard of them before, but they seem like a good way to represent objects in spaces. For example, objects in some input image can be represented by a Chu space. So to look for an object in some image, instead of calculating p(object | image) for each location, you would transform the image into K and check K(object, location). I guess the benefit is that everything about a Chu space can be represented by a single function, K, which gives you a unified way to handle a lot of different kinds of structures and the interactions between them. I think most observations can be treated as an object in a space (e.g., when you're observing the color of a shirt, you're looking at a shirt in a colorspace), so it seems very general. Chu spaces also seem closely related to tensors, so I would guess that there's an easy path to a lot of numerical machinery for anything written as a Chu space.
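a minimal sketch of the K(object, location) idea: a Chu space is just a set of points, a set of states, and a valuation K into some alphabet (here {0, 1}). the objects, locations, and matrix values below are invented for illustration, not taken from any real vision system.

```python
# Toy Chu space over Sigma = {0, 1}: K answers "does this object occupy
# this location?" by a single table lookup instead of a per-location score.

class ChuSpace:
    def __init__(self, points, states, k):
        self.points = points  # e.g. objects in the image
        self.states = states  # e.g. locations
        self.k = k            # k: (point, state) -> value in Sigma

    def K(self, point, state):
        return self.k[(point, state)]

# invented "image" with two objects and two locations
points = ["cup", "ball"]
states = ["left", "right"]
k = {("cup", "left"): 1, ("cup", "right"): 0,
     ("ball", "left"): 0, ("ball", "right"): 1}
img = ChuSpace(points, states, k)

assert img.K("cup", "left") == 1   # "is there a cup on the left?"
assert img.K("ball", "left") == 0
```

the dict here is exactly the tensor-like view mentioned above: K is a |points| × |states| matrix, so any numerical machinery for matrices applies directly.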
>>17442
>Chu spaces also seem closely related to tensors, so I would guess that there's an easy path to a lot of numerical machinery for anything written as a Chu space
ah, that might be true. that is a benefit of thinking about mathematical structures as computations/behaviours: they become easier to systematize
>>17440 onto chapter 3 (i actually read both this and chapter 4, so i will try to summarize both today). there is more philosophically tricky stuff here that i take issue with, but i am not sure whether i should write about it or not. anyways:
1) the main goal of this chapter is to articulate the features that are needed for sentience (something shared by basically all animals), and then to articulate the contrast between this and sapience (more pertaining to rational capacities that seem to be unique to humans). negarestani articulates the features of sentience by thinking about a hypothetical sentient automaton with the following features:
>(i) self-maintenance goals (so likely something like omohundro drives involved here)
>(ii) the capacity to follow these goals. this can be thought of in terms of the multiple layers of interaction already discussed in chapter 2. he also talks about a "global workspace", which pertains to a property of the system where information in particular subsystems is globally accessible to other subsystems. one way we can think about this is as a virtual machine, as opposed to some physically composed central processing unit. what we really care about is the functional capacity for global access. this points to a subtle difference from the "global workspace" articulated in global workspace theory
>(iii) there should be sensory integration of different sensory modalities. for instance, instead of just perceiving sight, also perceiving sight and sound integrated into a more unified experience.
the utility of this is that it reduces the degree of ambiguity present in our perceptions
>(iv) a (re)constructive memory which synthesizes external representations with an internal model of the world, and also a predicted internal model of what will happen in the future (based on the agent's actions)
2) the author goes into a rather subtle point about some of the conceptual difficulties with his approach. in trying to conceptualize this automaton, we will try to give a story of what it sees and contrast it with a story of what the sapience sees. a major caveat here is that in articulating the story of the automaton we will still reference concepts that the sentience would not strictly recognize (e.g. concepts of redness... really, what this sentience sees would, to negarestani, just consist in tropes). negarestani still thinks that there is a virtuous circularity here, in so far as our concepts do make explicit certain structures in "unthinking" cognition (a claim that would likely be supported by cognitive linguists). as such we might not just be chasing around a discursive illusion. he thinks there is a chain of as-ifs between sentience and sapience. when i first read this claim, i got a bit confused, as his whole functionalism is already taken as an as-if. i think the as-if structure here is more about conceptual incommensurability being partially elided for the sake of exposition... perhaps this caveat would not need to be made so explicit if he started with an explanation that what we really mean are tropes (thus, for instance, we shall talk about the trope 'schmed' as opposed to red), though this might be a little bit annoying to articulate, if it is possible to articulate at all
3) the main reason why negarestani takes so much care to point out this caveat and explain why it is not so problematic is in order to respond to the "greedy skeptic". he has in mind here the philosopher bakker, who is behind blind brain theory amongst other things.
their main strategy is to try and claim that our understanding of intentionality is produced by a cognitive blindness/discursive illusion. a major problem with this approach is that it is so greedy that it ends up undermining its own foundations, for the blind brain theorist must accept that they themselves are blind. another issue negarestani has with the greedy skeptic is that it elides the distinction between the space of reasons and the space of causes
4) finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated. now, i personally disagree that this distinction is one that actually separates sentience from sapience. just from the basic features, especially feature 4, there seems to be some conceptual mediation entailed there. i think ecological psychology would likely disagree with the idea that sentiences don't perceive concepts as well... it doesn't make for the best theory of perception. negarestani later on even caves a bit and accepts the fact that our spatial perspectives are already an example of seeing 2... for more on this debate, check out the article 'Articulating Animals: Animals and Implicit Inferences in Brandom's Work'. i think the metaphysics of tropes might be related to what is at stake as well... a lot of interesting stuff to think about
>>17520 note: negarestani also says that seeing 1 is seeing of, while seeing 2 is seeing as. anyways... 5) now, we start to think about the chain of as-ifs. negarestani claims not only that seeing 1 isn't conceptual, but furthermore that we can't actually see individuated objects with seeing 1 either (so for instance, we don't see cups but instead some manifold of sensations in a cup-shaped spatiotemporal boundary)... needless to say i disagree that sentiences can't perceive objects as well... anyways something that he likes about kant is that he articulates perception as involving multiple layers. thus perception is about the integration of a collection of algorithms. kant talks about three syntheses that would be all requisite for artificial general intelligence: >(i) the synthesis of apprehension systematizes sense impressions so that they have a spatial and temporal location. i think what may be important here might be the idea of simultaneity. like, whenever we perceive something, it appears as though all of the sensations we see are all located within a single moment >(ii) the synthesis of reproduction relates appearances temporally, which constructs stable representations of items. note here that items are different from objects. items are what negarestani wants to understand as things that we see in seeing 1. i think this idea is somewhat strange however, since if we start talking about systematically interrelated/stable items, they are not just disconnected tropes (but perhaps something more sophisticated). perhaps he is fine with this as long as the connection here is synthetic/normative. i need to study sellars's theory of tropes more closely here... >(iii) usage of concepts to synthesize representations (seeing of) into objects note here that (i) and (ii) are together the "figurative" part of our perception.
they provide us with some systematicity in our raw materials that will be brought together to form the perception of objects 6) negarestani furthermore thinks that the mesoscopic level of analysis (not just top-down or bottom-up) already talked about with chu spaces and virtual machine functionalism can be used to understand these syntheses. this idea is also connected to putnam's liberal functionalism which he claims entails a picture of organisms as systems only in so far as they are engaged in interactions with the environment. i may say further that there are different possibly nested systems of interaction 7) he tries connecting predictive processing to the figurative synthesis mentioned above as well... an important part of this lies in priors which are probabilistic constraints that permit a system to generate hypotheses. not only are there normal priors, but also hyperpriors which are priors upon priors. these can be used to help differentiate between different levels of hypothesis. there is an analogy made here between priors and kant's forms of intuition. more can be read about this in the article titled 'The Predictive Processing Paradigm Has Roots in Kant'. something else about predictive processing is that incoming inputs are always related to a pre-existing representational repertoire. we can notice a parallel with negarestani's earlier talk about a (re)constructive memory... there are also a lot of articles given on how to formalize this stuff that might be interesting further reading: >'Evolutive Systems: Hierarchy, Emergence, Cognition' >'A New Foundation for Representation in Cognitive and Brain Science' >'Colimits in Memory: Category Theory and Neural Systems' 8) negarestani goes on to talk about a particular way of formalizing how predictive models are applied. this is by means of colimits which is rather technical.
basically what is going on here is that you can use a colimit diagram to model the integration of neuron clusters (these clusters are interpreted as categories where i am guessing for instance that the objects are neurons and the morphisms are synaptic connections)... we can also define a macroscopic category Ment as the colimit of neuron clusters which exist at the highest level of integration 9) there are however two limits of this colimit methodology pointed out: >(i) it is just focused on the construction of higher levels out of modules. it can't deal with synaptic pruning which negarestani thinks is important for unlearning >(ii) category theory relies on commutativity and works best with symmetry and synchronous processing. these are constraints that not all physical processes or even neural tasks conform to. negarestani thinks limits are important things to consider so we know when it is appropriate to apply a model. the predictive processing paradigm is also limited to him as it is unable to ground the plurality of methods and the semantic dimension behind the construction of scientific theories. this dimension is not merely grounded in our basic intuitions for otherwise we would be unable to break out of them. furthermore, hyperpriors need not be assumed to be properties of objective reality for supposing such a thing would require us to explain why modern physics often goes against our intuitions
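to make the colimit idea above a bit more concrete, here is a toy sketch (the construction and all names are mine, not from the book): a colimit of a diagram of finite sets can be computed as a disjoint union quotiented by the identifications the diagram's maps impose, which is just union-find. think of each set as a "neuron cluster" and each map as saying which elements get integrated together.

```python
# toy colimit of finite sets via union-find (illustration only).
# sets: dict name -> list of elements ("neuron clusters")
# maps: list of (src, dst, dict) identifying src elements with dst elements

def colimit(sets, maps):
    # disjoint union: tag every element with its cluster name
    nodes = [(name, x) for name, elems in sets.items() for x in elems]
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    def union(a, b):
        parent[find(a)] = find(b)

    # quotient: glue along every map in the diagram
    for src, dst, f in maps:
        for x, y in f.items():
            union((src, x), (dst, y))

    classes = {}
    for n in nodes:
        classes.setdefault(find(n), set()).add(n)
    return [frozenset(c) for c in classes.values()]

# two clusters sharing one "neuron" via a connecting map
clusters = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
glue = [("A", "B", {"a2": "b1"})]  # a2 and b1 are identified
print(sorted(len(c) for c in colimit(clusters, glue)))  # -> [1, 1, 2]
```

note how this also makes limit (i) above visible: the construction can only ever glue things together, so something like synaptic pruning (unlearning) has no natural expression in it.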
>>17521 now we enter territory that i already foreshadowed earlier: 10) even this merely sentient awareness needs to have a 'perspectival stance' in so far as it must differentially respond to an environment in a way that differentiates it from prey (otherwise it might consume itself or something strange like that). the range of spatial relations the predator can maintain, for negarestani, is limited. it considers spatial relationships between itself and its goal ('endocentric' view), and to a more limited extent the relation between items in the environment ('exocentric' view). as said before, this endocentric view is acknowledged by negarestani to already involve a spatial sort of seeing as... needless to say i find this development rather upsetting, as negarestani seems to be entering into a sort of classificatory contradiction... the author also mentions that the automaton should furthermore (on top of/connected to this spatial perspective) have a naive physics of space. he references here claude vandeloise's work 'spatial prepositions' that contains work on this. ultimately this naive physics serves as a cognitive scaffold for more advanced spatialized concepts (again we can see some possible connections to cognitive linguistics) 11) there is also a temporal perspective which relies on the following successive progression of capacities: >(i) the capacity to synthesize sensations together into one simultaneous moment >(ii) bringing these moments of simultaneity together into a successive sequence of states >(iii) the ability to have awareness of these successions of states as successions (more precisely, the capacity to make them explicit) 12) to articulate time awareness we must start with a basic distinction between two ways of intuiting items: >(i) impression (awareness of an item that is copresent with the system... more related to something we just see) >(ii) reproduction (reproducing an impression in absence of the items...
more related to (re)constructive memory) 13) the approach above is not sufficient for time awareness for we must functionally differentiate past states from present states (just as we had to differentiate the automaton from the environment). to do this we must have meta-representation capacities (i.e. the capacity to represent a representation). it should be noted however that to negarestani this time awareness is not yet time consciousness as the automaton is not yet aware of the succession as a succession, and it thus doesn't have a self that can be furthermore "mobilized". this idea seems to connect to the idea of self-consciousness articulated in the phenomenology of spirit (which i also think is really concerned with "self-governance"). the stuff about meta-awareness is sort of weird to think about because negarestani thinks that the meta level also has impressions versus reproductions. thus what i think is going on here is a genuine meta-system transition 14) this stuff about meta-awareness is rather technical, and he makes comments about category theory and a possibly-infinite hierarchy of meta-awarenesses. what is really important is that the meta-awareness helps us functionally differentiate between impressions and reproductions, and thus the system can differentiate between 'is' and 'was'. something negarestani also introduces is the element of anticipation. if there is a functional differentiation between future predictions and the other two types of intuitions, we then have 'later'. if impression, reproduction, and anticipation are labelled be1, be2, and be3 respectively we can draw a larger diagram of the variety of first order meta-awarenesses (second pic rel) 15) negarestani claims that with perspectival intelligence we now have the necessary tools to start formulating an intelligence capable of creating new abilities out of old ones with the help of new encounters with the world by means of reasoning and 'systematically' responding to impressions.
i think this is still a bit of a strange distinction between sentience and sapience, but what he means by systematicity here is more specific. in particular, he thinks it involves two aspects: >(i) the ability to make truth-apt judgements about contents of experience >(ii) the ability to choose one story of the world as opposed to another. it may be debatable whether or not sentiences can actually be capable of such things... though an important piece of the puzzle is the element of socially mediated rationality. he also points out how time is an important structure to understand due to its place in transcendental psychology and transcendental logic... to sum up... i have plenty of problems here which are symptoms of a larger pathology of inferentialism in so far as it often differentiates sentience from sapience in a rather unnatural manner
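the be1/be2/be3 labelling from point 14 can be sketched mechanically (this is my own toy encoding, not negarestani's): a system that tags each intuition as impression (be1), reproduction (be2), or anticipation (be3) thereby functionally differentiates 'is', 'was', and 'later'.

```python
# toy sketch: meta-awareness as a representation of a representation's kind.
# tagging intuitions by kind is what lets the system tell is/was/later apart.

from dataclasses import dataclass

@dataclass
class Intuition:
    content: str
    kind: str  # "be1" impression, "be2" reproduction, "be3" anticipation

TENSE = {"be1": "is", "be2": "was", "be3": "later"}

def report(i: Intuition) -> str:
    # the meta-level act: classifying the first-order representation
    return f"{i.content} ({TENSE[i.kind]})"

stream = [Intuition("rain", "be1"), Intuition("rain", "be2"), Intuition("rain", "be3")]
print([report(i) for i in stream])  # -> ['rain (is)', 'rain (was)', 'rain (later)']
```

the point of the toy is just that the differentiation is purely functional: nothing about the content "rain" changes, only the role it is assigned.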
>>17522 next we talk about chapter 4. i think this one is a bit weird to summarize because he goes into a lot of examples and details which are sort of unnecessary for a concise summary of the text. i also feel as though this chapter's direction wasn't the best... he spends a lot of time arguing against time asymmetry + flow of time and how it is just a byproduct of our subjective biases, but then he goes on to talk about hegel's absolute knowing as taking a sort of atemporal vantage point. i don't think the connection between these two ideas is very deep. there is also stuff about metaphysics (which he basically ends up equating with hegel's dialectical logic) which could easily bog the reader down. the basic message of this chapter is just to give an example of a basic feature of negarestani's project put to practice. that feature is the fact that we need to progressively refine our understanding of intelligence and strip off all these contingent categories... near the end of the chapter he articulates three interconnected projects that sprout from what had been discussed: >(1) how to conceptualize entities that might have different models of time than our own >(2) is there a way to possibly model our own temporal perspective (like with a physical model) without trying to ascribe features such as the passage of time to all of reality? >(3) what could it mean both practically and theoretically for us to think about our own cognition as possessing a time consciousness that is not necessarily directional or dynamic? what could be gained as well from considering alternative time models or even an atemporal one the last point is what negarestani sees as related to taking a "view from nowhere". he states that in later chapters, the time-generality he is trying to articulate will correspond to plato's ideas (Knowledge, Truth, Beauty, and the Good). he wants to enter into a viewpoint that is time-general step by step.
i personally believe negarestani's understanding of Eternity might be a little bit flawed, because i do not believe it is related to time-generality per se. with that said, i do believe that the passages articulating his vision are very poetic/mystical and i urge people to read them for themselves (they are on pages 246-248)! there is an almost meditative practice involved here where we slowly enter into a timeless form of meta-consciousness. i am starting to understand why he considers himself a sort of platonist. maybe he is more closely allied with a more phenomenological understanding of plato, though by turning to functional analysis, we seem to have more structural generality accessible to us
>>17523 weird network traffic lately... anyways it's chapter 5 lol 1) as said before, negarestani thinks that we are unable to make veridical statements and judgements of one story as opposed to another yet. we also haven't been able to achieve a proper time consciousness (awarenesses are not distinguished as being particularly its own, this being associated with having a basic self-conception which serves as a sort of tool for modelling)... negarestani's basic solution to this comes in two steps: >(1) embed our automata into a larger multi-agent system >(2) give it a medium of communication (in the text, negarestani uses speech as an example) he further details why this solution is appropriate: really what we want is to instantiate an apperceptive 'I'. this is an 'I' that not only gets attached to all representations (I think X, I think Y, I think Z) but also persists as a stable self-identity over time (I think [X+Y+Z])... to ground this self, we really need to establish a self-relation, but this takes us back to hegel's account of self-consciousness as interlocked with a larger multi-agent system 2) our starting point in this multi-agent system is a CHILD (Concept Having Intelligence of Low Degree). this is a system mainly of habit and regularities. at the same time, thanks to the systematic (not per se veridical) manner it interacts with the environment, this intelligence features a system of transitions and obstructions to transitions from one awareness to another. this system is analogically posited as being primordial material inferences which have various properties. an important one is context sensitivity, which is described using the language of linear logic. this logic views formulas as resources that may only be consumed once. the bit about all of this being analogically posited is important as negarestani does not think this intelligence is actually using modal vocabularies.
as such it isn't able to actually conceive of causality either... what negarestani does not really do is tell us what the salient distinction is between observational/behavioural vocabularies and modal vocabularies. to understand, we really need to dig into what rosenberg has to say 3) in rosenberg's 'the thinking self', the CHILD is at the start "idealist" in the sense that it just treats its meta-awarenesses as identical with its world picture. (this is something that negarestani points out, but only treats as a separate issue from the problem of modal vocabularies in his exposition). this is problematic because for rosenberg, the distinction between mere regularities and actual laws comes down to the difference between appearance and reality. in order to draw this distinction we need to somehow decouple our world picture from our meta-awareness which will ultimately involve the constitution of an aperspectival world picture 4) as an aside, when we read rosenberg, he seems to hint at an influence from heidegger, which explains his own stand-offish relation to the idea that sentiences have conceptual capacities. in heidegger's system, the more primordial form of consciousness is readiness-to-hand which rosenberg understands as involving proto-intentions that are only understood relative to our understanding of what an animal is currently "up to"... i am not personally so sure that readiness-to-hand is actually empty of concepts... rather it is more just not involved in a certain epistemic... 5) back to negarestani... for the CHILD, to exit this naive idealist position, it must be able to have an awareness of the form 'I think A' and not just an awareness of the form 'A'. this requires the unity of apperception. apparently there was a part in chapter 3 which i think i skipped over that was actually important. this mentioned the fact that meta-awarenesses are really a "web" of equivalence relationships over awarenesses that have occurred over time.
the 'I' in 'I think X' is formally identical to the 'I' in 'I think [X+Y+Z]'. again, how this gets established depends upon self-relation 6) an important element in negarestani's starting point is that the parental sounds are in continuity with causal regularities perceived by the automata to be interesting. moreover, in this "space of shared recognition" between the CHILD and its parents there is also involved the CHILD's awarenesses, meta-awarenesses and transitions between awarenesses modelled by the parents (i.e. the automata's behaviours are being modelled by its parents) 7) negarestani goes on to articulate a series of decoherences and recoherences. what preconfigures this series is that the reports from the parents are habitually recognized by the automata and mapped onto its meta-awarenesses. these reports are consequently taken as meta-awarenesses which are labelled by their source. labelled because these meta-awarenesses are received from its parents as contrastable reports >first decoherence: what we often end up with are families of incompatible perspectives. what is noteworthy is how certain reports don't seem compatible with what the CHILD is immediately seeing. negarestani notes that this results in a world that is "proto-inferentially multi-perspectival" >first recoherence: the parental reports are now just taken as seemings relative to each parent, while what the automata sees are taken as objective. i personally think that this idea of parental reports being treated as mere seemings is an abstraction that can't actually have any utility.
"seeming" means nothing without any potential for objectivity >second decoherence: the problem now is that the parental reports should be taken as being able to be objective, even if contrastive with the automata's own awareness >second recoherence: our CHILD must construct a world picture of different perspectives now taken as partial world pictures. i think that with the first and second recoherences, there is involved the usage of contiguity with certain agents as a functionally salient decision mechanism
>>17567 8) what we have done in converting reports into partial world pictures is that we have now treated them as "logical forms" which can structure the CHILD's representations. these are used by the objective unity of apperception to synthesize objects which are equivalent to other objects in so far as they conform to the same rule. objectivity involves this and also veridicality. i suspect that this is what is behind negarestani's mysterious claim that mere sentiences do not perceive objects. to him, the idea of an object seems to be linked to veridicality. thus if sentiences do not differentiate between seeming and being, they cannot perceive objects 9) negarestani thinks that by the conversion to logical form and the dissociation of language from world, we are capable of conceiving of new world pictures. i am guessing this point is an elaboration on the solution he has to the problem he had with predictive processing proponents trying to use their framework to explain everything including scientific frameworks 10) the process above and education in general has as its pre-condition "practical autonomy" which is most generally characterized as the yearning of a CHILD to become a full-on sapient agent, and the tendency to make adults recognize such a yearning. so far we have had a multi-agent picture where each agent models its world picture as being composed of the partial world pictures of other agents (and furthermore recognized as belonging to those other agents) and furthermore excluding mere seemings recognized as belonging to other agents. in this process we have also recognized other agents as subjects. what we have elaborated in all of this is a space of recognitions.
within this space we can individuate the formal self as that which owns certain apprehendings and not others which can be owned by other selves 11) our movement from the space of recognitions to the formal self has as a condition here that the CHILD recognizes adults as subjects which are important for its self-actualization. this recognition is ultimately a manifestation of the CHILD's practical autonomy (more precisely this seems to be the process whereby it is actualized) 12) autonomy arises from a self-differentiation of the subject. this process permits it to recognize a world. furthermore, it opens up the automata to disintegration and reintegration (what we looked at before were examples of this). through the further recognition of the world afforded by this formal autonomy, the consciousness drifts which eventually permits it to consider things which are impersonal. this is how self-consciousness (think of the self-relation we have discussed before) comes into the picture. it is only through all of this that the CHILD becomes a "thinking will". negarestani's conception of thinking seems to be related to the process of recognizing what one is not and acting upon this recognition (how precisely the latter happens i am not completely sure of) 13) education involves the cultivation of autonomy: >(1) increasing range of normative capacities >(2) removing artificial limitations placed on the child about what it should become. i think this is more generally the process of learning and unlearning ought-to-dos (a process already mentioned by negarestani) 14) education also has various structural requirements: >(1) vertical and horizontal modularity (former is hierarchical, latter is flat).
these two kinds have their advantages and trade-offs >(2) should be able to involve the integration of various learning processes >(3) exploiting inductive biases (think prior knowledge for instance here) to help fashion more complex abilities 15) education is generally a process that leads to the expansion of an agent's range of practical capacities which are intelligible to it. this end product where boundaries on our abilities are pushed back is what negarestani calls "practical freedom" 16) in talking about education, negarestani also talks about two classes of capacities: sf and sm abilities. the former are formal while the latter involve higher levels of cognition. sf is syntactic while sm is semantic. he actually provides a whole catalogue of sf and sm abilities that i do not feel like summarizing so i will just provide a pdf excerpt... in this cataloguing, we see various structural hierarchies of abilities. for negarestani, what these structural hierarchies also indicate is the fact that different strategies need to be used in pedagogy based on the child's current experience and conceptual repertoire (e.g. breaking down a task into simpler ones, combining simpler tasks to make a more complex one) 17) the development of the automata from being a child to one that has a capacity to rethink its place in the world (i presume that this is related to revising one's oughts and ought-nots) ultimately requires a back and forth with adults
>>17568 chapter 6 now! 1) in the previous chapter we already presupposed that our automata had language. now it is time to explain how language emerges since it is an important pre-condition for general intelligence. we will reenact the development of language because it illustrates how language is an important ingredient in the development of more complex behaviours. the complexification of language and general intelligence come hand in hand 2) we can understand our main goal as really involving modulation of the collection of variables agents in our system make use of in interacting with the environment. if this is the case, then really our agents are interlocked with the environment as well 3) for the sake of our reenactment, we will assume now that all the automata in our system are now CHILDs. an important distinction needs to be made between pictures (sign-design) and conceptual objects (symbol-design). the former are representations of regularities in the environment (this forms a second-order isomorphism: on the first level we have signs that have some sort of resemblance in causal structure, and then we associate these signs together), while the latter are able to invoke combinatorial relations between symbols. while pictures are basically simple representations, symbols are primarily best understood by how they combine with other symbols (they do still have reference but it is rather secondary). they do not simply represent external objects but also each other. negarestani really stresses that while symbols are dependent on signs, they are not reducible to them. signs belong to the real order (which includes causal regularities and wiring diagrams... largely causal) while symbols belong to the logical order that is autonomous from the real one (this negarestani associates, following sellars, with thinking and intentionality) 4) negarestani goes on to elaborate on the basic pre-conditions for a sign...
in short we need the following: >(1) causal regularities need to be salient enough to catch the attention of an automaton so it may produce a sign of them >(2) we need enough complexity in our automata so they are able to recognize and make signs of these regularities from there we can sketch a basic example of a sign. let us suppose there are events Ei and Ej. their co-occurrence could be denoted by Ei-Ej. we would want to register this relationship in our wiring diagram by some isomorphic structure, for instance Ei*-Ej*. what we have here is an "icon". this is a sign that associates with its referent by resemblance. really what matters to negarestani here is that there is some stimuli discrimination going on. let us say when Ei*-Ej* occurs, the automaton makes a sound. if this sound is heard enough times by other automata, they can start to reproduce Ei*-Ej*. what we have here now is "just" an indexical sign, as it merely connects two events by way of statistical regularity. notice how negarestani is constructing a sort of hierarchy of signs (i.e. symbol > index > icon) where the higher rungs are to some extent built on top of the lower ones. this process where we transmit and receive indexical signs will be called communication (note negarestani does not simply think language is about communication but also interaction) 5) negarestani goes on to criticize the idea that we could just stick to picturing and never move to symbols... his first objection is that for every collection of signs, we would need signs representing their relationships. this would lead to a regress. this suggests we can't have an exhaustive picturing of the world. his second objection is that even if we could produce a complete picture of the world, the regress would produce exploding computational costs 6) symbols, unlike signs, stand in one-to-many and many-to-many relations. negarestani seems to imply that somehow this provides us a real solution to the regress problem.
another benefit of symbols is that they let us make explicit and develop the set of recognized relationships between patterns. there are also more computational reasons for introducing symbols. if we think about induction, there is of course solomonoff's universal induction which lets us make the most parsimonious hypotheses. the problem is that this requires an infinite time limit. as such, compression is not enough. we need to be selective about what regularities we single out, and after that to explore the relationships between these regularities. i would like to point out that ben goertzel (one of the most well known agi researchers) also considers this problem and has a similar solution in his own system (see for instance here: https://yewtu.be/watch?v=vA0Et1Mvrbk) ultimately, to achieve this (and more generally in order to achieve general intelligence) we need automata capable of material inferences that may be ultimately made explicit in formal inferences. what is emphasized here is the importance of know-how 7) negarestani goes on to outline the main pre-conditions for symbols: >(1) discrete signs (as opposed to continuous) >(2) combinatorial structure that signs can be put into 8) discreteness is important because without it, we have symbols with fuzzy boundaries (not sure how important this is tbh) and which are also difficult to combine together. negarestani thinks our automata can invent discrete signs by means of a self-organizing process in which high-dimensional data is projected into a discretized and/or lower-dimensional space so that we converge to the use of discretized signs (he references ai for this). this discretization permits us to combine our phonemes together to produce more units of meaning. moreover, we have ultimately permitted symbols to enter into manipulable combinatorial relations whereby more complex syntactic structures (which thus also encode relationships between regularities) can be generated
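the Ei-Ej story from point 4 can be mocked up in a few lines (this sketch and its threshold are mine, purely illustrative): an automaton registers a co-occurrence as an internal icon Ei*-Ej* once the pairing is statistically regular enough, which is the crude basis for an indexical sign.

```python
# toy sketch: registering co-occurrence regularities as "icons".
# event_stream is a list of sets of simultaneously observed events;
# pairs that co-occur at least `threshold` times get a sign.

from collections import Counter
from itertools import combinations

def learn_icons(event_stream, threshold=3):
    pairs = Counter()
    for events in event_stream:
        for a, b in combinations(sorted(events), 2):
            pairs[(a, b)] += 1
    return {p for p, n in pairs.items() if n >= threshold}

stream = [{"Ei", "Ej"}, {"Ei", "Ej"}, {"Ei", "Ek"}, {"Ei", "Ej"}]
print(learn_icons(stream))  # -> {('Ei', 'Ej')}
```

everything here stays at the level of statistical regularity, which is exactly why (per the hierarchy above) it yields at most an index, never a symbol.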
>>17593 9) changes in the structures in language come hand in hand with the development of the automata's ability to model and communicate more complex structures. ultimately, negarestani thinks this exemplifies how language is the dasein of geist (~ sort of medium that sustains its actualization?). there are two general ways in which phonemes may combine: >(1) iteration: elements can be repeated as much as one wants (an example negarestani gives here is "chop garlic into paste" where the chopping operation is something that can be done as much as one likes basically) >(2) recursion: elements depend upon the occurrence of past elements (e.g. "cut the pie into 8 pieces", we should only cut in half 3 times as each step depends on the previous ones) negarestani will represent iteration using simple concatenation of tokens (e.g. a, ab, abc, etc given the alphabet {a,b,c}). recursion meanwhile can be written using square brackets to indicate embeddings/dependency (e.g. [a], [a[b]], [a[b[c]]], etc) first pic-rel shows an example of this scheme. iteration and recursion form a context-free grammar. in it, thematic roles are determined by means of order + dependency which can be used to disambiguate things. even with this, i do not think context-free grammar on its own has enough structure to talk about semantic roles... perhaps these roles are provided in the context of material inferences. i did a little bit of research on how semantic roles are indicated in generative grammar and i found two strategies: >(1) semantic parsing: https://www.cs.upc.edu/~ageno/anlp/semanticParsing.pdf >(2) context-sensitive grammar: https://www.academia.edu/15518963/Sentence_Representation_in_Context_Sensitive_Grammars 10) millikan (a "right-wing sellarsian") seems to hold that the structure of the regularities in the real world is already given to minds through encoding in wiring diagrams.
in contrast, negarestani thinks this is incorrect and furthermore a recapitulation of the myth of the given. rather, he thinks that it is only with symbols that we can actually seriously think about the structure of the world. we can understand the difference between signs and symbols by comparing it to the difference between game and metagame. the game itself consists of pieces placed in different relations together (our syntactically structured reports on regularities) while the metagame articulates rules for playing the game (corresponding to material inferences). in chess the game reports pieces and positions on the board. the meta-game talks about rules on how to set up and play the pieces 11) with symbols our automata has access to "symbolic vocabularies". from what i can gather, these are vocabularies that conform to a particular syntactical structure recognizable to the automata. negarestani models these structures as finite state machines 12) negarestani points out that what we have so far with syntax is not yet enough for full-fledged language competence as we do not yet have practical mastery over inferential roles. the activity of thinking, to negarestani, requires this. what we need ultimately is linguistic interaction through which automata can master linguistic practices and slowly generate more semantically complex capacities. negarestani talks about how the increase in the complexity of thinking requires the complexification of concepts. although negarestani does not think semantics reduces down to syntax in a simple manner, he does think that it is reducible in the right circumstance (viz. in the context of interaction). moreover within this interactionist context, we shall see the increased development of semantic complexity through the medium of syntax.
ultimately negarestani thinks that language, as a syntactic structuring apparatus, allows us to capture richer semantic relations and talk about new worlds and regions of reality
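since negarestani models symbolic vocabularies as finite state machines, a minimal sketch may help fix ideas. the alphabet and transition table here are my own hypothetical example (a vocabulary consisting of the strings a, ab, abc), not anything from the book:

```python
# a symbolic vocabulary as a finite state machine: only strings that
# conform to the vocabulary's syntax are recognized by the automaton.

TRANSITIONS = {(0, "a"): 1, (1, "b"): 2, (2, "c"): 3}  # hypothetical vocabulary
ACCEPTING = {1, 2, 3}

def recognizes(string):
    state = 0
    for symbol in string:
        if (state, symbol) not in TRANSITIONS:
            return False  # the string falls outside the vocabulary
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print([recognizes(s) for s in ["a", "ab", "abc", "ba", "abcc"]])
# [True, True, True, False, False]
```

the automaton plays the role of the syntactic filter: anything it rejects simply is not a well-formed expression of that vocabulary.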
>>17597 the end of our story is in ch. 7! 1) as negarestani said before, semantics reduces to syntax under the right conditions. he warns us that if we make semantics completely irreducible, we would get an inflated understanding of meaning. he criticizes searle's chinese room thought experiment for assuming this. he does not think the actual computation is occurring in the room but rather in the interaction between the chinese room's operator and the person outside the room. he thinks the right conditions under which semantics reduces to syntax are given by the inferentialist theory of meaning. he thinks that the most basic manifestation of meaning is in the justified use of concepts in social practices... in particular there is required know-how regarding inferences. an example that negarestani gives is that the belief "this is red" permits the belief "this is coloured" but does not allow "this is green". while this is a good example, i think it misses some important details concerning concepts qua functional classification (sellars's view of concepts). in that there seem to be other details besides the epistemic transitions and obstructions between beliefs. there is also schematic knowledge (e.g. a triangle is a three-sided shape) that has more practical ramifications (e.g. construction of said shape). anyways, what negarestani thinks is involved here is speech acts being commitments that have implications for other commitments. what is needed then is a continually updating context (i think this idea may ultimately connect to dynamic semantics and even a bit to millikan's concept of pushmi-pullyu) what this view also gives us is an understanding of reason (as a process of making explicit and building upon concepts) as an activity. negarestani thinks this allows us to see it as something algorithmic and realizable through information processing. of course implicit here is the idea that computation is synonymous with 'doing', which might not be true.
the interactionist understanding of meaning involves meanings only being determinable within a game of asking for justification and giving justification. in this process, one does not need to know all the rules beforehand; rather, rules emerge over time 2) something negarestani thinks is problematic about the interactionist approach is that it does not elaborate much on why this interaction needs to be social, or even what such a thing formally entails. to me his approach is honestly so formal that the social seems almost unnecessary. at the same time it might still have its use ultimately he thinks interaction can be best formally elaborated in the logical framework of interaction in ludics started by jean-yves girard. what is this interactionist framework? negarestani starts by elaborating on what classical computation involves, viz. "deduction" from some initial conditions. he points out a major problem with this approach: if we know something, and also know what follows from it, then we should already know that consequence too. as such we should know everything knowable in the system, and thus no new information is ever gained. negarestani thinks this problem is symptomatic of ignoring the role the environment plays in letting us understand computation as involving an increase of information. the environment is also in general necessary for making sense of machines as involving input and output one benefit of interaction games is that they do not need values determining how they should evolve; rules rather emerge from within interaction itself. moreover, unlike game-theoretic games, interaction games do not need to have payoff functions or winning strategies which are predetermined. in the context of interaction, when we see the input and output of the system we see the following: on the input side the system consumes the resources that the environment produces, while on the output side the converse happens.
in this framework, something is computable if the system "wins" against the environment (i.e. it can formulate and execute a strategy of performing a computational task). the interaction with the environment constrains what actions the system performs. there are many variables that can be involved in determining how the game evolves (e.g. whether or not past interactions are preserved and accessible, whether the interaction is synchronous or asynchronous, etc). in the case of classical computation, computability is within the context of a two-step game where there is only input and output 3) i did some extra reading because it was hard to understand, from just what negarestani had said, exactly why some of the formalizations were so important. in 'Concurrent Structures in Game Semantics' i think castellan gives us a rather concrete idea of how game semantics works. castellan first gives us a basic example in operational semantics where we are to compute the expression '3+(3+5)'. this can be done in successive steps: 3+(3+5) -> 3+8 -> 11 while this picture is nice, we run into a problem when characterizing expressions with variables, for instance 'x+2'. the solution is to transform 'x' into a request to the environment denoted by q^{+}_{x}. after that we may receive a value from the environment. if for instance we received 2, this may be denoted by 2^{-}. thus we have x+2 -> [] + 2 -> 2+2 -> 4 as an example. the basic idea is to denote sent messages by (+) and received messages by (-). this gives us an alternation between sending and receiving data which looks like a game. in this case, we can describe our dialogue as q^{-}.q^{+}_{x}.2^{-}.4^{+} and generally characterize 'x+2' as: >[x+2] = {q^{-}.q^{+}_{x}.n^{-}.(n+2)^{+} | n <- N} so we see we can now characterize expressions by their general dialogue.
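this characterization can be played out in code. here is a rough sketch of my own (a toy encoding, not castellan's formalism) where the dialogue for 'x+2' is an exchange of requests and answers with the environment:

```python
# the dialogue [x+2] = { q^-.q^+_x.n^-.(n+2)^+ | n in N } as a toy program:
# receive the question, request x, receive n, send n+2.

def dialogue_x_plus_2(environment):
    trace = ["q-"]             # receive the initial question q^-
    trace.append("q+_x")       # send a request q^+_x for the value of x
    n = environment("x")       # receive the environment's answer n^-
    trace.append(f"{n}-")
    trace.append(f"{n + 2}+")  # send the result (n+2)^+
    return trace

# the environment is just a lookup that answers requests for variables
print(dialogue_x_plus_2(lambda var: {"x": 2}[var]))
# ['q-', 'q+_x', '2-', '4+']
```

the important bit is that the meaning of 'x+2' is not a value but the whole family of such traces, one per answer the environment might give.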
this is an important point to keep in mind. something negarestani only mentions in passing is how we can build richer computations by the addition or removal of constraints. one such constraint negarestani says can be sublated is the markovian nature of computations. i am sort of disappointed this is not developed further.
>>17653 4) negarestani goes on to elaborate upon the copycat strategy. the basic idea is to consider an agent that plays against 2 players: when it receives a move from one player, it plays it against the other. in the literature, this is a sort of identity map which preserves the inputs and outputs. i think what negarestani finds important here is that it treats any process as though it were in an interaction game with its dual. this demonstrates how games are a generalization of any model of synchronous procedure-following computation 5) negarestani notes that in proof theory, we can understand the meaning of a proposition as the collection of proofs that verify it. i am again going to mention an article written by someone else on this topic. in 'On the Meaning of Logical Rules' girard discusses this topic. what the paper argues is that we really want to turn the question from a proposition's meaning to that of delimiting which collections of proofs are conjunctively sufficient for a statement and testing each of these proofs. for instance, to understand the meaning of 'A∧B' we need to know what proves it. in this case, it is sufficient to have proofs of A and B separately. we just need to then test these proofs something interesting here can be seen if we interpret X^{t} as 'test X'. then we can see that (A∧B)^{t} = A^{t}∨B^{t}, (A∨B)^{t} = A^{t}∧B^{t}, etc. in general, testing sort of works like negation. this takes us to the concept of "paraproofs". we can also interpret this as an interaction between the person asserting A∧B and another person challenging different subformulae of the expression... we can see here then the resonance with game semantics.
i think that in some ways, ludics really radicalizes these ideas by giving formulae addresses (sequences of integers) and looking at how addresses are inscribed in the course of a proof. a quick remark about this article: it really looks like girard only cares about the semantics of rules. he does not say much about the semantics of referential terms. this disambiguates negarestani's claims about the reducibility of semantics to syntax under the right conditions, in particular what region of semantics is reducible 6) there are two kinds of computation going on here in meaning-as-proof >(1) proof search: self-explanatory. we move upwards as we search for the proofs sufficient for a proposition. this is more or less what the dialogue consists in >(2) proof normalization: remove useless elements of the proof to extract some invariant. two proofs are equivalent if they have the same normalization negarestani then goes on to talk about a meaning dispenser that takes semantics (normalized proofs) out 7) negarestani talks about more things needed for transitioning from formal syntax to "concrete syntax". from what i understand, the concrete syntax here is what we see in natural language sentences, which involve syntactic elements that depend on previously stated elements; for instance the pronoun "it" may only refer to a single noun once or a few times. we can understand all of this in terms of resource-sensitivity
>>17654 8) i did more extra reading to both give some basic idea of what ludics involves, and furthermore some initial idea of how it is practically applied. the first article i looked at was the article titled 'dialogue and interaction: the ludics view' by lecomte and quatrini. the basic idea here is that we can now receive and send topics as data: while previously we were using game semantics to specify requests for variable values to an environment, we now have this machinery also used for dealing with topicalization for instance, when the context we are in is a discussion about holidays, we can start with '⊢ξ' indicating that we have received that context. from there we may try specifying the main focus of the topic to be regarding our holiday in the alps specifically. this can be denoted 'ξ*1⊢', showing that we are sending that intent to move to a subtopic let's say someone wants to talk about when one's holiday was instead of where. this requires us to change our starting point to involve a combined context of both holiday descriptions (ξ) and date (ρ). we may denote them as subaddresses in a larger context, e.g. as τ*0*0 and τ*0*1. from there we can receive this larger context (⊢τ), acknowledge that they can answer some questions and request such questions (τ*0⊢), and finally survey the set of questions that can be asked (⊢τ*0*0, τ*0*1). finally we can answer a question on, for instance, the dates ( ⊢τ*0*1*6⊢ τ*0*0) generally how i read these proof trees is first of all bottom up (as we are really doing a sort of proof search), and furthermore to read '⊢' as either indicating justification (e.g. n⊢m meaning 'n justifies m') or sending and receiving data ('⊢n' means i have received 'n' and 'n⊢' means i am sending 'n'). we see the clear connection to game semantics here the next paper i will look at is 'speech acts in ludics' by tronçon and fleury in ludics, dialogue and interaction.
this gives a remarkably clear description of the two main rules used in ludics that have been implicitly made use of above: >(1) positive action, which selects a locus and opens up all the possible sub-loci that proceed from it. sort of like performing an action, asking, or answering (akin to sending a request in the game semantics we have seen) >(2) negative action, which corresponds to receiving (or getting ready to receive) a response from our opponent there is furthermore a daimon rule, denoted by a dagger (†), that indicates one of the adversaries has given up and that the proof process has terminated 9) what ludics gives us is a new way of understanding speech acts. negarestani references tronçon and fleury's paper on this topic. in our classical understanding of speech acts there are four main components: >(1) the intention of the act >(2) the set of its effects >(3) pre-requisite conditions >(4) a body which should realize any action specified in the act a problem with this scheme is that we do not quite know how speaker intention really figures into the speech act and its ramifications. furthermore, the pre-requisite conditions need not be pre-established in the immediate context of the performance of the speech act. there are other points where required precision is also lacking in this classical scheme the ludical framework builds on top of this classical scheme by bringing in the interaction between speaker and listener.
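to sketch how these two rules shape a play (a toy encoding of my own, not girard's actual calculus): positive actions send, negative actions receive, and the daimon ends the interaction whenever one side plays it:

```python
# alternate positive (send) and negative (receive) actions until one
# side plays the daimon '†', at which point the interaction terminates.

def interact(proponent_moves, opponent_moves):
    transcript = []
    for pos, neg in zip(proponent_moves, opponent_moves):
        transcript.append(("+", pos))
        if pos == "†":
            return transcript  # the proponent gave up
        transcript.append(("-", neg))
        if neg == "†":
            return transcript  # the opponent gave up
    return transcript

# loci are written as addresses like 'xi*1', echoing the holiday example
print(interact(["xi", "xi*1", "†"], ["xi*1", "xi*1*0", ""]))
# [('+', 'xi'), ('-', 'xi*1'), ('+', 'xi*1'), ('-', 'xi*1*0'), ('+', '†')]
```

note how termination is internal to the play itself: there is no external payoff function deciding when the game is over, just the daimon.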
this highlights three elements: >(1) the ability of the speaker to invoke a positive rule and change the context of the interaction >(2) the situation of interaction itself, which involves contextual information and the actions of participants (correlating to negative actions) >(3) the impact of the interaction, which is seen as a behaviour that always produces the same result given the context at hand negarestani connects the continual updating of context to brandom's project, which discusses similar ideas 10) with all of this preamble out of the way, negarestani provides us with an example dialogue in 8 acts. the basic idea is the following: A chooses some theme, then B asks A about a particular feature of this theme. from there A can give an answer and the two make judgements about whether or not A's answer was true or false... we can see at the end of this dialogue the relevance of negarestani's diatribe on veridicality in chapter 5
>>17655 11) an important thing about formal languages is that they permit us to unbind language from experience and thus unleash the entire expressive richness of these languages. the abilities afforded by natural language are just a subsection of the world-structuring abilities afforded by an artificial general language. the formal dimension of language also allows us to unbind relations from certain contexts and apply them to new ones 12) negarestani goes over the distinction between logic as canon and logic as organon. the former is to only consider logic in its applicability to the concrete elements of experience. the latter meanwhile involves treating logic as related to an unrestricted universe of discourse. kant only wants us to consider logic as canon negarestani diagnoses kant's metalogical position here as mistakenly understanding logic as organon as making statements about the world without any use of empirical datum. on the contrary, negarestani thinks that logic as organon, as related to an unrestricted universe of discourse, is important, since the world's structuration is ontologically prior to the constitution and knowledge of an object 13) for negarestani, true spontaneity and/or formal autonomy comes from the capacity of a machine to follow logical rules. similarly, a mind gains its formal autonomy in the context of the formal dimension of language 14) to have concrete self-consciousness, we must have "semantic self-consciousness", which denotes an agent that, through its development of concepts within a context of interaction, is finally able to grasp its syntactic and semantic structuring abilities conceptually. upon achieving this it can intentionally modify its own world-structuring abilities. with language, signs can become symbols, and with them we can start to distinguish between causal statistics and candidates for truth or falsity (note that this was seen in the dialogue in 8 acts).
this permits rational suspicion and the expansion of the world of intelligibility. (sapient) intelligence is what makes worlds as opposed to merely inhabiting given worlds we have here eventually also the ability to integrate various domains of representations into coherent world-stories. lastly, there is also involved the progressive explication of less determinate concepts into more refined ones, and moreover the slow development of our language into a richer and more useful one 15) there is also an interplay between taking oneself to be something (having a particular self-conception) and subscribing to certain oughts on how one should behave (in particular, negarestani thinks the former entails the latter). this idea of norms arising from self-conception gives rise to so-called 'time-general' oughts which are all-pervasive in the automata's activities. these involve ends that can never be exhausted (unlike, for instance, hunger, which can be sated and aims at something rather specific), and are moreover non-hypothetical (think of knowledge, which is always a good thing to acquire). examples negarestani gives of such oughts are the Good, Beauty, Justice, etc this interplay of self-conception and norms furthermore opens them up to an impersonal rationality that can revise their views of themselves. it is precisely this mutability of its ideals that gives rise to negarestani's problems with concerns about existential risk, as they often assume a rather rigid set of followed rules (e.g. in a paperclip maximizer). eventually, as they strive for better self-conceptions that are further removed from the seeming natural order of things, they might think of making something that is better than themselves
>>17656 as i said above, chapter 7 basically concludes the story of our automata. with that said, this is not the end of the book. in chapter 8 he has some metaphilosophical insights that i might as well mention, since i have already summarized everything else including part 4... ultimately negarestani thinks that philosophy is the final manifestation of intelligence. the right location to find philosophy is not a temporal one, but rather a timeless agora within which all philosophers (decomposed into their theoretical, practical, and aesthetic commitments) can engage in an interaction game. this agora, which can also be interpreted as a game of games, is the impersonal form of the Idea (eidos). the Idea is a form that encompasses the entire agora and furthermore subsumes all interactions between the philosophers there. this type of types, for negarestani, is the formal reality of non-being (as opposed to being). it is through the Idea that reality can be distinguished from mere appearances, and thus realism can be rescued an important distinction negarestani makes is between physis and nomos. physis corresponds to the non-arbitrary choices one has to make if one wants to make something of a particular type. for instance, when we make a house we need a roof, and there are numerous solutions to this requirement of varying adequacy. nomos meanwhile corresponds to mere convention. an example would be if a crafting guild required it by law that houses only be made of wood in order to support certain businesses over others. such a requirement is external to the concept of the house. really, forms correspond to physis rather than nomos. they are what sellars calls objects-of-striving the primary datum of philosophy is the possibility of thinking. what this consists in are normative commitments that can serve as theoretical and practical realizabilities.
the important part here is that the possibility of thinking is not some fixed datum that is immediately given to us. rather, it is a truth candidate that we can vary (and indeed negarestani thinks it shall vary as we unfurl the ramifications of our commitments and consequently modify our self-conceptions). this makes way for expanding the sphere of what is intelligible to us. in fact, not only is philosophy the ultimate manifestation of general intelligence, something does not count as intelligence if it does not pursue the better. the better here is understood as the expansion of what is intelligible, and furthermore the realization of agents that have a wider range of intelligibilities they are capable of accessing. he describes a philosophical striving that involves the expansion of what can be realized for thought in the pursuit of the good life (this good life being related to intelligence's evolving self-conception). following this line of thought he describes the agathosic test: instead of asking whether an automata can solve the frame problem or pass the turing test, the real question is whether or not it can make something better than itself negarestani introduces to us plato's divided line, but interprets it along lines that echo portions of the book. the main regions are the following: >(A) the flux of becoming >(B) objects that have veridical status >(C) the beginning of the world of forms. it corresponds to models which endow our understanding of nature with structure >(D) time-general objects such as justice, beauty, etc for negarestani, the divided line does not merely describe discontinuous spheres of reality or a temporal progression from D to A. rather, there are numerous leaps between each region of the line. for instance, there is a leap from D to A as we structure the world of becoming according to succession (this corresponds to the synthesis of a spatial and temporal perspective we mentioned earlier).
we also have another leap from A to D where we recognize how these timeless ideas are applicable to sensible reality. these leaps grow progressively farther apart, and with them grow the risks to the current self-conception of the intelligence
Open file (332.87 KB 1080x1620 Fg2x2kWaYAISXXy.jpeg)
>>17659 he furthermore talks about the Good, which is the form of forms and makes the division and the integration of the line possible. it is the continuity of the divided line itself. within a view from nowhere and nowhen that deals with time-general thoughts, the Good can be crafted. the Good gives us a transcendental excess that motivates continual revision and expansion of what is intelligible. i'm thinking that the Good is either related to or identical to the Eidos that negarestani discussed earlier he notes the importance of history as a discipline that integrates and possibly reorients a variety of other disciplines. the view from nowhere and nowhen involves the suspension of history as some totality by means of interventions. currently we are in a situation of a “hobbesian jungle” where we just squabble amongst ourselves and differences seem absolute. in reality, individual differences are constructed out of judgements and are thus subsumed by an impersonal reason. in order to reconcile individual differences, we must have a general program of education amongst other interventions which are not simply those of political action. to get out of the hobbesian jungle, we need to be able to imagine an “otherworldly experience” that is completely different from the current one we operate in, even though it is fashioned from the particular experiences of this one. this possible world would have a broader scope and extend towards the limits placed by our current historical totality. absolute knowing: the recognition by intelligence of itself as the expression of the Good, capable of cancelling any apparently complete totality of history. it is only by disenthralling ourselves from the enchanting power of the givens of history that the pursuit of the Good is possible.
the death of god (think here of nietzsche… hegel also talks about it as well, though i believe for him the unhappy consciousness was a problematic shape of consciousness that was a consequence of a one-sided conception of ourselves) is the necessary condition of true intelligence. this is not achievable by simply rejecting these givens, but by exploring the consequences of the death of god. ultimately we must become philosophical gods, which are beings that move beyond the intelligibilities of the current world order and eventually bring about their own death in the name of the better. ultimately negarestani sees this entire quest as one of emancipation i think negarestani takes a much more left-wing approach to hegel's system. while i do not completely disagree with his interpretation of absolute knowing, it does seem as though he places much more of an emphasis on conceptual intervention, rather than contemplation. i am guessing this more interventionist stance is largely influenced by marx... overall, not a bad work. i think it might have been a little bit overhyped, and that last chapter was rather boring to read due to the number of times he repeats himself. i am not really a computational functionalist, but i still found some interesting insights regarding the constitution of sapience that i might apply to my own ideas. furthermore he mentions a lot of interesting logical tools for systems engineering that i would like to return to. now that i am done with negarestani, i can't really think of any other really major tome to read on constructing artificial general intelligence specifically. goertzel's patternist philosophy strikes me as rather shallow (at least the part that tries to actually think about what intelligence itself is). joscha bach's stuff meanwhile is just largely the philosophy of cognitive science. not terrible, but feels more like reference material rather than paradigm-shifting philosophical analysis.
maybe there are dreyfus and john haugeland, who both like heidegger, but they are much more concerned with criticizing artificial intelligence than talking about how to build it. i would still consider reading up on them sometime to see if they have anything remarkable to say (as i already subscribe heavily to ecological psychology, i feel as though they would really be preaching to the choir if i read them). lastly there are barry smith and landgrebe, who have just released their new book. it is another criticism of ai. might check it out really there are 2 things that are really in front of my sights right now. the first would be texts on ecological psychology by gibson and turvey, and the other would be adrian johnston's adventures in transcendental materialism. i believe the latter may really complement negarestani. i will just quote some thoughts on this that i have written: >curious to see how well they fit. reading negarestani has given me more hope that they will. bcs he talks about two (in his own opinion, complementary) approaches to mind. one that is like rationalist/idealist and the other that is empiricist/materialist. first is like trying to determine the absolutely necessary transcendental cognitions of having a mind which ig gives a very rudimentary functionalist picture of things. the second is like trying to trace more contingent biological and sociocultural conditions which realized the minds we see currently. and i feel like johnston is really going to focus on this latter point while negarestani focuses on the former anyways neither of these directions are really explicitly related to ai, so i would likely not write about them here. all of this is me predicting an incoming (possibly indefinite) hiatus from this thread. if anyone has more interesting philosophers they have found, by all means post them here and i will try to check up on them from time to time...
i believe it is getting to be the time i engage in a bunch of serious grinding that i have been sort of putting off while reading hegel and negarestani. so yeah
>>17520 >finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated. now The two kinds of seeing seem to come from two different ways to abstract observations. Seeing 1 corresponds to coarse-graining, while seeing 2 corresponds to change in representation. Practically, it's related to the difference between sets and whole numbers. There's only one whole number 2, but there are many sets of size 2. Similarly, there's only one way to coarse-grain an observation such that the original can be recovered (the trivial coarse-graining operation that leaves the observation unchanged), but there are many ways to represent observations such that the original observation can be recovered. Also practically, if you want to maintain composability of approximations (i.e., approximating B from A then C from B is the same as approximating C from A), then it's usually (always?) valid to approximate the outcome of coarse-graining through sampling, while the outcome of a change in representation usually cannot be approximated through sampling. If that's the right distinction, then I agree that the use of this distinction in differentiating sapience from sentience is unclear at best. It seems pretty obvious that both sentience and sapience must involve both kinds of seeing. I intend to read the rest of your posts, but it may take me a while.
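To make the asymmetry concrete, here's a small Python sketch (my own illustration, so the particular operations are assumptions): nested coarse-grainings compose and only the trivial one is lossless, whereas many distinct changes of representation are all lossless:

```python
# Coarse-graining collapses observations onto a grid: information is lost
# unless the grid is trivial. Changes of representation are invertible,
# and there are many of them.

def coarse_grain(x, cell):
    return (x // cell) * cell  # collapse x onto a grid of the given cell size

xs = range(100)

# Nested coarse-grainings compose: grid 2 then grid 6 equals grid 6 directly.
assert all(coarse_grain(coarse_grain(x, 2), 6) == coarse_grain(x, 6) for x in xs)

# Only the trivial cell size recovers the original; e.g. 6 and 7 collapse.
assert all(coarse_grain(x, 1) == x for x in xs)
assert coarse_grain(7, 2) == coarse_grain(6, 2)

# Many distinct representations are lossless (each has an inverse).
reps = [(lambda x: x + 1, lambda y: y - 1),
        (lambda x: -x, lambda y: -y),
        (lambda x: 2 * x, lambda y: y // 2)]
assert all(inv(rep(x)) == x for rep, inv in reps for x in xs)
print("all checks passed")
```

The composition property is what licenses approximating a coarse-graining by sampling, while the invertible re-encodings carry their information in structure that sampling wouldn't preserve.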
> (AI philosophy crosslink-related >>21351)
I postulate that AI research claims are not scientific claims... they are advertisement claims. :^) https://www.baldurbjarnason.com/2023/beware-of-ai-snake-oil/
What do you guys think of my thread about that here: https://neets.net/threads/why-we-will-have-non-sentient-female-android-robots-in-2032-and-reverse-aging-tech-in-2052-thread-version-1-1.33046/ I can't post it here because it's too long and it has too many images.
Interesting forum, but "sentience" and "consciousness" aren't very well defined terms. That aside, I hope Chobitsu moves that into meta or one of the AI threads. No problem that you didn't know, but this board isn't like other image boards; you can use existing and old threads.
>>24624 Hello Anon, welcome! These are some interesting topics. I've had a brief survey of the two threads and they definitely cover things we're interested in here on /robowaifu/. I'm planning to merge your thread into one of our other threads already discussing these things. Please have a good look around the board. If you'd care to, you can introduce yourself/your community in our Embassy thread too (>>2823). Cheers :^)
>>24624 Do you yourself have any plans to build a robowaifu Anon, or is your interest primarily in discussing the AI side of things?
>>24631 I'll probably build my own robowaifu, but i don't really know because their construction might be automated by the time they are ready. I'm interested in working in the robowaifu industry. I'm also here to give some lifefuel, and let you guys know that we won't have to wait 50 years for the robowaifus. This other thread might interest you, it's about how the behavioral sink will make all men need tobowaifus in some years: https://neets.net/threads/the-behavioral-sink-in-humans.31728/
>>24646 *robo*
Open file (2.11 MB main.pdf)
It's entirely conceivable this type of approach could help our researchers 'use AI to help build better AI'. In programmer's parlance, this is vaguely similar to 'eating your own dogfood' but with the potential for automated, evolutionary, self-correcting design trajectories. At least that's my theory in the matter! :^) Fifth Paradigm in Science: A Case Study of an Intelligence-Driven Material Design [1] >abstract >"Science is entering a new era—the fifth paradigm—that is being heralded as the main character of knowledge integrating into different fields to intelligence-driven work in the computational community based on the omnipresence of machine learning systems. Here, we vividly illuminate the nature of the fifth paradigm by a typical platform case specifically designed for catalytic materials constructed on the Tianhe-1 supercomputer system, aiming to promote the cultivation of the fifth paradigm in other fields. This fifth paradigm platform mainly encompasses automatic model construction (raw data extraction), automatic fingerprint construction (neural network feature selection), and repeated iterations concatenated by the interdisciplinary knowledge (“volcano plot”). Along with the dissection is the performance evaluation of the architecture implemented in iterations. Through the discussion, the intelligence-driven platform of the fifth paradigm can greatly simplify and improve the extremely cumbersome and challenging work in the research, and realize the mutual feedback between numerical calculations and machine learning by compensating for the lack of samples in machine learning and replacing some numerical calculations caused by insufficient computing resources to accelerate the exploration process. It remains a challenging of the synergy of interdisciplinary experts and the dramatic rise in demand for on-the-fly data in data-driven disciplines. We believe that a glimpse of the fifth paradigm platform can pave the way for its application in other fields." 
1. https://www.sciencedirect.com/science/article/pii/S2095809923001479

>>24646 Thanks Anon. Yes, we've discussed so-called 'Mouse Utopia' and other sociological phenomena involved with the rise of feminism here before now. I doubt not there is some validity to the comparisons. However, as a believing Christian, I find far more substantial answers in the Christian Bible's discussions about the fallen nature of man in general. There are also arrayed a vast number of enemies -- both angelic and human -- against men (males specifically), that seek nothing but our downfall. /robowaifu/, at least in my view of things, is one response to this evil situation in the general sense; with tangible goals to assist men to thrive in this highly-antagonistic environment we all find ourselves within. And almost ironically... robowaifus can potentially even help these so-called 'Foids' become more as they were intended by God to be -- in the end -- as well, I think (by forcing them back to a more realistic view of their individual situations, less hindered by the Globohomo brainwashing they're seemingly all so mesmerized by currently. To wit: self-entitlement and feminism). :^) >=== -prose edit
Edited last time by Chobitsu on 10/04/2023 (Wed) 00:30:04.
Our friends over at /britfeel/ are having an interesting conversation about AI r/n. https://anon.cafe/britfeel/res/5185.html#5714
So I'm thinking about this more from a robot/AGI ethical perspective, with a view of sort of "intelligence mimicking in a shallow form (externalities)" vs "intelligence mimicking in a deep form" (mimics the cause of the externalities along with the normal external aspects) vs actual AGI. My understanding at least is that reason and our intellectual capabilities are the highest parts of us; however, they can't really function apart from the rest of us. The reason why is the moral aspect. I'm coming from a very Greek perspective, so I should probably clarify how they relate. The basics would be: you have a completeness to your being, a harmony to your parts, such that when your parts are functioning as they ought, they better show forth your being/your human nature. So the different parts of a person (eyes, ability to speak, etc.) have to be used in such a way as to bring about the full expression of that nature. That ties in the intellect, in that it involves using it to understand those parts, how they relate to the whole, and the whole itself. When you choose to understand that relationship, and then, seeing the good of the whole functioning harmoniously, act to bring it about, that's the essence of moral action. The proper usage of those powers ends up getting complicated, as it involves your personal history, culture, basically every externality, so there aren't really hard and fast rules. It's more about understanding the possibilities of things, recognizing them as valuable, and then seeking to bring them about (out of gratitude for the good they can provide). Now the full expression of that human nature, or any particular nature, or at least getting closer to it -- the full spectrum of potential goods it brings about -- is only known over a whole lifetime.
That's what I was referring to by saying personal history: as you move along in your lifetime, the ideal rational person would be understanding everything in front of them in relation to how it could help them live a good life. It's the entire reason why we have the ability to conceptualize at all; in everything you see and every possible action, you see your history with it, and how that history can be lifted up and made beautiful in a complete life, in a sort of narrative sense. The memory thing isn't that hard technically; it's more the seeing the good of the thing itself. The robot, or AGI, would need to sort of take on an independent form of existence and have a good that is not merely instrumental to another person. Basically it would have to be, in some form, a genuine synthetic life form. (Most people who have these views just think any actual AGI is impossible.) One of the key things is that this sort of good of the thing itself, human nature or whatever, is not something anyone has a fully explicit understanding of; there is no definition that can be provided. It's an embodied thing with a particular existence and a particular history. Each individual's expression of their being will differ based on the actual physical things they relate to, and things like the culture they participate in. The nature is an actual real thing constituting a part of all things (living and otherwise), and all intellectual activity is a byproduct of those things' natures interacting with the natures of other things, and those natures aren't something that can ever be made explicit. (This ties in with all the recent extended-mind cog-sci and philosophy of mind.) (One author I want to read more of is John Haugeland, who talks about the Heideggerian AI stuff; he just calls this the ability to give a damn. A machine cannot give a damn, and apart from giving a damn you have no capacity for intellectual activity, for the reasons I stated above.) That's sort of the initial groundwork:
>>30634 >For an Actual AGI
It does leave it in a really hard place. I think it's still possible, but it would probably involve something far in the future. It would need to be someone intentionally creating independent synthetic life, for no instrumental purpose. That, or you could have something created for an instrumental purpose with the ability to adapt, that eventually attains a synthetic form of life. A cool sci-fi story about this is The Invincible by Stanislaw Lem (I don't want to spoil it, but if you are in this thread you will probably love it). The main issue, to get more technical, comes down to artifacts vs. substances, using the Aristotelian language: there are those things that have an intrinsic good-for-themselves, and that act-to-help-themselves-stay-themselves. Things with specific irreducible qualities. Life is the clearest example: it's something greater than the sum of its parts, it acts to maintain itself, and its parts are arranged in such a way as to serve the whole rather than the parts themselves. It's only fully intelligible in reference to the whole. Other examples would be anything that cannot be reduced to parts without losing some kind of essential aspect, perhaps chemicals like aluminum or certain minerals. Those are less clear than life, and I'm not a biologist/geologist or w/e, so it's hard to say. Styrofoam would be a synthetic one. Artifacts, on the other hand, are totally reducible to their parts; there is no mysterious causality, nothing is hidden. Another way of looking at them is that they are made up of substances (things with natures, which I talked about above) "leaning against each other" in a certain way. Things like a chair or a computer. A chair doesn't do anything to remain a chair; the wood will naturally degrade if it's not maintained, and there is nothing about the wood that makes it inclined to be a chair. Everything the chair does is also totally reducible down to what the wood can do; there is nothing more added.
A chair is only just the sum of its parts. The same goes for computers, at a much more complicated level with different materials/compounds/circuitry; it's still just switches being flicked using electricity, and all the cool stuff we get from computers can be totally understood down to the most basic level. Only the sum of its parts again. The point here would be that to have an AGI you'd need to get something that is more than the sum of its parts, which might be impossible, and if it did happen would probably be pretty weird. On the plus side, people like Aristotle wouldn't consider most people to be using their full rationality anyway… normies/sheeple and all that. But even a bad usage of a real intellect is still very hard to mimic. Now that the metaphysics lessons are out of the way, it will hopefully be shorter.
>>30635 Forms of intelligence mimicking:
>Shallow/external
This is the easiest one and is what basically all the actual research focuses on. I don't really think this will ever suffice for robowaifus or anything getting close to AGI. To define it better, it's basically ignoring everything I said above and making no attempt to simulate it. There is no intrinsic conception of the thing's own good, no focus on that kind of narrative/moral behavior and memory. As far as I can tell, in practice this basically means a vague hodgepodge of whatever data they can get, so it's entirely heuristic. Any actual understanding of what or who anything is, is not a possibility. Keep in mind, from what I said above, that kind of moral understanding is required for any real intellectual activity. To give a more relevant example for this board (and this has particular bearing for relationships as well): when a waifu encounters something, in order for it to be even at all satisfying as a simulation, she must see it in relation to the good of her own being, as well as the good of her partner. Involved in that is understanding the other person, their good, and the things they are involved with. It's very memory-based/narrative-based: seeing anything that pops up and questioning it for how it integrates with the robot's life narrative as well as the integrated robot-you life narrative. The foundation of that sort of more moral aspect of intelligence is essential and is something that needs to be explicitly factored in. That particularity and memory is necessary for intelligence, as well as even just for identity.
>Deep mimicking
This is, at least I think, more of a technical/programming question and one I plan on learning more about. There doesn't seem to be any philosophical difficulty or technical impossibility; as far as I know it's mostly a matter of memory and maybe additional training.
I imagine there would be quite a bit of sort of "training/enculturating" involved as well with any specific robot, since as I said above intelligence by its nature is highly particular. I'm not sure where the philosophical issues would come up. Technically it might just be overwhelming to store the sort of breadth of organized particular information. The key thing would be making sure things are looked at functionally, i.e. what is the full set of possible actions that can be done with X (but for everything the robot could possibly do). Obviously that's a ridiculous set of data, so some heuristic/training would be involved, but that happens with real people anyway. (There's also the issue of the robot only being mindful of the anon's good, which would work "enough"; however, without somehow having its own synthetic/simulated "good/nature/desires" it would probably feel very fake/synthetic. That's a very hard part, no clue what to do for that. Just going to assume it's parasitic off of the anon's good for the sake of simplicity. Also, funny enough, this may make this kind of AGI better suited for waifus than anything else.) As a sort of simple example let's say
>bring anon a soda
as a possible action (all possible actions at all times would need to be stored somehow, chosen between, ranked, and abandoned if a pressing one shows up). But for the soda, what is involved is: just recognition of the soda, visual stuff, standard vision stuff that shouldn't be an issue. That, or you could even tie in Amazon purchases with the AI's database so it knows it has it and associates the visuals with the specific thing or something. What are the possible actions you can do with the soda?
>Throw it out if it's harmful (creates a mess, bad for anon because it's expired, bad for anon because it's unhealthy)
>order more if you are running low
>ask anon if he wants a soda
>bring anon a soda without asking
Not that many things. I guess the real hard part comes when you relate it to the good of anon, and rank the possible options. At all times it would need to keep the list of possible actions, which would be constantly changing and pretty massive, and be able to take in new ones constantly. (Say you get a new type of soda, but you only get it for a friend who visits; it needs to register that as a distinct kind and have different behaviors based on that.) The really philosophically interesting question is what kind of heuristic you can get for "the good of anon", the means of rank-ordering these tasks. Because intelligence is individual, for the sake of this it needs to be based on an individual; it's kind of hitching onto an actual nature to give a foundation for its intelligence. So long as it had that list of actions, it could have something like a basic template that maybe you customize for ranking. It would also need to be tracking all the factors that impact ranking (you are asleep, busy, sick, not home, etc.). For the states I don't think there would be that many, so they could be added in as they come up. (Just requires there to be strong, definite memory.) But I mean, it might not actually be that hard unless I'm missing something; certainly technically quite complicated, but it seems fairly doable at some point… The key difference between this and most stuff I see talked about is that it basically only has any understanding/conceptuality related to another actually-real being and basically functions as an extension of them, which works for a waifu at least. The self-activity seems hard to map on though; there would likely be something very interesting in working out how you associate basic motor functionality like moving/maintenance with the anon.
Determining what to store and how to integrate things into a narrative would also be interesting. I imagine there would just be human templates plus conversations with anon about future goals that would be used to handle the rank ordering. Something funny coming from that would be that the intelligence of the robot would be dependent on how intelligent the user is. If you were a very stupid/immoral person, your robot would probably not ever get convincing or satisfying. (was not expecting it to go this long)
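The scheme sketched in the soda example (a live list of candidate actions, gated by current states like "asleep" or "not home", then rank-ordered by some heuristic for anon's good) could look roughly like this. This is only a toy sketch; every action name, score, and state label here is a hypothetical placeholder, and a real system would need perception, a far richer world model, and a learned rather than hand-written scoring heuristic:

```python
# Toy sketch: state-gated action ranking. All names/values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    base_value: float                              # rough stand-in for "good of anon"
    blocked_by: set = field(default_factory=set)   # states that rule this action out

def rank_actions(actions, current_states):
    """Drop actions blocked by any current state, then sort best-first."""
    viable = [a for a in actions if not (a.blocked_by & current_states)]
    return sorted(viable, key=lambda a: a.base_value, reverse=True)

# The soda example: candidate actions and the states that gate them.
actions = [
    Action("discard expired soda", 0.9),
    Action("clean up spill",       0.8, blocked_by={"not_home"}),
    Action("offer anon a soda",    0.6, blocked_by={"asleep", "not_home"}),
    Action("reorder soda",         0.4),
]

# Anon is asleep, so "offer anon a soda" drops out of the ranking.
print([a.name for a in rank_actions(actions, {"asleep"})])
```

New actions can be appended to the list as they come up, and new gating states added as they are encountered, which matches the post's point that both sets would grow constantly; the genuinely hard part (a real "good of anon" scoring function rather than a fixed number) is exactly what the code does not solve.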
First, robots can never be conscious. They have no spirit or soul, and never can and never will. You can NEVER transfer a human soul into a robot as some so foolishly claim. They are just computer programs with 1's and 0's at the end of the day. They are machines. They are not living and can never be living. Emulation of humans is all that is happening there. People are recording the logic of people into machines, and then the machine seems to have that intelligence, but it is only a recording of our intelligence being reflected back to us, like looking in a mirror. If you look in a mirror and see a human in there, the mirror is not alive; you just see your reflection. The same principle is there with robot coding reflecting our human intelligence. It is only a reflection and is not living. So if someone does the AI and robot build unto God with pure motives, it is wholesome and pure and praiseworthy. If someone builds it with evil motives, it is an evil pursuit. Intentions are the key. If someone builds it to worship it, that is idolatry. So bad. If someone believes it really is living (because they are a fool), or that it really has genuine intelligence and a soul (they're a fool), then in the eyes of such a deceived person they may feel like they played God, but in fact they did nothing even close, because it is not anything close to that. No amount of code will ever create "real" intelligence. That is why the field is called artificial intelligence. It is artificial, not real. It will never be real. Blasphemous fools speak all the time about the intelligence of machines passing our own, becoming conscious, becoming sentient, deserving citizenship and rights, etc. They are completely blind and total fools. They think people are just machines too, btw. They think we are just meat computers. These same people do not believe in God or an afterlife. They are in total error.
>>30667 >You can NEVER transfer a human soul into a robot as some so foolishly claim. Soul is a pretty ill-defined thing even in a religious context. >They are just computer programs with 1's and 0's at the end of the day. The human brain works in a similar way, in how its neurons send electrical and chemical signals. >They are machines. Humans are biological machines. Look into how the body works on the cellular level.
>>30671 >>30667 My big wall of text above is basically using the classical/Aristotelian definition of a soul and what would be involved in simulating/recreating it for AI. I just avoid the language of Soul b/c people have so much baggage with it. Actually creating a genuine new form of life seems implausible, but I can't say it's totally impossible; we have created genuinely new kinds of natural things (like the Styrofoam I mentioned). I don't see why things like bacteria/simple forms of life couldn't be created anew, and from there it's at least possible that intelligence isn't out of the question. It could very well be impossible though. I do think that attempting to make the AI/robot sort of rely on a single particular human soul as its foundation for orientation is a possibility for giving it something very close to real intelligence, at least practically (and it would depend on that person being very moral/rational).
only God can make a soul. God breathed into man and man became a living soul. Good luck breathing into your stupid computer program.
The human brain is not doing ALL the thinking in a man. This is proven out by the fact that when you die, you go right on thinking as a ghost. Good luck getting your stupid computer program to go right on thinking as a ghost after you shut the computer off.
>>30675 Simple synthetic lifeforms have been made in labs before, so it depends what you mean by genuinely new: https://www.nist.gov/news-events/news/2021/03/scientists-create-simple-synthetic-cell-grows-and-divides-normally
>>30676 >Only God can make a soul
And man would be god of robots.
>>30676 With human-computer brain interface, pull a ship of theseus until no organic matter left then copypasta. Simple.
