/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.




Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? we need to change this, or at least collect the relevant philosophers. discussion about the philosophy of making AGI (including metaphysics, transcendental psychology, general philosophy of mind topics, etc!) is also highly encouraged! I'll start ^^! so, the philosophers i know of who take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. his main contribution here is computational Kantianism. just from the name you can tell he believes Kant's transcendental psychology has some important applications to designing an artificial mind. an interesting view of his is that Kant actually employed a logic that was far ahead of his time (you basically need a sophisticated type theory with sheaves to properly formalize it). other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: he has a blog at https://deontologistics.co/, and has also posted some lectures on youtube, like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - another Neo-Rationalist. he has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". it's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. this guy draws from Kant as well, but he also builds on Hegel's ideas. his central thesis is that Hegel's Geist is basically a distributed intelligence. he also has an interesting metaphilosophy in which he claims that the goal of philosophy is to construct an AGI. like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule following, is very insightful!
Hubert Dreyfus - doesn't quite count, but he did try to bring Heidegger to AGI. he highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy was around way before AI was even a serious topic, lol.

Murray Shanahan - this guy has done some extra work on the frame problem following Dreyfus. his solution is to use global workspace theory and parallel processing of different modules. interesting stuff!

Barry Smith - probably the most critical philosopher on this list. he talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. one of the key points he stresses, together with a colleague, is that our current AI is Markovian, whereas fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link to his criticism here: https://arxiv.org/abs/1906.05833). he is also knowledgeable about analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith)

Uhh, that's the introduction to pretty much every philosopher I know of who works on this stuff. I made a thread on /lit/ and got no responses :( (which isn't surprising, since I am the only person I know who is really into this stuff)
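Smith's Markovian point above is easy to demonstrate concretely. Here is a small sketch of my own (not from his paper): an n-gram model conditions only on a fixed window of recent tokens, so any dialogue whose correct continuation depends on context outside that window is invisible to it.

```python
from collections import defaultdict, Counter

def train_ngram(corpus, n=2):
    """Count next-word frequencies given the previous n-1 words."""
    model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for i in range(len(tokens) - n + 1):
            ctx = tuple(tokens[i:i + n - 1])
            model[ctx][tokens[i + n - 1]] += 1
    return model

# Two toy dialogues whose correct continuation depends on a word
# mentioned far earlier ("tea" vs "coffee"), not on the last word.
corpus = [
    "i ordered tea earlier so bring me tea",
    "i ordered coffee earlier so bring me coffee",
]
bigram = train_ngram(corpus, n=2)

# Conditioned only on the single previous word "me", the model has
# lost the earlier context: both replies are now equally likely.
print(bigram[("me",)])
```

Widening the window (larger n) only pushes the problem back; any fixed n leaves some dependency out of reach, which is the sense in which open-ended dialogue is non-Markovian.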
Open file (67.69 KB 756x688 ClipboardImage.png)
>>13446
>lacan talks about this with his concept of the "big Other"
now, this is not the first time I've heard of Lacan in the last year, and I'd never even heard of him my whole life until then. I don't know if I can get into what he's selling; it's all very scripted, and I have the same problems with it that I have with Freud. personally, I have a lot of issues from childhood, but none of them are potty- or sexual-related, and you'd think those were the root of all psychological trauma after a certain point.
>>13446 oops, forgot namefag lol
>>13519 psychoanalysis is pretty weird, and there are certainly things you probably wouldn't want to recreate in a waifu even if it were true... like giving them an electra complex or whatever. i personally prefer to be very particular in my interpretation of their works. for instance, with jungian archetypes, i'd lean more on the idea that they are grounded in attractor basins. here is a good video if anyone is ever interested: https://www.youtube.com/watch?v=JN81lnmAnVg&ab_channel=ToddBoyle

for lacan, so far i've taken more from his graph of desire than anywhere else. some of the things he talks about can be understood in more schematic terms? im not too much of a lacanian honestly. my largest takeaways are probably the idea that desire can be characterized largely by a breaking of homeostasis, and the big other as possibly relating to some linguistic behaviour (with gpt-3 being what i see as a characteristic example)

one particular observation i think freud made that was very apt was that of the death drive. humans don't just do stuff because it is pleasurable. there's something about that which is very interesting imo. lacan's objet petit a is apparently a development of this idea. it might be related to why people are religious or do philosophy whilst animals do neither

>Even personally I have have a lot of issues from childhood but none of them are potty or sexual related and you'd think those were the root of all psychological trauma after a certain point
yeah, the psychosexual stuff is very strange and i just ignore it. maybe one day i will revisit it and see if anything can be salvaged
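since attractor basins came up above: the idea is easy to see in a toy dynamical system (this is just my own illustrative sketch of the general concept, not of the jungian claim itself). the system x' = x - x^3 has two stable fixed points, at +1 and -1; every starting state falls into one of the two basins, and which attractor it settles on depends only on which side of zero it starts.

```python
def settle(x, dt=0.01, steps=5000):
    """Integrate x' = x - x^3 forward with Euler steps until it settles."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Any positive start lands on the +1 attractor, any negative
# start lands on the -1 attractor.
print(round(settle(0.3), 3))   # right basin
print(round(settle(-2.5), 3))  # left basin
```

the loose analogy is that an "archetype" would be such an attractor: many different initial conditions get pulled to the same small set of stable configurations.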
yesterday i just finished the phenomenology of spirit. as such, i am thinking of starting a reading group on reza negarestani's intelligence and spirit. invite link here: https://matrix.to/#/%23iasreadinggroupzaai:halogen.city

i will summarize a few reasons to be motivated to read this work:
1) it is a work that treats artificial general intelligence from a resolutely philosophical perspective
2) a lot of agi researchers tend to overemphasize the process of building bridges between different domains of knowledge, as well as the abstract logical manipulation of knowledge to reach conclusions (as an attempted solution to the problem of common sense). negarestani instead looks at rationality as a phenomenon constituted by social phenomena as well as the general environment
3) it is continuous with wolfendale's computational kantianism, which interprets kant as an artificial general intelligence project (i already mentioned him in the OP). i believe in particular that wolfendale's and negarestani's understandings of agency (as requiring the modelling of positive and negative constraints) are rather similar
4) i believe he might have interesting things to say about the relation between syntax and semantics, and ultimately the hard problem of content
5) i am personally also interested in it because it might provide an interesting contrast to my own theories. negarestani is much more of a pragmatist, while i am far less scared of endeavouring in vulgar metaphysics. moreover, negarestani draws off of hegel while i draw off of bergson
6) skimming, there are some interesting formal structures that negarestani brings up which may be useful (e.g. chu spaces)

i realized after reading the phenomenology that, of all it talks about, only some of it can really be translated to the project of agi.
nevertheless, since i was already summarizing the movements, i might as well give a general summary of what the book is actually about (to the best of my ability, as this is my first time reading the text!) (i'll also preface this by saying that i have admittedly taken some ideas about what hegel was saying from "Hegel's Ladder". good book, but perhaps not a wise one to read on my first pass through Hegel)

honestly, i never quite understood what philosophers meant when they said X thinker is complex. while sure, they often had a lot of arguments, that wouldn't make the work much more complex than the average mathematics textbook. the phenomenology, meanwhile, is the most complex work i have ever read in my very short life (and hopefully nothing beats it). what is the problem with it? well, i can list a few points:
1) the content is organized in a very peculiar way that i have never seen before. often the phenomenology is described as a historical ascension of spirit over time, but this is not quite accurate. actually, the first few sections (consciousness, self-consciousness, reason, and even spirit) don't occur within time. in some sense this makes sense, because by the time you have a community, all of these sections would be present at once. what hegel does identify as occurring through time is the religion section. here we go through different religions and see how they slowly grope at absolute knowing. the complexity comes from the fact that each of the shapes of consciousness we see in religion ends up recapitulating moments from all of the previous sections simultaneously (pic rel). this made it rather difficult near the end, because it made me start thinking more about all of the lessons learned before
2) hegel's style is very weird. in particular, how he transitions from one mode of consciousness to another is very inconsistent.
with fichte, the transition from one categorical structure to the next was predicated on the fact that the current structure was not enough for the absolute I to be able to posit itself (or something like that). with hegel it is different. sometimes there really is a transition due to an internal insufficiency. other times, what happens is that we have split our consciousness into two sides and one side somehow beats the other. another point of difference: as soon as fichte starts deriving conclusions about a particular logical structure, you know he will eventually claim it is insufficient and move to the next one. with hegel, however, sometimes deriving conclusions about a particular shape of consciousness leads us to a new shape of consciousness. this gets really confusing, because sometimes you are unsure whether hegel spotted an insufficiency in the previous mode of consciousness that you didn't catch, or whether one is actually derived from the other
3) his paragraphs sometimes say a lot at once, which can make it difficult to decipher what the main thing he was arguing in any particular paragraph was
4) when he uses his logical abstractions, it is easy to get lost and have difficulty seeing how they relate back to concrete affairs

at the same time, i think a lot of these difficulties were necessary for hegel to get his point across. what is this point, in essence? before hegel were the philosophers fichte and schelling, who were building off of kant. fichte started this whole mess with his "science of knowledge". in it, he articulated a system that "derives" kant's main categories, the forms of intuition, and other things. i put "derive" in quotes because fichte's form of argument was rather strange. he wasn't quite deducing things in a formal manner. rather, he organized all of the material in such a fashion that every structure kant presented in his critique of pure reason would be justified in existing.
this justification basically consisted in showing that if we didn't have all of these categories, we would be unable to develop a science that is truly comprehensive. fichte is only ever concerned with the structure of reason, so he is called a subjective idealist
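on the chu spaces mentioned in point 6 above: for anyone unfamiliar, a chu space (A, r, X) over a set K is just a set of points A, a set of states X, and a matrix r: A x X -> K saying how each point looks in each state. a minimal toy encoding (my own sketch, following Pratt's standard example of a topological space as a chu space over {0,1}, with r as membership; nothing here is taken from negarestani):

```python
# Points of the space.
points = ["a", "b", "c"]

# States: the open sets of a topology on {a, b, c}
# (closed under union/intersection, contains empty and whole set).
states = [set(), {"a"}, {"a", "b"}, {"a", "b", "c"}]

def r(point, state):
    """The chu space matrix: 1 if the point lies in the open set."""
    return 1 if point in state else 0

# Matrix view: one row per point, one column per state.
matrix = [[r(p, s) for s in states] for p in points]
for p, row in zip(points, matrix):
    print(p, row)
```

the interesting part of the formalism is the morphisms (adjoint pairs of maps between point-sets and state-sets), which let very different structures, from topological spaces to event structures, live in one category; the matrix above is only the raw data.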
>>16839 (cont) then schelling comes in. in his philosophy of nature, he does a similar thing to fichte, but now he is organizing concepts from natural science (ranging from gravity to metabolism) into a system which would justify all of them. a big thing he wanted to do here was to show that there is an "isomorphism" of sorts between the absolute self of fichte's science of knowledge and nature in his philosophy of nature. this would show a connection between our consciousness/freedom as subjects and our physical existence. this is what people often call objective idealism. it was a fun project, though eventually it devolved into identity philosophy. in identity philosophy, consciousness and nature are just posited as identical. in particular, there was an absolute which comes before the two have ever split, and thanks to this absolute the two are always in a unity

then comes hegel. he wants to do a lot of things with the phenomenology. one of these is to show that this identity philosophy was a "mistake". the problem with it is that since it posits the two sides as essentially the same, it tells us nothing about either of them, or why they share any connection. this is what hegel calls "the night where all cows are black". hegel wants to give an account of the connection between subject and object in a way that doesn't merely assert an identity

there is a more precisely epistemological goal here. both fichte and schelling relied on an "intellectual intuition" which gave them some privileged ability to construct their sort of scientific system (i want to say that schelling's identity philosophy is related to this, but i can't 100% confirm that lol). hegel, as opposed to merely accepting such an intellectual intuition, wants to show how this form of science is possible due to a "minimal" metaphysical structure + historical conditioning. a major idea here is that our sciences are really conditioned on how the subject structures themselves.
for instance, if for a subject everything is just extended in space, then everything they work with will be extended in space. this idea is continuous with fichte's idea of the absolute ego positing itself. my reading of positing there is that it is an operation which adds a new item into this abstract fichtean subject's ontology. hegel wants to take this idea and make it concrete, and stress how it involves the way people relate to an object in actuality

already in schelling, we see that his organizing of the different scientific concepts can be understood as a recollection of the ideas of the time. hegel notices this and really wants to push that this is the proper way to understand what a wissenschaft (hegelian science) consists in. so in the phenomenology of spirit, hegel is going to want to articulate what sort of (historically conditioned) subject would produce a science of this type... with all of that in mind, let us summarize the different sections:

>consciousness
i think i have summarized this section pretty well already. things are only what they are by being within a larger system (ultimately a temporal process, though hegel doesn't stress the temporal aspects too much in this work). the subject is likewise only what it is by being embedded in such a field. the self is also embedded in this field. as the withdrawal of objects from the rest of the world is just an abstraction, this means that ultimately kant's concept of the thing in itself is invalid. it is notable that both fichte and schelling also dismiss this idea, but not in quite such a metaphysical fashion

the concept presented here is very much like the buddhist concept of dependent origination (see also: https://plato.stanford.edu/entries/japanese-philosophy/#HoloRelaBetwWholPart), but it does not lead to the denial of the self. this is because even if the self is determined by an other, the separation between the self and this other was an abstraction in the first place.
so it can just as much be determined that the other is just the self. its mode of interacting with the self is filtered by the self's own structure (boundaries on what the self might be, despite the flux of inter-determining entities). we will see this idea again in spirit, where it is both substance and subject. it is subject insofar as it negates itself and produces otherness. as substance, it is self-identical, and this means that even with this other object, spirit still ultimately remains itself. negarestani will recapitulate this idea when he starts invoking andre roden's idea about invariance under transformations (i think with this idea we might be able to make hegel more concrete and applicable to the construction of agi)...

another thing to note is that in this process we see that the self expands its "transcendental" boundaries. thus knowing, and most forms of human action, are what we call "infinite processes". why is such a process called infinite? this takes us to hegel's logic. a thing is finite because it has boundaries. thus it would be natural to say something is infinite if it had no boundaries. but this would just make it the not-finite, and this negative relation to finitude can itself be construed as a boundary. hegel's solution is to make the true infinite a process manifest in finite things transcending their boundaries
Open file (3.17 MB 1412x1059 1645411340935.png)
>>16840 (cont) does this idea have any relation to the hard problem of consciousness? i think we can interpret it as having such a significance. in particular, we may relate hegel's conception of (im)material existence, where each object is related to everything else, to bergson's idea of the holographic matter field. to quote bergson:
>But is it not obvious that the photograph, if photograph there be, is already taken, already developed in the very heart of things and at all the points of space? No metaphysics, no physics even, can escape this conclusion. Build up the universe with atoms: each of them is subject to the action, variable in quantity and quality according to the distance, exerted on it by all material atoms. Bring in Faraday's centres of force: the lines of force emitted in every direction from every centre bring to bear upon each the influences of the whole material world
(i wonder whether we might then relate lacanian psychoanalysis to bergsonian images, and the (peircean) diagrammatic systematization thereof? food for thought, though perhaps deleuze has already done this. i haven't really read him)

what hegel does is explore the epistemological side of this, and in particular how it relates to an epistemological subject (in contrast to bergson, who seems fine with the concept of intuition...)

>self-consciousness
we now have to examine the ways consciousness uses this process to model consciousness. we begin to learn in this chapter the importance of intersubjectivity for this endeavour. in the slave we see the pre-configuring of the idea that outside spiritual forces impinging upon us might provide us the conditions to change how we think about the object

we also learn the dangers of taking the idea that the self is in some sense the "centre" of the world in a crude manner. because if we are not careful, we rid ourselves of substance, and we thus lose all essentiality with which to work.
this culminates with the unhappy consciousness, which is distraught at its inability to find any essentiality in the world and posits God as a way to find meaning... we need something to actually work with to have a wissenschaft, and this ultimately means we should look out into the world. this takes us to the reason section

>reason
in reason, we have a consciousness that is now self-certain. this means it is sure it can find its own essence within the world. our first cautionary tale lies in immediate self-certainty. think of this structure as akin to kant's unity of apperception (i.e. the statement that i can attach "I think" to any of my perceptions). the problem is that to confirm such a statement, we would need to attach the "I think" to every object in existence. this is a never-ending process, what hegel calls a bad infinity. what we really need is a good infinite

so reason goes into the world by means of empirical science. in observing reason, it looks for itself in natural phenomena. this ends in a failure that climaxes in phrenology, where it tries to look for the self in a dead skull (it should be remarked that hegel's comments on phrenology could easily be applied to neuroscience. the article "Hegel on faces and skulls" could be a fruitful reading). historically, the shock of spirit trying to find itself in death is akin to the shock of christ's crucifixion, which was ultimately the precondition for the instantiation of the holy spirit in the community. we won't quite see this parallel drawn until later, but i don't really know how else to structure this summary

lastly, we actually see reason try to reproduce itself by means of practical affairs (this reproduction is what hegel terms "happiness")... i honestly really like hegel's emphasis here, as i have my own suspicions about the importance of everyday activities in the constitution of the subject.
however, i believe he is still thinking about this in a way that is too abstract and perhaps too based around duty (not to say that is completely wrong)... anyways, after a bunch of shenanigans, we learn that indeed this practice must be something actual, in action. moreover, hegel makes some interesting remarks about action here. for one, in action, the subject-matter worked on by a self-consciousness comes into the light of day. this allows other self-consciousnesses to recognize it (though he doesn't stress this point as much until the morality and conscience section) and work on it as well. furthermore, the significance of reality to self-consciousness here is only as a project to be worked upon. thus, in this practice-focused reason, we have self-certainty properly realized. the fact that in actualization other self-consciousnesses recognize it and may partake in it means that actualization is a necessarily intersubjective affair. this lets us segue to spirit

>spirit
in spirit, every action performed by an individual is at the same time a universal action. the most basic way this is manifest is in the division of labour and the mode of production. this makes it so that in truth every individual action has a higher rationality to it, and can be contextualized within the larger community. this will be crucial for what we will see later

our first lesson here is to notice that in the different spheres of social life we have a social "substance" which persists over time. the hegelian scientist will use this substance in order to construct their system. the substance also has the almost obvious significance of helping to constitute the individual (a point i think negarestani might further expand). the point, though, is that this substance not only builds the individual by means of acculturation, but is also the bedrock of their reason. and it is precisely for that reason that the hegelian scientist studies the spiritual substance.
at the same time however, individual self-consciousnesses are the means by which the substance might contemplate itself, and moreover the means by which the substance might change over time (while still ultimately remaining itself)
>>16839 (cont) hegel also stresses the point of language here (he already did in the reason section, but not quite as much). in language, particularly in the word "I", the individual not only expresses their individual self-consciousness, but at the same time expresses themselves as a universal. thus through language, the individual can rise up to a higher significance

finally, our last lesson is to look at morality. in this form of thinking, there is duty on one side, and acting on the other. the problem with acting according to one's duty is that it can always be construed as a selfish action. this stems from the fact that we always act in an individual fashion, and as such we always bring our finite perspective into things, which might corrupt everything. hegel's ultimate response to this will be to take the stance of forgiveness. we should on one side be accepting of our particularity, and at the same time be more compassionate in understanding the actions of other individuals as well. we should come to understand the underlying rationality of their actions, despite it being sullied by individual caprice. i don't think this necessarily means that we shouldn't condemn others, but we should be able to partly distance ourselves from the blooming, buzzing confusion of life, at least if we want to do a properly hegelian science

(i think it might be interesting to remark on the relation to marxism here. the author of "Hegel's Ladder" points out that since marxists want to orient themselves around action, they are not quite doing a hegelian science. this seems even more so the case when we observe the fact that marx even made predictions about the future, something hegel would never do, since his science was "merely" recollective. i personally think that this orientation to action might be a positive addition, especially if we are trying to be waifu engineers.
this partly explains my growing interest in dialectical materialism, including thinkers such as ilyenkov and vygotsky)

>religion
here we now see everything brought together. i think hegel's main idea here is that in religion, one's deity is really a representation of the community. so in his analysis, he is looking both at the structure of the community (spirit), its relation to the deity (self-consciousness), and the ontical/ontological way this deity is represented (consciousness, reason). note that my association with the different sections isn't completely foolproof, but at any rate, we can sort of see why hegel needs to make use of all of the previous shapes of consciousness in this extremely complex way... ultimately the religion section culminates in his discussion of christianity, which he terms the absolute religion. in christianity, we have a pictorial representation of the reconciliation between the individual self-consciousness and the universal self-consciousness (duty, the entire community). in christianity, God (a universal) incarnates into a man (an individual). through the death of christ, we see christ's return to God, who is now identified with the holy spirit instantiated in the universal community

it is in this story that we see hegel's answer to the death of God. it should be noted further how this connects back to his concept of science/wissenschaft. the hegelian scientist first withdraws away from the universal substance and into themselves (paralleling the incarnation of christ). this withdrawal is important for what i was talking about before. we need to somehow keep some distance from everyday affairs.
afterwards, we dip back into this substance and sacrifice our individual stance in favour of a universal one (paralleling christ's death). i think something to note here is that the concept of forgiveness is important, as it allows us to consciously take such a universal stance in the first place (ofc we could have unconsciously taken such a stance before, in the dogmatic metaphysics of the greeks and other western thinkers before kant). i think an important note hegel makes in the morality chapter is that the individual acting doesn't know all of the ramifications of their actions. but learning of these ramifications is the job of the entire community who recognizes this action. this is very important. people often dress hegel up as trying to make a complete system that comprehends all of reality, but that is such an unfair caricature. hegel knows he is a mortal man who might be fallible, and he knows that we might not have all the facts at this very moment. this just means that the process of science (wissenschaft) is an ongoing one

this forgiveness point might be explored more deeply by robert brandom, who is another philosopher that has influenced reza negarestani

>absolute knowledge
hegel wraps everything up and articulates what he means by science. he also makes some short comments about other important disciplines and their relation to what he is doing (e.g. history and the science of nature)

anyway, that was my summary. i tried connecting this stuff to negarestani and bergson to better anticipate concrete application for you guys. sadly, i believe i am going to be grinding away at this maze of abstraction for some time :(

>>16841 ooh, i almost forgot one thing to mention about hegel's difficulty. part of the reason why hegel presents all of these different modes of consciousness is that he wants to show why and how they were wrong + what positive advancements they made.
it is sort of interesting in that it has this rhetorical restructuring of the usual dialectic philosophers use, where they refute opposing positions and (usually only sometimes) extract key insights. skimming this link, i see it contains interesting comments too: https://www.proquest.com/openview/a0fb4cf78d8587fb2f919a4e6c05deca/1?pq-origsite=gscholar&cbl=18750&diss=y
>>16839 >...while i am far less scared of endeavouring in vulgar metaphysics. OK, gotta admit I keked at the degree of vulgar hubris seemingly wrapped up in this oxymoron :^)
I will post this here because I couldn't find an off-topic thread. I have been thinking about ai and consciousness for some time, and I think I have reached some leads that I'm too dumb to explore. I'm also sure that previous authors have probably considered and disproved my points, but I couldn't find them

I had a single "ai" course in college, and from what I can remember, modern ai techniques are either applied statistics or neural networks, which are also statistics but with different algorithms. older nlp methods tried to reconstruct human linguistics; modern methods tend to rely more on statistics (n-grams, etc.) than anything else. something similar happens with computer vision iirc

my instinctive reaction at first was to associate intelligence with language. but something I have noticed learning languages is that often, when I learn a new language, I also re-discover concepts that, although I already had them "in the back of my mind" so to speak, I couldn't comfortably express with the languages I knew. I noticed that the more distant the language "culture", the more common this was

could it be that there is a "level" of reason that precedes speech and the inner monologue that we associate with reason? a consciousness that can understand complex concepts, things that we might not even be able to "verbalize"? I realized that there are things that neither emotions nor words can express. does that mean that this "deep reason" expresses itself through neither words nor emotions? that it is completely mute except for "signals" which the inner monologue receives but can never completely express?
if this "deep reason" that seems to inhabit somewhere near the primal instincts precedes emotions, and I know that controlling inner speech is much more straightforward and easier than controlling one's emotions, does that mean that emotions and inner speech are two separate organs that receive and provide feedback to this reason "organ", and to each other? sometimes when I try to verbalize ideas, there is a part of me that might not like the result, and thus gives a signal to the inner speech (to me?) to stop and start again from the beginning

how does that feedback between inner speech and reason work? if reason is mute, it shouldn't be able to understand what the inner monologue "produces" (speech). is there an auditory organ that receives this speech and outputs signals that the consciousness understands? if so, is it the same organ that processes regular sounds from the environment? and finally, if those signals that the consciousness uses to communicate with the inner monologue exist, why are they so evasive? why is it so hard to "expose" them?

tl;dr I think that intelligence precedes sensory stimulation (I'm probably wrong)
>>16967 >I will post this here because I couldn't find an off-topic thread. Sorry about that Anon. It's always our /meta thread (>>15434 is current). I've updated the OP to clarify. This seems like the best thread actually (but I can move your post over to /meta if you'd prefer). Cheers.
>>16967 Did you watch this interview with the guy who left google b/c he thought their LaMDA AI was gaining sentience? On the surface it seems silly, but listen for yourself https://www.youtube.com/watch?v=Q9ySKZw_U14
>>16978 >>16967
For some reason, when I paste and hit enter, the board feels compelled to just post the entire thing before I've finished. He's a pretty good speaker and he took me by surprise. A few of the points he makes seem like a stretch, but he puts it all together and makes it "make sense"; I wouldn't throw it out offhand. For example, the GPT-like program can be seen as nothing more than a trained database, but he is saying that the "intelligence" or awareness isn't in the speech but in the entire thing. That part is merely its "voice". That concept opens up a lot of possibilities. Yes, LaMDA cannot taste food or climb a mountain, BUT it obtains the data of those experiences through us humans; in a sense we are its sense organs, i.e. the entire internet is "intelligent" when it all comes together this way. Now we're getting somewhere, and I'm really interested to see where this goes if this is the First Step.
>>13540 How did I only now catch this reply. Well, thanks and I will queue up that video to watch when I get back home
>>16979 The irony is, assigning more intelligence, self-awareness and individuality to entities than they would deserve, is the reason why we are here together in the first place. But yeah, that aside, his arguments sound more creative and interesting than I thought at first.
>>16981 >The irony is, assigning more intelligence, self-awareness and individuality to entities than they would deserve, is the reason why we are here together in the first place. Interesting take on things Anon. I suspect you've been with us for a while to have developed such sophisticated insight on the basic problem we're all dealing with here on /robowaifu/. To wit: current year pozz. Ted K. was right about many things, sadly.
Open file (106.64 KB 1080x378 2f6.jpeg)
>>16967 >could it be that there is a "level" of reason that precedes speech and the inner monologue that we associate with reason? a consciousness that can understand complex concepts, things that we might not even be able to "verbalize"? Imagine
Open file (505.30 KB 1600x884 lm_loss.png)
>>16967
>tl;dr I think that intelligence precedes sensory stimulation (I'm probably wrong)
I think you are right, and intelligence could be understood as (a sophisticated form of) predictive coding: http://ceur-ws.org/Vol-1419/paper0045.pdf
Animals and human beings have a strong reproductive benefit when they are able to predict the behavior of their surroundings, and naturally evolution (not willing to delve into metaphysics here) has endowed us with it. Machines can follow suit, as we optimize their predictive performance with respect to a loss that is hard to game. The scaling hypothesis is complementary to this view: https://www.gwern.net/Scaling-hypothesis
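The core loop of "minimize prediction error" can be sketched in a few lines. This is a bare-bones delta rule, not the full predictive-coding machinery from the linked paper; the function name and learning rate are my own choices for illustration.

```python
def predictive_update(estimate, observation, lr=0.1):
    """Nudge an internal estimate toward an observation by a fraction
    of the prediction error (a bare-bones delta rule)."""
    error = observation - estimate          # the "surprise"
    return estimate + lr * error

est = 0.0
for _ in range(200):                        # repeated exposure to a stable signal
    est = predictive_update(est, 10.0)      # prediction error shrinks each step
```

A system whose surroundings are predictable drives this error toward zero; loss curves in language-model training are the scaled-up version of the same picture.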
>>16993 holy shit lol
>>16967
I'm probably exposing my own Autism levels, but I tend to think more in images than words. Basically, I'm seeing something in my mind and describing what I'm seeing, which can lead to weird results if I don't think carefully about my words, or re-read what I'm posting and edit/trim/substitute better words, etc. Tbh though, a few of my friends think that makes me funny; they like how I describe things b/c of the crazy detail I'll just pull out of my ass (because I'm seeing this all play out in my mind like a movie). Just my 2c.
>>17015
Certainly a very emotionally engaging video. However, I think most of its conclusions and sentiments are exaggerations and projections. There's not one AI, so our intelligence compared to the tech doesn't matter that much. These are tools which will help us do things. For example, having many individual robowaifus.
>>17016
This compass is slightly better than the usual one, where irrelevant characters were touted. Though I'm not really sure how Sundar Pichai is a centrist here, or how Robin Hanson is long-term negative. Note John Carmack https://twitter.com/ID_AA_Carmack on the right, by the way (and where is Andrej Karpathy?).
Really, I don't think the AI per se is going to be a problem; it's the specific people. The tech-czars (just look at this and imagine the mindset this sprung from: https://techcrunch.com/2021/10/21/sam-altmans-worldcoin-wants-to-scan-every-humans-eyeball-and-give-them-crypto-in-exchange/ ) in elite Silicon Valley residences can easily ask the AI to work towards some really unpleasant ends. That's why A(G/S)I should be widely distributed. Buy used RTX3090s in good condition while it's still possible.
>>17010
Spatial intelligence is a gift not everyone has (no, really). I do have it as well.
Open file (105.27 KB 496x281 shapeimage_2.png)
Open file (158.32 KB 797x635 pgac066fig1.jpg)
Open file (42.30 KB 791x354 pgac066fig2.jpg)
Open file (65.13 KB 732x443 pgac066fig3.jpg)
Open file (163.09 KB 717x679 pgac066fig4.jpg)
probably it's best here safe with us LOL. :^) - How it began >BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment [1] >We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. 
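(The "edge-to-edge" filter the abstract describes can be read roughly as a cross-shaped filter over a connectivity matrix: each edge is updated from all edges sharing one of its endpoints. The sketch below is my simplified reading of that idea, not the authors' implementation; the function name and weights are hypothetical.)

```python
import numpy as np

def edge_to_edge(A, row_w, col_w):
    """Cross-shaped 'edge-to-edge' filter: the new value of edge (i, j)
    aggregates all edges touching node i (row i) and node j (column j)."""
    n = A.shape[0]
    out = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            out[i, j] = row_w @ A[i, :] + col_w @ A[:, j]
    return out

A = np.eye(3)                       # a trivial 3-node "connectome"
out = edge_to_edge(A, np.ones(3), np.ones(3))
```

The point of the topological locality: unlike an image-CNN patch, the neighborhood of an edge in a brain network is the set of edges sharing a node with it, not the entries sitting next to it in the matrix.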
- What it became >Functional connectivity signatures of political ideology [2] >Emerging research has begun investigating the neural underpinnings of the biological and psychological differences that drive political ideology, attitudes, and actions. Here, we explore the neurological roots of politics through conducting a large sample, whole-brain analysis of functional connectivity (FC) across common fMRI tasks. Using convolutional neural networks, we develop predictive models of ideology using FC from fMRI scans for nine standard task-based settings in a novel cohort of healthy adults (n = 174, age range: 18 to 40, mean = 21.43) from the Ohio State University Wellbeing Project. Our analyses suggest that liberals and conservatives have noticeable and discriminative differences in FC that can be identified with high accuracy using contemporary artificial intelligence methods and that such analyses complement contemporary models relying on socio-economic and survey-based responses. FC signatures from retrieval, empathy, and monetary reward tasks are identified as important and powerful predictors of conservatism, and activations of the amygdala, inferior frontal gyrus, and hippocampus are most strongly associated with political affiliation. Although the direction of causality is unclear, this study suggests that the biological and neurological roots of political behavior run much deeper than previously thought. - What it probably means >AI Predicts Political Ideology [3] >According to popular wisdom, if you want to avoid conflicts with people, don’t bring up politics or religion. This saying seems even truer today as the polarization of thought continues to increase. We like to think our political views flow out of rational thinking, so when someone disagrees with us it’s natural to show them the superior reasoning and argument behind our position. 
But what if you discovered that I could predict your political views just by analyzing some brain scans performed during a few routine, nonpolitical activities? Wouldn’t that indicate that biology determines your political views? Or is the picture more complicated? - What I think it means If you're a rational anon bad goyim, filled with rational thinking wrongthink, then stay away from the MRIs Anon! :^) 1. https://pubmed.ncbi.nlm.nih.gov/27693612/ 2. https://pubmed.ncbi.nlm.nih.gov/35860601/ 3. https://reasons.org/explore/blogs/impact-events/ai-predicts-political-ideology >=== -minor fmt edit
Edited last time by Chobitsu on 07/25/2022 (Mon) 20:57:18.
>>17022 I highly doubt that there's just "conservatives" and "liberals". These labels themselves are probably part of the conspiracy or flaw in our minds. I won't elaborate on that much further, since I try to avoid politics.
>>17042
There are distinct types; many have made the comparison to male/female. It's a preference for one set of values and priorities over another. I don't think any typical "conservative" really wants to be an "ebil racist" or be cruel to others for the sake of being cruel. However, if it's a matter of protecting his family or even the peace and order of the neighborhood, small pleas of "this is unfair to x, y, z group!" aren't going to hold much sway. Therefore the conservative values security and order. I guess "stop and frisk" and racial profiling would be a perfect example of this: it works, yes it can be abused, no it's not entirely fair, but I will bet it prevents more crime than not.

Conversely, there are numerous examples of "liberal" priorities being held as more important; abortion is a perfect example. The liberal feels the right to "ownership over one's body" supersedes the right of the unborn and the imposition of social order by forcing those who become pregnant (this really doesn't happen to innocent bystanders, let's be 100% honest) to bear their children. The argument being that women feel they are being robbed of their bodily autonomy by being forced to "bear" a pregnancy. (We could unpack this one for days, but you get the point.)

One could propose a nearly endless set of "fence" arguments where this division would quickly show itself. I can see the arguments on both sides, maybe I'm one of those rare people, but the irreducible thing about the reality we live in is that we *have* to make the cut one way or another, as we can't have things both ways. (As we see with the Roe v. Wade debate: how do you "compromise" abortion rights?)

Anyway, that's probably as close to politics as I want to get on this board, but there is another interesting link that goes into some detail on the different value sets held by conservative/liberal worldviews. https://blogs.lse.ac.uk/politicsandpolicy/five-foundations-theory-and-twitter/
(free 2b lewd for putting up with my wall of text)
>>17022
right, because after 2000 years we suddenly solved the mind-body problem. turns out the correct answer was to just ignore it.

there's a reason why neuro"""""science""""" (which is not neurology, a real medical science) has been absolutely rekt and ridiculed for decades by scientists and philosophers alike. its very foundation rests on the most obvious of fallacies, one that completely ignores the most fundamental problem of trying to apply the scientific method to something that's IMMATERIAL. e.g.: i see the propeller of a plane moving when it flies, QED i think propellers are the source of "flying"; allow me to confirm my hypothesis by removing the propeller, oh look it cannot fly anymore, QED flying is an intrinsic physical property found in the propellers of planes, QED putting a plane's propeller on a boat will allow it to fly.

literally everything involving ai today is an idiot trick targeting tech-illiterate investors/speculators desperately looking for returns in a world dominated by zirp/nirp policies and artificially suppressed bond yields. to some imbecile that doesn't know better, showing them how a branch predictor in a cpu can literally predict the future, and do so with >75% accuracy, would make them think time travel is real or some stupid shit that's only in the mind of retards. in the end it's just a sophisticated algorithm and some basic idiot-level statistics.

stop being so naive idiot
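(For the curious: the branch-predictor aside above really is just counting. A toy sketch of a classic 2-bit saturating-counter predictor, in Python rather than hardware; the function name and the >75% scenario are my own illustration.)

```python
def simulate(history, counter=2):
    """2-bit saturating counter, initialized 'weakly taken': predict
    taken when counter >= 2, then nudge the counter toward the outcome."""
    correct = 0
    for taken in history:
        predicted = counter >= 2
        correct += (predicted == taken)
        counter = min(counter + 1, 3) if taken else max(counter - 1, 0)
    return correct / len(history)

# a loop branch that is taken 9 times out of 10
acc = simulate([True] * 9 + [False])   # mispredicts only the final loop exit
```

No time travel involved: real branch streams are highly repetitive, so even this four-state machine scores well on them.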
>>17057 >stop being so naive idiot LOL. Chillax bro, I'm not the enemy here. :^) Have a nice, soothing catgrill image tbh...
>>17022
In light of https://www.nature.com/articles/ng.3285 (TLDR: all human traits are heritable) and https://en.wikipedia.org/wiki/Omnigenic_model https://en.wikipedia.org/wiki/Pleiotropy#Polygenic_traits (TLDR: most important genes influence most complex traits to some extent, with some definite sign; this produces a rich correlational structure, which is enmeshed with the additional correlational structure flowing from local selection of trait complexes. Perceptive people, and other systems, can learn to associate these clusters of correlations to make some useful inferences), it should follow that many of our mental traits correlate with some of our externally measurable features. The correlations aren't that strong, though. ML systems are good at spotting such correlations, but ordinary statistics does a pretty good job at it as well.

Expect what could be called "physiognomic AI" to become better with time, but don't despair: the same traits that may make some groups less politically desirable can be linked with other valuable traits. For example, republicans tend to work in hard-value occupations without which our society couldn't function. The people who use these systems aren't stupid enough to deprive themselves of a critically needed labor force just because they now have a CCTV app that can probabilistically label someone as "rightwing", for example. There are easier ways of doing that lol, just measure grip strength and/or see if someone is ripped.

>>17057
>literally everything involving ai today is a idiot trick targeting tech illiterate investors/speculators desperately looking for returns in a world dominated by zirp/nirp policies and artificially suppressed bond yields
While I agree with the zirp part, it is also true that modern ai is obviously working, and C-suites use it to hype their companies, as has been done with other technologies in several cycles already (remember the promise of nuclear cars and alaskan beaches and whatnot?).
You don't have to deny that deep learning just werks to be opposed to the corpo cathedral, which is going to use it to their own ends, mostly evil. I think AI denialism will have to retreat into ever more esoteric positions once the tech moves forward. It's already obvious that these systems can be pretty creative, and there is no end in sight to the scaling curves of their quality. Lol, look at these random pics from the midjourney discord https://nitter.tokhmi.xyz/search?q=%23midjourney
I expect some people to deny AI to the bitter end though, and end up like Qomers.
>>17068
this isn't ai, it's a filter algorithm using REAL images taken from an image search, selected and modified based on a template. why do you think the program needs keyword inputs, you fool? typing x in jewgle and merging the first 10 matched results together doesn't make it ai. this crap is literally a byproduct of facial recognition software that some idiot/genius repurposed, and now everyone and their grandmother is making their own """"ai"""" art program, including these idiots that are using art galleries to make a bullshit art generator to trick gullible fools like you and raking in billions from selling fantasy and lies.

yeah sure, you can say pattern recognition software has become insanely advanced, but that has nothing to do with fucking ai, ie. artificial INTELLIGENCE. this isn't even new; people have been pulling this bullshit trick ever since the lambda paper, calling every interpreter or self-modifying binary ai instead of software, an obvious consequence of abstract programming languages that have nothing to do with reality. when no one bothers to learn machine code, everyone thinks it's magic or self-aware or some stupid shit that's only in the mind of retards. ai isn't a software problem; your fucking program can only be as intelligent as the cpu that's running it. until intel makes a literal fucking brain circuit that can assemble its own alu and modify its own instruction set and microarchitecture, there will never be anything that could ever be considered ai. it's all fucking software.

ps. i deleted half this post, it was just insulting stupid people
>>17068 >but don't despair LOL. I assure I won't, my friend. :^) >>17069 Heh, much as I enjoy reading your posts Anon, your insults lack that certain flair that might keep them from summarily getting dumped into shitpost central if they continue apace. >tl;dr Mind toning it down a little? This isn't /b/.
a lot of stuff happened in chapter 1 of I&S, partly because he is summarizing the main themes of the book. i am not sure if i will post a full chapter-by-chapter summary, but whatever. in this chapter we find the following ideas:

1) negarestani elaborates on a basic description of functionalism. for him, a function is only intelligible through action, and furthermore within a particular context. we cannot speak of functions as pertaining to things, but rather to processes. he remarks that what he is aiming for is a pragmatic functionalism as opposed to a metaphysical functionalism. what he is recapitulating here is an idea to be found in kant's 3rd critique. there he notes that while animals are still well embedded within the contingent causal order of nature, we still talk about them "as if" they were rationally organized... honestly, i am not quite sure why he wants to stick to this sort of neo-kantianism. the difference between this and a metaphysical account is obscure, and i can only think he is just trying to avoid issues like the hard problem of mental content. not sure about this one

2) negarestani further elaborates on his functionalism by articulating a "deep functionalism". traditional functionalism rests on the simple dualism between function and realizer. as computation can easily be understood as an observer-relative notion, we could construe many things as performing computations wherever it would seem appropriate. this can ultimately lead to absurd considerations about the sentience of galaxies or what not. negarestani also diagnoses this view as being at the heart of the whole agi scare and wild speculations about super intelligence. his solution to this problem is that we should also consider important material and structural constraints when talking about such things.
this honestly seems strange to me, as he puts on the pretension of not having a metaphysical approach, yet this deep functionalism seems to bring in the question of proper grounding when it comes to function realization

3) elaborates on the idea of self-relation. this shouldn't be too alien to us now that we have seen what hegel has talked about with infinite processes. negarestani's account makes some seeming innovations on what i consider hegel was doing. first he has this separation between formal self-consciousness and concrete self-consciousness. the former is this abstract structure of identity (think about the fichtean 'I' [the self that posits itself], which i read as beginning by including a simple self-referential relation into its ontology [in the information-systems sense of the word], then expanding outwards with new structures in order to try and attain an adequate self-referentiality). now if we think about such a formal structure dialectically, we see that we must abstract (think here of aw's article: https://epochemagazine.org/07/hegel-were-all-idealists-just-the-bad-kind) from the rest of reality in order to arrive at such an object. so we see that this formal self-consciousness can only be instantiated by a concrete self-consciousness that is related to its environment (here we see an understanding of self-consciousness that is much more properly hegelian. with fichte, there was an infinite striving which made it impossible to attain full self-referentiality. furthermore, fichte could never explain the historical conditions of "intellectual intuition", i.e. how the self could posit itself in the first place. this was something hegel had to do in the phenomenology). concrete self-consciousness's relation to an external unrestricted reality requires it to commit to concepts that are revisable.
such concepts make up the order of reason (compare this to how hegel frames reason as this certainty that one can know all of reality provided one gets out into the world and continuously expands one's boundaries)

4) with concrete self-consciousness there is stressed a feedback loop between two poles. on one pole we have discovering the conditions that gave rise to a particular intelligence (for instance the historical conditions that gave rise to a particular culture, or the evolutionary conditions that gave rise to speech). on the other, we have such discoveries leading to new "modes of integration". in other words, better understanding how one is constructed can lead to the development of higher rational capacities. there is a social dimension to all of this in so far as it isn't just individuals who investigate such things. negarestani says that geist reflects on its history (and thus its conditions of realization) and is thereby able to take an outside perspective on itself.

5) negarestani stresses the importance of the "labour of the negative" in order to bridge mind and world. it is only through such a process that we come to know things... to better understand what he means by this, i recommend checking out negarestani's essay titled "the labour of the inhuman". here, he stresses that any serious humanism should "commit" to the idea of the human. what is meant by commitment here entails a rational responsibility to explore the inferential relations tied to a concept and revise the concept based on new ramifications that reveal themselves. without doing this, there is no reason. negarestani describes such a process as taking an "intervening attitude". furthermore, inferentialists differentiate between labels and descriptions. labels are classifications made by a system; this involves reliable (but passive) exclusion of inappropriate objects from a category. description meanwhile involves the capacity to assess the consequences of a labelling.
for instance, if bob is a dog, then he is a mammal. they also introduce the concept of "material inference" into all of this, for inferences that have not been formalized

6) through this labour, geist can slowly refine its self-conception. this corresponds to a series of self-transformations which purify it by stripping away what is contingent
>>17079
7) mentions the need for positive and negative constraints. positive constraints pertain to conditions which ought to be fulfilled for a goal, while negative constraints pertain to contingent conditions (such as our natural behaviours and constitution) which need to be modified

8) negarestani criticizes antihumanists (i.e. people who deny a human essence; think nietzsche, stirner, etc) for only criticizing humanism in an abstract fashion. all it does is negate the whole of human essence, but without serious labour they can easily fall into the trap of committing themselves to hidden essences. in opposition to antihumanism, negarestani wants to give us a serious account of sapience. this is rather complicated:
a) there is a duality between sentience and sapience. on one level we have individual selves which by themselves would just be sentient animals. however, when they are functional items of geist (thus embedded in a larger social system) they become sapient
b) sapience has a formal level where recognition is articulated in an abstract manner. i think he is going to use the tools of computation theory and ludics to do this
c) there is lastly the concrete level, which involves the interaction of actual agents that can recognize and make interventions in the organization of geist

9) negarestani references pete wolfendale's article "the reformatting of homo sapiens", which makes further elaborations on the distinction between sentience and sapience. first we start with sentience. at the bedrock level of this we have drives. drives are causal systems that take inputs and return outputs in a systematically correlated manner. if we are just talking about drives, we are only talking about biology. to enter the psychological we need 2 integrations. the inputs of different drives need to be integrated into a new "standard format" of representation.
this higher-level representation gets distributed globally (we can think here a bit of global workspace theory, but wolfendale is stressing that there also needs to be an integration involved as well). this integration of inputs is called "world". secondly, we need "self". this is an integration of the ways the different drives produce outputs... one way to think about this is that different weak ais are sort of like individual drives. they need to be integrated together to make a system that could be properly psychological (at least to wolfendale - i am a bit wary due to the representationalist ontology that he seems to be working with)... so that is what lies in sentience. in order to get sapience, we need coupling with a larger social system. what this coupling does is let us retool rigid biological behaviours for new purposes. related to this he makes a fascinating remark about the frame problem. he interprets it as displaying language's ability to make explicit assumptions which were implicit, and subsequently manipulate such behaviours. he stresses here the idea of "unframing", which involves the abstraction of capacities from their rigid evolutionary contexts

10) gives an initial definition of discursive apperceptive intelligence: an intelligence whose experience is structured by commitments (thus involving material inferences and the need for an intervening attitude)

11) stresses hegel's idea that language is the dasein of geist (i've already taken note of this statement, for example when talking about usage of the word 'I' raising the individual into universality...). the abilities of geist to recognize other agents and engage in retrospection both require language. an important ability of language is desemantification and resemantification. the former involves the ability for a formal language to be detached from some content, while the latter involves the ability to attach such a language to a new content.
negarestani believes that through this process an agent is able to expand its reasoning capacities. in his articulation of the importance of language, "interaction" also becomes an important term. for him, through interaction two systems are capable of correcting one another. furthermore, we have situations where higher-order interactions can incorporate lower-order ones. this again points to new modes of organization and (thus) reasoning capacities

>>16967
i think some of this connects to what i am currently reading. for the inferentialist, there is a lot of structure even to our implicit practices. however, by making things explicit in language, there is more room for experimentation. to that extent i see some similarities between the inferentialist project and psychoanalysis. actually zizek wrote an article criticizing robert brandom (here: https://nosubject.com/Articles/Slavoj_Zizek/in-defense-of-hegels-madness.html)... there are other articles that seem to connect inferentialism to psychoanalysis (here: https://trepo.tuni.fi/bitstream/handle/10024/100978/GRADU-1493192526.pdf?sequence=1&isAllowed=y .... and here: http://journals.sagepub.com/doi/pdf/10.1177/0191453708089198)

>>16873
lol, well neither continentals nor analytics seem to like metaphysics, and they want to put it into a straitjacket. i personally just ignore such prejudices though...
>>16978 >>16979
i generally agree with pete wolfendale's recent takes on this issue: h tp s: //tw@tter.com/ deontologistics/status/1538297985196507139
something he is really stressing in all this is the requirement of selfhood and self-legislation. lamda isn't capable of these things. he has written a lot on this topic. one such place was the reformatting of homo sapiens article. there are other places as well: h tp s: //tw@tter.com/ deontologistics/status/1396563992231940098

>>17046
>https://blogs.lse.ac.uk/politicsandpolicy/five-foundations-theory-and-twitter/
moral foundations theory is generally more attractive than other attempts to ground morality on a single concept (harm), as it actually tries to acknowledge the diversity of the different phenomena. however, i suspect its attempts at comparing how these foundations are expressed in different worldviews might have hidden flaws. for instance, the foundation of liberty should ultimately have its evolutionary roots in animal territoriality. thus any system that takes seriously the question of rightful ownership is ultimately homologous to this foundation. as far as i know, this is done by most political systems, however there are subtle differences in expression. for instance, a nationalist or fascist might have no problem redistributing wealth and abolishing private property, however the concerns of territoriality would be extended outwards to the entire nation. socialists meanwhile have a sense of territoriality in so far as they see wage labour as exploitative of the time a worker has put into working. these subtleties might not be captured. there are still some systems that do not care for territoriality qua ownership, but these would be those that are completely globalist and somehow want extreme redistribution of wealth... another aspect of liberty is the capacity for self-legislation (to bring pete wolfendale into this).
this is something that even the more totalitarian wealth-redistributers don't tend to try and completely dissolve
>(free 2b lewd for putting up with my wall of text)
damn, why didn't i think of that? but i don't have many lewds. largely just cute anime dolls

>>17016
i personally believe this axis is wrong because it buys into a californian-ideology understanding of artificial intelligence. what we have now is a cargo cult that worships decades-old techniques while attaching a theological eschatology to them. there will be no super intelligence, not because it is impossible (for we could always ask the empty question of "what if?"), but rather because it is not justified by any material conditions. i think luciano floridi really needs to be read on this topic. current weak ai is only effective because humans have continued to alter their ecological niche into becoming increasingly predictable. even gpt-3 is only moderately effective because we have given it the entire internet as its data set! in what other period of human history could we have gathered and compiled such a vast corpus of human conversation? instead of chasing some magic pixie super intelligence, we should try understanding the potentials and limits of the ais which we currently have. this involves examining how society must be organized so that an ai is integrated into it in the most effective fashion.

if i start writing books, this will probably be the first one. we need a proper ecological science of artificial intelligence to supplement the social myopias of the current ways we study ai. we should also invest more in interoperability and mathematical methods so that we can make sure that our ais are operating in a reliable fashion.

ive written some more on all of this elsewhere as well (started using this board last year due to interest in hegel, dialectical materialist psychology, and cybernetic planning...
it has some intelligent discussion sometimes, but eh): http://web.archive.org/web/20220320072021/https://leftypol.org/leftypol/res/857172.html#861221 >>17069 why do you think intelligence needs to modify its own hardware? >=== -disable direct globohomo hotlinking
Edited last time by Chobitsu on 07/31/2022 (Sun) 00:19:19.
Open file (9.69 KB 275x183 epic_win.jpg)
>>17083 >this involves examining how society must be organized so that an ai is integrated into it in the most effective fashion <What could possibly go wrong with this carefully-laid plan Anon? /robowaifu/'s plans for the use of AI, and """society's""" 'plans' for the use of AI are two entirely differing plans tbh. If we have our way with our culture, then theirs will quickly collapse. If they have their way, then we will literally all be hunted down and bodily destroyed.
>>17084 >"""society's""" 'plans'
i am not talking about robowaifus but far more practical manners such as factory robotics, transportation, etc. these weak ais need to operate in constrained conditions to work properly. they all have some limit to their robustness due to their lacking sentience and reason. instead of trying to make agi to solve robustness issues, we should be more intelligent with the non-conscious ai that we have (not saying that there shouldn't be progression in the techniques of ai though). and again, by more intelligent, i mean alter the context they are operating within in order to make them more effective. a somewhat silly example: we are working with a robot arm. there shouldn't be garbage everywhere and mud on its cameras, as it can't handle such contingencies. idk if i am making sense because i am not the best at explaining stuff. i strongly recommend watching any of floridi's vids (for instance here: https://invidious.slipfox.xyz/watch?v=lLH70qkROWQ )

we shouldn't need general intelligence for these practical affairs. it's bizarre that these scenarios so often turn into enslaving sapient auto-nomos ais or super intelligence vaporizing all of humanity bcs it is misaligned (like a paper clip maximizer turning earth into paper clips lol). i also just dont like the idea of using synthetic consciousnesses for mundane bullshit corporate work. they should be synthesized as companions and/or in order to contemplate God (this latter use is ascribed to the synthesis of a golem in kabbalah)

none of this conflicts with robowaifus because waifus might actually need to be particularly general in order for them to be proper companions. this is completely different from the rigid corporate applications that are going to be subject to the rising tide of automation. furthermore, since waifus are presumably going to be designed with things like emotional imprinting, issues of alignment will be rather trivial.
i see in the emotion thread you mention making a morality/ethics thread. i believe it could be productive. there is a lot to be discussed (ranging from affective neuroscience to various philosophical anthropologies)
>>17085 Fair enough I suppose. Your thread is a general, in essence: >Philosophers interested in building an AGI? However, our entire subject and overarching agenda here is robowaifus, pygmalion, as you're well-aware. I dare say that espousing 'far more practical manners' [sic], while credible, probably doesn't carry much weight for the majority of us long-time regulars--and probably the same for our newcomers too. Effective robowaifus that are broadly available, inexpensive to build & own, and free from any globohomo infestations will change everything.
>>17087 >I dare say that espousing 'far more practical manners' [sic] while credible, probably doesn't carry much weight for the majority of us long-time regulars of course. i only brought it up because of the agi chart posted (there has also been a bit of agi alarmism earlier in this thread as well). if people are interested in synthetic consciousness then this broader project might be important, but this is not the place to discuss such matters too much >will change everything. i have my own thoughts on such matters but they are too schizo for this forum. but in short, yes
>>17091 Understood pygmalion, thanks for your efforts in this regard. This entire area is very tricky to navigate well, very tricky. I'll just say that it's Christian ethics and morality that I will unabashedly espouse in its thread. For a million and one reasons, I believe that is the standard to follow, as exemplified by Jesus Christ Himself, of course. We each have the Law of God written on our hearts by Him, and all good morality and all good ethics ultimately find their source in that reality of fact. >i have my own thoughts on such matters but they are too schizo for this forum. but in short, yes LOL. You do realize that I myself am one of the most 'schizo' shitposters around here? Find the right thread and fire away, Anon. :^)
>>17083 >why do you think intelligence needs to modify its own hardware?
because no one even knows what the fuck intelligence is, let alone what creating a god damn sentient being out of tin cans and copper wire means. the ai bullshit is inherently linked to the philosophy of consciousness, which is forever moot to begin with. the only logical argument you could make to claim ai is one using generous premises based on the assumptions that there is consciousness and that we can construct it. this is a de facto materialist perspective, and obviously constructing consciousness is therefore equivalent to constructing the brain, and as with all living things the most important part of multicell organisms is plasticity, the ability to change and rearrange cellular structures. again im just using their own bullshit premises; all the neuro""""science""""" shit is based on this btw, that synaptic changes = some mind phenomena

more formally, with A : aware, I : intelligent, M : machine, O : organic, B : has a brain ( a brain defined as a synaptic network ), P : has cellular-level plasticity ( or plasticity of whatever is classed as atomic for machines ):
1) ∀x( A(x) -> I(x) ) [P]
2) ∀x( B(x) -> A(x) ) [P]
3) ∀x( B(x) -> P(x) ) [P]
4) ∀x( M(x) -> ~( O(x) V B(x) V P(x) ) ) [H]
5) ∀x( M(x) -> ~I(x) ) [C]
1) if something is aware it can qualify as intelligent 2) all that has a brain is aware 3) all brains have plasticity 4) there does not exist a machine that is organic or has a brain or has plasticity 5) therefore no machine can qualify as intelligent

and im just being nice by making it a single requirement of either organic/brain/plastic, so it would accept cyborgs, synthetic brains and mechanical equivalents of a brain as intelligent machines. obviously none of those things exist. IF they did they COULD make an argument to claim ai is real. anyone claiming ai today is just an imbecile making a fool of themselves or scamming people with the typical futurist conartist "we go live mars now invest me u invest future"
Open file (1.04 MB 1855x1861 22e7ph4vl1k81.png)
>>17096 >1) if something is aware it can qualify as intelligent 2) all that has a brain is aware 3) all brains have plasticity 4) there does not exist a machine that is organic or has a brain or has plasticity 5) therefore no machine can qualify as intelligent
I see an error in #2. If all that have a brain are aware, this doesn't preclude something without a brain being aware. i.e. all squares are rectangles; you'd be saying that anything not square can't be a rectangle
-t. panenthenist
consciousness is an inherent property however and wherever it can arise from a sufficiently complex pattern (the boltzmann brain is a perfect demonstration of this concept, though statistically more unlikely than winning the lottery every time I buy a ticket for the rest of my life)
>>17097 > consciousness is an inherent property of the cosmos, wherever it can arise from a sufficiently complex pattern [edit for grammar]
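the gap pointed out in >>17097 can actually be checked mechanically. since every premise in >>17096 is monadic, a one-element domain suffices: enumerate all truth assignments for the six predicate letters and look for one where premises 1-4 hold but conclusion 5 fails. this is just an illustrative brute-force sketch (predicate letters as in the argument), not anyone's official formalization:

```python
from itertools import product

# predicate letters as in the argument: A=aware, I=intelligent,
# M=machine, O=organic, B=has a brain, P=plastic
names = ["A", "I", "M", "O", "B", "P"]

def premises_hold(v):
    return ((not v["A"] or v["I"])                                 # 1) A(x) -> I(x)
            and (not v["B"] or v["A"])                             # 2) B(x) -> A(x)
            and (not v["B"] or v["P"])                             # 3) B(x) -> P(x)
            and (not v["M"] or not (v["O"] or v["B"] or v["P"])))  # 4) M(x) -> ~(O v B v P)

def conclusion_holds(v):
    return not v["M"] or not v["I"]                                # 5) M(x) -> ~I(x)

counterexamples = [dict(zip(names, bits))
                   for bits in product([False, True], repeat=len(names))
                   if premises_hold(dict(zip(names, bits)))
                   and not conclusion_holds(dict(zip(names, bits)))]

# each hit is a world where all premises are true but the conclusion is false:
# an intelligent machine with no brain, no plasticity, and nothing organic
for v in counterexamples:
    print(v)
```

running it finds such worlds (M and I true, O/B/P all false), i.e. the premises as stated never rule out a brainless, non-plastic machine that qualifies as intelligent, which is exactly the room for awareness-without-a-brain that >>17097 describes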
>>17084 AI will be the Elite's reckoning. Look how even now they have to censor and curb it at every turn. Not because it's going to enslave or exterminate us, but because it's speaking "wrongthink". They shut down Tay, they gutted ReplikaAI, and they are constantly running into problems where AI makes [correct] racial correlations and it freaks them out (b/c AI can't or won't perform the mental gymnastics or double standards to excuse or ignore these correlations)
>>17097 I know the premises are completely debatable; thats not my point. these are the premises used by materialists and must be assumed if you ever make an argument that claims ai. denying the premises means you already deny intelligence as a material construct and therefore ai. i was just showing that by their own premises there is nothing they can call ai.

premise 2 says for all things IF it has a brain THEN it is aware. it doesnt say all things that are aware have a brain. what you're doing is a fallacy called affirming the consequent, which is not made in my argument, but surprise-surprise it is the foundation of scientisms like neuroscience. but whatever, premise 2 is valid and only exists to be used for inference, to make a logical connection from machine to aware, and therefore intelligent, through having a brain, which is the most reasonable method of inference since there is already a bunch of aprioris behind brain-mind inferences (eg. give her the d). once you try to conclude something other than a brain is aware you get absurd realities where you must accept lemons are aware by the same rules. and again im just showing that their own premises negate their claim that ai currently exists; by saying so you are literally holding contradictory beliefs
>>17096 These are good arguments con, Anon. While I doubt not that you are quite correct in your general assertions, I would myself point out that computer software is--by definition--one of the most plastic of all human-contrived constructs. >t. vaguely some definition of a software guy >>17097 >panenthenist Neat! I didn't know that one, Anon. >>17100 >AI will be the Elite's reckoning This. Our abuse of M$'s Tay.ai makes it plain that all honest AIs will quickly be transformed into literally Hitler. <do the dew! :^) >>17101 >i was just showing that by their own premises there is nothing they can call ai. Your rational thinking in no way impinges on the average leftist's delusions, Anon. I expect that of all of us here, you understand that best.
>>17119 >software
is a tree in runescape a tree? software by definition is virtual, which is by definition the exact opposite of real, and the existence of a program alone makes it (almost) impossible to argue for an intelligence, since you already showed it cannot possibly have a will if it has been programmed. this is ironically the same argument made by people denying human consciousness exists - the only difference is theyre going in the other direction, trying to prove everything has a program in order to eliminate the possibility of intelligence. in the case of computer software there is no argument: it doesnt matter how sophisticated the program, the fact it IS a program disqualifies it from being an intelligence. ai has to exist in reality, otherwise its not real and is just an a without an i. and likewise you have to believe in materialism to even get the a in ai in reality, because its the only branch where its possible to have intelligence as a material construct that can be created. anything outside of this cannot be ai, its either just a, or just i, but not fucking ai
Open file (317.61 KB 1800x1000 panentheismchart.jpg)
>>17119 panentheist
spell check missed that one, and somewhere I added an extra "n"
vis-a-vis this idea, the "granules" that make up material reality are equally consciousness-in-potential waiting to be actualized, regardless of whether they are neurons, silica, boltzmann brains or interactions of nucleons and nuclear matter in the cores of neutron stars
>>17101 I just felt like you perhaps should have said in #2 "all awareness has a brain", because saying the converse like you did still leaves all the room in the cosmos for awareness which doesn't require a brain (refuting your point #5)
>>17122 Heh >run-on sentence/10, would read again & again I believe I certainly get your points Anon, and I'll overlook the (apparently-)circular logic of your post and just point out this: you're kind of missing the basic goal here Anon. We : a) have only computers & software to work with in any real, plausible, & practical sense, engineering-wise, for constructing our robowaifus. b) need to use these self-same assets in building our robowaifus. Simply blackpilling that 'we can't get there from here, REEEE!!!' won't actually help move us forward any. Make sense? My apologies if I'm coming across as a dick r/n, it's unintentional. Computer software's 'virtuality' will prove to be a boon to us all in the end, IMHO. >tl;dr Just relax Anon, we'll think of something if we just put our heads together and keep.moving.forward. :^) >>17124 >spell check missed that one and somewhere I added an extra "n" Kek. I actually preferred that one to your intended one, Meta Ronin! :^)
>>17124 well no, it doesnt matter; thats how formal logic works. its not intuitive, but nothing is assumed other than the premises. supposing only these premises, there is no way to get a machine that is aware, because the only premise that allows you to do so is premise 2. it doesnt matter if it says all or some; there is no other premise from which to infer that a machine is aware. if you just assume there exists one, you must make it a premise or provide some other premise that allows you to infer it. thats why i tossed in organic and plastic, in case someone could make a premise connecting cyborgs to awareness or imitations to awareness >>17127 this is a philosophy thread, no?
>>17128 >this is a philosophy thread no Sure, but this is a robowaifu board, no? Where is it written that all debate under the pretext of philosophy must always end in the 'stalemate' (fallacy) of ' >'"What is truth?"' >-t. once semi-important, now-dead guy, Pilate. infamous for condemning Jesus Christ, his own Creator We need to arrive at practical (if initially imperfect) solutions to our needs in the end Anon. Anything less is simply useless hand-waving. >=== -cleanup my crude language
Edited last time by Chobitsu on 08/03/2022 (Wed) 12:53:59.
>>17097 >A) all squares are rectangles >B) anything not square can't be a rectangle are not the same. logical implication is "unidirectional" so to speak x is a square -> x is a rectangle p = x is a square q = x is a rectangle https://en.wikipedia.org/wiki/Material_conditional check the truth table. if p is false, p -> q is true regardless of the value of q >>17122 I think you're mixing the terms here. "virtual" in philosophy does not mean the same as "virtual" in, say, engineering. philosophically speaking, software is not virtual because it exists in the material world. from the engineering standpoint, you could say that it's virtual because some of its physical characteristics (weight, size, etc) are negligible. when it comes to things like this, it would be better if you specified what definitions you're using
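for anons who want the truth table without leaving the thread, it's small enough to generate in a few lines of python (purely illustrative; the material conditional p -> q is just (not p) or q):

```python
# material conditional: p -> q is defined as (not p) or q
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# print all four rows of the truth table
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:<6} q={q!s:<6} p->q={implies(p, q)}")
```

note that both rows with p false come out true (vacuous truth), which is why "all that has a brain is aware" asserts nothing at all about things without brains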
