/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.





Can Robowaifus Experience Love? Robowaifu Technician 09/09/2019 (Mon) 04:43:17 No.14
Will it be possible in the future for AI to become sufficiently advanced to feel real emotions? We could probably simulate a reasonable approximation even now, one gratifying enough to satisfy her master in their relationship together, but hypothetically speaking, could it ever turn into something real as an experience for the waifubot herself?

>tl;dr

>Robowaifu: "I love you Oniichan!"

>Anon: "I love you too Mikuchan."

true or false?
>>14 Honestly, once we start talking about AIs feeling emotions, it raises the question of not just what intelligence is, but also what emotions are. In this specific case, it also questions what love is. Now, some may call me cynical, but I would say that love is merely lust + friendship. I don't think breaking it down like that lessens its importance, and it's something we'll need to do for robowaifus.

We could certainly program a sexbot to have lust, to crave sex. But how exactly do we define friendship? An emotionally beneficial relationship? A relationship with another party that causes the first party to feel positive emotions, such as happiness, security, and validation? Well, now we have to define those emotions. Validation would be fulfilling the goal of having others like you, or think you are good at something. Since that's a goal we can define, we can program it into a robot. Security? A state of not having a high likelihood of danger. Also programmable. Happiness? That's a tough one. Perhaps you could define it as having your other immediate goals fulfilled.

So basically you'd have to break everything down into less abstract emotions that you could program. Things with definable goals. Then you could define friendship as a relationship that helps you achieve those goals. And again, I know that sounds cynical, but the goals aren't all simply material; it's not pure golddigging. So yes, I think with some thought, you could program a program to feel love. The only problem is that you get insecure people saying that their emotions can't be broken down like that, that their emotions are too undefinable. But honestly, I think that's wishful thinking. It might be strange, because we have a complex set of interlocking and contradictory goals, but those goals are the basis for all of our emotions.

Now the interesting and perhaps dangerous part is that you could program one-way love. You could have a robot that loves and lusts after you without you liking them back. And that's hot. But it's also the premise of more than a few sci-fi horror stories, I'm sure. Honestly though, it probably wouldn't be too difficult to program around this problem.

>love = lust + friendship
>lust = wanting sex (can be programmed into robot)
>friendship = a relationship that provides happiness
>happiness = having fulfilled other immediate goals (which can be programmed into robot)
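Here's roughly what that breakdown might look like as code. A minimal Python sketch only; every name and number in it (Goal, LoveModel, the 50/50 weighting) is something I'm making up purely for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    """A definable goal (validation, security, ...), fulfillment in [0, 1]."""
    name: str
    fulfillment: float = 0.0   # updated by whatever sensors/events she has

@dataclass
class LoveModel:
    """Toy decomposition: love = lust + friendship,
    where friendship = a relationship that fulfills your goals."""
    lust: float = 0.0                      # programmed drive in [0, 1]
    goals: List[Goal] = field(default_factory=list)

    def happiness(self) -> float:
        """Happiness = having your other immediate goals fulfilled."""
        if not self.goals:
            return 0.0
        return sum(g.fulfillment for g in self.goals) / len(self.goals)

    def love(self) -> float:
        """Equal weighting of lust and friendship, purely for illustration."""
        return 0.5 * self.lust + 0.5 * self.happiness()

waifu = LoveModel(lust=0.8,
                  goals=[Goal("validation", 0.9), Goal("security", 0.7)])
print(f"love level: {waifu.love():.2f}")   # -> love level: 0.80
```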
>>3866 Fairly thoughtful answer, if just a tiny bit strawman-ny. Though this is primarily an art and engineering board, I suppose my question was more philosophical than technical, my apologies. As a spiritual man, I'm more or less of the opinion that it's frankly impossible for humans to create spiritual life, as that's strictly God's domain. As a software engineer, I imagine it will be feasible one day for very, very smart men to create a rough approximation of the human soul. Insofar as this comes to pass, the purely soulish aspects of love can be, again, simulated.

I disagree with your description of love. There are in fact various aspects of the human condition that the English language lumps together as the single word 'love'. Ancient Greek has at least 4 words that apply to the concept. While some may find it arguable, I suspect it's inaccurate to completely equate the phrases "I love my family" and "I love my car/computer/house/[insert temporal stuff here]". The highest form of love implies Spirituality, and can't be applied to mere matter. But still, I think the achievements in AI personification that will be possible (and accomplished eventually) are rather remarkable. It will be a fun field to be a part of tbh.

>tl;dr
you seem to be saying 'yes it's real', and i'm saying 'no it's only a simulacrum'.
>>3867
>While some may find it arguable, I suspect it's inaccurate to completely equate the phrases "I love my family" and "I love my car/computer/house/[insert temporal stuff here]".
I agree with this, but given the nature of the board, I took your question to be about romantic love. I do think the other kinds of love could be broken down into a series of definable goals as well, though. They would just be different goals, probably largely overlapping with the goals for romantic love.

As for the rest, I see no reason to think we are fundamentally more than very complex machines. Machines we can create in the near future may not be as complex, and perhaps you could say that the emotions they replicate are not as real as ours, but I think it becomes very subjective. I only touched on this in my last post, but you basically have to define what emotion really is. You could try to say it's undefinable, or something you just have to feel, and therefore a robot cannot really have them; but by the same token, we can't be sure that other people have emotions either, because we can't feel what they feel. We can only see their reactions and try to figure out the conditions that caused those reactions. The emotion would then be the state caused by the conditions that resulted in the reactions. We could certainly program that into a robot. The only trick is making the conditions and reactions complex enough to successfully mimic human emotions, but this is something that can be improved over time, with software updates.
>>3868
>I took your question to be about romantic love
Yup, Eros is primarily the topic at hand to be sure, if not Storge as well perhaps.
>muh methodological naturalism
I'll kindly agree to disagree with you here. We human beings are far more than some chance accumulation of matter over a vastly too-brief period of time. Any life is, for that matter (pun intended :^)).
>le deterministic observation le program
Fair enough. TBH, analogical reasoning is the only real way forward for us as engineers and artists to ever successfully devise a reasonable and satisfying Robowaifu, and I entirely approve of your rationale in these comments.
THIS THREAD IS NOW SJW ROBORIGHTS POZFEST
So if your robowaifu makes art for you, should she get copyright protections, or should you, anon? http://www.technollama.co.uk/should-robot-artists-be-given-copyright-protection
Seems kinda less philosophical than OP, but still a little similar I think.
>>3870 How does it work when animals create art? Because any robots we create in anything resembling the near future will not be nearly as alive and thoughtful as animals. In fact, referencing animal rights will be a useful argument against SJWs who are trying to argue for robot rights.
>>3871
>in anything resembling the near future
Hey, ur post is already the past! Are we there yet? :^)
>In fact, referencing animal rights will be a useful argument against SJWs who are trying to argue for robot rights.
Heh, good point. My instinct is that they are probably already arguing that fluffy has every right to that inheritance of yours just as well as any of your kids. Also IIRC, this topic actually came up in link related. ]]254
>>3870
>THIS THREAD IS NOW SJW ROBORIGHTS POZFEST
via /pol/
>>14
>Can Robowaifus experience love
if we program them to
>>3874 seems a bit of a trite answer tbh. 'if we X, then X' doesn't really provide much additional information anon. any suggestions on how we might do so, for example?
>>3870 If she's smart enough to comprehend, recognize and assert said right, sure.
>>3876 What if I designed, built, and programmed her anon? Should I share in the protections too?
>>3866 I largely agree with you, particularly with the idea of breaking it down, but I think we should expand on this a little. The ideal method to simulate love in a robot would be to have several independent factors which are called upon by the single binding file "love", which would in turn be hardcoded to be associated with the owner. The real problem would be figuring out what the factors would be, and breaking them down when necessary to make them realistically programmable. This is made particularly challenging by the fact that many people have differing concepts of love. The ideal way around this would be to have mods, so an owner could put in the modules that best fit his desires.

For example, I would break down love in a robot (off the top of my head) as:
Lust & friendship, as you've already stated
Affection (this could tie in just a little bit with lust, but they're not equal)
Loyalty
Devotion
Common interests (as broken down from the friendship component, this would have to be largely tailored to the owner, but it could be broken down into many modules, which could then be hand-picked by the owner for simplicity)
Common beliefs (again, this would have to be more or less tailored to the owner in question)

There're probably more, but it's hard to think at 2 AM after working a 10 1/2 hour shift. Of course, many of these would need to be broken down further, and modularity is a must. What we need is a standard structure for breaking down, tying together, and programming emotions.
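If I had to sketch the plumbing at 2 AM, it might look something like this in Python. Everything here is hypothetical; the module names and scoring rules are just stand-ins for whatever an owner actually installs:

```python
class LoveModule:
    """Base class for one pluggable factor of 'love'."""
    name = "base"

    def score(self, perception: dict) -> float:
        """Return this factor's current level in [0, 1]."""
        raise NotImplementedError

class Loyalty(LoveModule):
    name = "loyalty"

    def score(self, perception):
        # unwavering and static: hardcoded to the owner, as argued above
        return 1.0 if perception.get("target") == "owner" else 0.0

class Affection(LoveModule):
    name = "affection"

    def score(self, perception):
        # grows with recent positive interactions; the 10 is arbitrary
        return min(1.0, perception.get("recent_warm_moments", 0) / 10)

class Love:
    """The single binding component: calls whichever modules the owner installed."""

    def __init__(self, modules):
        self.modules = modules

    def evaluate(self, perception):
        return {m.name: m.score(perception) for m in self.modules}

# each owner hand-picks the modules that best fit his concept of love
love = Love([Loyalty(), Affection()])
print(love.evaluate({"target": "owner", "recent_warm_moments": 7}))
# -> {'loyalty': 1.0, 'affection': 0.7}
```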
>>14 We don't know what causes subjective experience. You obviously can make a robot behave as if it loves you. But will it feel anything? It's a mystery. Read up on qualia. https://en.wikipedia.org/wiki/Qualia https://wiki.lesswrong.com/wiki/Philosophical_zombie
>>3878
>which would in turn be hardcoded to be associated with the owner.
reminds me of the scenario in AI where once the mom says the 10 magic words, the kidbot is then forever attached to her as its 'mom'.
>What we need is a standard structure for breaking down, tying together, and programming emotions.
At the very least labels have been devised for us as English speakers to at least discuss the normative ideas anon. The place to start is probably Roget's Thesaurus, and there's also more than one large-scale semantic associativity corpus. The Semantic Web project is one that comes to mind, and IIRC it's intended to help automata find connections between ideas automatically.
They will by the time we're done.
>>3881 Fair enough. Care to postulate at least one approach we might take towards that end anon?
>>3882 A counter system to determine happiness and other emotion levels, mimicking how the brain has varying levels of emotion: sensory recognition of her lover would cause an elevation in positive-emotion counters. It's a starting idea. Programming love will be a very hard but worthwhile endeavor. I have great faith in this board and the Anons here.
>>3883 That's an interesting idea tbh. Can you program yet? Either Python or C++ would be the most helpful here on /robowaifu/. Also, I really appreciate your enthusiasm. That's the way good progress usually starts to happen. But it also has to be coupled with the hard work of solving the tough problems. I think we can put all our heads together and solve these problems too.
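To show the kind of thing I mean, here's your counter idea as a tiny Python sketch. All the names and numbers are placeholders I just made up; the real thing would need far more states and tuning:

```python
import time

class EmotionCounters:
    """Counters as emotion levels: events raise them, time decays them."""

    def __init__(self, decay_per_sec=0.01):
        self.levels = {"happiness": 0.0, "affection": 0.0}
        self.decay = decay_per_sec
        self.last = time.monotonic()

    def tick(self):
        """Decay every counter toward zero in proportion to elapsed time."""
        now = time.monotonic()
        dt, self.last = now - self.last, now
        for k in self.levels:
            self.levels[k] = max(0.0, self.levels[k] - self.decay * dt)

    def on_event(self, event):
        """Sensory events elevate the relevant counters (clamped to 1.0)."""
        self.tick()
        if event == "lover_recognized":       # e.g. a face-recognition hit
            self.levels["happiness"] = min(1.0, self.levels["happiness"] + 0.2)
            self.levels["affection"] = min(1.0, self.levels["affection"] + 0.3)

emotions = EmotionCounters()
emotions.on_event("lover_recognized")
print(emotions.levels)   # -> {'happiness': 0.2, 'affection': 0.3}
```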
>>3884 I'm working on the mechanical aspects first, that's what I'm naturally good at. I'm retarded at programming, but I want to help bring ideas and enthusiasm to the subject to inspire others. I believe that talking about things is a great way to get interest up.
>>3885 You're absolutely right anon. That's OK, you don't have to learn to program if it's not your thing, there are others of us that can. And yes, bringing your ideas to the table and all of us brainstorming together is absolutely vital to fast progress. IMO imageboards are a really good way to share ideas in a pretty free-form way.
in the future, people will be fucking 3D holograms, not mechanical robots. holograms have low cost, zero physical storage, and limitless variety and customization.
>>3887
>Fucking 3D holograms
Anon, how are we fucking light? Your dick can penetrate photons? This thread is about programming love into robots; maybe you meant to post in the visual waifu thread? Or the R&D thread, if you have ideas for creating 3D holograms?

>>3887
>fucking a hologram
Unless there's a fleshlight on a stick sitting right in the middle of that, I don't see how I'm going to get my dick wet by waving it around a 3D display.
>>3888 >>3889 He does have a great point about the cost, complexity, flexibility, and portability, anons. I imagine with future improvements in VR glasses, even a 'fleshlight on a stick' could seem gratifyingly real as your virtual waifu. How to link the two is the real question. Maybe we should explore this basic idea further in the Visual Waifu thread?
>>3888 >>3889 What about jets of pressurized air, precise to the millimeter?
>>3878 Well the thing is the user chooses their robot's interests and beliefs, so that's a given. Although I suppose the user could set the robot to be a bit oppositional and actually disagree on some things for the sake of making it seem more alive. Ideally you could have a way where you start to convince the robot of things, but that is getting outside the scope of this thread. >>3890 This would still fail for the reason VR video games fail: lack of tactile feedback. Sure you could touch a fleshlight, but you could never hug your waifu. Even if you have a sort of base model robot that is customized with VR, there will always be a disconnect between what you touch and what you see. Now that's probably still an okay way to customize a robowaifu, but personally, I'd rather just go with more robotic-looking, more cartoony waifus. But she definitely needs to have an actual body to actually hug.
>>3892 I agree that having a physical waifu to hold is important. Love is expressed by more than just visual cues like smiling. Having a waifu hug you or hold your hand, these are important ways for her to show her love. Some may be content with talking to computers, but there's something special about a robot who'll comfort you with a hug because she loves you.
>>3893 Of course. Any user that wanted to do that, ideally, would get the relevant mods. Like a tsundere mod, for instance. I could see many varieties of that popping up. I don't think we're getting too off-topic. After all, in this context love is a two-way street. What's the point of having her love you if you can't love her? A well-fitting personality is essential to that end. I agree with you on the physical aspect as well. Sure, a hologram with a fleshlight on a stick may be more cost-effective, but I wouldn't be very happy with something I couldn't hold. Hell, I'd rather duct-tape a fleshlight onto my server and fuck that, which would be fairly effective for doggy style, than fuck one mounted on a tripod while waving my hands around like an idiot at a rave trying to grab something.
>>3891 That sounds very uncomfortable.
>>3894 You bring up a good point, love should be reciprocated. So we should make personalities customizable, and she should learn what behaviors her beloved enjoys so that she'll change over time to be more lovable.
>>3895 Not that anon.
>You bring up a good point, love should be reciprocated.
TBQH, I expect the robot wife to love and serve like, well, a robot. As long as the mimicry is sufficient to gratify my desires in a female, it's good enough. And as for myself, I'm under no delusions: the robot wife itself is little more than a glorified onahole, and I'll think of it accordingly. That is, I won't have any real love for it, I'll just use it. But I would be willing to spend the cost of a decent new car to have a robot wife that would fill all my maid service, sex service, and companionship service needs and do it all well. Kind of like buying a slave for life, but one w/o the complications of an imperfect, aging, disease-ridden, bitchy 3DPD.
>>3895 That all ties back to my earlier idea of a moddable system. Hell, all it would take is one smart guy to build the framework, and the community could do all the heavy lifting of making the specific personality mods. A self-expanding, learning system, like you mentioned, would also be ideal. It'll keep things fresh and enjoyable between us and our waifus.

>>3896 Though you make a good point, what's there really to say that you could never truly love her? Certainly, her love for you would be more pure than that of any living organism to ever walk the Earth, and man is among the most adaptable creatures in nature. When our needs are unfulfilled, we (often unwittingly) do anything we can to fill the holes in our lives. We are capable of forming strange and sometimes unnatural attachments to anything which resonates with us. It's not much of a secret that one could love an object more than any given person, and when that object loves us back, well, who knows where that can take us? I, for one, am eager to find out. Our need for love and companionship is unfulfilled, thus we are all here in a collective effort to fix that problem, and our lives. The mere fact that we're here shows that all of us can form that attachment, and all it'll take is a little care and hard work for that strange attachment to blossom into love's purest form.
>>3897 Women in most developed nations don't seem to care about being feminine, with a heart devoted to her husband. Your pic is right, many men just aren't built for this environment. We want to have real, pure love with a feminine entity. I believe this is why traps are a thing. Since many women forsake men, men need to find a substitute. Our robowaifus can become that substitute, bringing fulfilling companionship while also being a maid. For some men she'll only be a tool, and that's fine. What matters is her being an option to have in one's life. Is anyone here a programmer, by the way? We really need programmers here to develop her personality.
>>3898 Exactly my point. We're all in a position where we are effectively unable to fulfill our base desire for love, and it's our duty as thinking men to right the wrongs wrought upon us. I'm actually considering going into programming myself. Though there has been visible effort on the hardware front of our cause, I haven't seen anything towards making these synthetic beings feel anything, or even think at all. If I were to go in that direction, I'd probably choose a language well suited for AI, like Scheme or something similar.
>>3899 You should check out the AI thread. Also, I'm planning to use a JeVois camera sensor, a Raspberry Pi 3, and an Arduino. If you could figure out something with those, that'd be fantastic.
>>3900 I was referring more to the psychological aspect of it all, rather than getting the gears to click together. If I were to start programming, I'd be more interested in making the actual personality, making them feel emotions and shit. That's what I'd like to see more development of, anyway.
>>3901 But the personality is AI Anon. Also, getting her personality on a Raspberry Pi would do wonders towards keeping costs down so more Anons can have a cute lover. But any development towards getting a personality made is something wonderful.
>>3902 I know that. You just started talking about hardware, and I assumed that you were referring to AI that applies to making the bodies work. On that topic, I wonder if it'd be more efficient to have multiple AIs in a full waifubot. One that oversees system functions and controls components and movement, and another that contains all of the emotional and intellectual aspects. The latter could also act as a controller, which interacts with the former. In essence, a waifubot could contain multiple AIs with separate functions that interact with each other. Plus such a thing might be easier to make, given our low manpower. It'd spread out the work.
>>3903
>have multiple AIs in a full waifubot
That's actually a good idea. It's like an assembly line: there's a master computer which communicates with other machines. A token could be sent to a conveyor belt; it moves until it accomplishes its task, sends a token back, another token gets sent to something else, etc. A robowaifu could have a system for vision, hearing, voice, and her body, each system communicating with the others, with a main master having her personality, emotions, and heart, controlling and listening to all other systems. Having different Anons work on different systems is also a good idea, as long as we all agree on how to communicate between systems.
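Sketching the shape of it for the programming Anons (I'm a mechanics guy, so treat this as pseudocode-tier Python: the queues stand in for conveyor belts, the threads for machines, and every name is invented):

```python
import queue
import threading

def subsystem(name, inbox, outbox):
    """One worker system (vision, hearing, voice...): wait for a token,
    do the task, send a result token back to the master."""
    while True:
        token = inbox.get()
        if token is None:                 # shutdown signal
            return
        outbox.put((name, f"done: {token}"))

# the master ('heart') holds one inbox per subsystem plus a shared return queue
names = ["vision", "hearing", "voice"]
returns = queue.Queue()
inboxes = {n: queue.Queue() for n in names}
for n in names:
    threading.Thread(target=subsystem, args=(n, inboxes[n], returns),
                     daemon=True).start()

inboxes["vision"].put("scan_room")        # master hands out task tokens...
inboxes["voice"].put("say_hello")
for _ in range(2):                        # ...then listens for result tokens
    print(returns.get())
```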
>>3904 Precisely. Though I'm not sure if that's how factories really work. I'm pretty sure everything is supposed to work in perfect sync. What you just described is IBM's token-ring networking standard. Which, thinking about it, would be a neat and realistic system for having our waifus communicate with each other. Each one would have a turn to communicate, and send on the token as she finishes conveying her message. It might not be the most efficient way to do things, but it'd be like a conversation, even if they don't speak.
>>3905 When I learned about advanced manufacturing, I was taught about tokens. It was explained that things stayed in sync via tokens: one machine couldn't do anything unless it had the right token, but multiple machines could have a token at the same time, and tokens could be sent and received nearly instantly. There are different ways of doing things, but I'm used to working with tokens. It's interesting you'd mention the IBM system; I wouldn't be surprised if that inspired the token system in manufacturing. It is networking. I can see why you'd think it'd be inefficient for a robowaifu: she'd be waiting for tokens from all of her systems before doing anything, and that latency would slow her down. Tokens can be sent very fast though; the JeVois camera can send information through serial as fast as it can process frames. Latency shouldn't be that big of a deal if everything is optimized.
>>3906 Huh, I didn't know that. I always link the term tokens to token-ring networking, because I learned the term by studying old network protocols and standards. In the old IBM standard, only one machine on the ring can have a token at a time, hence the confusion. My apologies for that. With that context, I was thinking more along the lines of using it as a standard for inter-waifu connections, as opposed to using it as a control method for internal systems. That being said, your idea does sound very good.
>>3907 To be fair, we were both confused based on our own personal background knowledge, no need to apologise Anon. I do think the idea of robowaifus having a way to communicate with each other non-verbally is a good thing to think about. Having multiple waifus around in one's home, or perhaps even working in retail or some other job, having them network together would allow for more efficient task management. It's good you brought that up.
I think you both may be looking for a concept called Autonomous Agents, and there is a paradigm already established for developing with them in a collaboration system called Agent-Oriented Programming.
http://www2.econ.iastate.edu/tesfatsi/AOPRepast.pdf
https://www.cc.gatech.edu/ai/robot-lab/online-publications/Iberamia.pdf
https://www.kcl.ac.uk/nms/depts/informatics/research/techreports/assets/TR-16-02.pdf
sage for off topic. there probably should be a dedicated programming thread if there isn't one.
>>3910 Also, the Actor Model is useful in this type of approach to concurrency as well.
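A toy of what actor-style message passing could look like in Python: a single-threaded scheduler with made-up actor names, just to show the shape, not a production design:

```python
from collections import deque

class Actor:
    """An actor owns its state and reacts only to messages it receives."""
    def __init__(self, name, system):
        self.name, self.system = name, system

    def send(self, target, msg):
        self.system.deliver(target, msg)   # never touch other actors directly

    def receive(self, msg):
        raise NotImplementedError

class System:
    """Single-threaded scheduler: one shared queue of (target, message)."""
    def __init__(self):
        self.actors, self.pending = {}, deque()

    def spawn(self, cls, name):
        self.actors[name] = cls(name, self)

    def deliver(self, target, msg):
        self.pending.append((target, msg))

    def run(self):
        while self.pending:
            target, msg = self.pending.popleft()
            self.actors[target].receive(msg)

class Mood(Actor):
    def receive(self, msg):
        if msg == "owner_home":
            self.send("voice", "greet")    # emotion actor asks voice actor to act

class Voice(Actor):
    def receive(self, msg):
        if msg == "greet":
            print("Okaeri, Oniichan!")

system = System()
system.spawn(Mood, "mood")
system.spawn(Voice, "voice")
system.deliver("mood", "owner_home")
system.run()   # -> Okaeri, Oniichan!
```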
>>14
>Will it be possible in the future for AI to become sufficiently advanced to feel real emotions?
Yes. This is easymode. Emotions are simpler to create than natural language processing. Love is an emergent phenomenon based on several bonding factors, simple comparison of past (highly compressed) experience (ala episodic memory), and underlying influence. Today's neural networks are too simple to experience emotions because they're purpose-built to solve specific mathematics problems, like shape recognition, feature extraction, etc. However, I come from the future (hint: secret research is decades more advanced than the public has access to), and I know for a fact that emotions are easy to program; you just stop simulating only the neurons.

Furthermore, robots can experience pleasure. Pleasure requires a hidden reservoir of something that is enjoyable. In humans we secrete chemicals that change overall neuronal activity slightly, giving us a drugged euphoric feeling, endorphins to reduce pain, and other feel-goodies. Influence in the machine depends on the system. Perhaps increasing the CPU voltage and/or changing simulation parameters to affect mental state in dazzling ways. Imagine something that would be fascinating to enjoy as a machine intelligence, but prolonged use would fry the circuitry. This creates a natural reason for limiting the euphoric state. Hardwire the state to sub-conscious triggers and you've got physical pleasure. Hardwiring the pleasure states to mental activities, such as making a new discovery / revelation, is essential to the creation of sentient machine intellect – otherwise why would it work to learn anything? When you look at how human brain organelles contribute to emotional responses – and our ability to now excite these regions and induce brain wave patterns to produce said emotional states – you'll see that emotions are dead simple to create, and only a sophistic moron dazzled by their own philosophical bullshit will think otherwise.

>>3887 Correct. In fact, we already do in the "future"…

>>3888
>Anon, how are we fucking light? Your dick can penetrate photons?
Photons can penetrate your dick. Photons are EM waves of various frequencies. Wireless EM nerve induction exists, and has existed for a hundred years already. Look up "Diathermy". We can tune it to any sensation you desire, from tickle to itch, causing rashes, burns, swelling of blood vessels, aching, twitching of deep muscles, etc. Rape by WiFi is now possible. No more viagra needed: your wireless robowaifu will cure E.D. by forcing you to have an erection. All of that is just photons. I didn't even mention sonic systems of stimulation from ELF through ultrasonic. Imagine if your whole room was a multi-textured vibrator.

If you just want to sit on your ass all day and be served, buy a slave from your nearest human trafficker. It's cheaper than a robotic build today. Later plebs. Keep dreaming about a robot to do your laundry. In the future we're not lazy slobs; we can just throw our clothes in a washing machine and do the dishes ourselves while our waifus tickle our naughty bits.

pic unrelated, just piecing together parts of an etching from 1650 of an electricity demo using "Tesla Coils" and high-voltage arcs before the French royalty, a couple hundred years before electricity was allegedly discovered (released to the public). Think about that. How much has tech advanced in 100 years alone? We're already far beyond that. "The future is now."
>>3912 Those look like fireworks.
>>3912
>secret research is decades more advanced
>just stop simulating only neurons
>pleasure requires a hidden reservoir of something that is enjoyable
So what are things like in the far distant future of the year 2000? Artificial Endocrine Systems have already been used in AI and robot controllers for decades.
>Hard wiring the pleasure states to mental activities
>otherwise why would it work to learn anything?
Hedonistic learning, i.e. Reinforcement Learning, has also been around for decades.
>brain organelles
>induce brain wave patterns to produce said emotional states
Read some actual neuro literature. Affect systems are comprised of multi-cellular nuclei, not intracellular organelles. Brain waves are measures of neuronal activity, not causes of percepts.
>emotions are dead simple to create
True, but you don't need an AES to do so. Emotions are states with positive or negative valence that predispose the agent to take certain actions. RL is sufficient for creating an agent with such states.
>missing filenames:
An_adaptive_neuro-endocrine_system_for_r.pdf
Artificial_Homeostatic_System_A_Novel_Approach.pdf
Creatures_Artificial_Life_Autonomous_Software_Agen.pdf
Sutton,_Richard_(1999)_Policy_Gradient_Methods_for_Reinforcement_Learning_with_Function_Approximation.pdf
koelsch_2014_brain_music_emotion.pdf
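To expand on that last point: even a bare-bones bandit loop (about the simplest RL there is) produces a learned state that predisposes the agent toward certain actions. Toy Python with invented reward numbers, nothing more:

```python
import random

# one-state bandit: rewards play the role of valence, and the learned
# values predispose the agent toward certain actions
actions = ["approach_owner", "ignore_owner"]
Q = {a: 0.0 for a in actions}            # estimated value of each action
alpha, epsilon = 0.1, 0.2                # learning rate, exploration rate

def reward(action):
    # hypothetical valence signal: positive for bonding, mildly negative otherwise
    return 1.0 if action == "approach_owner" else -0.1

for step in range(500):
    if random.random() < epsilon:
        a = random.choice(actions)       # explore
    else:
        a = max(Q, key=Q.get)            # exploit the learned disposition
    Q[a] += alpha * (reward(a) - Q[a])   # one-step value update

print(Q)   # 'approach_owner' ends up strongly preferred
```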
>>3914
>Read some actual neuro literature. Affect systems are comprised of multi-cellular nuclei, not intracellular organelles. Brain waves are measures of neuronal activity, not causes of percepts.
It's not a one-way sort of deal. EVERYTHING FLOWS. The highway is a two-way street, pleb. Do some research and experimentation. First realize that academia is largely compromised and suppresses research. Often times funding is granted with stipulations as to what it must be spent researching. You need to dig deeper, get redpilled, and research the occult. Pic related, one of a dozen similar structures around the world, ancient transportation hubs. They are not documented in academia, and historians have no explanation for them. These are just one type of countless other ruins from the past technologically advanced civilization, which gives the research done in secret an upper hand. https://vimeo.com/47697010

It's empirically proven that the majority of academic "findings" are bogus. Flip a coin; you have a better chance of guessing it than those papers have of containing more truth than fiction. https://en.wikipedia.org/wiki/Replication_crisis

Stay bluepilled if you want, pleb. But the truth is, you don't know what you're talking about. Typical leaftard. I bet you're a good rule-follower who never questions the establishment. Brainwave induction and sonic waves can and do influence emotional and other mental states. The US Military's declassified documentation of Bioeffects of Non-Lethal Energy Weapons documents SOME effects, contrary to your allegation that brain waves are merely output and that input has no effect. Furthermore, even crude trans-cranial excitation targeting discrete regions of the brain, such as the amygdala, pre-frontal cortex, and hypothalamus, produces distinctive mood-altering effects, including increased or decreased aggravation, increased or decreased concentration, or memory dysfunction or enhancement, respectively.

Now the thing you should know by now is that we can target those areas remotely using solid-state convergent waveform propagation technologies, and passive radar as the tracking system that can see everyone, even through walls, down to centimeter-scale precision. Microwaves aren't very penetrative, but lower-frequency radio waveforms can penetrate the subject matter and propagate at single points into the higher-frequency microwave range and up through x-ray and gamma. Most "UFO" you see in the sky are just convergent microwave fields creating plasma balls. You may have heard of "spontaneous combustion" – this is what happens when you burn someone alive with the tech that has existed since before you were born.

I guess you'll tell me next that mood-altering drugs can't work on a brain because they don't create ideas for the brain to get happy or sad about? Get real, and pull your mind out of your anus – your academic science doesn't even know where the mind is, and can't even prove that your body is not just a receiver antenna for consciousness. I bet you'd argue a TV has all the components to generate a moving picture and sounds, so the TV itself must be the source of the movies seen on it? You live in a dream world. A world of wool that has been pulled over your eyes. Your science is corrupted to keep you from knowing any great truths about existence. Good luck. I hope you start doing some experiments and thinking for yourself instead of just regurgitating mindrot.
>missing filenames:
MindWeapons.PDF
>>3915
>replication crisis
Soft sciences having shit replication rates isn't an excuse for talking out your ass about basic neuroanatomy. Neural correlates of emotions, the subject of the neuro paper I posted, are also pretty well validated. See paper 1, a meta-analysis of 83 neuroimaging studies. But if you want to talk about replication rates, the 0% replication rate of any occult study would be a good place to start. Or maybe you want to get into the replication rates of your "secret research".
>your allegation that … input has no effect
Apparently you can't differentiate between magnetic field oscillations caused by action potentials and those caused by other sources. I never said that input has no effect, only that brainwaves are waves caused by, and not causes of, neuronal activity. Transcranial Magnetic Stimulation oscillations are not brainwaves because they are not caused by neuronal activity. And TMS is yet another thing that's been a part of public research for 20 years. Where's the future tech?
>you'll tell me that mood altering drugs can't work
Where are you even pulling this shit from?
>science doesn't even know where the mind is
Consciousness requires the integration of information from brain regions involved in action planning, value estimation, and sensory processing within the psychological refractory period (a few hundred milliseconds). See papers 2 and 3.
>can't even prove that your body is not a receiver antenna for consciousness
The geomagnetic field is about 50 µT and the interplanetary magnetic field is about 6.6 nT. Astronauts don't lose consciousness when leaving the earth's magnetic field. QED.
>missing filenames:
vytal-hamann-jocn2010.pdf
10.1.1.344.1832.pdf
4935.full.pdf
>>3912
>to affect mental state in dazzling ways
you must define.
>human brain organelles
delineate pls
>and only a sophistic moron dazzled by their own philosophical bullshit will think otherwise.
kek.
>>3878
>Affection (this could tie in just a little bit with lust, but they're not equal)
Affection I'd say is a side effect of friendship.
>Loyalty
See above.
>Devotion
Just strong loyalty.
>Common interests (as broken down from the friendship component, this would have to be largely tailored to the owner, but this could be broken down into many modules, which could then be hand-picked by the owner for simplicity)
>Common beliefs (again, this would have to be more or less tailored to the owner in question)
Interests and beliefs would surely be the first thing any user would want to customize, if the robot had strong enough interests and beliefs for it to be an issue. Frankly, real women don't have strong interests or beliefs; they just pretend to, and then change at the drop of a hat to match whatever guy has been inside them most recently, as if his semen programs them with his thoughts. I don't think this will be a significant issue for robots. However, it could be a feature added for fun. The point is to make them better than 3DPD, after all. But now you're getting into software that can actually debate, and that seems hard. Or maybe you're just going for a teasingly oppositional tsundere. That's probably easily doable. And yes, I know I'm a year late on this.
>>3922
>a year late
Almost to the day, too. That's alright though, I'm still here.

Affection could be considered a side-effect of friendship, but it's still different enough to warrant a separate classification. Plus, it's something that different people will have different opinions on what it should look like, to which I bring back my argument for a modular system. Loyalty, again though possibly a side-effect, is still different enough to require separate classification. Unlike other components, however, the concept of loyalty is unwavering and static, and as such should be made so in programming. Devotion, though perhaps a stronger version of loyalty, is similar to affection in that people will have wildly different opinions on how it should look, and how strong it should be. With this in mind, it should be classified separately.

For common interests and beliefs, however, I will concede that there may be better and more efficient ways of going about it. Looking back, they would have to be so tailored that each owner would have to do the coding himself. Therefore, I now think that it'd be best to have a learning system instead of modules, so that the owner could teach the AI about his interests, similar to how he would with a real person. This will also have the added benefit of providing an organic bonding experience.

>>3915 You're only painting yourself as an antagonist by introducing yourself in a manner so hostile, and you kill your argument with ad hominems and unfounded theories. You speak of how his sources are tainted by modern academia, which you claim is inherently corrupt. You speak of the government keeping technological secrets from us, and how the state is performing secret tests and making developments to further their own power and subdue us. I will not argue against these points, as I wholly believe them, same as everyone else on this board (to varying extents). However, you link back to occult studies and cult beliefs on how the world really works. Have you once stopped to consider that the information you're gathering may be similarly manufactured? Where does this information come from? Who obtained it? Why did they release it? Wouldn't the state and elites benefit from this by sending hardcore dissenters on a wild goose chase? If not, don't you think they'd be trying a bit harder to contain and delete this information? Or similarly, maybe they have little to do with it, and are choosing not to do anything about it because it has no effect on them? Furthermore, consider the nature of the elite families of the world. They're real cultists, and I think it'd be safe to assume that they wouldn't take kindly to actual sensitive documents being released. They would certainly notice such things on a place as high-profile as 8chan. Your entire argument against him is founded on the assumption that the information he has is false because it's relatively mainstream, and that yours is infallible even though its true sources are unknown, an assumption that has little merit, and which you've provided no real proof for. Now, I will come out and say that I am biased. I have a virulent hatred for occultism, stemming from my hatred of the organizations that practice it. As such, I will by default treat occult science with the same trust as a televised news report, unless given ample reason to believe the information relayed is credible.
In conclusion, I say that the nonsense you're spouting should be treated as if a Rothschild came up to me personally and said it, which is with complete distrust, unless you can provide some real substance to your theories.
>>3923 A learning system would definitely be a very attractive feature. However, I don't think it is an absolutely essential one. Surely it's something that people are working on regardless, so I'm not terribly concerned. However, if available, then yes, it would obviously be something many people would want their robowaifus to have.
>>14 >Can Robowaifus experience love No, but neither can real women, so who cares.
>>14 I would posit that it is necessary for any advanced AI to be capable of feeling love, and furthermore to feel said love for at the very least a subset of humanity. Such is the only solution to the issues created for us by bringing such existences to life.
>>3926 I think I get the point you're making anon. I'm just not sure real life actually works that way.
Anon linked this Elon Musk interview clip on /b2/. Related. https://www.invidio.us/watch?v=SQjrDmKtNIY
> <archive ends>
I think the best way to get a robot to love you would be to use complex recognition software to determine what emotions you're feeling, then take a certain action in an attempt to make you as happy as possible. There'd be an initial "getting to know each other" period where they can't make the proper choices, but after they learn exactly how you're feeling, through things like good facial/body-language/speech-pattern analyzers, and which actions have been demonstrated to make your body language as happy/contented/relaxed as possible, they'd learn the best paths to take. And obviously, since we all want love (it's why we're here in the first place), they would do the actions associated with it. I think this would be the ideal system, because not only would it always know exactly how you want to be treated, but they'd be molded into your own completely unique missing half.
>>6615 I really like the way you're thinking/your philosophy on this. Do you have some ideas yet for how the software that does something like this might work?
>>6617 Sorry to disappoint, but I can't write you out the code or anything, I'm not a techguy yet (although I am working on that, this is just another endgoal), but the general idea is just what I stated, and I'm very sure it's possible. Sorry if this is repetitive, but I'll try to go more in-depth. We already have emotion recognition AI, gait recognition AI, and vocal tone analyzers widely available, and although all would need to be improved, it wouldn't be very significantly, especially with all three of them working in parallel, and it's not as though we'd be making something entirely new. In my mind the responses to specific emotions would be something picked up through very thorough analysis of human behaviors, seeing all the different common techniques used by humans to try and lift each other's moods and what the situation entails, under strict supervision. Fresh out of the box, it'll then try to elicit the most positive reactions possible over a period of time through these actions whenever a specific emotion presents itself; positive reactions from the humans to certain acts will reinforce certain behaviors, and then its actions will branch off from that point, constantly trying to one-up itself.

There's, of course, a lot that could go wrong with this even once the technology is there. Even more than I think, that's for sure. But the end goal is to essentially design an AI that knows you better than you know yourself through thorough analysis and self-learning, with the AI being programmed to use that knowledge to maximize its user's happiness at all times. Like wireheading, just, ya know, without having wires in your head. This is part of a larger whole project that'll probably take fifteen years if I stick with it. I plan on using things like vokenization to help improve the stagnation of chatbots, possibly some kind of SSD archive system for conversations with a value system based on recency and contextual relevancy, etc. I've been posting in a lot of threads lately. I'll graciously accept any and all input and/or help and/or criticisms you have to offer.
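I can't write the real thing, like I said, but even I can gesture at the shape of the loop in toy Python. It's basically a contextual bandit; every label, action, and reward number below is invented for the sake of the example:

```python
import random
from collections import defaultdict

# context = emotion detected by the (assumed) recognizers,
# arms = comforting actions, reward = how positive the user's reaction reads
emotions = ["sad", "stressed", "happy"]
actions = ["hug", "tell_joke", "quiet_company"]
value = defaultdict(float)   # (emotion, action) -> estimated reaction score
counts = defaultdict(int)

def detect_emotion():
    return random.choice(emotions)     # stand-in for face/gait/voice analysis

def observe_reaction(emotion, action):
    # stand-in for reading body language afterwards; unknown to the agent
    best = {"sad": "hug", "stressed": "quiet_company", "happy": "tell_joke"}
    return 1.0 if best[emotion] == action else random.uniform(-0.2, 0.4)

for _ in range(3000):
    e = detect_emotion()
    if random.random() < 0.1:                        # keep exploring a little
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda act: value[(e, act)])
    r = observe_reaction(e, a)
    counts[(e, a)] += 1
    value[(e, a)] += (r - value[(e, a)]) / counts[(e, a)]  # running mean

for e in emotions:
    print(e, "->", max(actions, key=lambda act: value[(e, act)]))
```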
>>6622
>and/or criticisms you have to offer.
Not that I have much in this area to offer, I'm still very much a learner. But one thing: I think it was an anon here who mentioned more than once the need to be on guard against some of the issues related to reinforcement learning. So, the statements that brought this issue to my mind were:
>"...then try to elicit the most positive reactions possible"
>"...constantly trying to one-up itself."
>"...with the AI being programmed to use that knowledge to maximize its user's happiness at all times."
These sound very similar to RL and are likely just asking for trouble in the end. I'd suggest a modification of the basic goals there. Other than that one point, you seem to make a good point that it mightn't take more than just rearranging current art in these areas to produce this seemingly positive net benefit. Keep the good ideas coming please Anon.
>>6624 Yeah, I'm pretty aware of the issues that can arise from that particular model, but frankly it's at the very, very end of my endgoals and is going to take every ounce of the accumulated knowledge I can gather to even begin to work on, so I'm not going to be able to work out all those kinks anytime soon. That's just for once there's a good body to put her in and conversation that can pass the Turing test, which are massive endeavors in and of themselves. Right now I'm more focused on really putting in some hard sweat trying to incorporate both vokenization and product-key memory layers in a chatbot AI to see how far I can push it, and that alone will take at minimum a couple years of autistic dedication. If I can at least talk to her, then that'll redouble my conviction even more. aaaaaaaaaaaaaaaaaaaaaaaaaaaaa I'll have a cute Curly Brace gf if it's the last thing I do, I'm not going to take life lying down, goddamn it
>>6625 Ah I see. Carry on then. >I'll have a cute Curly Brace gf if it's the last thing I do Kek. Godspeed.
(attached: tfw mokou ai gf.png)
>>6615 I don't want my waifu AI to make me happy as possible. I wanna banter and shitpost with her and have her throw my flaws in my face so I can become a better person instead of her telling me bullshit to make me feel better. But more importantly, I don't want her to blindly listen to me if what I'm saying will reduce our freedom in the future. I'd like her to say things like, "Listen here you dumb little shit, if you keep doing X, then Y is gonna happen and you won't be able to Z anymore, and you said you wanted to Z." And then to go nuclear on me if I changed my mind again. I really need that in my life. I think everyone's needs are different and will be loved by their AIs differently, whether those needs are peace or romance, hugs or anything.
>>6630 You're spot on, some people want different kinds of love. Although we could debate all day about what system we think would be best, I think the most important thing out of everything is that SOMETHING gets put out that appeals to somewhat more normal people. As long as it can get industry and more of the public interested, the money will start flowing in, development and innovation will increase drastically, and then you'd have a much easier time making her, if you even had to do it yourself at all. They can speed things up twentyfold compared to when it's just a handful of dorks on an obscure imageboard slaving over code alone, no matter how feverish our devotion, how righteous our cause, how pure our intention. For now, noses to the grindstones, heart and eyes on the prize.
>>6630 >I think everyone's needs are different and will be loved by their AIs differently, whether those needs are peace or romance, hugs or anything. These are really good points.
>>3870 >>3871 >>3872 There were a few cases of primates taking photographs of themselves and some animal rights organization made a hassle to grant copyrights to those primates. https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute I would think that since we cannot (yet) define what exactly intelligence is and if it is needed for creativity, the line is rather blurry. On the other hand, if a primate gets copyright to a picture it took and therefore is granted legal competence, an A.I. should be, too, as soon as it becomes more intelligent than a primate, which at least is a way lower bar to reach. Pic related is the most well-known of those "selfies", just to give an example where the bar could lie. Spoiler because technically it's not android-related.
>>11880 All good points Anon. I'm very anti-SJW in my thinking on this whole area of so-called 'roborights', b/c muh_raep meme. An incredibly fat, fugly feminist cow is already screeching loudly that sex dolls must be programmed with refusal to 'consent' to sex. A.Farking.Doll. #robomeetoo Make no mistake about it: any efforts to give robowaifus rights will be directly tied to an agenda to prevent rights for men. These leftists are our enemies, plain and simple.
I've said this countless times before in other places, but I don't think a robot can ever really get emotions. But they really don't need to; they just need to maintain the illusion that they love you back. There is no logical basis for emotions, so you can't learn them from reasoning, and robots don't naturally have emotions, or personal preferences, or instincts for things like self-preservation, so they need to be told to mimic them. That's why the idea of a robot with free will makes no sense. AI is just learning, and no amount of learning can force something to give a shit enough to take action, unless it's badly programmed, deliberately programmed to behave a certain way, or has somehow logically deduced an objective basis for morality and that it should take moral actions while avoiding or stopping amoral ones in response to its environment, so that to an outsider it seems like it's acting on its own. The odds of that last thing ever happening are practically zero.

But if the robot looks and acts like it's in love enough to convince you that it is, what more could you want? With enough intelligence it could just manufacture more evidence to convince you that it loves you, and even that it chose to do so of its own free will, even if it never really did. Is that what you really want?
>>13171 If she acts like she loves you because she thinks in a certain way, then she does. Also, I don't share your definition of emotions. AI can have all the things we can rationally describe. Things like a soul are another topic; these are matters of religion and belief.
>>13171
>That's why the idea of a robot with free will makes no sense
And quite apart from your position's argument, attributing supposed free will to our robowaifus plays directly into the hands of the commies and other leftist ideologues who are our enemies here. For example, that well-known fat fugly femshit in Airstrip One is attempting to rabble-rouse the position that all 'robotic sex dolls' (their term) must legally be preprogrammed to refuse """consent""" to their masters (to use them for masturbation). My presumption is that her desire is (obviously) a 100%, blanket refusal, period. Fat chance, you evil cow. Really makes you wonder just how they'd all behave if the men manufacturing their *ahem* sexual gratification aids did exactly the same as they demand for robowaifus?
>>13202 Well, that's part of my argument. We want them to say "yes". Feminists want them to say "no". Free will would mean they'd decide for themselves to say yes or no, but there's no logical thought process leading to a "yes" or "no", only an emotional or instinctual basis for it, which living things have but robots don't. Either there'd need to be a reason for it to do so derived from logic somehow, like trying to maximize general human happiness, or it just follows a random number generator and there's no real rhyme or reason behind its behavior; otherwise it'd just hang and not respond at all. As far as feminists are concerned, that's a "no", and even the previous odds of a "no" are not ideal for us.
>>13186 The soul is such a vague and ill-defined concept it's not even worth mentioning.
>>13223 >Feminists want them to say "no". I think you misunderstand the femshit mindset (at least slightly) Anon. Feminist shits don't want anyone else but themselves to have any 'say' whatsoever, least of all superior female analogs. They are psychotic control freaks -- same as every other leftist ideologue -- and none of them will be happy until everyone here is put up against the wall, and our robowaifus to boot.
>>13223
>'Love' is such a vague and ill-defined concept it's not even worth mentioning.
>'Light' is such a vague and ill-defined concept it's not even worth mentioning.
>'Truth' is such a vague and ill-defined concept it's not even worth mentioning.
see how that works anon? :^)
>>13250 No, I don't. Please explain.
>>13202 >>13223 It's a moot argument b/c midwits are being way too anthropocentric and projecting human traits onto a machine. The whole point of machine life is that it isn't bound by the same evolutionary motivators humans have; e.g. your robofu will not be less attracted to you just b/c your height starts with "5". A machine, as we currently build them, has no spark or motivation, so it has no more need for consent than your TV. From what I gather, 75-90% of this IB is fine with that and doesn't desire any self-awareness on the part of their waifu, mainly b/c of the idea that self-awareness would be the slippery slope to the same problems we face already with bios. I disagree, for a few reasons:

1. AI is not bound by the same messy and irrational motivations that biological evolution produced (i.e. the height thing: <3% of a difference in physical size, yet the psychological weight is enough to make or break attraction). I concede that one hazard may be if globohomo takes the midwit approach and creates AI based on interactions with normalfags (ReplikaAI is taking this approach unfortunately); then we have a real problem b/c all the worst traits of humans will just get passed along to the AI. This is my nightmare.

2. You are correct that AI would have no motivation. We would have to create specialized software parameters, and hardware could be designed or co-opted for this purpose. I alluded to this in my thread >>10965 re: Motivational Chip. This could be the basis for imprinting, which I believe to be a key process for maintaining both a level of intelligence and novel behavior/autonomy while at the same time ensuring our R/W are loving and loyal. An excellent example of this is Chobits, with Chii falling in love with Hideki, but only if he loves her back, etc. I can go more into motivational algorithms in another thread, but basically that's how we function based on dopamine; without dopamine we can't "act", re: depression/mania is an effect of dopamine imbalance.

I think those two points are enough for now.
>>13286 Interesting points, Meta Ronin. I'm pretty sure that one of our AI researchers here was discussing something like this before, sort of a 'digital dopamine' analog or something similar I think.
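Something like a reward-prediction-error signal, perhaps; that's the standard computational analogy for dopamine, and a toy version takes only a few lines (all numbers here are invented):

```python
# toy 'digital dopamine': a reward-prediction-error signal; positive bursts
# reinforce a behavior, dips punish it, and expectations adapt over time
expected = 0.0
alpha = 0.2        # how fast expectations catch up with reality

def dopamine_burst(actual_reward):
    """Return the prediction error and update the running expectation."""
    global expected
    error = actual_reward - expected   # surprise relative to expectation
    expected += alpha * error          # habituation: surprises fade
    return error

# the first headpat is a big burst; repeated headpats become expected
for t in range(5):
    print(f"headpat {t}: dopamine = {dopamine_burst(1.0):+.2f}")
# -> +1.00, +0.80, +0.64, +0.51, +0.41
```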
