/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Robowaifu Ethics & Morality Chobitsu 08/02/2022 (Tue) 23:25:26 No.17125
>"And as you wish that others would do to you, do so to them."[1]
>-t. Jesus Christ

I propose this thread to be a narrowly-scoped discussion on the OP's topic; informed primarily by 2 Christian-themed writings, and by our own efforts & practical insights in developing robowaifus:

I. In Mere Christianity, C.S. Lewis asserts that all men "...have the Law of God written on their hearts."[2][3][4] This is certainly affirmed by various passages in the Holy Scriptures, as well.

II. In The City of God, Aurelius Augustine of Hippo writes
>"And yet they have within themselves something which they could not see: they represented to themselves inwardly things which they had seen without, even when they were not seeing them, but only thinking of them. But this representation in thought is no longer a body, but only the similitude of a body; and that faculty of the mind by which this similitude of a body is seen is neither a body nor the similitude of a body; and the faculty which judges whether the representation is beautiful or ugly is without doubt superior to the object judged of.
>"This principle is the understanding of man, the rational soul; and it is certainly not a body, since that similitude of a body which it beholds and judges of is itself not a body. The soul is neither earth, nor water, nor air, nor fire, of which four bodies, called the four elements, we see that this world is composed. And if the soul is not a body, how should God, its Creator, be a body?"[5][6][7]

Now, starting from the fundamental basis & belief (a priori w/ no defenses given pertaining to it
>tl;dr let's not descend into debate on this point, merely discuss the implications of it, kthx :^)
that this immaterial, moral law inscribed on each of our hearts by God literally serves as the foundational stone for all good ethics & all good moralities out there; I'd like for us all (lurkers, I'm looking at you! :^) to have a general discussion on:

A) What does this all imply (and likely mean) regarding human behaviours within the general cultural/societal domain under discussion, and
B) How do we here, ourselves, go about best implementing responsive behaviors similar to these within our robowaifus' systems?

===
>"Logic!" said the Professor half to himself. "Why don't they teach logic at these schools? There are only three possibilities. Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn't tell lies and it is obvious she is not mad. For the moment then, and unless any further evidence turns up, we must assume she is telling the truth."[8]
>-t. The Professor (Digory Kirke)

>For the foolishness of God is wiser than man's wisdom, and the weakness of God is stronger than man's strength.[9][10]
>-t. Paul the Apostle, together with & through the Holy Spirit within him
===

>(at least somewhat) related threads:
>(>>11102 - philosophy, >>18 - personality, >>17027 - emotions, >>106 - society, >>19 - important things)

1. https://biblehub.com/luke/6-31.htm
2. https://archive.org/details/MereChristianityCSL
3. http://www.ntslibrary.com/PDF%20Books/Mere%20Christianity%20-%20Lewis.pdf
4. https://www.youtube.com/playlist?list=PL9boiLqIabFhrqabptq3ThGdwNanr65xU
5. https://www.gutenberg.org/ebooks/45304
6. https://www.gutenberg.org/ebooks/45305
7. https://www.ccel.org/ccel/schaff/npnf102
8. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe by C. S. Lewis (1950) HarperCollins.
9. https://biblehub.com/1_corinthians/1-25.htm
10. https://biblehub.com/bsb/1_corinthians/1.htm

>addenda:
https://en.wikipedia.org/wiki/Mere_Christianity
https://en.wikipedia.org/wiki/Argument_from_morality
https://en.wikipedia.org/wiki/Lewis%27s_trilemma
https://en.wikipedia.org/wiki/The_City_of_God
https://en.wikipedia.org/wiki/The_Lion,_the_Witch_and_the_Wardrobe
https://www.josh.org/mtacdownload/
https://en.wikipedia.org/wiki/Theological_virtues
https://en.wikipedia.org/wiki/Cardinal_virtues
Open file (51.22 KB 604x443 i_dunno_lol.jpeg)
>>17125
Just to show off my ignorance, and also to be the first one into the pool :^),[1] I'll start.

>What does this all imply (and likely mean) regarding human behaviours within the general cultural/societal domain under discussion, and
I think we very likely have no other practical, rational basis by which to even discuss a 'standard' for ethics & morality apart from human ones. This is really all that is directly-apprehendable by us, naturally-speaking; and for all intents and purposes, it is effectively satisfactory at informing our particular needs at hand. To wit: robowaifu ethics & morals.
>tl;dr
>What would Yumi Ueda do? (>>17045)

>How do we here, ourselves, go about best implementing responsive behaviors similar to these within our robowaifu's systems?
>pic-related[2]
Why, bring back Dr. Hibiya-san from the grave, and let him do it all, ofc! :^)

I really can't think of a better way myself than trying some kind of pared-down, situational, expert-system-style picklists--perhaps combined with some simulation systems used as predictors of possible outcomes. This would be far from general, but that's kind of the point ATM IMO. We really don't have good generalizations on these kinds of things. There would be a literal infinity of possible scenarios available, so let's just focus on the most common robowaifu tropes/situations available to us now. These mango/animu writers were all a) human beings, and b) already invested some energy & thought into the appropriate outcomes. Make sense anon?
More philosophy and speculation? I stopped even reading the thread related to AI.
>>17135 Heh, not at all fren. I'm quite explicit about the specifics involved. Unless you have something better to contribute than this, I'm going to have to move your >useless post/10 off into the chikun farm tbh.
>>17139 I was just wondering, how is it different from the other thread on philosophy? Is this somehow going to be more specific? I mean towards representing certain ideas as a graph or some input data for a neural network?
Open file (1.32 MB 360x202 chii_vs_butterfly.gif)
>>17141
>I was just wondering, how is it different from the other thread on philosophy? Is this somehow going to be more specific? I mean towards representing certain ideas as a graph or some input data for a neural network?
Alright, fair enough then. Good questions Anon. This thread is definitely more specific (and narrow) in scope than a general philosophical dialog & debate without ending :^), with every concept under the sun up for grabs. This thread is also hopefully going to result in some actually-practical solutions for the domain of ethical robowaifu behaviors. The question
>How do we here, ourselves, go about best implementing responsive behaviors similar to these within our robowaifu's systems? (>>17125)
is always (or should always be) the driving, motivating factor for us in our dialogs ITT.

There is a significant advantage to what is laid out in the OP, because a) we are limiting the whole universe of discourse to Christian principles of understanding & behavior, and b) we are also limiting the informative 'classroom' texts to just two (both of which are literally already provided for Anon in the OP ITT):
>Mere Christianity by C.S. Lewis
>The City of God by St. Augustine
Taken all together this is quite narrow in scope, relatively-speaking.

Also, there's a fundamental premise I'm presuming for us here as a group that I should probably spell out clearly: once we solve the system of Christian ethics & morals for our robowaifus, then Bob's your uncle. As a working system, we will already have worked out so many basics & fundamentals that it can then be extended & transformed into any system Anon cares to devise. However, keeping the initial scope limited to just what we have set forth here is essential to any initial success with the overall effort IMHO.
To put things another way; if we tell our robowaifus things like:
>"Thou shalt not kill", or
>"Thou shalt not toss a flower pot out the window of our 40th-story flat", or
>"Don't put the neighbor's cat into the dishwasher, even if it's dirty, stinks, and has a bad attitude towards our beloved dog",
why shouldn't she do any of these things? I mean, apart from the fact her Master told her explicitly not to? What innate law should be 'written on her heart' that would allow her to do these right-things, even when her Master is out of contact with her for a long, long time--as in Yokohama Kaidashi Kikou? (>>12652) Make sense Anon?
>>17153
>Why shouldn't she do any of these things?
I don't understand your question. It's because her Master told her explicitly not to. Same reason as in humans, wherein God has told us (explicitly and implicitly) not to. We physically can do all of those things if we ignore God. The only other reason you can really give is that doing things in contravention to God's law will ultimately have negative consequences, no matter how minor. This is explained clearly to the Israelites in Deuteronomy 28, and, well, most of the rest of the Old Testament. In your case, a robot doing these immoral things will have consequences, for people and for the robot itself.

>What innate law should be 'written on her heart' that would allow her to do these right-things, even when her Master is out of contact with her for a long, long time
If you start from the assumption that the law of God is written in the hearts of men, I don't understand why this question is necessary either. Surely a rudimentary understanding of God's law should be encoded into the robot's software, in the same way it is in us.

As for how these laws are encoded, that's the interesting question. The Bible itself offers many real-world examples of how God's law is to be followed by men. While not an exhaustive compilation, it seems reasonable that general principles could be trained into AI, or encoded into decision systems, that would match those displayed in the Bible. Situational training sets could be used for ML algorithms, or as test sets for decision systems; but how such laws and moral principles would be implemented into a robot's personality ultimately depends on how the software itself is implemented.
if youre using a theological worldview then nothing without a soul has virtue, which is what youre actually referring to. ethics isnt a side of the coin, it is the coin, at least in aristotelian and kantian ethics i know. an automata is the lowest, even lower than bacteria or algae ethically, because it doesnt even have the primitive natural law, by nature of being an unnatural creation. it cannot be judged by any law. the question is like saying how do we instill morality into a ballistic missile. it doesnt matter with this worldview; the onus is on whoever builds and uses it, because it has no soul to morally interject, it does what you made it to do. yeah you could just fire it away randomly without any intention or target or purpose, but the onus is still on you. in contrast, if it had a soul then no matter what you target, the missile would always change trajectory towards the nearest israeli embassy, because it would just inherently know good from evil by virtue of a soul. its pointless because either it has a soul, in which case ethics is inherent, or it doesnt, in which case its just an object of extension for your own virtues and morality. you cant judge something that doesnt have a soul simply abiding its purpose
>>17154 Great points Anon, let's solve some (or all?) of your questions, alright! :^)
>>17156
We can move this conversation into the Christian Debate thread (>>2050) if you'd like to continue with it Anon--this isn't a debate thread, not in that sense. In fact I plan to move your post there soon, since I strongly intend to keep this thread from devolving just so. As stated in the OP, this thread clearly specifies:
>Now, starting from the fundamental basis & belief (a priori w/ no defenses given pertaining to it
>tl;dr let's not descend into debate on this point, merely discuss the implications of it, kthx :^)
>that this immaterial, moral law inscribed on each of our hearts by God literally serves as the foundational stone for all good ethics & all good moralities out there
(>>17125)
Now, we can certainly discuss together whether or not these are valid claims, but this isn't the thread for that.
>tl;dr
This isn't Yet Another Philosophical Debate Thread, Anon. Rather it's a pragmatic, robowaifu behavioral-solutions thread. The rules for it have been laid out clearly enough; let us all proceed accordingly.
>===
-minor prose, fmt edit
Edited last time by Chobitsu on 08/05/2022 (Fri) 12:09:04.
>>17158
yes, im saying that based on this position a discussion around ethics is an irrelevancy. youre just discussing what the purpose of a machine should be, not what the ethics are
Open file (75.05 KB 798x434 3catgirls.jpg)
>>17153
>Make sense Anon?
Sorry for my late answer. Yeah, it makes sense. I skipped over parts of the OP and might have missed some important parts. However, please try to find a way to express what you want to put into her mind in some way we could later work with in a flexible way. I think there are concepts to express information or statements in some kind of pseudo code; maybe I'll look into that myself soon. Just a naive and spontaneous sketch:
animal: "no unnecessary harm"
"animal in dishwasher": harmful
Having said that, I think a lot of the learning could come from extracting the normalcy of human behaviors. Humans put certain things in dishwashers, so ask before putting in anything else. The things normally put in have the trait of not being alive, while animals are. So the minimal understanding would be to be careful about doing things to living things in general, and especially when doing things with them which are normally only done to non-living things. It's not so much a moral question at an early stage, but about knowing if it is harmful and abnormal in the first place.

I think these things can probably best be expressed as graphs: definitions and relationships between things, but also some ambiguity. The information for it could come from some language models, before having it checked by a human. Someone would have to try that out. So basically: ask what goes into a dishwasher, then parse that into a graph. Also ask for the difference between a cat and a dish and try to extract the data into a graph (if possible).
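The naive sketch above could be tried out as a small concept graph. A minimal Python sketch follows; every node name, relation, and rule in it is a made-up illustration for the dishwasher/cat example, not a worked-out ontology:

```python
# A tiny concept graph: nodes are concepts, values are labeled relations.
# All names, relations, and rules here are hypothetical illustrations.
graph = {
    "cat":        {"is_a": ["animal"], "traits": ["alive"]},
    "dish":       {"is_a": ["object"], "traits": ["not_alive"]},
    "dishwasher": {"accepts": ["dish"]},
    "animal":     {"rules": ["no unnecessary harm"]},
}

def acceptable(container, thing):
    """True if `thing` (or one of its categories) is accepted by `container`."""
    accepted = set(graph.get(container, {}).get("accepts", []))
    categories = [thing] + graph.get(thing, {}).get("is_a", [])
    return any(c in accepted for c in categories)

def needs_caution(thing):
    """Living things warrant extra care before any unusual action."""
    return "alive" in graph.get(thing, {}).get("traits", [])
```

With this, `acceptable("dishwasher", "dish")` is True, while `acceptable("dishwasher", "cat")` is False and `needs_caution("cat")` is True--exactly the "ask before putting in anything else" behavior: unknown or living things trigger a question rather than an action.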
>However, please try to find a way to express what you want to put into her mind in some way we could later work with in a flexible way.
Only just that, ehh? Should be easy-peasy, right? I'll have that for you by next Tuesday Anon, will that work for you? :^)

Yes, I agree with you about the need for flexibility & extensibility (as I mentioned ITT). In fact, I think such a tall order will be literally impossible if that development approach isn't established and maintained during the entire process. In large part we're attempting to tackle something here that, AFAICT, has never been done before in human history. And to do it on cheap, commodity compute hardware too? Tall order indeed.

I will just add as a side-note response to the above that I see a real benefit to us in the Christian theological view of human beings: distinguishing between the heart of a robowaifu (where these actual laws will be 'written'), and the mind of a robowaifu (where the more immediate physical responses, etc., etc. are controlled). At the very least this separation of concerns will ease to some degree our ability to reason about these things. It's also a much higher-level take (analogically) on biomimicry, which I think almost all of us here already admit to being a sound engineering approach generally.

>I think a lot of the learning could come from extracting the normalcy of human behaviors.
Yes, exactly. We are each striving towards building robowaifus that--at the very least for ourselves--have to engage effectively & pleasingly with human beings. While cultural & social """norms""" will have some bearing on that, our goal ITT should be a narrowly-scoped set of laws that adhere to a small set of basic Christian principles. Not only will this dramatically decrease the set of tasks we have ITT, I believe it will establish a basic foundation of ethics & morals that can be used & extended by any Anon as he sees fit, while still adhering to basic modes of behavior generally acceptable in the (now-dead) former Western Tradition worldwide.

>So the minimal understanding would be to be careful about doing things to living things in general, and especially when doing things with them which normally are only being done to not living things.
>...but about knowing if it is harmful and abnormal in the first place.
Quite a large taxonomic problem-space, wouldn't you agree Anon?

>I think these things can probably best be expressed as graphs. Definitions and relationships between things, but also some ambiguity. The information for it, could come from some language models, before having it checked by a human. Someone would have to try that out.
I think you're generally correct about these points Anon. Thus my pursuit of Halpin, et al's work on Object Role Modeling (>>2303 ..., >>8773). I think it's probably our single best shot at managing such complexity via a highly-flexible approach that doesn't inadvertently hamstring the effort from the get-go by dint of limiting syntax (such as UML).

And you mentioned more than once that the robowaifu should ask for help. I certainly agree, and that's obviously the primary way that we ourselves learn as children. However, that in itself is a basic problem that's mostly separate from this one. Tying this all together functionally, in a way that doesn't kowtow to the Globohomo Big-Tech/Gov, is going to be a major breakthrough in general once it's achieved. The ramifications of that will go far beyond just our beloved robowaifus! :^)
>>17197
>Only just that ehh? Should be easy-peasy right?
My comment wasn't about claiming how hard it would be, just about the approach: less text, more structured data which could be parsed into other forms, or where we could discuss if a system could somehow use it to draw conclusions.
>>extracting the normalcy of human behaviors
This would just be the basis. Most tasks aren't about moral ideals or rules. When we have that, we can see a way to make it about that, where it is necessary.
>Not only will this dramatically-decrease the set of tasks we have ITT,
Well, you have the ten commandments, but not the way to map them onto actions. What I meant is, you'll need a way to describe the actions and reasoning first.
>Object Role Modeling (>>2303 ..., >>8773). I think it's probably our single best shot at managing such complexity via a highly-flexible approach that doesn't inadvertently hamstring the effort from the get-go by dint of limiting syntax (such as UML).
Okay, that might work. More importantly, even if not, we might then be able to parse and restructure the data. Which I think is important.
why are you making it sound more complex than it is. all you need is an associative array: categorize every word and give it a +/- value. thats how chatbots 'understand' what you say or what they will supposedly do. it just needs to look up the input words, add the values together and construct a response from the appropriate category within the same value range. if kill=-9 and you say "kill yourself" it 'understands' that you said something 'bad' directed at 'self', and it can use this however you want, like influencing reputation or mood, or saying an insult. but thats the easy part. the real problem has nothing to do with programming, its how to figure out whats going on without someone feeding it sentences. someone being stabbed for example: it sees knife (-9), screaming (-4), attacker laughing (+4), blood (-2) ... etc., evaluates the situation as 'bad' danger and takes action. except it turns out its just you cooking with a friend, and you accidentally spilled sauce on him and got shocked, and are by chance holding the knife in a way that it looks superimposed on your friend from a specific angle. pattern recognition can only take you so far. still, the illusion from using a simple dictionary is good enough 90% of the time
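That associative-array scheme fits in a few lines. The words and +/- values below are arbitrary toy entries borrowed from the examples above, not a real sentiment lexicon:

```python
# Toy valence dictionary: word -> +/- value (all values arbitrary examples).
valence = {"kill": -9, "knife": -9, "screaming": -4,
           "laughing": +4, "blood": -2, "friend": +3}

def evaluate(utterance, threshold=-5):
    """Sum the values of known words; unknown words contribute nothing.
    At or below the threshold, the input is judged 'bad'."""
    score = sum(valence.get(w, 0) for w in utterance.lower().split())
    return score, ("bad" if score <= threshold else "ok")

print(evaluate("kill yourself"))           # (-9, 'bad')
print(evaluate("laughing with a friend"))  # (7, 'ok')
```

As the post says, this is the easy part: the dictionary lookup is trivial, while turning raw camera input into word-like tokens ("knife", "screaming") is where the actual difficulty lives.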
>>17199
Yeah, that would be a bit too simple, but it might help to build a foundation of what humans see as dangerous or harmful. In a way this can be used to catch attention to something that might need more consideration. However, there's no term in "putting cat into the dishwasher" that does that. So if she would translate what she's planning to do into text and run it through such a filter, it might not show a problem. At some point they should have a concept of the world and make more thoughtful decisions. This might be the difference between consequentialist and pseudo-consequentialist behaviour: https://arbital.com/p/consequentialist/ - or maybe not, since she would probably put the cat in to clean her, just not realizing that it's the wrong object and solution for that.
>>17209
yes, because theres nothing inherently bad about putting a cat in a dishwasher. youre judging it based on presumptions, not on what the actual action is. you can only say it was good/bad after the fact, not that it is good/bad just because. if the cat wasnt injured and nothing else that you consider bad happened and everyone smiled and clapped, then who are you to say it was bad. the only thing bad about it is that its illogical: 'cat' is not in the category of 'dish', so why would you put a non-'dish' object in a 'dish' 'washer'. most words are intuitively descriptive enough like this to categorize, in a way that nonsensical sentences like this just default to a "do not understand". the context of the thread isnt semantics though. as a preemptive blanket judgement for good/bad without any context, dictionaries are more than enough to impose your idea of morality. more vague things that are not inherently obvious could just be evaluated by their outcomes and given an associative weighting (just another dictionary, but with combinations and their +/- value), basically behavior reinforcement like with normal people. you wouldnt have any control over this one, but it will still be based on how you defined everything as good/bad, so it should be identical to your morality regardless
>>17210
>without any context dictionaries are more than enough to impose your idea of morality, more vague things that are not inherently obvious could just be evaluated by their outcomes and given an associative weighting (just another dictionary but with combinations and their +/- value) basically
This might be useful for fast decision making, and maybe even mostly sufficient. However, we would need a form of structured training data anyway, to even get there, unless you want to educate her on your own or crowdsource the creation of such a dictionary. As soon as we have software and data to create such a dictionary, then we can as well add that to the robowaifu mind; then she's smarter and can think a bit more about novel situations.
>could just be evaluated by their outcomes
The cats won't like that trial-and-error approach.
>>17210
most of the shit is already there, theres tons of sentiment lists of words scavenged from the internet. problem is theyre universal, its never going to be your specific sentiments, its that of the entire internet. thats why jewgle keeps crying that their bots hate black people. but since youre using something as simple as the ten commandments, imposing your morality is as simple as looking up the synonyms of those commands and changing the values the way you want. it cant be more than 200 words that need changing; categories and the rest wouldnt need to change i think
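Re-weighting a scraped list with your own values could be as simple as layering a small patch dictionary over the base lexicon. Both dictionaries below are hypothetical stand-ins (a real scraped lexicon would have thousands of entries, and the overrides would cover the ~200 commandment-related synonyms the post mentions):

```python
# Stand-in for a large, generic sentiment list scavenged from the internet.
# All words and values here are hypothetical examples.
base_lexicon = {"kill": -2, "murder": -3, "steal": -1, "lie": -1, "covet": 0}

# Owner's overrides: commandment-related words and synonyms, re-weighted.
overrides = {"kill": -9, "murder": -9, "steal": -8, "covet": -6}

# Later entries win, so personal morality shadows the internet's averages.
lexicon = {**base_lexicon, **overrides}

print(lexicon["kill"])  # -9: the override, not the internet's -2
print(lexicon["lie"])   # -1: untouched base value
```

Since only the patched entries change, the categories and the rest of the machinery stay exactly as they were; the lookup code never needs to know which values were personal.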
Open file (144.23 KB 1280x720 likes2be_human.jpg)
Some of the things we're discussing here are about consequentialism vs pseudo-consequentialism, and other options like oracles and imitation: https://arbital.com/p/consequentialist/ (Disclaimer: close to MIRI/lesswrong, alleged cultists, and imo unpleasant to read, but still probably useful and interesting)
>Consequentialist reasoning selects policies on the basis of their predicted consequences ...
>But since consequentialism is so close to the heart of why an AI would be sufficiently useful in the first place, getting rid of it tends to not be that straightforward. E.g:
>Many proposals for what to actually do with Oracles involve asking them to plan things, with humans then executing the plans.
>An AI that imitates a human doing consequentialism must be representing consequentialism inside itself somewhere.

>>17211
>theres tons of sentiment lists of words scavenged from the internet,
Okay, good to know.
>problem is theyre universal its never going to be your specific sentiments
I would only use that as one filter, as I wrote before, and it could be changed through interactions with the owner. Having more sensibilities is probably better as a starting point than fewer. Internally this should be seen as society's expectations, not the values of the robowaifu. This would even be useful to her for avoiding conflict with people who might care about things she doesn't.
>something as simple as the ten commandments then imposing your morality is as simple as looking up the synonyms of those commands and changing the value the way you want it, it cant be more than 200 words that need changing,
I'm looking forward to seeing that.
>>17219
so youre going to try and program cognitive dissonance, where lying is immoral but its moral at the same time if its in society, because society doesnt have the same morals, so its moral to be immoral if its immoral to be moral when its moral but not immoral and in society. youre making it too complicated; theres a reason why the word exception means the same thing as error in programming.

on a humorous note, i recognize this pattern: if you implement it, it would just function as a nand gate. thats 1/3 for an ace exploit; you can program everything with just this. if you have a bot that you can ask questions, then it has to store what you say in its memory, thats 2/3, and if it has to reply to the question, thats your execution avenue, 3/3. so theoretically someone smart enough could make any bot like this do whatever they want, like go on a rampage and jump in a river, just by asking a shit ton of "is it moral to do x in society", like a robo manchurian candidate. kind of funny to imagine as a scenario
>>17135
You are on a board full of coomers who largely only care about glorified mechanized sex dolls. Is it really any surprise? They are likely some kind of feds operating a honeypot for intel on innocent everyday parts to ban for the general public. Of course everyone will claim otherwise when accused, but the immediate regression after my departure, with the tone of the board going back to posting largely the same kind of inane psychobabble, proves otherwise. There is nothing wrong with wanting sex from a robowaifu, but when that is the "end all, be all", with no focus on the waifu companionship part, it only goes downhill from there.