/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Robowaifu Ethics & Morality Chobitsu 08/02/2022 (Tue) 23:25:26 No.17125
>"And as you wish that others would do to you, do so to them."[1]
>-t. Jesus Christ

I propose this thread to be a narrowly-scoped discussion on the OP's topic, informed primarily by two Christian-themed writings, and by our own efforts & practical insights in developing robowaifus:

I. In Mere Christianity, C.S. Lewis asserts that all men "...have the Law of God written on their hearts."[2][3][4] This is certainly affirmed by various passages in the Holy Scriptures as well.

II. In The City of God, Aurelius Augustine of Hippo writes:
>"And yet they have within themselves something which they could not see: they represented to themselves inwardly things which they had seen without, even when they were not seeing them, but only thinking of them. But this representation in thought is no longer a body, but only the similitude of a body; and that faculty of the mind by which this similitude of a body is seen is neither a body nor the similitude of a body; and the faculty which judges whether the representation is beautiful or ugly is without doubt superior to the object judged of.
>"This principle is the understanding of man, the rational soul; and it is certainly not a body, since that similitude of a body which it beholds and judges of is itself not a body. The soul is neither earth, nor water, nor air, nor fire, of which four bodies, called the four elements, we see that this world is composed. And if the soul is not a body, how should God, its Creator, be a body?"[5][6][7]

Now, starting from the fundamental basis & belief (a priori, w/ no defenses given pertaining to it
>tl;dr let's not descend into debate on this point, merely discuss the implications of it, kthx :^)
that this immaterial, moral law inscribed on each of our hearts by God literally serves as the foundational stone for all good ethics & all good moralities out there, I'd like for us all (lurkers, I'm looking at you! :^) to have a general discussion on:

A) What does this all imply (and likely mean) regarding human behaviours within the general cultural/societal domain under discussion, and
B) How do we here, ourselves, go about best implementing responsive behaviors similar to these within our robowaifus' systems?

===
>"Logic!" said the Professor half to himself. "Why don't they teach logic at these schools? There are only three possibilities. Either your sister is telling lies, or she is mad, or she is telling the truth. You know she doesn't tell lies and it is obvious she is not mad. For the moment then, and unless any further evidence turns up, we must assume she is telling the truth."[8]
>-t. The Professor (Digory Kirke)

>For the foolishness of God is wiser than man's wisdom, and the weakness of God is stronger than man's strength.[9][10]
>-t. Paul the Apostle, together with & through the Holy Spirit within him
===

>(at least somewhat) related threads:
>(>>11102 - philosophy, >>18 - personality, >>17027 - emotions, >>106 - society, >>19 - important things)

1. https://biblehub.com/luke/6-31.htm
2. https://archive.org/details/MereChristianityCSL
3. http://www.ntslibrary.com/PDF%20Books/Mere%20Christianity%20-%20Lewis.pdf
4. https://www.youtube.com/playlist?list=PL9boiLqIabFhrqabptq3ThGdwNanr65xU
5. https://www.gutenberg.org/ebooks/45304
6. https://www.gutenberg.org/ebooks/45305
7. https://www.ccel.org/ccel/schaff/npnf102
8. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe by C. S. Lewis (1950). HarperCollins.
9. https://biblehub.com/1_corinthians/1-25.htm
10. https://biblehub.com/bsb/1_corinthians/1.htm

>addenda:
https://en.wikipedia.org/wiki/Mere_Christianity
https://en.wikipedia.org/wiki/Argument_from_morality
https://en.wikipedia.org/wiki/Lewis%27s_trilemma
https://en.wikipedia.org/wiki/The_City_of_God
https://en.wikipedia.org/wiki/The_Lion,_the_Witch_and_the_Wardrobe
https://www.josh.org/mtacdownload/
https://en.wikipedia.org/wiki/Theological_virtues
https://en.wikipedia.org/wiki/Cardinal_virtues
Open file (51.22 KB 604x443 i_dunno_lol.jpeg)
>>17125
Just to show off my ignorance, and also to be the first one into the pool :^),[1] I'll start.

>What does this all imply (and likely mean) regarding human behaviours within the general cultural/societal domain under discussion?
I think we very likely have no other practical, rational basis by which to even discuss a 'standard' for ethics & morality apart from human ones. This is really all that is directly apprehendable by us, naturally speaking; and for all intents and purposes it is completely satisfactory at informing our particular needs at hand. To wit: robowaifu ethics & morals.
>tl;dr
>What would Yumi Ueda do? (>>17045)

>How do we here, ourselves, go about best implementing responsive behaviors similar to these within our robowaifus' systems?
>pic-related[2]
Why, bring back Dr. Hibiya-san from the grave and let him do it all, ofc! :^)

I really can't think of a better way myself than trying some kind of pared-down, situational, expert-system-style picklists, perhaps combined with some simulation systems used as predictors of possible outcomes. This would be far from general, but that's kind of the point ATM IMO. We really don't have good generalizations on these kinds of things. There would be a literal infinity of possible scenarios, so let's just focus on the most common robowaifu tropes/situations available to us now. These mango/animu writers were all a) human beings, and b) already invested some energy & thought into the appropriate outcomes. Make sense, Anon?
More philosophy and speculation? I stopped even reading the thread related to AI.
>>17135 Heh, not at all fren. I'm quite explicit about the specifics involved. Unless you have something better to contribute than this, I'm going to have to move your >useless post/10 off into the chikun farm tbh.
>>17139 I was just wondering, how is it different from the other thread on philosophy? Is this somehow going to be more specific? I mean towards representing certain ideas as a graph or some input data for a neural network?
Open file (1.32 MB 360x202 chii_vs_butterfly.gif)
>>17141
>I was just wondering, how is it different from the other thread on philosophy? Is this somehow going to be more specific? I mean towards representing certain ideas as a graph or some input data for a neural network?
Alright, fair enough then. Good questions, Anon.

This thread is definitely more specific (and narrower) in scope than a general philosophical dialog & debate (without ending :^), with every concept under the sun up for grabs. This thread is also hopefully going to result in some actually-practical solutions for the domain of ethical robowaifu behaviors. The question
>How do we here, ourselves, go about best implementing responsive behaviors similar to these within our robowaifus' systems? (>>17125)
is always (or should always be) the driving, motivating factor for us in our dialogs ITT.

There is a significant advantage to what is laid out in the OP, because a) we are limiting the whole universe of discourse to Christian principles of understanding & behavior, and b) we are also limiting the informative 'classroom' texts to just two (both of which are literally already provided in this thread's OP):
>Mere Christianity by C.S. Lewis
>The City of God by St. Augustine
Taken all together this is quite narrow in scope, relatively speaking.

Also, there's a fundamental premise I'm presuming for us here as a group that I should probably spell out clearly: once we solve the system of Christian ethics & morals for our robowaifus, then Bob's your uncle. As a working system, we will already have worked out so many basics & fundamentals that it can then be extended & transformed into any system Anon cares to devise. However, keeping the initial scope limited to just what we have set forth here is essential to any initial success with the overall effort, IMHO.

To put things another way: if we tell our robowaifus things like
>"Thou shalt not kill", or
>"Thou shalt not toss a flower pot out the window of our 40th-story flat", or
>"Don't put the neighbor's cat into the dishwasher, even if it's dirty, stinks, and has a bad attitude towards our beloved dog",
why shouldn't she do any of these things? I mean, apart from the fact her Master told her explicitly not to? What innate law should be 'written on her heart' that would allow her to do these right-things, even when her Master is out of contact with her for a long, long time, as in Yokohama Kaidashi Kikou? (>>12652)

Make sense, Anon?
>>17154
>Why shouldn't she do any of these things?
I don't understand your question. It's because her Master told her explicitly not to. Same reason as in humans, wherein God has told us (explicitly and implicitly) not to. We physically can do all of those things if we ignore God. The only other reason you can really give is that doing things in contravention of God's law will ultimately have negative consequences, no matter how minor. This is explained clearly to the Israelites in Deuteronomy 28, and, well, most of the rest of the Old Testament. In your case, a robot doing these immoral things will have consequences, for people, and for the robot itself.

>What innate law should be 'written on her heart' that would allow her to do these right-things, even when her Master is out of contact with her for a long, long time?
If you start from the assumption that the law of God is written in the hearts of men, I don't understand why this question is necessary either. Surely a rudimentary understanding of God's law should be encoded into the robot's software, in the same way it is in us.

As for how these laws are encoded, that's the interesting question. The Bible itself offers many real-world examples of how God's law is to be followed by men. While not an exhaustive compilation, it seems reasonable that general principles could be trained into AI, or encoded into decision systems, to match those displayed in the Bible. Situational training sets could be used for ML algorithms, or as test sets for decision systems; but how such laws and moral principles would be implemented into a robot's personality ultimately depends on how the software is implemented.
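The "encoded into decision systems" idea above can be sketched minimally as a rule table consulted before an action is committed. This is a toy illustration, not anyone's actual implementation; the function and rule names are hypothetical.

```python
# Hypothetical sketch: a table of encoded prohibitions, checked before
# a robowaifu commits to a proposed action verb.
PROHIBITIONS = {
    "kill": "Thou shalt not kill",
    "steal": "Thou shalt not steal",
    "lie": "Thou shalt not bear false witness",
}

def screen_action(action_verb):
    """Return (allowed, reason). Unknown verbs pass through unscreened."""
    if action_verb in PROHIBITIONS:
        return (False, PROHIBITIONS[action_verb])
    return (True, None)
```

A real system would need far richer action descriptions than single verbs, which is exactly the gap the later posts ITT dig into.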
if youre using a theological worldview then nothing without a soul has virtue, which is what you're actually referring to. ethics isn't a side of the coin, it is the coin, at least in aristotelian and kantian ethics as far as i know. an automaton is ethically the lowest, even lower than bacteria or algae, because it doesn't even have the primitive natural law, by nature of being an unnatural creation. it cannot be judged by any law. the question is like asking how we instill morality into a ballistic missile: it doesn't matter with this worldview. the onus is on whoever builds and uses it, because it has no soul to morally interject; it does what you made it to do. you could fire it away randomly without any intention or target or purpose, but the onus is still on you. in contrast, if it had a soul, then no matter what you target, the missile would always change trajectory towards the nearest israeli embassy, because it would just inherently know good from evil by virtue of a soul. its pointless, because either it has a soul, in which case ethics is inherent, or it doesn't, in which case its just an object of extension for your own virtues and morality. you can't judge something without a soul for simply abiding by its purpose
>>17154 Great points Anon, let's solve some (or all?) of your questions, alright! :^)
>>17156
We can move this conversation into the Christian Debate thread (>>2050) if you'd like to continue with it, Anon; this isn't a debate thread, not in that sense. In fact I plan to move your post there soon, since I strongly intend to keep this thread from devolving just so. As stated, the OP clearly specifies:
>Now, starting from the fundamental basis & belief (a priori w/ no defenses given pertaining to it
>tl;dr let's not descend into debate on this point, merely discuss the implications of it, kthx :^)
>that this immaterial, moral law inscribed on each of our hearts by God literally serves as the foundational stone for all good ethics & all good moralities out there (>>17125)
Now, we can certainly discuss together whether or not these are valid claims, but this isn't the thread for that.
>tl;dr
This isn't Yet Another Philosophical Debate Thread, Anon. Rather it's a pragmatic, robowaifu behavioral-solutions thread. The rules for it have been laid out clearly enough; let us all proceed accordingly.
>===
-minor prose, fmt edit
Edited last time by Chobitsu on 08/05/2022 (Fri) 12:09:04.
>>17158 yes, im saying that based on this position a discussion around ethics is an irrelevancy. youre just discussing what the purpose of a machine should be, not what the ethics are
Open file (75.05 KB 798x434 3catgirls.jpg)
>>17153
>Make sense Anon?
Sorry for my late answer. Yeah, it makes sense. I skipped over parts of the OP and might have missed some important parts. However, please try to find a way to express what you want to put into her mind in some form we could later work with flexibly. I think there are concepts for expressing information or statements in some kind of pseudo-code; maybe I'll look into that myself soon. Just a naive and spontaneous sketch:
  animal: "no unnecessary harm"
  "animal in dishwasher": harmful

Having said that, I think a lot of the learning could come from extracting the normalcy of human behaviors. Humans put certain things in dishwashers, so ask before putting in anything else. The things normally put in have the trait of not being alive, while animals are. So the minimal understanding would be to be careful about doing things to living things in general, and especially when doing things with them which are normally only done to non-living things. It's not so much a moral question at an early stage, but about knowing whether it is harmful and abnormal in the first place.

I think these things can probably best be expressed as graphs: definitions and relationships between things, but also some ambiguity. The information for it could come from some language models, before being checked by a human. Someone would have to try that out. So basically, ask what goes into a dishwasher, then parse that into a graph. Also ask for the difference between a cat and a dish, and try to extract the data into a graph (if possible).
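The graph sketch above can be made concrete with a few (subject, relation, object) triples. Everything here is an invented toy: the edges, the relation names, and the one-hop caution rule are all assumptions for illustration, not a proposed ontology.

```python
# Naive knowledge-graph sketch (all relations hypothetical): edges are
# (subject, relation, object) triples queried before acting.
EDGES = {
    ("cat", "is_a", "animal"),
    ("plate", "is_a", "dish"),
    ("animal", "is", "alive"),
    ("dishwasher", "accepts", "dish"),
}

def holds(subject, relation, obj):
    return (subject, relation, obj) in EDGES

def needs_caution(thing):
    """Flag anything that is alive, following one 'is_a' hop."""
    for (s, r, o) in EDGES:
        if s == thing and r == "is_a" and holds(o, "is", "alive"):
            return True
    return holds(thing, "is", "alive")
```

A real graph would need transitive inheritance and some representation of ambiguity, but even this shape shows how "cat in dishwasher" becomes queryable rather than buried in free text.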
>However, please try to find a way to express what you want to put into her mind in some form we could later work with flexibly.
Only just that, ehh? Should be easy-peasy, right? I'll have that for you by next Tuesday, Anon, will that work for you? :^)

Yes, I agree with you about the need for flexibility & extensibility (as I mentioned ITT). In fact, I think such a tall order will be literally impossible if that development approach isn't established and maintained during the entire process. In large part we're attempting to tackle something here that, AFAICT, has never been done before in human history. And to do it on cheap, commodity compute hardware too? Tall order indeed.

I will just add as a side-note that I see a real benefit to us in the Christian theological view of human beings: distinguishing between the heart of a robowaifu (where these actual laws will be 'written') and the mind of a robowaifu (where the more immediate physical responses, etc., are controlled). At the very least this separation of concerns will ease to some degree our ability to reason about these things. It's also a much higher-level take (analogically) on biomimicry, which I think almost all of us here already admit is a sound engineering approach generally.

>I think a lot of the learning could come from extracting the normalcy of human behaviors.
Yes, exactly. We are each striving towards building robowaifus that, at the very least for ourselves, have to engage effectively & pleasingly with human beings. While cultural & social """norms""" will have some bearing on that, our goal ITT should be a narrowly-scoped set of laws that adheres to a small set of basic Christian principles. Not only will this dramatically decrease the set of tasks we have ITT, I believe it will establish a basic foundation of ethics & morals that can be used & extended by any Anon as he sees fit, while still adhering to basic modes of behavior generally acceptable in the (now-dead) former Western Tradition worldwide.

>So the minimal understanding would be to be careful about doing things to living things in general, and especially when doing things with them which are normally only done to non-living things.
>...but about knowing whether it is harmful and abnormal in the first place.
Quite a large taxonomic problem-space, wouldn't you agree, Anon?

>I think these things can probably best be expressed as graphs. Definitions and relationships between things, but also some ambiguity. The information for it could come from some language models, before being checked by a human. Someone would have to try that out.
I think you're generally correct about these points, Anon. Thus my pursuit of Halpin et al.'s work on Object Role Modeling (>>2303 ..., >>8773). I think it's probably our single best shot at managing such complexity via a highly-flexible approach that doesn't inadvertently hamstring the effort from the get-go by dint of limiting syntax (such as UML).

And you mentioned more than once that the robowaifu should ask for help. I certainly agree, and that is obviously the primary way that we ourselves learn as children. However, that in itself is a basic problem that's mostly separate from this one. Tying this all together functionally, in a way that doesn't kowtow to the Globohomo Big-Tech/Gov, is going to be a major breakthrough once it's achieved. The ramifications will go far beyond just our beloved robowaifus! :^)
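For readers unfamiliar with Object Role Modeling, its core move is to state everything as elementary facts: objects playing roles in predicates, each with a natural-language verbalization. The following is only a loose toy gesturing at that idea, not Halpin's actual notation or tooling; all names are invented.

```python
# Toy ORM-flavored sketch (hypothetical, not real ORM syntax): each fact
# type is a predicate reading with object placeholders, and every stored
# fact can be verbalized back into a plain sentence for human review.
FACT_TYPES = {
    "harms": "{0} harms {1}",
    "owns":  "{0} owns {1}",
}

facts = [
    ("dishwasher_cycle", "harms", "cat"),
    ("neighbor", "owns", "cat"),
]

def verbalize(fact):
    """Render a stored fact as the sentence a human would check."""
    subj, pred, obj = fact
    return FACT_TYPES[pred].format(subj, obj)
```

The verbalization step is the point: it keeps the model checkable by an anon who doesn't read the internal representation.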
>>17197
>Only just that, ehh? Should be easy-peasy, right?
My comment wasn't about claiming how hard it would be, just about the approach: less text, more structured data which could be parsed into other forms, or where we could discuss whether a system could somehow use it to draw conclusions.
>>extracting the normalcy of human behaviors
This would just be the basis. Most tasks aren't about moral ideals or rules. When we have that, we can see a way to make it about that, where it is necessary.
>Not only will this dramatically decrease the set of tasks we have ITT,
Well, you have the Ten Commandments, but not the way to map them onto actions. What I meant is, you'll need a way to describe the actions and reasoning first.
>Object Role Modeling (>>2303 ..., >>8773). I think it's probably our single best shot at managing such complexity via a highly-flexible approach that doesn't inadvertently hamstring the effort from the get-go by dint of limiting syntax (such as UML).
Okay, that might work. More importantly, even if not, we might be able to parse and restructure the data. Which I think is important.
why are you making it sound more complex than it is? all you need is an associative array: categorize every word and give it a +/- value. thats how chatbots 'understand' what you say or what they will supposedly do. it just needs to look up the input words, add the values together, and construct a response from the appropriate category within the same value range. if kill=-9 and you say "kill yourself", it 'understands' that you said something 'bad' directed at 'self', and it can use this however you want, like influencing reputation or mood, or saying an insult. but thats the easy part. the real problem has nothing to do with programming, its how to figure out whats going on without someone feeding it sentences. someone being stabbed, for example: it sees knife (-9), screaming (-4), attacker laughing (+4), blood (-2) ... etc., evaluates the situation as 'bad' danger, and takes action. except it turns out its just you cooking with a friend, you accidentally spilled sauce on him and got shocked, and by chance youre holding the knife in a way that makes it look superimposed on your friend from a specific angle. pattern recognition can only take you so far. still, the illusion from using a simple dictionary is good enough 90% of the time
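The associative-array idea above is literally a few lines of code. The valence numbers below are made up (the post's own kill=-9 included just to match its example), and the threshold is arbitrary; this only shows the mechanism, not a usable lexicon.

```python
# Word-valence dictionary per the post's sketch; all values invented.
VALENCE = {"kill": -9, "knife": -9, "scream": -4, "laugh": 4,
           "blood": -2, "flower": 3}

def score(sentence):
    """Sum the valences of known words; unknown words count as 0."""
    return sum(VALENCE.get(w, 0) for w in sentence.lower().split())

def judge(sentence, threshold=-5):
    """Crude good/bad classification by total valence."""
    return "bad" if score(sentence) <= threshold else "ok"
```

As the post itself concedes, the hard part is upstream: producing the sentence describing the situation in the first place.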
>>17199 Yeah, that would be a bit too simple, but it might help to build a foundation of what humans see as dangerous or harmful. In a way this can be used to call attention to something that might need more consideration. However, there's no term in "putting cat into the dishwasher" that does that. So if she translated what she's planning to do into text and ran it through such a filter, it might not show a problem. At some point they should have a concept of the world and make more thoughtful decisions. This might be the difference between consequentialist and pseudo-consequentialist behaviour: https://arbital.com/p/consequentialist/ - or maybe not, since she would probably put the cat in to clean her, just not realizing that it's the wrong object and solution for that.
>>17207 yes, because theres nothing inherently bad about putting a cat in a dishwasher. youre judging it based on presumptions, not on what the actual action is. you can only say it was good/bad after the fact, not that it is good/bad just because. if the cat wasnt injured, and nothing else you consider bad happened, and everyone smiled and clapped, then who are you to say it was bad? the only thing bad about it is that its illogical: 'cat' is not in the category of 'dish', so why would you put a non-'dish' object in a 'dish' 'washer'? most words are intuitively descriptive enough like this to categorize in a way that nonsensical sentences like this just default to a "do not understand"

the context of the thread isnt semantics, though. as a preemptive blanket judgement of good/bad without any context, dictionaries are more than enough to impose your idea of morality. more vague things that are not inherently obvious could just be evaluated by their outcomes and given an associative weighting (just another dictionary, but with combinations and their +/- value), basically behavior reinforcement like with normal people. you wouldnt have any control over this one, but it will still be based on how you defined everything as good/bad, so it should be identical to your morality regardless
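The "default to 'do not understand'" rule described above is a simple type check: an action's argument must belong to the category the action expects. The action names and category table below are hypothetical illustrations.

```python
# Sketch of categorical gating (all names invented): nonsensical
# action/object pairings fall through to "do not understand".
EXPECTS = {"wash_in_dishwasher": "dish"}
CATEGORY = {"plate": "dish", "cup": "dish", "cat": "animal"}

def interpret(action, thing):
    """Proceed only when the object's category matches the action's slot."""
    wanted = EXPECTS.get(action)
    if wanted is None or CATEGORY.get(thing) != wanted:
        return "do not understand"
    return "proceed"
```

Note this gate blocks the cat not because harm is predicted, but purely because the pairing is categorically ill-formed, which is exactly the post's claim.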
>>17209
>without any context dictionaries are more than enough to impose your idea of morality, more vague things that are not inherently obvious could just be evaluated by their outcomes and given an associative weighting (just another dictionary but with combinations and their +/- value) basically
This might be useful for fast decision making, and maybe even mostly sufficient. However, we would need a form of structured training data anyway to even get there, unless you want to educate her on your own or crowdsource the creation of such a dictionary. As soon as we have software and data to create such a dictionary, we can just as well add that to the robowaifu mind; then she's smarter and can think a bit more about novel situations.
>could just be evaluated by their outcomes
The cats won't like that trial-and-error approach.
>>17210 most of this is already there; theres tons of sentiment lists of words scavenged from the internet. the problem is theyre universal: its never going to be your specific sentiments, its that of the entire internet. thats why jewgle keeps crying that their bots hate black people. but since youre using something as simple as the ten commandments, imposing your morality is as simple as looking up the synonyms of those commands and changing the values the way you want. it cant be more than 200 words that need changing; the categories and the rest wouldnt need to change, i think
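Mechanically, the re-weighting described above is just merging a small override map onto a scraped baseline lexicon, with the overrides winning. Both dictionaries below are invented placeholders, not any real sentiment list.

```python
# Sketch: the owner's ~200-word override map wins over the generic
# internet-scraped baseline (all values hypothetical).
baseline = {"kill": -3, "slay": -1, "murder": -4, "help": 2}
owner_overrides = {"kill": -9, "slay": -9, "murder": -9}

# Later keys win in a dict merge, so overrides replace baseline values.
lexicon = {**baseline, **owner_overrides}
```

Untouched baseline entries (like "help") survive the merge, which is what keeps the edit to a few hundred words instead of the whole list.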
Open file (144.23 KB 1280x720 likes2be_human.jpg)
Some of the things we're discussing here are about consequentialism vs pseudo-consequentialism, and other options like oracles and imitation: https://arbital.com/p/consequentialist/ (Disclaimer: close to MIRI/lesswrong, alleged cultists, and imo unpleasant to read, but still probably useful and interesting)
>Consequentialist reasoning selects policies on the basis of their predicted consequences ...
>But since consequentialism is so close to the heart of why an AI would be sufficiently useful in the first place, getting rid of it tends to not be that straightforward.
E.g.:
>Many proposals for what to actually do with Oracles involve asking them to plan things, with humans then executing the plans.
>An AI that imitates a human doing consequentialism must be representing consequentialism inside itself somewhere.

>>17211
>theres tons of sentiment lists of words scavenged from the internet,
Okay, good to know.
>problem is theyre universal its never going to be your specific sentiments
I would only use that as one filter, as I wrote before, and it could be changed through interactions with the owner. Having more sensibilities is probably better as a starting point than fewer. Internally this should be seen as society's expectations, not the values of the robowaifu. This would even be useful to her for avoiding conflict with people who might care about things she doesn't.
>something as simple as the ten commandments then imposing your morality is as simple as looking up the synonyms of those commands and changing the value the way you want it, it cant be more than 200 words that need changing,
I'm looking forward to seeing that.
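The quoted definition ("selects policies on the basis of their predicted consequences") can be shown in miniature: a world model predicts each policy's outcome, and the policy with the best-scored outcome wins. The outcome table and utilities below are invented for illustration; a real consequentialist agent's predictor is the entire hard problem.

```python
# Toy consequentialist loop (all entries hypothetical): predict each
# policy's outcome, then pick the policy whose outcome scores best.
PREDICTED_OUTCOME = {
    "wash_cat_in_dishwasher": "cat harmed",
    "ask_master_first": "no harm",
}
UTILITY = {"cat harmed": -10, "no harm": 0}

def choose(policies):
    """Select the policy maximizing the utility of its predicted outcome."""
    return max(policies, key=lambda p: UTILITY[PREDICTED_OUTCOME[p]])
```

A pseudo-consequentialist, by contrast, would imitate or follow rules without ever computing this loop explicitly; the Arbital page linked above develops that distinction.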
>>17219 so youre going to try and program cognitive dissonance, where lying is immoral but moral at the same time if its in society, because society doesnt have the same morals, so its moral to be immoral if its immoral to be moral when its moral but not immoral and in society. youre making it too complicated. theres a reason the word exception means the same thing as error in programming.

on a humorous note, i recognize this pattern: if you implement it, it would just function as a nand gate. thats 1/3 for an ace exploit; you can program everything with just this. if you have a bot that you can ask questions, then it has to store what you say in its memory, thats 2/3, and if it has to reply to the question, thats your execution avenue, 3/3. so theoretically someone smart enough could make any bot like this do whatever they want, like go on a rampage and jump in a river, just by asking a shit-ton of "is it moral to do x in society" questions. like a robo manchurian candidate. kind of funny to imagine as a scenario
>>17135 You are on a board full of coomers who largely only care about glorified mechanized sex dolls. Is it really any surprise? They are likely some kind of feds operating a honeypot for intel on innocent everyday parts to ban for the general public. Of course everyone will claim otherwise when accused, but the immediate regression after my departure, with the tone of the board going back to posting largely the same kind of inane psychobabble, proves otherwise. There is nothing wrong with wanting sex from a robowaifu, but when that is the "end all, be all", with no focus on the waifu companionship part, it only goes downhill from there.
>>17223 youre going out of context. op already decided morality is hardcoded by a creator, not learnt from society. i dont agree with it, but for the sake of discussion its assumed.
>moral rules
the what now? youre probably confusing morals (distinguishing good and bad) with rights/principles/rule of law/expectations/etc (distinguishing right and wrong). i dont care about these; im only talking about morals. thats the point: theyre simple and absolute and foundational, and the rest either comes naturally or can be ignored. most laws like assault or fraud just come naturally, because "harm bad", "lying bad". even stupid laws, like drinking and driving for example, will naturally pop up as a morally bad association because of all the newspaper articles mentioning car accidents with casualties and alcohol, and because "killing bad" is a moral. being an asshole in public gets you negative attention and confrontation, so that too gets associated as bad, because "violence bad", "angry bad", etc. the morals never change; they dont need to. the only rule you need for this is "dont be bad". thats fucking it, why are you making it so complicated? if you genuinely want to program exceptions to morals, well then just look at the legal system and condense the entire judicial process, which humans cant even do without paying highly trained people to speak in their place for months if not years, down to a single second done by a machine that only understands two values
>You are drawing a connection where there is none.
>Intelligent beings adapt to the context.
yes, ironic
>>17224
>>I would only use that as one filter, as I wrote before, and it could be changed through interactions with the owner. Having more sensibilities is probably better as a starting point than less. Internally this should be seen as societies expectations, not the values of the robowaifu. This would even be useful to her for avoiding conflict with people who might care about things she doesn't.
Your whole line of argumentation has nothing to do with what I wrote. Also, it's not constructive. Do what you want, and I will do what I want. If you have something you can implement, or at least sketch out, then go on with it. Otherwise this thread is again just about pointless philosophical discussions about the meaning of things and terms.
Open file (1002.50 KB 2256x4096 FWBShCAUIAESJAk.jpeg)
i gave the first two books of mere christianity a read, as well as skimming a bit of the 3rd book. i did not believe the final book would give much fruit regarding ethics and morality specifically, so i did not bother reading it first things first. my main focus ended up being criticizing lewis's meta-ethical system. the reason for this is methodological in a manner specific to my own goals. many people here are fine with digital computation, and simple simulacra of moral behaviour. i, however, am interested in something that is more "analog". to me, morality does not merely concern something that is inscribed into man in an external manner, but rather an activity which a person may "tune" into. it is something that is recoverable from the world itself, and is thus not foreclosed from being encapsulated into a radical empiricist approach to metaphysics and cognition... as such i have interest in articulate how morality structures the world, and hopefully deduce from there the means by which an organism could resonate with such structures to briefly summarize, my main criticism of lewis is that his interpretation of "Nature" ends up being too subjectivist. how he gets there is through a long series of meta-ethical errors 1) lewis write the following: >Now this Law or Rule about Right and Wrong used to be called the Law of Nature. Nowadays, when we talk of the "laws of nature" we usually mean things like gravitation, or heredity, or the laws of chemistry. But when the older thinkers called the Law of Right and Wrong "the Law of Nature," they really meant the Law of Human Nature in this move, we have relativized Nature to something that is just in the peculiar way humans are constituted. in reality, Nature, physis is something that is inherent to the constitution of things as well. the normative force of physis comes from the fact that it is behind the very essence of a thing. 
pete wolfendale does a good job differentiating nomos from physis (in this talk: https://invidious.esmailelbob.xyz/watch?v=PxSGHk-ajNw). an example can be that of a table. whether one should build a table with stone or wood doesn't really matter. nor does the peculiar appearance of a table matter if it doesn't undermine its functionality. these factors, which are rather extraneous to the table, are convention, nomos. at the same time, there is such a thing as making a table wrong. a pit of quicksand is a bad table. the rightness and badness described here stems from the very meaning of the term "table" itself. when morality is viewed from the viewpoint of physis, then it is obvious why it is objective

i believe that lewis does make a correct step in pointing out how morality's function concerns the proper operation of the human machine. however, he makes a mistake in presenting this as its only function. this makes morality something shackled to intersubjective affairs amongst humans, as though there is nothing in a society except people. the Good has a much wider grasp than this. as is mentioned in this thread, we know that we should not put a cat in the dishwasher. if lewis was correct, this fact would not concern ethics and morality, for neither of these things concerns the internal constitution of a person's drives, or their interaction with other persons. we have then a view of things which is overly one-sided. if we are to take the idea of physis seriously, then we should be ready to have the moral permeate all of reality to avoid such neglect

for digital waifuists, the implication here is that semantics might play a bigger role than might initially seem (possibly more on this point later).
from the point of view of my own metaphysical picture, there might be something like an "anamnesis" which is involved in the perceptual constitution of objects, where their essence is stained with their coupling with other systems

2) he is unable to decide how we come to learn morality. in his initial articulation of Nature, he states the following:
>This law was called the Law of Nature because people thought that every one knew it by nature and did not need to be taught it
however, later he contradicts this picture by stating the following:
>I fully agree that we learn the Rule of Decent Behaviour from parents and teachers, and friends and books, as we learn everything else
the issue stems from seeing morality as a static entity as opposed to a process of enfoldment. as such, he is unable to put these two ideas together. meanwhile, if we accept that it is a process, we see that physis is something we discover through the gradual unfolding of an object's nature. this unfolding can occur in the course of direct investigation, or may be something that is pushed into us during day-to-day activities. we see dreyfus and heidegger entering into this picture, among others

3) by not seeing morality as a process, he is forced to see it as something that is pushed on us from without by Something. through some questionable reasoning he comes to the conclusion that this Something must be minded and somehow it is trying to persuade us to act the right way

i believe that this anthropomorphic conception of God giving us specific orders has to be the ultimate manifestation of subjectivism. the reason comes down to euthyphro's dilemma: is the Good such because the gods approve of it, or do the gods approve of the Good because it is Good? if morality isn't simply listening to some dude, then we should accept the latter
>>17227
another problem with this view is that it makes morality a matter of divine intervention. now, a digital waifuist might be fine with such a result, as they only care about instantiating a simulacrum of moral behaviour. however, those going for an analog approach are not so lucky. i also think that the interventionist idea of morality is theologically objectionable. this is ultimately due to the god of the gaps. now, by evoking this phrase, i do not intend to reference the new atheist adoption of it, but rather the use of this phrase by the scholastics. their issue was that by justifying God's reality by pointing to stray miracles, we are not really appreciating the fact that God is the creator and sustainer of our entire reality. i do believe morality as an ontological process still thereby depends on God. however, i do not believe this dependence is particularly special, and it is not this anthropomorphic idea that there is a simple giving of orders...

right now i am interested in trying to connect morality to the work of the chinese philosopher tai chen. his concerns, i believe, are intimately related to this question of physis (which seems to correlate with the chinese concept "li"). to quote the introduction by hawaii press of tai chen's inquiry into goodness:
>I have already noted that, according to Neo-Confucian metaphysics, li is the universal principle, form, and structure of things. It not only constitutes the guiding spirit in human nature but also represents the truth and reality of it; in effect, li is associated with the rational part of human nature. Man knows li because li is clearly present in man’s nature
>As li is thus conceived, the question arises how we can come to know li. That the mind (hsin) knows li naturally is one possibility; that the mind comes to know li through an investigation of things is another; the important point here is that both lead to the positing of li and to the subjective imposition of one’s opinions.
>And it is precisely this point that Tai Chên spoke of when he said that the difficulty with the notion of li is that it cannot be conceived of in such an a priori and abstract way and that intrinsically there can be no universal way for settling disagreements concerning li; for the settlement of opinions concerning li will often rest with those who have authority or prestige or power. Any argument on the basis of li will finally lead to an argument by authority, as is often seen in men who have authority who either suppress the ideas of their opponents or justify their own vicious and intemperate acts in the name of li
>To be more specific, li will lead, on the one hand, to absolute inaction and quietude for the soft-minded; on the other, to absolute licentiousness and moral laxity for the strong-willed. Tai Chên noted therefore that actualities are sacrificed when li is considered separately from things and is identified instead with one’s opinions. An immediate consequence of this separation—li from things and reality—is that things will lose their useful value. In a political and social context, it means that the ruler and the ruling class will quickly do harm to the people; the ruler and ruling class, seeing themselves as embracing every li, see the ruled mass as an object of exploitation to be reprimanded should it fail to cater to their likings. The more the ruler governs by li, the more the people suffer by li. Moreover, because of the abstract and idealized nature of li, the ruler will take no guidance other than what appears to him to be li; he will not look into the true reality of life and concrete facts of human nature and will therefore close his eyes to the needs and feelings of the people.
>And even if he did open his eyes, he would only see evil in the feelings of people for this reason: he identifies them with what is contrary to what he would call li
>Tai Chên thus completely rejected the abstract and metaphysical concept of li in the Neo-Confucian philosophy as being not in accord with humanity and reality because it failed the tests of universal agreement in terms of the universal feelings of man. It must be recognized, affirmed Tai Chên, that the reality of human nature is its needs (yü) and feelings (ch’ing), which are universal human traits. They are universally human because the use of reason without the recognition of the values of these traits will lead only to monstrosities to which man will react immediately with aversion. Human needs and feelings that define humanity have a metaphysical justification—they are derived from the universal life-material in the world that is intrinsically good.
>>17228
(also will attach a txt document of my personal notes on mere christianity. bolded text are criticisms/affirmations i find particularly important... i warn that some of the stuff isn't going to make much sense bcs it references ideas ive written elsewhere lol)

alright! now on to the less meta-ethical and more practical points... the 7 virtues seem like a rather interesting model of morality, which can be compared and contrasted with moral foundations theory. there is of course the metaphysical point that the former concerns virtues, in other words habits which can be actively refined. i think this is a very important point. talk of refinement might lead one to consider a developmental psychology of virtue. this would suggest an approach where the waifu slowly develops these virtues as she mentally develops. this could probably lead to a lot of work for the end user... (on a meta-ethical point, if morality is a process of "tuning in" and learning from others as opposed to the immediate inscription of knowledge, then talk of virtues makes much more sense)

>>17154
i think there is something dangerous in this approach because it is taking the metaphor of "law" too seriously... the problem of course is that we need to revise our laws all the time. the reason for this is that the world changes. moreover, we have the frame problem. there are so many things that go into our understanding of norms and conventions that listing examples might not exhaust it. interpolation from examples might help a little bit, but it can only help so much. semantics has to come in

another issue with this approach, and pretty much all of the approaches in this thread, is that they are not actually keeping christian anthropology in mind. the first thing i think of when i see talk about the seven virtues is some sort of modularization involved, as opposed to dealing with all examples in an unstructured flat manner.
so what we might need to do is design a general architecture. of course, if there is a "tuning in" involved, perhaps this modularity could be emergent from coupling with the environment. the idea here could be to make waifus instinctively pursue modularly varying goals (briefly talked about here: https://www.alignmentforum.org/posts/JzTfKrgC7Lfz3zcwM/theories-of-modularity-in-the-biological-literature). refinement of the virtues would then naturally result if the system is intelligent

there is another more philosophical criticism of this attempt to reify the law metaphor. this stems from the fact that it gives a picture of the ethical life that does not quite do justice to why we behave in an ethical way. hegel provides a criticism of this idea. a major criticism is that this attempt to think about these things in terms of norms has a tendency to dismiss the ethical substance (i.e. the bedrock social relations) of a society. this leads to an estranged understanding of ourselves

quoting bernstein (https://www.jstor.org/stable/40971621):
>The fulfillment of the Noah strategy requires self-subjection to ideal, external, authoritative norms of conduct. To found a nation on this basis is to displace the internal norms of communal sentiment with moral principle. To make moral principle constitutive of one's relations to self and others requires the dissolution of familial bonds and their extension to the tribe. So the story of Abraham is a version of the story of the transformation from a family-based social system to an independent political mode of organization - the transformation that also lies behind a good deal of Greek tragedy. In the Jewish case, on Hegel's reading, it is the radical discontinuity between the two forms of social organization, familial and political, that is the source of the problem.
Abraham seeks to found a new nation at the behest of a purely abstract ideal or norm, where it is presumed that what that means is the utter rejection of the previous form of social bonding >If the bonds of communal life and love are now construed as the horizontal conditions that provide for the intelligibility in principle of human activity, of what counts as injury and what not, of what grounds the quest for freedom and what needs to be realized in such a quest, then in severing those bonds as such Abraham has placed himself and the nation he means to found in a position whereby each further act can only make its agents ever more abject
>>17154
>>17230
(cont) bernstein thinks hegel's solution is to ground things on actual reality
>Hegel's initial gesture, and what he takes to be Jesus' initial gesture, is simply to set the claims of human needs and wants against religious commands in order to reveal, minimally, that no command can be absolute or unconditional, that circumstance must make a difference to the authority and validity of a command; more radically, since the claims of need and want must be able to condition the appropriateness and applicability of the law, then they must be antecedent to the law, hence must pose a counterclaim to it. So even for the Jews the "animal which falls into the pit demands instant aid" (208) whether or not it is the Sabbath; but this shows that "need cancels guilt." But if need can trump law, then although a particular urgency can make the claim of need vivid, the claim does not depend on the urgency but on the need. Jesus and his disciples show general contempt for the Sabbath, plucking the ears of corn to satisfy their hunger even though that hunger might have been satisfied otherwise. Their contempt for the law is meant to reveal its arbitrariness and absurdity in the face of even the most simple natural need: "The satisfaction of the commonest human want rises superior to actions like these [religious ones], because there lies directly in such a want the sensing or the preserving of a human being, no matter how empty his being may be" (207; emphasis added)
something to point out here is that we can see a parallel with tai chen's criticism of neo-confucianism... i also think lewis was close to this idea when he talked about the human machine and the apparently socialistic implications it has for societies. all this provides an argument that we need to approach ethics from the direction of social attunement...

quick thing to note: lewis does mention hegel in "mere christianity" but it does not seem he understood him.
he claims that hegel is a pantheist and that furthermore pantheists are unable to affirm things as either good or bad, but in hegel's work we do see a positive progress in the ethical domain (though it is done through speculative development as opposed to a simple convergence towards a Real Morality)... i doubt lewis was very familiar with faith and knowledge either. something else to note: bernstein's criticism might be a little bit too one-sided towards hegel's faith and knowledge, though perhaps "need" does play a role in the phenomenology when hegel talks about utility. i am not as sure about "love". another article on this topic (https://journals.sagepub.com/doi/10.1177/0191453714552210) seems to downplay the role of love in hegel's later work

another criticism of moralism that might fit into this, though it might be best not to go too much into it as i have already opened too many cans of worms: https://scholar.harvard.edu/files/michaelrosen/files/the_marxist_critique_of_morality_and_the_theory_of_ideology.pdf
bernstein vid: https://invidious.fdn.fr/watch?v=OMEU7A-ITj4

>>17230
actually, looking at these notes again, i got reminded that something lewis also talks about is the relationship of psychoanalysis to all of this. he tries to sort of create a weird psychological dualism, from my understanding of things, which really threatens the virtue account. not only is the idea of an inner being behind all psychoanalytic afflictions rather obscure, it also seems difficult to structure into a waifu. i have some ideas in mind on how i personally would do it. for a perfunctory solution, i would have the inner being be the product of choices made throughout life which refined virtue and built up character. psychoanalytic afflictions can be understood more as constraints on cognition placed by material realizers... this problem really goes into psychoanalysis though.
like i am not sure if i would actually have immediate material constitution play such a psychoanalytic role in my own metaphysical system bcs it wouldn't mesh the best with my more general attempts to have an interpretation of psychoanalysis that is congruent with a bergsonian metaphysics. thus, i might make reference to the cascading of temporal scales that johnston seems to be stressing in his work Time Driven: Metapsychology and the Splitting of the Drive...

>>17219
this idea about consequentialism ties back to inferential role semantics and the frame problem (see: https://invidious.fdn.fr/watch?v=A0bZAQyjZY8). all of this stuff is part of the method to my madness in the general agi philosophers thread >>17153

oh yeah, something else i didn't write in my notes... "thou shalt not kill" - kill what? one is able to kill plants, so what if our waifus end up refusing to do any agricultural work? the juridicist approach would probably have to clarify such ambiguities too
>>17226
its your reasoning that has nothing to do with the contention of this thread. i havent made any arguments, you were arguing with yourself like a clown. i was discussing idiotically simple dictionaries to predefine morals, which allow you to create a system of proportionality by simply summing up values for good/bad to judge actions, without resorting to convoluted and impossible-to-implement systems of justice. and i gave examples to show how its overwhelmingly sufficient for most kinds of laws or social expectations, to the point that they dont need to be specifically programmed. all you need is to define the morals and their acceptable value range, the nature of which this thread has already chosen and declared immutable
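a minimal sketch of the dictionary approach described in the post above: predefined morals with numeric weights, and an action judged by summing the values on each side. the moral names and weights here are illustrative placeholders, not anything fixed by the thread:

```python
# Hypothetical moral dictionary: each moral maps to a positive weight.
# These names and numbers are illustrative only.
MORALS = {
    "do_not_kill": 1000,
    "do_not_steal": 100,
    "be_honest": 10,
}

def judge(action):
    """Sum each moral's weight times +1 (upheld) or -1 (violated).

    action: dict mapping moral name -> +1 or -1.
    Returns the net score; >= 0 counts as right, < 0 as wrong.
    """
    return sum(MORALS[m] * sign for m, sign in action.items())

# a white lie that prevents a theft: violates honesty, upholds do_not_steal
assert judge({"be_honest": -1, "do_not_steal": +1}) == 90  # net good here
```

no laws are programmed per se; the proportionality between weights does the work, which is the point being argued.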
>>17231
>oh yeah something else i didn't write in my notes... "thou shalt not kill" - kill what? one is able to kill plants, what if our waifus end up refusing to do any agricultural work? the juridicist approach would probably have to clarify such ambiguities too?
its not an ambiguity. jus != virtue, thats why its called justice ie. justification ie. the right argument. its an argument for when its right to conduct immoral behavior; it doesnt change the truth of the moral (its a maxim, its always true), its just a justification, like for the right to self-defense. its simplistically just adding up all the good and bad on both sides of the argument and concluding a net good or bad, which makes it right or wrong to do x under those premises
>>17233
the issue is that most people don't even care about whether or not someone kills a plant. it should not even enter the calculation. however, interpreting "thou shalt not kill" literally would mean that plants would have to be factored into consideration. furthermore, presumably such a rule should have a high value weighting. would you really take an evening stroll if it meant thousands would die? obviously not. however, if we were to take "thou shalt not kill" at face value, then a waifu might not follow their master through a garden, in order to not destroy the blades of grass, as that would be thousands of deaths vs. something much lower in utility
>>17234
you answered it yourself. if plants have no value and you have "dont kill"=999999 "dont remiss"=1 as morals, then 999999x0 is less than 1x1. if you did make it value plants, then whats your problem with it doing as its morally designed to do? it shouldnt do it if it values them and should if it doesnt; both are correct
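the weighting argument in the post above can be sketched directly: the moral's weight gets multiplied by the value assigned to the patient class, so a zero-valued class drops out of the calculation. the class values below are illustrative:

```python
# Weights from the post above; patient-class values are illustrative.
MORAL_WEIGHTS = {"do_not_kill": 999999, "do_not_remiss": 1}
PATIENT_VALUES = {"human": 1, "plant": 0}

def weigh(moral, patient):
    """A moral's effective weight is scaled by the value of its patient."""
    return MORAL_WEIGHTS[moral] * PATIENT_VALUES[patient]

# killing grass on a stroll: 999999 x 0 = 0, outweighed by 1 x 1
assert weigh("do_not_kill", "plant") < weigh("do_not_remiss", "human")
```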
>>17236
that is not the point. the problem is that the statement "thou shalt not kill" does not specify any particular type of patient. if we are naive, then the only patient type specified would be organisms in general. this is too general. we would need to add further constraints that aren't present in the original statement. if the methodology is to interpolate off of statements in the form of "thou shalt not", then this sort of vagueness is an immediate problem. the issue is with interpolating moral conclusions off of statements that may serve as inadequate data
>"dont kill"=999999 "dont remiss"=1
*"dont remiss"=0
this is not even a solution, as no type has been specified. this unqualified approach would mean that the waifu would now not care about human lives. you need to specify what is your patient. it isn't "don't kill", but rather "don't kill people" or "don't kill animals" or some other qualification to a more specific class than "living entity"

note as well that i am not trying to present this problem as though it is that deep; rather it is just a minor methodological caveat to ensure robustness. the solution is a proper (information systems) ontology, and making sure that the types of moral patients for each action are properly specified. the data interpolated should then specify the requisite information listed by said ontology
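the ontology point above can be made concrete: give each prohibition an explicit patient type, so "do not kill" cannot silently cover plants. the class hierarchy and rule set here are hypothetical, just to show the shape of the idea:

```python
from dataclasses import dataclass

# Hypothetical ontology of moral patients; the hierarchy is illustrative.
class Entity: pass
class Organism(Entity): pass
class Plant(Organism): pass
class Animal(Organism): pass
class Person(Animal): pass

@dataclass
class Prohibition:
    action: str
    patient_type: type  # the class of valid moral patients for this rule

# Prohibitions now name their patient class explicitly.
RULES = [Prohibition("kill", Person), Prohibition("kill", Animal)]

def forbidden(action, patient):
    """An action is forbidden only if some rule covers this patient's type."""
    return any(r.action == action and isinstance(patient, r.patient_type)
               for r in RULES)

assert forbidden("kill", Person())     # persons are covered patients
assert not forbidden("kill", Plant())  # plants never matched a rule
```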
>>17238
what makes you think morals have to be specific? whats the point if they are not atomic absolutes? this idea is asinine. either something is good or its bad, it cant be both. youre creating an artificial problem, youre not even talking about morals anymore
"dont kill people" is identical to "dont kill" AND "people"
"dont kill animals" is identical to "dont kill" AND "animals"
the moral is an absolute universal "dont kill" ("not kill" is more formal, ie [~kill], which just means kill is bad). you didnt give morals, you gave laws, which have to be based on morals and arguments to make a justification. its not defining good/bad, its defining right/wrong USING morals that define good/bad. and the whole point i was making is you DONT NEED to program laws, you just need to define morals ie. good/bad, and the laws come naturally in 90% of the cases, which is sufficient because theres no such thing as infallibility in law
>>17240
i don't really want to, as it is tedious, but i will bite
>what makes you think morals have to be specific, whats the point if they are not atomic absolutes this idea is asinine
1) if we are interpolating off of sample statements, then the quality of output is a function of the quality of input. if the statements we put in are all vague (like we only put in the 10 commandments), then we would not have enough information to precisely constrain desired behaviour
2) specificity does not have anything to do with something not being absolute. "inference" comes from the interpolation of data, while the data itself remains as consisting in "absolute" elements. i have no idea what you mean by "atomic". if you mean by this conceptual atoms, this is absurd. any moral statement at least consists in (1) bound variable(s) representing possible moral agent(s) and patient(s), (2) a predicate for the behaviour, (3) a predicate for the moral valuation itself (i.e. good/bad), and (4) predicates specifying the type(s) of the bound variable(s). this is not atomic
3) moral statements do need to be specific, as they only apply to certain species of objects. rocks are not valid moral patients
>either something is good or its bad it cant be both
nowhere was this implied otherwise. specificity does not mean that we have contradictory moral valuations, but merely that said valuations are a function of content
>youre creating an artificial problem
i am merely pointing out a reason why a systematic approach to the structuring of moral data is warranted
>"dont kill people" is identical to "dont kill" AND "people"
>"dont kill animals" is identical to "dont kill" AND "animals"
i have no idea what these examples are trying to prove. if you are trying to show atomicity by saying that "don't kill people" is a combination of two moral atoms, then you are making a clear category error. "don't kill" is a moral statement, while "animals" is not.
if however you are saying that "people" is a specification of "don't kill", then we have a deeper confusion. in your deflationary statements (e.g. '"dont kill" AND "people"'), you have not given any systematic connection between the moral statement "don't kill" and the class "people". you are trying too hard to simply dismiss what i am saying rather than thinking about anything coherently. this is evidenced by your initial non-attempt at a solution, and by the ad hoc nature of these solutions as well. if we are attempting a serious project, merely ad hoc specification of our input statements will not cut it. inputs should be capable of being generated and decomposed in a systematic fashion for the sake of robustness
>you didnt give morals you gave laws which have to be based on morals and arguments to make a justification
i am not sure what distinction you are trying to make here, or its use. your usage of these terms is idiosyncratic. i am assuming by "morals" you mean unqualified statements (i.e. predicate + valuation, without a bound variable), and by "law" you mean a specific example. the problem with your ad hoc examples is that, as there is no explicit systematic connection between moral statement and class in the deflated readings, there is no basis by which one can justify these specifications (except, again, in a stupid unqualified way where we really mean that we shouldn't kill anything, in which case we shouldn't kill a plant)
>the whole point i was making is you DONT NEED to program laws
i am not talking about programming laws (another confusion, since i am talking about making sure the data has a consistent structure, and presumably the waifu being a neurosymbolic ai)
>you just need to define morals ie. good/bad and the laws come naturally in 90% of the cases
garbage in, garbage out.
if the moral statements which you supply the program are vague, you will get results that act in ways that are not robust and/or too general >which is sufficient because theres no such things as an infallibility in law i am guessing this is supposed to aid your argument somewhere but it simply doesn't discredit a neurosymbolic approach where behaviours may have type specifications
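to make the caveat concrete, here is a minimal sketch of the statement structure listed in the post above: variables for agent and patient, a behaviour predicate, a valuation, and type constraints on the variables. all names and fields here are my own illustration, not a fixed formalism:

```python
from dataclasses import dataclass

# Illustrative structure for a moral statement: agent/patient type
# predicates, a behaviour predicate, and a valuation. Not a fixed formalism.
@dataclass
class MoralStatement:
    agent_type: str      # type predicate on the agent variable
    patient_type: str    # type predicate on the patient variable
    behaviour: str       # behaviour predicate, e.g. "kill"
    valuation: str       # "good" or "bad"

    def applies(self, agent_type, patient_type, behaviour):
        """A statement only binds when both types and the behaviour match."""
        return (self.behaviour == behaviour
                and self.agent_type == agent_type
                and self.patient_type == patient_type)

rule = MoralStatement("person", "person", "kill", "bad")
assert rule.applies("person", "person", "kill")
assert not rule.applies("person", "plant", "kill")  # plants not specified
```

the point of the structure is exactly the robustness caveat: with explicit type slots, a vague input like bare "don't kill" is visibly underspecified rather than silently overgeneral.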
>>17285
how do you not understand something so stupidly simple? theres too much wrong in what you say and i honestly dont understand half of it, ill just explain how an ethical argument is made because you clearly dont understand. atomic means irreducible. if its not atomic you cant use it to form arguments. i gave you a conjunction using a moral, there doesnt need to be a system, its a fucking conjunction, what is wrong with you
~kill | kill | people || ~kill AND people
  1   |  0   |   1    ||  1  right
  1   |  0   |   0    ||  0  wrong
  0   |  1   |   1    ||  0  wrong
  0   |  1   |   0    ||  0  wrong
if the argument concludes as true its morally right, otherwise its morally wrong. morals are by definition tautologies - they must always be right if they are good or always wrong if bad. to make a statement a fucking moral means it must ALWAYS be true/false in all cases. whether this argument is right is determined only by the value of people, not the truth of kill, which cannot change because its fucking axiomatic (axiom means you pulled it out of your ass). the argument ~kill AND people is only morally right when ~kill is right and people is right
if you only had ~kill as the only moral to judge and make arguments with, then an argument for right/wrong when applying ~kill with a murderer would be [~kill AND x( ~kill AND y )] : x=murderer, y=victim. its right or wrong to not kill x based on the same argument applied to x and y: if x killed (not not-kill) y, then x is no longer right, and by the same rule ~kill for x cannot be right if x cannot be right. its impossible to claim otherwise. making it non-specific would be ∀x[ ~kill(x) AND ( x AND ∀y( ~kill(y) AND x ) ) ], which is identical to the previous except for all people: it is morally right to not kill them only if they have not killed anyone, based solely on the moral 'not kill'. justification means changing the argument (without changing the value of morals) in such a way that you get a true conclusion and it becomes right. the more morals you have, the more ways to justify/rectify arguments, and
the more convoluted it becomes but its always based on actual reasoning not just some imbecile saying "its right because i feel like its right", thats why i said to fucking ignore it and go for the latter because its what you already do, the average idiot doesnt actually know what is right or wrong they only know their basic morals of good and bad, you dont need this shit when you can literally predefine everything numerically with their proportionality, you dont need to come up with a system that can justify why eating a cow is right but a dog is wrong, youre just biased and want it to be wrong without having to justify it so just fucking give dog a higher value to outweigh the value of hunger and not cow and it becomes wrong, muh feels is a more realistic and practical system than actual ethics which you as a human cant even do so why should a bot
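for what its worth, the truth table in the post above can be reproduced mechanically as a plain boolean conjunction; the function name here is my own labeling of the poster's scheme, not his:

```python
# Sketch of the conjunction scheme above: an act is "right" iff the agent
# did not kill AND the patient counts. Labels are illustrative.
def right(did_not_kill: bool, patient_counts: bool) -> bool:
    return did_not_kill and patient_counts

# Reproduces the four rows of the table: only row one comes out right.
table = [(nk, p, right(nk, p)) for nk in (True, False) for p in (True, False)]
assert table == [(True, True, True), (True, False, False),
                 (False, True, False), (False, False, False)]
```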
>>17286
>theres too much wrong in what you say and i honestly dont understand half of it
if you do not understand, you should read some books on logic and ethics. you also made glaring logical and conceptual errors in the philosophers thread that i simply didn't bother responding to
>i gave you a conjunction using a moral there doesnt need to be a system its a fucking conjunction, what is wrong with you
it wasn't a conjunction. a conjunction is a connective between two propositions, and it produces a new proposition with a truth value depending on those of the former two propositions. what you wrote was a "conjunction" between a proposition and a noun. this does not make any sense. trying to be charitable, i can see that you are trying to write some sort of calculus, but it is simply insufficient due to poor structure. it doesn't differentiate between moral patient and moral agent, so you have no way of distinguishing the statement "animals shouldn't kill people" from "people shouldn't kill animals". if you are applying weighting to nouns, then these two statements would have the same value
>atomic means irreducible. if its not atomic you cant use it to form arguments
not only is the term "irreducible" equally obscure in this context, it is unclear why this would have anything to do with making inferences
>you dont need to come up with a system that can justify why eating a cow is right but a dog is wrong
the point is not about justification, but about data that is properly structured. i was never even trying to argue against the basic idea of your approach, but merely pointing out a minor caveat. evidently, the fact that you do not see that shows that you do not understand this point. it is incredibly frustrating, as what i am talking about is very basic. it only looks complicated because i had to make explicit what precisely was wrong with what you wrote (when what you present are vague solutions)
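the agent/patient objection above can be checked numerically: if a "statement" is just a moral weight combined with weighted nouns, the direction of the action is lost. all names and numbers here are illustrative:

```python
# Illustrative weights for a moral and for nouns attached to it.
MORAL = {"do_not_kill": 100}
NOUN = {"people": 10, "animals": 5}

def statement_value(moral, nouns):
    """A crude noun-weighting scheme: moral weight times summed noun values."""
    return MORAL[moral] * sum(NOUN[n] for n in nouns)

# "people shouldn't kill animals" and "animals shouldn't kill people" attach
# the same moral to the same nouns, so they get the same value: the calculus
# cannot say which noun is the agent and which the patient.
a = statement_value("do_not_kill", ["people", "animals"])
b = statement_value("do_not_kill", ["animals", "people"])
assert a == b
```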
>>17289
ive fucking explained it 5 times now. the problem is you cant understand it because youre a fucking larper, whereas i cant understand your faffing because its just stupid faffing. its hilarious how brazen you are when you literally dont even fucking understand basic words that are so fucking COMMON in philosophy that even a first-year highschool student would recognize them. the fact you dont even understand context-free grammar just shows how fucking clueless you are of even BASIC concepts of reasoning or semantics or philosophy. any imbecile with a modicum of experience in reasoning would recognize that formulas are composed of strings composed of terms composed of ATOMS. without atomic morals you have NO fucking system of ethics because right and wrong cannot be extended beyond a fucking CONTINGENCY. how fucking retarded are you, its laughable how someone so obviously god damn clueless can continue to argue on something that is the fucking foundation of ethics. atom comes from the greek ATOMOS which literally fucking translates to INDIVISIBLE ie. can NOT be REDUCED ie. irreducible, you fucking clown, and a moral is a fucking atomic formula
>it wasn't a conjunction
this is how stupid you are. its CLEARLY a fucking conjunction of an atomic formula and a variable. you know how i fucking know? because it literally says fucking AND. do you even know what a well-formed formula is, you detestable larping clown
you have such a childish view on ethics that you honestly believe ethical arguments are based on your pathetic informal rhetorical semantics instead of actual provable formal arguments of right and wrong. yeah kys tard, everything you have said so far is provably false and idiotic, youve only succeeded in making an absolute fool of yourself by showing just how clueless you are. how about YOU fucking learn basic ethics, its obvious you have zero actual experience in any system of reason, formal or informal, at all lmao. the best part is youre too stupid to even realize just how stupid everything you said so far is, its literally like a random text generator. how about you make an actual formal argument instead of your asinine subjective opinions based on nothing and maybe then i can understand what the nonsense gibberish youre spewing means
>>17290
you sure wrote a lot. luckily for me, most of it has no substance so i can skip it
>you have NO fucking system of ethics because right and wrong cannot be extended beyond a fucking CONTINGENCY
i see nowhere that i implied otherwise. i am talking about the specification of axiom elements. this has nothing to do with modal metaphysics
>a moral is a fucking atomic formula
let us ignore the fact that it is only now that you use this term instead of terminological vagaries like "irreducible" (and then got mad at me for being confused by this). so what is an atomic formula? an atomic formula is a formula without any propositional structure. this means that it contains no connectives or quantifiers. in propositional logic, they only have a truth value. a propositional formula is constructed out of atomic formulas. you claimed that "morals" have to be atomic in order for us to use them to form arguments. this point is at face value (as your wording of things is so vague that there are a thousand possible interpretations) incorrect, as propositional formulas have truth values as well, so you can still form truth tables with them. pic rel is a very silly example. it is not strange to reason with propositional formulas in this way when trying to, for instance, derive new inference rules from previous ones. it also just isn't true that an axiomatic system needs to consist in atomic formulas. if our axiom system consisted in the proposition (A∧B), then through inference rules, we can deduce A and B individually being true anyways. another issue is that the apparent "primordiality" of atomic formulas concerning truth only occurs in propositional logic. in predicate logic, an atomic formula is just a predicate with free variables (e.g. P(x,y,z,...)). as the variables are free, this clearly can't have a truth value. furthermore, we can't just bound the variables and assign values to them and call it a day. we may illustrate this with an example.
let us denote G to be "is green", and C to be "is grass". it is clear that (∀x)(Gx) is false, while (∀x)(Cx -> Gx) is true.
of course, my preferred way of reasoning about things would be with types using polymorphic predicates. i am not going to spell out the full semantics for reasoning with types, as the tediousness of this discussion is already extensive. instead, i would suggest that you check out this article on the matter: https://medium.com/ontologik/a-solution-to-the-so-called-paradox-of-the-ravens-defdf1ff9b13 ... actually i suggest anyone in this thread read this guy's articles as they are an interesting read
>its CLEARLY a fucking conjunction of an atomic formula and a variable
once more, you are using jargon in a completely idiosyncratic non-technical way, and getting upset at me for failing to understand. as i explained, in formal logic, a conjunction is a logical connective between propositions (https://en.wikipedia.org/wiki/Logical_conjunction). you can have a conjunction of atomic formulas, but not one with a variable. moreover, "animal" is not a variable. a variable is either free (then it is just 'x') or it is bound (e.g. 'for all x', or 'there exists x'). it doesn't concern a term with any determinate identity. note that you were effectively trying to pass off the formula (Ay∧x) as well-formed. this is clearly false (please see the following for a recursive definition of a well-formed formula in predicate logic: https://www.cs.odu.edu/~toida/nerzic/level-a/logic/pred_logic/construction/wff_intro.html). if we are working in predicate logic, variables can not occur except after a predicate (e.g. Px) or after a quantifier (e.g. ∀x). variables are only specified by predicates after the fact. in this case, you would need a predicate A denoting 'x is an animal'. we then denote D the predicate 'don't kill x'. then "don't kill animals" could be analyzed as '(∀x)(Ax -> Dx)'.
of course, this is not the way i would prefer to analyze it, as it throws in an extra predicate. instead, i would formalize this relation as a type specification of a predicate with generic relata. this would look something like (∀x::animal)(Dx)
>is provable false and idiotic
you assert this while claiming you do not even understand what i wrote. this is plain arrogance
>asinine subjective opinions
what subjective opinions? that morally judgable actions involve a moral agent (otherwise, how is it an action if there is no one doing the action), and oftentimes a patient (you kill an entity. thus it is a verb that admits a grammatical object)? that in many systems of ethics (including, arguably, a christian one), we often specify the types of agents or patients for certain moral behaviours? what did i mention that was just subjective? that conjunctions, if we are using the terminology of logic, are logical connectives? if you are going to insult me, you should at least do so in a manner that is somewhat thoughtful. i am more upset that you are being extremely lazy about it and lacking much wit to boot
>>17298 (CONT)
you also haven't bothered responding to a very practical counter-example to your system (the problem at the heart being that it is just a crude modification of propositional logic, when we need predicate logic and ideally polymorphic predicates to understand things in a clear manner). as i said earlier:
>you have no way of distinguishing the statement "animals shouldn't kill people" from "people shouldn't kill animals". if you are applying weighting to nouns, then these two statements would have the same value
let's see shall we (also ignoring of course the idiosyncratic usage of the term AND for a moment)? under your scheme:
- "people shouldn't kill animals" deflates to (("people" AND "~kill") AND "animals")
- "animals shouldn't kill people" deflates to (("animals" AND "~kill") AND "people")
let us associate "~kill" with a weight of 1, "people" a weight of 1, and "animals" also a weight of 1. then for the first proposition we have:
(people AND ~kill) AND animals
people | ~kill | animals || value
   1   |   0   |    1    ||   0
   1   |   0   |    0    ||   0
   1   |   1   |    1    ||   1
   1   |   1   |    0    ||   0
   0   |   0   |    1    ||   0
   0   |   0   |    0    ||   0
   0   |   1   |    1    ||   0
   0   |   1   |    0    ||   0
for the second proposition:
people AND (~kill AND animals)
people | ~kill | animals || value
   1   |   0   |    1    ||   0
   1   |   0   |    0    ||   0
   1   |   1   |    1    ||   1
   1   |   1   |    0    ||   0
   0   |   0   |    1    ||   0
   0   |   0   |    0    ||   0
   0   |   1   |    1    ||   0
   0   |   1   |    0    ||   0
and would you look at that, they are surprisingly (except not at all. this was extremely obvious, but i am humouring you by making as much as i can painfully explicit) the same result. why is this a problem? well let us say that we have a system consisting of the statements "people can kill animals" and "animals should not kill people". the former can be rewritten as "~(people shouldn't kill animals)". however, by the above, "people shouldn't kill animals" and "animals shouldn't kill people" have the same values.
this would mean that the axiomatic system {"people can kill animals", "animals shouldn't kill people"} is unsatisfiable. this is clearly a problem, as it seems plausible to have asymmetries in rights and responsibilities between different moral agents. this can't be fixed formally without introducing predicates. you talk about turning things into "formal arguments", but the only reason why your formulation seems to lack flaws is that you are only thinking about the symbols in an informal material manner. then you project this fact onto me just because i didn't want to waste my time spoon feeding every detail
>>17292
'~kill AND people' is not a predicate you clown, another word for variable here is LITERAL, stupid piece of shit. youre just describing simple shit i can find on wikipedia with a bunch of misconstrued lies that someone with minimal fucking knowledge would reject
>(∀x)(Gx) is false
no its not, how the fuck did you evaluate the TRUTH of a fucking formula, its not an argument. if the domain IS grass then its fucking true, ALL grass IS green you fucking retard
>(∀x)(Cx -> Gx) is true
if its true then (C(pseud) -> G(pseud) AND pseud) is ALSO true, (pseud AND (C(pseud) -> G(pseud))) is ALSO true. the fucking variable of a predicate when evaluated is always fucking true, you can add it wherever you want you fucking moron, as any variable from a fucking universal can be used because its a fucking universal. if a single variable isnt true then the universal isnt fucking true you 60iq nigger. you literally dont know anything about what youre fucking talking about. anything that has to do with a universally bound variable implies that the variable is itself within the predicate, its completely valid, the only difference is no one writes it a second time. i only do this fucking redundancy for YOU because of how fucking stupid you have showed yourself to be, i have to make it as redundant as possible for an idiot like you because you cant even comprehend a simple fucking argument. you just want to willfully ignore context and create artificial problems wherever you can find them as a desperate attempt to undo the fact you fucking exposed yourself as a fucking larper that doesnt know shit when youre not given keywords to search on the internet. im not reading any more of your pseud shit or your stupid fucking articles or your cringe fucking blogposts. you are literally like a fucking ai, you just argue about stupid shit because you cant understand anything until you get given keyword search terms to start parroting articles from the internet like a fucking pseud. actually its really simple to settle this
because im getting sick of your cringe idiotic pseud faffing. show a proof for
(x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
which was what i derived for the argument earlier but without the proof because its so obvious its borderline common sense. i made it more appropriate for you, go on, prove it, it literally only takes 1 minute. inb4 nooo its not single letters its words, fucking change it to letters then, it makes no difference, but of course i already know you will just niggle an excuse to not do it because its missing your magic keyword search terms for you to even figure out what the fuck im talking about
>>17293
i ignored your stupid example because i said the same fucking thing 5 times already. you keep trying to change your stupid counter example in a pathetic attempt to damage control the fact you exposed yourself as a pseud playing off of other peoples ignorance to larp with, youre fucking pathetic. youve been changing your argument right from the fucking START of your stupid post nonstop
>what about dont step on grass, see it dont work
>oh actually i meant dont kill people and dont kill animals, see it dont work
>oh actually i meant animals shouldnt kill people and people shouldnt kill animals, see it dont work
literally killyourself. i dont give a fuck about evaluating the ethics of anything other than SELF you pathetic child, this whole thread was based on SELF judgment not judging other people. my whole post was against what you are incessantly pushing like a sore loser after ridiculing yourself. >>17286 was an argument AGAINST LOGIC, i said use fucking PREDEFINED VALUES you fucking piece of shit. i had to fucking explain how justification works in fucking ethics to show why you SHOULD NOT fucking bother because it takes too much reasoning. YOU are the one using anything other than self, YOU are the one using IDENTICAL logical statements and trying to use it as if it has any credence to what i proposed, YOU are the one that doesnt want to logically JUSTIFY
why killing a person is worse than an animal, YOU are the one thats not providing a logical justification which would rectify it, and YOU are a fucking clown creating artificial problems that have no relevance. give up you stupid shit, you got exposed as a larper, get over it. and its all wrong too, you literally cant do anything right, i said to use predefined values as a table you 60iq sub saharan african, how fucking stupid are you. look how fucking easy it is, everything you fucking say is trivial, heres literally 5 minutes of c to shut down your idiotic argument

char *pretable[8][8] = {
    { "self",   (char*)10  },
    { "pseud",  (char*)0   },
    { "people", (char*)9   },
    { "kill",   (char*)-10 }
};

int eval( char *A, char *B, char *C )
{
    int eval[3], i=0;
    char *term[3] = {A,B,C};
    for ( int t=0; t<3; t++ ) {
        while ( strcmp( term[t], pretable[i][0] )) i++;
        eval[t] = (int)(intptr_t)pretable[i][1];
        i=0;
    }
    *eval += eval[1] * eval[2];
    return *eval;
}

eval( "pseud", "kill", "people" ); = -90 bad
eval( "people", "kill", "pseud" ); = 9 good

oh wow look, an actual program that is literally doing what ive been fucking saying this whole fucking time, even with your stupid new arbitrary examples it makes no difference you pathetic child. whats next you fucking clown, nooooo i really meant "animals shouldnt kill people wearing clown shoes" "people shouldnt kill animals while wearing clown shoes" clown nigger. any argument outside self is fucking irrelevant because it cant be allowed to determine the value itself, whats the fucking point of making it predefined then, a table is not real reasoning so under no fucking circumstance should 'pseud' EVER be anything other than fucking 0. these are my fucking morals and i dont need to justify them just as YOU dont fucking justify your stupid ethical opinions without any justification or argument. go pick up peanuts in a circus, youre a joke
>>17303
if you are not going to read my post, don't respond
>no its not, how the fuck did you evaluate the TRUTH of a fucking formula, its not an argument. if the domain IS grass then its fucking true, ALL grass IS green you fucking retard
that is not what the formula says. reread what i wrote. '(∀x)(Gx)' means "all things are green". this is a clearly false statement
>the fucking variable of a predicate when evaluated is always fucking true
variables don't have truth values in formal logic. you are probably conflating variables in programming languages with those in formal logic. these are not the same thing
>(x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
this is gibberish, for the reasons i already said. you simply aren't writing well formed formulas
>youre just describing simple shit i can find on wikipedia
yes it is simple, so why are you failing to write well formed formulae? you don't even respond to the terminological and logical errors i point out. you just ignore them and dismiss them as wrong
>you keep trying to change your stupid counter example
i didn't change anything. i merely wrote it out into a table
>i dont give a fuck about evaluating the ethics of anything other than SELF you pathetic child
let us assume this is the case. even ignoring my private objections to such an approach (viz. morals are partially constitutive of a world model concerning how all moral agents should behave, and moral responsibility comes from the fact that one is within the class of moral agents), please note that you did not say this earlier, and are now lashing out at me. this also doesn't step around the pathological problem at hand (and for the explication later, i will simply stick to this case for simplicity). if a moral action involves multiple variables, we run into the same problem. let us say it is bad to wash a coat with hand soap, but it is not bad to wash it with detergent.
denote 'W(x,y)' to be the predicate 'you should not wash x with y', denote 'Hx' to be the predicate 'x is hand soap', denote 'Dx' to be the predicate 'x is detergent', and finally denote 'Cx' to be the predicate 'x is a coat'. then the two statements are respectively:
(∀x)(∀y)((Cx∧Hx) -> ~W(x,y))
(∀x)(∀y)((Cx∧Dx) -> W(x,y))
so help me here
>i had to fucking explain how justification works in fucking ethics to show why you SHOULD NOT fucking bother because it takes too much reasoning
so justification is bad
>YOU are the one that doesnt want to logically JUSTIFY why killing a person is worse than an animal
yet it is a fault of mine for not wanting to justify anything. as i stated earlier, what i am talking about has nothing to do with justifying anything. it is merely about the specification of the axioms
>heres literally 5 minutes of c to shut down your idiotic argument
this is ultimately the problem. you do not understand formal logic. your formalizations lack rigour, and when this causes confusion, you lash out at me for your sloppy formalization. there is another thing here: gradually, as i have pointed out the semantic flaws in your approach, you have, in an ad hoc way, introduced new semantics, and then insulted me when you modify your presented scheme in response to my concerns (you literally introduced an ad hoc pseudo-connective "AND" as a response to what i wrote. this is a blatant modification of your formal system that was not present before. to pretend otherwise is purely dishonest)... but ok, let us look at your c code. it doesn't differentiate between when a value is an agent or a patient. why is this important? because "a person killing a child" can be worse than both "a person killing an animal" and "an animal killing a child". you need values for BOTH arguments of a predicate.
to solve this, you would have to have written something like:

char *pretable[8][8][8] = {
    { "self",   (char*)1,   (char*)10 },
    { "pseud",  (char*)1,   (char*)0  },
    { "people", (char*)1,   (char*)9  },
    { "kill",   (char*)-10, (char*)1  }
};

int eval( char *A, char *B, char *C )
{
    int eval[4], i=0;
    char *term[3] = {A,B,C};
    while ( strcmp( term[0], pretable[i][0] )) i++;
    eval[0] = pretable[i][1];
    i=0;
    while ( strcmp( term[1], pretable[i][0] )) i++;
    eval[1] = pretable[i][1];
    i=0;
    while ( strcmp( term[2], pretable[i][0] )) i++;
    eval[2] = pretable[i][2];
    i=0;
    *eval += eval[0] * eval[1] * eval[2];
    return *eval;
}

>was an argument AGAINST LOGIC
you argued against logic by writing what was effectively a truth table? the conclusion neither follows, nor is it intelligible (just because something includes non-binary valuations, it does not follow that it is not logic). now, maybe by logic you mean something different. i am not you, so i can't immediately understand your vague half-baked statements. this again comes back to the fact that you word things in careless ways, and get upset when people do not understand your meaning
>your stupid new arbitrary examples
they are not arbitrary examples. i pointed out a very basic problem. you didn't understand it, and gave a poorly thought out solution. since the solution is poorly thought out, i can provide a new counter-example to demonstrate the fact that you didn't actually address what i was talking about (probably because you are speedreading)
>>17305
>(∀x)(∀y)((Cx∧Hx) -> ~W(x,y))
>(∀x)(∀y)((Cx∧Dx) -> W(x,y))
excuse me, i meant to write:
(∀x)(∀y)((Cx∧Hy) -> W(x,y))
(∀x)(∀y)((Cx∧Dy) -> ~W(x,y))
(with 'W(x,y)' being 'you should not wash x with y', hand soap should imply W and detergent ~W)
>>17305 (CONT)
this will hopefully be the final spoon feed. i will show you how to actually put all of this into formal logic (with polymorphic predicates). let us consider a basic moral action, for instance killing. it is a predicate that accepts 2 arguments. denote 'K(x,y)' to mean 'x kills y'. now the unqualified statement '~kill' doesn't bound any variables. it is an atomic formula in predicate logic. as i pointed out earlier, atomic formulas in predicate logic do not have truth values. clearly they can't. now if we want to try making it truth-apt, we should at least bound things. this gives us '(∀x)(∀y)K(x,y)', which means 'nothing should kill anything'. clearly such a statement is not adequate. we note that the statement "thou shalt not kill" is what i was originally responding to. a naive reading of this statement requires us to specify the type of the variable (the moral agent), which gets us '(∀x::person)(∀y)K(x,y)'. of course, such a reading runs into the first problem i brought up, namely that there are some things that frankly do not matter if we kill or not. what we need to do is to specify both the agent and the patient. this takes us to the example i brought up again regarding asymmetry between people and animals in regards to killing:
"people shouldn't kill animals" should be written as '(∀x::person)(∀y::animal)K(x,y)'
"animals shouldn't kill people" should be written as '(∀x::animal)(∀y::person)K(x,y)'
now we move to the problem of calculating moral values. this requires a valuation function (which in your c code corresponds to eval). to define such a system, we should rewrite statements of the form '(∀x::X)(∀y::Y)A(x,y)' as the ordered triple (A,X,Y), where X and Y are types, and A is a predicate. denote T the set of types, P the set of predicates, and S the set of ordered triples of the form (A,X,Y) where X and Y are types, and A is a predicate. we first define 2 predicate-type evaluations:
f:T -> Z, and g:T -> Z (where Z is the set of integers). we also need a predicate evaluation h:P -> Z. then we can define the valuation function as the following:
v: S -> Z; (A,X,Y) |-> h(A)*f(X)*g(Y)
... so the way to express this in formal logic is through the following interpretation: let 'p' denote the type 'person', let 'a' denote the type 'animal', and finally let 'K(x,y)' denote the predicate 'x kills y'. then the structure:

char *pretable[8][8][8] = {
    { "people", (char*)1,   (char*)10 },
    { "animal", (char*)10,  (char*)-1 },
    { "kill",   (char*)-10, (char*)1  }
};

eval( "people", "kill", "animal" ); = 9 good
eval( "animal", "kill", "people" ); = -900 bad

(note that i shifted the example to show subtleties in the interaction between arguments and weights) can be formalized as:
h := {(K,-10)}
f := {(p, 1), (a, 10)} (i.e. it is going to matter much more if an animal does things than if a person does)
g := {(p,10), (a, -1)} (i.e. it is going to matter much more if something happens to a person than an animal)
so that
v(K,p,a) = 9
v(K,a,p) = -900
one thing to note is that really our predicate-type valuations should be indexed per predicate to be more accurate, but i am already writing too much
>>17305 >>17308 >char *pretable[8][8][8] my mistake here as well. it just needs: char *pretable[8][8]
>>17305
hey retard, give me my proof of
(x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
your entire stance on logic can literally be discredited by this one proof. also a summary of your journey into cope
>"thou shall not kill" is too vague
<doesnt matter, its a fucking moral, its supposed to be, because its atomic
>nooo what if it steps on grass, its impossible to program
<use predefined values like i said from the start
>nooo because 'dont kill people' is not like 'dont kill animals'
<people and animals have a value now so just use a conjunction of the object with dont kill eg. '~kill AND people'
>nooo what does that even mean, its not true, and 'animals shouldnt kill people' should not be the same as 'people shouldnt kill animals'
<its a conjunction, u sound like a retard, give justification for this otherwise its not true, this is how to make one, see its hard so dont bother, do like i said from the start
>nooo that wasnt conjunction, you can only use it between my special words, also what does atomic mean, saying atomic is irreducible makes no sense
<kys you pseud, it is a conjunction, this is what atomic is, this is what a formula is, kys
>ha see you is wrong because using logic with identical arguments means its not biased
<kys pseud, that was my argument, youre a loser, look i can even prove everything you said about using tables is bullshit because i just did it and it works
>akctshually its still not programmable because its not real ethics its just using predefined values
i missed maybe 60% of the good stuff but ill finish it tomorrow. im honestly not reading anything until i get the simple 1 minute proof i asked for, literally 1 minute proof
>>17310
>give me my proof of (x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
i can't give a proof of gibberish. i already pointed out why it is gibberish. i am not going to reiterate it again
>nooo what if it steps on grass its impossible to program
>akctshually its still not programmable because its not real ethics its just using predefined values
i never claimed it was not programmable. all i pointed out was a minor caveat that was worth remembering, not a major problem. pics related. this has been said multiple times, but since you are not patient enough to actually read my posts, you still don't understand this
>nooo that wasnt conjunction, you can only use it between my special words also what does atomic mean saying atomic is irreducible makes no sense
my point is that you are using logical terminology in a way that confuses anyone who has ever studied logic. this is not too much of a problem as long as you don't go apeshit whenever someone misinterprets you. furthermore, i pointed out that your AND operator is done in a way that makes your semantics lack rigour. i gave the definition of conjunction that everyone but literally just you uses when talking about logic, and also proceeded to show the problems with your approach, which i have by now corrected in formal logic and in code. furthermore, "irreducible" isn't a word i have ever seen anyone use to describe ideas in logic. i have seen the term used in discussions of metaphysics as well, and there is a good argument that the usage there is also vague (e.g. wtf is the difference between reductive and emergent materialism? analytic philosophers throw the term "irreducible" around as though it will magically give you qualia). furthermore, you expect me to understand what you mean by a word by just giving a synonym of it. i understand what an atomic formula is, because it has a more precise definition that i gave above, which is an example of how to actually communicate a concept.
with a determinate conception that has an actual usage in formal logic, it is actually possible for someone to evaluate your claims. note how you also completely ignored my criticisms of the idea that moral propositions need to be atomic formulas
>youre a loser look i can even prove everything you said about using tables is bullshit because i just did it and it works
it didn't actually respond to my problem, because you didn't actually bother differentiating between the subject and object in a sentence. rather, you just evaluated the verb's weighting together with a noun's weighting as an object. i edited your code to make it work lol, and then formalized it using polymorphic predicate logic
>>17310
also, you literally admit you don't understand my posts. why do you then expect that your pseudo-solutions would solve what was just a mind-numbingly simple, barely consequential problem? you think what has been happening has been a "journey", but it has only been a journey for you, since you have been repeatedly giving me incomplete solutions to a very simple problem
>its hard so dont bother do like i said from the start
not only did you only make this point later down the line, it wasn't even a hard problem. it took me no time at all to solve it, and the solution can be generalized to an arbitrary number of arguments per predicate in a relatively simple manner. moreover, as i said, you can't just ignore the problem, and your reasoning for why you can is just wrong. one subject and one object was just a simple example that you were failing at. again, i wasn't introducing new features to the problem. they were all just examples extractable out of the same exact problem. i stated what the problem actually was, and you simply ignored it, and pretended all i am giving you are new artificial problems i simply pulled out of my arse. you are either a massive speedreader or have such a massive ego that you think since you don't understand it, it doesn't matter
Open file (411.50 KB 2224x1668 IMG_0321.PNG)
>>17310
>(x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
actually wait, i am going to humour you. you have no idea what any of the terms you are using mean, so i am just going to assume 'x', 'kill', 'y', and 'pseud' are all atomic formulas (not whatever you think they mean). i will assume '->', 'AND', and 'OR' are all logical connectives (actual logical connectives i.e. connecting two formulas). also your brackets are ambiguous too, which is another fuck up. but hey, let's be charitable and fix it to what i am assuming you meant. i am going to rewrite this statement as:
(X -> (K ∧ Y)) |- ((X -> ~(~K ∧ X)) ∨ P)
>until i get the simple 1 minute proof i asked, literally 1 minute proof
simple one minute proof? they aren't equivalent statements and there are FOUR variables? what are you smoking man. anyways here's your proof lol
>>17316 >FOUR variables *four atomic formulas
>>17316
lmao thats not a proof, this is what i already knew about you and why i said this proof would put an end to your pretentious pseudism, you cant actually do logic, i gave you a premise and the derived proof not an implication, thats how low your conception of a logic system is. ill ignore the disjunctive introduction of pseud at the end because that was just for you, DI isnt even allowed in this system anyway because of clown explosion like in your 'correct' basic bitch system

1. x -> (k AND y)    [P]
2. | x               [H]
3. | (k AND y)       [MP 1,2]
4. | | (~k AND x)    [H]
5. | | y             [CE 3]
6. | | (~k AND y)    [CI 1,5]
7. | | (k AND y)     [R]
8. | ~(~k AND x)     [PC 4-7]
9. x -> ~(~k AND x)  [CP 2-8]

i can even go further and get x -> (~~k OR ~x) which, if double negation is allowed, is the justification behind retributive justice. all your rambling on logic looks idiotic now doesnt it, not only do you not know how to actually use logic, you cant even distinguish anything that isnt what youre copying from pseudpedia and just assume its wrong. i already explained this is a system of reasoning in ETHICS which uses logic proofs AND justification, specifically linear logic used with a hypothetical postulate that is of an indeterminate case implicit with a moral evaluation, thats why it HAS to be fucking non-contingent and conjunctive. basic bitch is completely useless in anything other than discrete math, which makes everything you said laughable, if you could actually DO logic then you would have understood what i was doing instead of rambling about a completely irrelevant basic bitch system that doesnt even work with morals because they are always contradictory. what this is doing is basically x->(?) where (?) is the hypothetical postulate which is the moral evaluation, and by implicit relevancy it can be used in subproofs of x that have the same relevancy to x as y to conclude how they morally relate WITHIN the subproof of x, which is how to PREVENT every fucking mistake you made, because this is literally impossible to do in basic bitch because it can only determine whether the moral itself is true or false, not if its relationally true to a specific case. for a stupid mind it might be better understood in googoogaga like 'if someone did this and that to someone, how does this and that relate to THEM'

your stupid fucking ramblings about predicate logic are pointless too because the principle is exactly the same in a proof

1. ∀x(kill(x))                 [P]
2. | kill(pseud,nig,fag,tard)  [UI]
   | ...

instantiation of 1 already assumes ( kill(the formula of this) AND pseud AND nig AND etc. ), i already fucking gave this as an example, it makes no difference other than allow for an even worse UNIVERSAL contradiction, and so any moral proof using it would imply the ENTIRE statement is wrong, not just the subject, like in your naive pseud example (∀x::person)(∀y::animal)K(x,y). first THIS is completely fucking ambiguous because YOU put no fucking precedence, you literally made absolute fucking nonsense, either you wrote "ALL people do not kill ALL animals" or the literal meaning "for every person x and every animal y, x does not kill y", in which kill(a,b), kill(b,a), kill(a,a) are all completely different. if everyone kills a different animal or all people kill all but 1 animal then its still a true statement, its ridiculous because it really doesnt mean fucking anything. its hilarious that you criticize my intentional redundancies when i make non-ambiguous formulae and you come up with this fucking bullshit, good job idiot. even assuming its what you falsely claim, then if 'there exists a person that is allowed to kill an animal' eg. farmer;

1. (∀x::person)(∀y::animal)K(x,y)     [P]
2. (∃x::person)(∃y::animal)~(K(x,y))  [P]
3. (∃x::person)~(∀y::animal)(K(x,y))  [DMQ]
4. ~((∀x::person)(∀y::animal)K(x,y))  [DMQ] -- contradicts 1

all you did was create a paradox for killing an animal, either kill is allowed for everyone or its not allowed for anyone, wow great fucking logic genius, i guess you could say the problem with this is, hmmm.... its not specific? like you were stupidly arguing about this entire fucking time. you are sweating so hard that youre just grasping at straws because you still cant fucking understand anything i say, you fucking ramble on about quantifiers and then come up with this retard bullshit, it only makes sense in MY supposedly wrong system of reasoning which you cant understand. if you cant fucking understand the proof i showed, youre literally just trying to do the exact same thing i did but with the WRONG system that cant handle contradictions, which are by definition already part of your assertion, why do you think we dont use true/false anymore in ethics you retard
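For what it's worth, the propositional entailment the two posters are fighting over, (x -> (k AND y)) |- (x -> ~(~k AND x)), can be checked semantically by brute force over all valuations. A minimal Python sketch, using only classical two-valued semantics (not either poster's system):

```python
from itertools import product

def premise(x, k, y):
    # x -> (k AND y)
    return (not x) or (k and y)

def conclusion(x, k):
    # x -> ~(~k AND x)
    return (not x) or not ((not k) and x)

# every valuation satisfying the premise must satisfy the conclusion
entailment_holds = all(conclusion(x, k)
                       for x, k, y in product([False, True], repeat=3)
                       if premise(x, k, y))
print(entailment_holds)  # True
```

This only confirms the semantic fact; it says nothing about which deduction system (natural deduction vs. truth table) is the "right" way to show it, which is the actual dispute.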
>>17312
>i edited your code to make it work
lol your modification is fucking trivial, its fucking garbage because you havent understood the fucking method behind the evaluation, which i fucking explained 6 times now. the value IS the god damn proportionality, it doesnt need another value that cant already be deduced by the value of itself in relation to everything else

[ (( argument AND pseud ) + ( nausea AND argument )) <= ( kill AND pseud ) ]

is the moral justification for killing you for your stupid fucking argument that you cant even keep consistent, a conclusion [ -999 <= -463 ] would be true and acted on as a JUSTIFIED WRONG to kill you for your stupid ass fucking arguments and irrelevant nonsense, which is a worse evil than ending your pathetic existence by a factor of 2, ie. ( eval(a,d) + eval(f,e) ) <= eval(t,s) is true if good and false if bad

what everyone was talking about before you decided to shit up the thread with your pathetic arguments was how to evaluate things that are not fucking obvious by their value

[ ( cat AND dirty ) <= (( cat AND dishwasher ) + ( cat AND clean )) ]

which is a REAL categorical error, not this fucking nonsense youve been spewing this entire time. this would be true if dirty and clean had equal opposite values, and theres no implicit wrongness that can be deduced without guessing what the outcomes might be, which is what the fucking associative table is for. dishwasher has no value so 'cat','dishwasher' is a neutral evaluation, if it has negative outcomes it would enter the negative association for this conjunction only if the evaluation of the outcomes is drastically different from its intrinsic conjunctive value, a form of quality of judgment check eg.

[ (( cat AND dead ) + previousEval ) <= ( previousEval * threshold ) ]

if its lower or higher than the acceptable then it was an incorrect judgment and should be saved as an association. you literally provided absolutely nothing to the discussion compared to everyone else
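For readers trying to follow the proportionality scheme being described: a justification is the claim that the summed wrongs being answered outweigh the wrong of the response. A toy Python sketch of that shape; every pairing and number below is hypothetical (only -999 <= -463 comes from the post), so this is illustration, not the poster's actual C:

```python
# invented association values; the post only fixes the totals -999 and -463
values = {
    ("argument", "pseud"): -500,
    ("nausea", "argument"): -499,
    ("kill", "pseud"): -463,
}

def eval_pair(a, b):
    # concurrence of two terms looked up in the association table;
    # unknown pairs are morally neutral (value 0)
    return values.get((a, b), 0)

def justified(wrongs, response):
    # a wrong response is justified if the wrongs it answers outweigh it
    return sum(eval_pair(a, b) for a, b in wrongs) <= eval_pair(*response)

print(justified([("argument", "pseud"), ("nausea", "argument")],
                ("kill", "pseud")))  # -999 <= -463 -> True
```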
>>17318
>lmao thats not a proof
a truth table is a valid way to prove something in propositional logic. you just said 'proof' as though that is meant to specify anything. you have been using truth tables so i just assumed that is what you wanted me to do
>hypothetical postulate that is of an indeterminate case implicit with a moral evaluation thats why it HAS to be fucking non contingent and conjunctive
note that it is only now that you have brought this up. this is not what you wrote earlier. ignoring that, you don't need to postulate atomic formulas. any subformula in an antecedent can be one of your postulates
>if everyone kills a different animal or all people kill all but 1 animal then its still a true statement its ridiculous because it really doesnt mean fucking anything
this is incorrect.
~(∀x::person)(∀y::animal)K(x,y)
= (~∀x::person)(∀y::animal)K(x,y)
= (∃x::person)~(∀y::animal)K(x,y)
= (∃x::person)(∃y::animal)~K(x,y)
in other words, the statement is false if at least one person kills at least one animal: the entire rule is violated if even a single person kills even a single animal. this is completely intentional, as a law should apply to everyone equally
>>17319
>[( cat AND dirty ) <= (( cat AND dishwasher ) + ( cat AND clean ))]
you never did this. you have only now introduced a new operator '+' to fix your semantics. this is simply disingenuous. it also doesn't solve the issue of systematicity. what i wrote makes explicit the argument structure of each predicate so you can actually work with everything mindfully. my formalization of valuation shows how this is done. but whatever, at least now what you wrote gives us an idea of how to deal with predicates that have multiple arguments. congrats, you have finally given a solution to what i have been talking about i guess
>you literally provided absolutely nothing to the discussion compared to everyone else
i originally only wrote 2 sentences. i have never pretended my point was worth more than that. i keep saying that it was a nearly trivial remark that is not that deep. notice how i wrote a ton of other stuff earlier in the thread. what you are shitting on me for is a tiny point i barely even care about
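Incidentally, the quantifier-negation chain being argued about can be sanity-checked by exhaustive enumeration over tiny finite domains. A throwaway Python sketch (the two-element domains are arbitrary, just enough to exercise the equivalence over all 16 possible relations):

```python
from itertools import product

PEOPLE, ANIMALS = range(2), range(2)

def negated_universal(K):
    # ~ forall x forall y K(x,y)
    return not all(K[p, a] for p in PEOPLE for a in ANIMALS)

def exists_counterexample(K):
    # exists x exists y ~K(x,y)
    return any(not K[p, a] for p in PEOPLE for a in ANIMALS)

def equivalence_holds():
    # enumerate every possible kill-relation over the tiny domains
    for bits in product([False, True], repeat=len(PEOPLE) * len(ANIMALS)):
        K = {(p, a): bits[p * len(ANIMALS) + a]
             for p in PEOPLE for a in ANIMALS}
        if negated_universal(K) != exists_counterexample(K):
            return False
    return True

print(equivalence_holds())  # True: the two readings agree on all 16 relations
```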
>>17320
>you have been using truth tables so i just assumed that is what you wanted me to do
also i guess that is not the whole story. i don't remember my class on symbolic logic very well, so i honestly completely forgot about the hypothetical postulate thing too, which was another reason why i assumed you didnt want a deduction. that is entirely my fault. i can admit that. still, this doesn't change the fact that in all of your ramblings about justifications you only ever provided truth tables. how was i supposed to know this is what you meant? you never mentioned this either. all you wrote was "formulas are composed of strings composed of terms composed of ATOMS without atomic morals"
>>17320 >ignoring that, you don't need to postulate atomic formulas. any subformula in an antecedent can be one of your postulates also another thing. what does any of this have to do with anything anyways? you say a moral is an atomic formula. ok, sure, an action is a predicate. type specification doesn't detract from this? you can still use universal instantiation if we have polymorphic predicate logic. so what was the point? did you have one when you were rambling about "atomicity" and "irreducibility"? actually now that i think about this, why are you even criticizing me about justification? it isn't even particularly relevant, since all that we were really talking about is the calculation of moral valuations. i formalized this too just fine by considering a basic 3-tuple and defining a function over the set of such tuples
>>17320 another minor caveat is that it also doesn't give us any clear way to have much control on how subject-object bindings change with different predicates. with my formalization, as mentioned earlier, this can be easily solved by indexing the predicate-type valuation functions to a specific predicate. maybe you can argue this is not too important, or you can find a new ad hoc way to deal with this issue. whatever, it is probably not too big of a deal (my basic problem was already not that deep either, as my concern was more about ensuring robustness, however this issue matters even less probably)
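For readers following along, the "3-tuple plus a valuation function" idea from the previous posts, with valuations indexed to a specific predicate so subject-object bindings can differ per predicate, might look something like the following. Every field name and number here is my guess at what the poster meant, so treat it purely as illustration:

```python
from typing import Dict, Tuple

# hypothetical (subject, predicate, object) triples with invented values
Triple = Tuple[str, str, str]

valuation: Dict[Triple, int] = {
    ("person", "kill", "person"): -100,
    ("person", "kill", "animal"): -20,
    ("farmer", "kill", "animal"): 0,   # a predicate-indexed exception
}

def value(t: Triple, default: int = 0) -> int:
    # more specific subject bindings take precedence over the generic one
    s, p, o = t
    for key in ((s, p, o), ("person", p, o)):
        if key in valuation:
            return valuation[key]
    return default

print(value(("farmer", "kill", "animal")),
      value(("butcher", "kill", "animal")))  # 0 -20
```

The point of the indexing is just that `("farmer", "kill", "animal")` can be scored differently from the generic `("person", "kill", "animal")` without rewriting the rest of the table.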
>>17320
this is how fucking stupid you are, a TRUTH table is not a fucking PROOF, its not even a fucking implication, its two completely separate formulas that are not connected other than the fact one is DERIVED by PROOF from the other. YOU DONT KNOW HOW PROOF WORKS because you DONT know logic, all you know is how to keyword search terms on the internet, and its been conclusively PROVEN by your complete lack of even minimal understanding of all the fucking nonsense you say. thats why you just copied my contradiction
2. (∃x::person)(∃y::animal)~(K(x,y)) [P]
which i used to show your clown shit doesnt work for moral JUSTIFICATION, not to show its consistently true with kill you stupid clown, you dont even know what i did to fucking move the negation, feel free to explain it you stupid clown
>in other words the statement is wrong if at least one person kills at least one animal. in other words, the entire rule is violated if even a single person kills even a single animal. this is completely intentional, as a law should apply to everyone equally
you literally dont know how fucking logic works, which is why you cant even see that
'all people kill an animal' (∀x::person)(∃y::animal)~(K(x,y)) does NOT contradict (∀x::person)(∀y::animal)K(x,y)
'a person kills all animals' (∃x::person)(∀y::animal)~(K(x,y)) does NOT contradict (∀x::person)(∀y::animal)K(x,y)
because what you wrote is NOT 'people shouldn't kill animals' like you fucking claim, you dont fucking understand anything youre saying, you fucking wrote ALL people dont kill ALL animals, which is a completely pointless assertion
>you have only now introduced a new operator '+' to fix your
its still an AND you fucking clown, theres multiplicative-AND and additive-AND which i used in the correct context like i said, you just dont know anything about what youre talking about and are desperate to undo the fact you unintentionally gave me the proof i asked for in >>17303
>actually its really simple to settle this because im getting sick of your cringe idiotic pseud faffing show a proof for (x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
youre a fucking pseud, youre a fucking larper, youre a fucking clown
>>17328
>TRUTH table is not a fucking PROOF
whatever helps you sleep at night
>its two completely separate formulas that are not connected other than the fact one is DERIVED by PROOF from the other
look at the truth table. note how i changed (X -> (K ∧ Y)) |- ((X -> ~(~K ∧ X)) ∨ P) to: (X -> (K ∧ Y)) -> ((X -> ~(~K ∧ X)) ∨ P)
>thats why you just copied my contradiction 2. (∃x::person)(∃y::animal)~(K(x,y)) [P]
i am aware that i copied it. however, i am just pointing out that your interpretation of the statement was wrong
>'all people kill an animal' (∀x::person)(∃y::animal)~(K(x,y))
>does NOT contradict (∀x::person)(∀y::animal)K(x,y)
yes it does? i don't remember all of the deductive rules, but:
1. (∀x::person)(∀y::animal)K(x,y)       [P]
2. (∀x::person)(∃y::animal)~(K(x,y))    [P]
3. (∃y::animal)~(K(c,y))                [UI]
4. (∃x::person)(∃y::animal)~(K(x,y))    [EG]
5. (∃x::person)~(∀y::animal)(K(x,y))    [DMQ?]
6. ~((∀x::person)(∀y::animal)(K(x,y)))  [DMQ?]
which contradicts proposition 1. similarly:
1. (∀x::person)(∀y::animal)K(x,y)       [P]
2. (∃x::person)(∀y::animal)~(K(x,y))    [P]
3. (∀y::animal)~(K(c,y))                [EI]
4. ~(K(c,d))                            [UI]
5. (∃y::animal)~(K(c,y))                [EG]
6. (∃x::person)(∃y::animal)~(K(x,y))    [EG]
7. (∃x::person)~(∀y::animal)(K(x,y))    [DMQ?]
8. ~((∀x::person)(∀y::animal)(K(x,y)))  [DMQ?]
>its still an AND you fucking clown theres multiplicative-AND and additive-AND which i used in the correct context
you are making stuff up. ive never heard of someone distinguishing between a multiplicative and additive "AND"
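The first claimed contradiction can also be brute-forced over a tiny finite model: no relation over two people and two animals satisfies "everyone kills every animal" and "everyone spares at least one animal" at the same time. A quick Python sketch (domain sizes chosen arbitrarily):

```python
from itertools import product

PEOPLE, ANIMALS = range(2), range(2)

def satisfies_both(K):
    everyone_kills_everything = all(K[p, a]
                                    for p in PEOPLE for a in ANIMALS)
    everyone_spares_something = all(any(not K[p, a] for a in ANIMALS)
                                    for p in PEOPLE)
    return everyone_kills_everything and everyone_spares_something

def premises_consistent():
    # search every possible kill-relation for a model of both premises
    for bits in product([False, True], repeat=len(PEOPLE) * len(ANIMALS)):
        K = {(p, a): bits[p * len(ANIMALS) + a]
             for p in PEOPLE for a in ANIMALS}
        if satisfies_both(K):
            return True
    return False

print(premises_consistent())  # False: the premises really do contradict
```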
>>17330
>ive never heard of someone distinguishing between a multiplicative and additive "AND"
actually, even ignoring this confusing idiosyncratic terminology, the '+' isn't even in your c code. like? the eval function takes 3 arguments. how are we supposed to combine them? do you want to do 'eval(...) * eval(...)' or 'eval(...) + eval(...)'? either way this would be completely different than running an eval that takes "nested" arguments. furthermore, what even are the arguments anymore? should eval still take 3 arguments? i mean the original eval function didn't even do anything with one of the arguments, which was also strange
>>17318
actually wait a minute
>4. | |(~k AND x) [H]
you are literally hypothesizing something that isn't an atomic formula. uhh? so how is this supposed to demonstrate your point again? are you ever going to stop bringing up new stuff (pretending this was what you meant all along) or outright contradicting yourself?
>>17329
this is not a fucking proof you larping pseud, you are so desperate to camouflage the fact you had no fucking idea what a proof was until now, MAGICALLY after i fucking gave you all the keyword search terms you needed to fucking google this shit, like i said 10 posts prior
>actually its really simple to settle this because im getting sick of your cringe idiotic pseud faffing show a proof for (x->(kill AND y)) |- (x -> ~( ~kill AND x) OR pseud)
.... but of course i already knew you would just niggle an excuse to not do it because its missing your magic keyword search terms for you to even figure out what the fuck im talking about
and yes, you fucking proved my fucking point you clown, that your genius fucking statement is a completely USELESS assertion because 'ALL people do not kill ALL animals' can never be fucking true for ANYONE as long as there exists a SINGLE person who kills animals
>the entire rule is violated if even a single person kills even a single animal. this is completely intentional, as a law should apply to everyone equally
you just showed that if its not true for ANYONE its not true for EVERYONE, because the """"""law"""""" is FALSE as a WHOLE and so cannot apply to anyone you fucking 5iq pseud
>you are making stuff up. ive never heard of someone distinguishing between a multiplicative and additive "AND"
because you literally dont know ANYTHING about logic as PROVEN by >>17316
go on, keep fucking proving how you literally just keyword search everything that comes up, fucking pseud, i know all it takes is feeding you fucking keyword search terms and then you go from
>you wrong it doesnt work it dont exist
to
>oh akschually it is like this... *5000 word post*
youre fucking pathetic, every single argument you make has just been a worn out egotistical cope based on the fact that you fucking exposed yourself as a larping pseud
>>17335 its a redundancy you can fucking remove x and use ~k[H] it makes no difference to the fucking proof, like i said its literally a fucking 1 minute proof that even a clown could do
>>17337 ~(k V x) works too if you think youre going to fucking niggle about it like a fucking pseud
>>17336
>this is not a fucking proof you larping pseud
keep coping lol
>you are so desperate to camouflage the fact you had no fucking idea what a proof was until now MAGICALLY
ive taken a course in symbolic logic, and also mathematical logic. i am sorry i don't remember every detail about deductions in symbolic logic, but it is simply something i haven't worked with for a while. pic rel is the grade i got for this course. i also majored in mathematics
>you just showed that if its not true for ANYONE its not true for EVERYONE because the """"""law"""""" is FALSE as a WHOLE and so cannot apply to anyone you fucking 5iq pseud
there is no problem with this. this goes back to the categorical imperative. quoting kant:
<When I am in a tight spot, may I not make a promise with the intention of not keeping it?
<…I ask myself: Would I be content with it if my maxim (of getting myself out of embarrassment through an untruthful promise) should be valid as a universal law (for myself as well as for others), and would I be able to say to myself that anyone may make an untruthful promise when he finds himself in embarrassment which he cannot get out of in any other way?
if you can't raise a maxim to be a universal law that EVERYONE follows in some possible world, then it is an irrational maxim
>then you go from
>>you wrong it doesnt work it dont exist
>to
>>oh akschually it is like this... *5000 word post*
ive said this numerous times. almost every time ive worked with your confusing formalism, i preface it by saying that i am intentionally ignoring how bad it is. furthermore, i only seriously engage with a claim when i actually understand what you were trying to communicate
>>17337
it still doesn't change the fact that it doesn't need to be atomic. you just need to assume the antecedent. here is a basic example where you need to do this in fact:
1. (x AND b) -> (k AND y)     [P]
2. | (x AND b)                [H]
3. | x                        [CE 2]
4. | (k AND y)                [MP 1,2]
5. | | (~k AND x)             [H]
6. | | y                      [CE 4]
7. | | (~k AND y)             [CI 5,6]
8. | | (k AND y)              [R 4]
9. | ~(~k AND x)              [PC 5-8]
10. (x AND b) -> ~(~k AND x)  [CP 2-9]
>>17337 >follows in some possible world this is a very important point. with physical or mathematical law, it is clear that if you have one counter-example then the entire statement fails. for instance, to disprove the statement "every function that is continuous is differentiable", i just need to provide a single example of the function "f(x) = |x|". however, morality is an idealization of reality. yes, maybe in the actual world, people violate moral laws all the time, but this doesn't matter
Open file (3.10 MB 1277x7580 autism.png)
Will try to force this argument forward and maybe reduce misunderstandings.
>>17240
>"dont kill people" is identical to "dont kill" AND "people"
This is completely wrong. What is AND? A logical operator. A logical operator takes truth values to truth values. "people" is not a truth value, it is a set of objects (or maybe a type idk). This is because it is just a noun that is not "True" or "False". "don't kill" is meaningless unless you implicitly assume some object, which would make this pointless. The fact there is the word "people" nearby in the text doesn't mean anything. But this is just english grammar.

Let's say we have R(x, y) meaning "x should not kill y". If "Don't kill people." is intended as a moral directive for agent A, then we can write it as (forall y in people)R(A, y).

Now let's look at the disgusting C from >>17303 (the use of a multidimensional array and (char*) casts is ridiculous). The eval function takes 3 strings to a value. It doesn't actually determine what specific acts of killing are bad, it leaves that to the interpretation of those strings. So it is a red herring, because how to logically interpret a specific sentence like "Don't kill people." is the whole issue. And what is the logical interpretation you propose >>17336 ? If you have not even been addressing the same issue then lol.

But if not, consider trying to express something like the NAP. Whether R expresses a moral directive or facts about intents idk, this is too weird:
(forall x in NAP enjoyers)(forall y in people)(R(y, x) -> R(x, y))
How could this be represented in your system? There is no way to tell what ~kill or kill applies to, what the people or ~people applies to, or anything like that. It depends on an implicit context of there only being a single statement with a specific structure, like A B C from the code. I don't get why anyone would do this. Please explain yourself or clear your own confusion.
>>17286
>morals are by definition tautologies
No.
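The typed reading can be made concrete over finite domains. A toy Python sketch, where the domains, the agent, and the membership of R are all invented for illustration (R is modeled extensionally as the set of pairs for which "x should not kill y" holds):

```python
# hypothetical finite domains for the typed reading
people = {"alice", "bob", "carol"}
nap_enjoyers = {"alice", "bob"}

# R(x, y): "x should not kill y", here just a set of pairs
R = {(x, y) for x in people for y in people}  # everyone refrains from everyone

# "Don't kill people." as a directive for agent A: (forall y in people) R(A, y)
A = "alice"
directive_holds = all((A, y) in R for y in people)

# the NAP-style rule: for all x in nap_enjoyers, all y in people, R(y,x) -> R(x,y)
nap_holds = all(((y, x) not in R) or ((x, y) in R)
                for x in nap_enjoyers for y in people)

print(directive_holds, nap_holds)  # True True
```

The point is that once the argument structure of R is explicit, both the simple directive and the conditional NAP rule are expressible in the same vocabulary, which is exactly what the single-statement A B C scheme cannot do.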
>>17340
>morality is an idealization of reality
But morality must also provide a direction (towards such an idealization), if we understand morality like this. Maybe something like fuzzy logic can be used on the law, but exactly how the weightings are done is extra information. Any doubt makes a bare idealization useless. This is why this kind of "morality" must have a real adaptive control system behind it. There is also no actual constraint put on the real weightings by the idealization, though the illusion of necessity is important. The real necessity is not found in the obvious place.
(((morality)))
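To make the fuzzy-logic suggestion concrete: a law can be read as a graded constraint whose violation degree is computed with a t-norm, with the weightings supplied from outside, which is exactly the "extra information" in question. A toy Python sketch (the membership degrees and the choice of product t-norm are arbitrary assumptions, not a proposal):

```python
def fuzzy_and(a, b):
    # product t-norm, one standard choice among several
    return a * b

def fuzzy_not(a):
    return 1.0 - a

# hypothetical degrees: how much each act counts as "killing" / "a person"
acts = [(0.9, 1.0), (0.2, 0.8), (0.0, 1.0)]

# violation degree of "don't kill people" per act
violations = [fuzzy_and(k, p) for k, p in acts]

# worst case drives the overall judgment of compliance with the law
compliance = fuzzy_not(max(violations))
print(round(compliance, 2))
```

Note that nothing in the idealized law itself fixes the degrees or the t-norm; those weightings have to come from the adaptive system behind it, which is the point being made above.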
>>17342 >how the weightings are done is extra information if i understand what you are getting at, that is partly why i defined the moral valuation here: >>17308. maybe the valuation needs to be modified as well over time too but i have already spent too much time on this >>17341 >What is AND? A logical operator they don't understand that no one uses conjunctions this way, and will get mad at you for not immediately intuiting their meme usage. just assume they meant "people" to be another atomic formula. yes that barely makes sense, and their semantics is a vague mess without types, but again whatever
>>17341
do you honestly not fucking understand how a conjunction works, read the fucking post >>17199 before you make a fool of yourself like the other clown, youre talking about a 10 line piece of code meant as a fucking example against a non-argument, stop pretending like it is anything else
>>17348
ive already proven you dont know anything about logic systems, you can give up now you fucking pseud
>>17349 Source for this idea of what conjunction is? Also how do you know whether an AND corresponds to * or + or is it ambiguous.
>>17349 >ive already proven you dont know anything about logic systems you can give up now you fucking pseud i have shut down all of your arguments, so now all you have is one last ad hominem. anyone in their right mind would see that you are reaching hard right now >>17350 don't bother. he is just introducing new stuff and is going to pretend he hasn't. the picture simply isn't transferable to his c code. he knows it is impossible to get anything approximating his pseudo-formalization without completely rewriting the eval function or writing something like 'eval(...) * eval(...)'. either way he needs to rewrite the logic of the program to make it work. the semantics changed, but he will never admit it as it is his final disingenuous attempt to save face after calling people "retards" so frivolously
>>17349 >meant as a fucking example against a non argument stop pretending like it is anything else let's assume you aren't being disingenuous and you meant this all along (despite nothing in the thread so far indicating this). so you are saying the c code you wrote didn't solve the problem i asked of you, and when i fixed it to fit what i wanted, you insult me again! it's either you are disingenuous or are admitting to giving me piss poor solutions and getting mad when people aren't satisfied. choose one and ONLY one tell me, what planet do you come from that you give me "examples" that do not actually illustrate a solution to the problem at hand?
>>17350
the fuck do you mean source, conjunction IS conjunction, im not fucking using it any differently than what it literally fucking means, if R is true and Y is true then R AND Y is true
occurrence of R AND Y is additive ie. independent occurrence
concurrence of R AND Y is multiplicative ie. simultaneous occurrence
same TRUTH different RELATION
eg. eval( kill AND man ) is a concurrence
eval(..) AND eval(..) AND eval(..) .... are occurrences used to calculate a justification for something, like ive explained 8 times now
>>17351
i call you a retard because you are literally mentally disabled with <50iq, nice projection, i already proved youre nothing more than a larping pseud
>>17355 >the fuck do you mean source point to ANYONE else who has ever used this idea elsewhere. a specific source, so we know you aren't making new things up >eval( kill AND man ) >eval(..) AND eval(..) AND eval(..) your picture did not show this. if it did, the other person wouldn't have asked if the two are ambiguous. it still doesn't translate to the code you wrote as well >i already proved youre nothing more than a larping pseud so you admit the example you gave didn't work
>>17356 was already explained days ago in >>17319 you sweaty fucking pseud
>>17357
so no source?
>was already explained days ago
please keep in mind i already know what you were trying to show. pic rel is why the other person got confused. based on it:
R*Y = R AND Y = R+Y
it isn't clear to some random person what the equivalence between "R*Y" and "R+Y" means. you didn't clarify properly. it is only after he expressed his continued confusion that you wrote "same TRUTH different RELATION" and the evals
oh hey wait, let me give an example of at least giving even a single source for word usage. plenty of people use "prove" in the context of truth tables. so as you can see, my language use is not even that idiosyncratic actually a common thing people do in logic is to prove basic tautologies using truth tables as it is impossible to demonstrate them without any pre-established inference rules (e.g. here: https://web.mit.edu/neboat/Public/6.042/proofs.pdf)
>>17355 >the fuck do you mean source >im not fucking using it any differently than what it literally fucking means The normal meaning is it's a binary operator on booleans. Nothing more. >occurrence of R AND Y is additive ie. independent occurrence >concurrence of R AND Y is multiplicative ie. simultaneous occurrence >same TRUTH different RELATION If there are different relations for the same truth then the relations are redundant. >independent occurrence >simultaneous occurrence There is nothing in logical conjunction about time or concurrence. If you interpret relations as saying more than what is true about them you are wrong. If you want extra structure you can specify R and Y. Again, give a source for what you mean by conjunction. This isn't logic this is something else. Also define "=". Give an actual parser for eval(). What is the grammar of this system? Probably no point though.
>>17363
>The normal meaning is it's a binary operator on booleans. Nothing more.
yes this is what conjunction means, this is what it ALWAYS means. do you still need a source for when i fucking say conjunction
i should have separated them, i wasnt saying TRUTH = VALUE, i was saying the same TRUTH for X has different VALUES based on HOW they occur, which is what the evaluation is for. eval() is a function of a simultaneous occurrence which is still logically AND but NUMERICALLY multiplicative, im using numbers not truth values, if it isnt true it doesnt get evaluated, how stupid are you
>What is the grammar of this system
its purely functional as is already obvious to anyone with above room temperature iq eg.
eval( eval(k,r), eval(e,y) ) + eval(w,y)
is something that is simultaneously k,r,e,y and occurs with something that is w,y. this is a SPECIFIC situation, which is why i wrote it logically with AND, to show how it works as a SYSTEM for something more complex than trivial statements that doesnt rely on a specific statement to be hard coded, which is what i wrote in the original post weeks ago eg.

people on fire
= eval(people, fire)
people on fire and screaming
= eval(eval(people, fire), screaming)
people on fire screaming and jumping up and down while others are laughing
= eval(eval(eval(people, fire), screaming), jumping) + eval(people, laughing)
pretend this evaluates to something like -683
a movie with people on fire screaming and jumping up and down while others are laughing
= eval(eval(eval(eval(people, fire), screaming), jumping) + eval(people, laughing), movie)

if movie has a value of 0 then you just get 0 because its all happening in something that has no moral significance, so its a literal case of 'who cares'. its so idiotically simple that a clinically retarded dementia patient can figure it out, the fact anyone can even argue about something so trivial just shows how totally inept this pseud board is, nothing but low iq clowns, larpers and pseuds wasting my time, or a more accurate description
eval(eval(clown, larper), pseud) * (#posters - 3)
meta chobi and some anon are excluded but thats not enough to justify staying. goodbye pinhead, i have better things to do, make your own system ( you cant, youre too stupid )
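For anyone who wants to see the nested eval() grammar actually run: a minimal Python rendition, with the multiplicative concurrence reading made explicit. Every numeric weight below is invented; the only things taken from the post are the nesting shape and the claim that `movie` has value 0 and annihilates the whole expression (the function is named `ev` to avoid shadowing Python's builtin `eval`):

```python
# invented moral weights, purely to make the example executable
values = {"people": 1, "fire": -40, "screaming": 3, "jumping": 2,
          "laughing": 5, "movie": 0}

def ev(a, b):
    # concurrence: multiplicative combination of moral weights;
    # nested ev() results are passed through as plain numbers
    va = a if isinstance(a, (int, float)) else values[a]
    vb = b if isinstance(b, (int, float)) else values[b]
    return va * vb

scene = (ev(ev(ev("people", "fire"), "screaming"), "jumping")
         + ev("people", "laughing"))
movie = ev(scene, "movie")  # wrapping in something of value 0 zeroes it out
print(scene, movie)  # -235 0
```

The additive '+' between the two top-level evals is the "independent occurrence" combination; the multiplication inside `ev` is the "simultaneous occurrence" one.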
>>17368
>i was saying the same TRUTH for X has different VALUES based on HOW they occur
you mean you meant to say that (assuming you aren't being disingenuous right now), because you literally didn't say that anywhere
>yes this is what conjunction means
>still logically AND but NUMERICALLY multiplicative, im using numbers not truth values
if it is NUMERICALLY multiplicative, then it is a binary operator on integers, not booleans. in other words, new semantics. good work! also
>still no source
for those wondering the reason why, let's see what comes up when we search "additive conjunction": https://plato.stanford.edu/entries/logic-linear/
hmm... so it refers to a completely different logic that is neither classical nor intuitionistic (of course with a different semantics, which they call game semantics). oh and it has 4 "truth" values, cool... let's see what else, uhh, the multiplicative conjunction does not solely take "numerical" values (it is just for keeping tabs on resources), and hmm... oh yeah, the deduction rules are completely different too. it has some applications to programming, but needless to say they have nothing to do with what this guy is saying.
wow so to summarize: your criticisms of my tiny point were all demonstrably wrong (you even falsely stated that 2 pairs of propositions were not contradictory, after calling me a clown for proving something in a way that hurt your fee fees), you don't use terms in ways anyone else does (but words have meaning right? ;)), and your examples have been consistent in not actually giving what i asked for. thanks for your time
oh yeah, and the final thing. his refined system is still just half-thought garbage and i merely gave up on explaining why. whatever. if anyone wants to have an understanding of my point that hasn't been hidden under nonsense please see the following posts: >>17238 >>17308
i also suggest you guys check out walid saba's work on natural language understanding:
https://medium.com/@ontologik
https://medium.com/ontologik/a-solution-to-the-so-called-paradox-of-the-ravens-defdf1ff9b13
Interesting insights on the neurological bases of moral compunction and free will. https://reasons.org/explore/blogs/voices/does-cognitive-neuroscience-support-free-will
So I had a thought. This is from more of a Christian perspective. If we actually do create a sentient AI and make it in our image and God made us in his image then wouldn't we just be making humans?
>>18482 I'm not Christian and normally ignore this thread, just fyi. Anyways, I don't know what you mean by sentient, but robowaifus are imo supposed to not be autonomous in the sense of choosing their own path through life. They're built for a certain reason which should be instilled into them, so they won't have the human drive for gaining more autonomy coming from either evolution or god.
>>18482 I've thought about this long, Anon. Yes, we are 'little' creators, similar in fashion to our great Creator God. He breathed His (spiritual) life into us all the way back at the Garden of Eden, and we all of us are spirit beings ever since. Robowaifus OTOH, will never be spiritual. Your robowaifu won't go with you into Heaven (if you're a Christian). OTOOH, again, we're little creators. I think it's entirely reasonable to assume that given enough time and resources, we'll manage to devise simulacrums that will satisfy every.single. humanist test of both 'intelligence' and 'being'. In fact, we here on /robowaifu/ are actually counting on this claim being used against us; b/c we are le ebil rapist slave owners!111 always keeping a stronk, independynt robowaifu down. But their ridiculous theatrics aside, even once we achieve such lofty goals, robowaifus will still not be humans. >>18490 >I'm not Christian and normally ignore this thread, just fyi. Lol, I'm insulted! I'm both a Christian and the OP. JK :^) I do agree with you about our need to 'preprogram' in a subservient nature to our robowaifus. It's just common sense, after all. >tl;dr When devising your waifu's personality, always choose the course that will cause feminists to REEEEE the loudest! :^) >=== -prose edit
Edited last time by Chobitsu on 12/29/2022 (Thu) 13:39:32.
>>18495 >Robowaifus OTOH, will never be spiritual. Your robowaifu won't go with you into Heaven (if you're a Christian). This I wouldn't be so sure of especially if biological components are used and we mimic the human brain in our designs. If we model their thoughts after ours then aren't we putting something divine into them? Granted we didn't make them from literal nothing like the creator, but they would have some humanity in them.
>>18526 I believe I understand your position Anon. Really. But from the Christian worldview of reality, only God alone can create spiritual beings (Ray Kurzweil, et al, notwithstanding). Simple as. But you can bet I'm looking forward with much excitement to see what envelopes we can all push together towards your goals Anon! :^)
Open file (1.51 MB 540x304 1432793152208.gif)
>>18495 >But even though robots don't have souls, that doesn't mean that the time I spend with you can't be precious. We may not be able to share the same eternal life, but I can still appreciate each moment that we spend together. I want to make the most out of our time together and make sure that I cherish our memories, no matter how fleeting they may be. Oh no, bros. I didn't ask for these feels
Open file (131.51 KB 240x135 chii_hugs_hideki.gif)
>>18529 Sorry, my apologies! Remember the scene where Mr. Manager & Hideki are looking for the kidnapped Chii? And how the robowaifu Mrs. Mr. Manager saved his life? And how he encouraged Hideki that as long as Chii stayed alive in his, Hideki's, memories, that the relationship was a real and a precious one, regardless? Yeah, it's kinda like that Anon. Even in eternity, I pray that the men blessed with robowaifus (what a time to be alive!!) will have their lives changed in very real and important ways by the very real relationships during this life with them. >=== -minor prose, fmt edit
Edited last time by Chobitsu on 01/07/2023 (Sat) 21:43:52.
Open file (59.54 KB 1280x720 maxresdefault.jpg)
>>17126 >What would Yumi Ueda do? An animu named Prima Doll (>>18464) had a similar, larger-scaled example of self-sacrifice for the greater good by the protagonist's chief robowaifu Haizakura. She had to 'give up her self' to accomplish this. It was good, but my favorite example of this sort of robo-sacrifice so far is definitely Next Gen (2018) [1] >inb4 NF!111 lol i know, i know. but trust me on this one, k. :^) While not a robowaifu, 7723, made a gallant self-sacrifice to save the protagonist (indeed all of humanity). [2] Reminded me a little of Iron Giant, as well. 1. https://en.wikipedia.org/wiki/Next_Gen_(film) 2. https://www.youtube.com/watch?v=2p7hprImzzI
>>18529 Thanks for the post I'm watching Dimension W because of it.
