/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Emotions in Robowaifus. Robowaifu Technician 07/26/2022 (Tue) 02:05:49 No.17027
Hello, part-time lurker here. (Please excuse me if a thread on this topic exists already.) I have an idea for how we could implement emotions easily in our Robowaifus. It stems from Chobits, where Persocoms change behavior based on battery level. So please consider this: emotions would be separated into two groups, internal and external stimuli. Internal stimuli emotions are things like lethargy, hunger, weakness, etc., derived at their base from low battery and damaged components. External stimuli emotions are things like happiness, sadness, etc., provoked by outside events, mostly relating to how the humans (and master) around her act. A mob-mentality way of processing emotions. All of this would be devoid of any requirement for AI, which would quicken development until we make/get a general AI. Until that time comes, I think this artificial implementation of emotions would work fine. When AIs do enter the picture, this emotion concept is simple enough that a compatibility layer could be added so the AI can connect to and change these emotions into something more intelligent. Perhaps a more human emotional response system [irrational first thought refined into a more thought-out, rational/personality-centered response], or a direct change of the base emotional responses by the AI as it distinguishes itself from the stock personality into something new. :]
> (>>18 - related-thread, personality)
> (>>9 - " , face)
>===
-add related-thread crosslinks
Edited last time by Chobitsu on 07/27/2022 (Wed) 00:27:23.
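A minimal sketch of OP's two-group model with no AI in the loop; the sensor names, events, thresholds and decay rates below are placeholders, not a spec.

```python
# Minimal sketch of the two-group idea in the OP: internal stimuli derived from
# battery/damage readings, external stimuli nudged by outside events. All names,
# thresholds and decay rates here are placeholders, not a spec.
class SimpleEmotions:
    def __init__(self):
        self.internal = {"lethargy": 0.0, "weakness": 0.0}
        self.external = {"happiness": 0.5, "sadness": 0.0}

    def update_internal(self, battery_level: float, damaged_components: int):
        self.internal["lethargy"] = 1.0 - battery_level           # low battery -> lethargic
        self.internal["weakness"] = min(1.0, damaged_components / 5)

    def on_event(self, event: str):
        # Crude "mob mentality": outside events push the external emotions around.
        if event == "master_smiled":
            self.external["happiness"] = min(1.0, self.external["happiness"] + 0.2)
        elif event == "master_scolded":
            self.external["sadness"] = min(1.0, self.external["sadness"] + 0.3)
        self.external["happiness"] = max(0.0, self.external["happiness"] - 0.01)  # slow decay

emo = SimpleEmotions()
emo.update_internal(battery_level=0.2, damaged_components=1)
emo.on_event("master_smiled")
print(emo.internal, emo.external)
```

A later AI could hook into the same dictionaries, which is roughly the 'compatibility layer' OP describes.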
I'd imagine you can program any (or upload any "stock") personality you want. I don't think we've delved too deep into this aspect on here, but as I see it there are two or three avenues of approach:
1. program a personality
2. evolve/train a personality (seems like this would be more robust and have the bugs worked out)
3. "clone" a personality from a human, or train on a set of human interactions (which we probably want to avoid, or we get right back to square one with where things are: bio-women, to avoid using the more offensive term)
The evolution of emotion is fascinating, beginning with anger/agitation in the lowest lifeforms up to the more complex caring, nurturing/protective feelings in higher mammals. Emotions serve a purpose and are more than aesthetics. Although you could program your waifu to exhibit "favorable" emotions that you enjoy being around (and I see no problem with that tbh), one may want to consider whether other emotions are helpful, a hindrance, or even needed.
>>17027
>Hello, part-time lurker here.
Hello Anon, welcome! Yep, emotions are a very important part of engaging with others, and as you point out, Chobits addresses this issue directly in several different ways. Chii, for example, actually grew in her capability to express emotions as her language ability grew. It's sort of a primary story arc that she went from being a child-like character to one who loves Hideki wholeheartedly.
>>17031 I haven't considered the idea of minimal to no emotion, though one might consider that the growth of robowaifus in the future might depend on them being as human as possible without turning into a hoe. Though as this is in the future, and most likely we'll be open-sourcing our stuff, I see no problem with someone making an 'emotions' module down the line. :]
>>17033 Man I love Chobits so far, gotta finish it though (I just finished SEL). Also sorry if this is off-topic, but I am being a social autist and very shy posting right now. I really like the idea of having a robowaifu one day, so I plan to support you all! ;}
>>17037
>SEL
Lain was interesting enough. I think it deserves a second watch b/c it feels like there's some essence there that generates appeal, and I've only picked up a hint of whatever that is. It was confusing, and I suppose some of that was from the juxtaposition of dated tech with more futuristic sci-fi possibilities.
>>17035 I like the idea of a somewhat cold and pragmatic waifu whose soft side is only revealed in private or intimate moments. I think one of the main appeals of the R/W over a biological human is its ability to transcend its emotions to the point of not needing them except when they serve a purpose.
>>17039 The personality of each man's robowaifu is at his own discretion; I really wanted to consider design standards related to emotion. And while evolving would be the easiest (yet perhaps most time-consuming) way to build a waifu personality, I think standards are still needed for those who want to build their own AI (with a custom personality), and a standard way to deal with emotions might help with that. Though making emotions into something more like a system library, and less like external hardware with a driver, is probably the smart choice. Either way, I don't think emotions should be a requirement, but they should be standardized. I don't want personalities that make use of emotions to work only on one robowaifu. They should be somewhat portable; I want a harem of clones :]
>>17038 The Apple-like (ass-pull) design for the Navis was a bit weird to me, considering I don't think any of those computers were super extensible, but I love Lain. Not in an earthly sense, but she is just so appealing in a way I can't describe.
>>17043
>I really wanted to consider design standards related to emotion.
I too want to consider that. BTW, this thread also inspires me to want to do the same for morality & ethics in robowaifus. While I will never go down the 'everyone is a speshul-snowflake, we're all the same and no one can be allowed to fail except Donald Trump!11111' leftist-tier ideologies, it's an important set of topics to address--and indeed establish standards for--in our robowaifus' systems. For example, in Chobits, Hiroyasu Ueda (Mr. Manager to Chii) had a persocom wife, Yumi Ueda. At the end of her life, as she was slowly dying, there was a traffic accident and she was killed saving his life. Disabled as she was, she still managed this act of sacrifice. That suggests to me (fictional account notwithstanding) that good ethics was very fundamental to her makeup. Just so for our robowaifus also. However, just like all the rest of these important characteristics, this will only come about as the result of lots of hard work on our and other anons' parts! :^) Thus the need for a specific Robowaifu Ethics & Morality thread, without allowing it to be trolled by the usual suspects into a cesspool of degeneracy. It will take firm positions on all our regulars' part to keep it from descending into a wretched hive of scum & villainy tbh.
>===
-add the single word 'good'
Edited last time by Chobitsu on 07/26/2022 (Tue) 14:09:34.
On the subject ITT, there's a widely-studied, widely-(ab)used resource that became what is called FACS. Anons here can probably at the least gain numerous benefits from its 'standardization', even if we adopt somewhat different approaches for our robowaifus' 'emotions'.
>tl;dr We can have a discussion starting here.
https://en.wikipedia.org/wiki/Facial_Action_Coding_System
https://en.wikipedia.org/wiki/Emotion_recognition
https://diglib.uibk.ac.at/ulbtirol/content/titleinfo/782346
https://imotions.com/blog/facial-action-coding-system/
>also:
lib gen.rs/book/index.php?md5=4A97BF6B6227201C5D82065973C0329F
lib gen.rs/book/index.php?md5=D56B5845AFBC2D4E8205F8DCE3238D22
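A sketch of how FACS could serve as that 'standardization' layer: a few commonly cited emotion-to-Action-Unit combinations (EMFACS-style). The exact AU sets and the linear intensity scaling are assumptions to verify against the references above.

```python
# A few commonly cited emotion -> Action Unit (AU) combinations in the EMFACS
# tradition. Treat these exact sets as assumptions to check against the
# references above before relying on them.
EMOTION_TO_AUS = {
    "happiness": [6, 12],         # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],      # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],   # brow raisers + upper lid raiser + jaw drop
    "anger":     [4, 5, 7, 23],   # brow lowerer + lid raiser/tightener + lip tightener
}

def servo_targets(emotion: str, intensity: float) -> dict:
    """Map an emotion label to per-AU activation levels (0..1) that a face rig
    could translate into servo positions. The linear scaling is an assumption."""
    return {au: max(0.0, min(1.0, intensity)) for au in EMOTION_TO_AUS.get(emotion, [])}

print(servo_targets("happiness", 0.7))   # {6: 0.7, 12: 0.7}
```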
>>17027 It's important to disambiguate the different things we call emotions. I think in cognitive science, there are at least these categories:
- The internal state that has some global effect on cognition. I think researchers are fairly consistent about calling this "mood".
- The conscious experience. I think this is usually referred to as "feeling", though the definition seems to vary across researchers.
- How it's expressed. I think this is usually referred to as "affect", though the definition seems to vary across researchers.
- The categorizations we use for conveying them. When researchers take the time to define what they mean by "emotion", I think this is usually what they have in mind.
Thinking about it systematically, you would need at least these parts.
An underlying description language for feelings. This would be similar to how x, y, z coordinates are used to identify a position in 3D space; there should similarly be some "coordinates" to identify a feeling. This seems like a good starting point: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2367156/. Note that the circle in this paper has a somewhat complicated history. It started with factor analysis on survey questions about feelings & affects. Psychologists then associated emotions (categorizations) with the dimensions they found, then did factor analysis again with surveys on how people think about those emotions. So the two dimensions of the circle (activation-deactivation, pleasant-unpleasant) correspond to dimensions of emotions, and emotions themselves correspond to dimensions of feelings & affects. Intuitively, describing feelings in a language that matches this model would be like describing objects in a visual scene. The number of objects can vary, and so can the characteristics of each object. Similarly, the number of dimensions of a feeling can vary, as can the characteristics of each dimension. In machine learning and statistics, models without a fixed number of dimensions ("parameters") are represented with stochastic processes, like the Dirichlet process, the Chinese Restaurant process, and the Indian Buffet process, and they're studied as "nonparametric methods".
A model of how feelings relate to one another, similar to how you can zoom into or out of objects in a visual scene, or how you can look at objects as a whole or in part. With visual objects, we have things like positioning on the retina, depth estimation, and interaction between objects (e.g., one person throws a ball and another looks ready to catch it, so we can group both people as "playing catch") to help us with this. There should be something similar for feelings. By that, I mean we should be associating feelings with some underlying thing that tells us when two feelings are both associated with "the same thing" when we "zoom out", or that a feeling may have some depth to it if we were to "zoom in".
A source of stimuli. These are things that should generate (or generate changes to) behavior. I think what you're describing with internal and external stimuli is a categorization of this.
A model of how feelings change over time. With visual objects, we expect that forces, entropy, and intent govern how objects change over time. Solids can push or pull each other, gasses and liquids can dissipate, animals can pursue goals. (I know they technically all come down to physical forces, but that's not how we talk about them, and it's probably not how we cognitively model them.) Similarly, there should be some rules that govern the dynamics of feelings.
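A minimal sketch of the Chinese Restaurant process mentioned above, as one way to let the number of feeling "dimensions" stay open-ended; the function and the circumplex comment are illustrative assumptions, not the paper's model.

```python
import random

def chinese_restaurant_process(n_samples, alpha=1.0, rng=random.Random(0)):
    """Assign each new observation to an existing 'table' (feeling dimension)
    with probability proportional to its popularity, or open a new table with
    probability proportional to alpha. The number of dimensions is unbounded."""
    tables = []       # tables[k] = number of observations assigned to dimension k
    assignments = []
    for _ in range(n_samples):
        weights = tables + [alpha]               # existing tables + "new table"
        r = rng.uniform(0, sum(weights))
        cumulative, choice = 0.0, len(tables)
        for k, w in enumerate(weights):
            cumulative += w
            if r <= cumulative:
                choice = k
                break
        if choice == len(tables):
            tables.append(1)                     # a brand-new feeling dimension
        else:
            tables[choice] += 1
        assignments.append(choice)
    return assignments, tables

# Each "feeling" could then carry circumplex coordinates per active dimension,
# e.g. {"valence": +0.4, "arousal": -0.7}, with the CRP deciding how many
# dimensions a given feeling needs.
assignments, tables = chinese_restaurant_process(n_samples=20)
print(assignments)   # e.g. [0, 0, 1, 0, 2, ...] -- dimension index per observation
print(tables)        # popularity of each dimension discovered so far
```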
Follow-up to >>17244. I have a question for everyone: What are all of the things we have feelings about? OP mentioned internal stimuli associated with physiology and external stimuli associated with how other people act. I think there are more. People can have feelings about characters or events in stories, they can have feelings (like excitement) about activities they're performing, and they can have feelings about statements about the world (god does/doesn't exist, intelligence is/isn't genetically determined, etc.). It would be good to be specific. For example, we have feelings about some stories but not others. We have feelings for some statements about the world but not others. Why is that? If we can get a good understanding of that, I think we can figure out a lot of the missing pieces for how to model feelings and emotions.
>>17245 Great post, Anon! For one, I welcome our new feefee overlords. I'll try to respond to this after I give some thought to your ideas this weekend. Cheers.
>>17246 Thanks! I asked the same question to a cogsci researcher, and he pointed out that we technically can have feelings about anything. I tried to narrow the scope to these questions:
>Is there any way to ask a person questions about X to figure out whether they will have strong feelings (either strong positive/negative valence or arousal) about Y? Or is there any way to construct Y such that if someone has strong feelings about X, then they will have strong feelings about Y?
>Similar to how we can have feelings about anything, we can have visuals for anything. However, we can only efficiently work with visuals (zoom in, zoom out, rearrange, recolor) when they're associated with spatially-coherent objects. Similarly, there should be some requirements for when we can efficiently work with emotions. What are they?
Both try to answer the same question from different angles. The first one, about predicting/constructing things people will have strong feelings about, is more focused on understanding the logic of emotions. The second one, about efficiency, is about understanding the organization of emotions. I expect that the two are very closely related, and that answering one of these questions will result in an answer to the other.
>>17245 Bit of a ramble, bear with me. Currently, with our language and culture, we need to attach a feeling to something in order to process it. That isn't to say we don't have instinctual reactions to things, but rather that we lack the capacity to quantify such things. Take fear as an example. Where would fear sit in the internal/external stimuli model? A textbook example of fear is "thing goes bump in the night". Allow me to attempt an explanation and model of such an event. On a biochemical level, the lack of sensation through the most-used sensory organs (the eyes) and over-reliance on your other sensory organs creates an overload of information with the sudden bump (external stimuli), triggering primal survival instincts. We then "feel on edge", so to speak, as one human-defined manner of expressing that rush of chemicals waking your body up to high alert. But the lack of an appropriate cause for such a response, and the eventual release of said energy, deadens the receptors for the chemicals, and the body is put into a state of suspense, as it can't decide whether to continue the highly inefficient but potentially life-saving energy output or to write this off as a false positive and return to normal. Would this be a mix of both then?
This is the typical process (as I would imagine it) one goes through the first time they become scared. After experiencing this and storing it away, whenever someone says the word "scared", we are able to draw upon the same or a similar experience to align that idea with our own worldview, a mix of prediction, imagination and extrapolation. And as the "scary" storage grows, so too does the interpretation of what is scary; for example, people who are afraid of dogs vs people who have seen much scarier animals in comparison, making dogs feel less scary. But just how much language did I have to use in order to quantify such a feeling? Is this model good enough to extend further to cover all cases of the emotion? Imagine needing to store all the related information from above in a manner such that a robowaifu can summon the full meaning of the feeling on demand. But that expectation is a bit overboard when we ourselves can't even quantify some of the feelings we get until much later.
I think language acts as a shortcut, with emotions being a sort of tree structure that begins with the strongest example at the front and slowly cascades down into weaker or imagined scenarios, with the edges of the tree being amorphous until generated specifically, branching out from a mishmash of the consciousness. Language is a quick way to access the root of the tree, and has a predesigned path depending on the context of the language used. Designing a language that can model feelings like humans do is a tall order, as the human brain is much better at "pruning" unrelated branches that are generated, an evolutionary feature that can simultaneously read and rewrite what is being created to generate a good-enough response in the shortest amount of time. Algorithms are mostly min-max only; designing a "good enough" stopping point is difficult to quantify other than setting a hard limit on the iteration count of generation, which then strays into needing good set theory and statistics. So having said all that, how would we model something abstract like fear? The feeling of powerlessness? The knowledge that after all possible calculations within an acceptable timeframe, the solution is still not optimal? An unknown variable that cannot be resolved?
A simple attempt here to model it in English has already resulted in many different generations, each one potentially being perceived differently depending on the "base tree" of fear that one subscribes to. How different does an emotion or feeling have to be to generate a new tree? For example, fear of abandonment vs fear of the unknown are two relatively different types of fear (though you could argue fear of abandonment is simply fear of the unknown extended to happen after the event of being "abandoned"). How would we store this concept, or would the robowaifu generate the emotions from a predetermined set? What would that set be? I have no idea where I was going with this, but I hope it generates some ideas for you to share.
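Not a spec from the post above, just a minimal sketch of the "language as a shortcut into an emotion tree" idea, with a hard iteration budget standing in for "good enough"; the node fields and the expand() stub are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionNode:
    label: str                    # e.g. "fear", "fear of the unknown"
    intensity: float              # strongest examples sit near the root
    children: list = field(default_factory=list)

def expand(node):
    """Placeholder generator: a real system would imagine weaker or more
    specific scenarios (e.g. via a language model). Here it just fabricates
    two fainter child scenarios so the traversal below has something to walk."""
    return [EmotionNode(f"{node.label} / scenario {i}", node.intensity * 0.5)
            for i in range(2)]

def recall(root, budget=10):
    """Walk the tree strongest-first, generating branches lazily, and stop once
    the iteration budget is spent -- a crude stand-in for 'good enough'."""
    visited, frontier = [], [root]
    while frontier and budget > 0:
        node = max(frontier, key=lambda n: n.intensity)   # strongest example first
        frontier.remove(node)
        visited.append(node.label)
        if not node.children:
            node.children = expand(node)                  # edges stay amorphous until generated
        frontier.extend(node.children)
        budget -= 1
    return visited

# The word "fear" acts as the entry point (root) into the tree.
print(recall(EmotionNode("fear", intensity=1.0)))
```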
>>17281 I'm going to do a thought dump as I parse what you wrote. I think you're right when you say we lack the ability to process [certain] things unless we can attach a feeling to them. We treat feelings as if they were right on the border of logic and not-logic. I say that because it seems reasonable to ask why (a logic question) a person feels a thing whenever they feel something, yet we simultaneously reject feelings as a justification for anything abstract and objective. I guess the question then comes down to this: which things do emotions let us process? I think emotions are always involved when we expect people to justify their actions or behavior, and that pure logic is never sufficient to justify these things. Even when people try to justify their actions through something physical or mathematical, like utilitarianism or evolution, there's always the implicit assumption that they could break from the utilitarian or evolution "script" if they wanted to, and the thing attaching them to the script is some desire. In the pathological cases, like with depression or anxiety, people's feelings drive their behavior to such an extent that it overrides any script that the person believes reasonable to follow.
>We then "feel on edge", so to speak, as one human-defined manner of expressing that rush of chemicals waking your body up to high alert. But the lack of an appropriate cause for such a response, and the eventual release of said energy, deadens the receptors for the chemicals, and the body is put into a state of suspense, as it can't decide whether to continue the highly inefficient but potentially life-saving energy output or to write this off as a false positive and return to normal. Would this be a mix of both then?
That's a very interesting thought. There's one potential outcome to which you attach a high-arousal feeling (as in the "valence-arousal" dimensions of emotions), and another potential outcome to which you attach a low-arousal feeling, and it's through a mix of the two that you get suspense. If that's true, it suggests that the brain can look for (potentially complex) patterns in feelings, similar to how it can look for complex patterns in visual input. In the case of vision, it finds patterns in color-based activations with spatial structure. In the case of emotions, it could be feeling-based activations (e.g., Valence-Activation instead of Red-Green-Blue) with prediction structure (e.g., a Monte Carlo tree instead of a 2D grid). If that's true, then the brain would be able to attach phrases like "scared" and "powerless" to these feeling patterns just like it can attach names to physical objects.
>I think language acts as a shortcut, with emotions being a sort of tree structure that begins with the strongest example at the front and slowly cascades down into weaker or imagined scenarios, with the edges of the tree being amorphous until generated specifically, branching out from a mishmash of the consciousness.
This is the same as how I think of it. The mathematical object describing this is a topology or (equivalently) a lattice. With no information, all possibilities are valid. Every piece of information is associated with a subset of those possibilities, so every time you get more information, you can "descend the lattice" to more specific scenarios. Information theory gives a direct relationship between language and information of this sort.
The relationship between natural language and the kinds of language useful for navigating these trees is less clear, but I don't expect that to be a big jump. I think people are already working on this under what they call "triplet extraction".
>How different does an emotion or feeling have to be to generate a new tree?
With a topological representation, you don't need to worry about things like when to generate a new branch. Since each unit of language is allowed to carry a (continuously) variable amount of information, all the branches of the tree are able to blur together or separate depending on what the data requires. Depending on what kind of space you use, you can even calculate the "distance" between two arbitrary feelings. Conceptually, all possible emotions would exist in some abstract space, and each labeled emotion would correspond to some "landmark" in that space. The landmark could be either a point (i.e., a complete, maybe infinite, description of a distinct feeling) or a subset (i.e., a finite description that narrows the set of possible feelings). That might sound complicated, but programmatically these things are pretty straightforward to implement, since it's just a combination of, e.g., text generation and text-to-whatever models.
Thank you for that. Reading and responding to your post helped clarify a lot of things for me. I think I now have a satisfying answer to the question I originally posed in >>17245: we have feelings about exactly the set of things that predict our actions and behaviors. With that, I think I can follow up on:
>If we can get a good understanding of that, I think we can figure out a lot of the missing pieces for how to model feelings and emotions.
I'll need to think about all of this a bit more to compile my thoughts and ground them in directions that are actually programmable. For now, I can at least say that this has flipped my understanding of emotions on its head. I previously believed, and I think it's common to believe, that feelings are necessary for making good decisions with bounded compute. If I'm right about us attaching feelings to everything that predicts our actions and behaviors, that means the relationship is actually the reverse: feelings require us to have bounded compute. Otherwise, the whole situation would be impossible, since our feelings-about-feelings-about-feelings-about... (ad infinitum) could drive our behavior, which would lead to all sorts of paradoxes.
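A minimal sketch of the "distance between two arbitrary feelings" idea, assuming some text-to-vector model is available; the embed() stub here is a throwaway character hash so the snippet runs standalone, and a real embedding model would be needed for the distances to mean anything.

```python
import numpy as np

def embed(description: str) -> np.ndarray:
    """Placeholder for a text-to-vector model (e.g. any sentence-embedding
    network). Here we just hash characters into a fixed-size vector so the
    example runs without any external model."""
    vec = np.zeros(64)
    for i, ch in enumerate(description.encode()):
        vec[i % 64] += ch
    return vec / np.linalg.norm(vec)

def feeling_distance(a: str, b: str) -> float:
    """Cosine distance between two feeling 'landmarks' in the abstract space."""
    va, vb = embed(a), embed(b)
    return 1.0 - float(va @ vb)

print(feeling_distance("fear of abandonment", "fear of the unknown"))
print(feeling_distance("fear of abandonment", "quiet contentment"))
```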
>>17311 Look at you using fancy purty words. Excellent stuff, attaching scientific concepts to my inane rambling.
>emotions are always involved
I agree. And as an extension of my previous post >>17281, I think a difficult aspect of modeling human emotion processing is that a lot of it is done subconsciously, and everything has to be observed after the fact. It's sort of like the collapsing wave-form paradox, where only post-fact introspection can occur, and a lot of the variables at the time are lost afterwards or are very difficult to obtain information on simultaneously. It's why, even though we can formulate a logical reason for an emotion, in practice there always appear to be more factors that influence how that emotion is expressed and the actions taken on that emotion.
>reject script
The rejection of following a script might come from other aspects of how they view the human condition and other philosophically rooted ideas, but it lends credence to the idea that their feelings on the subject matter define how they understand it, rather than a subject existing and then feelings arising about it. Achieving a true "neutral" view on something is almost impossible for humans because of this tinge of subconscious processing. As for what a subconscious is, I guess it's akin to a primitive pathset in the neurons and axons, while the conscious is a more active aspect that also focuses on creating new subconscious pathways as it interprets information from sensory organs. A sort of weighting system, where the more the conscious aspect travels down the topological field or lattice or tree or whatever in one specific manner, the more it is imprinted into the subconscious module, so that future quick pathing/shortcutting will travel down that one instead of a different one. Putting it into your language, the conscious part travels down landmarks/subsets, which creates an imprinted map in the subconscious space that only holds those landmarks for quick retrieval should a similar pattern of activation arise. Muscle reflexes and venus flytrap triggers are what I'm mainly thinking about when I write this, where the information does not even reach the central processing unit before an action is already taken, a predetermined chemical set of reactions should a trigger arise. I believe the emotional subconscious operates on a similar level, where before the conscious aspect has time to process the information fully, the emotional subconscious has already "realigned the board", creating a somewhat biased field of landmarks for the consciousness to travel down. It would take a greater effort to travel a new path than to revert to using the old ones (an attempt at explaining the pathological aspect of depression or other learned behaviors/biases). Perhaps there might be multiple subconscious spaces that trace down from certain landmarks (triggers, if you will) depending on how often they're used; for example, "happy" as an emotion I would imagine would have multiple subconscious paths.
>>17343 (cont) >>17311
>topological
My point about needing a new branch was more a question about the memory costs of storing that branch, or the costs associated with letting a full generation occur. I'd argue the brain is pretty efficient, for a few-hundred-million-years-evolved system, at pruning offshoots to conserve energy, but even then we're running at a purported 20% energy usage just for the brain at rest. I think my idea about the subconscious above helps with conceptualizing how these are stored, where a small section of the topological field, a thread if you will, is stored for the subconscious from the original conscious topological field. Or perhaps it's in reverse, where the topological field exists in the subconscious and the conscious only travels along it, and the topological field is actually a lot of these threads pieced together. Thus my question is sort of about when a new "thread" (since I am running with the idea that the emotion is attached to a series/set of landmarks) pops up in the subconscious, how much energy is needed to alter the topological field?
>feelings require us to have bounded compute
To reiterate everything above in relation to this point, the subconscious topological field projected forth is the bounded compute, where the "board" has already been set by preset triggers and reactions, and the follow-up consciousness (aka feelings as we can process them) travels along this biased field. It would explain why there is so much difficulty in accepting the predeterministic view of feelings and emotion: the board has been set without us being able to process it as such, and attempting to prove it as such is next to impossible.
>the whole situation would be impossible since our feelings-about-feelings-about-feelings-about... (ad infinitum)
Though again, with how seemingly at the quantum level lots of statistical fuckery is going on, perhaps there is no ad infinitum calculation going on simply because a result must have been reached, therefore a stable state was calculated. A perhaps overly deterministic view of how emotions and feelings are decided. Though I think a topological space of the subconscious, spun from all existing subconscious threads (landmark sets from subsets), makes the most sense, as even if there is no bounded compute, there are still stable states within the topology which the consciousness can enter and traverse for emotion processing (sort of like quantum tunneling and energy fields).
<energy
<topology
Babies would therefore have an almost flat topology field, as their subconscious does not have enough information to take a shape yet, thus transformations to the topology are rather easy and carry very little energy cost, resulting in their exponential emotional learning ability. Perhaps craziness is due to consciousness entering a "hole" in the subconscious topological space, where there is ad infinitum generation, thus trapping them within that state until there is a random chance for the consciousness to exit, or, upon split personality, a new consciousness thread takes over for feelings processing, leaving the original consciousness within the mess until random chance/variation in the topological space lets it pop out. Or perhaps energy is expended to transform the topological space into a different category, freeing the trapped consciousness thread. Bringing it all back to the fear example: an event occurs and a change is generated in topological space, transforming the topology by drawing upon similar previous subconscious fear threads.
Post-fact, when one tries to introspect on that fear and understand it, they utilize a conscious thread to traverse the altered topological space, and in doing so either create a new subconscious thread to anchor into topological space (if a new fear) or reinforce an old subconscious thread (enlarging peaks in the topology field). When traversing the topological space they draw out previous patterns, and ascribe to them the concept of fear. Have I contradicted myself yet? I've lost track once again. Excuse me if I've misunderstood topology and its transformations; I was mostly interpreting it in a similar manner as energy states and tunneling to exit a locally bound low-energy state to an even lower one (entropic effects). I'm not very well read on this subject, nor any others for that matter.
<attach a feeling to something in order to process it
As to how this system works in relation to my original idea: to enter into the subconscious-thread-created topological space, one needs an entry point. Emotion subconscious threads are defined by the word describing them, which anchors the thread into the topological space. Thus processing can only occur if there is a feeling existent in the subconscious space, or else processing will result in a new subconscious thread being anchored. It's a sort of multiple-layered interaction between the original space, the subconsciously altered space and then the consciously driven altered space.
>robowaifus
And how to easily implement this in robowaifus? Shit man, I ain't got no clue. But I think there's something there, with a main emotion processing system, a subsystem and storage, if we want to model human emotions. If we want simplified emotions we can forgo the subsystem and simply have robowaifus reading off the storage, but it would feel very "artificial". Building up an emotions topological space will be done through training with the main emotion processing system, slowly building up sets of associated stimuli and responses (think baby emotions training, and ensuing autism if not done right).
>>17344 (final) If we want emotions to be able to evolve like in >>17033, or to derive their own "good" or learnable standard of morality like in >>17045, then the subsystem is needed. And I just realized I inadvertently put forth a reply to >>17244 without reading it. Consider all of the above as an exploration into how emotions might evolve: a two-part subconscious subsystem and an active main processing system.
>>17343 This post is going to be another thought dump.
>It's sort of like the collapsing wave-form paradox
That kind of talk scares me. One of the worst outcomes would be if quantum computers are required to emulate any significant aspect of people. Regardless, I think the analogy is an interesting one. Pardon me while I go full nerd for a minute. In QM, a system takes many mutually-contradictory paths, and the observed outcome is a choice of some of those paths. Each individual path that gets selected is always self-consistent, but the selection process can "use" information from all of the paths, even the ones that are inconsistent with the selected paths. Analogously, it would be as if your subconscious can evaluate all options, choose one that "makes sense", then do some parallel construction (https://en.wikipedia.org/wiki/Parallel_construction) to make it seem as if that chosen option was the only one taken. Making the analogy explicit in my own shoddy way:
- A self-consistent logical reason would be represented by a wavefunction. Your subconscious can juggle multiple mutually-contradictory logical reasons about any given thing, which would translate to multiple wavefunctions.
- Your subconscious would generate something like a density matrix of logical reasons. This would represent how every logical reason relates to every other logical reason.
- Emotion would be represented by some operator (function) that describes how one can feel about some given logical reasons. You can think of an operator as sort of like a matrix.
- Each eigenvalue of the emotion operator would represent a feeling, and each eigenvector of the logical reason density matrix would be another wavefunction (i.e., another self-consistent logical reason).
- Whenever your conscious needs something evaluated by the subconscious, the subconscious picks some emotion eigenvalue (feeling) and a corresponding set of reason eigenvectors (reasons), and it will return the feeling-reasons pair to the conscious side. When it does that, the subconscious updates its own context so it stops considering logical reasons that contradict what it just returned.
- Technically the subconscious would return multiple feeling-reasons pairs -- one for each observation it wants to report to the conscious side. Intuitively, it's like how when you visually look at something, you don't see just a single color, you see a whole picture full of colors. One pixel would correspond to one eigenvalue reported from your subconscious to your conscious side.
- Speaking from experience, I think the subconscious can try to roll back feeling-reasons pairs it previously provided to the conscious, and that it can update context deeply enough that even the conscious side will misremember things. Something about the subconscious providing a feeling-reasons pair to the conscious seems to make it very difficult to undo, though.
In QM, eigenvalues are usually real (with no imaginary component) since physics observables tend not to have imaginary components. The circumplex model from the link in >>17244 suggests that feelings are complex though, having both a real and an imaginary component. There are some consequences to having complex eigenvalues, though I don't understand the math well enough to say exactly what those are. I think it says something about how the system can evolve. Maybe something like "emotions aren't differentiable."
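A toy numerical sketch of the analogy above, not a claim about QM or cognition: a random Hermitian "emotion operator" is eigendecomposed, and a feeling-reason pair is sampled with Born-rule-style weights from a random density matrix. All the matrices are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "density matrix" over 4 mutually-contradictory logical reasons:
# Hermitian, positive semi-definite, trace 1.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Toy "emotion operator": Hermitian, so its eigenvalues (the 'feelings') are real.
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
emotion_op = (B + B.conj().T) / 2

feelings, reasons = np.linalg.eigh(emotion_op)   # eigenvalues + eigenvectors

# Probability of the subconscious "reporting" each feeling-reason pair,
# given the current mix of logical reasons (Born-rule style weighting).
probs = np.array([np.real(reasons[:, k].conj() @ rho @ reasons[:, k])
                  for k in range(len(feelings))])
choice = rng.choice(len(feelings), p=probs / probs.sum())
print("reported feeling:", feelings[choice])
```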
I'll leave this link here on the chance that someone understands it: https://en.wikipedia.org/wiki/Stone%27s_theorem_on_one-parameter_unitary_groups
Ok, I'm done nerding out. When I get some time, I'll explore this idea a bit more.
>their feelings on the subject matter define how they understand it, rather than a subject existing and then feelings arising about it
Maybe a bit of both. I think of it as a dynamic system. A person has thoughts about a subject matter and feelings about those thoughts. Those feelings determine what narrative they ascribe to the subject matter. That narrative affects what information they're receptive to, which shapes their thoughts on the subject matter. It forms a cycle, which keeps evolving as the person encounters more information and resolves more of their feelings. They can break out of a particular narrative given strong enough feelings (like guilt, maybe), though it's always constrained by, as you stated well, their view of the human condition and philosophically-rooted ideas. Maybe one way to shift these fundamental perspectives would be to force the subconscious into a massive contradiction such that no feeling-reason pair exists for the current context, and such that "rolling back" feelings & reasons within the current perspective is infeasible. I imagine this is what cognitive dissonance is all about.
It's getting pretty late for me. I'll respond to the rest a bit later.
>>17343 >>17344
>As for what a subconscious is, I guess it's akin to a primitive pathset in the neurons and axons, while the conscious is a more active aspect that also focuses on creating new subconscious pathways
I like this view. This feels conceptually related to your point about logical reasons only being observable post-fact too. The subconscious presents a set of choices attached to emotional feelings, the conscious selects from those choices ("collapses the wavefunction"), and logical reasons can be inferred from what's left. I guess the selection part done by the conscious is where emotions and logic meet. As the conscious continues making selections, the subconscious continues expanding on pathsets reachable from the consciously-selected paths. The subconscious is always trying to predict which paths the conscious will take so it can reach & expand them more efficiently. The subconscious continues processing things while it waits for the conscious to attend to it. As a result of its ongoing processing, sometimes it sets the framing, and sometimes it takes actions before the conscious is even aware of what's going on.
>needing a new branch
Ah, got it. I think there are different ways of storing branches/landmarks, each with their own advantages:
- For things grounded in representations that never change (like the first neurons of your visual cortex to get signals from your eyes), it could be useful to store an exact representation of the underlying stimulus and focus points. For example, if you look at a picture and your conscious chooses to focus on a particular object in that picture, then your subconscious might want to remember the exact picture and the exact object location in that picture. The advantage here is that this is as close to the ground truth as the subconscious can get, which is good for learning new things from old data. The disadvantage is that it's expensive in terms of both data and compute. In machine learning, this is the sort of representation that's used for most neural network tasks.
- For things whose representations change very little, the subconscious can remember something like a "path prefix" that identifies approximate landmarks. The advantage here is that it's very fast. The disadvantage is that it's not data-efficient, and these prefixes become useless if the underlying representations change a lot. In machine learning, this would be like remembering latent-space values. DeepMind's Retro https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens does this to look up data relevant to a task.
- For things whose representations change a lot, the subconscious can store representations in a model that recognizes the landmark. The model would search for paths on-the-fly that lead to the relevant landmarks. The advantage here is that it's very flexible and very data-efficient. The disadvantage is that it's much more compute-intensive. In machine learning, this is common under the name Inverse Reinforcement Learning.
I don't think there's going to be a simple answer for deciding which landmarks to store how. Between these three things, you end up with a *very* large design space for optimizing storage in specialized ways for all sorts of tasks. The brain might even learn to build and optimize its storage on-the-fly. It certainly seems like a problem I'd punt off to some machine learning algorithm.
>Babies would therefore have an almost flat topology field, as their subconscious does not have enough information to take a shape yet, thus transformations to the topology are rather easy and carry very little energy cost, resulting in their exponential emotional learning ability.
That's a very interesting thought. A well-trained neural network can actually learn new data faster than an untrained neural network, but it often takes a lot of representative data to actually train a neural network well. I think it has to do with what patterns the neural network learns and how they're connected. If it learns useful, fundamental patterns connected in logically-coherent ways relevant to the new task, then it will learn faster. If not, then it will learn much slower. So for things that humans learn in less-structured ways, like new languages, babies will often outperform adults. For highly structured tasks, adults will generally outperform babies.
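A minimal sketch of the second storage option above (remembering latent-space "path prefixes" and retrieving the nearest stored landmark); the encode() stub is a made-up placeholder for a trained encoder, so the retrieval here is only illustrative.

```python
import numpy as np

def encode(stimulus: str) -> np.ndarray:
    """Placeholder latent encoder -- a real system would use a trained network.
    A cheap character histogram keeps the example self-contained."""
    vec = np.zeros(32)
    for ch in stimulus.encode():
        vec[ch % 32] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

class LatentMemory:
    """Store 'path prefixes' (latent vectors) for past experiences and
    look up the closest stored landmark for a new stimulus."""
    def __init__(self):
        self.keys, self.labels = [], []

    def remember(self, stimulus: str, label: str):
        self.keys.append(encode(stimulus))
        self.labels.append(label)

    def recall(self, stimulus: str) -> str:
        query = encode(stimulus)
        sims = np.array([k @ query for k in self.keys])   # cosine similarity
        return self.labels[int(np.argmax(sims))]

memory = LatentMemory()
memory.remember("sudden loud bump at night", "fear")
memory.remember("master came home early", "joy")
print(memory.recall("a loud crash in the dark"))   # nearest stored landmark (toy encoder, illustrative only)
```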
>>17382 (cont & final)
>Excuse me if I've misunderstood topology and its transformations
It's fine, I think I understand what you mean. Normally, topology only tells you which things are connected to which other things. Density is usually associated with measure spaces or distributions. Flatness and curvature are usually associated with geometry. So the intuitive pictures we have in mind may be different, but most of what you described makes sense for topologies with or without measures and geometries. The only part that requires more than topology is reinforcing a conscious thread (enlarging a peak), which would require a measure space. In machine learning, it's pretty common to use measure spaces (for probabilities) and geometries (for derivatives) to picture these things anyway, so it's not confusing at all to me. I think one difference in how we're thinking about this is that, when I say "landmark", the picture I have in mind isn't analogous to a point on an electron cloud; it's analogous to the electron cloud itself. Sometimes the "cloud" might get reduced down to a single point, but usually that doesn't happen. So if the conscious is traversing a topological space, it's not walking along the space, it's shifting between different subspaces within that topological space. When I think of the conscious picking a path from a pathset provided by the subconscious, what I imagine is this:
- The subconscious has an overall space it's working within.
- The subconscious picks out a bunch of (potentially overlapping) subspaces that seem interesting.
- The conscious picks one or more of those subspaces.
- The subconscious expands on that choice by finding new interesting subspaces within the (union of the) selected subspaces.
>attach a feeling to something in order to process it
I think we're thinking the same thing here. Tying it back to vision: the data coming into our eyes consists of only colors, but we often think of objects as being defined mostly by their shapes. The colors provide the cues we need to infer both shapes and context (lighting), and to a lesser extent, the colors themselves provide some final cues for us to identify objects. We have landmarks in the space of objects by which we recognize objects through all of these things, shapes, context, and colors, and we associate those landmarks with language. For us to be able to process an object, we need to process the landmark associated with that object. That happens when the conscious "expands" on that landmark by focusing on its subspaces. (A subspace here would be, e.g., the object in various contexts, taking form in various shapes, and being recolored in various ways.) All of this begins with colors that come in through our eyes, and a color is just a "vision feeling". There should be a similar process going on for all feelings, including "emotion feelings".
>>17345 I actually suspect that ethics and morality aren't foundational, and that they're derived from something else. I think that's why ethicists don't seem to come up with things that become widespread and uncontested, which is something most other academic fields seem able to do. People's sense of right and wrong seems to change with time. I suspect what's more important is that there's some degree of agreement in what narratives people ascribe to the world and to the roles people can play within those narratives.
That gives people a common basis for discussing actions and outcomes: they can say that things are right or wrong in terms of the stories they're acting out. Here's one way to identify ethical compatibility: you can rank stories in terms of which story worlds you would prefer to live in. A robowaifu would be a match for you (in terms of ethics, at least) if and only if your rankings and hers converge "quickly enough" (which depends on how much patience you have for people with temporarily-different ethics from you).
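A minimal sketch of the story-world ranking idea, assuming both parties rank the same (hypothetical) set of stories; Kendall's tau is just one off-the-shelf way to score how well two rankings agree.

```python
from scipy.stats import kendalltau

# Hypothetical rankings of the same five story worlds (1 = most preferred).
# The story names and numbers are made up purely for illustration.
stories = ["story_a", "story_b", "story_c", "story_d", "story_e"]
anon_ranking      = [1, 2, 3, 4, 5]
robowaifu_ranking = [2, 1, 3, 5, 4]

tau, _ = kendalltau(anon_ranking, robowaifu_ranking)
print(f"rank agreement (Kendall tau): {tau:.2f}")   # 1.0 = identical ethics, -1.0 = opposite

# "Converging quickly enough" could then be tracked as tau over time,
# e.g. requiring it to pass some threshold within N interactions.
```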
>>17345 There's a discussion on "relevance realization" that seems relevant to what we're discussing here about the conscious selecting branches for the subconscious to expand on. It starts at 32:14 and continues until 41:27. https://www.youtube.com/watch?v=yImlXr5Tr8g&t=1934s He points out some connections to opponent processes, which were originally used to describe color perception. Here's a summary:
- Relevance realization is about the perspective/framing through which information and options are made available. It determines what's salient.
- Relevance realization must happen at a more fundamental level than propositional logic, or anything involving language. That's because the words we use implicitly come with a choice of framing.
- The process of relevance realization can be influenced by how we represent things, but it cannot depend on any particular choice of representation.
- There seems to be an evolutionary process within the brain that's involved in coming up with representations.
- Vervaeke pointed out three opponent processes that seem relevant for cognition: threat-opportunity (same as valence?), relaxing-arousing, and wandering-focusing. Some background information unrelated to the video: in vision, the three opponent processes are blue-yellow, red-green, and black-white.
- There are bottom-up things that direct your attention (like a sudden clap), and top-down things that direct your attention (language).
- Salience is whatever stands out to you. It's what makes subconscious aspects of relevance realization available to working memory. Working memory enables feedback loops with sensory information, and it acts as a global mechanism for coordinating subconscious processes. There seems to be evidence that working memory is a "higher-order relevance filter" (i.e., something that keeps track of the relevance of information to the process of relevance realization).
- Higher-order relevance filters & working memory are required when facing situations that are novel, complex, and ill-defined. Vervaeke suggests that these are the things that consciousness is required for.
This seems to me like a very elegant picture of conscious-subconscious interactions. It ties together a lot of theories I've heard about consciousness, like that it's relevant for global coordination, memory, attention, and feelings.
When latent impressions stored from our lifetime of experiences become active, they cause an emotional reaction: an actual chemical reaction in the body that activates certain parts of the brain, which then leads to a conscious thought process, which further develops into actions. If you observe your emotional reactions you will notice that most, if not all, of them are either about getting what you want or not getting what you want. If you trace them back to their source, they all arise from self-preservation, either from primal needs such as food, sex and sleep, or from attachment to an identity (which includes family, friends, community, country, species, environment and even ideas).
Latent impressions color our thought process and bias it in many ways. Think of the word 'car' and observe your thoughts. What comes to mind first? What color is it? What shape is it? Did an actual car arise in your mind, or another vehicle like a truck? Is it big or small? Do you like cars or dislike them? Do they remind you of something else, or something from the past or future? If you ask friends what comes to mind first about a word, you'll find everyone colors words differently. Some very little, some a lot. Most of these colorings come from our desires being fulfilled or unfulfilled, which become stored as latent impressions and bias our attention.
Language models are already fully capable of coloring 'thoughts'. The difference is that their latent impressions come from an amalgamation of data collected from the internet. There's no cyclical process where the resulting actions affect the latent impressions and those new impressions create fresh actions, since current models do not have a plastic memory. So the first step towards creating emotions is creating a working memory. Once we have that, we could have a much more productive conversation about emotions and engineering ideal ones.
One idea I've had for building a working memory into off-the-shelf models is to do something akin to prefix tuning or multi-modal few-shot learning: prefix embeddings to the context and continuously update them to remember as much as possible. Like our own latent impressions, the context would activate different parts of the memory bank, which would in turn influence the prefix embeddings and the resulting generation. This would be the first step towards a working memory. From there it would need to develop into inserting embeddings into the context and coloring the token embeddings themselves, within some constraints to ensure stability.
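A minimal PyTorch sketch of the prefix-embedding idea above, assuming a frozen base model whose token embeddings we can intercept; the module name, bank size and mean-pooled retrieval are placeholders, not the anon's design.

```python
import torch
import torch.nn as nn

class PrefixMemory(nn.Module):
    """Sketch: a small bank of learnable 'latent impression' vectors. A few of
    them are selected by the current context and prepended to the frozen
    model's token embeddings, acting as a crude working memory."""
    def __init__(self, d_model: int, memory_slots: int = 64, prefix_len: int = 8):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(memory_slots, d_model) * 0.02)
        self.prefix_len = prefix_len

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, d_model) from the frozen base model.
        context = token_embeds.mean(dim=1)                      # (batch, d_model)
        scores = context @ self.memory.T                        # (batch, slots)
        top = scores.topk(self.prefix_len, dim=-1).indices      # (batch, prefix_len)
        prefix = self.memory[top]                               # (batch, prefix_len, d_model)
        return torch.cat([prefix, token_embeds], dim=1)         # prepend the "memory"

# Toy usage: the base transformer would stay frozen; only the memory bank trains.
d_model = 32
mem = PrefixMemory(d_model)
fake_token_embeds = torch.randn(2, 10, d_model)
print(mem(fake_token_embeds).shape)   # torch.Size([2, 18, 32])
```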
I believe OP had the right idea, and that almost immediately the thread went into overthinking mode. Start simple, like reacting to low battery status. I would also like to emphasize: start transparent. One can say that emotional states are related to different modes of problem solving and so on and so forth, but this all gets very indirect. At the start, I'd rather only have emotions that are directly and immediately communicated, so you have immediate feedback about how well this works. So, ideas about simulating an emotion like nostalgia (is that even an emotion?) I would put aside for the time being. The state of the eyelids is something practical to start with. Multiple aspects could be measured and summed together to create the overall effect (see the sketch after this list):
-battery status
-time of the day
-darkness for some time
-movement (& how much & how fast & which direction)
-eyelid status of other faces
-low noise level for some time
-sudden noise increase
-human voice
-voice being emotional or not (I mean what you register even without knowing a language, this can't be very complex)
-hearing words with extreme or dull emotional connotation
-registering vibrations
-body position (standing, sitting, sitting laid back, lying flat)
-extreme temperature and rapid temperature changes
There is no necessity to perfectly measure an aspect (the measure just has to be better than deciding by coin flip), nor do you need to have something for all or even most aspects; summing together whatever of these silly tiny things you implement badly will make the overall effect more realistic and sophisticated than the parts.
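A minimal sketch of the summing idea; the sensor names, weights and the 0-to-1 "droop" scale are made up.

```python
# Minimal sketch of the summing idea above. Each reading is assumed to already
# be normalized to 0..1; the weights are arbitrary placeholders.
def eyelid_droop(readings: dict, weights: dict) -> float:
    """Weighted sum of whatever badly-measured aspects happen to be available,
    clamped to 0 (wide open) .. 1 (fully closed)."""
    total = sum(weights[k] * v for k, v in readings.items() if k in weights)
    return max(0.0, min(1.0, total))

weights = {
    "low_battery": 0.4,      # internal stimulus
    "darkness": 0.3,
    "quiet_for_a_while": 0.2,
    "lying_flat": 0.3,
    "sudden_noise": -0.6,    # negative weight: snaps the eyes open
}

readings = {"low_battery": 0.8, "darkness": 1.0, "sudden_noise": 0.0}
print(eyelid_droop(readings, weights))   # 0.62 -> eyelids mostly closed
```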
>>17457 Excellent post Anon, thanks.
>>17457 The uncanny valley video here >>10260 describes the differences in approaches well. There are two problems to solve:
1. How do you make something emotional?
2. How do you make emotions realistic?
In any case, I wrote this up: https://colab.research.google.com/drive/1B6AedPTyACKvnlynKUNyA75XPkEvVAAp?usp=sharing
I have two extremes on that page. In the first cell, emotions are described with text, and they can be arbitrarily precise. In the second cell, emotions are described by a few measures that can be added. There are different advantages to each. If there are a fixed number of emotions, text-based emotions would be low complexity, easy to specify, and easy to test. If there's a continuum of simple emotions, measure-based emotions would be low complexity, harder to specify, and easy to test. If there are complex emotions, text-based emotions would be high complexity, easy to specify, and hard to test. It might not matter which approach is taken to start with since it seems possible to hybridize the two approaches. "On a scale of [...] how well does this statement match your feelings on this [...] event?" As a result, it should be possible to start with one approach, then later get the benefits of the other approach.
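Not code from the linked notebook, just a minimal sketch of hybridizing the two approaches by mapping between emotion labels and (valence, arousal) measures; the coordinates are made up.

```python
import math

# Made-up circumplex coordinates (valence, arousal) for a few labeled emotions;
# the numbers are illustrative, not taken from the linked notebook.
LABELS = {
    "joy":     ( 0.8,  0.5),
    "anger":   (-0.6,  0.7),
    "sadness": (-0.7, -0.4),
    "fear":    (-0.6,  0.6),
    "neutral": ( 0.0,  0.0),
}

def label_to_measure(label: str):
    """Text -> measures: look the label up on the circumplex."""
    return LABELS[label]

def measure_to_label(valence: float, arousal: float) -> str:
    """Measures -> text: pick the nearest labeled emotion."""
    return min(LABELS, key=lambda k: math.dist(LABELS[k], (valence, arousal)))

print(label_to_measure("fear"))      # (-0.6, 0.6)
print(measure_to_label(0.7, 0.3))    # "joy"
```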
>>17463 replikaAI does something similar with CakeChat (which has been linked in here via Luka's GitHub)
>Training data
>The model was trained on a preprocessed Twitter corpus with ~50 million dialogs (11Gb of text data). To clean up the corpus, we removed URLs, retweets and citations; mentions and hashtags that are not preceded by regular words or punctuation marks; messages that contain more than 30 tokens.
>We used our emotions classifier to label each utterance with one of the following 5 emotions: "neutral", "joy", "anger", "sadness", "fear", and used these labels during training. To mark-up your own corpus with emotions you can use, for example, DeepMoji tool.
>Unfortunately, due to Twitter's privacy policy, we are not allowed to provide our dataset. You can train a dialog model on any text conversational dataset available to you, a great overview of existing conversational datasets can be found here: https://breakend.github.io/DialogDatasets/
>The training data should be a txt file, where each line is a valid json object, representing a list of dialog utterances. Refer to our dummy train dataset to see the necessary file structure. Replace this dummy corpus with your data before training.
