(cont & final)
>Excuse me if I've misunderstood topology and their transformations
It's fine, I think I understand what you mean. Normally, topology only tells you which things are connected to which other things. Density is usually associated with measure spaces or distributions. Flatness and curvature are usually associated with geometry. So the intuitive pictures we have in mind may be different, but most of what you described makes sense for topologies with or without measures and geometries. The only part that requires more than topology is about reinforcing a conscious thread (enlarging a peak), which would require a measure space. In machine learning, it's pretty common to use measure spaces (for probabilities) and geometries (for derivatives) to picture these things anyway, so it's not confusing at all to me.
I think one difference in how we're thinking about this is that, when I say "landmark", the picture I have in mind isn't analogous to a point on an electron cloud. It's analogous to the electron cloud itself. Sometimes the "cloud" might get reduced down to a single point, but usually that doesn't happen. So if the conscious is traversing a topological space, it's not walking along the space, it's shifting between different subspaces within that topological space.
When I think of the conscious picking a path from a pathset provided by the subconscious, what I imagine is this:
- The subconscious has an overall space it's working within.
- The subconscious picks out a bunch of (potentially overlapping) subspaces that seem interesting.
- The conscious picks one or more of those subspaces.
- The subconscious expands on that choice by finding new interesting subspaces within the (union of the) selected subspaces.
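The loop above can be sketched in code. This is only a toy illustration of the process as described, not a claim about how minds actually work; every name here (propose_subspaces, conscious_pick, traverse) is made up for the sketch, and the "interestingness" of a subspace is faked with random sampling.

```python
import random

def propose_subspaces(space, n_candidates=3):
    """Subconscious: carve out a few (possibly overlapping) subsets
    of the current space that 'seem interesting'. Interestingness is
    faked here by random sampling."""
    items = sorted(space)
    k = max(1, len(items) // 2)
    return [set(random.sample(items, k)) for _ in range(n_candidates)]

def conscious_pick(candidates):
    """Conscious: choose one or more of the proposed subspaces.
    Here it just picks one at random."""
    return random.sample(candidates, 1)

def traverse(space, steps=3):
    """Alternate between the two roles: the subconscious proposes,
    the conscious selects, and the subconscious then expands within
    the union of the selected subspaces."""
    current = set(space)
    for _ in range(steps):
        if len(current) <= 1:
            break
        chosen = conscious_pick(propose_subspaces(current))
        current = set().union(*chosen)  # union of the selected subspaces
    return current
```

Note that the conscious never "walks along" the space here: each step replaces the whole working subspace with a smaller one, which matches the shifting-between-subspaces picture rather than the point-on-a-path picture.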
>attach a feeling to something in order to process it
I think we're thinking the same thing here. Tying it back to vision: the data coming into our eyes consists of only colors, but we often think of objects as being defined mostly by their shapes. The colors provide the cues we need to infer both shapes and the context (lighting), and to a lesser extent, the colors themselves provide some final cues for us to identify objects. We have landmarks in the space of objects by which we recognize objects through all of these things: shapes, context, and colors, and we associate those landmarks with language. For us to be able to process an object, we need to process the landmark associated with that object. That happens when the conscious "expands" on that landmark by focusing on its subspaces. (A subspace here would be, e.g., the object in various contexts, taking form in various shapes, and being recolored in various ways.)
All of this begins with colors that come in through our eyes, and a color is just a "vision feeling". There should be a similar process going on for all feelings, including "emotion feelings".
I actually suspect that ethics and morality aren't foundational, and that they're derived from something else. I think that's why ethicists don't seem to come up with things that become widespread and uncontested, which is something most other academic fields seem able to do. People's sense of right and wrong seems to change with time. I suspect what's more important is that there's some degree of agreement in what narratives people ascribe to the world and to the roles people can play within those narratives. That gives people a common basis for discussing actions and outcomes: they can say that things are right or wrong in terms of the stories they're acting out.
Here's one way to identify ethical compatibility: you can rank stories in terms of which story worlds you would prefer to live in. A robowaifu would be a match for you (in terms of ethics, at least) if and only if your rankings and hers converge "quickly enough" (which depends on how much patience you have for people with temporarily-different ethics from you).
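One way to make "converge quickly enough" concrete: compare the two rankings at each point in time with a Kendall-tau-style pairwise-disagreement count, and call the pair compatible if the distance reaches zero within your patience window. This is a toy sketch; the story worlds, the rank-distance choice, and the patience threshold are all illustrative assumptions, not anything from the original argument.

```python
def rank_distance(a, b):
    """Fraction of item pairs the two rankings order differently
    (a Kendall-tau-style distance). Both rankings list the same
    story worlds, best first."""
    pos_a = {world: i for i, world in enumerate(a)}
    pos_b = {world: i for i, world in enumerate(b)}
    disagreements = pairs = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            x, y = a[i], a[j]
            if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0:
                disagreements += 1
            pairs += 1
    return disagreements / pairs if pairs else 0.0

def compatible(history_mine, history_hers, patience, tolerance=0.0):
    """Compatible iff the rankings agree (distance <= tolerance)
    within `patience` time steps. `patience` is how long you'll
    tolerate temporarily-different ethics."""
    for step, (mine, hers) in enumerate(zip(history_mine, history_hers)):
        if rank_distance(mine, hers) <= tolerance:
            return step <= patience
    return False
```

The `tolerance` knob covers the case where you don't need exact agreement on the ranking, just agreement on most pairwise comparisons of story worlds.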