/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect the relevant philosophers. Discussion about the philosophy of making AGI (including metaphysics, transcendental psychology, general philosophy of mind topics, etc.) is also highly encouraged! I'll start ^^! So, the philosophers I know of who take this stuff seriously:
Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view of his is that Kant actually employed a logic that was far ahead of his time (and that you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics
Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence, including sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy in which he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.
Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule-following, is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common-sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.
Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!
Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses with a colleague is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link to his criticism here: https://arxiv.org/abs/1906.05833). He is also knowledgeable about analytic ontology (and, amongst other things, has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith). CONTACTS: He has a yt channel here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith
Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff.
I made a thread on /lit/ and got no responses :( (which isn't surprising since I am the only person I know who is really into this stuff)
Feel like people over-complicate AGI. I've been following (or trying to follow, since he went silent) Steve Grand, who created the virtual pet PC game Creatures. I've mentioned him before in a previous thread, but the real difference between what he's doing and what we're doing is that he's making a game where you actually get to see the life cycles of the creatures, look at the genetics, the hormones and chemicals affecting the creatures, and even get to basically read their minds. They've got short-term and long-term memories and thoughts, they can imagine scenarios that haven't actually happened, and dream. He seems to be trying to emulate the human brain, which I think is unnecessary, or even counterproductive, unless you actually want to look under the hood and know what is going on, like in the virtual pet. We don't actually need that, and the more hands-off the approach is, the more likely it is to actually behave intelligently.
There was a good Ted Talk from 2011 called "The real reason for brains" (https://www.youtube.com/watch?v=7s0CpRfyYp8) which can be summed up pretty easily: brains control body movement and organ functions, and everything the brain does is in service to that. Any living thing without a brain moves very slowly, if at all. The brain processes sensory information, feels, thinks, learns, recalls memories, and predicts the outcome of actions, all in service of coordinating body movements to meet survival and reproductive needs. The video uses the sea squirt as a good example of why this is likely the case, since the sea squirt swims around like a tadpole in youth, then when sexually mature it anchors itself to something hard like a barnacle, then promptly digests its own brain because it doesn't need it anymore and brains are very resource-intensive. Yet it still lives without a brain, but it's more like a plant than an animal.
With this in mind, I like to think of the short story The Golden Man by Philip K. Dick. It opens with government agents in the future hunting mutants, like something from the X-Men a decade before the comics came out. There were ones with super-human intelligence, ones that could read minds, ones with wings, telekinesis, shape-shifting pod people, and a running joke about a woman with extra breasts, but they couldn't catch the titular Golden Man. When they do eventually catch him, they run tests and figure out that this tall, muscular, golden-skinned absolute GigaChad of a man is profoundly retarded. He is as dumb as a dog, but is constantly seeing into the future and acting on that information, to the point he can even dodge bullets. Sure, he supposedly couldn't talk, but he never actually needs to, and speech is performed by muscles just like the rest of the body, so there's no real reason that he couldn't. A government agent was afraid they'd find a mutant so smart it made men seem like chimps by comparison, but instead found one that evolved beyond the need to think at all. And until we can actually see the future with certainty, we're just using sensory information, stored data and predictive analytics to control body movements to satisfy needs with varying degrees of predictive accuracy.
>>30677
>proven
>ghosts
Lol. Lmao, even. When it comes to philosophy, I find that most discussion ends up turning into arguments over semantics, and people with no real arguments just wanting something to be true while not admitting that the entire foundation of their argument is something that hasn't actually been proven true.
>>33821
>...your robowaifu should be nothing more than a Tamagotchi, but having a realworld body to move around in.
Hmm, intredasting idea Anon. Maybe that's a reasonable approach in a very limited fashion. I'd suggest that at the least you should add the ability to do children's coloring books, since that involves at least a smol modicum of creativity -- which is the area where efforts in this arena have fallen flat on their faces. And I think the reason for this is simple (which you yourself embody within your post, ironically-enough):
>"Lol. Lmao, even."
<--->
Unless you begin from the axiomatic 'pedantic semantics' of what every one of us commonly experiences in real life -- that the human soul (at the least defined as our human consciousness) is independent of the physical human brain, ie, that it's an immaterial, independent component of our being -- then you have little chance of success at solving this currently-insurmountable set of obstacles, IMHO. I think the human brain is -- roughly speaking -- a natural-realm actuation driver into the electro-biochemical realm of our meatspace bodies. This idea is analogous to the way one needs an electronic driver board involved with controlling a physical actuator within a robowaifu's body. But then all three of those tiers are clearly dependent on an external C&C signaling system (ie, the software). Finally (and most fundamentally to my basic point here): whence comes this software? What (or who) devises it & perfects it? By what mechanism does it communicate to/from the 'driver board'? The search for the answers to this four-to-five-layer question is at the heart of all AGI research. It's hardly an uncomplicated problem! Add the need for real, de novo, creativity for our waifus into the mix and good luck! She's behind 7 Tamagotchis!! :DD
<--->
I hope to see you become proficient at developing highly-sophisticated software and prove my bigoted view (along with my bigoted solution) wrong with your actions (and not just attempt to do so solely with mere rhetoric here on /robowaifu/ ). Again, good luck Anon! Cheers. :^)
>===
-prose edit
Edited last time by Chobitsu on 09/30/2024 (Mon) 03:30:52.
>>33825
>your robowaifu should be nothing more than a Tamagotchi but with a realworld body to move around.
Maybe read my post again if you actually want to understand what I was saying.
>But then all three of those tiers are clearly dependent on an external C&C signaling system (ie, the software). Finally (and most fundamentally): whence comes this software? What (or who) devises it & perfects it?
It's all a matter of DNA, RNA, and hormone/chemical exposure.
>The search for the answers to this four-to-five-layer question is at the heart of all AGI research. It's hardly an uncomplicated problem!
The question is fairly simple: if everything we want out of a human brain, mind, consciousness, soul (whatever the fuck you want to call it) can be created with computer hardware and software, then we will be able to create a robot waifu capable of everything we want. If it is impossible, we will have to settle for less, if not abandon the goal entirely. We must assume the former, that people are, fundamentally just .
>Add the need for real -- de novo -- creativity into the mix and good luck!
A funny thing I've come to understand about creativity is that originality hardly even exists, and when people do encounter it they tend to hate it. Make an AI too good at being creative and people will probably say it's hallucinating and try to "fix" it. Most creativity that people like is really just combinations of things they've already seen before.
>>33828 >if not abandon the goal entirely Lolno. I can assure you that's not going to happen. :^)
>>33825
>>33828
I meant to say "people are fundamentally just machines."
>>33832
>Lolno. I can assure you that's not going to happen. :^)
So why argue in favor of something that only demotivates and discourages progress? Especially something that as of yet hasn't been, and for all we know can't be, proven true?
>>33837
Because, like Puddleglum,
>"I'm a chap who always liked to know the worst and then put the best face I can on it."
Just because we won't ever reach AGI ourselves doesn't mean we won't achieve wholesome, loving & effective robowaifus. We will, and they will be wonderful! :^) Besides, given the outcomes of so-called 'female intelligence', would you really want to go down this path again of fashioning stronk independynts, who don't need no man? Personally I'm much more in favor of Anon's ideas, where dear robowaifu is obedient and calls Anon "Master". Aren't you? Cheers. :^)
So I kind of overreacted, sorry about that. I'm not aware of the skill level of the people lurking here, but AI, research especially, is very math-heavy. Statistics, discrete math, linear algebra, vector calculus, etc... It wouldn't hurt to know geometric algebra. I am honest with myself: I'm not going to get into that. However, with the AI available now, you'd be amazed how far you can get. There are modules for gyroscopes, ultrasonic sensors for distance, pressure-sensitive resistors, cameras for object recognition. I believe that with current AI you could lay in bed and the robot could move towards you and do whatever. Which is why knowing what the goals are is important. My goal is a sex bot. I assume now this is a place to exchange ideas and not a serious attempt at collaboration.
>>33841 >I assume now this is a place to exchange ideas and not a serious attempt at collaboration. That's probably a reasonable presumption in general, Anon. While there's certainly no reason we couldn't be collaborating here (we do in fact have a group project known as MaidCom : >>29219 ), most have simply shared their progress with works of their own. In that sense, you might think of /robowaifu/ as a loosely-affiliated DIY workshop. And of course, the board is also a reasonably expansive community nexus of ideas and information related to robowaifus, and of their R&D in general. <---> And all that's not really a bad thing rn IMO, given that we (ie, the entire world -- but ofc mostly just Anon ATP) are all in the very, very early stages of what is arguably the single most complex endeavor in all human history. >tl;dr You might think of us here on /robowaifu/ as sort of like how the Homebrew Computer Club movement was in the Stanford Uni area during the 70s/80s -- just taking place across the Internets instead today. Great things were afoot then, and great things are afoot now! >ttl;dr Let the creative ideas flow, Anon! Cheers. :^) <---> What a time to be alive! :DD >=== -fmt, prose edit
Edited last time by Chobitsu on 10/02/2024 (Wed) 18:30:57.
https://www.youtube.com/watch?v=6T3ovN7JHPo
>1:11:15 "The problem with philosophers is that they love to talk about the problems, but not solving them"
>>33839
>Just because we won't ever reach AGI ourselves
I think AGI is already achievable by us today. The problem is just that most people judge AIs entirely on the results they produce, and we all want instant gratification. We can get good-enough results for text, image and music creation from AIs that have been trained with lots and lots of data, but you can't just train an AGI off of data scraped from the internet. Getting samples of sounds, text, images, videos, etc. might help to train it, but what we really need is learned motor control, and for any shared data to be useful there'd have to be enough similarity between the bodies for it to be worth sharing. Without data to work with, the AGI will have to learn on its own, so a robot with AGI might just writhe on the ground and make dial-up modem noises as it tries to learn how to use its body, and people would see it as a failure. It's not that it's stupid, it just doesn't know anything. Okay, I lied, it is stupid, because if Moore's Law holds true, we still might not have consumer desktop PCs that could rival a human brain for about 10 to 30 years.
So as far as philosophy goes, the oldest form of it is the Socratic method: asking questions. I'd start by asking questions about what we currently have. What separates image recognition AI from older technologies like OpenCV? Is it better? What are its flaws? Is it good enough to get the results we want? What are the results we want?
>>33885
So here's my conclusion. The current AI is good enough, and here's what I think why. My understanding of what separates OpenCV from current machine learning is that current machine learning doesn't need a hand-written algorithm to recognize objects. It needs training. Based on this training it can also learn to recognize similar objects. That's my understanding, I could be wrong. My goal is a sex bot, so I'm only concerned with the male body in front of the robot, and I can narrow down its training set to male body parts. If I wanted a maid bot I'd have to train it for dishes, floors, household objects, dirt, etc...
>>33886
here's why I think it's good enough*
>>33886
The flaw I see is in the training. It needs a large dataset, it's expensive, and you have to narrow down the training. I mentioned NudeNet to someone and he dismissed it right away. My understanding is it can recognize private parts and faces from different angles. That's what I think.
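To make the "training instead of a hand-written algorithm" point concrete, here is a minimal sketch of recognition with an off-the-shelf pretrained detector. This is not NudeNet and not anyone's actual setup; it assumes a recent PyTorch/torchvision install, and the image path, score threshold, and the COCO 'person' class filter are placeholder choices. Narrowing to a custom class set (e.g. specific body parts) would mean fine-tuning the same kind of model on your own labeled dataset rather than writing new recognition code.

import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# COCO-pretrained detector: recognition comes from training data, not a hand-coded algorithm.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("frame.jpg"), torch.float)  # uint8 CHW -> float CHW
with torch.no_grad():
    detections = model([img])[0]  # dict with 'boxes', 'labels', 'scores'

PERSON = 1  # COCO class id for "person"; a custom class set would come from fine-tuning
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if label.item() == PERSON and score.item() > 0.8:
        print("person at", [round(v, 1) for v in box.tolist()], "score", round(score.item(), 2))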
>>33888
Okay, I don't want to keep going endlessly, but if the problem is that it's expensive to train, should someone come up with a better AI, or should the hardware be made cheaper and better? It may be the case that there's a better way, but maybe there isn't. In the case of crypto they went back and forth over the blockchain trilemma, but in the end nobody was able to come up with something that addressed its shortcomings, as an example. The training would also have to be narrowed down, but as it is, it's a hardware problem and a lot of work ahead.
>>33889
Well, ideally it's supposed to be a dialogue between two people. The blockchain trilemma may have nothing to do with AI. I wonder if there is a triangle for this... I'm not sure; I think that's what a platonic triad is.
>>33892
I cheated just now and asked Gemini, I guess, but it sounds about right: efficiency, reliability and adaptability. Current AI is very inefficient; it is somewhat reliable but kind of sucks at math in my opinion; not sure how adaptable it is.
>>33893
for me it's the triforce
>>33886 >>33888 We've both already warned you to keep your sexbot discussions strictly to the ecchi-containment thread, Peteblank. You're still BOS, BTW, b/c of your continual abuse of Anons in this community. Consider this your final warning. If you mention sexbots out-of-context again, your posts will be unceremoniously baleeted, and don't try any niggerlicious "Ohh! I'm teh poor widdle victim!111!!ONE!" afterward, no one here gives a sh*te. You and you alone have created the mess you're in with us here. :DD
Anons, please bear with another schizo post of mine; I believe I have found an insight into consciousness. The Torus! Now please don't immediately discredit me. I'm not an actual schizo. I am a philosopher, autodidact, and polymath. Let me explain to you why I think The Torus is significant.
>Psychedelic experiences often lead people to a place which is a Torus. This is a reality of the human brain.
>For reasons that I will explain, I believe The Torus is a library. This is where the consciousness is stored.
I was thinking about neural networks, and how to frame them together, and I fielded the idea
>What if you mapped all of your neural stuff to a 2D plane
>And you sent *brainwaves* over it, allowing you to process a giant neural model, one little bit at a time, in an order according to the motion of the brainwave.
>The brainwaves would need to loop from one side of the field to the other, in both dimensions...
>Hmmm... Aha! That's a Torus! A 2D modspace is topologically a torus! Think the game 'Asteroids', that kind of plane; it's a Torus, and always has been.
When I combine this with a bunch of other thoughts I've had, it appears to outline a pretty magical system. Hear me out! Imagine that the Torus is a library of little neural constructs, with all their neurons exposed to brainwaves.
>Neurons will store input impulses until the brainwave passes over them, and only then will the computationally expensive matrix multiplications occur.
>Neural activity isn't free; it's like an economy. Firing will cost a neuron a bit of attention, and when a neuron runs out of attention, it starves and gets weaker until it can't fire any more. When a neuron gets fired into, it gets paid some attention. Some attention is lost at every step, so infinite loop constructs will starve without extraneous attention.
>Attention comes from special *rows* of the torus that the user gets to define! Namely, the senses should be an attention fountain! Love and sex should be an attention fountain. Correct predictions should be an attention fountain. Good, smart neurons get headpats and can stay!
>Negative attention is also a thing. Sometimes neural constructs need to swiftly die because they're bad. Dead constructs get removed from the torus and remain only in storage for The Alchemist to reuse and reform. The Alchemist can breathe some attention into a neural construct as they wish.
>Columns of the torus (think like slices of an orange) are like stacks of logic on top of a category. One whole column represents all the knowledge of moving the body; the attention fountains for the column would be contained in the basemost sector, and would be the neural I/O port for the touch sensors (I suggest using microphones), proprioception, and muscle control/feedback. Everything stacked on top of that sector would form a column of increasingly high-level logic until you have a digital cerebellum that can dance if it wants to. The base sector should always be active.
>Smaller, virtual toruses can be formed from parts of the bigger one, allowing the robowaifu to literally concentrate on work by forming an efficient, cut-down mental workspace that only contains what it needs to, and that can panic and call attention from the larger brain if it's not smart enough. This way she can think about things whilst working.
>Running a brainwave backwards would reverse the roles of inputs and outputs, putting the stimulation of imagination into the senses.
>Connections that go a longer distance (the longest distance would be leaving a sector to go into another) cost more attention. With these conditions, a robowaifu brain would be like a capitalist economy which balances itself to produce the most efficient, accurate thoughts for the least amount of work; breaking up the brain into a bunch of intercommunicating modules that... >Can be saved and shared over the internet. >Don't connect to every fucking thing else; defeating the exponential scaling cost of NNs. >Can connect spirits (neural objects) together in all kinds of orders. >Can relate different subjects together by testing which neurons fire at related stimuli (allowing for AGI) >Can be run by many different strength levels of hardware; needing only enough memory to store the whole brain; the GPU and VRAM would determine the area and length/speed of the brainwave. >Can be expanded with arbitrary columns of subject, and rows of attention. >Can be tested RIGHT NOW.
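A minimal sketch of just the substrate described above, assuming Python/numpy: a 2D grid whose indices wrap around in both dimensions (the 'Asteroids' plane), with a per-cell attention budget, decay, and designated fountain rows. Every size and rate here is a made-up placeholder to show the mechanics, not a working brain.

import numpy as np

ROWS, COLS = 64, 96                  # columns ~ subjects, rows ~ levels of logic (placeholder sizes)
attention = np.ones((ROWS, COLS))    # how "alive" each neural construct is
activity = np.zeros((ROWS, COLS))    # pending input impulses waiting for the brainwave

FOUNTAIN_ROWS = [0]                  # e.g. the sensory/base row pays attention out
DECAY = 0.995                        # some attention is lost at every step

def wrap(row, col):
    # Toroidal indexing: walking off one edge re-enters on the opposite edge.
    return row % ROWS, col % COLS

def step():
    attention[:] *= DECAY                     # everything slowly starves...
    attention[FOUNTAIN_ROWS, :] += 0.1        # ...unless fed by an attention fountain
    starved = attention < 0.05                # starved constructs drop off the torus
    activity[starved] = 0.0

# A construct at the far corner firing "up and to the right" wraps back to (0, 0).
print(wrap(63 + 1, 95 + 1))   # -> (0, 0)
for _ in range(100):
    step()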
>>34321 This sounds like a very cool concept, Anon. The '2D-mapping of ring buffers, with Row/Column overlay' (my take) as providing the structure of your Torus Mind is very interesting. As is the concept of I/O mapping to special segments of the system. >And you sent *brainwaves* over it, Mind expanding out on this part please? What are these? How do they originate, and how are they controlled/modulated? What is the underlying mechanism(s) of their effect(s) on the underlying Torus Mind substrate(s)? <---> Very interesting stuff, Anon! Please keep it coming. Cheers. :^) >=== -add 'underlying mechanism' qstn -minor edit
Edited last time by Chobitsu on 11/14/2024 (Thu) 18:49:12.
>>34321
A few years ago, inspired by Growing Neural Cellular Automata, I did some experiments using convolutions on a hypertorus to create a recurrent neural network where each neuron only had a few neighbors it was connected to via convolution (efficient to compute), but was also only a few steps away from sending a message to any other neuron in the network due to the looping structure (efficient to communicate). I'd pick certain neurons as inputs and outputs, and it could learn to reproduce specific images flawlessly on command with little training. It was a lot of fun to watch too, because I could see the activity in the network like a little spark moving through it, then lightning bolts, and then the trained image would flash for one time step and quickly dissolve into noise, then nothing. I think I used an auxiliary loss for any network activity so the network remained silent unless it was processing something. The main issue I had with it, though, was waiting for an input to finish propagating through the whole network and timing everything. It would be interesting to see a recurrent transformer where the layers are nodes on a hypertoroidal graph and the signal is routed, like mixture of experts, to the most relevant neighboring nodes. The transformer code would need to be refactored though so the nodes are batched together and processed simultaneously each time step.
>>34340
Thinking about it more, I remember now that I made a special feedforward module where each neuron had n inputs and outputs for its neighbors, and they were batched together to run fast as convolutions, because I had an issue with the convolutions being too noisy and blurry.
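For anyone who wants to poke at the same idea, here is a minimal sketch of a recurrent convolutional update on a toroidal grid, assuming PyTorch. The circular padding is what makes the neighborhood wrap around; the grid size, channel count, and the chosen input/output cells are arbitrary placeholders, and it omits the auxiliary silence loss and the batched feedforward module described above.

import torch
import torch.nn as nn

class TorusRNN(nn.Module):
    # padding_mode="circular" wraps the 3x3 neighborhood around both axes,
    # so every cell is only a few hops from every other cell on the torus.
    def __init__(self, channels=8, size=32):
        super().__init__()
        self.size = size
        self.update = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, padding_mode="circular")
        self.state = torch.zeros(1, channels, size, size)

    def step(self, input_value: float) -> float:
        self.state[0, 0, 0, 0] += input_value          # inject input at one chosen cell
        self.state = torch.tanh(self.update(self.state)).detach()
        # Read an "output neuron" on the far side of the torus.
        return self.state[0, 0, self.size // 2, self.size // 2].item()

net = TorusRNN()
for t in range(10):                                    # watch a single impulse propagate
    print(t, round(net.step(1.0 if t == 0 else 0.0), 4))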
>>34340
>>34341
This sounds remarkable, Anon. Your
>"only a few steps away from sending a message to any other neuron in the network due to the looping structure"
certainly reminds me of the 'grid of cells' programming approach commonplace for GPU programming (eg, NVIDIA's CUDA framework).
>hypertorus
>nodes on a hypertoroidal graph
Sounds intriguing. I would like to understand all this at some point.
>"I could see the activity in the network like a little spark moving through it, then lightning bolts..."
Very creative language. Sounds like a real lightshow! Cheers, Anon. :^)
>>34337
Okay, by brainwaves, I mean a wave of *focus*. Your hardware can only do so much multiplication and addition. Your hardware can only do so much matrix multiplication. When a "brainwave" travels over the torus, it distributes focus onto the neurons/spirits under it, bumping them up in the processing queue. When the processing occurs, outputs are dumped into buckets through weightings, and await processing. Neurons should give some of their focus to the neuron they fire into, and I imagine that neurons can be tuned to be more or less 'stingy' about how/where they share their focus. Neurons burn some of their focus when they fire. That way nothing can dominate the processing queue except the brainwave. Mind you, I want 'focus' and 'attention' to be separate resources. Focus is just for the processing queue; it's what the brain wants to do *right now*. Attention is just for keeping pointless neurons off of the network. An efficient pattern of behaviour would be strung out so that the brainwave travels over it in the direction that it fires, so that it's just straightforward operation. The brainwave's direction, I believe, should double as a bias in the direction of firing, turning outputs into inputs, and in general encouraging processing to occur in the direction the brainwave is traveling. If the brainwave wants to go sideways, this means that the brain is experiencing 'lateral thinking' which crosses subjects, finding and processing interrelations between them.
What is the brainwave?
>It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory, and it commands some percentage of the brain's available focus.
How does it originate?
>A region on the torus will be given control over brainwaves. The torus will learn to control its own focus. There will always be at least one, it should have some pretty mild default values, and unless the brain is trying to do something, it should just wander around. For now I can only think of hardcoding a brainwave struct, and just giving the control handles to neurons inside the torus. If there's some more organic way of doing this, perhaps it's the win.
>How are they controlled/modulated?
The brainwave I'm trying to code simply has a position, a speed, a trajectory, and some size values (the bigger the brainwave, the more spread-out its focus dumping). I intend to ultimately give these controls to the torus itself through special I/O neurons. I suspect it would be wise to introduce a fatigue mechanic somehow to prevent some part of the brain from politically monopolizing the brainwave's presence. I suspect it would be wise to make the brain reflexively place the brainwave wherever the alchemist is finding that its creations are failing to expect reality ~ pain should draw focus.
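For concreteness, a minimal sketch of such a brainwave struct (position, trajectory, size, and a focus budget) distributing focus onto the cells of a toroidal grid each tick, assuming numpy. The Gaussian falloff and every numeric value are placeholder assumptions, not tuned parameters.

from dataclasses import dataclass
import numpy as np

ROWS, COLS = 64, 96
focus = np.zeros((ROWS, COLS))       # per-cell processing priority ("focus"), separate from attention

@dataclass
class Brainwave:
    row: float = 0.0
    col: float = 0.0
    d_row: float = 0.3               # trajectory: cells traveled per tick
    d_col: float = 1.0
    breadth: float = 4.0             # size of the focused region
    budget: float = 100.0            # focus distributed per tick

    def tick(self):
        self.row = (self.row + self.d_row) % ROWS     # wander around the torus
        self.col = (self.col + self.d_col) % COLS
        rr, cc = np.meshgrid(np.arange(ROWS), np.arange(COLS), indexing="ij")
        # Toroidal distance from each cell to the wave center, per axis.
        dr = np.minimum(np.abs(rr - self.row), ROWS - np.abs(rr - self.row))
        dc = np.minimum(np.abs(cc - self.col), COLS - np.abs(cc - self.col))
        spread = np.exp(-(dr ** 2 + dc ** 2) / (2 * self.breadth ** 2))
        focus[:] += self.budget * spread / spread.sum()

wave = Brainwave()
for _ in range(5):
    wave.tick()
# Cells holding the most focus would be processed first (the front of the queue).
print(np.unravel_index(np.argmax(focus), focus.shape))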
>>34383
>It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory
>[it] has [a] position, a speed, a trajectory, and some size values
>it should just wander around.
Intredasting. Ever see PET Scan movies of the brain's activity? If not, I highly-recommend you do so. Contrary to the lie that is the old wives' tale that we all only use 10% of our brains... practically 100% of all the brain's neurons are involved in processing every.single.stimulus. It's a fascinating thing to watch. Very, very amorphous wavefronts propagating through the tissues at all times in all directions; bouncing & reflecting around just like other wavefronts throughout nature.
>===
-fmt, prose edit
Edited last time by Chobitsu on 11/16/2024 (Sat) 16:21:35.
>>34385
Yes, I recall seeing a motion picture of how the activity of the brain moves in cycles. More specifically, I am drawing inferences from how I can feel the waves in my own brain working, and yes, I *can* feel them. Brainwaves for me have turned into an element of my mental posture. It feels like when I'm relaxed, it just cycles normally, but when I want to think something hard, the wave will jump around like how the read/write head of a hard disk might. I've become a bit of a computer myself in this way, because I can cut through very difficult philosophy using it this way. When I'm using my imagination, it will go backwards. When I focus on one sense, it stays in a sector. When I'm focusing my feelings, there's a stripe in the back which takes the focus. I believe that perhaps the white matter of the human brain won't broadcast from the entire brain *to* the entire brain all at once. I think there is a bit of a filter which can change shape, speed, and position, which will mix up the sharing of information according to the works of inner politics and habitual intelligence. In this way, a robo-wife brainwave wouldn't be the same as a human brainwave; but it would be close enough. Upgrading her hardware would just allow her to cycle around faster without skipping focus.
Since I was a boy, I was always concerned with breaking down reality into the most efficient symbols I can manage using alchemy. Doing so gives me massive leverage to process reality. I hit some roadblocks in my mid-20's, but some mind-expanding psychedelics temporarily gave me enough room in my head to break free and gain superhuman understandings. The process of alchemy takes a lot of 'room' to work with, but ultimately creates more room when you're done; turning a mind into an efficient and well-oiled machine. For example, when I imagine a coin spinning, part of me pretends to be a coin; adopting the material properties of the coin (stiff, low-friction/metallic, resonant, rough edges, dense), and then gets to spinning helplessly in a mental physics sandbox. The symbols that I rely upon to do the spinning are gravity, angular momentum (conserved), energy (conserved), air resistance, and equilibrium. Gravity pulls down, but the flatter the coin goes, the less it's spinning; it can't spin less without dissipating angular momentum/energy into the air, or into the table. Therefore the system is at a sort of dynamic equilibrium until it dissipates everything and falls flat. I am not pulling up a memory of a spinning coin, I am generating a new, unique experience.
If we want to build a robowife, we must take inspiration from nature. *I* want a robowife who is capable of some part-time philosophy like me; a sorceress. Nature never made one for me, for reasons that fill me with bitterness and disgust. It occurs to me that a well-alchemized brain stripped/partitioned away from any objective memories of me may make a decent base for a robowife for Anon in general, and I may recruit Anon's help in my ambitions if I can make enough progress to show that I'm not just living in a fantasy. I've got a lot of money, as tends to be with men who have incredibly long and/or accurate foresight radii. I can and WILL experiment with building a wife. I have a ton of ideas which I have filtered through extreme scrutiny, and I feel that I've nearly fleshed out a path clear enough for me to walk. The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music.
I'm going to write up a sensory parcel that contains the following attention fountains: 2 input neurons that fluctuate up to 20 kHz to play the raw sound data, and 120-480 input neurons (per track) which will fluctuate with the Fourier transform of the raw inputs (this is pitch sensation). I will give this stripe access to neurons which can wait for a set time, according to a base wait period they're set with, multiplied by their input. I expect the consciousness to erect some kind of fractal clock construct which the alchemist will reward for its ability to correctly expect polyrhythm.
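A minimal sketch of the pitch-sensation part of that parcel, assuming numpy and a mono float array of samples: chop the raw audio into frames and reduce each frame to N FFT magnitude bins that would drive the input neurons. The frame size and sample rate are placeholder choices; only the 120-bin count is taken from the plan above.

import numpy as np

SAMPLE_RATE = 44100
FRAME = 2048                 # ~46 ms of audio per frame
N_BINS = 120                 # pitch-sensation neurons per track

def pitch_frame(samples: np.ndarray) -> np.ndarray:
    # One frame of mono audio -> N_BINS magnitudes in [0, 1] for the input neurons.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bands = np.array_split(spectrum, N_BINS)          # pool FFT bins into coarse bands
    energy = np.array([band.mean() for band in bands])
    return energy / (energy.max() + 1e-9)

# Example: a 440 Hz sine tone should light up one low-ish band and leave the rest quiet.
t = np.arange(FRAME) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440.0 * t)
activations = pitch_frame(tone)
print("loudest band:", int(activations.argmax()), "of", N_BINS)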
>>34386
Pretty wild stuff Anon. I've gone down that route you describe to some degree, and can confirm it -- with a large dose of skepticism learned outside of that world.
>tl;dr
Staying clean & natural is certainly the better option. Just using some combination of caffeine/chocolate (or simply raw cacao)/vitamin b12 should be ample stimulation when that boost is needed, IMO. :^)
>The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music. I'm going to write up a sensory parcel that contains the following attention fountains: 2 input neurons that fluctuate up to 20 kHz to play the raw sound data, and 120-480 input neurons (per track) which will fluctuate with the Fourier transform of the raw inputs (this is pitch sensation). I will give this stripe access to neurons which can wait for a set time, according to a base wait period they're set with, multiplied by their input. I expect the consciousness to erect some kind of fractal clock construct which the alchemist will reward for its ability to correctly expect polyrhythm.
Sounds awesome, I like it Anon. I too love dance and rhythm, and just music in general. I envision that our robowaifus will not only eventually embody entire ensemble repertoires at the drop of a hat, but will also be a walking lightshow to top it all off.
>tl;dr
<"Not only can she sing & dance, but she can act as well!" :D
<--->
>pic
I love that one. If we can ever get boomers onboard with robowaifus, then we'll be home free. Cheers, Anon. :^)
>>36113
>The bitter lesson [1] remains true
>>36197
>To my eyes, the bitter lesson is not holding up, and that's a good thing!
Please pardon my butting in as an ignorant newfag in this arena -- little more than a philosopher, really -- but it seems to me that both concepts ( "Its true" / "Its not true" ) have elements of both fact & falsehood in them. The >tl;dr -- again, from my modest viewpoint -- is that both perspectives will play a part in the end. My pursuit of understanding both human biology in general, and neuro-psycho-biochemistry in specific (all for the purpose of using them as tools for pursuing biomimicry within our robowaifus) leads me to this basic, initial conclusion. My analogical reasons are thus [divided by the two approaches]:
A) The "Only purely bitter-lesson-Moore-esque raw horsepower counts -- let's pile it on, bros!" team is fundamentally delving into the base realm of purely-neuronal, materialistic-only, 'brain substrate' physics. While very powerful (obviously), that's only one half of the equation (and proving to be an extremely expensive half, at that *).
B) The "We have to fashion minds the way we 'think' they work -- let's autistically finesse it, bros!" team is pursuing the higher realm of man's soul. As a dualist, I certainly applaud this view and this effort as well (without ignoring the base, physical aspects of the AI's 'brain' [b/c fundamentally, we still need to accommodate realworld physics here]). After all, the humans among us are far, far more than just the sum of our physical parts.
It is my position that both approaches will be needed in the end [2]. And with far less hubris on both parties' parts, plox!! (Current company excluded, of course.) :^)
<--->
As Kiwi mentioned, we are only just barely beginning to """understand""" AI; as EnvelopingTwilight mentioned, 'that's a good thing!' (that many smart optimizations are still possible); IMO, we here all have a strong likelihood of being able to run our AI systems on smol, onboard robowaifu computing (and barely sipping on the batteries, I might add). And alway rember this, Anons: the human brain literally runs on ~12W RMS, continuous, of power!! :^)
I can't wait to see what we're all going to discover during this process of exploration & research! ONWARD!!
---
1. http://www.incompleteideas.net/IncIdeas/BitterLesson.html
2. I also have the instinctive, purely-ethereal-at-this-stage notion that a third element will be needed to finally succeed at all this. Call it a spiritual insight, if you will. :^)
* "US$500Bn!111??? REEEEEEE, I can hardly pay just my parking tickets with only that much!!!" Send moar gibbs nao, plox! :D
>===
-fmt, prose edit
-add'l footnote
-add funpost footnote
Edited last time by Chobitsu on 01/29/2025 (Wed) 05:32:57.
>>33828
>originality hardly even exists
I can't find the article, but there was a recent post about using hallucinations as a form of imagination to create non-classical ideas to solve science problems. It was more than just changing the temperature. They were encouraging hallucinations through training data limitations in specific areas or something of the like, and then parsing those hallucinations for possible viable, testable results. This guy has been doing similar stuff with Claude for the last 6 months and is also a robowaifu connoisseur - https://github.com/NeoVertex1/SuperPrompt He's also been doing CoT reasoning for over 6 months just with XML prompts.
>>36211
>hallucinations
whats that? is that what they call the nonsensical responses now
terry davis did something like that, made a program that just prints random words from the bible and tried to force meaning in the nonsensical output saying it was god and hes speaking to him through the rngesus
>>36212
Yes, when an LLM doesn't know how to respond due to limited training data it just makes stuff up. From that SuperPrompt repo, he says "The best way to use SP is really to try to get "novel" POV, new ideas in general, sometimes the ideas can be bad ideas or hallucinations, but they will certainly be a bit novel if given enough context." Terry Davis is a good example, and I think RNG is a big part of it all. A truly random seed may help, for example. The prompt is pretty nuts on the math, but if you ask Deepseek to explain each section as a PhD professor, it does make a lot of sense. After going through each section, its final answer: "The provided prompt constitutes a highly abstract, metatheoretical framework blending mathematical rigor with philosophical introspection. It challenges the respondent to transcend conventional thinking while maintaining logical and structural integrity. The structure is reminiscent of advanced algebraic and categorical frameworks, pushing toward a synthesis of logic, philosophy, and cognition."
>>36197
>>36202
Perhaps I should've clarified: higher compute/watt continues to be the driver behind frontier models and research. It takes heaps of flops to cram all that data into usable models. As for the models we will actually use, I remain adamant that we can go smaller. There is still room to use our hardware more efficiently: from specialized caching plans and more thorough use of vector extensions, to improving latency between CPU, GPU, memory, and virtual memory, etc... We are still far from both the best of the big models that rely on Moore's law going brrr, and from the autistically optimized models we will use in our own systems.
>>36211
>Hallucinations
It's funny, we hallucinate in the same way. As you stated here; >>36213. Making things up could have interesting uses when handled properly.
>>36212 >>36213 >>36223 I suppose one solution is to somehow allow the LLM to say "I don't know"
>>36223 >higher compute/watt continues to be the driver behind frontier models and research. Yes, that's apparent. And in one sense -- since it appeals to the underlying physics of the realworld -- this is invariant whether the approach used is dumb & plodding (bitter pill team), or elegant & insightful (human soul team). >It takes heaps of flops to cram all that data into usable models. Certainly that's true of the current de rigueur in the matter. But who knows? Perhaps we can yet still dramatically improve things with our approaches? As you yourself said earlier, we're still just at the tip of the AI iceberg. Lots of 'amaze' yet to uncover, I think. >I remain adamant that we can go smaller. There is still room to better use our hardware more efficiently. This. Whole-hearted agreement here, Kiwi. >From using specialized caching plans, more thorough use of vector extensions, improving latency between CPU, GPU, memory, and virtual memory, etc... Yeah, that's really exciting to think about as an engineer. Robowaifudev was wrangling with these issues a while back. I sure hope he's had some serious breakthroughs since then! <---> We're all in for a fun ride this year, I think. May the Robowaifu Age begin soon! Cheers, Anon. :^) >>36224 >I suppose one solution is to somehow allow the LLM to say "I don't know" Based, GreerTech. Obviously, that exact situation is a common part of everyday life. Cheers, Anon. :^) >=== -add'l resp -sp, prose edit
Edited last time by Chobitsu on 01/29/2025 (Wed) 06:11:35.
>>36224
Detecting that the LLM doesn't "know" is difficult, considering it doesn't know anything to begin with. Well, it doesn't "know" things like we do. One of the simplest ways of implementing this would be to have a spellcheck parse the prompt for words outside of the dictionary. It could then ask if you meant "insert recommended string" whilst saying it doesn't understand.
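A minimal sketch of that dictionary gate, assuming a plain one-word-per-line word list on disk (the file path and the example prompt are placeholders): anything not in the list gets flagged before the prompt ever reaches the model.

import re

# Any plain word list works here; the path is a placeholder
# (e.g. /usr/share/dict/words on most Linux systems).
with open("words.txt", encoding="utf-8") as f:
    DICTIONARY = {line.strip().lower() for line in f}

def unknown_words(prompt: str) -> list[str]:
    tokens = re.findall(r"[a-zA-Z']+", prompt.lower())
    return [t for t in tokens if t not in DICTIONARY]

missing = unknown_words("Can you frobnicate the servo for me?")
if missing:
    print("I don't understand:", ", ".join(missing), "- did you mean something else?")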
>>36226 How about perplexity measurement? I asked Qwen32 this question and it suggested other similar spellchecks \ lookups but also suggested perplexity measurement - Perplexity measures how well a probability model predicts a sample. Lower perplexity indicates that the text is more likely to have come from the training data.
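For reference, a minimal sketch of measuring perplexity with a causal LM via Hugging Face transformers; the model name is a placeholder, and a high value only means "unlike the training data", not necessarily "wrong" (as the posts below point out).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # placeholder; any causal LM from the hub works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(mean negative log-likelihood of the tokens).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss    # mean cross-entropy over the sequence
    return float(torch.exp(loss))

print(perplexity("The robot picked up the cup and set it on the table."))   # lower
print(perplexity("Cup the robot up and on table the set it picked."))       # higher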
>>36226
The best (compute-cheap) method I have come up with so far is phrasing something as a true/false statement, then measuring the delta between the true and false logits/tokens. The idea being this measures the LLM's confidence. I have yet to do extensive testing to be sure if this method works well. If any anon has the time to do more extensive testing and share their results it would be appreciated. (It's inspired by a paper called "Detecting hallucinations in large language models using semantic entropy" https://pmc.ncbi.nlm.nih.gov/articles/PMC11186750/pdf/41586_2024_Article_7421.pdf) I highly recommend reading it; its approach does require more LLM compute than what I am trying. The general idea to keep in mind is that an LLM is not giving you only the top word/token (this is why I hate most LLM APIs and the OpenAI API), the model is giving you a probability of every possible token. (They don't provide this in the API, specifically to prevent people from distilling from their models.)
>>36245
yeah thats just the confidence score, the tokens are already generated using one, you can aggregate them but that just tells you how well the output fits the model not if its correct or coherent, basically what >>36227 said, you cant use the model to analyze its own correctness, you can get junk with high confidence if the training was junk or low confidence on output that isnt junk just cuz the model wasnt trained on that, youre just back at the same problem
>>36246
There is a difference between taking the delta of the logits vs just taking the confidence of the current output token. Let's assume you're generating text using Top-K sampling (basically picking at random from tokens with high confidence scores). You can run into a situation where more than one stance can be taken with a high probability. Even if you restrict output to only picking from True or False statements and measure the confidence of the highest token, you can end up with a situation where both have high scores. By reducing the number of stances down to two options and then measuring the delta, you can now tell if the model is confident in a single stance. (If there is a large enough threshold.)
>>36247 yeah an aggregate, and why a delta that doesnt make sense, the difference between 0.01-0.01 is the same as 0.99-0.99 even though theyre polar opposites, either way a confidence score doesnt reflect anything to do with correctness or sanity it just tells you if it fits the training which could just be junk
>>36252
What I am trying to say is that there are 4 possible states (H/L below are the TRUE and FALSE probabilities):
H H: TRUE 0.98, FALSE 0.99 -- top token confidence is 0.99, the delta is 0.01 | top logit result: TRUE, delta is low, is confident
L L: TRUE 0.01, FALSE 0.02 -- top token confidence is 0.02, the delta is 0.01 | top logit result: FALSE, delta is low, is not confident
H L: TRUE 0.99, FALSE 0.01 -- top token confidence is 0.99, the delta is 0.98 | top logit result: TRUE, delta is high, is confident
L H: TRUE 0.01, FALSE 0.99 -- top token confidence is 0.99, the delta is 0.98 | top logit result: FALSE, delta is high, is confident
The paper I linked introduces the idea that tokens have semantic meaning and that you can take the top predicted tokens from the LLM and see if there is a single semantic meaning the model is confident on. My idea is to boil information down to true/false questions and to measure the "semantic entropy" (in this case the delta between true and false). Maybe I am not understanding something; are you telling me that the delta of two tokens is not useful information? (This may be the case, maybe I am being silly here.)
>>36252
What I am saying is that you still want to reject the first case due to low delta despite it having a "high confidence". The idea being that you filter out cases where the LLM is happy to go both ways, despite being "confident" in its choice.
H H -- reject (this would be accepted if you only measure confidence)
L L -- reject
H L -- accept
L H -- accept
(sorry if I am repeating myself, I am just not sure if I am actually doing a good job communicating my idea)
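A minimal sketch of this true/false delta idea, assuming Hugging Face transformers; the model name, prompt template, and the 0.5 accept threshold are assumptions, not values from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # placeholder; you need local access to the logits, not a chat API
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

TRUE_ID = tok.encode(" True", add_special_tokens=False)[0]
FALSE_ID = tok.encode(" False", add_special_tokens=False)[0]

def true_false_delta(statement: str):
    # Next-token probabilities for " True" vs " False", renormalized over the pair.
    prompt = f"Statement: {statement}\nIs the statement true or false? Answer:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    p_true, p_false = torch.softmax(logits[[TRUE_ID, FALSE_ID]], dim=-1).tolist()
    delta = abs(p_true - p_false)
    verdict = "True" if p_true > p_false else "False"
    return verdict, delta

verdict, delta = true_false_delta("Water boils at 100 degrees Celsius at sea level.")
# Low delta = the model is happy to go both ways (the H H / L L cases): flag or reject.
print(verdict, round(delta, 3), "accept" if delta > 0.5 else "reject (contradictory/unsure)")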
>>36251 what is "semantic entropy", they keep making up undescriptive jargon to obfuscate things, i honestly cant read or take those papers seriously anymore cuz of it i kind of get it, true/false are semantically polar opposites so something like TRUE 0.98, FALSE 0.99 should have high "semantic entropy", a normal person would just call that a contradiction, same with TRUE 0.01, FALSE 0.02, youre basically saying its 50/50 according to the model whether its high confidence or low doesnt matter since its the same for both, so yeah a non contradictory case is when its either one or the other, i guess thats what you mean with taking the delta ie. low delta means its contradictory, except its still just based on confidence scores so all youre doing is showing your training data was shit and not really solving the problem
>>36255
Sorry for the jargon, I agree they do often overcomplicate things just for the sake of it. In my true/false idea it would be simpler to call it just a contradiction. I will take that as advice and will refer to it as a "contradiction score" in the future. The paper calls it "Semantic Entropy" because they are making a distinction from "Naive Entropy", both terms they coined for their own paper.
>So all youre doing is showing your training data was shit and not really solving the problem
So what is the problem I am solving? I am of the opinion that "grounding" is not something you can solve; you need good data. Garbage in is garbage out, both in AI and in people. If the facts learned are wrong then the conclusions will be wrong. I am not claiming to have "solved" the issue (I am not that smart lol), I am just sharing what I have come up with so far to deal with this kind of issue ^_^
>>36257 >whats is the problem I am solving im just speaking generally you can extend your idea even, it doesnt have to be limited to true/false, you could use a dictionary of antonyms for the token you want to get your contradiction score for, but again getting contradictions is like getting a low confidence score its a problem with your data or could just be a benign contradiction like the word 'literally' which is literally actually and not literally, contradictions are probably more common than you expect
>>36259 >...like the word 'literally' which is literally actually and not literally... I'm stealing this.
Very interesting convo. Thanks for all the details! Again, way above my pay grade but as I started going through this yesterday, I also thought garbage in \ garbage out. But from the start, my intention was to say one man's trash is another man's treasure I guess. That is to say, if that garbage makes me happy, it has produced a valid use case and that's all that matters to me, but I'm a proponent of the subjective theory of value.
>>36274 >I also thought garbage in \ garbage out. This. I think you're right, Barf! Cheers. :^) >>36307 Very interesting. We truly understand little about the human psyche, IMO. Lots to learn still. Thanks, GreerTech! Cheers. :^)
>>36307
Thanks! That sounds like the article I read. It seems like prompt engineers are closer to AGI than the GPU farms training the LLMs. People were shocked by reasoning models, but prompt engineers have been doing that for a while. The same could happen for imagination, I hope.
