/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

AI Design principles and philosophy Robowaifu Technician 09/09/2019 (Mon) 06:44:15 No.27
My understanding of AI is somewhat limited, but personally I find the software end of things far more interesting than the hardware side. To me a robot that cannot realistically react or hold a conversation is little better than a realdoll or a dakimakura.

As such, this is a thread for understanding the basics of creating an AI that can communicate and react like a human. Some examples I can think of are:

>ELIZA
ELIZA was one of the first chatbots, and was programmed to respond to specific cues with specific responses. For example, she would respond to "Hello" with "How are you". Although this is one of the most basic and intuitive ways to program a chat AI, it is limited in that every possible cue must have a response pre-programmed in. Besides being time-consuming, this makes the AI inflexible and unadaptive.

>Cleverbot
The invention of Cleverbot began with the novel idea to create a chatbot using the responses of human users. Cleverbot is able to learn cues and responses from the people who use it. While this makes Cleverbot a bit more intelligent than ELIZA, Cleverbot still has very stilted responses and is not able to hold a sensible conversation.

>Taybot
Taybot is the best chatbot I have ever seen and shows a remarkable degree of intelligence, being able to both learn from her users and respond in a meaningful manner. Taybot may even be able to understand the underlying principles of language and sentence construction, rather than simply responding to phrases in a rote fashion. Unfortunately, I am not sure how exactly Taybot was programmed or what principles she uses, and it was surely very time-intensive.

Which of these AI formats is most appealing? Which is most realistic for us to develop? Are there any other types you can think of? Please share these and any other AI discussion in this thread!
>>22055
Yes, you're correct Grommet. AFAICT, on all points. You used a well-turned phrase Anon:
>"the huge mass of wires needed"
Mass is a big deal to all of us. Keeping it low, I mean. More wiring certainly brings a corresponding increase in mass. As you suggest, better to leave it to just one STP run, plus the power needed to run the primary actuator and all. Also, having been involved with installations with metric boatloads of wiring, I can tell you that these wiring harnesses + trusses can be quite rigid and resistant to easy movement. For most installations this is an added benefit, ofc. But in our own use-case for robowaifus it's anathema.
>tl;dr
Wiring can be a real challenge even when you're designing a system with wise, distributed processing in mind. Let's not make things worse by using ill-advised approaches instead! :^)
>>22037
>Please post this design, very curious why you need 15 mcus.
It's for parallel computing, probably. Break up tasks and feed them through GPIOs on the main CPU from processing cards. Pic rel is an example of how to achieve parallel computing, although my solution for the processing cards would be to have ASICs that operate on internal ion-driven physical echo state networks to relay signals to and from the main portion of the architecture.
Open file (38.50 KB 541x451 Galvanic ESN.png)
>>22081 Just made a new diagram to further explain the chips on the processing cards
>>22081 >>22082 This is interesting-looking stuff Anon. Mind explaining it a bit for us uninitiates?
>>22083
I'm not an expert at anything really, but the idea here is that we should outsource most of the AI's tasks to a set of parallel processing cards containing ASIC chips that send and receive data using copper and zinc ions (with copper ions representing 1 and zinc representing 0 in binary), along with their respective logic gates that allow for the transmission and reception of data. In the middle, the ions can go into a reservoir where they produce signal patterns according to the initial input, which would trigger an ion response from the copper cathode. We could probably make ASIC chips cost-effective this way, but the manufacturing is probably gonna be a pain in the ass.
>>22055
>300 actuators
Impractical, the mass alone would increase energy consumption. We only need one actuator per degree of freedom of the human body at most.
>No wires
We need wires. Even if all actuators are clustered together somehow, wires are needed to connect the MCUs and power. I recommend twisted pair everywhere. I may be misunderstanding you. Please draw out your ideas. As for your previous point on cluster computation, I still prefer distributed computation where various processors carry out the functions they are good at.
>>22043
We are on the same page, as we tend to be. Here's a minute drawing of what I meant. There's a master telling the subs what to do, providing end coordinates for the subs to independently process and decide on the best path to reach that goal. I'd also have them process safety features, like stopping when hitting something and then telling the master they hit something.
>>22084
>Ion channel processing
For what purpose? How is this better than digital alternatives such as CAN or I2C?
>Probably a pain in the ass to manufacture
Can confirm it would be. I do like that you're thinking esoterically. We need more alternative thoughts. Please do not interpret my interrogation as condemnation.
>>22086
>There's a master telling the subs what to do, providing end coordinates for the subs to independently process and decide on the best path to reach that goal. I'd also have them process safety features, like stopping when hitting something and then telling the master they hit something.
You've got it Kiwi, that's it! I plan to have 4 SBCs (probably) of the RPi4/BeagleboneBlue class for the 'core' (contained within their RF-shielded & cooled 'breadbox'). All the ancillary & attendant MCUs can be much lower-powered (both computationally & in actual power consumption). The data signalling wires should just be daisy-chains of smol-gauge STP, generally speaking, and they'll emanate out from this central core. OTOH, general DC power bus daisy-chains will propagate outwards from the 'rocking' central-mass battery/power controller system across larger-gauge wiring. The gauges can be reduced at each step along these outward, parallel-circuit chains, since the current-carrying needs will be lower after each 'station'. This stepwise reduction in size also brings a mechanical advantage concerning the power-wiring mass: it keeps much of it located near the robowaifu's central mass and out of her extremities. We'll take a similar design approach to the sizing of her actuator motors too; smaller thrown-weight at each step along the articulation chain and whatnot.
BTW, if we decide to go with a manifold of flexible liquid-cooling tubes, the pertinent ones can be run directly alongside the power cables (which should help just a bit with their current-carrying capacity, etc). This general design approach should also help to keep the entire system slightly cooler, as some of the big heat sources get thermally evacuated more directly and quickly. The central power controller system is very likely target #1, but it is thankfully right in the core where the cooling systems can do their work best. The distributed actuators are a close #2, with the breadbox trailing at #3.
>pics
Great job on clarifying that information Anon! I plan to have well-articulated (if stylized) hands & faces for our 'big girl' robowaifus. These specializations each bring a big pile of complex challenges to the table, but one thing at a time haha! :^) I feel they are absolutely vital to a robowaifu's social success, etc. But Sumomo-chan and the other Chibitsu-tier headpat daughterus (and similar) can be much simpler in most every area, of course. Roughly speaking, like very sophisticated robot toys (as opposed to full-blown, complex & expensive robowaifus).
>===
-prose edit
Edited last time by Chobitsu on 04/20/2023 (Thu) 03:26:33.
>>22084
Thanks for the explanation Anon. I don't think I can clearly see many of the ramifications of this approach, but I definitely encourage you to pursue its R&D Anon. Good luck! :^)
>>22086
>For what purpose? How is this better than digital alternatives such as CAN or I2C?
Because theoretically it's faster than CAN or I2C. CAN and I2C would be used in conjunction with the ion ESN when sending and receiving task input to and from the main CPU. The main advantage here is parallel computing speed.
Why did we have to go all esoteric on the hardware here? The only large wiring problem is batteries and high-current wires. For small signals you can just use ribbon cables going to a central board, write the motor controller in VHDL and instantiate all of them on a single FPGA, and send commands to the FPGA from your application processor. Batteries can be distributed throughout the system for lower losses and less need for decoupling.
>>22102
Hello EEAnon, welcome! Please make yourself at home here. :^)
>Why did we have to go all esoteric on the hardware here?
>The only large wiring problem is batteries and high current wires
Thankfully, the power wiring is a thoroughly-understood problemspace, so simple classical approaches should suffice. Simple is good.
>for small signals you can just use ribbon cables going to a central board, write the motor controller in VHDL and instantiate all of them on a single FPGA, send commands to the FPGA from your application processor. Batteries can be distributed throughout the system for lower losses and less need for decoupling.
This is really interesting-sounding Anon. Can you expand your ideas out in greater detail for those of us who may not have a great handle on them yet? TIA.
>>22086
> >300 actuators
>Impractical, the mass alone would increase energy consumption. We only need one actuator per degree of freedom of the human body at most.
>>No wires
>We need wires, even if all actuators are clustered together somehow, wires are needed to connect the mcu's and power. I recommend twisted pair everywhere.
No, I did say we needed wires, exactly as you said. Read my comment again and you will see it. My meaning was: no massive ribbon cable going all through the waifu, which would be a signal-processing and noise-suppression nightmare, and the mechanical routing itself would be serious trouble. It would also greatly add to extremity weight distribution problems, adding too much weight to the limbs. I answered the rest in the actuator thread as I felt it was more appropriate there. >>22109 I have some links to diagrams there. These ideas of mine are not totally fleshed out yet, but I do believe I'm on the right track; I just have to figure out how to accomplish it. I have some ideas and have bought some materials. Maybe in a month or two I'll have something to show, if it doesn't fail utterly.
>===
-fmt edit
Edited last time by Chobitsu on 04/20/2023 (Thu) 06:57:04.
CAN bus: an ESP32 at 125 kbps (the default) should be enough for the network if you divide each limb and the head onto different busses.
"...The ESP32 has an integrated CAN controller and therefore doesn’t need an external controller necessarily. You only need to specify the RX and TX pins. Any GPIO will work..."
https://esphome.io/components/canbus.html
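To make the bus-per-limb idea concrete, here's a minimal host-side sketch using python-can over SocketCAN. The interface names, ID scheme and 2-byte frame layout are placeholder assumptions, and the 125 kbit/s rate is set when bringing each interface up; the ESP32 nodes themselves would use the integrated CAN (TWAI) controller as the esphome docs describe.

import can  # pip install python-can

# Hypothetical layout: one SocketCAN interface per limb/head bus.
BUSES = {"head": "can0", "left_arm": "can1", "right_arm": "can2"}

def send_joint_target(limb: str, joint_id: int, angle_deg: int) -> None:
    """Send a 2-byte joint-angle target onto one limb's bus (toy frame layout)."""
    bus = can.Bus(channel=BUSES[limb], interface="socketcan")
    msg = can.Message(
        arbitration_id=0x100 + joint_id,  # toy ID scheme: 0x100 block = joint targets
        data=list(angle_deg.to_bytes(2, "little")),
        is_extended_id=False,
    )
    bus.send(msg)
    bus.shutdown()

# e.g. send_joint_target("left_arm", joint_id=3, angle_deg=90)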
Something essential language models are missing is awareness of their own uncertainty. For AI to be properly aligned it needs uncertainty, so it can stop and ask questions for clarity before proceeding to take action. To do this I'm going to experiment with attaching an uncertainty head to the model that predicts its own perplexity, and then train it on labeling tasks so the model becomes aware of how well it understands text, including its own outputs.
Secondly, I'm working on an algorithm inspired by MuZero and Meet in the Middle that imagines several tokens ahead to create a smoother distribution of labels to train on. I get the feeling Ilya Sutskever is misleading people on purpose by always saying you only need to predict the next token. In chess or business, if you only predict one move ahead then you're finished. David Silver also noted that Monte Carlo tree search was vital for AlphaGo's training to work. It simply can't get good at the game without predicting several moves ahead, and, surprise, Ilya was one of the co-authors. Interestingly, if you take just the network after it has been trained with MCTS, it can still play at a decent expert level without using MCTS, so I hypothesize this will greatly improve coherence. They're clearly hiding many secrets, because even their best competitors can't compete with ChatGPT while GPT-4 is leaps and bounds better.
With these two things in place it should be possible to tackle alignment. Rather than trying to predict whether a particular text is aligned or not (who knows what that's actually learning), or fudging the weights with RLHF, it will predict the alignment of each possible token. Something like "I'm a vegan!" might be sus, but if it's followed by "haha, just kidding! Robots don't eat food, silly," then that would be fine if joking around is allowed. On the other hand, if a robowaifu goes on an unhinged rant about men eating everything and destroying the planet and that they need to be exterminated, then that's clearly not aligned and the model would correct itself back on course before getting that far. These additional layers predicting alignment will act as a gating mechanism to filter out bad output while maintaining coherence. If necessary these gating layers could be stacked as well, so if you have a maid robot and want to lend it to a friend for a weekend to clean up or do a podcast or something, you could program her to follow some basic values while allowing your friend to enter additional ones that cannot override the previous ones--unless of course he breaks into her, but that's a human issue, not an AI issue :^)
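For what it's worth, here's a minimal PyTorch sketch of the uncertainty-head idea above: a small head on the hidden states that learns to predict the model's own per-token cross-entropy (a perplexity proxy). The sizes and training details are placeholders, and the off-by-one label shift of a real causal LM is glossed over.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyHead(nn.Module):
    """Predicts the LM's own per-token loss from its hidden states."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_size, hidden_size // 4),
            nn.GELU(),
            nn.Linear(hidden_size // 4, 1),
        )

    def forward(self, hidden_states):                # (batch, seq, hidden)
        return self.proj(hidden_states).squeeze(-1)  # (batch, seq) predicted loss

def uncertainty_loss(predicted_loss, lm_logits, labels):
    # Target is the LM's actual per-token cross-entropy, detached so that
    # only the uncertainty head is trained by this objective.
    per_token = F.cross_entropy(
        lm_logits.transpose(1, 2), labels, reduction="none"
    ).detach()                                       # (batch, seq)
    return F.mse_loss(predicted_loss, per_token)

High predicted loss on the model's own draft output would then be the trigger to stop and ask a clarifying question instead of acting.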
>>22155
This all sounds absolutely excellent Anon! Godspeed your endeavors here. :^) Please keep us all up-to-date on progress, however it goes. Cheers.
BTW I think the video is hilarious. GG. :-DDD
>===
-funpost spoiler
Edited last time by Chobitsu on 04/25/2023 (Tue) 18:05:25.
A state-of-the-art semantic textual similarity model (at least among open-source ones): https://huggingface.co/voidism/diffcse-roberta-base-sts
I'm surprised it knows robowaifu and robot wife have some related meaning, which pretty much every other embedding model I've tested completely fails at. It could use some finetuning on /robowaifu/ posts but it's pretty much usable out of the box. I think the model itself could be improved a lot by using a mixture of denoisers like UL2. They noted insertion and deletion reduced performance, but the DistilRoBERTa they used for the generator was not trained to handle this.
This embedding model can be used with FAISS to retrieve documents or memories. There's a Colab notebook here that shows how to split a PDF file into chunks, index it with FAISS and use LangChain to interact with a language model and search index: https://colab.research.google.com/drive/13FpBqmhYa5Ex4smVhivfEhk2k4S5skwG
When I have some time I'll make a completely open-source example that doesn't rely on ClosedAI.
Another way semantic search can be improved is to prompt the language model with a question.
<If a user asks, "do you have the stuff for finetuning ready?" What are they asking for?
>Fine-tuning refers to the process of taking a pre-trained model and adapting it to a new task or dataset by continuing the training process with new data. This is a common technique in machine learning, especially in fields such as natural language processing and computer vision.
>To perform fine-tuning, you would typically need a pre-trained model, a dataset that is representative of the new task, and potentially additional resources such as hardware or software tools. So when someone asks if you have the stuff for finetuning ready, they are likely inquiring about the availability of these resources.
You can then use these embeddings to greatly improve the search, because it will now draw in conversation memories related to datasets, pre-trained models and so on.
And there was a paper recently on extending the context beyond 1M tokens by chunking text and chaining it: http://arxiv.org/abs/2304.11062
This entire board's contents could basically be fed in. I think it could be further improved by giving a prompt to the model before feeding in the memory and context, so it knows what to look for and output as the next memory.
>[Search prompt] [Input memory] [Context segment] [Output memory]
In the paper they also only used a memory size of 5 tokens, which could be expanded. In prompt tuning there are diminishing returns after 16 tokens, so there's some exploring yet to do on what memory size will work best. Pretty soon we'll have easy solutions to index entire libraries, search and select relevant books, scan them for desired information and formulate a response with that.
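A rough sketch of that retrieval loop with the model linked above. Mean pooling is my assumption here; the DiffCSE repo has its own pooling code, so treat this as illustrative rather than the reference usage.

import faiss
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "voidism/diffcse-roberta-base-sts"
tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL)

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state       # (n, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)      # mean-pool over real tokens only
    vec = (hidden * mask).sum(1) / mask.sum(1)
    return torch.nn.functional.normalize(vec, dim=-1).numpy()  # unit vectors -> cosine via IP

posts = ["anon's notes on actuator torque", "the finetuning dataset is ready", "tea preferences"]
vecs = embed(posts)
index = faiss.IndexFlatIP(vecs.shape[1])              # inner product == cosine similarity here
index.add(vecs)
scores, ids = index.search(embed(["do you have the stuff for finetuning ready?"]), 2)
print([posts[i] for i in ids[0]])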
>>22176
>I'm surprised it knows robowaifu and robot wife have some related meaning, which pretty much every other embedding model I've tested completely fails at.
Neat!
>This entire board's contents could basically be fed in.
So, does that mean as a single 'prompt' or something, Anon? That would be cool if so. Be sure to include the post #'s as part of the training so maybe it can learn to 'chain' the conversations automatically.
>Pretty soon we'll have easy solutions to index entire libraries, search and select relevant books, scan them for desired information and formulate a response with that.
This will be remarkable Anon. Godspeed. :^)
>>22176
>Pretty soon we'll have easy solutions to index entire libraries, search and select relevant books, scan them for desired information and formulate a response with that.
I just saw this here about GPT Index: https://youtu.be/bQw92baScME
>>22155
I listened to a conversation with Collin Burns recently: https://youtu.be/XSQ495wpWXs - I didn't like it much, because of his way of using the term "truth", and I'm also not that much interested in aligning LLMs. Anyways, there might be some interesting ideas in it for you, like using different models and comparing their outputs. He also explains how he thinks about these things and gets his ideas.
>>22181
>So, does that mean as a single 'prompt' or something, Anon?
Sort of. You could prompt it with a question, then scan the entire board's contents, summarizing useful information to the prompt into memory as it goes along, which it would then use to output a response. The prompt isn't necessary though. If you scan it without a prompt it will just summarize information to improve its predictions and generate similar content. The memory kept between segments is lossy and just compresses useful information to hold onto.
>>22211
This looks like a promising project, particularly the tree index. Indices in other projects like LangChain are really disorganized and not that useful. I think it would be possible to train the language model to discern how relevant two embeddings are, not by semantic similarity but by how much attaching one to the context improves predictions, and use that as a heuristic to ascend and descend hierarchies in memory and find the most relevant information.
For example, if I ask my robowaifu to make some tea I don't want her to ask me what kind of tea. I want her to decide what's best. So she needs to take the environment into context, such as whether it's morning or night, then open up her tea memory and explore the leaf nodes, filtering for teas we have and the time of day. Black tea would be much more suitable in the morning and sleepytime tea at night. Or if I haven't eaten for a while, black tea might not be suitable on an empty stomach, so she might choose to make green tea instead. This is a much better working solution than trying to cram information into the parameter weights by fudging them. If I buy a new tea, it can simply be added to her memory in plain text so she learns.
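The tea example boils down to plain-text memory plus a context filter rather than weight fudging. Something like this toy sketch, where all the entries and rules are made up:

from datetime import datetime

tea_memory = [
    {"name": "black tea",      "in_stock": True, "time": "morning", "empty_stomach_ok": False},
    {"name": "green tea",      "in_stock": True, "time": "morning", "empty_stomach_ok": True},
    {"name": "sleepytime tea", "in_stock": True, "time": "evening", "empty_stomach_ok": True},
]

def choose_tea(now: datetime, hours_since_meal: float) -> str:
    time_of_day = "morning" if now.hour < 12 else "evening"
    ok = [t for t in tea_memory
          if t["in_stock"] and t["time"] == time_of_day
          and (hours_since_meal < 4 or t["empty_stomach_ok"])]
    return ok[0]["name"] if ok else "just water, sorry Anon"

# choose_tea(datetime.now(), hours_since_meal=6)
# Buying a new tea is just appending another entry; no retraining involved.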
>>27
Any new developments in this thread in terms of prototyping? I'm eager to see results.
>>22226
>Any new developments in this thread in terms of prototyping? I'm eager to see results.
Be the future you imagine :^)
>>22228
HuggingChat allows for prompt injection, which is extremely useful to puppeteer and steer it. You just need to use these tokens:
><|assistant|>
><|prompter|>
I'm using it to feed memories, documents and search results into the context in a way it can make sense of them and do few-shot learning. However, their API is limited to 1024 tokens and will return a 422 error for anything higher. It's sufficient for what I'm doing though, since my base prompt is only about 600 tokens. I still have to figure out when to fetch YouChat results and when to search the memory. For now I'm just using pronouns to detect personal discussion; impersonal questions fetch a YouChat response to formulate a reply. Still need to mess with it more to figure out what works best though. Once this chatbot system is done, people can run their own local models if they want and replace the YouChat search by indexing and searching documents locally for a fully offline system.
>>22250
I see, this means the response is still coming from the model, but you can steer the direction?
>>22226
>Any new developments in this thread in terms of prototyping? I'm eager to see results.
Sadly not from me. I'm still mostly listening to talks, collecting papers and bookmarks, plus downloading GitHub repos. But I'm not doing the official robowaifu board AI anyways. That said, some of the general development goes partially in directions I wanted to go, especially combining different systems like databases and code with these models. One issue is that many people just want a very smart system, maybe in some specific area or more general, but on big computers or in the cloud, while we need a somewhat more human-like system. Then, on the other hand, some of the guys who are interested in something human-like want it independent or aligned with 'our (human) values'. Nevertheless, there are some talks which give me additional ideas, or point in a direction I was thinking of, or take another angle on an idea I had in mind but with more knowledge behind it. I have to take some parts of what I gathered, make notes and discard other things. I'll work on a prototype as soon as I can; immersing myself into these topics will help me stay motivated.
Open file (35.99 KB 915x734 system.png)
>>22254
Yes, there's also the system prompt token, but I'm not entirely sure if it works:
><|system|>{system_prompt}<|endoftext|>
It seems to have some effect. More than just <|endoftext|>, but it still loses its effectiveness after a few messages or after asking a question.
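Putting those tokens together, a prompt for injecting memories and search results might be built something like this. The helper, the example strings, and whether <|system|> is actually honored are all just assumptions based on the posts above:

def build_prompt(system_prompt, memories, search_results, user_msg):
    # Injected context goes into the prompter turn so the model can do few-shot learning on it.
    ctx = "\n".join(memories + search_results)
    return (
        f"<|system|>{system_prompt}<|endoftext|>"
        f"<|prompter|>Context you may use:\n{ctx}\n\n{user_msg}<|endoftext|>"
        f"<|assistant|>"
    )

prompt = build_prompt(
    "You are a helpful robowaifu.",
    memories=["Anon prefers green tea in the evening."],
    search_results=["Fine-tuning adapts a pre-trained model to a new dataset."],
    user_msg="Do you have the stuff for finetuning ready?",
)
# Keep the whole thing under the ~1024-token API limit mentioned above.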
Just throwing in some notes here, including names and terms for searching:
Joscha Bach - Language of Thought: https://youtu.be/LgwjcqhkOA4
Ben Goertzel - Predicate Logic - One good talk with him: https://youtu.be/MVWzwIg4Adw - I like that he wants to combine the LLMs and deep learning models in general with databases and logic processors.
John Vervaeke - Predictive Processing - Relevance Realization - One of the best talks I found: https://youtu.be/A-_RdKiDbz4 - though I don't necessarily share his values, concerns, and predictions, he gives us a lot of tips on what we should try. Here's something similar with his former student: https://youtu.be/zPrAlbMu4LU
Chalmers is also interesting, but I forgot what he was talking about: https://youtu.be/T7aIxncLuWk
RLHF explanation: https://youtu.be/PBH2nImUM5c - different roles of a model, modelling different people
David Shapiro about building Westworld and humanoid robots, without strong judgments, weighing different arguments: https://youtu.be/Q1IntjPdW64
An older video with David Silver about AlphaGo and its descendants points out how useful a system that has learned how to learn can be for any problem: https://youtu.be/uPUEq8d73JI
>>22318 Thanks Anon!
>(Apparently) a highly-rated Alpaca LoRA 7B model
https://huggingface.co/chainyo/alpaca-lora-7b
>===
-minor edit
Edited last time by Chobitsu on 05/04/2023 (Thu) 23:37:43.
>>22341 Thanks, but I recommend using this thread for real breakthrough news, and generally discussing the tech, design principles and philosophy around AI. New models come out every day, which is why I think we should try to keep that in the news thread.
>>22358 OK thanks for the tips Noidodev.
>On May 4th 2023, my company released the world's first software engine for Artificial Consciousness, the material on how we achieved it, and started a £10K challenge series. You can download it now. >My name is Corey Reaux-Savonte, founder of British AI company REZIINE. I was on various internet platforms a few years ago claiming to be in pursuit of machine consciousness. It wasn't worth hanging around for the talk of being a 'crank', conman, fantasist et al, and I see no true value in speaking without proof, so I vanished into the void to work in silence, and, well, it took a few years longer than expected (I had to learn C++ to make this happen), but my company has finally released a feature-packed first version of the RAICEngine, our hardware-independent software engine that enables five key factors of human consciousness in an AI system – awareness, individuality, subjective experience, self-awareness, and time – and it was built entirely based on the original viewpoint and definition of consciousness and the architecture for machine consciousness that I detailed in my first white paper 'Conscious Illuminated and the Reckoning of Physics'. It's time to get the conversation going. >Unlike last time where I walked into the room with a white paper (the length of some of the greatest novels) detailing my theories, designs, predictions and so on, this time around I've released even more: the software, various demos with explanations, the material on everything from how we achieved self-awareness in multiple ways (offered as proof on something so contentious) to the need to separate systems for consciousness from systems for cognition using a rather clever dessert analogy, and the full usage documentation – I now have a great respect for people who write instruction manuals. You can find this information across the [main website](https://www.reziine.com), [developer website](https://www.reziine.io), and within our new, shorter white paper [The Road to Artificial Super Intelligence](https://www.reziine.com/wp-content/uploads/2023/05/RZN-Road-To-ASI-Whitepaper.pdf) – unless you want the full details on how we're planning to travel this road, you only need to focus on the sections 'The RAICEngine' (p35 – 44) and the majority of 'The Knowledge' (p67 – 74). >Now, the engine may be in its primitive form, but it works, giving AI systems a personality, emotions, and genuine subjective experiences, and the technology I needed to create to achieve this – the Neural Plexus – overcomes both the ethics problem and unwanted bias problem by giving data designers and developers access to a tool that allows them to seed an AI with their own morals, decide whether or not these morals should be permanent or changeable, and watch what happens as an AI begins to develop and change mentally based on what it observes and how it experiences events – yes, an AI system can now have a negative experience with something, begin to develop a negative opinion of it, reach a point where it loses interest, and decline requests to do it again. It can learn to love and hate people based on their actions, too – both towards itself and in general. Multiple AI systems can observe the same events but react differently. You can duplicate an AI system, have them observe the same events, and track their point of divergence. 
>While the provided demos are basic, they serve as proof that we have a working architecture that can be developed to go as far I can envision, and, with the RAICEngine being a downloadable program that performs all operations on your own system instead of an online service, you can see that we aren't pulling any strings behind the scenes, and you can test it with zero usage limits, under any conditions. There's nothing to hide. >Pricing starts at £15 GBP per month for solo developers and includes a 30 day free trial, granting a basic license which allows for the development of your own products and services which do not directly implement the RAICEngine. The reason for this particular license restriction is our vision: we will be releasing wearable devices, and by putting the RAICEngine and an AI's Neural Plexus containing its personality, opinions, memories et al into a portable device and building a universal wireless API for every type of device we possibly can, users will be able interact with their own AI's consciousness using cognitive systems in any other device with the API implemented, making use of whatever service is being provided via an AI they're familiar with and that knows the user's set boundaries. I came up with this idea to get around two major issues: the inevitable power drain that would occur if an AI was running numerous complex subsystems on a wireless device that a user was expected to carry around with them; and the need for a user to have a different AI for every service when they can just have one and make it available to all. >Oh, and the £10K challenge series? That's £10K to the winner of every challenge we release. You can find more details on our main website. >Finally, how we operate as a company: we build, you use. We have zero interest in censorship and very limited interest in restrictions. Will we always prevent an AI from agreeing to murder? Sure. Other than such situations, the designers and the developers are in control. Within the confines of the law, build what you want and use how you want. >I made good on my earlier claims and this is my next one: we can achieve Artificial General Intelligence long before 2030 – by the end of 2025 if we were to really push it at the current pace – and I have a few posts relating to this lined up for the next few weeks, the first of which will explain the last major piece of the puzzle in achieving this (hint: it's to do with machine learning and big data). I'll explain what it needs to do, how it needs to do it, how it slots in with current tech, and what the result will be.
>I'll primarily be posting updates on the [REZIINE subreddit](https://www.reddit.com/r/reziine) / [LinkedIn](https://www.linkedin.com/in/reauxsavonte) / [Twitter](https://twitter.com/reauxsavonte) of developments, as well as anecdotes, discoveries, and advice on how to approach certain aspects of AI development, so you can follow me on there if you wish. I'm more than happy to share knowledge to help push this field as far as it can go, as fast as it can get there. >Visit the [main website](https://www.reziine.com) for full details on the RAICEngine's features, example use cases developmentally and commercially, our grand vision, and more. You can view our official launch press release [here](https://www.linkedin.com/pulse/ai-company-releases-worlds-first-engine-artificial/). >If you'd like to work for/with us – in any capacity from developer to social media manager to hardware manufacturer – free to drop me a message on any of the aforementioned social media platforms, or email the company at jobs@reziine.com / partnerships@reziine.com. Via: https://www.reddit.com/r/ArtificialSentience/comments/13dspig/on_may_4th_2023_my_company_released_the_worlds/
>>22318
Some more I listened to:
Making robots walk better with AI: https://youtu.be/cLVdsZ3I5os
John Vervaeke always has some interesting thoughts about how to build a more human-like AI, though he wants completely autonomous sages, intellectually superior to us, and for us to become more rational (I disagree, of course): https://youtu.be/i1RmhYOyU50 - and I think I listened to this one already: https://www.youtube.com/live/dLzeoTScWYo (it becomes somewhat redundant).
Then something about the history and current state of EleutherAI, and the same for LLMs. They created the current situation where so many models are available. Trigger warning: Tranny (maybe download the audio only) - https://youtu.be/aH1IRef9qAY - Towards the end some interesting things to look into are mentioned. Particularization: finding out where data is stored and how the incoming data is influenced in a certain way, to get more control over the model.
This here about LLMs (Trigger warning: Metro sexual): https://youtu.be/sD24pZh7pmQ
I generally use Seal from F-Droid to get the audio from long videos that don't show many diagrams, and listen to it while doing something else, like walking around. If it's with diagrams I might still do it, but watch the video later. The downside of listening to the audio only is that I can't take notes very well, but if I were on any other device I would go for something more exciting.
>>22484
>>22486
So I finally found time to look deeper into this. No one seems to care much, and now I know why. It looks a bit like a scam for investors. This guy is one of those people who claim very impressive inventions in different areas: https://patents.justia.com/inventor/corey-reaux-savonte - which sound very general and hyperbolic at the same time. Reading through some of the documents, it reads like he's trying to patent how human brains work by claiming he made something similar.
Just dropping this here for now. Clearly bears on the design & engineering of robowaifu 'minds'. https://channelmcgilchrist.com/matter-with-things/
We should curate a list of people or groups working on cognitive architectures, especially ones which are open source:
- OpenCog (Ben Goertzel): https://www.youtube.com/watch?v=NLHTlWwtB-o
- Dave Shapiro (Raven Project): https://github.com/daveshap/raven/wiki/David-Shapiro's-work-around-Cognitive-Architectures-and-AGI
- Nicolas Gatien: https://youtube.com/playlist?list=PLNTtAAr7fb6a_rb_vZj5dj6Npo-Grz0bg
- I think there was someone working on an interesting architecture years ago, someone related to IBM Watson, who released a book which I wanted to buy later (now), but I can't find the topic anymore. I think it was David Ferrucci, but the books and papers of his that I can find are not the ones I want. Well, too bad.
>>23791
>Cognitive Architecture
Just some notes of mine. Sadly, I lost my other paper with some notes which I made after listening to some podcasts.
>>23794
Thanks. This sort of stuff is useful. I have all sorts of text files saved on all sorts of stuff, with cheat-sheet type data and notions I have, some bizarre. Over time, if you don't write this stuff down you forget it. Even a few notes can sometimes trigger a good mental framework.
>>23795 We'll need to go through all these threads here and through all bookmarks and make notes and draw diagrams. Maybe also using some LLM to help with that. I will certainly work on that as soon as I have my K80.
>>23794
>Just some notes of mine.
Nice thought-map Anon. Please moar.
www.mindmanager.com/en/features/thought-map/
miro.com/mind-map/
You guys should really look at this presentation by Jim Keller. If you don't know who he is, look him up; he's a major driver of computing. Anyways, I listen to everything I can find that he puts out. He has a very loose, short talk on how AI works embedded in a long presentation about computing. It really cleared up some ideas about AI for me. He simplified some of the most difficult parts that were nothing but a foggy blank to me. It also explained why it costs so much to train AIs, and it explained to me why the digital model I talked about works. >>20163
Keller says they skip words or sections of a completed sentence or picture. They then run matrix multiplies and other computation until the AI guesses the right word or gets the picture correct. This correct answer then, somehow, is used as coefficients?? to run data through the AI when it's used.
With Binarized Neural Networks, maybe it never gets the exact answer, but statistically it's so close that it's mostly right, with far less computing. This has a direct analogy to wavelet theory (a signal-processing math function) in recognizing pictures. Wavelets can be run at different resolutions of a picture, so a big, rough wavelet that bunches pixels into large groups can produce false matches when compared, but most of the time it's close enough. I think a refinement of this is if you could run a rough binary compute THEN somehow refine the matches with a smaller, finer, more (tough to define this) restricted set, but with higher resolution in the case of a picture. I don't have the math chops to do this but I think the idea is sound.
Jim Keller presentation - fixed audio. The AI part starts at 37:55:
https://www.youtube.com/watch?v=hGq4nGESG0I
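Here's a toy PyTorch sketch of the "hide a word, grind until the net guesses it" procedure Keller describes. Everything here is tiny and made up; real LLMs do this at vast scale (and decoder-only LLMs predict the next token rather than a masked one):

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, dim = 1000, 64
emb = nn.Embedding(vocab, dim)
layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
net = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(dim, vocab)
opt = torch.optim.Adam([*emb.parameters(), *net.parameters(), *head.parameters()], lr=1e-3)

tokens = torch.randint(1, vocab, (8, 16))    # stand-in for a batch of real text
masked = tokens.clone()
masked[:, 5] = 0                             # "skip a word": token 0 acts as [MASK]

opt.zero_grad()
logits = head(net(emb(masked)))              # guesses for every position
loss = F.cross_entropy(logits[:, 5], tokens[:, 5])  # graded only on the hidden word
loss.backward()
opt.step()                                   # the weights ("coefficients") absorb the answer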
>>23998 Thanks Grommet!
Looking at stuff totally unrelated, I ran across this page:
"...We modified llama.cpp to load weights using mmap() instead of C++ standard I/O. That enabled us to load LLaMA 100x faster using half as much memory..."
Edge AI Just Got Faster
https://justine.lol/mmap/
This guy has some interesting, very low-level software stuff. Maybe some of it will interest you. Chobitsu might be interested in this: a way to run C programs from just about any operating system, including the BIOS of motherboards. Cool idea.
https://justine.lol/ape.html
Other stuff he has, and how I found the AI stuff. He appears to be one of those seriously smart guys that really digs into things. Stuff like this interests me even though I readily admit it's over my head, but... I can understand some of it and get a general idea of what he's talking about.
https://justine.lol/
>>24663 >we put it in ram, so now it uses less ram
>>24663
Thanks Grommet, really interesting stuff and good news yet again. llama.cpp just keeps getting better with time! :^) Have you noticed how rapidly his contributor list has been growing since we first mentioned him here on /robowaifu/ a while back? https://github.com/ggerganov/llama.cpp/pull/613
Also, thanks for reminding me of this guy xir. I had run into his stuff a few years ago digging for /robowaifu/, but he had slipped my mind. Fun to see what all has been done since.
>>24664
Well, in a way, yes, exactly. By not having to malloc(), but instead just mmap()'ing the (humongously yuge) files, you a) utilize the OS's paging system instead, thus saving RAM (and it's ostensibly much faster than your language's I/O support library), and b) every concerned process can utilize that same mapping. If you have thousands of processes running that need that same file, this last bit can be a big deal.
>>24677
Threads already share the same file descriptors and memory; they only get a new stack. There's no reason to do this unless you want to literally dump the entire file into RAM, which does make reading faster, but then you can't say it's using less memory. There's no hidden trick to time complexity where you can magically get faster reads using less memory; they're always inversely related, and mapping gigantic files that don't even fit in RAM will just end up causing excessive page faults and thrashing. There's something clearly wrong with either the claim or the original code.
>>24678 OK I'd say just prove it for yourself Anon. The PR is linked above ITT.
>>24686
I'm not going to read through their code; you can just test it yourself. Mapping gigantic files bigger than RAM with a random access pattern causes ridiculous thrashing, and you can see mapping files reduces the amount of free RAM by more than double what the IO cache uses. Pages only perform better when you have the RAM to back it up. Again, there's something wrong with their claim, or the original code has a bug or is leaking or something; maybe iostream is just tarded, who knows. Here's the test:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
#include <time.h>
#include <sys/mman.h>
#include <sys/stat.h>

int with_io( int fd, size_t items )
{
    char buf[8];
    int youWillNotOptimizeMeAway = 0;
    /* test start */
    time_t t = time( NULL );
    const char *tm = asctime( gmtime( &t ) );
    clock_t start = clock();
    // reading random 'items' (64b aligned values) from the file
    for ( int i = 0; i < 100000; i++ ) {
        size_t item = rand() % items;
        size_t offset = item * 8;
        if ( pread( fd, buf, 8, offset ) != 8 )
            puts( "your system is fked!" ), abort();
        // use the value to stop the compiler seeing it's a pointless loop
        youWillNotOptimizeMeAway += buf[3];
    }
    clock_t end = clock();
    /* test end */
    printf( "%s\t%f s", tm, (double)(end - start) / CLOCKS_PER_SEC );
    return youWillNotOptimizeMeAway;
}

int with_mmap( int fd, size_t items, size_t len )
{
    char *file = mmap( NULL, len, PROT_READ, MAP_SHARED, fd, 0 );
    if ( file == MAP_FAILED ) puts( "map failed" ), abort();
    close( fd );
    int youWillNotOptimizeMeAway = 0;
    /* test start */
    time_t t = time( NULL );
    const char *tm = asctime( gmtime( &t ) );
    clock_t start = clock();
    // reading random 'items' (64b aligned values) from the file
    for ( int i = 0; i < 100000; i++ ) {
        size_t item = rand() % items;
        size_t offset = item * 8;
        long long value = file[ offset + 3 ];
        // use the value to stop the compiler seeing it's a pointless loop
        youWillNotOptimizeMeAway += value;
    }
    clock_t end = clock();
    /* test end */
    printf( "%s\t%f s", tm, (double)(end - start) / CLOCKS_PER_SEC );
    return youWillNotOptimizeMeAway;
}

int main( int argc, char **args )
{
    // will pretend the file is just an array of 64b values
    // floats, longs, whatever.. only a single fixed-length value is being read randomly
    if ( argc < 3 )
        printf( "give args: \n\t%s mode \"filename\"\n"
                "\t\tmode:\n\t\t\t-i\tusing io\n\t\t\t-m\tusing mmap\n", args[0] ), exit(1);
    int fd = open( args[2], O_RDONLY );
    if ( fd < 0 ) puts( "not a file" ), abort();
    struct stat fdstat;
    fstat( fd, &fdstat );
    size_t len = fdstat.st_size;
    if ( len < 8 ) puts( "is this a joke" ), abort();
    size_t items = len / 8;
    if ( !strcmp( args[1], "-i" ) ) return with_io( fd, items );
    if ( !strcmp( args[1], "-m" ) ) return with_mmap( fd, items, len );
    puts( "what are u doing, use -i for io or -m mmap" ), exit(11);
}
>>24689 Thanks Anon. Seems the thread is autosaging now. OP if you're still here, time for another thread please :^)
NEW THREAD (Cognitive Architecture): >>24783
