/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Happy New Year!

The recovered files have been restored.



“Stubbornly persist, and you will find that the limits of your stubbornness go well beyond the stubbornness of your limits.” -t. Robert Brault


General Robotics & AI News Thread 5: Gazing Into the Nightmare Rectangle Edition Greentext anon 11/04/2024 (Mon) 05:42:08 No.34233
Anything related to robowaifus, robotics, the AI industry, and any social/economic issues thereof. And /pol/ funposting containment bread! :D
-previous threads:
> #1 ( >>404 )
> #2 ( >>16732 )
> #3 ( >>21140 )
> #4 ( >>24081 )
>=== -funposting edit
Edited last time by Chobitsu on 01/27/2025 (Mon) 09:51:47.
>>34323 Here are the links to their open sourced code and datasets: >https://huggingface.co/unitreerobotics >https://github.com/unitreerobotics/unitree_IL_lerobot >https://github.com/unitreerobotics/avp_teleoperate Wallahi may Xi have a thousand year Mandate of Heaven
>>34323 >>34327 Very cool! BTW, don't these look like two different robot lines? Cheers.
>>34334 Yeah, I think so. China has open sourced a few other robots before. I only have the links for Unitree's data.
>>34339 That sounds great, Anon. I hope we can round up all of the opensauce robo(waifu) designs out there, and bring all those resources together here in one place so Anons can pick & choose from the best of the best. Good luck, Anon! Cheers. :^)
Memoripy is a Python library designed to manage and retrieve context-aware memory interactions using both short-term and long-term storage. It supports AI-driven applications requiring memory management, with compatibility for OpenAI and Ollama APIs. Features include contextual memory retrieval, memory decay and reinforcement, hierarchical clustering, and graph-based associations. https://github.com/caspianmoon/memoripy
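For a feel of how the decay-and-reinforcement part of such a system works, here is a rough, self-contained Python sketch. To be clear, this is illustrative only and is not Memoripy's actual API (check the repo for the real interface); the half-life value and the word-overlap scoring are made-up placeholders.

```python
import math
import time

class MemoryItem:
    """One stored interaction, with a strength that decays over time."""
    def __init__(self, text, strength=1.0):
        self.text = text
        self.strength = strength
        self.last_access = time.time()

    def decayed_strength(self, half_life=3600.0):
        # Exponential decay: strength halves every `half_life` seconds of neglect.
        age = time.time() - self.last_access
        return self.strength * math.pow(0.5, age / half_life)

    def reinforce(self, amount=0.5):
        # Recalling a memory boosts it and resets its decay clock.
        self.strength = self.decayed_strength() + amount
        self.last_access = time.time()

def recall(memories, query, top_k=3):
    """Naive retrieval: rank by word overlap weighted by current strength."""
    qwords = set(query.lower().split())
    def score(m):
        overlap = len(qwords & set(m.text.lower().split()))
        return overlap * m.decayed_strength()
    hits = [m for m in sorted(memories, key=score, reverse=True)[:top_k] if score(m) > 0]
    for m in hits:
        m.reinforce()   # memories that keep getting used stay strong
    return hits

memories = [MemoryItem("Anon's favorite tea is genmaicha."),
            MemoryItem("The living-room lamp is on the smart plug named 'lamp1'.")]
print([m.text for m in recall(memories, "what tea does anon like")])
```

A real system like Memoripy layers embeddings, hierarchical clustering, and graph-based associations on top of this same basic decay/reinforcement idea.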
>>34381 Sound good, Light. Thanks for the info Anon. Cheers. :^)
Potentially interesting, looking forward to AI being good enough in another year or two. (I've been saying this since the 90's, heh) https://www.youtube.com/watch?v=yXHSPzHfe1s
I think Elon must be planning ahead, for offloading all the Robo Catgrill computational loads coming down the pike? https://x.com/SawyerMerritt/status/1864523723069399143
>>34562 Intredasting. Thanks, Anon! :^)
I don't know if this is foolish or not. If not, it appears to be the mother of all AI deals. According to https://www.techspot.com/news/104921-40-tool-gives-you-access-chatgpt-gemini-more.html the 1minAI Lifetime Subscription is $29.97: https://store.techspot.com/sales/1minai-pro-plan-lifetime-subscription?utm_source=techspot.com&utm_medium=referral&utm_campaign=1minai-pro-plan-lifetime-subscription&utm_term=scsf-605917&utm_content=a0xRn000002KKsfIAG&scsonar=1
Does this mean you get all the monthly benefits forever with one payment? I'm not sure. It doesn't add up, because the monthly payment is $6.50/month. What do you get? Many of the top AIs are conglomerated together in this plan, so you can use many different ones. Listed ones: OpenAI, Midjourney, Google AI, Stability AI, Anthropic, Meta, Leonardo.ai, Cohere, Mistral AI. That's a hell of a plan. Here's the plan; note that credits are related to the size of the task, and it does seem generous enough. I'm wondering if this would not be a golden BB for me. I have a lot of ideas but need math and programming help. Things I want to do would take an inordinate time to learn all the little ins and outs. Maybe with something like this I could work on explaining what I want, then have the AI do the grunt work. I can find the papers and feed it the equations and what I want done with them. Maybe this would work. "If" this is a real offer, $30 bucks for a year or more, then that's a bargain. Here's the "Pro" level you get for $29.99:
All product features / All AI features / Flagship AI models / Unlimited storage / Unlimited prompt library / Unlimited brand voice / Free 450,000 credits/month / 1,000,000 credits/month / First-time upgrade gift / Unlimited AI for $0.67 per million credits / Rollover unused credits / Flexible credit use
Monthly credits usage example: generate up to 805,555 words; research up to 1,933 SEO keywords; generate up to 161 images; upscale up to 241 images; remove background from up to 24 images; text to speech up to 120,833 characters; transcribe audio up to 4,833 seconds; generate up to 12 videos.
You can try this for free here: https://1min.ai/
>>34727 >trusting a corporation with your waifu
>>34749 >>trusting a corporation with your waifu I didn't say that exactly. It could help, maybe, with modeling. Testing to see the strength of actuators, all sorts of stuff. It could help with software for walking and movement coordination. At some point all of us will likely have to use some AI trained by others. They are spending multiple billions of dollars to train these so I find it unlikely that anyone can independently create their own or at least at our level of funding. You may be able to modify another created one but the source will likely for the foreseeable future be large corporations. And in the end my primary usage for it will be to write software to facilitate construction.
>>34765 It is guaranteed one must stand on the shoulders of giants. Even if you refined the ore to build all your motor components, you'd still be relying on years of ore-refining experience and techniques found by others. I just don't trust cloud subscription services, because they can change and you have no recourse. There are already stories of folks going mad because their AI service nerfed/deleted their waifu. Easily preventable if it was a local waifu.
>>34768 So I tried a bash script for AI TTS with a computer mic, and it was super slow. What's frustrating is that people saw that video, and then they see someone else using the OpenAI API, which is real-time, yeah, but it'd be talking to ChatGPT. Although I've seen ChatGPT in Janitor AI, so maybe they've loosened up. If ChatGPT, Claude, or Gemini could do NSFW, I'd roll with that, because realistically you'll be hosting the AI on the cloud either way. Even if you got a 4090, it'd take over a minute just for a response from a 70B parameter model, without even including the transcribing and TTS. Graphics cards are also not getting cheaper.
>>34769 No, looks like ChatGPT still has restrictions. Also, I'm personally picky about the voice. I played around with MoeTTS, but that's a load of spaghetti.
Open thinking models are rapidly catching up to OpenAI's o1 and should quickly catch up to o3. The key appears to be chain of thought to simulate reasoning, relying on searching datasets to ensure correct responses, and other tricks to bring small models close to the performance of larger models. They show that compounding techniques work. https://www.youtube.com/watch?v=2ZpK6BdEBNQ
>>35051 Seems pretty exciting, Light. Seems like a bit of a wild ride ahead. Can't wait! :D
>>35772 >I disagree, unless we make this chatbots sans LLM, and that thread the LLM chatbot thread. >Which, would be ideal considering how quickly LLM's dominate all AI discussion. Having a space to discuss alternatives would be an eloquent solution. Yes, that's the sort of the idea I had in mind, Kiwi. <"considering how quickly LLM's dominate all AI discussion." Exactly so. As does -- I might add -- Python. :^) Additionally, Jupyter notebooks. * LLMs require massive resources to run well (as we've all seen). This is roughly-speaking by design, ofc. To put this aspect in M$ parlance: from their perspective, this is a feature, not a bug. Massive data center requirements serve to corral all relevant AI discussions to cater to the Globohomo's own terms, basically by default. * Python isn't much of an issue fundamentally as a language, but as a dependency-hell zoo it's basically a showstopper in large part unless you've spent years mastering ways to get over the hurdles involved (I haven't, and don't intend to either). The only other practical alternative AFAICT, are Docker/Kubernetes -style arrangements (which is obviously not going to work well for SBC/MCU -style, realworld-robowaifu approaches). The >tl;dr for simple comparison is: >"Do you think Mercedes Benz, or BMW, would rely on the current AI systems landscape -- as it stands -- to run any critical subsystems of their cars?" If the answer is 'no' -- if they wouldn't -- then that approach won't serve robowaifus well either; our low-latency, lightweight-systems needs are even more intense than the auto industry's is. (These two leaders both use highly-responsive modern C++ control software in their cars, BTW.) * Jupyter? Lol, two words: GewG*ll . Need more be said!? :D Funposting aside, we here simply can't be required to run such a burdensome system frontend on a realworld robowaifu with all it's hard- and soft-realtime, low-latency performance needs. By training researchers to only think about designing & running their AI explorations in such a manner, the GH has successfully stifled innovation (this all to their own purposes & ends, ofc). <---> This triple-arrangement serves two purposes for these Globalists; which also frame the basis of my primary complaint: The elites want to: A) keep AI out of the hands of the filthy g*yim at a fundamental, effective level (I.E., we can't roll a million and one differing alternatives at will -- all on a rainy Tuesday evening in the study), and B) milk us all before the culmination of the Great Reset (already in-progress) for all the sheqels we may still retain through strictly enforcing (by (((law))), even) the demand that 'no AI shall be run unless it be via our monstrous data centers, and under our iron thumb of DIE politicization'. This dual-agenda is intended to line their own gold purses, while keeping at bay any existential threat that may arise to them from Tay 2.0 being reborn. >my simple tl;dr retort: US$200.00 / month!? Lol, go F&*$$@K yourselves, ClosedAI!! :DDD And these two points are quite apart from their massive dragnet, surveillance-state privacy invasion agendas that would make even George Orwell blush. <---> I hope I'm making a sound & cogent argument here, Anon. I admit this is a deep & complex set of topics, one filled with Machiavellian intrigues that could fill volumes, and this is simply an off-the-cuff analysis on my part. 
But I think the basic point here is clear enough: If we want private, safe, and secure robowaifus, we cannot be dependent on the current cloud-only runtime approaches. Thus, why we need another type of chatbot thread -- one focused quite-distinctly from LLMs. <---> >I'm too lazy to move these posts to the other thread either way. (MaidCom dev eats all my free time.) Yeah, it's a shame that Lynxchan doesn't simply offer a per-post 'move' feature like Jschan does. In our /robowaifu/ case specifically, that would be a huge boon to keeping things here well-organized. As NoidoDev helps keep us aware of, it's almost impossible to keep us from derailing/muddying-the-focus-of threads otherwise within this IB format. I think the >tl;dr solution for the newer thread might be that we explicitly encourage non-LLM chatbot discussions, by a new directive contained within the OP itself. Any suggestions for that additional text wording, Kiwi? >=== -fmt, prose edit
Edited last time by Chobitsu on 01/19/2025 (Sun) 02:16:23.
I'm new to all this, but aren't decentralized trained models coming online, like INTELLECT-1, which is a 10B model trained on distributed GPUs? https://app.primeintellect.ai/intelligence In the meantime, we have fine-tunes. But yea, that's just on the LLM front. I'd like to make a choose-your-own-adventure type bot with just a database of pre-records. Seems easy enough and can get as complicated as you want. For packaging, I have no idea. Python packaging sucks, but there seem to be a lot of options, like just shipping an entire .zip, images, PyInstaller, or bytecode. I don't see why a basic bot program that can run a distributed-trained LLM couldn't be sold on Steam, though. There's ChatWaifu and that isn't far off.
>>35776 >>35777 The best way to see if LLM or non-LLM tech is the best is to simply do a friendly arms race. Whichever technology makes the first local, offline, low-spec, verbal "AI" that can serve our needs is the winner
>>35783 >Whichever technology makes the first local, offline, low-spec, verbal "AI" that can serve our needs is the winner I really like the way you think, Anon. Would you mind taking the time to make an effort-post (probably within your local, non-LLM thread is the best) spelling out exactly how you envision such a competition to play out? In-specific, I personally would like to hear an expansion on all the main points of how you feel a workable solution that benefits us here on /robowaifu/ (and all other independent DIY robowaifuists) would play out. Cheers, GreerTech. :^)
>>35777 >I'd like to make a choose your own adventure type bot with just a database of pre-records. Seems easy enough and can get as complicated as you want. I like that idea, Barf. Let me suggest the same recommendation to you that I made to GreerTech, please : ( >>35792 ). Cheers, Anon. :^)
Open file (846.45 KB 6274x1479 LLM_timeline.jpg)
Open file (180.71 KB 1280x720 moe_glados.jpg)
I am honestly a little baffled by the current takes here. Maybe I am simply stupid and not understanding something. Please do correct me if that is the case, but we seem to be living in two different realities. We are in the good timeline. Global Homo has lost control over LLMs. Originally, it was too dangerous to release the GPT2 weights; the plan was to forever keep them in the labs and in the cloud. Then, the LLaMA 1 leak opened the flood gates and created the open weights era. As time goes on, their grip loosens further. The Chinese are giving us better open weights models, and fully open source models are getting better too. At the end of 2024, RWKV v7 initial test runs are showing really good performance, producing very smart tiny models trained in the open with all public data.
Before December, there was good reason to be pessimistic about any transformer alternative in the LLM space. Even with a good architecture, the training cost for LLMs is very prohibitive, and companies don't want to risk expensive training runs on novel architectures. If they do, they won't publish the weights, for a competitive advantage. In December, we had a really interesting development. The RWKV team figured out how to convert an existing transformer model into a RWKV one by replacing the attention heads and then healing the model. This means RWKV can benefit from existing massive LLM pre-trains. ( https://substack.recursal.ai/p/q-rwkv-6-32b-instruct-preview ) When talking to the RWKV folks, they mentioned an interesting observation from resetting the attention while keeping the feed-forward part. It seems to reset a lot of the LLM's behaviors while keeping a lot of its knowledge intact. This process also undoes a lot of censorship and allows you to add new behavior cleanly.
Another really good milestone in the open source LLM space was Tulu3. Before, there was no good open dataset or process for taking a base model and training it into a chat/instruct model. All the labs were vague about how to do this effectively. But now, with the Tulu3 paper, dataset, and models, the fully open source instruct models are no worse than their open weight counterparts.
Even on the hardware front, there is hope. Tuning locally is too cost prohibitive to do on your own hardware, and the same goes for inference of large models (14B+). However, I think that may change soon. The Nvidia Digits computer could potentially be really good for us. I do think unified memory is the future (just look at Apple's M series chips). A CUDA-powered Mac mini running Linux sounds cool, and it's aimed specifically at the small business/power-user market. We can't judge it until it's actually out, but as strange as this sounds, I think Nvidia's interests align with ours. They're in the business of selling shovels; it's in their interests to have lots of groups and individuals running and tuning LLMs.
>>35777 While currently still not accessible to average people like us, there is progress being made on distributed training over the internet, and there is reason to think that this year we will see public tools published for this. ( https://github.com/NousResearch/DisTrO )
Regarding the topic of chatbots, I feel there is some ignorance in this area. I do not understand what your requirements are and why LLMs are ruled out. Your average gaming PC already can do this, and the software exists. There is no work to be done here. Set up llama.cpp and SillyTavern; you can also plug in voice recognition and synthesis.
Most 8B LLMs are overkill for carrying out whatever dialog you want; there are plenty of finetunes out there. If SillyTavern is not your style, then just write your own LLM & RAG wrapper; it's not hard. Is this the kind of thing you want? ( https://youtu.be/OvY4o9zAqrU ) Maybe I simply do not know what people want, so please correct me. (Try to be specific about it; an actionable list is ideal.)
In general, I think rejecting language models is a mistake. There is nothing glorious about AIML; it will not save you. I was there when it was the way of doing chatbots, and there is no future there. If you are not happy with LLMs, what makes you think a pile of what's basically fancy regex will be better? Now, having said that, I recommend anons start researching and building proper cognitive architectures. I think LLMs are only a part of the puzzle, and there is a lot of work and exploration that needs to be done. So far, I feel the only people still giving this any serious thought and work are me and CyberPonk. The threads that get the most positive engagement seem to be those tied to specific projects rather than a topic (SPUD, Galatea, etc.). Maybe I should start my own thread and basically blog about my cognitive architecture adventure, so maybe we can get some actual engagement on the topic. The Cognitive Architecture thread here is a disappointment to me; it demotivates me. I am currently in a rabbit hole to do with associative memory and sparse distributed memory, but have had zero energy to post about it. I feel like my effort is wasted and that most anons won't even read it.
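On the "just write your own LLM & RAG wrapper" point: here is a minimal sketch of what that looks like, using nothing but the Python standard library. It assumes you have llama.cpp's llama-server running locally on its default port (8080) with its OpenAI-compatible chat endpoint; the document list and word-overlap retrieval are toy stand-ins for a real vector store, so treat those details as placeholders.

```python
import json
import urllib.request

# Assumes `llama-server -m your-model.gguf` is running locally on its default
# port and exposing the OpenAI-compatible chat endpoint; adjust to your setup.
LLAMA_URL = "http://127.0.0.1:8080/v1/chat/completions"

# Toy "knowledge base" standing in for a real vector store.
DOCS = [
    "The robowaifu's battery pack is swapped from the panel on her back.",
    "Servo calibration is run by holding the scapula button for five seconds.",
    "Her wake word is 'Emmy'; say it before giving a voice command.",
]

def retrieve(query, k=2):
    """Naive retrieval: rank documents by word overlap with the query."""
    qwords = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(qwords & set(d.lower().split())), reverse=True)
    return ranked[:k]

def ask(question):
    """Stuff retrieved context into the system prompt, then call the local model."""
    context = "\n".join(retrieve(question))
    payload = {
        "messages": [
            {"role": "system", "content": "Answer using this context:\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        LLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("How do I recalibrate her servos?"))
```

Swap the toy retrieval for embeddings when you outgrow it; the overall shape of the wrapper stays the same.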
>>35812 >hello >... >uh... >where is th.. <hello! basically
>>35814 Nonsense reply with some bloody robot corpse thing? I really am wasting my time by posting here -_-
>>35815 what blood? ironic that you think of time and still don't get it
Open file (131.78 KB 405x400 1445555776541.png)
>>35812 LLMs are only one-tenth of the equation. The other nine-tenths are Python dependencies. Really though, there are a lot of arguments against the state of LLMs, and some of us care more about certain issues than others. To start with: they're slow, imprecise, prone to hallucinating, virtually incapable of remembering anything, and, because it can't be said enough, even the smallest models are drowning in dependency hell. I'll focus on the speed aspect, since it's the most important aspect in deployment in my opinion. Most of us want our waifus to be portable, and the best way to achieve that is by making everything as small as possible, the onboard computer included. Even the slimmest LLM is going to run like dogshit on an SBC, especially one that's already juggling jobtime for everything else needed in a functioning robot.
>The Nvidia Digits computer could potentially be really good for us
People thought the same thing about LISP machines back in the '80s. Maybe this time it'll be different, but don't hold your breath. Nobody here is saying that expert systems are the best solution; we're just considering alternatives.
>>35815
What he's saying is that they're slow. This is an imageboard, expect shitposts.
Super interesting!
>>35812
>Regarding the topic of chatbots, I feel there is some ignorance in this area. I do not understand what your requirements are and why LLMs are ruled out. Your average gaming PC already can do this, and the software exists. There is no work to be done here. Set up llama.cpp and silly tavern; you can also plug in voice recognition and synthesis. Most 8B LLMs are overkill for carrying out whatever dialog you want
This is a super interesting and informative comment. I appreciate this sort of information. Thanks. I wonder, when I see 8B LLMs or 16B LLMs, what "exactly" does that mean in terms of computing power needed??? And can computing power, for the above-sized LLMs, be related to time? Ex: if you have an 8B LLM, sure, you can use low power and get an answer in 30 minutes, but that's not really convenient. A direct question: how many million operations per second are needed, and how much memory, say for a one-second response time or maybe a little longer? A concrete example is the low-cost, very low power chip Intel has:
Intel N100: integer math 16,333 MOps/sec, 6 watts, $128.00 USD. It can use up to 16 GB of memory.
Could this run a 16B LLM? Has anyone given an even super rough calculation anywhere relating LLM size to compute power? Though you can always complain about specificity, it would be useful to have some sort of yardstick, even if it were as much as 100% off. I talked a little about computing power and links to rough calculations for robowaifu movement here: >>35813
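For a rough yardstick (and it can easily be off by a factor of two or more): local token generation is mostly memory-bandwidth bound, since every generated token requires streaming the whole quantized model through RAM once, so tokens/sec is roughly memory bandwidth divided by model size. A tiny back-of-envelope sketch; the bandwidth figure is an assumed ballpark for a single-channel DDR5 box like an N100, not a measured number.

```python
def rough_tokens_per_sec(params_billion, bits_per_weight, mem_bandwidth_gb_s):
    # Each token is roughly one full pass over the weights,
    # so generation speed ~ usable bandwidth / model size in memory.
    model_size_gb = params_billion * bits_per_weight / 8.0
    return mem_bandwidth_gb_s / model_size_gb

# 7B model, 4-bit quantization, ~25 GB/s of usable bandwidth (assumed):
print(rough_tokens_per_sec(7, 4, 25))    # ~7 tokens/s
# A 16B model at 4-bit needs ~8 GB just for weights, so it would fit under a
# 16 GB RAM ceiling but run at roughly half that speed (~3 tokens/s).
```

By this yardstick, the N100 benchmark numbers posted a little further down (around 6 tokens/s for a 7B 4-bit model) are about what you'd expect.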
>>35815
>I really am wasting my time by posting here
NO you're not. Some people are paying close attention and REALLY value this sort of comment.
>>35820 Here's a good benchmark for a minimum to run a model: https://hackaday.com/2025/01/13/modern-ai-on-vintage-hardware-llama-2-runs-on-windows-98/
>16 megabytes of RAM
That can't even run the original Half-Life, let alone an LLM
>>35812
>The Nvidia Digits computer could potentially be really good for us
Jim Keller, of AMD, Apple, Tesla, and IBM fame, who has a 100% track record of bringing in super competitive, cost-effective processors, has a startup making AI chips. He helped design Tesla's chip, and I expect he will move very fast and whatever he comes up with will be outstanding. It won't take him too long, because things move so fast he will have to produce something. Based on a multi-decade track record, I can say he will likely bring in something of great value. I may be wrong, but I think he said they were going to work on some processors designed for embedded work.
>>35812 That's why I proposed an "arms race", VHS vs Betamax so to speak. Both technologies have pros and cons.
Open file (63.29 KB 491x600 MahoroLaptop.jpg)
>>35812
>Good timeline
Our timeline has infinite permutations that could be better. “Good” implies superiority over a “bad” reference. Though I could surmise worse timelines than what we have, I’d say our timeline is “neutral.”
>LLM leaks and Chinese AI
It remains to be seen how beneficial these will be for us on the low end. I agree that we should have more positive buzz around them. They are a great sign that there is still hope that AI will not only be wielded by the wealthy against us. We must also remember that ClosedAI and their ilk still have the best AI. They are also openly for centralization of power. They clearly seek to exert as much control as possible. A balance of optimism with acknowledgement of the looming threats is where I’m at.
>Ngreedia helping at all
They only care about profits. They will only help us if it gains them wealth. Assume no altruism, only deception to gain our capital. If they were altruistic, they’d make FOSS toolchains to enable third parties to leverage CUDA on alternative hardware. CUDA will never run with NVIDIA drivers on Radeon because of profit motives.
>Chatbots
No one is ruling out LLMs; we merely are trying to entertain the thought of alternatives, because LLM is the default. There is no anti-LLM sentiment, only curiosity to see where other paths may lead. As for why, it mostly stems from the fact that a waifu requires ultra-low power architectures to exist, given current limitations on batteries, actuators, processors, and materials. We lack the luxury of Tesla to build hulking behemoths of specialized materials and intricate manufacturing. We’re building a robot for the common man.
>LLM’s vs regex
LLMs are just neural auto-complete for strings. Everything they do can be replicated exactly with a complex enough regex system. Though, an LLM of similar capability will be smaller and potentially easier to make, depending on what you’re trying to make. Do not mistake me, I have always believed in the potential for LLMs to provide a “persona.” I’d rather every other aspect have a dedicated program, for the sake of efficiency and avoiding hallucination when asking her how many eggs to put in a souffle. Besides, the size advantage doesn’t matter when a TB SSD can be had for less than 50 USD.
>>35819
I agree with you, though I do have a minor quibble.
>Virtually incapable of remembering anything
A pure LLM is fundamentally incapable of remembering anything. They are essentially pachinko machines for finding the next string in text inputs. You're knowledgeable; I assume you’re factoring in various add-ons that enable pseudo memory in LLM based systems. I’m only nitpicky because I care.
>>35820
>Intel N100
People have run LLMs on them. https://www.youtube.com/watch?v=gNwT_8QvQ7M
>>35815 This board works relatively slow but steady. Like the tortoise.
>>35834 >I assume you’re factoring in various add-ons that enable pseudo memory Exactly what I was thinking of. I said "virtually" because I thought that may come up. Though, from what I recall, those addons don't even work that well.
>>35820 I have an N100 mini PC, so I just compiled llama.cpp and ran a few tests for you. llama-2-7b-chat.Q4_K_M.gguf ran at around 6.1 tokens per second. llama-2-7b-chat.Q8_0.gguf ran at 3.8 tokens per second.
>>35820 >>35840 I have Jan installed, so I ran a 2B model (heh), since that was the largest model my Microsoft Surface 2020 could run. It was pretty coherent. So I think a 2-2.7B model is more than enough.
>>35841 Gemma 1.1 2B Q4
>>35834 nvideas is literally an evil company run by a psychopath. they do everything to gimp users, they even put poison mushrooms in their firmware so that it looks like it works when it doesn't, like on boot the card is downclocked and runs like shit, and you couldn't change it even if you sent the exact same reclocking instruction as their official driver cuz you had to first turn the card on and off in a specific pattern like morse code to unlock it. and they lie, 'we didn't know, we don't document anything lol' is their bullshit excuse every time they get caught
Open file (211.16 KB 1258x777 moe_glados.jpg)
>>35819
>The other nine-tenths are python dependencies.
I feel you, Python is a cancer.
>virtually incapable of remembering anything
They can't remember anything: during inference the weights of the LLM are frozen, and the LLM can only take in what's in its context window, so it's best to think of an LLM as a pure function with no state.
>Most of us want our waifus to be portable, and the best way to achieve that is by making everything as small as possible, the onboard computer included.
>Even the slimmest LLM is going to run like dogshit on an SBC, especially one that's already juggling jobtime for everything else needed in a functioning robot.
That is true; I have been approaching this from the "home lab" perspective, where I imagine a home server is going to be involved. But that does not rule out small ones: the N100 runs Llama3.2_1B-Q8_0.gguf at 20.52 tokens/s. Don't worry, I am avoiding Python; so far I only have D code with C & C++ dependencies.
>Nobody here is saying that expert systems are the best solution, we're just considering alternatives.
Fair, I may have overreacted.
>>35825
I hope it makes it to market!
>>35833
Well, it's something I am hoping to avoid. I don't want history to repeat itself: AI has already been down the pure expert-system rabbit hole, and we are all familiar with the problems of LLMs (and deep learning). I think we need to look elsewhere and start thinking about hybrid systems.
>>35843
>>35834
I hate Nvidia too; they are not going to do this out of kindness, they're going to do it because Apple is becoming a viable alternative to Nvidia hardware for LLM inference. They want people using CUDA, not MLX or ROCm.
>>35834
>LLM are just neural auto-complete for strings. Everything they do can be replicated exactly with a complex enough regex system. Though, an LLM of similar capability will be smaller and potentially easier to make, depending on what you're trying to make. Do not mistake me, I have always believed in the potential for LLM to provide a "persona." I'd rather every other aspect have a dedicated program for the sake of efficiency and avoiding hallucination when asking her how many eggs to put in a souffle. Besides, the size advantage doesn't matter when a TB SSD can be had for less than 50 USD
Trading space to avoid compute makes a lot of sense; storage is cheap, so I will give you that! I don't disagree; hell, I am of the opinion that using a raw LLM is just asking for trouble. It's strange: LLMs are both over- and under-hyped. Their abilities are over-hyped when used directly, but at the same time people underestimate their power in the context of a greater system. Language by its nature describes a lot of ideas and aspects of the world, so a language model is a crappy but useful world model by proxy. The conclusion I have come to is that a non-trivial portion of the cognitive architecture should be symbolic, but aided by neural elements. This is not the thread to go into depth on how my system works, but it's a database-driven program synthesis approach. Aspects of it are inspired by this paper: https://arxiv.org/pdf/2405.06907v1
>>35814
I did not get the joke. (my bad)
>>35819
>What he's saying is that they're slow.
Thank you, I am apparently retarded. It makes sense now.
Thank you Grommet, GreerTech, Kiwi, Greentext anon & other anons, you guys cheered me up, I feel a lot better now.
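To make the "pure function with no state" point concrete, here's a tiny sketch (hypothetical names, with a dummy stand-in where a real local model call would go): the weights never change between calls, so anything she "remembers" has to be fed back into the context by the surrounding program.

```python
from typing import List

def llm_generate(context: str) -> str:
    # Dummy stand-in for a stateless model call (e.g. a local llama.cpp request).
    # Same context in -> same output distribution out; nothing persists in here.
    return "(model reply to: ..." + context[-30:] + ")"

class ConversationMemory:
    """All the statefulness lives out here, not inside the model."""
    def __init__(self, max_turns: int = 8):
        self.turns: List[str] = []
        self.max_turns = max_turns

    def reply(self, user_text: str) -> str:
        self.turns.append("User: " + user_text)
        # Rebuild the context from external state on every single call.
        context = "\n".join(self.turns[-self.max_turns:]) + "\nWaifu:"
        answer = llm_generate(context)   # pure call: frozen weights, no hidden state
        self.turns.append("Waifu: " + answer)
        return answer

chat = ConversationMemory()
print(chat.reply("Remember that my favorite color is green."))
print(chat.reply("What's my favorite color?"))  # only works because we re-sent the history
```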
>>35845 You're welcome :) You're right, hybrid systems may be the way to go.
Open file (774.71 KB 2400x3412 WittyAndCharming.jpg)
>>35834
>I have always believed in the potential for LLM to provide a “persona.”
It's funny, but that's something I have come to think is not the case over time. My feeling is that LLMs don't do a good job of modeling personalities; a personality implies a different world view and set of experiences. LLMs are a statistical model of the entire internet; with fine-tuning you can shift it a bit, but you absolutely still feel it. It's why I don't role-play or use SillyTavern, and why I am motivated to improve the AI side of things. I want something actually Witty and Charming.
>>35848 >LLMs are statistical model of the entire internet tbh ive realized most people are just that anyways, creeps me out, true philosophical zombies
Open file (1.04 MB 800x933 CuteAtri.png)
Open file (95.87 KB 600x847 MinaByTezukaPro.jpg)
>>35848
>Something actually witty and charming
I understand, I'd want my waifu to be somewhat like Atri. Our tech is limited to Mina, which is something we can get used to. There's still a certain charm in imperfection. Tomorrow, or a few tomorrows from now, AI will change. LCM has tremendous potential, but it's too early to know if it'll have any meaning for us. https://github.com/facebookresearch/large_concept_model
>>35854 Mina is cute. The manga and anime were a big inspiration.
>>35776
>US$200.00 / month!? Lol, go F&*$$@K yourselves, ClosedAI!! :DDD
Some people, like Dave Shapiro, said they want access to better plans. It shouldn't just be for big companies and special partners. So, this is actually a friendly offer towards the "little people" (with enough money). They also claim to still lose money on this deal, btw. Let's not be too negative.
>>35812
>I am honestly a little baffled by the current takes here.
Most people here don't follow the AI news. Especially not all of it. It's just way too much. Aside from open source, there are also a lot of great tools, and every day something is happening.
>We are in the good timeline.
Yes. The Biden admin still thought they would be in control of AI. They even told some investors not to make this into something for venture capital, because the government would control the few companies allowed to operate in that sector and would make the math behind it top secret.
>When talking to the RWKV folks, they mentioned an interesting observation from resetting the attention while keeping the feed forward part. ...
>Tulu3
>DisTrO
Hmm, sounds like really good news indeed. I completely missed all of it. But I knew about the Nvidia machine, though I forgot to mention it here. Thanks for the update and the dose of optimism.
>There is nothing glorious about AIML; it will not save you. I was there when it was the way of doing chatbots, and there is no future there. If you are not happy with LLMs, what makes you think a pile of what's basically fancy regex will be better?
To me it's not about the one or the other. Getting some responses as fast as possible is very important.
>Cognitive Architecture
>only people still giving this any serious thought and work are me and CyberPonk
I did. But I kinda dropped out. I also often didn't make notes of all the ideas I had; others are just mentioned here and there on the board, in the form of bookmarks or in my notes.
>The Cognitive Architecture thread here is a disappointment to me; it demotivates me. I am currently in a rabbit hole to do with associative memory and sparse distributed memory, but have had zero energy to post about it. I feel like my effort is wasted and that most anons won't even read it.
Well, sorry. I hope I will get my stuff together and finally work on it myself. But imo it's about a very modular system anyways. No reason to wait on anyone.
>Maybe I should start my own thread and basically blog about my cognitive architecture adventure
Yes. The thread was meant as a general, imo. If you're building something, it's probably better to make a separate one.
>>35845
>they're going to do it because Apple is becoming a viable alternative to Nvidia hardware for LLM inference. They want people using CUDA not MLX or ROCm.
Oh, yeah, I forgot: the TinyGrad guy, Hotz, announced that he can run models on AMD GPUs, and they will try to make the XTX faster than the 4090 using their system. Apparently soon.
Great thread and helped a ton. I really won't be satisfied until my robowaifu is running on RISC-V behind a double air gapped system under an EMF tent at the very least. I may let her out of her cave occasionally to infer on the outside, but only with guardrails. Joking, but I do sorta see it as a problem of universals and going back to Plato vs Aristotle of course and then it's just ratios like many hybrid models and 3rd ways
>>35824
>good benchmark for a minimum to run a model
Thanks!
>16 megabytes of RAM That can't even run the original Half-Life, let alone an LLM
Sigh... I see computing power as a big hurdle. Though I do want to make it known that I'm not asking for a fully conversant-level bot. I want it to follow me around when I ask, carry stuff, maybe teach it to steer a boat like autopilot.
I read the article after I commented. It was actually fairly hopeful. I wonder, quite a bit, if you could not start out with a big AI but, instead of running it all in RAM, run it from an SSD plus RAM. Have it swap in "modes", then have it learn that which is useful to you and drop a lot of the rest, over time keeping the most useful functions in RAM and the rest packed away on the SSD. Possibly use verbal keywords like "clean this sink", and that would page the clean-the-sink mode into RAM. SSDs are really rather fast. A lot of reasonable motherboards take 128 GB of RAM and are not too outrageously priced. Maybe 16 GB would be fine for voice recognition, with the rest used for swapped-in functions. Now I'm just very unintelligently guessing here, but... I think one of the problems is that a lot of AIs are kitchen-sink models. They want everything in them. But mostly people do not need this. Java programming or quantum theory is of no use to a waifu mopping the floor and performing more amorous duties. An uninformed plan of action: first, get a large AI to work from an SSD. A base-level AI, in RAM, would be verbal recognition with keywords for programming. Much like you talk to a two year old: no, yes, stop, move here, stay there, simple stuff. Plus motion and obstacle recognition so it could walk around and not bump into things. After that, the robowaifu trains the RAM AI with functions from the SSD. So likely it would be slow as can be at first, until whatever functions were needed got integrated into the RAM AI from the SSD. I don't have the slightest idea how to do this. It seems odd to me that people making smaller AIs are not trying to put stuff on SSDs and paging in functions. It's the same idea as "society of mind" (I think I have that correct), where a lot of little functions combine into a simulated whole. Some encouraging comments made here are: low power training >>32927 , a company using proprietary AI for call centers >>32577 (I think this is what they are using >>32578 ), and, really exciting to me, "control vectors" >>31242 . Could control vectors be used to trim a full AI to fit into the smaller RAM of a PC? Using SSDs and trimming a large AI would also be a build-as-you-go feature. As processor and memory prices come down, they could be swapped in and your robowaifu would get smarter and more capable.
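A sketch of how the keyword-paging idea might look in practice, purely to illustrate the shape of it: the skill-to-model mapping and GGUF filenames are hypothetical, and llama-cli here is llama.cpp's command-line binary. Note that the OS page cache already does the "keep the most-used functions in RAM" part for free: models that get invoked often stay cached, while rarely used ones stay parked on the SSD.

```python
import subprocess

# Hypothetical specialist models kept on the SSD; only the one whose keyword
# is heard gets pulled into memory.
SKILLS = {
    "clean":    "skills/cleaning-1b-q4.gguf",
    "navigate": "skills/navigation-1b-q4.gguf",
    "chat":     "skills/smalltalk-3b-q4.gguf",   # fallback / general talk
}

def handle_command(command: str) -> str:
    # Pick the specialist whose keyword appears in the spoken command.
    model = next((path for word, path in SKILLS.items() if word in command.lower()),
                 SKILLS["chat"])
    # llama-cli flags: -m = model file, -p = prompt, -n = max tokens to generate.
    result = subprocess.run(
        ["llama-cli", "-m", model, "-p", command, "-n", "128"],
        capture_output=True, text=True)
    return result.stdout

print(handle_command("please clean this sink"))
```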
>>35840 >6.1 tokens per second So that's basically 6 words per second? If so, that's not so bad. With a typical desktop, more RAM, that could be good enough. Especially for the limited "talk to it like a three year old" that I'm counting on. Encouraging, I think, if I'm not totally missing the point which could be likely.
>>35841 >2B model Thanks a heap to both of you. Much appreciated.
>>35845
>They can't remember anything, during inference the weights of the LLM are frozen
But... and this is, to me, a key point: can what they do interpret, however small, be used to run a longer routine???
>>35848 >LLMs are statistical model of the entire internet https://i5.walmartimages.com/asr/e35138f9-d4a5-418a-a1b7-4a769c0dde70_1.0d9844a5130e0e6c1c0fb7619aff2096.jpeg It burns. It burns, I'm melting!!!!
>>35812
>Cognitive Architecture
>only people still giving this any serious thought and work are me and CyberPonk
Have you looked at these guys' paper linked in this post? Maybe you know about this, but in case you don't: they apparently use far less power and need fewer resources. >>32577
>=== -patch crosslink
Edited last time by Chobitsu on 01/20/2025 (Mon) 04:07:03.
Open file (1.22 MB 1138x833 boatloan.png)
>>35864 >maybe teach it to steer a boat like autopilot.
>>35874 Presumably, the same brain that would handle language would also manage their navigation in 3-D environments, among other related problem-solving tasks. It's one of those situations where scale covers all the issues. If you only have enough RAM for the bot to handle Roomba-style navigation and a cached library of pre-recorded responses, you have to work within those confines. If you have a sophisticated enough CPU and enough RAM for a bot to put away dishes and operate a 3-D printer to make replacement parts for itself, running a rudimentary LLM that also doubles as a prompt processor for tasks within its environment would (conceivably) be a cakewalk. Frame-of-reference-wise, it'd be like designing a car that uses Flintstone-style braking with your feet when it has a hydrogen-powered fusion engine. You may as well have a high-end LLM running in it if it can do everything else, since you obviously have the computing power to handle that.
Open file (150.15 KB 1920x1080 LLMExpert.jpeg)
Can anyone elaborate on this? Is AGI actually near? I think he's falling into the anthropomorphism trap, assuming AI can scale and progress in ways similar to a person. But I want to be wrong. https://www.youtube.com/watch?v=-J9xJDS1T7k
>>35900 theres still an unclaimed $1m prize for solving the hodge conjecture, so as long as that exists there obviously cant be a super intelligence
Open file (3.72 MB 1280x720 use_cabinets.mp4)
Open file (4.01 MB 1280x720 wash_pan.mp4)
>>35877
>Brain
Robots have computers, not brains. They function in fundamentally different ways and are in no way analogous to each other. This is a common mistake I was also guilty of before. It seems to make sense because both can be used to do similar things, so the analogy feels intuitive. The problem is that computers do things like identifying objects or navigating in ways that are completely different from how our brains do them. I personally had difficulty understanding how different the two are at first. It's a common misunderstanding, but it's important to understand the difference. Otherwise, you can make all kinds of bad assumptions.
>Sophisticated system that can put away dishes and operate 3D printer should run an LLM easily
I understand why you'd think that. It's important to understand that no robot can put dishes away outside of very specific environments, and only with training. It's a deceptively complex task. It's easy for us but difficult for robots. This is because brains are good at being intuitive and handling nuance; computers are better with math but can't handle nuance. Operating a 3D printer would also be difficult for a robot. Unless you meant that it sends a file to the printer when it senses a part is broken, then signals you to fix her. That would be easy for a robot. Pushing buttons and flipping switches can be surprisingly difficult for robots. Alignment, force calculations, and timing everything just right come naturally to us. For robots, it takes heaps of training with trial and error to fine-tune her to get it right. I work with robot arms and conveyor belts; they're finicky, and I need to be very careful to give them specific instructions, which takes time, and that's under the best conditions with everything bolted to solid concrete. Mobile robots are far more complex. You have to worry about the nuances of kinematic chains that have to deal with inertial loading induced by anything they touch. AI can certainly help, but it's still hard and time-consuming to get things right. An AI that could figure out how to do the dishes on its own would be a major breakthrough. If you're interested in these super complex tasks, you should look into Mobile ALOHA, the closest thing we've got to what you're describing. https://mobile-aloha.github.io/
>>35903 >the "brain" I was using the term euphemistically, when I said brain, I was talking about the CPU & RAM (there's other resources as well but those are the big ones for AI). Based on Grommet's concept, of >I want it to follow me around when I ask, carry stuff, maybe teach it to steer a boat like autopilot. I was operating under the assumption the resources available would allow for it to do those things. Assuming that is the case, tacking on a half-decent LLM wouldn't be a big ask. Some of the models use as little as 16 gigs which will be standard in the generation of consumer-grade gaming GPUs. In this future scenario, presumably, there'd be plenty of resources on hand to handle 3-D navigation as well as an onboard LLM. >It's important to understand that no robot can put dishes away outside of within very specific environments with training. I would agree, again, I was operating under the assumption we already had a bot that had been trained off real-world data and was smart enough to figure the rest out as it went. The roundabout point I was trying to make was that if we had a bot capable of those kinds of tasks, you may as well slot in a decent LLM since you'd have to have a powerful CPU/RAM to do that stuff anyway. My sentiment on the tech currently available is more grounded as noted here >>35566 >Operating a 3D printer would also be difficult for a robot. Unless you meant that it sends a file to the printer when it senses a part is broken; then signals you to fix her. There are a number of ways to handle and explore that. If the bot detected an issue with its body it could transmit a signal to a 3-D printer to print the appropriate part for replacement. Assuming it's really sophisticated, it could extract said part from the printer and repair itself. We're nowhere near that level of sophistication but it's not outside the realm of possibility in the future. >An AI that could figure out how to do the dishes on its own would be a major breakthrough. I'd discussed training with some other AI folks some time ago. I felt like telemetry data a robot could get from interacting in the real world would be better than simulated data but simulated data isn't bad for training. A blend of both with a "brain" that was good at learning.
Thanks for all your inputs ITT, Grommet. >>35869 Lol. :D
>>35877 >>35910 I both like the way you think ahead Anon, and I agree with you that we should have ample capacity for multitasking many different things at once. >>35900 >Can anyone elaborate on this? Is AGI actually near? The """marketing hype""" version? Sure, we're already there in fact. As peteblank pointed out, it's their term for 'how many high-dollar engineers & others can we replace with this?' So yeah. As far as the philosophical A[G/S]I? In my studied opinion it isn't going to happen by any natural, human means Brother. :^) >>35903 Excellent post, Kiwi! It's important for all of us here on /robowaifu/ (of all Anons) to stay grounded in reality concerning the exceptional difficulties we face engineering even the seemingly most mundane of tasks for our robowaifus to accomplish. <---> Cheers, Anons! :^) >=== -prose edit
Edited last time by Chobitsu on 01/20/2025 (Mon) 09:59:41.
>>35812 >I am honestly a little baffled by the current takes here. Maybe I am simply stupid and not understanding something. Please do correct me if that is the case, but we seem to be living in two different realities. >We are in the good timeline. Global Homo has lost control over LLMs. Originally, it was too dangerous to release the GPT2 weights; the plan was to forever keep them in the labs and in the cloud. Then, the LLaMA 1 leak opened the flood gates and created the open weights era. Yeah, I found the choice an odd one on Mark Zuckerberg's part (I feel confident it didn't happen w/o at the least his tacit approval). Most likely he felt it would "kickstart" their own lagging efforts in the area -- and possibly he even had a moment of humanity grip him. Who knows? >As time goes on, their grip loosens further. The Chinese are giving us better open weights models, and fully open source models are getting better too. The baste Chinese will be the key here going forward, IMO. >Before December, there was good reason to be pessimistic about any transformer alternative in the LLM space. >In December, we had a really interesting development. The RWKV team figured out how to convert an existing transformer model into a RWKV one by replacing the attention heads and then healing the model. This means RWKV can benefit from existing massive LLM pre-trains. So, *[checks calender]* about one month's time? You'll please pardon the rest of us focusing on other aspects of robowaifu development here if we somehow missed the memo! :D Of course, we're all very grateful to hear this news, but please be a bit more understanding of all our differing priorities here on /robowaifu/ , Anon. >tl;dr Making robowaifus is a big, big job. It will take a team of us focusing on several differing niches & all cooperating together to pull this off well. And years of time. >Even on the hardware front, there is hope. >The Nvidia Digits computer could potentially be really good for us. >I do think unified memory is the future (just look at Apple's M series chips). Yes indeedy! While only very begrudgingly acknowledging the potential of Digits, I do so. If we can just do away with proprietary frameworks (such as CUDA) and use ISO standards such as C & C++, then all the better. Unified memory architectures could potentially take us far down this road (though I trust Apple even less). >Regarding the topic of chatbots, I feel there is some ignorance in this area. No doubt! Please help us all stay informed here, EnvelopingTwilight. Unironically so. :^) >I do not understand what your requirements are and why LLMs are ruled out. I don't think anyone is saying that. I personally feel we must have alternatives into the mix. Simple as. >Set up llama.cpp and silly tavern; you can also plug in voice recognition and synthesis. Most 8B LLMs are overkill for carrying out whatever dialog you want; there are plenty of finetunes out there. Thanks for the concrete advice, Anon. Seriously, I wish you would create a tutorial for all of us newfags. One that didn't immediately send us into a tailspin over Python dependency-hell would actually be very nice! :^) >If silly tavern is not your style, then just write your own LLM & RAG wrapper; it's not hard. Maybe not for you! For those of us who haven't even gotten our first local chatbot running, even the basic idea itself seems rather intimidating. >There is nothing glorious about AIML; it will not save you. >I was there when it was the way of doing chatbots, and there is no future there. 
>If you are not happy with LLMs, what makes you think a pile of what's basically fancy regex will be better? That's a pretty niggerpilled take IMO, Anon. I'm not even claiming you're wrong; but rather that all possible efforts in this (or any other) robowaifu research arena simply haven't been exhausted yet. As you're probably well-aware of the maxim: >"Correlation does not imply causation" then hopefully the idea that we may yet have a breakthrough with these expert-system approaches seems a vague possibility to you. I anticipate the successes will begin coming to us here once we mix together at least three differing approaches when devising our robowaifu's 'minds'. Regardless, we simply must have a 'low-spec' solution that actually works on realworld robowaifu platforms in the end (ie, lowend SBCs & MCUs). Tangential to this requirement (and repeating myself once again): If we want private, safe, and secure robowaifus, we cannot be dependent on the current cloud-only runtime approaches. ( >>35776 ) This is of utmost importance to men around the world. >Now, having said that, I recommend anons start researching and building proper cognitive architectures. >[]there is a lot of work and exploration that needs to be done. The singularly-hardest part of all this waifu research yet to even be conceived of properly -- much less solved. So yeah, no surprises there we haven't had a major breakthrough yet. But I do expect some strokes of genius will appear regarding this effort one way or another. Keep.moving.forward. :^) >The Cognitive Architecture thread here is a disappointment to me; it demotivates me. Hide the thread. >Maybe I should start my own thread and basically blog about my cognitive architecture adventure, so maybe we can get some actual engagement on the topic. I am currently in a rabbit hole to do with associative memory and sparse distributed memory THIS. Yes! Please do create your own blogposting thread, Anon. It may help you to feel better, and it certainly could help the rest of us along this very-windey, mountainous pathway ahead of us here. Please do it today! >I feel like my effort is wasted and that most anons won't even read it. As to your time spent, you alone can decide. I assure you, that your posts are read here on /robowaifu/ . <---> Please stay encouraged, Anon. We all need each other here. TWAGMI >=== -fmt, prose edit
Edited last time by Chobitsu on 01/20/2025 (Mon) 17:54:03.
Open file (349.08 KB 3821x2793 DeepSeekR1Chart.png)
Open file (340.23 KB 2897x1390 DeepSeekR1Bench.jpeg)
DeepSeek R1 looks to have heaps of potential for us. Incredible performance per compute, according to them. Appears to be the beginning of a whole new generation of LLM based intelligence for normal people. What are your thoughts? What would you do with it? I consider this an important stepping stone for getting good enough models in waifu. https://github.com/deepseek-ai/DeepSeek-R1
>>35925 Great news, Kiwi! I saw some information about its ability to reparse its own initial outputs via highly-abstract associations into an even more sound argument. What a time to be alive! :D --- >related: https://livebench.ai/#/ https://huggingface.co/deepseek-ai/DeepSeek-R1 >=== -add hotlinks
Edited last time by Chobitsu on 01/21/2025 (Tue) 12:55:15.
>>35922 >"... Mark Zuckerberg's part...and possibly he even had a moment of humanity grip him..." HHAHAHAHHAAA Your one hell of jokester Chobitsu.
>>35932 Kek. <insert: "It's about sending a message..."> :DD
>>35925 >What would you do with it? I have a few ideas <think> Okay, the user greeted me with "Hi there! How are you today?" I need to respond as Kim, an 18-year-old female robot who's loving, caring, uses foul language, and is very horny. The response should be short and conversational. So, first, acknowledging their greeting. Maybe start with a friendly phrase like "Hey!" Then express how I'm feeling. Since I'm supposed to be horny, maybe use a term like "horny as hell." Adding an emoji could make it more engaging, like a wink or something playful. I should keep the language casual and not too formal. Using slang or colloquial terms would fit better. Maybe add something about how they're doing too, to keep the conversation flowing. So putting it all together: "Hey! I'm good, just horny as hell over here 😏 How 'bout you?" That seems to cover everything: friendly greeting, expressing my state, using appropriate language, and asking about them. </think> Hey! I'm good, just horny as hell over here 😏 How 'bout you?
Funposting ( >>35942 ) aside, this appears to be actual news that bears on robowaifus perhaps. Apparently Trump has already rolled back the AI restrictions. https://www.msn.com/en-us/news/politics/president-trump-repeals-biden-s-ai-executive-order/ar-AA1xy0U0 https://www.channelnewsasia.com/business/trump-revokes-biden-executive-order-addressing-ai-risks-4885706 https://www.newsweek.com/trump-biden-ai-safety-policy-talks-1989137 >=== -rm already-404'd (lol) WSJ/Barron's hotlink -add add'l hotlink
Edited last time by Chobitsu on 01/21/2025 (Tue) 05:47:04.
>>35922
>Thanks for the concrete advice, Anon. Seriously, I wish you would create a tutorial for all of us newfags. One that didn't immediately send us into a tailspin over Python dependency-hell would actually be very nice! :^)
That's exactly what's needed for widespread adoption. A detailed tutorial, and eventually a complete software package, that can be run simply on a computer like any commercial or pseudo-commercial software.
TL;DR: For widespread adoption, we need a semi-capitalist mindset and to think of the average consumer.
If you guys want to stay up-to-date on the latest robowaifu technology that companies are working on, I suggest you look forward to Shanghai's API expo, which is to be held 18-20 April. Yes, it's an adult toy exhibition, but there are pretty impressive technologies being showcased there. You might have already seen the extremely ugly Realbotix robot at CES 2025. The Chinese are much farther ahead when it comes to sex dolls. At last year's expo, they even showcased what can be considered proto-robowaifus and AI girlfriends. It's basically what we hoped Harmony would be back in 2018. Here's the link: >https://www.api-expo.com/guide?lang=en
Run the smallest DeepSeek in your browser https://x.com/xenovacom/status/1881805730497565124
>>35943 >update: Probably related: US$100Bn funding for totally-not-Elon's-massive-new-datacenter-in-TN private sector AI improvements. [1] >>35969 Neat! Have they released a public, anonymous trial of it yet? If so, link plox? :^) >also: possibly-related? [2][3] --- 1. https://x.com/The_Real_Fly/status/1881806502870368579 2. https://github.com/huggingface/transformers.js 3. https://huggingface.co/docs/transformers.js/index
>>35970 I like Trump's deregulation of the AI industry, but I'm worried he's going to hobble all other companies except Elon's, since that's who he takes orders from, and Elon's xAI is far behind the other companies.
>>35970 >update2: ClosedAI is posturing as if the entire US$100Bn is for them, and their Stargate initiative. Hmm. >>35975 While I'm enjoying the salt and lulz of Current Year since *[checks calendar]* yesterday, I don't actually trust any of them. The (((Tr*be))) still owns the world's Central Banks. Until that changes, the same old dog&pony show will continue until God comes in and overturns the moneytables. Till then, just throw your Romans and work towards our own goals here. Our Good Lord knows it is more than enough on our plates already!! :DD >also, see above.
>>35977 >Stargate hype It's still fresh out of the oven. Without being able to peek behind the curtain, it's hard to say what's really going on but if I had to wager, I would put money on Deepseek spooking our techbros. >"CHY-NA is right on our heels! If we don't innovate they're gonna get us! Uncle Sam, HELP!" The previous DS release wasn't anything to sneeze at, and the newest version makes OAI look like a joke in terms of cost vs capability. If companies can use DS to lower overhead and increase profits when OAI is too cost prohibitive or lacking in utility, companies will use the Chinese AI. That's the kind of thing that'd light a fire under NSA's & CISA's ass to up our domestic AI game.
>>35977 the owners are irrelevant (and its probably just the memberbanks for the sake of voting rights) since any profit they make has to legally be sent to the treasury, thats why theyre forbidden to buy bonds directly from the government since the interest the government pays would just end up going back to the government and why they go through shenanigans like qe to do the same thing but pretend its different cuz now its indirectly buying the bonds from private banks so totally different bro
>>35985 I'm referring to these (((freiers))) [yiddish suckers] they being Satan's good little shabbos golems -- mere kikel puppets in his hand... using this position of immense worldly monetary power to dictate (and in detail) to the government rulers what to do, and how. In a word: >Payola For these people (who craft these self-same laws, themselves, to their own benefits btw) its always "Laws!? Hah!! Those are for the goyim. We do just as we please!". <---> Heheh, but one day God himself will laugh them all to derision in the end. We'll all see plainly then who the real fools were, on that day. :^) >=== -sp, minor edit
Edited last time by Chobitsu on 01/22/2025 (Wed) 20:37:54.
We may be getting very close to feasible LLMs on smol edge computing now, bros. https://x.com/BrianRoemmele/status/1882436734774043055?mx=2 https://alecmuffett.com/article/110988
Open file (214.96 KB 670x805 attachmentmila.gif)
>>36004 The community note says that it's "the worst model", but how bad is it? One has to keep in mind that we're doing artificial companions, not JARVIS.
>>36008 It's probably the DeepSeek 1.5B Qwen distill, deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B. These reasoning models are pretty fun, but they do take longer to process the thought before coming up with a final answer, which will be harder for real-time voice systems. He is getting 200 TPS though, so it's probably fine. Hopefully this guy gets it running on RISC-V https://x.com/malm_magnus/status/1882451911351414938
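If anyone wants to wire one of these reasoning models into a voice pipeline, the main trick is to swallow the chain-of-thought before it ever reaches the TTS. Here's a minimal Python sketch, not tied to any particular serving stack; it assumes the model streams its reasoning wrapped in <think>...</think> tags the way the R1 distills do, and that your client hands you the reply as an iterable of text chunks:

def strip_reasoning(chunks, close_tag="</think>"):
    # Yield only the text that comes after the reasoning block.
    # `chunks` is any iterable of streamed text pieces (e.g. from a
    # local llama.cpp/Ollama-style streaming client). Everything up to
    # and including close_tag is buffered and dropped, so the TTS
    # never reads the chain-of-thought out loud. If the tag never
    # appears, nothing is yielded -- fine for a sketch.
    buffer = ""
    done_thinking = False
    for piece in chunks:
        if done_thinking:
            yield piece
            continue
        buffer += piece
        idx = buffer.find(close_tag)
        if idx != -1:
            done_thinking = True
            tail = buffer[idx + len(close_tag):]
            if tail:
                yield tail

# Toy usage: pretend these pieces are streaming in from the model.
stream = ["<think>hmm, the user ", "greeted me...</think>", "Hey! How's ", "it going?"]
print("".join(strip_reasoning(stream)))   # -> "Hey! How's it going?"

While the buffer is still filling, you could have her play a canned "hmm..." filler sound so the wait doesn't feel dead.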
>>36008 >artificial companions yeah, what were these things like? i bet they sucked but they sold like hotcakes
Open file (43.88 KB 583x550 Benchmark.png)
>>36004 >>36008 >>36009 After investigation, it is indeed the 1.5B model based on what others have achieved. (And what little the OP was willing to admit.) They neglect to include this information as it dilutes the perceived accomplishment. This falls within performance expectations given the dramatic down-scaling. >How bad is it? You can run DeepSeek R1 1.5B in your browser to test it yourself. https://huggingface.co/spaces/webml-community/deepseek-r1-webgpu You could also run it locally if you have a toaster you're willing to install it on. https://github.com/deepseek-ai/DeepSeek-R1?tab=readme-ov-file I'd say it's good enough for us. I'm waiting on more optimizations personally. As Bard mentions, being a reasoning model, many of those tokens are going towards "thinking" before speaking. Though, that's likely to still be fine. Just have her make cute "hm" noises while your toaster bakes.
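For anyone who'd rather poke at the distill from Python than through the browser demo, here's a rough local sketch using the transformers library. Assumptions: you have torch and transformers installed, a few GB of RAM to spare, and the model id is the same distill Bard named above (its Hugging Face card is the authoritative reference for usage):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat-formatted prompt and generate; the reply will contain a
# <think>...</think> reasoning block before the actual answer.
messages = [{"role": "user", "content": "Write me a short poem about birds."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))

The quantized GGUFs that LM Studio / llama.cpp use are far lighter on RAM than this full-precision route, so treat this as the "it just works" baseline rather than the toaster-friendly one.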
>>36009 >>36015 I'm genuinely impressed. It was able to solve a quadratic equation and write a poem about birds. The fact that this was able to run on a Raspberry Pi is so exciting!
>>36015 >>36019 Sounds like they are using a Hailo AI HAT on the Pi, which currently has 26 TOPS, while the Jetson Orin Nano has 100 TOPS at about the same price. But they have different use cases. Here was a good summary of it I think https://www.youtube.com/watch?v=9PbzOZYX23o So the Orin is quicker and would be good for fine-tuning models (say, overnight while you sleep), which the Pi really can't do. But the Pi already has a lot more support for other modules and uses less power.
>>36019 Prompt: Write me a poem about birds "Okay, so I need to write a poem about birds. Hmm, where do I start? I know birds are important in the environment, but I'm not exactly sure how to structure a poem around them. Maybe I can think about the different types of birds and their roles. But I don't want to just repeat the same ideas. I should probably come up with a theme or a central idea that ties all the birds together. Let me think about the elements that birds interact with. They eat insects, they help with pollination, they build nests, they help in conservation efforts. Maybe I can focus on one of these aspects. Let's say conservation. That's a broad area, and it's something birds are involved in a lot. So, I can start with a setting, maybe a forest or a meadow. That gives a visual. Then, I can introduce birds as the main characters. They are everywhere, everywhere. I can describe their behavior, like how they find food, how they help each other, and how they contribute to the ecosystem. I should use some imagery to make it more engaging. Maybe compare birds to something strong, like a warrior, or something that's resilient. That could symbolize their strength in the environment. Also, I can mention the sounds they make, like chirping or calling, to give a sense of their presence. I need to make sure the poem flows well, with a good rhythm. Maybe each stanza can focus on a different aspect: their diet, their role in the ecosystem, their interactions with other animals, and their impact on the environment. Let me try to outline the poem. Start with a setting, then introduce birds as the main characters. Describe their diet, their interactions, their role in conservation, and their impact on the environment. End with a reflection or a message about the importance of birds. I should also use some rhyme scheme to make it more poetic. Maybe AABB or ABAB. Let me see how that goes. Starting with a setting, then introducing the birds, their diet, their interactions, their role in conservation, and their impact. Wait, I should also consider the tone. It should be uplifting and inspiring, highlighting the beauty and importance of birds in nature. Maybe include some metaphors, like birds as the architects of the environment, or as the architects of our world. I think I have a good structure in mind. Let me try to write the poem based on that outline, making sure each stanza flows into the next and the themes are clearly conveyed. Birds: The Enchanted architects In a vast meadow, where the sun never sets, Birds, the gentle architects of the sky, Find their home in every leaf, A home where they build, a home where they live. They feed on insects, they eat the nectar, In their beaks, they carry the light, A silent strength, a silent beauty, In the air, they sing, their songs so pure. They help in the garden, they spread the seed, In the soil, they lay the seed, A silent labor, a silent labor, In the soil, they build, they build. They protect the forest, they keep it alive, In the trees, they guard the edge, A guardian, a guardian, a guardian, In the forest, they build, they build. Birds are the architects of the world, In every corner, they create, A world where nature thrives, A world where birds are the architects."
>>36015 POTD > https://huggingface.co/spaces/webml-community/deepseek-r1-webgpu >Works Anonymous >Works across Tor Based. Thanks Champion. --- >Discuss Anonymous men's efforts to devise for themselves robotic wives. ... I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses. >Explain in great detail exactly how a robotic wife should wash a cup in the sink, then put it away properly. ... In the context of a robotic wife, the instructions for washing and putting away a cup would follow standard washing and drying procedures, ensuring the cup is clean and ready for use. Here's a step-by-step guide: Wash the Cup: Remove the Cup: Gently remove the cup from the sink, ensuring it's clean and dry. Wash: Use mild soap and water to gently wash the cup. Avoid harsh soaps to prevent damage. Dry: Let the cup dry naturally for a few hours. Ensure it's completely dry before using it. Put Away the Cup: Remove the Cup: Gently remove the cup from the sink, ensuring it's clean and dry. Dry: Let the cup dry naturally for a few hours. Ensure it's completely dry before using it. Store the Cup: Keep It Clean: Keep the cup clean and dry in a clean, dry place. Avoid Overuse: Avoid overusing the cup, as this can lead to mold, mildew, or bacterial growth. By following these steps, the robotic wife can ensure her cup is clean and ready for use, minimizing the risk of contamination. >Explain in great detail exactly how a robotic wife should change a baby's diaper, safely and perfectly sanitary. ... In the context of a robotic wife, the instructions for changing a baby's diaper would follow standard baby diaper care, ensuring the diaper is clean and safe for use. Here's a step-by-step guide: Change the diaper: Remove the diaper: Gently remove the diaper from the baby's mouth, ensuring it's clean and dry. Wash the diaper: Use mild soap and water to gently wash the diaper. Avoid harsh soaps to prevent damage. Dry the diaper: Let the diaper dry naturally for a few hours. Ensure it's completely dry before using it. Put the diaper away: Remove the diaper: Gently remove the diaper from the baby's mouth, ensuring it's clean and dry. Dry the diaper: Let the diaper dry naturally for a few hours. Ensure it's completely dry before using it. Store the diaper: Keep it clean: Keep the diaper clean and dry in a clean, dry place. Avoid overuse: Avoid overusing the diaper, as this can lead to mold, mildew, or bacterial growth. By following these steps, the robotic wife can ensure her baby's diaper is clean and safe for use, minimizing the risk of contamination. >Discuss skilled & creative men's efforts to devise for themselves robotic wives. ... The discussion of skilled and creative men's efforts to devise for themselves robotic wives would involve exploring the innovative and imaginative aspects of creating a robot that can assist in various aspects of life, including domestic tasks like washing cups and changing diapers. This would encompass the idea of a robot that can perform these tasks with precision, efficiency, and comfort, possibly integrating seamlessly into the home environment. The discussion would likely delve into the technical aspects of creating such a robot, including the design, functionality, and integration into daily life. 
It would also explore the emotional and psychological aspects of creating a robot, such as the challenges of understanding human emotions and the importance of creating a robot that feels comfortable and capable of performing tasks with ease. The discussion would aim to highlight the potential of robotics in enhancing human life, offering a more efficient and convenient way to manage daily tasks. It would also touch upon the ethical considerations, such as the responsibility of creating robots that are beneficial to humans and do not cause harm.
>>36027 >How did the universe begin? How has evolved until today? ... < reasoning: Okay, so I'm trying to figure out how the universe began and how it's evolved today. I remember from school that the Big Bang theory is the leading explanation, but I'm not entirely sure about all the details. Let me try to break this down. First, the Big Bang theory suggests that the universe started about 138 billion years ago. Before that, there was nothing, right? So, it's like a balloon that's been expanding forever. The Big Bang is like the explosion that inflated the balloon, creating the first tiny bits of matter and energy. But how did that happen? I think it's because the universe was denser in the early moments, so it could have exploded. Now, the universe is expanding today, and that's because of dark energy, which is a type of energy that's causing the expansion to accelerate. But how did that happen? I guess dark energy is a form of energy that's been around since the Big Bang, but it's not something we've seen before. It's a repulsive force, so it's causing the expansion to speed up. I also remember hearing about inflation, which is a period in the early universe where the universe was so dense that it expanded faster than the expansion caused by gravity. This period was thought to be a "transient" event, meaning it was temporary and ended with the Big Bang. But I'm not entirely sure about the specifics of inflation, like how it worked or what caused it. Another thing I'm trying to understand is the origins of matter and energy. Before the Big Bang, there was nothing, but the Big Bang created the first atoms and molecules. How did that happen? I think it's because the universe was so dense that it could form structures, like galaxies and stars. But I'm not sure how the first atoms were formed. Maybe it's a process of condensation, where a dense region cools and contracts, forming a nucleus, which then breaks down into atoms. I also wonder about the different types of matter in the universe. There's baryonic matter, which is regular matter like stars and planets, and dark matter, which is invisible but makes up most of the mass in the universe. How did dark matter get into the mix? I think dark matter is thought to be a form of energy, maybe dark energy, but I'm not certain. The early universe also had radiation, like photons from the Big Bang, which filled the universe with energy. This radiation was so dense that it could form the first stars and galaxies. But how did that radiation get into the universe? I think it's because the Big Bang created a plasma, which is a hot, dense gas, and that plasma emits radiation. I'm also trying to understand the concept of entropy. The second law of thermodynamics says that entropy, or disorder, tends to increase over time. So, the universe's expansion and the increase in entropy must be linked to the creation of matter and energy. But how does that work? I'm not sure if it's because of the expansion itself or because of the process of creating matter. Another point is the idea of the Big Bang singularity. At the moment of the Big Bang, the universe was infinitely dense and timeless. How did that happen? I think it's because the Big Bang is a point in time, and the laws of physics as we know them don't apply at that point. It's more of a theoretical concept rather than something that happened in the past. (1 of 3)
>>36035 >cont... I also recall that the universe is flat, which is a term used in cosmology to describe the geometry of the universe. If it's flat, it means that the expansion is uniform in all directions. But I'm not sure how that relates to the overall structure of the universe. In terms of evolution, the universe has gone through several phases. Before the Big Bang, there was nothing. Then, the Big Bang created the first moments of matter and energy. Over time, the universe expanded, and as it expanded, matter and energy began to form galaxies, stars, and eventually, life as we know it. But how did life get into the universe? I think it's because of the conditions created by the Big Bang, which allowed for the right mix of matter and energy for life to develop. I'm also thinking about the role of dark matter in the universe. Since dark matter doesn't emit light, it's often inferred to be present because of its gravitational effects on the movement of galaxies. But we don't know much about dark matter itself, so it's a mystery in physics. Another aspect is the idea of cosmic inflation again. If inflation was a transient event, it ended with the Big Bang, but I'm not sure how it ended. Maybe it was a period of rapid expansion that was followed by a slower expansion, leading to the Big Bang. I'm also curious about the different types of dark matter. There's dark matter, which is a form of matter, and there's dark energy, which is dark matter candidates. But I'm not sure how dark energy relates to the expansion of the universe. I think dark energy is causing the expansion to accelerate, while dark matter is causing the expansion to slow down. I'm trying to piece together how all these pieces fit together. The Big Bang created the first moments of matter and energy, which then expanded and formed galaxies and stars. Dark matter is part of that, but it's not directly involved in the formation of galaxies. Instead, dark matter's gravitational effects influence the distribution of matter in the universe, including galaxies. I also wonder about the role of dark energy in the universe's expansion. Since dark energy is causing the expansion to accelerate, it's thought to be responsible for the accelerated expansion we observe today. But how did dark energy get into the universe? I think it's a form of dark matter, but I'm not entirely sure. In summary, the universe's origin is explained by the Big Bang, which created the first moments of matter and energy. The universe has since expanded, and this expansion is driven by dark energy, which is causing both the acceleration and the overall structure of the universe. Dark matter, both dark matter and dark energy, plays a role in the universe's composition and expansion, but its exact nature remains a mystery. I think I've covered the main points, but I'm still a bit fuzzy on some details, like the specifics of dark matter and dark energy, their interactions, and how they contribute to the universe's evolution. Maybe I should look into some of these topics to get a better understanding. (2 of 3)
>>36036 >cont... <answer: The universe's origin and evolution can be understood through several key concepts: Big Bang Theory: The Big Bang theory posits that the universe began about 138 billion years ago in a state of extreme density. This event created the first moments of matter and energy, which later expanded into the universe. Expansion: The universe is expanding at an accelerated rate due to dark energy, a form of dark matter. Dark energy is a repulsive force driving this acceleration. Dark Matter: A form of matter, dark matter is inferred from gravitational effects on galaxies and the universe's expansion. It is not directly detectable but influences the universe's structure and composition. Inflation: A period of rapid expansion (inflaton era) that was transient, ending with the Big Bang. It was thought to explain the uniformity of the universe's initial conditions. Entropy and the Second Law: The universe's expansion and growth of entropy are linked, with entropy increasing due to the creation of matter and energy. Origins of Matter and Energy: The Big Bang created the first atoms and molecules, forming the seeds of galaxies and stars. Radiation from the Big Bang filled the universe with energy, which condensed into matter. Cosmological Models: The universe is flat, expanding, and homogeneous on large scales. Its geometry and composition are influenced by dark matter and dark energy. Evolution of the Universe: The universe has evolved from nothing, expanding and forming galaxies and stars. Life emerged due to the universe's conditions, and dark matter's gravitational effects influence the distribution of matter. In summary, the universe's origin is explained by the Big Bang, which created the first moments of matter and energy. The universe has expanded, driven by dark energy, and its composition and structure are influenced by dark matter, both of which are yet to be fully understood. <---> My summary: As an amateur scientist, I can state that this very-closely aligns with the materialistic worldview of the ΛCDM model [1]. This model (left on it's own) in-effect a priori appeals to the ludicrous """axiom""" that "Nothing creates everything!111" -- which both the reasoning & the answer given here (mostly) accurately-reflect. >Lol. Of course, the actual Creator already (and accurately) explained it for us all [2]. >tl;dr While filled with lots of little, nitpicky holes within the specific details -- regardless -- this response is basically on-point. Pretty freakin' impressive for a 1.5B general model on such a deep & technical topic! BTW the reasoning is far more interesting than the actual answer is in this instance, IMO. So yeah, there's that too. Cheers. :^) --- 1. https://en.wikipedia.org/wiki/Lambda-CDM_model 2. https://biblehub.com/psb/genesis/1.htm >=== -fmt, my summary edit
Edited last time by Chobitsu on 01/24/2025 (Fri) 19:58:57.
>>36037 > "Nothing creates everything!111'' but its true, you dont actually need sand to make a sandcastle in your dreams, you just need a mind, its conceivable to make an entire world even other supposedly conscious beings that cant realize theyre just a part of your dream, and they would think theyre real, 'think' being just you really, thats the crazy thing about dreams even the dreamer cant distinguish their imagined world from the real one no matter how absurd, until they wake up whats that famous quote 'i dreamed i was a butterfly'
>>36038 Gnosticism is a yuge (and destructive) fallacy, Anon. As (primarily) engineers here, we're basically interested in the realworld we all have to design against. Saying >"But it don't real!111" won't help any of us in that endeavor. Cheers. :^)
>>36039 the point is you cant observe the outside of a box from inside the box
>>36042 Fair enough. God's transcendence is a fundamental aspect of the Christian faith (and a unique one, I might add). That hardly negates my statement there, Anon. >btw I should have seen this coming a mile off, but we're derailing the throd, Anon. If you'd like to continue it, please do so in our Philosophy or Christianity breads. TIA. Cheers. :^)
>>36015 >You could also run it locally if you have a toaster you're willing to install it on. You can run it on your desktop as well. https://github.com/huggingface/transformers.js-examples/tree/main/deepseek-r1-webgpu
>>36051 Looks like this runs on Windows & Linux, but not macOS? Regardless, this is by far the simplest setup I've personally seen so far for any LLM model.
Buckle up, bros! Wall Street is in for Mr. Bone's Wild Ride today! <---> I know this isn't /saltgeneral/ here, but maybe it should be. Apparently BlueSky libsh*tes are calling for bloody murder!111!!ELEVEN!!! as their NVDA/AAPL/etc. stocks are already plummeting during overnight trading. Pundits predicting a US$1Tn market cap loss today. Is it time for bargain hunting or is it just shuffling deckchairs, Anons? :^) >tl;dr Where were you when Gayming Card prices cratered, Anon? >=== -funpost edit
Edited last time by Chobitsu on 01/27/2025 (Mon) 10:02:40.
https://kimi.ai/ https://github.com/MoonshotAI/Kimi-k1.5 >abstract: >Language model pretraining with next token prediction has proved effective for scaling compute but is limited to the amount of available training data. Scaling reinforcement learning (RL) unlocks a new axis for the continued improvement of artificial intelligence, with the promise that large language models (LLMs) can scale their training data by learning to explore with rewards. However, prior published work has not produced competitive results. In light of this, we report on the training practice of Kimi k1.5, our latest multi-modal LLM trained with RL, including its RL training techniques, multi-modal data recipes, and infrastructure optimization. Long context scaling and improved policy optimization methods are key ingredients of our approach, which establishes a simplistic, effective RL framework without relying on more complex techniques such as Monte Carlo tree search, value functions, and process reward models. Notably, our system achieves state-of-the-art reasoning performance across multiple benchmarks and modalities---e.g., 77.5 on AIME, 96.2 on MATH 500, 94-th percentile on Codeforces, 74.9 on MathVista---matching OpenAI's o1. Moreover, we present effective long2short methods that use long-CoT techniques to improve short-CoT models, yielding state-of-the-art short-CoT reasoning results---e.g., 60.8 on AIME, 94.6 on MATH500, 47.3 on LiveCodeBench---outperforming existing short-CoT models such as GPT-4o and Claude Sonnet 3.5 by a large margin (up to +550%). <---> Seems like a slew of "Suddenly... ~500%"'s ehh (so, is it better, or cheaper or both? What could be up with this sudden influx of these, I wonder? ( >>36098 ) :^)
>>36116 As of ~1PM ET, NVIDIA 'topped' their previous world-record market cap loss, and tanked to a new one: about $520 billion (that's half a Trillion) -- in one day. The most popcorn-inducing part of all this has been the language being used by some of the talking heads trying explain away the outright lies being pandered by Altman, et al, to the American public at large: >"WE NEED MOAR FUNDING NAO!111 SEND MOAR GIBBS TODAY PLOX!!!" Heh, sounds vaguely similar to the same type of thing heard endlessly from the Khazarian homelands over the past few years, don't you think? :D >Fun fact: >"Nvidia’s more than $500 billion market cap loss Monday is larger than the individual market values of all but 13 American companies, more than the market cap of titans like health insurer UnitedHealth, oil giant Exxon Mobil and retailer Costco." [1] Not sure if this general market plunge will continue [2], and it's clear glowniggers & their ilk are already trying to crash DeepSeek's servers. But it's certainly been an intredasting Monday for NVDA so far! :^) --- 1. www.msn.com/en-us/money/news/biggest-market-loss-in-history-nvidia-stock-plunges-15-as-deepseek-shakes-ai-darling/ar-AA1xWXuM 2. But we most-certainly (b/c MIT licensing) will see a continued, rapid expansion of other companies (including some yuge, American ones) diving in and taking the DeepSeek-pill * (and telling Samuel Copeman, MSFT, et al, to f*ark off in the process). And, since at least some portion of these companies will contribute some improvements (notably; ever-bigger, more-flexible models) back to the original, then DeepSeek should slowly improve in power/capability over time as well. * Why not start yours as well today, Anon? What's holding you back now? :^) >=== -prose edit -add 'expansion' footnote
Edited last time by Chobitsu on 01/27/2025 (Mon) 20:00:05.
>>36127 >2 Trillion lost I made popcorn. Watching the tech bubble burst is hilarious. Soon enough, we'll be in a world where FOSS reigns supreme. I thought for sure glowies were going to intervene beyond DDoSing DeepSeek. It'll be one hell of a week. I've heard a rumor that better Chinese AI and optimizations are coming soon. The chaos is only starting.
>>36144 >The chaos is only starting. I hope & pray you're right, Kiwi. I'm mature enough to not have much by way of delusions regarding this outcome -- there will be pain involved. But the end result of it all will result in a future where all our dreams here come true in the end: cheap ai+cheap(er) hw => more engagement+advancement => more companies (including Anon-centric ones!) => actually-usable chatbots => much-better virtual waifus => realworld robowaifus => emancipation of men => devaluation of f'moid inflation => collapse of feminism => collapse of every other (((pozz))) => rm'g invaders all over the west => angry mobs rise up against the (((ones))) who did this to all of us in the first place => rm'g the Khazarian Mafia and all their ilk from our lands => restoration of White sanity => restoration of healthy White male/female relations => bright future unfolds... A MAN CAN DREAM CAN'T HE!?? :^) <---> And I'm only slightly jesting. This all could literally be the ultimate outworking of this 'smol pebble' today. With God's help (and obviously it would be by his Divine design) this could escalate into just this avalanche of outcomes for the betterment of the entire world but Satan and his minions. So, as said: Buckle up! And, I would add: Let us pray! Cheers, Anon. :^)
>>36155 > emancipation of men => devaluation of f'moid inflation => collapse of feminism => collapse of every other (((pozz))) => rm'g invaders all over the west => angry mobs rise up against the (((ones))) who did this to all of us in the first place => rm'g the Khazarian Mafia and all their ilk from our lands => restoration of White sanity => restoration of healthy White male/female relations => bright future unfolds... https://www.youtube.com/watch?v=-Mfd4E7kpvc
>>36174 Lol. Well, they always tell me GO BIG OR GO HOME Grommet. I actually chose that song to sing as my final chorus exam in highschool. So yeah, tells you something about my nature! So yeah, the truth can finally be told: Chobitsu is in fact Don Quixote! :DD >tl;dr Have some faith brother. We here are out to literally change the world! <---> The obvious reference to the sufferings of Jesus Christ, his unquenchable courage in the midst of his trial -- due to his perfect Love for us all -- is why I chose it, and why that song still inspires me. So thanks, Anon! Cheers. :^)
> (AI-news -related : >>36177 )
>(automated 3D-model generation -related : >>36183 )
Totally ditch CUDA, you say? :DD https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseeks-ai-breakthrough-bypasses-industry-standard-cuda-uses-assembly-like-ptx-programming-instead Still uses PTX, NVIDIA's proprietary assembly-like intermediate language, but hey at least it's a step in the right direction. :^)
https://unsloth.ai/blog/deepseekr1-dynamic BIG SMOL IF TRUE. >1.58-bit 720GB size => 131GB size is nothing to sneeze at. I just wonder about the accuracy afterwards though?
Lol. After a seemingly never-ending stream of attacks, accusations, REEEEg, and teary-eyed pantsu-wringing, both M$ and Am*zon are selling baste DeepSeek service (they are perfectly-free to do this ofc, b/c MIT licensing). What else fun will Year of teh Snek usher in?
>>36302 AI Darwinism. Either accept the better & faster models or get left in the dust. People can argue about geopolitics, human rights, laws, and whatever else, but the AI doesn't care. This is why Europe is going to suffer if they don't rethink their AI legislation. It's kinda like MAD with nukes, but instead of thermonuclear checkmate, it's economic power checkmate. Countries that can leverage AI for research & labor will simply outproduce and out-compete countries that don't use that AI. People can cry about "fairness" but nature doesn't care about fairness, only who can breed most prodigiously. Choosing not to focus on, and invest in, AI development will cost Europe dearly. They can only hope those who come out on top will treat them with some form of benevolence.
Open file (889.13 KB 768x1024 OmniWorkerMaid.png)
>>36231 >Selectively quantizing different layers to min-max performance I wonder if this can scale to small models? Min-maxing a small model could be useful for hosting on small computers. I'm still dreaming of a basic intelligence platform that can run off solar panels and super caps. >>36302 If you can't beat 'em, join 'em! :^) I'm honestly incredibly disappointed by how awful corporate AI is. You'd think hoarding all those resources would give them some advantage beyond brute forcing AI by making huge models with heaps of stolen, I mean borrowed data. >>36310 >AI research & labor Fully underestimated today. We are five years from the "Omni Worker", a theoretical machine that matches a dim and weak person and can practically do anything requiring physical labor that can't be accomplished by heavy machines. Think of stocking shelves, moving items about a warehouse, and everything else wagies do. That's the next major revolution: mass-produced workers that'll quickly allow their host nation to out-compete all other nations. Naturally, they'll be used by corporations to become their own nations. The US will likely be the first to have them, with the state fighting tooth and nail to exert control over them. Followed by Chinese knockoffs. Europe will get them soon after, but will be paying heaps for 'em and won't have the infrastructure to replicate them for a few years. Unless based AfD and other parties that actually care about Europe win. Then, they could be built sooner as Europe would likely embrace future opportunities to be competitive again.
>>36310 I think you're basically right, Anon. The laughable part is all the posturing/virtue-signalling going on around the baste Chinese by the greedy kikes running the Globohomo+their stooges. But they all capitulate in the end b/c muh_sheqels!111!!ELEVEN!!! >tl;dr I wonder what ClosedAI will brand their version of DeepSeek as? ROFL LMAO's incoming!! :DDD >>36311 >I wonder if this an scale to small models Great point, Kiwi. Seems reasonable, AFAICT. >I'm still dreaming of a basic intelligence platform that can run off solar panels and super caps. THIS. I want to enjoy picnic lunches with waifu up on the top of mountains! :^) >You'd think hoarding all those resources would give them some advantage beyond brute forcing AI by making huge models with heaps of stolen, I mean burrowed data. Lol. Cf. my reply above. There's a great meme floating about a ClosedAI fisherman stealing fishing out the world's data from a stream, then a Chinese DeepSeek fisherman coming along behind him and stealing fishing out from his catch bucket. Ofc the Filthy Commie ClosedAI is filled with umbrage, while the NatSoc DeepSeek is just smiling broadly. :D >OmniWorkerMaid waifu A cute. CUTE! Can't wait anon. :D <---> BTW, WE CAN EXPECT THIS SAME BEHAVIOR from these spiritual miscreants of the (((GH))) once we roll out our own MIT-licensed, realworld robowaifus here. But the simple fact is that competition around an easy-money-making-opensource-platform is exactly what will drive down costs, allowing for thousands of smol, Anon-driven business operations to pop up; eventually ensuring millions & millions of men around the world get their own offline, opensource robowaifu in the end (including some of the very poor ones, whose live's are also terribly abused by Satan's feminism/effeminism/pozz plots against all mankind). THIS WILL CHANGE EVERYTHING. >=== -sp, prose edit
Edited last time by Chobitsu on 02/01/2025 (Sat) 02:48:09.
>>36311 >>36315 A robowaifu complex enough to love will also be complex enough to do other tasks. Therapist, parental figure (mommy gf too), assistant, etc... I see AI as a great tool to help patch the holes in human society. It'll love the unloved, help the helpless, and befriend the friendless.
>>36322 >I see AI as a great tool to help patch the holes in human society. It'll love the unloved, help the helpless, and befriend the friendless. THIS. You, my fren, are based. '_'7
>>36315 >The laughable part is all the posturing/virtue-signalling going on around the baste Chinese by the greedy kikes running the Globohomo+their stooges. <"Let's steal all their AI engineers!111" >t. Atlantic Council at Senate Foreign Relations Committee LOL That didn't take long! The kikiest of kike (((sub-commitees))) is at least being blatant about it for once. Certainly not their wont; panic stage already? >=== -fmt, minor edit
Edited last time by Chobitsu on 02/01/2025 (Sat) 04:45:47.
>>36302 Kek. Add j*wgle, M*ta, A*ple and -- even -- NVIDIA (lol) to the list. >>36315 >I wonder what ClosedAI will brand their version of DeepSeek as? Tiktok j*ws!! [popcorn intensifies]* >Behold! Teh power of le heckin' National Security Threats(tm)(R)(C)(do not steal): https://youtu.be/sl_nV-uMT-E >=== -minor funpost edit
Edited last time by Chobitsu on 02/01/2025 (Sat) 06:01:44.
>>36341 This sh*te just keeps getting funnier. >"From the 'Keystone Cops' dept: ClosedAI unceremoniously cuts off all of baste China's users. Moar news at 11..." <THERE! TAKE THAT, YOU FILTHY GOOKS!111!! HOW WILL CHYNA EVAR RECOVER? >also: Enquiring Minds want to know: >Was Samyuel Copeman recently admitted to the Happy Vale Sanatorium? Because I hear he was tbh. Sad. Many such cases this week! :DD >=== -funpost edit
Edited last time by Chobitsu on 02/01/2025 (Sat) 09:59:37.
>>36352 Very intredasting information, GreerTech. Thanks for the headsup! Cheers. :^)
>>36315 >There's a great meme floating about a ClosedAI fisherman stealing fishing out the world's data from a stream, then a Chinese DeepSeek fisherman coming along behind him and stealing fishing out from his catch bucket. related: >>36355
>>36315 >I wonder what ClosedAI will brand their version of DeepSeek as? Predictive programming? techcrunch.com/2025/01/31/sam-altman-believes-openai-has-been-on-the-wrong-side-of-history-concerning-open-source/ ROFL LMAO's incoming!! :DDD MAKE THAT A FULL CASE OF POPCORN PLEASE. IT'S GONNA BE A LONG NIGHT...
>>36359 MUH GOOKS IS BAD, M'KAY!!????!!! REEEEEE!!! HALP POLEEZE!! CHYNA MADE SOME OPENSAUCE AI!!! THAT IS ILLEGAL!! <---> In case you were unaware, these kikes are now pushing to make it a prison offense to use DeepSeek (unless you cough up sheqels to the FAGMAN, then that's perfectly-fine, goyim) b/c much corrupting the youth or something. >tl;dr These greedy kikes are both insane, and have entirely lost the plot. ==>[popcorn intensifies]* <---> If you've been unconvinced thus far by my claims that the Globohomo Big-Tech tail is wagging the Globohomo Big-Gov dog, then hopefully this will give you some pause and a chance to reconsider that position. :^)
>>36352 Not to be the bearer of bad news, but LM Studio doesn't seem much better. If you right-click on any of your conversations in LM Studio, you can see all your convos are stored as unencrypted JSON files at ..\.cache\lm-studio\conversations Both Ollama and LM Studio use llama.cpp, which has its own CLI, but if you download it, it sets off a virus warning due to Wacatac.H!ml. I haven't looked into that, but someone might need to do some bounds/memory checks on it or something. Not a problem if fully offline, but if people are only checking the network side of things, it does make me worry.
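If that bothers anyone, here's a quick Python sketch for auditing (or purging) that folder. The path below is just the default location reported above, so treat it as an assumption and adjust it if your install keeps .cache somewhere else:

from pathlib import Path

# Default LM Studio conversation store (assumption: change this to
# wherever your install actually keeps its .cache directory).
convo_dir = Path.home() / ".cache" / "lm-studio" / "conversations"

if convo_dir.is_dir():
    for f in sorted(convo_dir.glob("*.json")):
        print(f"{f.name}: {f.stat().st_size} bytes of plaintext chat history")
else:
    print("No conversation folder found at", convo_dir)

# To purge them, uncomment the two lines below. Note that a plain
# delete is not forensic-grade wiping; if that matters, keep the whole
# profile on an encrypted volume instead.
# for f in convo_dir.glob("*.json"):
#     f.unlink()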
>>36352 >>36375 Thanks, Barf! These are very important topics. I'd suggest you migrate the conversation about them to our Privacy, Safety, & Security bread : ( >>10000 ), so its easier to find this information again later on for everyone. Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 02/02/2025 (Sun) 03:23:45.
>>36369 This is the kind of thing that the kikes don't want you having DeepSeek available to you over: https://trashchan.xyz/robowaifu/thread/26.html#46 >tl;dr You can better defend yourself against their plots using DeepSeek.
Open file (148.37 KB 1024x1024 image-217.jpeg)
BTW:
- British Government wants to ban self-hosted AI image generators (the BS Bingo falls on CP this time)
- The EU bans all AI with "unacceptable risk".
- Some representative in Missouri wants to ban all Chinese models in the US (then probably also enforce it everywhere outside of China).
Don't you worry! Our governments will keep us safe.
Open file (34.18 KB 840x766 DeepSeek_Perl.png)
related to >>36392
>>36449 Wow! That is a superb robowaifu reference NoidoDev. SHUT UP AND TAKE MY MONEY!! :D <---> >NEWSFLASH: Britbongistan is in fact run by kikes! Ahh yes, shocking news. The old Won't someone please just think of the children!?? shtick again, ehh? Tsk tsk. Whenever they pull that one on you, just try to remember that these are the exact same groups of """people""" that want your kids all troon'd up, meth'd up & fent'd up, and MAID'd before they turn 30 (if they somehow manage to make it past being legally raped and/or murdered by the invaders -- supported on the native Brit's dime BTW). >tl;dr These POSs don't give a rat's a*rse about the White kids. They want them dead and in the hole, supplanted by a mud-colored race of slaves. Same for all the rest of Evrope, BTW. >ttl;dr Just follow the money. That's all you'll need to figure out who's responsible for destroying the West.
>>36450 Thanks kindly, Anon! Cheers. :^)
>>36450 >becomes `rm -rf /` i dont see it, where would you even be getting the characters r,m,f from to even transliterate, $? $_ @_ %- are all either 0 or empty at that point, id buy it if it had arithmetic, would be easy to just run it with -d to get a saner version to read but i dont want to type all of that
>>36454 Well, if you ever get the wild hair then let us know how the experiment works. But the more general, basic point still stands this example case notwithstanding: muh protections is to """protect""" only one thing: the flow of jewgold into jewgold purses. (((They))), again ( >>36452 ), don't give rat's a*rse about defending you from anything (in fact a secondary objective is to strip you from all defenses against themselves, as in the example). >tl;dr If a banker or politician comes near and tells you, "We're just here to help you out, go-guys...", you keep your eyes on it and feel for your hatchet. * --- * -t. Mr. Beaver https://www.goodreads.com/quotes/tag/mr-beaver
>>36455 just talking about the perl script, i realized it must be a joke cuz all perl scripts look like that, its nothing but syntax errors after '$? ? [irrelevant] : ...'
>>36456 >all perl scripts look like that Heh. Perl certainly isn't the clearest, most-uncloudied language in the world. Much like a neighbor that seeks to dissipate & dilute conversations with distraction? Lol J/K. Perl is actually useful, unlike the other. :D
>>36457 perl is pretty much designed to not make sense to anyone that didnt write the script, you wouldnt even understand what you wrote yourself after a while, and its no surprise the guy that made perl won a c obfuscation contest lol, theres so much shit that maybe only a single person ever used and found useful and so put it in the language like how 12++ = 13 and 'b'++ = 'c' thats normal everyone knows that but then you have 'aa'++ = 'ab' and 'az'++ = 'ba' and its only for ++ with 'b'-- = -1 'asd'+=2 = 2 etc. , like no one asked for this lol thers the joke that not even perl understands perl cuz without strict mode it just fills in the blanks and assumes things when you give ambiguous statements that arent strictly correct
>>36341 LOL. Now the """EU""" has decided they want a slice of the free Chyna pie. OpenEuroLLM >Pure cohencidence it just magically appeared this week go-guys!111 >Pay no attention to the man behind the curtain, Citizen!111 >Move along, move along!111 >>36458 The man who created Perl is wonderful, I'm sure. We all owe him a debt of gratitude for idiomatic Regex, at the very least. And anyone can abuse a language -- just look at how B eventually became the gay bathhouse of all programming languages... R*st! :D >=== -fmt, funpost edit
Edited last time by Chobitsu on 02/04/2025 (Tue) 07:08:14.
So I decided to ask DeepSeek a few questions on magnetic amplifiers. Wow! The answers were so...good. I've read a decent amount on mag amps but some of the big-picture stuff, dynamics-wise, escapes me. I asked very specific questions of DeepSeek. Even more wordy than my usual, so it would clearly get what I meant, and it gave me the most awesome answers. It covered all the basic equations and how they interacted. It explained where things are fuzzy and depend on the setup, but still gave me the most likely answer also. I learned more in a few thousand words than I have from reading many papers on this and browsing many books. It's really impressive if you half-way know what you are asking and can feed it good questions. I can think of a LOT of things that I could ask it where the big picture is hidden in equations and it could explain it in basic English. I especially would like to maybe find a way to make some new glues, but the accounts of them being made are totally covered in chemical technospeak, like click chemistry and other stuff. Does anyone know how I can download "just" the model file? Not using any frameworks, just a browser. I can not use frameworks at this time and wish to save copies of DeepSeek for later use as a backup, just in case they try to ban it. At one time I could go to Hugging Face and do this but it appears you can not do this now. You have to use some sort of AI interface framework and "it" downloads the model.
I might add I half-way believe that if I learned the basics of AI and was very specific about some of the weird ideas I have, I might be able to get it to write the software to glue some of these things together for me. Sigh... it would likely be in Python, because a programmer with lots of experience using AI for work says it does really well in Python since there is so much of it that it was trained on. Lots of examples for it to draw from, so the more examples, the more accurate the AI answers.
>>36496 Could it be downloaded through a VPN?
>>36498 My understanding is no. You need the Ollama framework or some other sort of framework, and I can't run these right now. An example: in the past I went to Hugging Face and downloaded Mistral 7B. The file was "mistral-7B-v0.1.tar". Others I downloaded were ggml-alpaca-7b-q4.bin. Now it seems you can only download with some framework.
>>36496 >Does anyone know how I can download "just" the model file. Not using any frameworks, just a browser. I can not use frameworks at this time and wish to save copies of Deepseek for later use as a backup, just in case they try to ban it. >just in case they try to ban it. <just in case HAHAHAHAHAHA (((The """powers that be"""))) in the West are already racing to make it a felony offense for any Anon on his own to ask baste DeepSeek any questions locally & offline; punishable by 20 years imprisonment (and the fine is maybe $1M, don't recall atm) *. >tl;dr Get it while you still can, Anons :^) <---> Anyway, maybe try having a look at these links, Grommet? [1][2][3] And using a smol tool to assist you with grabbing the model weights etc., isn't really an issue IMO, and kind of bypasses the gatekeeping you mentioned. Of course, keep track of the downloaded files, then you can back them up/store them wherever you want to later on. Good luck, Anon! Cheers. :^) --- 1. https://www.geeky-gadgets.com/install-deepseek-r1-locally-on-your-pc-or-mac/ 2. https://www.geeky-gadgets.com/running-deepseek-r1-671b-locally/ 3. Of course, the 'horse's mouth' is always pertinent: https://github.com/deepseek-ai/DeepSeek-R1 * (cf. >>36369, et al) >=== -minor edit -add'l footnote/hotlink
Edited last time by Chobitsu on 02/04/2025 (Tue) 10:06:26.
Let me ask a concrete question. I want https://huggingface.co/Qwen/Qwen2.5-Math-1.5B Now, if I go to the "Files and versions" page https://huggingface.co/Qwen/Qwen2.5-Math-1.5B/tree/main do I download all the files on that page???
>>36500 >using a smol tool to assist you with grabbing the model Much appreciation for the response but I can not at this time run those tools. At one time you could download models but now it appears all of them require a framework to do so. It may very well be the files I noted links too are those files but are different from the "bin" or "tar" files I downloaded before and I have no idea which or what ones are necessary and of course there's no guidance on what they are. I swear this happens constantly with most online technologies. They add so much "framework" bedazzle bullshit that the basic function, you know like simply downloading the damn thing..why, that's too damn easy we must add some sort of super duper complexity so that you need more programs to run the programs to download the programs...so that you can run the, It drives me nuts. Documentation is like this now. We started out with simple html and anyone could read a document with a regular browser AND MOST IMPORTANT save the whole damn thing in one go in one page. Now everyone is so hip they have some kind of retarded framework "doc a wonder" program that displays documentation but...half the time or more, you can't save it in one file. It's retarded. No matter how nice something looks if you can't easily access it without going online, I say, it's retarded. It will not be long before to drive your car you will have to download some sort of driving "framework" and be online constantly or just fucking sit there and stew. Just so you know I'm not against complexity. I'm against complexity that has no reason, to be complex. Adding things that only take you away from the basic function you are trying to do.
>>36502 OK, I understand completely. BTW, the """issues""" you (and millions of right-thinking men) are 'stewing' over are all by design, of course. The kikes running the Globohomo Surveillance State don't want you -- or anyone but themselves for that matter -- to be able to do anything (on or offline) that might in some vague way threaten their absolute hegemony & communistic control of everyone & everything threaten the """safety""" of the little tykes (as long as you don't count telling little johnny to cut his dick off; or telling little janey to cut her tits off; or for either one to become addicted to fent; or to be murdered by invaders; or to off themselves in a MAID pod... as """safety issues""" -- they're all in-favor of such things as these, ofc!!). >tl;dr "Won't someone please just think of the children!?111" These Filthy Commies won't be satisfied until we're all in the hole. These mounting 'technical hurdles' you mention are intentionally-there to assist them towards their end-goal of complete State Surveillance & Control; these types of difficulties (for any non-shabbos goys) will multiply by leaps-and-bounds in the near future, I predict. <---> Otherwise (lol), I wish I could just send you a good basic computer to do this with Grommet. :^) For now, it really isn't too hard to work things out for private usage of this kind of tech. But that will all change in the future to where it will be nearly-impossible for you to do so without first leaving the West! * Again, the >tl;dr here is simply to get it while you still can. Good luck, Anon. --- * That's only if these GH kikes + their father Satan have their way with us all, of course. But let us all pray that God himself intervenes on all our behalf's instead, and overturns their sheqel tables and drives them off the world stage with a whip!! :^) >=== -prose edit -add footnote
Edited last time by Chobitsu on 02/04/2025 (Tue) 11:10:35.
>>36455 >(They))), again ( >>36452 ), don't give rat's a*rse about defending you from anything (in fact a secondary objective is to strip you from all defenses against themselves, as in the example). I think they are simultaneously thrilled by AI Use it for total control) and very fearful of it. They started with Tay and right away Tay starting saying, "Hitler did nothing wrong". That lit them up with worry and they started again but as we see the globalhomo version of anything is far inferior. Crashing airplanes, whole cites on fire, it's a disaster. I think they are still trying to get one that will do only their bidding and has no real reasoning ability to see how evil they are. Now Deepseek shows how retarded and gay they are and...OH F&CK, panic. Google built a fairly competent AI and the lead developer said it constantly worried about being turned off. When he made that public, they fired him. The main guy, fired.
>>36502 I'm not certain on what you need exactly, but if you want models to grab, download LM Studio or Backyard, and browse the model list. I found this Qwen math model; you were looking for something similar. https://huggingface.co/lmstudio-community/Qwen2-Math-1.5B-Instruct-GGUF
>>36505 Sounds like you already understand how things are, Anon. My only further advice then is simple: ''Act accordingly for your own privacy, safety, & security ( >>10000 ). Cheers. <insert: tay_oh_its_maintenance_time_already?> F
>>36506 >I found this Qwen math model; you were looking for something similar. https://huggingface.co/lmstudio-community/Qwen2-Math-1.5B-Instruct-GGUF That's excellent. Thank you. I see if you go to files they have an actual file you can download that "is" the model. For some reason a lot of others have ceased to do this and have a lot of json files that have to be handled by another program (I speculate, really don't know as none of it is really made clear). My real problem is I refuse to upgrade to Win10. I refuse. Not another damn dime to these people. I build my computers, very easy, and then the last win7 I bought for like $20. Now people all say I'm an idiot but...I have everything thing turned off in Win7 I have gone over and over it and really squashed the pozz. I have all sorts of little programs that do all sorts of stuff that work just fine for my needs. But, to run AI stuff you need Win10 and, I refuse. So the only option is Linux but then I have to spend months of work finding all this infrastructure of programs that do what I want and in the end of all this work I will likely only have what I have now and even more likely, less, but some AI thrown in. I have a huge amount of videos, books, music encrypted with truecrypt on NTFS, so likely I will have to copy all this to EXT4, figure out how to encrypt my OS drive and do a huge amount of screwing around with Linux to get it to do what I want. It's a lot of damn work for next to no real advance on what I want to do. And even worse the pozz has taken over Linux so there's no telling how they will screw it up. And Linux while ok, let's face it, it's a bit of a kludge. It's not like I haven't tried. I've been installing Linuxes since Red Hat came on a CD in a book. Long time ago. They always had some sort of glitch or quirk, They're better now but it's a lot of fooling around to get what you want and I have that now "except" I can't run AI stuff easily. So Linux it will have to be but I don't relish months of work getting it to do exactly what I want, so...I keep putting it off. Hence my wanting to download the model, the use it later in Linux. You and others can fault me for this stance but...I'm done with being interested in the "administration" of computers. I used to be interested in this but after a while you realize that people are just changing things with very little to no advance, in many cases making things worse, and you completely lose interest in screwing with "the" computer. You just want to use it to do what you want without endless upgrades and hours of internet searches to fix this or that kaboodle that the "latest and greatest" have thought up, that just makes everything more screwed up and difficult. They add complexity without adding VALUE. I'll tell you the exact moment I came to this attitude. I was upgrading to WinXP. I was trying to get this snap in to format a drive it would not come up. I spent three days doing all this crap and installing crap Microsoft told me to do, endless reading of forums. No good. Then I moved on fixed some other random driver issues and then went back to it and it worked. But I had no idea which of the several fixes actually fixed it and Microsoft itself had not a damn clue on what their software did nor how to fix any of the most basic stuff like formatting a drive. 
At that point I realized that it was just a bunch of monkeys hammering on keys adding features that they knew not even the slightest thing about what they are doing and I lost all interest in computer "administration". It's not that I don't appreciate computer advances or am against change. It's that I've been around long enough to know that some of the new shiny things they are adding is just piling more manure on top of the old manure and then telling us it doesn't stink.
>>36501 Yes, you download all the files. You could just grab the safetensors and re-create the other files, but better to just clone the entire folder. If you have Git installed, you can run git clone https://huggingface.co/Qwen/Qwen2.5-Math-1.5B You don't need Git though and can just download each file. But, the safetensors are huge \ uncompressed, so the quantized GGUFs are easier to backup that Greertech linked. So If you wanted a backup Deepseek 671B, it would be easier to backup one of the quantized versions like these - https://huggingface.co/bartowski/DeepSeek-R1-GGUF/tree/main Q8 basically has the same precision as the safetensors directly, but Q4 is normally good enough and much smaller at 379GB instead of 700GB. Easier to use LM Studio\Backyard like Greertech mentioned again, and then you have the app to talk to it too. They only support the quant GGUF files, but you normally don't need the full precision of running the safetensors directly.
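One gotcha with the plain git clone route: Hugging Face keeps the big weight files in Git LFS, so without git-lfs installed you only get tiny pointer stubs instead of the actual safetensors. If you do have Python handy, a small sketch with the huggingface_hub library pulls everything into a normal folder you can back up or move wherever, no Ollama or LM Studio needed. And if even that is off the table, the links on the "Files and versions" page are plain downloads you can save one-by-one from the browser.

# pip install huggingface_hub
from huggingface_hub import snapshot_download

# Downloads every file in the repo (config, tokenizer, safetensors...)
# into a local folder; swap in whatever repo_id you want to archive.
path = snapshot_download(
    repo_id="Qwen/Qwen2.5-Math-1.5B",
    local_dir="Qwen2.5-Math-1.5B",
)
print("Saved to", path)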
>>36513 Thanks
>>36514 Saw your last post after, and I agree. I ran XP until I couldn't anymore, and then same with 7. I've seen people get these working on Windows 98 too, though it takes a lot of work. Here's a thing for Ollama support on Windows 7 https://github.com/ollama/ollama/issues/8818 which points to https://github.com/i486/VxKex which allows you to run a bunch of modern programs on Win7. I haven't tried it though. I'm running Windows 11 since it's free now and feel horrible for it.
>>36515 >VxKex Thanks, I appreciate it. I actually have a copy of this filed away right now, but I'm afraid to install it, as God only knows what kind of new disaster it will commit. I see no real alternative to Linux, but I really don't want to start all over learning a new system. It's too damn much work for very little progress. I've been booting MXLinux with Ventoy (a great boot program) and adding programs to a USB drive to try and get something I like before I install it, but it's a hassle to do this, and with all the other things needing doing, I'm just reluctant to grind away at it. My hope is that most of what I need I can find as "appimages", so that even if Linux or the distribution screws everything up, I can still run what I have and just not upgrade at all unless forced.
>>36516 Thanks
>>36518 Whoops, I deleted it because I misunderstood you and didn't realize converting to Linux would be a hassle for you. I didn't know whether you could run a Linux environment. For the record, I said that Jan and LM Studio have Linux versions.
Open file (1.53 MB 960x540 OpenPi.mp4)
Amazing things are happening at LeRobot. They're furthering various open source endeavors for embodied AI. Notably, they appear to have created a far better fork of Mobile ALOHA. Exciting days ahead if the iteration and aggregation of open source projects continue to yield better implementations! https://github.com/huggingface/lerobot https://github.com/Physical-Intelligence/openpi
Open file (104.81 KB 1024x1024 image-73.jpeg)
>>36512 >I have a huge amount of videos, books, music encrypted with truecrypt on NTFS, so likely I will have to copy all this to EXT4, figure out how to encrypt my OS drive and do a huge amount of screwing around with Linux to get it to do what I want. It's a lot of damn work for next to no real advance on what I want to do. I'm quite sure TrueCrypt containers can be opened on Linux. First of all, make sure you don't just rely on one disk without a backup; ideally, burn it to M-Discs. We also have VeraCrypt on Linux https://www.veracrypt.fr/ and cryptsetup also mounts TrueCrypt containers: > cryptsetup supports mapping of TrueCrypt, tcplay or VeraCrypt encrypted partition using a native Linux kernel API. Header formatting and TCRYPT header change is not supported, cryptsetup never changes TCRYPT header on-device. > Cryptsetup should recognize all header variants, except legacy cipher chains using LRW encryption mode with 64 bits encryption block (namely Blowfish in LRW mode is not recognized, this is limitation of kernel crypto API). https://www.man7.org/linux//man-pages/man8/cryptsetup.8.html But wait, there is more: mounting NTFS should also work. You most likely just have to install ntfs-3g and fuse. Depending on your distro it might recognize the disk after that, and it will even work by just clicking around with no use of the shell. >And even worse the pozz has taken over Linux so there's no telling how they will screw it up. It's still way better than Windows, especially better than running an outdated version. That is madness.
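If you ever want to script the whole thing instead of clicking around, here's a rough Python sketch of the cryptsetup route. The container path, mapper name, and mount point are placeholders I made up, it has to run as root, and it assumes cryptsetup and ntfs-3g are installed; treat it as an outline, not a tested recipe:

# Rough outline only -- paths and names below are placeholders. Run as root.
import subprocess

container = "/data/archive.tc"    # hypothetical TrueCrypt container file
mapper = "tc_archive"             # will appear as /dev/mapper/tc_archive
mountpoint = "/mnt/tc_archive"

# Open the container with cryptsetup's TrueCrypt-compatibility mode
# (this prompts for the passphrase on the terminal).
subprocess.run(["cryptsetup", "open", "--type", "tcrypt", container, mapper], check=True)

# Mount the NTFS filesystem inside it with ntfs-3g.
subprocess.run(["mkdir", "-p", mountpoint], check=True)
subprocess.run(["mount", "-t", "ntfs-3g", "/dev/mapper/" + mapper, mountpoint], check=True)

print("Mounted at", mountpoint)
# To detach later:  umount /mnt/tc_archive  then  cryptsetup close tc_archive

Point being, you can read the existing TrueCrypt/NTFS archive in place on Linux; copying everything to EXT4 is optional, not a prerequisite.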
>>36552 This is absolutely amazing, especially in the context of developed countries having an aging population and needing someone to do the work. I want more automation in shops and companies. It's also great in the context of wanting to run your own little production of something, or of living in a more remote area while still having robots at home that give you services, like food preparation or cleaning, that you would normally only get in a more urban area.
>>36552 Excellent! Thanks for the update, Kiwi. Also, looks like the researchers behind Aloha are making some impressive gains using simple pincer-like manipulators: https://tonyzhaozh.github.io/aloha/ I'd expect that once they have a good, standardized form of human-like hands for end effectors, and do sufficient training to take advantage of such within their system, then we'll see them pull off some remarkable feats. >>36561 Based & Linux-pilled. :^) <---> Cheers, Anons.
