/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Happy New Year!

The recovered files have been restored.


“What counts is not necessarily the size of the dog in the fight – it’s the size of the fight in the dog.” -t. General Dwight Eisenhower


Open file (1.45 MB 1650x1640 gynoid_anime.jpg)
Robowaifus in Media: Thread 02 01/14/2023 (Sat) 23:49:54 No.18711 [Reply] [Last]
Post about whatever media predominantly features at least one fembot aka gynoid (the female form of an android) as an important character, ideally a robowaifu or synthetic girlfriend. Some freedom with the topic is allowed; virtual waifus or bodiless AI might also qualify if she is female. Negative framing of the whole topic should be mentioned with negative sentiment. Cyborgs with a human brain or uploads of women don't really fit in, but if she's very nice (Alita) and maybe not a human-based cyborg (catborg/netoborg) we can let it slide. Magical dolls have also been included in the past, especially when the guy had to hand-make them. Look through the old thread, it's worth it: >>82 - Picrel shows some of the better-known shows from a few years ago.
I made a long list with links to more info on most known anime shows about fembots/robowaifus, without hentai but including some ecchi, and posted it in the old thread: >>7008 - It also features some live-action shows and movies. I repost the updated list here, not claiming that it is complete, especially when it comes to live action and shows/movies we don't really like.
>In some cases I can only assume that the show includes robogirls, since the show is about various robots, but I haven't seen it yet.
A.D. Police Files: https://www.anime-planet.com/anime/ad-police-files
Andromeda Stories: https://www.anime-planet.com/anime/andromeda-stories
Angelic Layer: https://www.anime-planet.com/anime/angelic-layer
Armitage III: https://www.anime-planet.com/anime/armitage-iii
Azusa will help: https://www.anime-planet.com/anime/azusa-will-help
Blade Runner Black Out 2022: https://www.anime-planet.com/anime/blade-runner-black-out-2022
Busou Shinki: https://www.anime-planet.com/anime/busou-shinki-moon-angel
Buttobi CPU: https://www.anime-planet.com/anime/buttobi-cpu
Casshern Sins: https://www.anime-planet.com/anime/casshern-sins
Chobits: https://www.anime-planet.com/anime/chobits


Edited last time by Chobitsu on 10/14/2023 (Sat) 05:44:28.
436 posts and 390 images omitted.
>>36422 I see what you mean, Anon! Looks very interesting, whether she's a strict robowaifu or no. Thanks, NoidoDev. Cheers. :^)
Open file (266.79 KB 340x569 bestgirlmila.png)
>>35266 >>35281 Mila best girl
>>35911 >Make a Girl After some research, it appears to be a movie about a lonely nerd inventor making a clone of his crush. Naturally, instead of a brain she has a computer, and she may have more mechanical bits under her grown skin. The main plot points seem to be about how uncertain he is of her love, given that she has no choice. Also, his crush seems to have reciprocated his feelings. He was just too autistic to understand her flirting, leading to a confused love triangle. Somewhere in the mix a supervillain abducts the robot girl. >Does Zero count? I'd say yes. Her body may be mostly biological, but she's got a robot soul. It's what's inside that counts most. Overall, it looks like heaps of fun. Especially if the synthetic girl wins. Looking forward to it coming to the west. The original short that inspired the movie: https://www.youtube.com/watch?v=jw5NWWyiW70
>>36549 > instead of a brain, she has a computer, Thanks, I thought she wasn't purely biological after seeing another video in which her eyes were glowing. This makes her clearly more of a robot, though in the form of a "cybernetic organism". Like Cameron. >The main plot points seem to be about how uncertain he is of her love, given that she has no choice. Ooof, here we go again. Too much soy in the Japanese food. If he programs her to love him, then she loves him. She doesn't need "a choice". Women had no choice for most of history and probably most worldwide still don't. They are made by evolution to adjust to that. Robots are even more like that, without the faults, doubts, the counterforce of the wish for autonomy, or hypergamy and so on.
>>36549 Thanks for the information, Anon! >>36564 >She doesn't need "a choice". This. If men choose to go that route, then they'll end up in hell-hole world (cf. >>36544 ). :^) >tl;dr Won't someone just please think of the vacuum cleaners!??

Open file (8.45 MB 2000x2811 ClipboardImage.png)
Cognitive Architecture: Discussion Kiwi 08/22/2023 (Tue) 05:03:37 No.24783 [Reply] [Last]
Chii Cogito Ergo Chii
Chii thinks, therefore Chii is.
Cognitive architecture is the study of the building blocks which lead to cognition: the structures from which thought emerges. Let's start with the three main aspects of mind:
Sentience: The ability to experience sensations and feelings. Her sensors communicate states to her. She senses your hand holding hers and can react. Feelings are having emotions: her hand being held brings her happiness. This builds on her capacity for subjective experience, related to qualia.
Self-awareness: The capacity to differentiate the self from external actors and objects. When presented with a mirror, echo, or other self-referential sensory input, she recognizes it as the self. She sees herself in your eyes' reflection and recognizes that it is her, that she is being held by you.
Sapience: The perception of knowledge. Linking concepts and meanings. Being able to discern correlations congruent with having wisdom. She sees you collapse into your chair. She infers your state of exhaustion and brings you something to drink.
These building blocks integrate and allow her to be. She doesn't just feel, she has qualia. She doesn't just see her reflection, she sees herself reflected; she acknowledges her own existence. She doesn't just find relevant data, she works with concepts and integrates her feelings and personality when forming a response. Cognition: subjective thought, reliant on a conscious separation of the self and external reality, that integrates knowledge of the latter. A state beyond current AI, a true intellect. This thread is dedicated to all the steps on the long journey towards a waifu that truly thinks and feels.
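For fun, the three aspects above can be caricatured as a toy agent loop. This is a pure-Python sketch; every class, field, and rule name here is made up for illustration and comes from no real framework:

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    # Sentience: raw sensor states mapped to felt valence
    feelings: dict = field(default_factory=dict)
    # Self-awareness: an identifier she can match against percepts
    self_id: str = "chii"
    # Sapience: toy inference rules linking observations to actions
    rules: dict = field(default_factory=lambda: {"owner_collapsed": "bring_drink"})

    def sense(self, stimulus, valence):
        """Sentience: register a sensation together with a subjective feeling."""
        self.feelings[stimulus] = valence

    def recognize(self, percept):
        """Self-awareness: is this percept a reflection of me?"""
        return percept == self.self_id

    def infer(self, observation):
        """Sapience: link an observation to a response, or None if unknown."""
        return self.rules.get(observation)

agent = ToyAgent()
agent.sense("hand_held", 1.0)            # her hand being held brings happiness
print(agent.recognize("chii"))           # she sees herself in the mirror
print(agent.infer("owner_collapsed"))    # she infers exhaustion and acts
```

Obviously a dictionary of rules is nothing like real cognition; the point is only to make the sentience / self-awareness / sapience split concrete.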


Edited last time by Chobitsu on 09/17/2023 (Sun) 20:43:41.
449 posts and 135 images omitted.
>>35237 >You are overdoing it with the anonymity. Lol. Perhaps, perhaps not. I could give you over 9'000 reasons why I'm not going far enough yet, I'm sure. But not on NYD, plox. :D Regardless, thanks for the good suggestions NoidoDev. I'm sure we'll arrive at good solutions to protect both Anons & their robowaifus + Joe Sixpacks & theirs as well, if we all just put our heads together in such a manner. BTW ( >>10000 ) may be a better thread for this discussion? Cheers, Anon. :^)
>>35189 Some updates:
- I've made very little progress on causal reasoning since my last update. I have the ontological relationships, and now I'm integrating them with causal reasoning. I'm working on that now.
- I've learned a lot about what's important to me in a waifu.
On that second point:
- There are three factors that I think are critical: love, romantic interest, and relationship stability.
- "Love" gets used for a lot of things, but I think the most relevant form is: feeling "at home" with someone, and feeling like that someone will always have a home at your side, no matter what. There's no single solution that would fit everyone here, but I think it always comes down to: the things you most deeply value, and what you feel is missing in other people that prevents you from feeling at home with them. Some examples (probably all of these will resonate with people broadly to some extent, but I think there's usually a "core" one that's both necessary and sufficient):
- ... The ability to freely converse with someone, without having to worry about whether they "get it" or are going to misinterpret you.
- ... Paying attention to nuance, and not oversimplifying things just because it's natural or convenient.
- ... The willingness to pursue one's curiosities and creative inclinations at great depth.
- I think Claude in particular is very good at uncovering these values, so I think LLMs broadly will be good at this in the not-too-distant future.
- Romantic interest would be the feeling of wanting to be closer to someone, both emotionally and physically. I think there are two sides of this: the desire to be "observed" and the desire to "observe". I think the strongest form of "wanting to be observed" comes from the belief that someone can meaningfully contribute to the things you feel and believe. I think the strongest form of "wanting to observe" comes from the belief that someone embodies things you deeply value.
I think lowercase-r romantic interest can come just from these things, and capital-R Romantic interest comes from the resonance between these things and physical desires. The bridge between these things and physical desires seems to come from analogies with the physical senses: to be heard (respected), to be seen (understood), to be felt (empathized with). I think these analogies work because our brains are deeply wired to understand how to deal with the physical senses, and that circuitry gets reused for higher-level understanding. The analogies for smell (e.g., "something smells off") and taste ("having good taste") are a little harder to pin down and strongly overlapping (probably because they both deal with sensing chemical properties), but I currently think the "right" way to think about those is in terms of judgement (to be judged).
- Relationship stability comes from overlap in lifestyle and from someone not doing/embodying anything that disgusts you. Whereas the other two can be "optimized" mostly just by understanding someone's core values, this one likely can only be discovered through trial and error, since lifestyles are complex things that can co-evolve with your waifu.
Once I get to higher-level functionality in Horsona, I'll likely focus on trying to align with these things. I have some ideas on how to do this.
>>36101 POTD Amazing stuff, really. >I have some ideas on how to do this. Please hurry, Sempai! The world is anxiously waiting for this stuff!! :^)
>>36101 Minor update on merging ontological reasoning and causal reasoning: - The causal inference code seems able to handle ontological information now. I still need to update some interfaces with recent changes to how I'm handling variable identifiers and linking data into the graphs. - Since the ontologies and causal graphs are generated by an LLM, I'll need some way to identify & correct errors in them automatically. Right now, at causal inference time, I'm identifying error cases where a single data field applies to multiple causal variables and cases where multiple data fields apply to a single causal variable. I haven't figured out yet how exactly to correct the underlying graphs when an error is detected, but I'm thinking: (1) flag the relevant causal graphs and sections of the ontology, (2) if something gets flagged enough times, regenerate it. The bare minimum I want here is for "good" parts of the graphs to be stable, and for "bad" parts to be unstable.
>>36309 Thanks for your work on this. I'm still trying to work my way through the unread threads, until I have time to use OpenAI and your software.
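The flag-and-regenerate idea from >>36309 can be sketched in a few lines. This is only a toy illustration of the "bad parts become unstable, good parts stay stable" policy; the class name, threshold, and the regeneration stub are all made up:

```python
from collections import Counter

class GraphHealth:
    """Toy flag-and-regenerate tracker for LLM-generated graph sections.
    'regenerate' is a stub standing in for a fresh LLM generation call."""

    def __init__(self, threshold=3):
        self.threshold = threshold   # how many errors before regenerating
        self.flags = Counter()       # error count per graph/ontology section

    def flag(self, section):
        """Record an inference-time error against a section.
        Returns True once the section has been flagged enough to regenerate."""
        self.flags[section] += 1
        return self.flags[section] >= self.threshold

    def regenerate(self, section):
        """Stub: re-prompt the LLM for this section and reset its error count."""
        self.flags[section] = 0

health = GraphHealth(threshold=2)
health.flag("ontology/animals")           # first error: keep the section
if health.flag("ontology/animals"):       # second error: regenerate it
    health.regenerate("ontology/animals")
```

Stable sections never accumulate flags, so only the parts the inference step keeps tripping over get rebuilt.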

Open file (380.52 KB 512x512 1 (25).png)
Open file (359.76 KB 512x512 1 (58).png)
Open file (360.42 KB 512x512 1 (62).png)
Open file (330.60 KB 512x512 1 (93).png)
Open file (380.42 KB 512x512 1 (104).png)
AI Robowaifu Art and References ** SoaringMoon 11/25/2022 (Fri) 06:43:32 No.17763 [Reply] [Last]
I generated a whole bunch of neat images with Stable Diffusion 1.6. Enjoy. You are free to use them for whatever. >OP images are my five favorite of the bunch. Some proportions are off obviously. < "a robowaifu with [color] hair, digital painting, trending on artstation" was the generation phrase.
---
>Sorry to spoil all your files, rather than just the one (w/ Lynxchan it's all or nothing after the fact). The Problem Glasses are a Leftist dog-whistle that is rather distasteful around here (and also a red-flag). Certainly not something we would want to look at year-after-year in the catalog. Hope you understand, OP. ** Probably best to limit it to image generation, but also tolerating clips. Anything more advanced goes into the current propaganda thread: (>> TBD) . >=== -edit subject -add footnote
Edited last time by Chobitsu on 09/06/2024 (Fri) 09:14:10.
286 posts and 550 images omitted.
>>32653 3D bust to 3D model via Hunyuan3D >>36287
>>36072 Heh. BTW, this is a good example of our opus about >Mind the fork, lass... (cf. >>4313, et al) If we watch our p's & q's, dot all our i's and cross all our t's, then we too can have snu-snu-sized waifus such as this, that we can casually carry around! :D
>>33117 I burned through the rest of my credits on Runway AI. I wasted 60-75€ over the last few months on a subscription which I didn't use. Now I'm out; this here is the rest. Now that I look through my pics here >>33029, I wish I had used some others or just made more. Maybe I will, but rather with another platform than Runway, maybe Pikalabs or Luma. I only post the relatively good videos here; the other ones are more flawed. The first three are about the robot waifu in the kitchen, just three variants of the same video. The third one is the shortest, but the endings of the others are a bit flawed.
>>36461 Apparently the max per posting is 20MB, not just per file.

Speech Synthesis/Recognition general Robowaifu Technician 09/13/2019 (Fri) 11:25:07 No.199 [Reply] [Last]
We want our robowaifus to speak to us, right? en.wikipedia.org/wiki/Speech_synthesis https://archive.is/xxMI4 research.spa.aalto.fi/publications/theses/lemmetty_mst/contents.html https://archive.is/nQ6yt The Tacotron project: arxiv.org/abs/1703.10135 google.github.io/tacotron/ https://archive.is/PzKZd No code available yet, hopefully they will release it. github.com/google/tacotron/tree/master/demos


Edited last time by Chobitsu on 07/02/2023 (Sun) 04:22:22.
357 posts and 137 images omitted.
>>36419 Isn't Narrator in the accessibility settings just a built-in text-to-speech program?
>>36383 >>36417 Thanks for your help, Anons! :^)
>>36423 I tried that at first, but the problem is that as far as I know, it reads EVERYTHING on the screen.
>>36419 >but me and many others are on Windows Okay, I assume this is for development, while the real system will more likely be Linux. Anyways, I don't know how this exactly works but I think you can use the embedded Linux in Windows or whatever this is, and I assume there's also a repository. WSL: https://learn.microsoft.com/en-us/windows/wsl/about
>>36447 Yeah, WSL currently defaults to an Ubuntu 24 variant system + terminal. It's not a perfect match, but it's close enough in most respects. For example, I've been able to build & successfully run juCi++ [1][2][3] on it (this is a moderately complex & dependency-laden GTKMM-based GUI application, built from source) without any hiccups. This subsystem is very simple to set up, and I'll be happy to help anyone here who may be struggling to do so. Hopefully it can support Anon's other development needs, and if not, then moving over to a full Linux system will be all the easier for them afterwards. Cheers. :^)
---
1. https://gitlab.com/cppit/jucipp/-/blob/master/docs/install.md#debianlinux-mintubuntu
2. Here's a one-liner to copypasta onto your new Ubuntu terminal for installing all its dependencies:
sudo apt-get install libclang-dev liblldb-dev || sudo apt-get install libclang-6.0-dev liblldb-6.0-dev || sudo apt-get install libclang-4.0-dev liblldb-4.0-dev || sudo apt-get install libclang-3.8-dev liblldb-3.8-dev; sudo apt-get install universal-ctags || sudo apt-get install exuberant-ctags; sudo apt-get install git cmake make g++ clang-format pkg-config libboost-filesystem-dev libboost-serialization-dev libgtksourceviewmm-3.0-dev aspell-en libaspell-dev libgit2-dev
Then just follow the rest of the install instructions from the link above (i.e., git clone --recursive https://gitlab.com/cppit/jucipp , etc.)
3. WSL is also a great platform for Windows users to build & run BUMP for archiving this board, btw ( >>14866 ). >=== -prose edit -add footnote/hotlink -add dependencies/BUMP footnotes
Edited last time by Chobitsu on 02/03/2025 (Mon) 15:36:01.

Robot skeletons and armatures Robowaifu Technician 09/13/2019 (Fri) 11:26:51 No.200 [Reply] [Last]
What are the best designs and materials for creating a skeleton/framework for a mobile, life-sized gynoid robot?
243 posts and 123 images omitted.
This here https://youtu.be/Fd-0tHewFf4 is related to 3D printing >>94 and modelling >>415 but I think it's more general and is useful for armatures (shells) and flexible subskin elements. The video shows a method for making 3D printed parts that can give in to pressure from different directions. Something I was trying to do for quite some time: >>17110 >>17151 >>17195 >>17630 and related comments. It refers to PLA in the headline and in the video, but this doesn't matter. It's just that the part itself is flexible, while the material itself doesn't have to be.
>>36324 I noticed that the Missle_39 video you posted ( >>36299 ) contains this same style of structural, flexible printing within the torso volume of their robowaifu. That video convinced me of the value of such an approach, so it's added to the long list of research topics for me. Cheers NoidoDev, and thanks! :^)
>>36324 >>36325 Decided to do a snap to clarify specifically: https://trashchan.xyz/robowaifu/thread/26.html#43
>>36331 This in the middle is just some regular infill, I think. It can be selected in the slicer. Looks clearly like "Gyroid Infill" https://help.prusa3d.com/article/infill-patterns_177130
>>36366 POTD Excellent resource, NoidoDev, thanks!! Yeah, that looks exactly like the same kind of infill. Just looking at it, I knew it would be strong in every direction (a fairly high-priority, in a dynamic system like a robowaifu), and the notes in your link confirmed that. <---> Thanks again, Anon. Cheers. :^)
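For anyone wanting to play with the pattern outside a slicer: the gyroid infill in that Prusa link is (approximately) the zero level set of sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x). A minimal sketch for sampling it; the wall-thickness value is just an arbitrary illustration, not anything a slicer actually uses:

```python
import math

def gyroid(x, y, z):
    """Implicit gyroid approximation: the zero level set is the surface.
    Slicers fill the print volume with a thickened version of this pattern,
    which is why it's strong in every direction."""
    return (math.sin(x) * math.cos(y)
            + math.sin(y) * math.cos(z)
            + math.sin(z) * math.cos(x))

def inside_wall(x, y, z, thickness=0.3):
    """A point counts as 'printed material' if it lies near the surface."""
    return abs(gyroid(x, y, z)) < thickness

# The surface passes through the origin: all three terms vanish there.
print(gyroid(0.0, 0.0, 0.0))
```

Thresholding the absolute value like this is the standard trick for turning an implicit surface into a solid shell of finite thickness.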

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc.) is also highly encouraged! I'll start ^^! So, the philosophers I know of that take this stuff seriously:
Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on YouTube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics
Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence, including sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom.
Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule following, is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common-sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is from way before AI was even a serious topic, lol.
Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!
Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses here with a colleague is that our current AI is Markovian, whereas fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith
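Shanahan's global-workspace-plus-parallel-modules idea can be caricatured in a few lines. A toy sketch only; the class, the salience numbers, and the "modules" here are invented for illustration:

```python
class GlobalWorkspace:
    """Toy global workspace (Baars/Shanahan style): specialist modules run
    in parallel and compete to post content; the most salient proposal wins
    and is broadcast back to every registered module."""

    def __init__(self):
        self.modules = []

    def register(self, module_inbox):
        """A module is represented here by just its inbox (a list)."""
        self.modules.append(module_inbox)

    def cycle(self, proposals):
        """proposals: list of (salience, content) pairs from the modules."""
        salience, winner = max(proposals)   # competition for the workspace
        for inbox in self.modules:
            inbox.append(winner)            # broadcast to all modules
        return winner

gw = GlobalWorkspace()
inbox_vision, inbox_motor = [], []
gw.register(inbox_vision)
gw.register(inbox_motor)
winner = gw.cycle([(0.2, "hum of the fridge"), (0.9, "owner fell over")])
# "owner fell over" wins the competition and reaches every module
```

The real theory involves recurrent dynamics and attention, of course, but the compete-then-broadcast loop is the core mechanism.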


257 posts and 113 images omitted.
>>36259 >...like the word 'literally' which is literally actually and not literally... I'm stealing this.
Very interesting convo. Thanks for all the details! Again, way above my pay grade but as I started going through this yesterday, I also thought garbage in \ garbage out. But from the start, my intention was to say one man's trash is another man's treasure I guess. That is to say, if that garbage makes me happy, it has produced a valid use case and that's all that matters to me, but I'm a proponent of the subjective theory of value.
>>36274 >I also thought garbage in \ garbage out. This. I think you're right, Barf! Cheers. :^) >>36307 Very interesting. We truly understand little about the human psyche, IMO. Lots to learn still. Thanks, GreerTech! Cheers. :^)
>>36307 Thanks! That sounds like the article I read. It seems like prompt engineers are closer to AGI than the GPU farms training the LLMs. People were shocked by reasoning models, but prompt engineers have been doing that for a while. The same could happen for imagination, I hope.

Humanoid Robot Projects Videos Robowaifu Technician 09/18/2019 (Wed) 04:02:08 No.374 [Reply] [Last]
I'd like to have a place to accumulate video links to the various humanoid – particularly gynoid – robotics projects out there. Whether they are commercial-scale or small-scale projects, if they involve humanoid robots, post them here. Bonus points if it's the work of a lone genius. I'll start. Ricky Ma of Hong Kong created a stir by creating a gynoid that resembled Scarlett Johansson. It's an ongoing project he calls an art project. I think it's pretty impressive even if it can't walk yet. https://www.invidio.us/watch?v=ZoSfq-jHSWw === Instructions on how to use yt-dlp to save videos ITT to your computer: (>>16357)
Edited last time by Chobitsu on 05/21/2022 (Sat) 14:20:15.
222 posts and 75 images omitted.
>>35201 Thanks, NoidoDev! I found this based on searching from your crosslink : https://x.com/missile_39?mx=2 Looks like they're making some great progress! I don't read Nihongo yet, so I'm unsure at this point what the goal with their project is. Regardless, I sure wish them well with it! Thanks again, Anon. Cheers. :^)
Open file (1.99 MB 1920x1080 clapclapclap.png)
> Hannah Dev, David Browne Q&A: https://youtu.be/yFvSYekCuBM Arms: https://youtu.be/UX-1hr3NPeo > The Robot Studio DexHand, reach, pick and place: https://youtu.be/uF7vVPG_mf0 Hand picking M8 screw: https://youtu.be/PucX_w9-fOs DexHand and HOPE arm, repeated picking: https://youtu.be/JfiN_qcpODM > Realbotix (known for Harmony) CES, current product demo, price US$175k or more: https://youtu.be/2HQ84TVcbMw > HBS Lab Horizontal Grasping: https://youtu.be/CR_aLIKelv8 > Sanctuary AI In-hand manipulation: https://youtu.be/O73vVHbSX1s > Tesla bot Walking (might be fake): https://youtu.be/xxoLCQTN0KA > Chinese robots Fails, probably biased source: https://youtu.be/12IwfzyHi0A
>>35675 POTD Nice work, NoidoDev. Kind of a treasure cache of good information. I'm particularly excited to see HannahDev's good progress with brushless actuation. I hope we here can somehow return the favor to him someday. Thanks again! Cheers. :^)
>>35207 >Missile_39 There's also a new video. I didn't watch it completely and it's in Japanese, but it shows the parts of the current robots and some ideas: https://youtu.be/ZC28u1Dqcpg

Robot Eyes/Vision General Robowaifu Technician 09/11/2019 (Wed) 01:13:09 No.97 [Reply] [Last]
Cameras, Lenses, Actuators, Control Systems Unless you want to deck out your waifubot in dark glasses and a white cane, learning about vision systems is a good idea. Please post resources here. opencv.org/ https://archive.is/7dFuu github.com/opencv/opencv https://archive.is/PEFzq www.robotshop.com/en/cameras-vision-sensors.html https://archive.is/7ESmt >=== -patch subj
Edited last time by Chobitsu on 12/27/2024 (Fri) 17:31:13.
140 posts and 57 images omitted.
Open file (658.38 KB 1089x614 7.png)
These, with the convex lens that you can split apart, might be nice. No cameras, but they could probably be added, and it has everything else on a custom PCB already. https://www.adafruit.com/product/4343
This is really exciting stuff lately ITT, Anons. Thanks for linking to resources for us all! Cheers. :^)
Researchers were able to tweak machine vision into being usable in low-light conditions https://techxplore.com/news/2025-01-neural-networks-machine-vision-conditions.html
>>36237 Thanks GreerTech! I'm actually interested in devising a smol flotilla of surveillance drones (the tiny, racing ones) for a robowaifu's use for situational-awareness on grounds. Having 'night vision' is very useful for this ofc -- especially if no special hardware would be required! Cheers. :^)
>>25927 Unfortunately, it looks like the project may be dead (your link was broken and the last update was last January), but I wonder if it could be retooled with newer and more efficient LLMs and vision models. It definitely caught my eye; it solved the elephant in the room I was thinking about: how do we tie a vision model to an LLM? https://github.com/haotian-liu/LLaVA
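For anyone wondering what the "tie a vision model to an LLM" trick actually looks like: LLaVA's core move is a learned projection that maps vision-encoder patch features into the LLM's token-embedding space, so image patches can sit in the prompt like word tokens. A dependency-free miniature of that idea (the dimensions and weights here are toy values, nothing like the real model):

```python
import random

random.seed(0)

def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Toy dimensions: a CLIP-style patch feature (4-d) -> LLM embedding (3-d).
VISION_DIM, LLM_DIM = 4, 3
W_proj = [[random.uniform(-1, 1) for _ in range(VISION_DIM)]
          for _ in range(LLM_DIM)]   # the learned projection, randomly init'd

def project_patches(patch_features):
    """The LLaVA trick in miniature: project each vision patch feature
    into the LLM's embedding space."""
    return [matvec(W_proj, f) for f in patch_features]

patches = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
image_tokens = project_patches(patches)
# image_tokens now live in the same space as word embeddings and would be
# concatenated ahead of the text tokens before the LLM forward pass.
```

In the real model the projection is trained (a linear layer, later an MLP) while the vision encoder and LLM are mostly frozen, which is why the approach is cheap enough to retool with newer models.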

AI Software Robowaifu Technician 09/10/2019 (Tue) 07:04:21 No.85 [Reply] [Last]
A large amount of this board seems dedicated to hardware, what about the software end of the design spectrum, are there any good enough AI to use?

The only ones I know about offhand are TeaseAi and Personality Forge.
140 posts and 44 images omitted.
>>35150 Press F <insert: Oh!? It's maintenance time already?.jpg>
>>31405 Very interesting, will do some testing on my Pi 5 with Gemma2 and return my findings. it also seems to be able to use the weights directly from ollama, which is super nice.
>>35044 >o3 just came out and it is multiple times better than chatgpt-4. the argument that the underlying tech for current ai is not good enough is very weak. Who claims that the tech is "too weak"? It's an online service, and not optimized to act like a human-like robot wife. We still need to build a framework to handle local LLMs. That said, improvements in such online services and self-hosted LLMs will make it easier to get help with research and coding.
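On the "framework to handle local LLMs" point: a framework can start as little more than a thin wrapper around a local server's HTTP API. A minimal stdlib-only sketch against Ollama's /api/generate endpoint (the model name is whatever you've pulled locally; actually running it of course requires the server to be up):

```python
import json
import urllib.request

def build_generate_request(model, prompt, host="http://localhost:11434"):
    """Build a request for a local Ollama server's /api/generate endpoint.
    Payload fields follow Ollama's REST API; stream=False asks for one
    complete JSON response instead of a token stream."""
    payload = json.dumps({"model": model,
                          "prompt": prompt,
                          "stream": False}).encode()
    return urllib.request.Request(
        host + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"})

req = build_generate_request("gemma2", "Say hello to Anon.")
# To actually run it (requires a running local ollama server):
#   with urllib.request.urlopen(req) as r:
#       print(json.loads(r.read())["response"])
```

Keeping the request-building separate from the network call makes it easy to swap in a different local backend (llama.cpp's server, etc.) later.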
>>35370 Machine learning isn't just compressed Google autocomplete, like most people think. This becomes more evident when machine learning is applied to videogames. A genetic algorithm can make a boxer learn how to box on its own, for example: https://m.youtube.com/watch?v=SsJ_AusntiU&pp=ygUJYm94aW5nIGFp
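The genetic-algorithm loop behind demos like that boxing video is tiny: score each candidate, keep the best, crossover, mutate, repeat. A minimal self-contained sketch; here the "fitness" is just counting ones in a bitstring (the boxer uses the same loop with a physics sim as the fitness function):

```python
import random

random.seed(42)

def fitness(genome):
    """Toy reward: number of 'good moves' in the genome (OneMax)."""
    return sum(genome)

def evolve(pop_size=30, genes=20, generations=60, mut_rate=0.02):
    """Minimal genetic algorithm: selection, single-point crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: keep the top half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)      # single-point crossover
            child = a[:cut] + b[cut:]
            # mutation: rarely flip a bit (bool xors cleanly with 0/1 ints)
            child = [g ^ (random.random() < mut_rate) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # should be at or near the maximum of 20
```

No gradient, no dataset: the population improves purely from the selection pressure of the fitness signal, which is exactly what makes it feel like the agent "learns on its own".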
>DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models [1][2] >abstract: >Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO. They seem to go into some depth describing the optimization approaches they used to achieve the higher efficiencies with the available hardware. --- 1. https://arxiv.org/abs/2402.03300 2. https://github.com/deepseek-ai/DeepSeek-Math
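The GRPO part of the abstract is easy to miss but neat: instead of PPO's learned value network as a baseline, GRPO samples a *group* of completions per prompt and normalizes each completion's reward against its own group's statistics. A sketch of just that advantage computation (simplified from the paper's description; the reward values are invented):

```python
import statistics

def grpo_advantages(group_rewards):
    """Group-relative advantages: normalize each sampled completion's
    reward by its group's mean and standard deviation. This replaces
    PPO's learned value baseline, saving the memory a critic would use."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards) or 1.0  # guard zero-variance groups
    return [(r - mean) / std for r in group_rewards]

# 4 completions sampled for one math prompt, scored 0/1 for correctness:
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])
print(advs)  # correct answers get +1.0, wrong answers -1.0
```

Correct completions come out with positive advantage and wrong ones negative, relative only to their siblings, which is what makes the method cheap on memory compared to full PPO.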

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python3 python3-pip python3-dev
python3 -m pip install --upgrade pip
pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
66 posts and 18 images omitted.
On the chatbot front, I've been working to update my old Python chatbot to actually be a good companion. Here's a sample of my work, but I'm far from done. https://files.catbox.moe/pf85ai.zip
>>35955 Thanks, GreerTech! Personally, I'm a big fan of Cleverbot! [1] :DD JK, good luck with revamping it, Anon. Cheers. :^) --- 1. Remarkably, it's still available!! https://www.cleverbot.com/
>>35928 Fun bot and easy to install. Another suggestion for Python is using pre-compiled C binaries and just calling the CLI via Python. It should make it a little faster and more modular, and it gets around having to install PyTorch, so it's much easier to ship. Here are links to the latest Whisper and Piper binaries: https://github.com/ggerganov/whisper.cpp/actions/runs/12886572193 https://github.com/rhasspy/piper/releases/tag/2023.11.14-2 It's kind of the best of both worlds, since you can code in Python and still get the speed/ease of pre-compiled binaries for the STT/TTS system.
>>35976 Thanks very kindly for the links here, Barf. Cheers. :^)
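A minimal sketch of the CLI-calling approach suggested in >>35976. The flag names below follow the whisper.cpp and Piper READMEs, but the binary names and model filenames vary by build and download, so treat them as placeholders:

```python
import subprocess

def transcribe_cmd(wav_path, model_path, binary="./whisper-cli"):
    """Command line for whisper.cpp's CLI (older builds name the binary
    'main' instead of 'whisper-cli'). --no-prints keeps stdout clean."""
    return [binary, "-m", model_path, "-f", wav_path, "--no-prints"]

def speak_cmd(model_path, out_path, binary="piper"):
    """Command line for the Piper TTS binary; the text to speak is
    passed in via stdin."""
    return [binary, "--model", model_path, "--output_file", out_path]

# Usage sketch (requires the downloaded binaries + models on disk):
#   subprocess.run(transcribe_cmd("mic.wav", "ggml-base.en.bin"))
#   subprocess.run(speak_cmd("en_US-amy.onnx", "reply.wav"),
#                  input=b"Hello Anon!")
cmd = transcribe_cmd("mic.wav", "ggml-base.en.bin")
```

Building the argument lists in helper functions keeps the Python side testable without the binaries installed, and makes it easy to swap STT/TTS engines later.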
>>35976 I wonder, is it possible to package a program as an AppImage (Linux) https://appimage.org/ or with 0install (Linux, Windows, and macOS) https://0install.net/ ? These bundle all the files needed to run the program in one place. No additional installations needed.
