/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back. - TOR has been restored.

Canary update coming soon.



“I think and think for months and years. Ninety-nine times, the conclusion is false. The hundredth time I am right.” -t. Albert Einstein


ROBOWAIFU U Robowaifu Technician 09/15/2019 (Sun) 05:52:02 No.235 [Reply] [Last]
In this thread post links to books, videos, MOOCs, tutorials, forums, and general learning resources about creating robots (particularly humanoid robots), writing AI or other robotics related software, design or art software, electronics, makerspace training stuff or just about anything that's specifically an educational resource and also useful for anons learning how to build their own robowaifus. >tl;dr ITT we mek /robowaifu/ school.
Edited last time by Chobitsu on 05/11/2020 (Mon) 21:31:04.
147 posts and 72 images omitted.
Hundreds of different technical videos related to robotics engineering topics. >Medical robotics research laboratory at Carleton University - Laboratoire de recherche en robotique médicale chez Université Carleton https://www.youtube.com/@biomechlab/playlists
I made a learning website before; this may help with learning programming for beginners. The programming lessons are in the "IT Computer Certification" section https://greertechuniversity.neocities.org/prolearn
This well-accomplished Frenchman seems oddly opposed to the GH's typical agendas regarding AI -- though he himself is deeply entrenched in the GH Big-Tech world. IMO, he may be the main reason that Zuckerberg released the leaked LLaMA. Lots of interesting topics (+ criticisms of common LLM approaches being used today). Worth a look. http://yann.lecun.com/ <---> https://www.businessinsider.com/meta-ai-yann-lecun-deepseek-open-source-openai-2025-1 https://www.businessinsider.com/mark-zuckerberg-open-source-ai-platforms-future-competition-apple-llama-2024-9 >=== -add'l hotlinks -fmt, minor edit
Edited last time by Chobitsu on 02/15/2025 (Sat) 00:44:13.
>(electronics history -education -related : >>37097 )
> (Linux sysadmin, running automated backups -related : >>37419 ) >=== -patch crosslink
Edited last time by Chobitsu on 03/11/2025 (Tue) 01:09:59.

Selecting a Programming Language Robowaifu Technician 09/11/2019 (Wed) 13:07:45 No.128 [Reply] [Last]
What programming language would suit us and our waifus best? For those of us with limited experience programming, it's a daunting question.
Would a language with a rigid structure be best?
Do we want an object-oriented language?
How much do you care about whether or not a given language is commonly used and widespread?
What the fuck does all that terminology mean?
Is LISP just a meme, or will it save us all?

In this thread, we will discuss these questions and more so those of us who aren't already settled into a language can find our way.
240 posts and 36 images omitted.
>>36722 ????? I thought they were just some lite resource Linux??????
>>36722 yeah its fine, i get blow outs too, point is you dont have the luxury of being exclusive and that was an #include/#pragma joke
>>128 I'd say for code that is going to make up parts of it that are going to be called often, like if you were to add a GPU into the robot to be its "brain" to navigate around like a human would (and have reflexes and spatial awareness and so on), then this should be written with something like C or C++. The other, "higher level" stuff, can be written in a scripting language so it's easier for people to contribute to and isn't going to affect performance/power efficiency that much. You'll often see this with video games, where the main engine, which is rendering the graphics on the screen and doing the heavy lifting, is all C/C++. Then for things like your quest system, you might use something like Lua or Python so it's easy for even novice programmers to add new things into the game (they're working with simple scripts that hide the complexities of the engine and let them focus on making things). You'll also see this with game mods too, it's a tried and tested approach. If you're curious, look into something like "plugin architecture" - essentially it's dead simple to make mods or plugins this way. The application itself has a /plugins or /mods folder, you just copy in the script files as their own folder and that's it; when the application starts it scans that folder and integrates it. So now even a brainlet can extend the main functionality and not have to do anything complex.
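The plugin-folder idea above can be sketched in a few lines of Python. (The folder name and the register() hook are assumptions for illustration, not any particular engine's API -- real engines each define their own plugin contract.)

```python
import importlib.util
from pathlib import Path

def load_plugins(plugin_dir="plugins"):
    """Scan a folder for .py files and import each one as a plugin module."""
    plugins = []
    for path in sorted(Path(plugin_dir).glob("*.py")):
        # Import the file as a standalone module, keyed by its filename.
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Assumed convention: each plugin exposes a register() entry point.
        if hasattr(module, "register"):
            plugins.append(module.register())
    return plugins
```

Dropping a new script into the folder is then the whole "installation" step, which is why the approach is so friendly to novice contributors.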
We've discussed Formal Verification here before : ( <several posts I can't find right now b/c we don't have a perfect index of the board in our Library thread, and I don't have Waifusearch on this box>(lol), >>33900, et al). F* seems rather rigorous, and likely would be very handy for us to master here for several crit-secs of our C3 robowaifu systems codebase. https://fstar-lang.org/
>>36736 >and that was an #include/#pragma joke Lol. Me and my fast lightning mind missed that one. :D >>36776 >The other, "higher level" stuff, can be written in a scripting language so it's easier for people to contribute to and isn't going to affect performance/power efficiency that much. I get the argument, Anon. But the simple truth is that yuge swaths of 'judgement-oriented' code (ie, such as is pertinent for >>10000, et al) are both high-level, and have to execute lightning fast to be of any realworld, practical use. If we have to just go on without any command outputs for, say, 10 seconds (or even 1 second) while Python parses its complex decision trees to decide whether -- while a 200kg robowaifu is merrily rolling along in the kitchen cleaning things up after dinner -- she should continue driving over the baby now surprisingly lying on the floor directly in her pathway, or should stop first before that happens... then lots & lots of bad things will ensue! :DD >You'll often see this with video games. Again, completely understood. OTOH, most times people don't actually die in the realworld while vidya'g. Robowaifus (and robots in general) can actually kill you accidentally. >tl;dr It's complicated. :^) >===


Edited last time by Chobitsu on 02/14/2025 (Fri) 04:23:17.

Robot skin? Possible sensitivity? Robowaifu Technician 09/15/2019 (Sun) 07:38:17 No.242 [Reply] [Last]
The Anki VECTOR has a skin-like touch sensor on it, could we incorporate it into our robogirls?
161 posts and 32 images omitted.
cont. My priorities are: 1. tree trimming/felling/removal robot (not humanoid, more like an insect/squid/thing that climbs trees and "shears" off the limbs, not cut with a chainsaw. Far more efficient, quieter, less mess) 2. gears, winches, tackle, inductive reactance electric motors, fiberglass or other structures for the above and for my sailboat (which is so radical I fear to talk about the harebrained ideas I have. If I were to do so the sailboating community would hunt me down like Frankenstein with torches and pitchforks) :) 3. robowaifu. However, all of the above are necessary to build the parts for a robowaifu, and DIRECTLY related. So it may be I will mesh all of these at the same time as I learn. As you can imagine this will take... a while. I can see the robowaifu coming last, as I haven't a clue how to build the software to get this thing to do what I want. It's, I think, a huge leap far above making the motors, body, etc. I can see a clear path to all of the mechanical side. Maybe not easy, but I can see a path. The AI, I have no idea without spending a damn fortune, which would torpedo the whole thing. I REALLY would like it as it would be a great addition to the first task, tree trimming/felling/removal. Also sailing my boat as a sort of auto-pilot, and other unmentionables. As you've seen me commenting over a year, I was spitballing ideas. I like to write these down even if they are a bit harebrained. For some reason writing them down, even if it makes me look foolish at times, helps me "see" them and organize my thoughts. Hard to explain. I now believe I have a basic game plan, and I'm now accumulating the tools and materials needed. I have the 3D printer and the beginnings of the materials needed. So soon I should start experimenting. I also have ordered and received some ESP-32 microcontrollers. I still need a few more things. Lots of bits and pieces. I will definitely be getting some of this plastisol resin.
For some of this stuff I talked about, I ran across a guy, Tech Ingredients, who has this great video. I didn't get all my ideas from him, but he puts a lot of them in a nice package, which I'm not so good at. Most of the ideas I have are just common sense, and thinking about it leads people to the same path. If you want to understand how to get low-cost, strong stuff, you have to see it. He's doing something different, making body armor, but the basic principles are the same. The first video is about hard surfaces, tight material packing, and the different materials needed. The second is basic composites; it's good, though someone who already knows about composites could skip it. Super Strong Epoxy with Diamonds and More! https://www.youtube.com/watch?v=6KjlyXKeo8c


One other thing I have already. I've been slowly working on accumulating the stuff I need. Cerrosafe Metal | Low-Melting Point 158-190℉ Bismuth-Based Alloy. I have a pound of this. So you make a mold and put this in the mold. My thought is for the areas in the NASA chainmail that are internal -- stuff that can't be molded. I make a negative mold of all these interior parts from this metal, place it in the outer molds, pour in the resin and matrix, then melt the metal out with hot water. It's reusable indefinitely. I only have a pound, but it's plenty to test with.
The topic here was originally "sensitive skin". The thread for materials is here >>154 and for armatures >>200, and the main thread for someone showing off the prototypes he has built is here >>28715
>>35291 >>35292 >>35293 Great information, Grommet! I wouldn't dream of criticizing your goals, except in this one detail: they're not expansive enough yet!! >tl;dr Go big, bro! :D >>35300 Fren NoidoDev is right. For the sake of being able to locate such information in the future more-easily for every'non, perhaps move each subsection of this discussion to the appropriate threads? <---> Thanks, NoidoDev! You're always there for us all, to help keep things better organized. That's much-appreciated, bro. Cheers. :^)
>>34724 > The body would then be dipped in a bath or sprayed with layer of fluid containing keratinocytes that would attach onto the TPU and form chemical bonds. This would then be cultured until the desired skin thickness is met and the keratinocytes would be killed via heat sterilization. Neat! >Totally having a GITS vibe with this... There are tons of issues involved with intentionally keeping living cultures in our robowaifus, but perhaps the skin is our single-best option. Thanks, Anon. :^)

Open file (58.56 KB 1244x2374 MaidComFront.jpg)
Open file (39.04 KB 1244x2318 MaidComSide.jpg)
Open file (56.65 KB 1244x2318 MaidComBack.jpg)
MaidCom Kiwi 02/08/2024 (Thu) 06:40:01 No.29219 [Reply] [Last]
MaidCom Project Thread 2 Project Goal: Simple, low cost, extensible platform for robowaifu development. Picrel shows the model which is being developed. Every part will be designed to be easily printable and replaceable. Her designs will be open source. Modding and customization will be encouraged. This is a base model. Early revisions will be heavily limited in functionality. I invite you to help design and define standards so that it is easy to create specialized add-ons to allow her to become your own waifu. Collaboration is important, MaidCom is officially partnered with Lin and his Waifu Wheelchair, if you'd also like to be officially partnered, say so. Cat ears and miniskirts are encouraged! <--- Previous thread: ( >>15630 ) >=== -add crosslink
Edited last time by Chobitsu on 08/14/2024 (Wed) 21:50:13.
143 posts and 102 images omitted.
Open file (118.48 KB 740x602 Front.png)
Open file (136.99 KB 740x602 Back.png)
>>36348 >>36349 Thanks! Still learning but, I know it's getting close to done.
>>36579 That's looking really good now, Anon. I've been banging a number of ideas around about cheap actuation solutions that will fit into the volume available for each hip/upper_thigh. For some reason (maybe b/c thinking about pinching) I'm finding this render that NoidoDev posted ( >>36561 ) inspiring rn. I'm also picturing a partial inner ball to rest inside the visible outer ball that could allow for nearby linear actuators to rest inside the torso volume, w/o interfering with the two major servos needed per joint. This would allow for 3-axis movements, wouldn't over-complicate the mechanisms additionally, and would still provide good pinch-protection given those two constraints. When I pick up modelling, I'll diagram my ideas about this for everyone to brainstorm & critique together. <---> Looking forward to seeing what's coming next, Kiwi! Cheers. :^) Soon.
Open file (1.39 MB 1920x1080 can_see_every_curve.png)
>>36346 >>36579 Great progress. I like it. At least with the imagined flesh suit on top of it. Though, the back in the last pic still looks a bit too bubbly for my taste, but okay. Anyways, I posted many of the pics on this thread on Reddit. If you mind, tell me. It's here: https://www.reddit.com/r/gynoidappreciation/comments/1iiqpgg/maidcom_project/
Open file (43.73 KB 660x964 BendOver.jpeg)
>>36609 Not quite sure I understand your idea. Could you provide a drawing? >>36646 I'm endeavoring to de-bubble her booty. Feel free to share anything I post anywhere. Heck, I'll freely share any model I still have with anyone that asks. I welcome distribution of anything related to MaidCom
>>36657 >I'm endeavoring to de-bubble her booty OH NO! I was looking and thinking, "Wow look at that booty", and now you're going to de-bootie it. A crime.

Open file (8.45 MB 2000x2811 ClipboardImage.png)
Cognitive Architecture : Discussion Kiwi 08/22/2023 (Tue) 05:03:37 No.24783 [Reply] [Last]
Chii Cogito Ergo Chii Chii thinks, therefore Chii is. Cognitive architecture is the study of the building blocks which lead to cognition. The structures from which thought emerges. Let's start with the three main aspects of mind; Sentience: Ability to experience sensations and feelings. Her sensors communicate states to her. She senses your hand holding hers and can react. Feelings: having emotions. Her hand being held brings her happiness. This builds on her capacity for subjective experience, related to qualia. Self-awareness: Capacity to differentiate the self from external actors and objects. When presented with a mirror, echo, or other self-referential sensory input, it is recognized as the self. She sees herself in your eyes' reflection and recognizes that is her, that she is being held by you. Sapience: Perception of knowledge. Linking concepts and meanings. Able to discern correlations congruent with having wisdom. She sees you collapse into your chair. She infers your state of exhaustion and brings you something to drink. These building blocks integrate and allow her to be. She doesn't just feel, she has qualia. She doesn't see her reflection, she sees herself reflected, she acknowledges her own existence. She doesn't just find relevant data, she works with concepts and integrates her feelings and personality when forming a response. Cognition, subjective thought reliant on a conscious separation of the self and external reality that integrates knowledge of the latter. A state beyond current AI, a true intellect. This thread is dedicated to all the steps on the long journey towards a waifu that truly thinks and feels.


Edited last time by Chobitsu on 09/17/2023 (Sun) 20:43:41.
450 posts and 135 images omitted.
>>35189 Some updates: - I've made very little progress on causal reasoning since my last update. I have the ontological relationships, and now I'm integrating them with causal reasoning. I'm working on that now. - I've learned a lot about what's important to me in a waifu. On that second point: - There are three factors that I think are critical: love, romantic interest, and relationship stability. - "Love" gets used for a lot of things, but I think the most relevant form is: feeling "at home" with someone, and feeling like that someone will always have a home at your side, no matter what. There's no single solution that would fit everyone here, but I think it always comes down to: the things you most deeply value, and what you feel is missing in other people that prevents you from feeling at home with them. Some examples (probably all of these will resonate with people broadly to some extent, but I think there's usually a "core" one that's both necessary and sufficient): - ... The ability to freely converse with someone, without having to worry about whether they "get it" or are going to misinterpret you. - ... Paying attention to nuance, and not oversimplifying things just because it's natural or convenient. - ... The willingness to pursue one's curiosities and creative inclinations at great depth. - ... I think Claude in particular is very good at uncovering these values, so I think LLMs broadly will be good at this in the not-too-distant future. - Romantic interest would be the feeling of wanting to be closer to someone, both emotionally and physically. I think there are two sides of this: the desire to be "observed" and the desire to "observe". I think the strongest form of "wanting to be observed" comes from the belief that someone can meaningfully contribute to the things you feel and believe. I think the strongest form of "wanting to observe" comes from the belief that someone embodies things you deeply value.
I think lowercase-r romantic interest can come just from these things, and capital-R Romantic interest comes from the resonance between these things and physical desires. The bridge between these things and physical desires seems to come from analogies with the physical senses: to be heard (respected), to be seen (understood), to be felt (empathized with). I think these analogies work because our brains are deeply wired to understand how to deal with the physical senses, and that circuitry gets reused for higher-level understanding. The analogies for smell (e.g., "something smells off") and taste ("having good taste") are a little harder to pin down and overlap strongly (probably because they both deal with sensing chemical properties), but I currently think the "right" way to think about those is in terms of judgement (to be judged). - Relationship stability comes from overlap in lifestyle and from someone not doing/embodying anything that disgusts you. Whereas the other two can be "optimized" mostly just by understanding someone's core values, this one likely can only be discovered through trial and error since lifestyles are complex things that can co-evolve with your waifu. Once I get to higher-level functionality in Horsona, I'll likely focus on trying to align with these things. I have some ideas on how to do this.
>>36101 POTD Amazing stuff, really. >I have some ideas on how to do this. Please hurry, Sempai! The world is anxiously waiting for this stuff!! :^)
>>36101 Minor update on merging ontological reasoning and causal reasoning: - The causal inference code seems able to handle ontological information now. I still need to update some interfaces with recent changes to how I'm handling variable identifiers and linking data into the graphs. - Since the ontologies and causal graphs are generated by an LLM, I'll need some way to identify & correct errors in them automatically. Right now, at causal inference time, I'm identifying error cases where a single data field applies to multiple causal variables and cases where multiple data fields apply to a single causal variable. I haven't figured out yet how exactly to correct the underlying graphs when an error is detected, but I'm thinking: (1) flag the relevant causal graphs and sections of the ontology, (2) if something gets flagged enough times, regenerate it. The bare minimum I want here is for "good" parts of the graphs to be stable, and for "bad" parts to be unstable.
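The flag-then-regenerate bookkeeping described above could be sketched like this in Python (class and method names are hypothetical, just to make the idea concrete; the actual Horsona internals may look quite different):

```python
from collections import Counter

class GraphStabilityTracker:
    """Count error flags per graph section; sections flagged past a
    threshold are candidates for regeneration by the LLM, so 'good'
    parts stay stable and 'bad' parts get churned."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.flags = Counter()

    def flag(self, section_id):
        """Record one inference-time error attributed to this section."""
        self.flags[section_id] += 1

    def needs_regeneration(self):
        """Sections whose flag count has reached the threshold."""
        return [s for s, n in self.flags.items() if n >= self.threshold]

    def reset(self, section_id):
        """Clear flags after a section has been regenerated."""
        del self.flags[section_id]
```

The threshold trades off stability against responsiveness: higher values tolerate occasional spurious flags, lower ones regenerate faulty graph sections sooner.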
>>36309 Thanks for your work on this. I'm still trying to work myself through the unread threads, till I have time to use OpenAI and your software.
> (discussion -related : >>37652, ...)

Robot skeletons and armatures Robowaifu Technician 09/13/2019 (Fri) 11:26:51 No.200 [Reply] [Last]
What are the best designs and materials for creating a skeleton/framework for a mobile, life-sized gynoid robot?
243 posts and 123 images omitted.
This here https://youtu.be/Fd-0tHewFf4 is related to 3D printing >>94 and modelling >>415 but I think it's more general and is useful for armatures (shells) and flexible subskin elements. The video shows a method for making 3D printed parts that can give in to pressure from different directions. Something I was trying to do for quite some time: >>17110 >>17151 >>17195 >>17630 and related comments. It refers to PLA in the headline and in the video, but this doesn't matter. It's just that the part itself is flexible, while the material itself doesn't have to be.
>>36324 I noticed that the Missle_39 video you posted ( >>36299 ) contains this same style of structural, flexible printing within the torso volume of their robowaifu. That video convinced me of the value of such an approach, so it's added to the long list of research topics for me. Cheers NoidoDev, and thanks! :^)
>>36324 >>36325 Decided to do a snap to clarify specifically: https://trashchan.xyz/robowaifu/thread/26.html#43
>>36331 This in the middle is just some regular infill, I think. It can be selected in the slicer. Looks clearly like "Gyroid Infill" https://help.prusa3d.com/article/infill-patterns_177130
>>36366 POTD Excellent resource, NoidoDev, thanks!! Yeah, that looks exactly like the same kind of infill. Just looking at it, I knew it would be strong in every direction (a fairly high-priority, in a dynamic system like a robowaifu), and the notes in your link confirmed that. <---> Thanks again, Anon. Cheers. :^)

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? we need to change this, or at least collect relevant philosophers. discussion about philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc!) also highly encouraged! I'll start ^^! so the philosophers i know that take this stuff seriously: Peter Wolfendale - the first Neo-Rationalist on the list. his main contribution here is computational Kantianism. just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. an interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize). Other than that he also thinks Kant has interesting solutions to the frame problem, origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and also has posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics Reza Negarestani - this is another Neo-Rationalist. he has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence. this includes sentient agents, sapient agents, and Geist. this guy draws from Kant as well, but he also builds on Hegel's ideas too. his central thesis is that Hegel's Geist is basically a distributed intelligence. he also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom Recc: Ray Brassier (recent focuses) - I dont think he is working on artificial general intelligence, but his work on Sellars, and in particular rule following is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve but better, though that guy is like way before AI was even a serious topic, lol. Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff! Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key stressing points he points out here with a colleague is that our current AI is Markovian when fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge on analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius however is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith


257 posts and 113 images omitted.
>>36259 >...like the word 'literally' which is literally actually and not literally... I'm stealing this.
Very interesting convo. Thanks for all the details! Again, way above my pay grade but as I started going through this yesterday, I also thought garbage in \ garbage out. But from the start, my intention was to say one man's trash is another man's treasure I guess. That is to say, if that garbage makes me happy, it has produced a valid use case and that's all that matters to me, but I'm a proponent of the subjective theory of value.
>>36274 >I also thought garbage in \ garbage out. This. I think you're right, Barf! Cheers. :^) >>36307 Very interesting. We truly understand little about the human psyche, IMO. Lots to learn still. Thanks, GreerTech! Cheers. :^)
>>36307 Thanks! That sounds like the article I read. It seems like prompt engineers are closer to AGI than the GPU farms training the LLMs. People were shocked by reasoning models, but prompt engineers have been doing that for a while. The same could happen for imagination, I hope.

Humanoid Robot Projects Videos Robowaifu Technician 09/18/2019 (Wed) 04:02:08 No.374 [Reply] [Last]
I'd like to have a place to accumulate video links to the various humanoid – particularly gynoid – robotics projects out there. Whether they are commercial scale or small scale projects, if they involve humanoid robots post them here. Bonus points if it's the work of a lone genius. I'll start. Ricky Ma of Hong Kong created a stir by creating a gynoid that resembled Scarlett Johansson. It's an ongoing project he calls an art project. I think it's pretty impressive even if it can't walk yet. https://www.invidio.us/watch?v=ZoSfq-jHSWw === Instructions on how to use yt-dlp to save videos ITT to your computer: (>>16357)
Edited last time by Chobitsu on 05/21/2022 (Sat) 14:20:15.
222 posts and 75 images omitted.
>>35201 Thanks, NoidoDev! I found this based on searching from your crosslink : https://x.com/missile_39?mx=2 Looks like they're making some great progress! I don't read Nihongo yet, so I'm unsure at this point what the goal with their project is. Regardless, I sure wish them well with it! Thanks again, Anon. Cheers. :^)
Open file (1.99 MB 1920x1080 clapclapclap.png)
> Hannah Dev, David Browne Q&A: https://youtu.be/yFvSYekCuBM Arms: https://youtu.be/UX-1hr3NPeo > The Robot Studio DexHand, reach, pick and place: https://youtu.be/uF7vVPG_mf0 Hand picking M8 screw: https://youtu.be/PucX_w9-fOs DexHand and HOPE arm, repeated picking: https://youtu.be/JfiN_qcpODM > Realbotix (known for Harmony) CES, current product demo, price US$175k or more: https://youtu.be/2HQ84TVcbMw > HBS Lab Horizontal Grasping: https://youtu.be/CR_aLIKelv8 > Sanctuary AI In-hand manipulation: https://youtu.be/O73vVHbSX1s > Tesla bot Walking (might be fake): https://youtu.be/xxoLCQTN0KA > Chinese robots Fails, probably biased source: https://youtu.be/12IwfzyHi0A
>>35675 POTD Nice work, NoidoDev. Kind of a treasure cache of good information. I'm particularly excited to see HannahDev's good progress with brushless actuation. I hope we here can somehow return the favor to him someday. Thanks again! Cheers. :^)
>>35207 >Missile_39 There's also a new video. I didn't watch it completely and it's in Japanese, but it shows the parts of the current robots and some ideas: https://youtu.be/ZC28u1Dqcpg

Robot Eyes/Vision General Robowaifu Technician 09/11/2019 (Wed) 01:13:09 No.97 [Reply] [Last]
Cameras, Lenses, Actuators, Control Systems Unless you want to deck out your waifubot in dark glasses and a white cane, learning about vision systems is a good idea. Please post resources here. opencv.org/ https://archive.is/7dFuu github.com/opencv/opencv https://archive.is/PEFzq www.robotshop.com/en/cameras-vision-sensors.html https://archive.is/7ESmt >=== -patch subj
Edited last time by Chobitsu on 12/27/2024 (Fri) 17:31:13.
140 posts and 57 images omitted.
Open file (658.38 KB 1089x614 7.png)
These with the convex lens that you can split apart might be nice. No cameras, but one could probably be added, and it has everything else on a custom PCB already. https://www.adafruit.com/product/4343
This is really exciting stuff lately ITT, Anons. Thanks for linking to resources for us all! Cheers. :^)
Researchers were able to tweak machine vision into being usable in low-light conditions https://techxplore.com/news/2025-01-neural-networks-machine-vision-conditions.html
>>36237 Thanks GreerTech! I'm actually interested in devising a smol flotilla of surveillance drones (the tiny, racing ones) for a robowaifu's use for situational-awareness on grounds. Having 'night vision' is very useful for this ofc -- especially if no special hardware would be required! Cheers. :^)
>>25927 Unfortunately, it looks like the project may be dead (your link was broken and the last update was last January), but I wonder if it could be retooled with newer and more efficient LLMs and vision models. It definitely caught my eye, since it solved the elephant in the room I was thinking about: how do we tie a vision model to an LLM? https://github.com/haotian-liu/LLaVA

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python python-pip python-dev
python -m pip install --upgrade pip
pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
66 posts and 18 images omitted.
On the chatbot front, I've been working to update my old Python chatbot to actually be a good companion. Here's a sample of my work, but I'm far from done. https://files.catbox.moe/pf85ai.zip
>>35955 Thanks, GreerTech! Personally, I'm a big fan of Cleverbot! [1] :DD JK, good luck with revamping it, Anon. Cheers. :^) --- 1. Remarkably, it's still available!! https://www.cleverbot.com/
>>35928 Fun bot and easy to install. Another suggestion for Python is using pre-compiled C binaries and just calling the CLI via python. It should make it a little faster, more modular and gets around having to install PyTorch so much easier to ship. Here are links to the latest Whisper and Piper binaries https://github.com/ggerganov/whisper.cpp/actions/runs/12886572193 https://github.com/rhasspy/piper/releases/tag/2023.11.14-2 It's kind of the best of both worlds since you can code in python and still get the speed\ease of pre-compiled binaries for the STT\TTS system.
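The CLI-wrapping approach above can be sketched with a small subprocess call, assuming a locally built whisper.cpp binary (the binary path and exact flags are assumptions -- check your build's --help output, since flag names have shifted between releases):

```python
import subprocess

def transcribe(audio_path, model_path, binary="./whisper-cli"):
    """Run a precompiled speech-to-text binary and return its stdout,
    avoiding a PyTorch dependency in the Python side entirely."""
    result = subprocess.run(
        [binary, "-m", model_path, "-f", audio_path, "--no-timestamps"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Since the binary is just another process, swapping in a different STT engine later only means changing the argument list, not the Python code around it.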
>>35976 Thanks very kindly for the links here, Barf. Cheers. :^)
>>35976 I wonder, would it be possible to package it as an AppImage (Linux) https://appimage.org/ or with 0install (Linux, Windows and macOS) https://0install.net/? These have all the files needed to run the program all in one place. No additional installations needed.
