/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Downtime was caused by the hosting service's network going down. Should be OK now.

An issue with the Webring addon was causing Lynxchan to intermittently crash. The issue has been fixed.




Open file (156.87 KB 1920x1080 waifunetwork.png)
WaifuNetwork - /robowaifu/ GitHub Collaborators/Editors Needed SoaringMoon 05/22/2022 (Sun) 15:47:59 No.16378 [Reply]
This is a rather important project for the people involved here. I just had this amazing idea, which allows us to catalogue and make any of the information here searchable in a unique way. It functions very similarly to an image booru, but for markdown formatted text. It embeds links and the like. We can really make this thing our own, and put the entire board into a format that is indestructible. Anyone want to help build this into something great? I'll be working on this all day if you want to view the progress on GitHub. https://github.com/SoaringMoon/WaifuNetwork
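The indexing side of this is simple enough to sketch in Python. A minimal tag index over markdown files; the archive/ layout and the tags: front-matter line are assumptions for illustration, not details from the actual repo:
[code]
import pathlib
import re
from collections import defaultdict

def build_index(root="archive"):
    """Map each tag to the set of markdown files that declare it."""
    index = defaultdict(set)
    for path in pathlib.Path(root).glob("**/*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Hypothetical convention: each post file carries a 'tags: a, b' line
        for line in re.findall(r"(?m)^tags:\s*(.+)$", text):
            for tag in line.split(","):
                index[tag.strip().lower()].add(path.name)
    return index

# Usage: build_index()["actuators"] -> files tagged 'actuators'
[/code]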
10 posts and 4 images omitted.
>>16530 Nah I'm good, I know how to handle JSON. XD
>>16413 Finally started sorting my data on robowaifus and AI in Obsidian today. > https://github.com/SoaringMoon/WaifuNetwork The data doesn't seem to be available anymore. Does anyone have it, or should I contact OP? I might have downloaded it somewhere, but I can't find it right now. It might also be a good idea to make this into a project using something like LangChain, NLTK, or similar to extract keywords and turn them into tags.
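For what it's worth, a bare-bones keyword extraction along those lines is a few lines of NLTK; the frequency heuristic here is my own assumed starting point, and LangChain or a proper keyphrase model would do better:
[code]
import re
from collections import Counter

import nltk
from nltk.corpus import stopwords

nltk.download("punkt", quiet=True)      # tokenizer model
nltk.download("stopwords", quiet=True)  # stopword list

def extract_tags(markdown_text, top_n=5):
    """Return the most frequent content words as candidate tags."""
    # Strip URLs and markdown punctuation before tokenizing
    plain = re.sub(r"https?://\S+|[#*`>\[\]()]", " ", markdown_text)
    words = nltk.word_tokenize(plain.lower())
    stop = set(stopwords.words("english"))
    keep = [w for w in words if w.isalpha() and w not in stop and len(w) > 3]
    return [w for w, _ in Counter(keep).most_common(top_n)]

print(extract_tags("Silicone skin over a printed skeleton; the silicone cures overnight."))
[/code]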
>>26259 >or should I contact OP? Yeah I think that'd be cool to find out what SoaringMoon's up to these days.
>>26263 >find out what SoaringMoon's up to these days. Well, that's easy, he's working in gaming. That's on his Github. He also has a Substack, and I saw him on the Discord a while ago. Probably just deleted the RW repo on Github for PR reasons.
>>26266 Cool! Well I hope he's doing well and will stop by to say hi again, and let us know how he's doing at some point. >Probably just deleted the RW repo on Github for PR reasons. Yeah, I warned you about hiding your power level, bro! :^)

Selecting a Programming Language Robowaifu Technician 09/11/2019 (Wed) 13:07:45 No.128 [Reply] [Last]
What programming language would suit us and our waifus best? For those of us with limited experience programming, it's a daunting question.
Would a language with a rigid structure be best?
Do we want an object-oriented language?
How much do you care about whether or not a given language is commonly used and widespread?
What the fuck does all that terminology mean?
Is LISP just a meme, or will it save us all?

In this thread, we will discuss these questions and more so those of us who aren't already settled into a language can find our way.
209 posts and 36 images omitted.
This is VERY COOL. A "Scratch"-like visual programming environment for microcontrollers, ESP32 included. It has a multitasking OS using bytecode; the OS is 16k. It can run in a browser or as a downloaded program, and can be used to test programs in a virtual microcontroller on the web or, I think, built into the program. http://microblocks.fun/ I've been reading a lot of Alan Kay's stuff, so this makes sense.
>>26245 Okay. Not sure how much one could do with that, or whether it's suitable for anything related to robowaifus, but it might come in handy.
>>26245 Neat! Thanks Grommet (good to see you BTW). I love Scratch, and think its kind of visual interface will be highly valuable as a scripting interface for us, once robowaifus have become commonplace enough for Joe Sixpacks to be clamoring for their own.
Python Sucks And I LOVE It | Prime Reacts https://www.youtu.be/8D7FZoQ-z20 tl;dw: Execution speed is only one thing; development speed often matters more, and getting things done matters even more. Built-in types are in C anyway. Parts of the code can be moved to Cython. Not mentioned: Mojo is around the corner, and based on Python.
Edited last time by Chobitsu on 11/08/2023 (Wed) 03:42:12.
>>26260 >Mojo is around the corner, and based on Python. Yeah, I'm curious if Modular will ever decide to end their commercial goals surrounding Mojo, and release it free as in speech to the world with no strings attached. It would be a shame for anons to get mired into some kind of Globohomo-esque tarbaby trap with something as vital to our robowaifus as her operational software. >=== -prose edit
Edited last time by Chobitsu on 11/08/2023 (Wed) 03:55:26.

Open file (32.62 KB 341x512 unnamed.jpg)
Cyborg general + Biological synthetic brains for robowaifus? Robowaifu Technician 04/06/2020 (Mon) 20:16:19 No.2184 [Reply] [Last]
Scientists made a neural network from rat neurons that could fly a fighter jet in a simulator and control a small robot. I think that lab-grown biological components would be a great way to go for some robowaifu systems. It could also make her feel more real. https://www.google.com/amp/s/singularityhub.com/2010/10/06/videos-of-robot-controlled-by-rat-brain-amazing-technology-still-moving-forward/amp/ >=== -add/rm notice
Edited last time by Chobitsu on 08/23/2023 (Wed) 04:40:41.
168 posts and 29 images omitted.
https://www.nature.com/articles/s41586-023-06484-9 >DNA-based programmable gate arrays for general-purpose DNA computing >The past decades have witnessed the evolution of electronic and photonic integrated circuits, from application specific to programmable1,2. Although liquid-phase DNA circuitry holds the potential for massive parallelism in the encoding and execution of algorithms3,4, the development of general-purpose DNA integrated circuits (DICs) has yet to be explored. Here we demonstrate a DIC system by integration of multilayer DNA-based programmable gate arrays (DPGAs). We find that the use of generic single-stranded oligonucleotides as a uniform transmission signal can reliably integrate large-scale DICs with minimal leakage and high fidelity for general-purpose computing. Reconfiguration of a single DPGA with 24 addressable dual-rail gates can be programmed with wiring instructions to implement over 100 billion distinct circuits. Furthermore, to control the intrinsically random collision of molecules, we designed DNA origami registers to provide the directionality for asynchronous execution of cascaded DPGAs. We exemplify this by a quadratic equation-solving DIC assembled with three layers of cascade DPGAs comprising 30 logic gates with around 500 DNA strands. We further show that integration of a DPGA with an analog-to-digital converter can classify disease-related microRNAs. The ability to integrate large-scale DPGA networks without apparent signal attenuation marks a key step towards general-purpose DNA computing.
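As a toy illustration of the dual-rail encoding the paper builds on (plain Python, nothing like the actual DNA implementation): each bit travels on two rails, so "no signal yet" is distinguishable from an explicit logic 0, which is what makes asynchronous cascading workable:
[code]
def dual(bit):
    """Encode a Boolean on two rails: (fires-for-1, fires-for-0)."""
    return (1, 0) if bit else (0, 1)

def AND(a, b):
    # The 1-rail fires only if both input 1-rails fired;
    # the 0-rail fires if either input 0-rail fired.
    return (a[0] & b[0], a[1] | b[1])

print(AND(dual(True), dual(False)))  # (0, 1): an explicitly signaled logic 0
[/code]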
Open file (99.31 KB 500x500 dystopia meme.jpg)
>>25568 >>25569 I'm not entirely sure about that, given that there's always a possibility you may be dealing with some necrotic tissue if it's not being properly supplied with nutrition. Lab-grown organic animal cells are still in their infancy, with a lot of regulations and moral/ethical concerns impeding further development. I'm all for it, as I'm an anarcho-capitalist, but there are too many factors at play right now regarding that stuff. Alginic hydrogel is more than appropriate for making soft muscle actuators and other body parts for gynoids, not to mention how well it does at being anti-microbial. What we do with this approach will act as a stepping stone for further development, inching us toward our goal of fixing the woman problem. My idea has a lot of holes, as all of the others do, regarding fabrication and lack of adequate R&D, but it'll just have to sit on the shelf for now with all of the other grandiose things I've thought up. I have more important matters to tend to for now. My primary concern is with the advent of government creeping into every facet of normal life. This will also affect any robot wife progress that could happen in the foreseeable future. I'm taking proactive measures to gain the knowledge required to build tools that will allow me and other people to circumvent this phenomenon. For now, sexbot maids will just have to sit on the shelf.
State of the Womb - The Economist: Are artificial wombs the future? https://www.youtu.be/hBSSb462_Z4 - with the typical retarded comments from people programmed by entertainment and a lack of thinking.
>>26171 >people don't think like me therefore they were brainwashed by the media
>>26185 Did anyone ever say anything about incubators other than "Saddam Hussein = bad"? Cognitive dissonance is an obvious sign of being indoctrinated.

Open file (1.45 MB 1650x1640 gynoid_anime.jpg)
Robowaifus in Media: Thread 02 01/14/2023 (Sat) 23:49:54 No.18711 [Reply] [Last]
Post about whatever media predominantly features at least one fembot, aka gynoid (female-form android), as an important character, ideally a robowaifu or synthetic girlfriend. Some freedom with the topic is allowed; virtual waifus or bodiless AI might also qualify if she is female. Negative framing of the whole topic should be mentioned with negative sentiment. Cyborgs with a human brain or uploads of women don't really fit in, but if she's very nice (Alita) and maybe not a human-based cyborg (catborg/netoborg) we can let it slide. Magical dolls have also been included in the past, especially when the guy had to hand-make them. Look through the old thread, it's worth it: >>82 - Picrel shows some of the better-known shows from a few years ago. I made a long list with links to more info on most known anime shows about fembots/robowaifus, without hentai but including some ecchi, and posted it here in the old thread: >>7008 - It also features some live-action shows and movies. I repost the updated list here, not claiming that it is complete, especially when it comes to live action and shows/movies we don't really like.
>In some cases I can only assume that the show includes robogirls, since the show is about various robots, but I haven't seen it yet.
A.D. Police Files: https://www.anime-planet.com/anime/ad-police-files
Andromeda Stories: https://www.anime-planet.com/anime/andromeda-stories
Angelic Layer: https://www.anime-planet.com/anime/angelic-layer
Armitage III: https://www.anime-planet.com/anime/armitage-iii
Azusa will help: https://www.anime-planet.com/anime/azusa-will-help
Blade Runner 2022: https://www.anime-planet.com/anime/blade-runner-black-out-2022
Busou Shinki: https://www.anime-planet.com/anime/busou-shinki-moon-angel
Butobi CPU: https://www.anime-planet.com/anime/buttobi-cpu
Casshern Sins: https://www.anime-planet.com/anime/casshern-sins
Chobits: https://www.anime-planet.com/anime/chobits


Edited last time by Chobitsu on 10/14/2023 (Sat) 05:44:28.
148 posts and 65 images omitted.
>>26158 >>26160 On-topic, spoilered NSFW is OK-ish, but we're particularly averse to 3DPD NSFW here. I'll compromise with you, Robophiliac, and just rm the actual nudity + disable hotlinks in your post. I hope you understand, Anon; it's nothing personal. Cheers. :^) --- update: sorry, I accidentally removed one of the non-nudes. btw, thanks for spoilering as well! >=== -add update msg -minor edit
Edited last time by Chobitsu on 11/01/2023 (Wed) 04:19:30.
>>26167 > we're particularly averse to 3DPD NSFW here np, I thought spoilered was OK, what with some of the content in the Important Question Vagoo thread. In future I won't post similar thumbnails.
>>26178 >what with some of the content in the Important Question Vagoo thread Sorry, I probably missed it if so. Can you or some anons link to it for me please? Just to be clear: frank discussions of the realities of devising sexual functionality for robowaifus are fine here (and this includes explicit, spoilered, robotics/systems imagery if it's directly on-topic). Think 'medical' textbook. Just steer clear of NSFW-3DPD-based imagery, thanks. :^) >=== -minor edit
Edited last time by Chobitsu on 11/02/2023 (Thu) 06:44:24.
>>26180 >>20159 This pic is of the actress playing the Maid Droid in the movie, not a doll. >>26180 >Just steer clear of NSFW-3DPD-based imagery I thought these videos might be a bit much given the thought below, >>25586 even though it looks like they are included in the realm of the OK if spoilered. >>21520 >>21543 >this includes explicit, spoilered, robotics/systems imagery if it's directly on-topic >>25586


Edited last time by Chobitsu on 11/03/2023 (Fri) 08:38:57.
>>26182 >This pic is of the actress playing the Maid Droid in the movie, not a doll. Got it thanks, Anon. >even though it looks like they are included in the realm of the OK if spoilered. OK-ish, I suppose. But if someone tried to do some kind of dump using such imagery, I'd definitely delete them. pr0n is easily found in thousands and thousands of places on the networks. No need for us here to add further to that pandering grotesquery! :^) If it's needed/on-topic, let's all keep it highly technical/artistic please (design, molding, casting, decorating, lubing, cleaning, repair, etc.) >I think anon should be smart enough not to click on anything spoilered while at work, but maybe not. Agreed, but I want to reduce the risk for that anon's sake. Dealing with harpies like that is already bad enough! :D ofc, anons shouldn't be sh*teposting during work at all, would be my best advice. :^) >tl;dr Just to reiterate (and closeout) this topical deviation: Avoid any 3DPD-NSFW. Beyond that, just use professional good judgment. >=== -prose edit
Edited last time by Chobitsu on 11/03/2023 (Fri) 12:04:23.

Robot Vision General Robowaifu Technician 09/11/2019 (Wed) 01:13:09 No.97 [Reply] [Last]
Cameras, Lenses, Actuators, Control Systems

Unless you want to deck out your waifubot in dark glasses and a white cane, learning about vision systems is a good idea. Please post resources here.

opencv.org/
https://archive.is/7dFuu

github.com/opencv/opencv
https://archive.is/PEFzq

www.robotshop.com/en/cameras-vision-sensors.html
https://archive.is/7ESmt
Edited last time by Chobitsu on 09/11/2019 (Wed) 01:14:45.
95 posts and 44 images omitted.
Open file (82.60 KB 386x290 Screenshot_158.png)
I was working on this here >>26112, using OpenCL to make video processing faster. Then I got this recommended by YouTube: https://www.youtu.be/0Kgm_aLunAo Github: https://github.com/jjmlovesgit/pipcounter It uses OpenCV to count pips on dominoes, and does it much faster and better than GPT4-Vision. I wonder if it would be possible to have an LLM adjust the code depending on the use case, perhaps with a library of common patterns to look out for. Ideally one would show it something new; it would detect the outer border, like the stones here, and then adjust until it can catch the details on all of the objects of interest. It could look for patterns depending on some context, e.g. a desk.
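A rough sketch of how such a pip counter might work in OpenCV; this is an assumed approach for illustration, not the code from the pipcounter repo, and the image path is a placeholder:
[code]
import cv2

# Load a domino photo as grayscale and smooth out noise
img = cv2.imread("domino.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)

# Pips are small dark circular blobs on a light face
params = cv2.SimpleBlobDetector_Params()
params.filterByCircularity = True
params.minCircularity = 0.7
params.filterByArea = True
params.minArea = 50
detector = cv2.SimpleBlobDetector_create(params)

keypoints = detector.detect(img)
print(f"pip count: {len(keypoints)}")
[/code]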
>>26132 >and does it much faster and better than GPT4-Vision. Doesn't really surprise me. OpenCV is roughly the state of the art in hand-written C++ code for computer vision. You have some great posts ITT Anon, thanks... keep up the good work! :^)
There are several libraries and approaches that attempt to achieve generalized object detection within a context, although creating a completely automatic, context-based object detection system without predefining objects can be a complex task due to the variability of real-world scenarios. Libraries and methodologies that have been used for more general object detection include:
1. YOLO (You Only Look Once): YOLO is a popular object detection system that doesn't require predefining objects in the training phase. It uses a single neural network to identify objects within an image and can detect multiple objects in real time. However, it typically requires training on specific object categories.
2. OpenCV with Haar Cascades and HOG (Histogram of Oriented Gradients): OpenCV provides Haar cascades and HOG-based object detection methods. While not entirely context-based, they allow for object detection using predefined patterns and features. These methods can be more general but might not adapt well to various contexts without specific training or feature engineering.
3. TensorFlow Object Detection API: TensorFlow offers an object detection API that provides pre-trained models for various objects. While not entirely context-based, these models are designed to detect general objects and can be customized or fine-tuned for specific contexts.
4. Custom Object Detection Models with Transfer Learning: You could create a custom object detection model using transfer learning from a pre-trained model like Faster R-CNN, SSD, or Mask R-CNN. By fine-tuning on your own dataset, the model could adapt to specific contexts.
5. Generalized Shape Detection Algorithms: Libraries like scikit-image (imported as skimage) in Python provide various tools for general image processing and shape analysis, including contour detection, edge detection, and morphological operations. While not object-specific, they offer tools for identifying shapes within images.
Each of these methods has its advantages and limitations when it comes to general object detection. If you're looking for a more context-aware system that learns and adapts to various contexts, combining traditional computer vision methods with machine learning models trained on diverse images may be a step towards a more generalized object detection system. However, creating a fully context-aware, automatic object detection system that adapts to any arbitrary context without any predefined objects is still a challenging area of research.
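As one concrete instance of option 2 above, OpenCV ships a pretrained HOG pedestrian detector that runs without any training on your part; the image path is a placeholder:
[code]
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("desk_scene.jpg")
boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))

# Draw a rectangle around each detected person
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
[/code]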


>>26146 Understood. If the goal is to identify various objects within a specific context (like a desk) without predefining the objects, and the lighting conditions might vary, a more flexible approach using general computer vision techniques can be applied. This could involve methods such as contour detection, edge detection, and basic image processing to identify objects within the context of a desk. You might use a more generalized version of object detection that isn't specific to particular objects but rather identifies any distinguishable shape within the context. Here's an example:
[code]
import cv2

# Read the image
image = cv2.imread('path_to_your_image.jpg')

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Apply thresholding or other preprocessing techniques to enhance object edges
# ...
[/code]
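The post is truncated there. A plausible continuation (purely illustrative, not the original poster's code) is Otsu thresholding plus contour extraction, treating every external contour as one candidate object:
[code]
import cv2

image = cv2.imread('path_to_your_image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu picks a threshold automatically, separating objects from the desk surface
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Outline each candidate object and save the result
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
cv2.imwrite('objects_found.jpg', image)
[/code]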


Open file (346.77 KB 696x783 1698709850406174.png)
Open file (199.93 KB 767x728 1698710469395618.png)
I suppose this is a good thread to use for discussing this concept: a swarm of small drones available for a robowaifu's use for enhanced perimeter/area surveillance, etc.

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
231 posts and 108 images omitted.
>>24301 That is super impressive. I had no idea you could do that on a phone. If that can be done on a phone, then a standard processor should be able to do something far more advanced.
>>24301 I looked at the code link and ??? I'm not seeing what he said in the video.
>>24489 Might be the case, I didn't test it. I think he only shares some basic elements, like for making the animation, the rest might only be explained or hinted at in the videos.
Open file (1.00 MB 900x675 ClipboardImage.png)
A miniature version of Pepper's Ghost for voice assistants. https://www.hackster.io/zoelenbox/ghost-pepper-voice-assistant-for-ha-293a9d
>>26129 Neat! Always a good idea to try to find ways to economize things. Thanks Anon. :^)

OpenGL Resources Robowaifu Technician 09/12/2019 (Thu) 03:26:10 No.158 [Reply]
Good VR is one important interim goal for a fully-realized RoboWaifu, and it's also much easier to visualize an algorithm graphically at a glance than to second-guess a mathematical function. OpenGL is by far the most portable approach to graphics atm.

>"This is an excellent site for starting with OpenGL from scratch. It covers from very introductory topics all the way to advanced topics like Deferred Shading. Liberal usage of accompanying images and code. Strongly recommended."

learnopengl.com/
https://archive.is/BAJ0e

Plus the two related books below, though learnopengl.com is particularly well-suited for beginners.

www.openglsuperbible.com/
https://archive.is/NvdZQ

opengl-redbook.com/
https://archive.is/xPxml

www.opengl.org
https://archive.fo/EZA0p
Edited last time by Chobitsu on 09/26/2019 (Thu) 08:14:37.
22 posts and 18 images omitted.
>>1848
Unrelated: Does the system redraw from the bottom up? For some reason I thought the drawing happened from the upper-left corner down, but apparently not. Given that the coordinate origin for the system starts in the lower-left corner (similar to a Cartesian system), I guess that makes sense. Live and learn.
>>26098 This here about the Lobster language is related to OpenGL. That's why it's fast; I don't think being compiled matters much for speed, though it might have other advantages. For now I'm trying to use Python with OpenGL to do the same thing. (Not to be confused with OpenCL, which I also need to use.) I found the term "delta compression" for calculating differences between frames. I hope I can make the animations smaller that way. My current way of "programming" this is asking ChatGPT for every step while learning how it works. With basic knowledge of Python it works relatively well, even with GPT-3. I'm getting the terms I need to look up how to do things, and code which needs some polishing.
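The delta idea itself is only a few lines of NumPy. A minimal sketch, with synthetic frames standing in for real video:
[code]
import numpy as np

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # "frame N"
curr = prev.copy()
curr[10:20, 10:20] = 255                                  # small changed region

# Signed difference: most entries are zero, so it compresses well
delta = curr.astype(np.int16) - prev.astype(np.int16)

# Exact reconstruction from the previous frame plus the delta
reconstructed = (prev.astype(np.int16) + delta).astype(np.uint8)
assert np.array_equal(reconstructed, curr)

print("nonzero delta entries:", np.count_nonzero(delta), "of", delta.size)
[/code]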
>>26105 That's how most video codecs like WebM work: you keep only keyframes and replace what's in between them with just the transformation required to produce the next frames. There must be lots of libraries for this.
Open file (18.98 KB 768x768 delta_frame_15.png)
Open file (33.69 KB 768x768 delta_frame_37.png)
>>26106 Thanks, that's what I asked ChatGPT, so I had something to ask further about. I need to make it interactive, though, or like very short GIFs with sound.
>>26108 Okay, I fell into a rabbit hole here. It was interesting, but probably useless; what I really need is a collection of GIFs or so. I wondered why no one else had this idea before, and I'm starting to understand where the problem is and why it might be hard or impossible.
[quote]
You are correct. In a delta compression approach where you keep the original frames and use delta frames to reconstruct, the primary advantage is not in reducing storage requirements but rather in potentially reducing the processing load when displaying or working with the frames in real time. The advantage lies in the efficiency of processing and transmitting the frames, especially when dealing with limited computational resources or bandwidth. Here's a clearer explanation of the advantage:
1. Reduced Processing Load: By storing and transmitting only the delta frames, you can reduce the amount of data that needs to be processed, especially when displaying or working with the frames in real time. Instead of working with full frames, you process the smaller delta frames, which can be computationally less intensive.
2. Real-Time Efficiency: In applications where real-time processing or streaming is crucial, delta compression can be advantageous. It allows for quicker decoding and display of frames, which is important in video conferencing, surveillance, and interactive applications.
3. Bandwidth Efficiency: When transmitting video data over a network, delta compression can reduce the required network bandwidth, making it feasible to stream video even with limited bandwidth.
However, it's important to note that you still need the original frames to apply the delta frames and reconstruct the complete frames. The advantage is in processing efficiency, not in storage efficiency. You trade off storage efficiency for computational and bandwidth efficiency. If your priority is purely reducing storage requirements and you don't need real-time processing or streaming, then traditional video codecs that achieve high compression ratios while storing complete frames might be more suitable for your use case.
[/quote]

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python python-pip python-dev
python -m pip install --upgrade pip
pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
58 posts and 14 images omitted.
Open file (79.43 KB 483x280 Screenshot_70.png)
Open file (124.54 KB 638x305 Screenshot_71.png)
Open file (124.48 KB 686x313 Screenshot_73.png)
Open file (186.01 KB 636x374 Screenshot_74.png)
Advanced use of exceptions in Python for reliability and debugging: >I Take Exception to Your Exceptions: Using Custom Errors to Get Your Point Across https://youtu.be/wJ5EO7tnDiQ (audio quality is a bit suboptimal)
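A tiny example in the spirit of the talk (my own illustration, not taken from the video): a custom exception carries domain context that a bare ValueError would lose:
[code]
class ServoRangeError(ValueError):
    """Raised when a joint is commanded past its mechanical limit."""
    def __init__(self, joint, angle, limit):
        self.joint, self.angle, self.limit = joint, angle, limit
        super().__init__(f"{joint}: {angle} deg exceeds limit of {limit} deg")

def set_angle(joint, angle, limit=180):
    if not 0 <= angle <= limit:
        raise ServoRangeError(joint, angle, limit)
    # ... drive the servo here ...

try:
    set_angle("left_elbow", 200)
except ServoRangeError as e:
    print(f"refusing to move {e.joint}: {e}")
[/code]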
>>24658 Great stuff NoidoDev. Exceptions are based.
I watched this video at 1.75x speed as a refresher, since I had some gaps from not writing Python in quite a while. It might also work for beginners with experience in some other language: > Python As Fast as Possible - Learn Python in ~75 Minutes https://youtu.be/VchuKL44s6E As a beginner you should of course go slower and test the code you've learned. While we're at it, Python has gained a lot of new features, and code can now compile better to C using Cython: https://youtu.be/e6zFlbEU76I
https://automatetheboringstuff.com/ Free online ebook. >Practical Programming for Total Beginners >If you've ever spent hours renaming files or updating hundreds of spreadsheet cells, you know how tedious tasks like these can be. But what if you could have your computer do them for you?
Related: Some anon wants to get started >>26616

/robowaifu/ + /all things monster girl/; its benefits, projects, the uncanny valley, etc. Robowaifu Technician 05/03/2021 (Mon) 14:02:40 No.10259 [Reply] [Last]
Discussing the potential benefits of creating monster girls via robotics instead of 1-to-1 replicas of humans, and what parts can be substituted to get them into production as soon as possible.

Firstly, many of the animal parts that could be substituted for human ones are much simpler to work with than the human appendages, which have a ton of bones and complex joints in the hands and feet. My primary example of this is bird/harpy species (image 1), which have relatively simple structures and much less complexity in the hands and feet. For example, the wings of bird species typically have only around three or four joints total, compared to the twenty-seven in the human hand, while the legs typically have only two or three, compared to the thirty-three in the human foot. As you can guess, having to work with a tenth of the bones and joints, and without opposable thumbs and all that shit, makes things incredibly easier. And while I used bird species as an example, the same argument could be made for MG species with paws and other more simplistic appendages, such as Bogey (image 2) and insect hybrids (image 3).

Secondly, intentionally making her appear non-human circumvents the uncanny valley. It's incredibly difficult to make completely convincing human movement, and one of the simplest ways around that is to suspend the need for it entirely. We as humans are incredibly sensitive to the uncanny valley of our own species; even something as benign as a prosthetic limb can trigger it. But if we create something that we don't expect to move in a human way, it's theoretically possible to avoid dealing with it (for the extremities, anyway), leaving more time to focus on other aspects, such as the face. On the topic of the face, slight substitutions could be made there too (again, for instance, insect girls), to draw attention away from the uncanny valley until technology is advanced enough that the uncanny valley can be eliminated entirely.

These possibilities, while certainly not to the taste of every anon, could be used as a way to accelerate production to the point that it picks up investors and begins to breed competition and innovation among people with wayyyyyyy more money and manpower than us, which I believe should be the end goal for this board as a whole. Any ideas or input is sincerely appreciated.


Edited last time by Chobitsu on 10/01/2023 (Sun) 06:42:25.
91 posts and 68 images omitted.
>>25856 >>26037 >Do the middle pairs of legs need to rotate? Yes they do. Under the category of missing the obvious: to go up and down stairs, or negotiate slopes. The legs need to be able to assume a near-vertical orientation in their operation, or side loads will cause damage, like trying to bend your knee in the wrong plane. So as our girl goes up stairs and her body changes orientation to match the slope of the stairs by up to 45 degrees to the horizontal, her legs need to rotate at the hip to operate properly. So we are up to a minimum of 4 actuators per leg, with 3 leg segments.
>>26037 >Leg rotation relative to hips have front and back sets of legs on a pivot controlled by a worm drive to limit back driving That would restrict independent movement of the legs. For example, in a "hug" with the forelegs they would be at the same level and bang into each other, instead of being at slightly different levels, and our girl couldn't move them up and down separately to rub *or "grope"* anon. If the rear legs are to be able to tie knots they will need to be independently moveable. The middle pair of legs may be able to use this for adjusting their angle for slopes and stairs. The problem then is not being able to easily remove one leg for repair. Removable as opposing pairs? Then we would need more types of spares.
>>26034 >Batteries and CG This also relates to the placement of the legs. I am thinking of a finished spider-girl with eight legs, but she would be able to walk (though not much else) with 4 or 6. So build her with 8 leg mounts and start with 4, adding the rest as time, money, or code development permits. With 6 legs you could have the first two leg pairs and the rearmost pair to start. The 2nd and 4th pairs could be set in position (like the hydraulic legs of heavy equipment) to allow for elevated use of the front pair. For this we might want the CG to be rearward in the thorax. If we wanted to use the rear legs for web-spinning, we would set the first and 3rd pairs of legs with the CG more forward in the thorax. We would need to choose which arrangement is more desirable until we have all eight legs.
>>25707 >>26014 > The conclusion I came to was that you'd need at least 5 separate contractile regions, all independently addressable, AND able to contract dY for the snake-bot to be capable of any generalized sort of locomotion. Yes, that's just what the design I described should do. By having the cable winches attached to several disc vertebrae down-body from each winch, the overlap of each set of cables emulates the overlap of muscles in a snake's body. In this way the contraction of each set of cables can be individually controlled to either stand alone or gang together over a distance of several vertebrae, and the size of each "muscle" group (and therefore the number of muscle groups) can be varied on the fly by the controlling code. The operation of the cable winch groups can be done in sequence along the body to emulate the sinuous movement of a snake, and the winch-and-cable sets in the vertical plane for coiling the body can be used to adjust the areas of contact with the surface the body is moving over. To be clear, each vertebra is attached to several winches by cables, and can be acted upon by one or any number of the cables attached to it. The force of each winch is spread over several vertebrae, but several winches acting on each vertebra combine to increase the force applied. By varying the spacing of the vertebra discs, the diameter of each disc vs. the diameter of the foam body, and the resiliency of the foam used for the body (say 8" dia. vertebrae spaced 6" apart in a 12" dia. body), it should be possible for the coils to be as tight as a few inches. With a larger sized object, like anon's body, our lamia should have no problem exerting a firm grip. As with an arachne, anon should probably have a safe word. Something I didn't think of before: real snakes are able to do a twisting motion with at least the forward part of their bodies, so we may also want to copy that. The 18" body size I mentioned before was at the "hip" area, before the tapering of the snake body. I should have made that clear. >https://pubmed.ncbi.nlm.nih.gov/32271916/ (source analysis) >https://pubmed.ncbi.nlm.nih.gov/36296136/ (proposed robotic design based on analysis) Thanks, I'll take a look at these when I have a chance. >===


Edited last time by Chobitsu on 10/17/2023 (Tue) 10:04:25.
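The sequenced-winch scheme from the lamia post above is easy to prototype as a gait signal: drive each winch group with a phase-shifted sine so a contraction wave travels down the body. A hedged sketch; the group count, wavelength, and amplitude are all assumptions:
[code]
import math

N_GROUPS = 12      # cable-winch "muscle" groups along the body (assumed)
WAVELENGTH = 6.0   # groups per full wave of the body (assumed)

def winch_targets(t, amplitude=0.3):
    """Contraction target per group at time t; phase shifts make a traveling wave."""
    return [amplitude * math.sin(2.0 * math.pi * (t - g / WAVELENGTH))
            for g in range(N_GROUPS)]

for t in (0.0, 0.25, 0.5):
    print([round(x, 2) for x in winch_targets(t)])
[/code]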
>>25838 >find out how many legs of a spider are off the ground at one time as it is moving At most four; there are always at least 4 on the ground at any point in the walking cycle. So each of our spider-girl's legs must be able to support 1/4 of her total weight, plus whatever extra we decide on for contingencies. That could be 1/3 of the total weight (maybe a little more?), in case one leg is disabled, so she could still safely walk to her service area (if the code isn't smart enough to adapt to a broken leg). Or if you are feeling particularly ambitious you might want her strong enough to carry her own weight plus anon's weight (plus a safety margin) for when she needs to carry anon somewhere after demonstrating her web-spinning. In that case only four legs might be walking, so 3 on the ground at all times: a quadruped. Here's a site with a good description of how spider legs work: Leg Uses Hydraulics and Muscle Flex https://asknature.org/strategy/leg-uses-hydraulics-and-muscle-flex/ Here's a paper on soft robotic spider legs cited on that website. Soft, so nobody is using cables and sprockets: Biomimetic Spider Leg Joints: A Review from Biomechanical Research to Compliant Robotic Actuators (downloadable pdf) https://www.mdpi.com/2218-6581/5/3/15#
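Worked out with numbers (the 40 kg robot mass and 60 kg anon mass are assumed figures, purely for illustration):
[code]
mass_kg = 40.0
legs_grounded = 4                              # worst case while walking

per_leg = mass_kg / legs_grounded              # 10.0 kg nominal per leg
one_leg_out = mass_kg / (legs_grounded - 1)    # ~13.3 kg if a leg is disabled
carrying_anon = (mass_kg + 60.0) / 3           # quadruped gait, 3 legs grounded

print(per_leg, round(one_leg_out, 1), round(carrying_anon, 1))
[/code]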
>>26056 >in case one leg is disabled, so she could still safely walk to her service area (if the code isn't smart enough to adapt to a broken leg). Not sure how difficult it would be with servos without some disassembly, but with electrical motors, regardless of type, you can closely approximate their % load by how much current they draw relative to the max allowed, which should not be difficult to track via analog inputs. A simple algorithm would then calculate how many legs may be in the air at any time, and how far the legs neighbouring a strained limb can move to offset its load.
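A minimal sketch of that algorithm; the motor rating, threshold, and leg count are assumptions:
[code]
MAX_CURRENT_A = 2.5        # rated motor current (assumed)
STRAIN_THRESHOLD = 0.8     # fraction of max current that marks a leg "strained"

def leg_loads(currents):
    """Normalize measured per-leg currents (amps) to 0..1 load estimates."""
    return [min(c / MAX_CURRENT_A, 1.0) for c in currents]

def legs_allowed_in_air(currents, total_legs=8, min_grounded=4):
    """How many legs may lift right now, given per-leg current draw."""
    strained = sum(load > STRAIN_THRESHOLD for load in leg_loads(currents))
    # Keep the usual minimum grounded, plus one extra per strained leg
    return max(total_legs - min_grounded - strained, 0)

print(legs_allowed_in_air([0.5, 0.6, 2.4, 0.7, 0.5, 0.6, 0.8, 0.7]))  # -> 3
[/code]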

Open file (485.35 KB 1053x1400 0705060114258_01_Joybot.jpg)
Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155 [Reply] [Last]
What would be a good RW simulator? I guess I'd like to start with some type of PCG (procedural content generation) solution that just builds environments to start with, and build from there up to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors, but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with few modifications. Sort of like the OpenAI Gym concept, but for waifubots.
https://gym.openai.com/
131 posts and 63 images omitted.
>>16648 Excellent find Pareto Frontier. Any advice on running a local instance?
>>16652 It's hard but doable, boils down to making this notebook work https://github.com/brokenmold/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb I don't have time to bring it up rn
>>16645 >>16648 Nice. Thanks Anon.
OpenSimulator thread: >>12066 Unreal Engine file with the MaidCom project mannequin: >>25776
Open file (63.55 KB 765x728 Screenshot_149.png)
>An encyclopedia of concept simulations that computers and humans can learn from. An experimental project in machine learning. https://concepts.jtoy.net >Examples of concepts that are grounded in the physical world: https://blog.jtoy.net/examples-of-concepts-that-are-grounded-in-the-physical-world/ Somewhat related: https://blog.jtoy.net/log-of-machine-learning-work-and-experiments/
