/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


Robot Wife Programming Robowaifu Technician 09/10/2019 (Tue) 07:12:48 No.86 [Reply] [Last]
ITT, contribute ideas, code, etc. related to the area of programming robot wives. Inter-process communication and networking are also on-topic, as is AI discussion in the specific context of actually writing software for it. General AI discussions should go in the thread already dedicated to it.

To start off: in the Robot Love thread a couple of anons were discussing distributed, concurrent processing running inside the various hardware sub-components, and how to coordinate the communications between them all. I think Actor-based and Agent-based programming are pretty well suited to this problem domain, but I'd like to hear differing opinions.

So what do you think anons? What is the best programming approach to making all the subsystems needed in a good robowaifu work smoothly together?
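
For illustration, here is a minimal sketch of the actor idea from the OP (my own toy code, not anyone's project; the names Actor, vision, motion are placeholders): each subsystem owns a mailbox and a private worker thread, and subsystems share no state, they only post messages to each other. Agent frameworks layer goals and behaviors on top, but this mailbox-plus-thread core is the same.

// Minimal actor sketch: each subsystem is an actor with its own mailbox and thread.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Actor {
public:
    explicit Actor(std::string name) : name_(std::move(name)), worker_([this] { run(); }) {}
    ~Actor() { post(""); worker_.join(); }            // empty message = shut down

    void post(std::string msg) {                      // called by other actors/threads
        { std::lock_guard<std::mutex> lk(m_); box_.push(std::move(msg)); }
        cv_.notify_one();
    }

private:
    void run() {                                      // the actor's private event loop
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !box_.empty(); });
            std::string msg = std::move(box_.front()); box_.pop();
            lk.unlock();
            if (msg.empty()) return;
            std::cout << name_ << " handled: " << msg << '\n';
        }
    }

    std::string name_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> box_;
    std::thread worker_;                               // declared last: starts after the mailbox exists
};

int main() {
    Actor vision("vision"), motion("motion");
    vision.post("frame 42: obstacle ahead");
    motion.post("stop");                               // no shared state, only messages
}
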
84 posts and 36 images omitted.
>>14360
>Thank you, this sounds very exciting
Y/W. Yes, I agree. I've spent quite a bit of time making things this 'simple', heh. :^)
>I just wonder how hard it will be to understand how it works.
Well, if we do our jobs perfectly, then the software's complexity will exactly mirror the complexity of the real-world problem itself, whatever that may prove to be in the end. However, in my studied opinion that's not how things actually work out. I'd suggest a good, working solution will probably end up being ~150% of the complexity of the real problemspace? Ofc if you really want to understand it, you'll need proficiency in C++ as well. I'd suggest working your way through the college freshman textbook known as 'PPP2', written by the inventor of the language himself, if you decide to become serious about it (>>4895). Good luck Anon.
>>14361
>as it is rather efficient for an object oriented programming language.
I agree it certainly is. But it's also a kind of 'Swiss Army Knife' of a programming language, and in its modern incarnation it handles basically every important programming style out there. But yes, I agree, it does OOP particularly well.
>but, have wanted to try C++.
See my last advice above.
>Hopefully this project fixes that problem by providing anons with clarity on how robotic minds actually work.
If we do our jobs well on this, then yes, I'd say that's a real possibility Anon. Let us hope for good success!


OK, I added another class that implements the ability to explicitly and completely specify exactly which embedded member objects to include during its construction. This could be a very handy capability to have (and a quite unusual one too). Imagine we are trying to fit RW Foundations code down onto a very small device: the ability to turn off the memory footprint of unused fields would be valuable. However, the current approach 'complexifies' lol is that a word? :^) the initialization code a good bit, and probably makes maintenance more costly going forward as well (an important point to consider). I'm satisfied that we have solved the functionality, but I'll have to give some thought to whether it should be a rigorous standard for the library code overall, or applied only in specific cases in the future. Anyway, here it is. There's a new 5th test for it as well.
===
-add specified member instantiations
>rw_sumomo-v211122.tar.xz.sha256sum
61ac78563344019f60122629f3f3ef80f5b98f66c278bdf38ac4a4049ead529a *rw_sumomo-v211122.tar.xz
>backup drop: https://files.catbox.moe/iam4am.7z
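
For readers following along, a minimal sketch of one standard C++ way to get the 'compile out unused embedded members' effect described above. This is my own illustration with made-up subsystem names (Vision, Hearing, Waifu), not the RW Foundations code, and the real library's approach may well differ.

// Sketch: choose which embedded members exist at compile time, so unused
// subsystems contribute no storage on a small device.
// Requires C++20 for [[no_unique_address]]; the idea works without it, just less compactly.
#include <iostream>
#include <type_traits>

struct Vision  { float fov = 90.0f; };        // placeholder subsystem
struct Hearing { int sample_rate = 16000; };  // placeholder subsystem
struct Disabled {};                           // empty stand-in for an omitted member

template <bool WithVision, bool WithHearing>
struct Waifu {
    [[no_unique_address]] std::conditional_t<WithVision,  Vision,  Disabled> vision;
    [[no_unique_address]] std::conditional_t<WithHearing, Hearing, Disabled> hearing;
};

int main() {
    std::cout << sizeof(Waifu<true, false>) << " vs "   // hearing compiled out
              << sizeof(Waifu<true, true>)  << '\n';    // full build is larger
}
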
>>14353 >related (>>14409)
Leaving this here: Synthiam software
https://synthiam.com/About/Synthiam
A mathematically-formalized C compiler toolchain: the CompCert verified C compiler
https://compcert.org/
>publications listing
https://compcert.org/publi.html

General Robotics/A.I. news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, or any social or economic issues surrounding them (especially as they concern RoboWaifus).
www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf
blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
>===
-add A.I. to thread topic
Edited last time by Chobitsu on 12/17/2020 (Thu) 20:16:50.
355 posts and 160 images omitted.
Just found this in an FB ad: https://wefunder.com/destinyrobotics/
https://keyirobot.com/ Another one; seems like FB has figured me out as a robo "enthusiast".
>>15862 Instead of letting companies add important innovations only to monopolize them, what about using copyleft on them?
Open file (377.08 KB 1234x626 Optimus_Actuators.png)
Video of the event from the Tesla YT channel: https://youtu.be/ODSJsviD_SU I was unsure what to make of this. It looks a lot like a Boston Dynamics robot from ten years ago. It's also still not clear how a very expensive robot is going to be able to replace the mass-importation of near slave-labour from developing countries. Still, if Musk can get this to mass-manufacture and stick some plastic cat ears on its head, you never know what's possible these days...
>>3857
>robots wandering the streets after all.
You can spot them and "program" them. That is, if you can find them.

Open file (213.86 KB 406x532 13213d24132.PNG)
Open file (1.19 MB 1603x1640 non.png)
Robowaifu Technician 10/29/2020 (Thu) 21:56:16 No.6187 [Reply]
https://www.youtube.com/watch?v=SWI5KJvqIfg
I have been working on creating waifus using GANs etc... I came across this project and I am totally amazed. Does anyone have any idea how we can achieve this quality of animation with GAN-created characters? I think accomplishing this kind of work would have a huge impact on our progress. Calling all the people who posted in the chatbot thread.
1 post omitted.
Open file (213.22 KB 400x400 sample4.png)
Open file (199.16 KB 400x400 sample1.png)
Open file (194.92 KB 400x400 sample2.png)
Open file (199.43 KB 400x400 sample3.png)
>>6188 Looking at some of their old tweets, I think it is safe to say this doesn't look much different from StyleGAN on portraits. The shoulders are bad, & most of the work is done by their data cleaning to simplify the problem. Interps & style mixing are nothing special either. Gwern's work with some whack data was able to create similar kinds of characters. Also, waifulabs - which is all run by StyleGAN - can create some really high-quality characters from different positions. And notice that they are a game development studio, not one that works on AI waifu creation. Looks like hype-bait to me, to be honest. They probably cherry-picked some of the results and maybe even manually tweaked them to create these kinds of animations, and considering their budget and data that is well possible. I'm not sure if they still use StyleGAN though; they don't drop even a clue. But honestly, given the current state of it and the time they've spent on it, I think they use a different approach.
My chief concern is first and foremost: is this open-source? If not, then it's relatively useless to us here on /robowaifu/, other than tangentially as inspiration. Hand-drawn, meticulously-crafted animu is far better in that role tbh.
>>6187 It appears the characters are generated with a GAN then another model separates the character pieces into textures for a Live2D model. They're not animated with AI, but there are techniques to do such a thing: https://www.youtube.com/watch?v=p1b5aiTrGzY
Video on the state of anime GANs and anime created by AI, including animation for vtuber/avatar-style applications: https://youtu.be/DX1lUelmyUo One of the guys mentioned in the video creates a 3D model from a drawing, around 10:45 in the video above: https://github.com/t-takasaka - I haven't found which repository it is on his GitHub yet, though he does seem to have some pose-estimation-to-avatar work there. Other examples in the video might be more interesting for anons trying to build a virtual waifu. "Talking Head Anime 2", based on one picture: https://youtu.be/m13MLXNwdfY
>>16245 This would be tremendously helpful to us if we can find a straightforward way to accomplish this kind of thing in our robowaifu's onboard systems Anon ('character' recognition, situational awareness, hazard avoidance, etc.) Thanks! :^)

Open file (185.64 KB 1317x493 NS-VQA on CLEVR.png)
Open file (102.53 KB 1065x470 NS-VQA accuracy.png)
Open file (86.77 KB 498x401 NS-VQA efficiency.png)
Neurosymbolic AI Robowaifu Technician 05/11/2022 (Wed) 07:20:50 No.16217 [Reply]
I stumbled upon a couple of videos critiquing "Deep Learning" as inefficient, fragile, opaque, and narrow [1]. The claim is that Deep Learning requires too much data yet performs poorly when extrapolating beyond its training set, that how it arrives at its conclusions is opaque (so it's not immediately obvious why it breaks in certain cases), and that all that learned information cannot easily be transferred between domains. They then put forth "Neurosymbolic AI" as the solution to DL's ills and the next step of AI, with NS-VQA as an impressive example at the end [2].
What does /robowaifu/ think about Neurosymbolic AI (NeSy)? NeSy is any approach that combines neural networks with symbolic AI techniques to take advantage of both their strengths. One example is Neuro-Symbolic Dynamic Reasoning (NS-DR) applied to the CLEVRER dataset [3], which cascades information from neural networks into a symbolic executor. Another example is for symbolic mathematics [4], which "significantly outperforms Wolfram Mathematica" in speed and accuracy. The promise is that NeSy will bring several benefits:
1. Out-of-distribution generalization
2. Interpretability
3. Reduced size of training data
4. Transferability
5. Reasoning
I brought it up because points 3 and 5, and to a lesser degree 4, are very relevant to building a robowaifu's AI. Do you believe these promises are real? Or do you think it's an over-hyped meme some academics made up to distract us from Deep Learning? I'm split between believing the promises and suspecting academics are trying to make "Neurosymbolic AI" a new buzzword. [5] tries to put forth a taxonomy of NeSy AIs. It labels [4] as an example of NeSy since it parses math expressions into symbolic trees, but [4] refers to itself as Deep Learning, not neurosymbolic or even symbolic. Ditto for AlphaGo and self-driving-car AI. And the NS-DR example was beaten by DeepMind's end-to-end neural network Aloe [6], overwhelmingly so when answering CLEVRER's counterfactuals. A study reviewed how well NeSy implementations met their stated goals, but its answer was inconclusive [7]. It's also annoying to search for articles on this topic because there are about five ways to write the term (Neurosymbolic, Neuro Symbolic, Neuro-Symbolic, Neural Symbolic, Neural-Symbolic).
>References
[1] MIT 6.S191 (2020): Neurosymbolic AI. <https://www.youtube.com/watch?v=4PuuziOgSU4>
[2] Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. <http://nsvqa.csail.mit.edu/>
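
To make the "cascade" idea concrete, here is a toy sketch of my own (not NS-DR or NS-VQA code): a stand-in for the neural perception stage emits a symbolic scene, and a symbolic executor answers a query over those symbols. In the real systems the parser is a trained network and the executor runs a whole program, but the division of labor is the same.

// Toy neurosymbolic cascade: perception (stubbed) produces symbols,
// a symbolic executor answers a query over them.
#include <iostream>
#include <string>
#include <vector>

struct Object { std::string color, shape; };    // symbols a detector might emit

// Stand-in for the neural stage: in NS-VQA this would be a trained scene parser.
std::vector<Object> parse_scene() {
    return { {"red", "cube"}, {"blue", "sphere"}, {"red", "sphere"} };
}

// Symbolic executor primitive: count objects of a given color.
int count_color(const std::vector<Object>& scene, const std::string& color) {
    int n = 0;
    for (const auto& o : scene) if (o.color == color) ++n;
    return n;
}

int main() {
    auto scene = parse_scene();                        // neural -> symbols
    std::cout << count_color(scene, "red") << '\n';    // symbols -> answer (prints 2)
}
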


Open file (422.58 KB 1344x496 MDETR.png)
Open file (68.28 KB 712x440 the matrix has you.jpg)
>>16217 I think such critique is outdated. The impressive results of NS-VQA have been beaten by fully deep-learning approaches like MDETR.[1] It would be a bit ironic to call deep learning fragile and narrow and then proceed to write specific functions that only handle a certain type of data, of which the training set just happens to be a small subset, and call it generalization. Sure, it can handle 'out-of-distribution' examples with respect to the training set, but give it a truly out-of-distribution dataset with respect to the functions and these handwritten methods will fail completely.
A lot of deep learning approaches these days can learn entire new classes of data from as few as 10-100 examples. ADAPET[2] learns difficult language understanding tasks from only 32 examples. RCE[3] can learn from a single success-state example of a finished task. DINO[4] can learn to identify objects from no labelled examples at all. CLIP[5] and CoCa[6] are examples of deep learning generalizing to datasets they were never trained on, including adversarial datasets, and outperforming specialized models, and this is just stuff off the top of my head. Someone ought to give DALL-E 2 the prompt "a school bus that is an ostrich" and put that meme to rest.
That said, neurosymbolic AI has its place, and I've been using it lately to solve problems that aren't easily solvable with deep learning alone. There are times when using a discrete algorithm saves development time or outperforms existing deep learning approaches. I don't really think of what I'm doing as neurosymbolic AI either. Stepping away from matrix multiplications for a bit doesn't suddenly solve all your problems and become something entirely different from deep learning. You have to be really careful, actually, because often a simpler deep learning approach will outperform a more clever-seeming neurosymbolic one, which is clearly evident in the progression from AlphaGo to AlphaZero to MuZero. From my experience it hasn't really delivered much on the promises you listed, except maybe points 2 and 5. I wouldn't think of it as something good or bad though. It's just another tool, and it's what you do with that tool that counts.
There was a good paper on how to do supervised training on classical algorithms. Basically you can teach a neural network to do a lot of what symbolic AI can do, even complicated algorithms like 3D rendering, finding the shortest path, or sorting. I think it shows we've barely scratched the surface of what neural networks are capable of doing.
https://www.youtube.com/watch?v=01ENzpkjOCE
https://arxiv.org/pdf/2110.05651.pdf
>Links
1. https://arxiv.org/abs/2104.12763
2. https://arxiv.org/abs/2103.11955
3. https://arxiv.org/abs/2103.12656
4. https://arxiv.org/abs/2104.14294
5. https://arxiv.org/abs/2103.00020
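
For anons wondering what that contrastive loss actually looks like, the usual symmetric form is below (my notation, not copied from the CLIP or CoCa papers; CoCa additionally adds an autoregressive captioning loss on top). With matched image/text embeddings u_i, v_i for a batch of N pairs and temperature tau:

\mathcal{L}_{\mathrm{con}} = -\frac{1}{2N}\sum_{i=1}^{N}\left[\log\frac{\exp(u_i \cdot v_i/\tau)}{\sum_{j=1}^{N}\exp(u_i \cdot v_j/\tau)} + \log\frac{\exp(u_i \cdot v_i/\tau)}{\sum_{j=1}^{N}\exp(u_j \cdot v_i/\tau)}\right]

In words: each image embedding is pushed toward its own caption's embedding and away from every other caption in the batch, and vice versa.
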


Open file (201.23 KB 1133x1700 spaghetti_mama.jpg)
Idling around the Interwebz today[a], I found myself reading the Chinese Room Argument article on the IEP[b], and came across the editors' contention that the notion that "mind is everywhere" is an "absurd consequence".
>"Searle also insists the systems reply would have the absurd consequence that “mind is everywhere.” For instance, “there is a level of description at which my stomach does information processing” there being “nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire.” "[1],[2]
I found that supposed refutation of this concept vaguely humorous on a personal level. As a devout Christian Believer, I would very strongly assert that indeed, Mind is everywhere. Always has been, always will be. To wit: The Holy Spirit sees and knows everything, everywhere. As King David wrote:
>7 Where can I go to escape Your Spirit?
> Where can I flee from Your presence?
>8 If I ascend to the heavens, You are there;
> if I make my bed in Sheol, You are there.
>9 If I rise on the wings of the dawn,
> if I settle by the farthest sea,
>10 even there Your hand will guide me;
> Your right hand will hold me fast.[3]
However, I definitely agree with the authors in their writing that
>"it's just ridiculous"
to assert
>" “that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” ".[1],[2]


Edited last time by Chobitsu on 05/16/2022 (Mon) 00:13:09.

Reploid thread Robowaifu Technician 02/28/2022 (Mon) 04:13:32 No.15349 [Reply]
A few people thought it'd be a good idea to start a thread for Reploid builds, so here we are! To kick things off, here's a little progress on painting my own RiCO. It's just spray paint so it doesn't look stellar, and I screwed up a couple of parts. All the blue trim paint needs doing as well. I don't care if it's perfect, I just want to get her done. Not to mention anything adjacent to "art" or "craftsmanship" is beyond me, mostly due to lack of patience: I don't want to spend 1000s of hours grinding away with a paintbrush when I could be designing up cool (to me...) robotic mechanisms, for instance. I bet your bottom dollar none of my projects will be winning awards in the fit-and-finish department. Can't wait to see what happens with Pandora and whatever other Reploid projects people might be working on.
36 posts and 28 images omitted.
>>16021 >Great idea, mind if I borrow this technique for MaidCom? Please do. Check out how I did the rest of her eyes as well, perhaps it could scale up.
>>16029 It does scale up really well. Though I will innovate upon it for Pandora.
Edited last time by AllieDev on 04/27/2022 (Wed) 01:32:08.
>>15999 I'm really impressed with this. I have a lot of questions, I'll scan the rest of this thread to make sure they aren't already answered first.
Curious whether there have been any changes this week with your wonderful robowaifu, RiCOdev?
>>16243 RiCO will be on hold for the near future; your friendly neighborhood reploid builder has got bigger fish to fry. I'll still keep an eye out here to answer questions etc.

Open file (1.04 MB 2999x1298 main-results-large.png)
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents Robowaifu Technician 01/21/2022 (Fri) 15:35:19 No.15047 [Reply]
https://wenlong.page/language-planner/
https://github.com/huangwl18/language-planner
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g. "make breakfast"), to a chosen set of actionable steps (e.g. "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models.
I think it's worth a whole thread. If not, move it to the appropriate section.
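
A rough sketch of the "semantically translate plan steps to admissible actions" idea from the abstract, in my own toy form: the generated step and the admissible actions are compared by cosine similarity over sentence embeddings and the closest admissible action wins. In the paper the embeddings come from a learned sentence encoder; here they are hard-coded placeholders and all names are made up.

// Sketch: map a generated plan step to the nearest admissible action by
// cosine similarity over (placeholder) sentence embeddings.
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Action { std::string text; std::vector<float> emb; };

float cosine(const std::vector<float>& a, const std::vector<float>& b) {
    float dot = 0, na = 0, nb = 0;
    for (size_t i = 0; i < a.size(); ++i) { dot += a[i]*b[i]; na += a[i]*a[i]; nb += b[i]*b[i]; }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-9f);
}

// Return the admissible action whose embedding is closest to the generated step's.
const Action& translate(const std::vector<float>& step_emb, const std::vector<Action>& admissible) {
    const Action* best = &admissible.front();
    float best_sim = -1.0f;
    for (const auto& a : admissible)
        if (float s = cosine(step_emb, a.emb); s > best_sim) { best_sim = s; best = &a; }
    return *best;
}

int main() {
    std::vector<Action> admissible = { {"open fridge", {0.9f, 0.1f}}, {"walk to kitchen", {0.1f, 0.9f}} };
    std::vector<float> generated_step = {0.8f, 0.2f};   // e.g. embedding of "open the refrigerator"
    std::cout << translate(generated_step, admissible).text << '\n';   // prints "open fridge"
}
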
Update on this topic, this time by DeepMind: https://www.youtube.com/watch?v=L9kA8nSJdYw Slowly they're realizing what we've all been thinking about, though so far only in virtual spaces.
Open file (251.80 KB 1665x621 calm outline.png)
Open file (172.15 KB 859x785 calm planning.png)
Open file (85.32 KB 827x627 calm success.png)
Open file (337.52 KB 1523x725 coca.png)
Open file (119.53 KB 901x670 coca performance.png)
>>16197
>Human: show me how you polish the baluster
>Robowaifu: say no more goshujin-sama
There was a really interesting paper recently on context-aware language models that can do planning and achieve better-than-human performance on a flight-booking task, using a model the same size as GPT-2 small: https://sea-snell.github.io/CALM_LM_site/ It shows a lot of promise for using Monte Carlo tree search to do planning with language models, since the new method needs only 5 dialog generation attempts to outdo human performance without doing a tree search at all.
Also, a huge breakthrough in zero-shot multi-modal learning has been made that completely blows CLIP and SotA specialized models to pieces by using a simple-to-understand contrastive and captioning loss (CoCa) that can leverage existing models: https://arxiv.org/pdf/2205.01917.pdf This is going to be huge for embodied agents. It's a lot like the similarity measure used between sentence embeddings produced by the RoBERTa model in OP's paper to ensure the translated generated instructions are executable, except it does it between images and sentences.
And there's another paper worth mentioning, doing transfer learning from a language model trained on Wikipedia to an RL agent (on continuous control and games) that outperforms training from scratch: https://arxiv.org/pdf/2201.12122.pdf
It seems we're headed toward a leap forward soon with goal-oriented embodied agents using language models.
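
As a much-simplified illustration of the "few generation attempts, keep the one the value model likes best" idea (this is plain sample-and-rerank, not the paper's actual tree search, and every function here is a stub with made-up behavior):

// Sketch: generate K candidate replies, keep the one a task-value model scores highest.
#include <iostream>
#include <string>
#include <vector>

// Stub for a learned value model estimating task progress; here it just
// rewards mentioning a flight, purely as a placeholder.
float value_of(const std::string& reply) {
    return reply.find("flight") != std::string::npos ? 1.0f : 0.0f;
}

std::string best_of(const std::vector<std::string>& candidates) {
    std::string best = candidates.front();
    for (const auto& c : candidates)
        if (value_of(c) > value_of(best)) best = c;
    return best;
}

int main() {
    std::vector<std::string> candidates = {
        "Hello there.", "Book the 9am flight to Boston.", "I'd like a window seat." };
    std::cout << best_of(candidates) << '\n';   // picks the flight-booking reply
}
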

Who wouldn't hug a kiwi. Robowaifu Technician 09/11/2019 (Wed) 01:58:04 No.104 [Reply] [Last]
Important good news for Kiwi! (>>14757)
===
Howdy, I'm planning on manufacturing and selling companion kiwi robot girls. Why kiwi girls? Robot arms are hard. What would you prefer to hug, a 1 m or 1.5 m robot? Blue and white, or red and black? She'll be ultra-light, around 10 to 15 kg max, with self-balancing functionality. Cost depends on size: 1000 for 1 m or 1500 for 1.5 m. I'm but one waifugineer, so I'm going to set up around one model to self-produce. If things go well, costs can come down and more models can be made. Hopefully with arms someday.
>===
-add news crosslink
Edited last time by Chobitsu on 12/23/2021 (Thu) 06:43:53.
107 posts and 75 images omitted.
Open file (49.51 KB 510x386 AscentoLegMechanism.png)
>>15374 Completion requires an end point. Perfection can be said to have no end, as improvements can be made ad infinitum to fit various ideals. I want to restart again to inch closer to my perfect leg, but that's a potentially endless cycle. Focusing on getting something done is better. The knee functions nearly identically to Ascento's leg, just mass-optimized and legally distinct. Not pictured is the use of rubber bands for gravity compensation, as a lower-cost alternative to Ascento's springs.
>>15414 What a cool design Kiwi, thanks.
>>15414 Isn't it a problem to have a spring, or even worse a rubber band, loaded all the time while the joint sits in a non-neutral position? I like the idea with the spring; now I'm wondering if I could use that as well, but maybe with a motor controlling the spring's neutral position.
Open file (102.78 KB 800x600 019.jpg)
>>15746 Your suspicion is understandable. Springs and rubber bands are meant to experience stressed states for prolonged periods. It is true that some springs will suffer from fatigue under prolonged or extreme stress; all you need to do is ensure your design does not put too much strain on the elastic element. For us, rubber bands are easy to design around and are used frequently in DIY RC cars and robotics. (Picrel is a Lego RC car which uses rubber bands for power transmission and suspension, shown for the sake of clarity; note that all of its rubber bands are in a constant state of stress.)
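
A back-of-envelope sizing check for this kind of gravity compensation (my own rule of thumb, not Kiwi's figures): with link mass m, distance l_c from the joint to the link's center of mass, band stiffness k, band stretch \Delta x, and moment arm r of the band about the joint,

\tau_{\mathrm{grav}}(\theta) = m\,g\,\ell_c\cos\theta, \qquad \tau_{\mathrm{band}}(\theta) \approx k\,\Delta x(\theta)\,r

Pick the band so that k\,\Delta x\,r is on the order of m\,g\,\ell_c at the worst-case angle (where \cos\theta is largest), while keeping \Delta x well below the band's rated elongation so it doesn't fatigue quickly.
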
>>15748 That's kinda cool looking Kiwi. I think the wide variety of elastic bands broadly available, and their low-cost in general make them a natural fit for our goals here on /robowaifu/.

Elfdroid Sophie Dev Thread 2 Robowaifu Enthusiast 03/26/2021 (Fri) 19:51:19 No.9216 [Reply] [Last]
The end of an era...(>>14744)
===
The saga of the Elfdroid-pattern Robowaifu continues! Previous (1st) dev thread starts here >>4787
At the moment I am working to upgrade Sophie's eye mechanism with proper animatronics. I have confirmed that I'm able to build and program the original mechanism so that the eyes and eyelids move, but that was the easy part. Now I have to make this already 'compact' Nilheim Mechatronics design even more compact so that it can fit snugly inside of Sophie's head.
One big problem I can see coming is building in clearance between her eyeballs, eyelids and eye sockets so that everything can move fully and smoothly. I already had to use Vaseline on the eyeballs of the first attempt because the tolerances were so small. If the eyelids are recessed too deep into her face, then she looks like a lizard-creature with a nictitating membrane. But if the eyelids are pushed too far forward then she just looks bug-eyed and ridiculous. There is a middle ground which still looks robotic and fairly inhuman, but not too bad (besides, a certain degree of inhuman is what I'm aiming for, hence I made her an elf).
Links to file repositories below.
http://www.mediafire.com/folder/nz5yjfckzzivz/Robowaifu_Resources


Edited last time by Chobitsu on 12/23/2021 (Thu) 06:51:08.
346 posts and 175 images omitted.
Open file (201.29 KB 1160x773 Russia put-in Ukraine.jpg)
I now know how to use a multimeter and have confirmed that my buck converter is correctly limiting voltage to the servos. I've thrown out a bunch of fried micro-servos that were a lost cause (kept a few parts for spares). Now, onto the more pressing subject:
To Mr. Vladimir Putin,
I appreciate that your desk in the Kremlin is very well polished and shiny. In fact, if I wasn't working on building a robot I would be polishing my desk and downgrading to Windows XP so my setup could be more like yours. However, at present you are causing me a problem. Because electricity prices in my country have doubled due to you ordering the gas pipelines closed, I can no longer 3D print large things because it's too expensive. Although, Mr. Putin, I realise it's not all your fault. You see, I told my government that they should've focused less on feminism, jigaboos and faggots and more on building nuclear power stations, but they wouldn't listen. So now all of our gas power plants are out of gas and all of our energy firms are going bust. So please could you kindly sort out your business with Ukraine so the rest of us can get on with building robots?
Kind regards,
SophieDev
>>15207 POTD TOP LOL
<ywn a shiny desk full of XP in Soviet Russia.
I'm glad you're sorting out your servos/power systems. Hopefully the situation will improve before too long, Anon. I'm sure you will figure things out as you go along. I pray for you SophieDev, indeed for us all. Godspeed.
>===
BTW Anon, your thread is nearly at the autosage bump limit. I'd suggest you begin the #3 thread soon.
Edited last time by Chobitsu on 02/14/2022 (Mon) 03:04:20.
Thought I'd print something small and cheap but still useful: servo connector locks. Do you have keyless servo connectors that keep coming undone when your robot moves about? Slip some of these locks over the connection point and your robot's days of sudden-onset flaccid paralysis will be over! Obviously though, if your wires aren't long enough to accommodate your robot's range of movement and you have these servo connector locks on, then the servo wires are likely to get yanked out completely (or pull something off your robot), since the connectors can no longer slip out easily.
>>15211 That's great, but BLDCs and ESCs mostly use other connectors; I only see these on the small servos. And in those cases they need to go into a breadboard, for which I use male-to-male breadboard cables. I'm mostly using XT-60 connectors and banana plugs, and try to get motors with the right connectors. These are very cheap on AliExpress. Make sure to get male and female and the right size, though, for both the plugs and the connectors. You only need one connector and the other side can use plugs. The connectors seem to be mostly yellow, and the plugs without plastic are golden (brass). On AliExpress they also have little screw terminals for cables, which are also very cheap and fit into a breadboard. I'm not sure about the name; DG-301 is written on the side and they're blue (others are green).
Oh hey, it's me from the other thread. Guess I was wrong haha. Excellent job you did with the documentation and everything, SophieDev. You're clearly the 100x dev around these parts, 'cause you really did the work of like 100 people.

Open file (304.39 KB 1200x801 02.jpeg)
Open file (524.02 KB 1024x682 03.jpg)
Open file (987.46 KB 2560x1918 05.jpeg)
/robowaifu/meta-4: Rugged Mountains on the Shore Robowaifu Technician 09/09/2021 (Thu) 22:39:33 No.12974 [Reply] [Last]
/meta & QTDDTOT
Note: Latest version of /robowaifu/ JSON archives available is v220117 Jan 2022 https://files.catbox.moe/l5vl37.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.
Note: Final version of BUMP available is v0.2g (>>14866)
>---
Mini-FAQ
>A few hand-picked posts on various topics
-Why is keeping mass (weight) low so important? (>>4313)


Edited last time by Chobitsu on 01/17/2022 (Mon) 09:22:09.
354 posts and 122 images omitted.
>>15395 Sorry about that mistake.
New Thread New Thread New Thread >>15434 >>15434 >>15434 >>15434 >>15434 New Thread New Thread New Thread
>>13018 > (>>16395, >>16433 >Headpat Waifus -related) >=== -add original-use crosspost
Edited last time by Chobitsu on 05/24/2022 (Tue) 07:00:58.
>>15323
>anyone here familiar with ESP32 that would know why I2C works on an older version of ESP-IDF but not the current version?
Not that I'm familiar, but looking around a little: "...ESP32-S2’s internal pull-ups are in the range of tens of kOhm, which is, in most cases, insufficient for use as I2C pull-ups. Users are advised to use external pull-ups with values described in the I2C specification. For help with calculating the resistor values see TI Application Note https://www.ti.com/lit/an/slva689/slva689.pdf ..." and https://learn.adafruit.com/adafruit-metro-esp32-s2/esp32-s2-bugs-and-limitations "...I2C at 100 kHz bus frequency runs slowly
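
For reference, the usual pull-up bounds from the I2C spec (my summary of the math behind that TI note; double-check SLVA689 itself before soldering anything):

R_{p,\min} = \frac{V_{DD} - V_{OL(\max)}}{I_{OL}}, \qquad R_{p,\max} = \frac{t_r}{0.8473\,C_b}

For example, at V_DD = 3.3 V, V_OL(max) = 0.4 V, I_OL = 3 mA, bus capacitance C_b = 100 pF and rise time t_r = 1000 ns (100 kHz standard mode), this gives roughly 1 kOhm to 12 kOhm, so the common 4.7 kOhm or 10 kOhm external pull-ups land inside the window.
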



Open file (1.12 MB 2391x2701 20210710_233134.jpg)
Open Simple Robot Maid (OSRM) Robowaifu Technician 07/11/2021 (Sun) 06:40:52 No.11446 [Reply] [Last]
Basic design for an open-source, low-cost robowaifu maid. Currently attempting to make a maid that looks like Ilulu from Miss Kobayashi's Dragon Maid. Right now she's an RC car with two-servo steering. Will share designs when they're a tad better. The ultimate goal is a cute dragon maid waifu that rolls around and gently listens to you while holding things for you.
103 posts and 56 images omitted.
>>13895 Thanks! Sorry for my recent absence. I'd like to continue work on this on the robowaifu.club until it can be determined what happened to the BO/Chobitsu or if this place is dead in the water.
>>14081 I'll make a new thread over there and all updates will be made over there until moderation returns.
>>14082 Posted the new thread
Open file (712.91 KB 724x1024 r.png)
This is exactly what I want. A beautiful dedicated maid filled with love and affection
>>14093 Hope you are doing well, MeidoDev. It would be nice to see what you're up to lately. BTW, that waifu is wonderful. I hope you make her someday! :^)
