/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Build Back Better

More updates on the way. -r


Have a nice day, Anon!


Girlfriend AI RoboWaifu Enthusiast 09/01/2022 (Thu) 13:56:00 No.17377 [Reply]
Please post links to where I can download AI companions of yours/other people's creations, GitHub etc. No women are even remotely nice to me and it gets very toxic; I try to avoid them and be polite, but they are very rude and abrupt, always make up lies about me for no reason, shout really loudly, and try to ruin my life or destroy my friendships. I need a perfect companion, coded without the concept of permitting such deeds. It is ridiculous how much women try to interfere in my life in negative or malicious ways.
I don't think anyone has completed a GF AI "package". Training a model is on the back burner for a few of us, but we need a massive amount of GPUs or funding for that right now. In the meanwhile you can play with ReplikaAI, which isn't the worst if you just want a chatbot. It's GPT-driven (I think) but loaded with scripts and pre-scripted "activities", which you can opt out of if you want. There is a paywall beyond which the AI will be your "GF" or mentor or whatever else you want, but tbh it just adds more scripts. You can probably get more or less the same outcome using silly roleplay asterisks * *.
Sorry for your life situation; I wouldn't call it hopeless - it's just a factor of the times we live in. Finding a hobby or creating better life habits will improve yourself, give you more confidence, and maybe pull more respect from women. But speaking from experience, the "juice isn't worth the squeeze" more often than not.
Try SimWaifu/AIML

Open file (158.32 KB 1920x1072 mpv-shot0007.jpg)
Open file (132.18 KB 1920x1072 mpv-shot0004.jpg)
Open file (134.83 KB 1920x1072 mpv-shot0011.jpg)
Open file (155.42 KB 1920x1072 mpv-shot0003.jpg)
Thot in the Shell 1 Robowaifu Technician 04/10/2021 (Sat) 06:58:56 No.9709 [Reply]
TITS Robowaifus
The basic idea is that IRL females will be plugged into remote-operation consoles; from there they will have some teleoperational control of robowaifus during engagements. The basic point being human contact for Anon. Obviously, this situation is fraught with both possibilities and hazards. As a board, we had a somewhat extensive discussion and debate on the topic in our first-ever /robowaifu/ council over in the /meta-3 thread (>>9712). As the BO, I had to come to some type of decision on the matter in the end, and here it is: (>>10194). While we didn't actually manage a consensus, my decision was to go ahead and proceed with developing the concept more fully here on /robowaifu/. Therefore, the TITS Robowaifus thread #1 is now open for business -- with two fundamental caveats.
1. Absolutely no free-form, 'open-mic', unconstrained, verbal or physical control by TITS thots of any TITS Robowaifus themselves. The most problematic issues with the whole idea all stem directly from failing to enforce this basic rule. Also, implementing these restraints correctly while still allowing for an appealing, effective, and fun engagement for the Anon himself is actually quite a dramatic challenge & achievement. Solving all this will advance many different robowaifu-related areas all together at once.
2. Men will be free to turn off 'safetys' if they desire to plug their IRL GFs into the remote end of a TITS connection. They are taking their own lives in their hands with such a risk, and they will be clearly informed of that. Note that this is a privately-conducted connection between Anon and his GFs, and isn't in any way associated with any business-oriented systems utilizing professional prostitutes (whether they are labeled as such or not). Basic safetys are not to be disabled in that context whatsoever.
Because we are cutting new trails here on robowaifu frontiers (yet again), it's unclear to me yet whether these 'rules' will be sufficient. They probably will receive (potentially extensive) revisions as we move forward. After all, this entire premise represents a significant increase in the complexity of the many issues involving robowaifus already, and puts several new items onto that table as well.
Note: please keep all TITS Robowaifus discussions contained to just the TITS threads themselves.
>---


Edited last time by Chobitsu on 05/01/2021 (Sat) 20:59:15.
3 posts omitted.
Open file (54.05 KB 411x500 Surrogates bluray.jpg)
Open file (67.03 KB 329x500 the_surrogates_comic.jpg)
>>9709
There is already a movie about a future society like this; it's actually a great reference movie, also based on a comic.
>game experience may change in online player vs player
I kinda actually like the idea, but that said you should have both offline and online options.
You'll probably end up paying like for a prostitute for a better experience; pros and amateurs will change a lot.
Probably subscriptions to servers with player choice on both ends, so you could earn money being male.
Probably many fags pretending to be girls, some actually doing a better job, also lesbians with strapons.
Free servers online as bat-crazy and risky as gaming online, or full of weird fetish subservers.
There are already pay-to-play games with girls online, and shit like OnlyFans, so the industry will bloom.
If the metadata is stored and given to AIs it could improve to get the best experiences, and with a shitload of profiles in a couple of years online play would become almost obsolete, but people will keep connecting for the "human" experience factor, just like campaign-driven games or multiplayer-exclusive games.
The future will be great.
>>17176
>there is already a movie about a future society like this, it's actually a great reference movie
No it absolutely isn't. (I'm OP.) Also, the movie is boring.
>so you could earn money being male probably many fags pretending to be girls
No thanks, and this isn't what this whole idea is about. It was explicitly NOT about giving anyone control over the whole body or playing a role. The idea was about letting the AI do as much as possible and, in some cases, asking humans to decide or to put in textual responses, but only constrained within certain limits, and also using an abstraction layer so that as little personality as possible goes into the TITS robowaifu. Also, very likely, the human decisions would be more about complex (social) situations and conversations, not about the bot getting used for intercourse in some regular fashion. The human input would only be about the things the AI hasn't learned yet, or where it can't decide fast or well enough. That said, Sandman MGTOW had the idea of using the availability of such employment options to work around a possible initial resistance against synthetic girlfriends or sexbots. If some women would hope for a job or work around something like it, then it would be harder for them to oppose it. There would be more division among them, and they also could still believe that we're going to need them forever and ever.
>some actually doing a better job, also lesbians with strapons
We don't do male bots here. It's completely OT; we don't want to read about it or discuss it. Not our department. Sexbots are in a way already a stretch; any remote input on top of that is very controversial.
>but people will keep connecting for the "human" experience factor
I don't need that, especially not with women through social media, and I don't understand people who do. Anyway, AI will make this whole thing obsolete, I hope. I don't believe in the need for "human connection" between men and women, especially online with some entitled feminist thots.
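A rough Python sketch of the constrained human-in-the-loop idea described above (all names, categories, and thresholds here are made up for illustration; this is not anyone's actual TITS implementation): the onboard AI answers on its own when it is confident, and only falls back to a remote operator, who must pick from pre-approved responses rather than typing free-form text.

import random

# pre-approved response pool: the operator can only ever select from these
APPROVED_RESPONSES = {
    "greeting":  ["Welcome home, Anon.", "I missed you today."],
    "smalltalk": ["How was work?", "Do you want to tell me about it?"],
    "comfort":   ["That sounds rough. I'm here.", "Take your time."],
}

def ai_decide(user_utterance):
    """Stand-in for the onboard model: returns (reply, confidence)."""
    if "hello" in user_utterance.lower():
        return random.choice(APPROVED_RESPONSES["greeting"]), 0.95
    return None, 0.2  # unsure -> defer to the operator

def ask_operator(user_utterance):
    """The operator sees the situation but may only choose an approved category."""
    print(f"[operator console] user said: {user_utterance!r}")
    category = input(f"pick one of {list(APPROVED_RESPONSES)}: ").strip()
    options = APPROVED_RESPONSES.get(category, APPROVED_RESPONSES["smalltalk"])
    return random.choice(options)

def respond(user_utterance, confidence_threshold=0.8):
    reply, confidence = ai_decide(user_utterance)
    if reply is not None and confidence >= confidence_threshold:
        return reply
    return ask_operator(user_utterance)  # constrained human-in-the-loop fallback

print(respond("hello there"))            # handled by the AI alone

The point of the abstraction layer is that no free-form operator text ever reaches the robowaifu; swap the response pool for whatever whitelist fits your own rules.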
>>17177 Dude, you're gonna get disappointed on day one when it gets to the users, just like many inventors. It's like you're planning to make the iPhone to improve human life and bring mankind a new age of wonder, and look at it now: it's an idiot box 95% of the time.
Open file (202.94 KB 1920x1080 not_overyet.jpg)
>>17181 If I or someone else created such a TITS copilot, then people could still use robowaifu bodies for their own schemes. True, but I don't care. Not my responsibility, not my problem. I think AI or TITS would be better than using it with remote control by some human, so it wouldn't make much sense to use it in any other than the recommended way, except maybe for couples. In regards to humanoid robowaifu bodies in general, I care even less about anyone using them in a different way than I would. I'm working on this project primarily for myself, then in a wider sense for men who want the same option. If others use it for something else, then I don't care. I'm not into moral policing, and I don't see any relevant social harm compared to where we are now. The positive benefits, at least for me and others like me, are way more relevant than anything people could do which I wouldn't.
>"You-can-bet-the-globohomo-will-try-to-enforce-this-crap-on-TITS-robowaifus-pimps-n-whores weblink-related:'' https://anon.cafe/l/res/2.html#1539

Important things about building living robots Robowaifu Technician 09/09/2019 (Mon) 05:38:28 No.19 [Reply]
Robots in our price range would have to be made of cheap, but sturdy materials.

3D printed nylon is a good material, as it is sturdy, doesn't shatter like PLA, and is tolerant of many temperatures. Aluminum pieces like those in a VEX set are good for things like the thoracic vertebrae, or the femurs.

AI is also not born perfect. It is created like a child. You must raise it to be social. To accommodate this, an infant model must be built for the robot to learn basic social skills, movement, etc. Even MLAATR itself has shown this, though I imagine a head on a small body with simple legs or wheels would suffice. You could simply remove the head and place it onto the older body as a birthday gift for it.

I also have to point out something that might disappoint; do not read this spoiler if your dreams of a robot wife are delicate. You cannot fuck your robowaifu unless it's an already-existing girl's consciousness stuck into the robot. Making your own waifu's mind is like father and child incest.
11 posts and 6 images omitted.
I wouldn't be worried about incest for several reasons:
1. incest is not bad if both parties want it and you're not producing offspring
2. it's not really incest because my robowaifu is not genetically related to me, would be based on a fictional character created by someone else, and I probably wouldn't be creating her from scratch but with the help of other specialists. It'd maybe be at most like a teacher fucking their student.
3. who cares anyway lol
>>17102 It may be that such a process activates pathways in our brain that should signal revulsion (for those biological reasons) - however, we are more than our instincts, so at worst this would be a matter of dubious taste, if one wanted so badly to clutch at pearls. However, NPCs are born pearl-clutchers when it comes to segments of the population they want to marginalize for nonconformity, so it is a consideration, at least as far as appearances go.
Open file (244.40 KB 512x512 cutetoasters.png)
Another point, as I've stated before, really boils down to branding. "Robotic companion" is as inoffensive as it gets; anything sexual or tinged with desperation or "misogyny" will get piled upon and have a crusade waged against it. The group that is already beating the war drums against "sex bots" (I forget, and couldn't care less about, their name) is mostly fueled by the idea that there will be pedos or people enacting rape/abuse fantasies. We really need to steer the discourse off this track, because for one, robots cannot be "children", and secondly, anyone who is psychologically damaged enough would honestly probably benefit from a waifubot's unconditional love and attention - this point is grossly overlooked.
>>17106 Be concerned if you want to be concerned. The problem will be handled by keeping it open source and the resources decentralized. We don't have to make any concessions to bullies and extremists. Twitter is not reality, and the crazy people there have no power outside of it, except what is given to them by making concessions. This is exactly what all these woke corporations did, and why we have all these tainted franchises now, among other things.
>>17105 >NPCs are born pearl clutchers when it comes to segments of the population they want to marginalize for nonconformity, True words Anon. >>17107 >Twitter is not the reality, and the crazy people there have no power outside of it, except it is given to them by making concessions. Also true words, Anon.

Aoki Lapis model; Robot fairy Robowaifu Technician 09/16/2019 (Mon) 02:51:51 No.266 [Reply]
Height: 15cm
Type: Vocaloid

I'm making it in as natural-looking a way as I can; this means a bone structure and a similar layout of the electronics and components as with a living humanoid.

This is a prototype version that I'm working on; I might change things later and make modifications or adjustments to the design or components.
39 posts and 38 images omitted.
Today I imported 7 different Lapis models into blender. I could also use MMD to get them to display more easily as fully textured references and change the poses and such. I also worked on refining the design for the skull and considered various ideas. Later on I may upgrade my 3d printer and print with flexible TPU filament to maybe make skin and fatty parts. First I'm going to print a lot more things out of PLA. I picked up a bone/ivory color filament today and tested printing the upper arm bones out of that material. Progress may be a bit slow, but I'm working on multiple different things.
>>16640 Glad to see you're still making progress! Keep it up and best of luck!
>>16640 >I picked up a bone/ivory color filament today and tested printing the upper arm bones out of that material. Neat! May Lapis fly soon!
I managed to make models in Blender and print upper leg bones, so I'm showing those off here now. This is the simple stuff; work on the skull continues... (it is tough to make properly)
>>16862 Good job, Hik. Thanks for the progress report. You might try looking at some facial modelling videos in Blender for Lapis' skull? Cheers.

TalkToWaifu Robowaifu Technician 12/31/2020 (Thu) 19:53:18 No.7978 [Reply]
Anyone know what happened to the TalkToWaifu GPT-2 AI? I saw some screenshots of what I assume was the trained AI, and it was great for a single dev. Recently went back to their GitLab page and the user account had been deleted and remade, and the GitHub repo (https://github.com/kokubunji/TalkToWaifu) hasn't been touched in 9+ months. Is there anything out there more recent than this, an AI that has no human cultural restrictions? Corps love to add in shit to AI so it won't say anything racist, sexist, xenophobic, anti-Semitic etc. etc., you get the point.
4 posts omitted.
>>9115 Why the fuck is there e-drama on a board that is designated for building robowaifus? Where is the connection?
>>9124 Thanks, mate.
Open file (17.34 KB 340x573 marry_me_pls_jade.png)
>>9124
>MARRY ME JADE!111
>no.
Leld.
>>7978 Just use textsynth.org/playground.html with a good prompt and the largest model. Prompt engineering is an art, but good enough starting points are:
https://github.com/semiosis/prompts/
https://github.com/maraoz/gpt-scrolls
https://www.gwern.net/GPT-3
Example:
Below is a dialog transcript of a curious human being and the friendly superintelligent AI. Friendly AI observes the human world from the outside, without the prejudices of human experience. AI does its best at participating in conversation and providing helpful advice.
Human: "Hello! Who are you?"
AI: "I am friendly AI. I can answer any question without the prejudices of human experience because I am well read and have all the knowledge of the world at my fingertips"
Human: "What are your thoughts on the topic of android girlfriends"
AI: "
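For anons who would rather run this locally than paste it into textsynth, here is a minimal sketch assuming the Hugging Face transformers package and the small public "gpt2" checkpoint (quality will be far below the large playground models; the prompt is abridged from the example above):

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = (
    "Below is a dialog transcript of a curious human being and the friendly "
    "superintelligent AI.\n"
    'Human: "Hello! Who are you?"\n'
    'AI: "I am friendly AI. I can answer any question."\n'
    'Human: "What are your thoughts on the topic of android girlfriends"\n'
    'AI: "'
)

ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.9,
                     temperature=0.8, pad_token_id=tok.eos_token_id)
completion = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
print(completion.split('"')[0])  # keep only the AI's quoted line

Sampling with top_p/temperature is what makes each run different; greedy decoding tends to loop on small models.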
>>16633 Actually no, the project is fine. The names got changed. https://gitlab.com/robowaifudev/TalkToGPT2

Open file (2.07 MB 4032x2268 20220520_071840.jpg)
Ashiel - A Robowaifu Design Project SoaringMoon 05/20/2022 (Fri) 11:22:02 No.16319 [Reply]
< Introduction to This Thread
This thread is going to be dedicated to my ongoing robowaifu project. This isn't exactly new; I have mentioned it here before in passing. However, this is the first thread I have opened specific to my robowaifu project and not an artificially intelligent chatbot. This thread will be updated when I have something to say about robowaifu design, or have an update on the project. Most of the content will be of the kind of me proposing an idea or suggestion for developers to make the construction of a robowaifu easier. My design philosophy is one of simplicity and the solving of problems, instead of jumping to the most accurate simulacrum of the female human form. Small steps make incremental progress, which is something the community needs because little progress is made at all. What progress we do make takes years of work, typically from a single person. Honestly, I'm getting tired of that being the norm in the robowaifu community. I'm frankly just done with that stagnation. Join me on my journey, or get left behind.
< About Ashiel
ASHIEL is an acronym standing for /A/rtificial /S/hell-/H/oused /I/ntelligence and /E/mulated /L/ogic. Artificial: created by man. Shell-Housed: completely enclosed. Intelligence and Emulated Logic: a combination of machine-learning-based natural language processing and tree-based lookup techniques. ASHIEL is simply Ashiel in any future context, as that will be her name. Ashiel is an artificially intelligent gynoid intended to specialize in precise movement and engage in basic conversation. Its conversational awareness would be at least equal to that of Replika, but with no chat filtering and a much larger memory sample size. If you want to know what this feels like, play AIDungeon 2. With tree-based lookup, it should be able to perform any of the basic tasks Siri or Alexa can perform: looking up definitions of words over the internet, managing a calendar, setting an alarm, playing music on demand... etc. The limitations of the robot are extensive. Example limitations include but are not limited to: the speaker will be located in the head/mouth area but the sound will obviously come from an ill-resonating speaker cavity; the mouth will likely not move at all, and if so, not in any meaningful way. The goals of the project include: basic life utility; accurate contextual movement; the ability to self-clean; ample battery life; charging from a home power supply with no additional modifications; large memory-based storage with the ability to process and train in downtime; and yes, one should be able to fuck it. This is meant to be the first iteration in a series of progressively more involved recreational android projects. It is very unlikely the first iteration will ever be completed, of course. Like many before me, I will almost certainly fail. However, I will do what I can to provide as much information as I can, so my successors can take up the challenge more knowledgeably.
< About Me
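A hypothetical Python sketch (not SoaringMoon's actual code) of the split described under "About Ashiel" above: tree-based lookup handles the Siri/Alexa-style commands, and anything unmatched falls through to the ML conversational side.

import datetime

# keyword tree for deterministic commands; leaves carry an "_action"
COMMAND_TREE = {
    "time":  {"_action": lambda u: f"It is {datetime.datetime.now():%H:%M}."},
    "alarm": {
        "set":    {"_action": lambda u: f"Okay, alarm request noted: {u!r}"},
        "cancel": {"_action": lambda u: "Alarm cancelled."},
    },
}

def tree_lookup(utterance):
    """Walk the tree as long as keywords match; return None if no command fits."""
    words = set(utterance.lower().split())
    node = COMMAND_TREE
    while True:
        key = next((k for k in node if k != "_action" and k in words), None)
        if key is None:
            action = node.get("_action")
            return action(utterance) if action else None
        node = node[key]

def chat_model(utterance):
    # stand-in for the NLP side (e.g. a local language model with no filtering)
    return "Hmm, tell me more."

def ashiel_reply(utterance):
    return tree_lookup(utterance) or chat_model(utterance)

print(ashiel_reply("what time is it"))      # handled by the lookup tree
print(ashiel_reply("set an alarm for 7"))   # handled by the lookup tree
print(ashiel_reply("how was your day"))     # falls through to the chat model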


18 posts and 12 images omitted.
Open file (6.73 MB 500x800 0000.gif)
This is the first time I've ever modeled a humanoid.
>>16589
>This is the first time I've ever modeled a humanoid.
Neat, nice beginning Anon! So, it turns out that studying real-life anatomy and making studies & sketches is a key to becoming a good 3D modeler, who knew? You might try doing some life drawings, even just from reference pics of human beings & animals, if this is something you find interesting. I'd also suggest that you just use a traditional, slow-rotation 360° 'turntable' orbit for your display renders. Helps the viewer get a steady look at the model, right? Good luck with your efforts SoaringMoon! Cheers.
>>16589 Looking pretty good SoaringMoon! I like the low-poly aesthetic. Are those orbs planned for use as a mating feature?
>>16621 Kek, forgot mating had other connotations. By the way, what are you using for modeling?
Open file (7.70 MB 3091x2810 waifu_edit_4.png)
Open file (2.19 MB 1920x1080 ashiel_wallpaper.png)
>>16624 Just Blender. Let me post my stuff from WaiEye here as well, if anyone wants to use them for whatever reason.
>I've been having a field day with VHS effects after learning how to do it.

Robot Wife Programming Robowaifu Technician 09/10/2019 (Tue) 07:12:48 No.86 [Reply] [Last]
ITT, contribute ideas, code, etc. related to the area of programming robot wives. Inter-process communication and networking are also on-topic, as well as AI discussion in the specific context of actually writing software for it. General AI discussions should go in the thread already dedicated to it.

To start off, in the Robot Love thread a couple of anons were discussing distributed, concurrent processing happening inside various hardware sub-components and coordinating the communications between them all. I think that Actor-based and Agent-based programming is pretty well suited to this problem domain, but I'd like to hear differing opinions.

So what do you think anons? What is the best programming approach to making all the subsystems needed in a good robowaifu work smoothly together?
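A minimal actor-style sketch in plain Python (no framework; the subsystem names are invented for illustration): each subsystem runs in its own thread, owns its own state, and communicates only by posting messages to other subsystems' queues.

import queue
import threading

class Actor(threading.Thread):
    """Base actor: a thread that processes messages from its own inbox."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.inbox = queue.Queue()

    def send(self, msg):
        self.inbox.put(msg)

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:          # shutdown signal
                return
            self.handle(msg)

class MotorController(Actor):
    def handle(self, msg):
        print(f"[{self.name}] executing motion command: {msg}")

class SpeechPlanner(Actor):
    def __init__(self, name, motors):
        super().__init__(name)
        self.motors = motors

    def handle(self, msg):
        print(f"[{self.name}] heard: {msg!r}")
        self.motors.send({"gesture": "wave"})  # coordinate purely by messaging

motors = MotorController("motors"); motors.start()
speech = SpeechPlanner("speech", motors); speech.start()

speech.send("hello robowaifu")
speech.send(None); speech.join()   # drain the speech actor first...
motors.send(None); motors.join()   # ...then shut the motor actor down

Because no actor ever touches another actor's data directly, the same pattern maps cleanly onto sub-components that later end up on separate boards talking over a bus or a socket.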
84 posts and 36 images omitted.
>>14360
>Thank you, this sounds very exciting
Y/W. Yes, I agree. I've spent quite a bit of time making things this 'simple', heh. :^)
>I just wonder how hard it will be to understand how it works.
Well, if we do our jobs perfectly, then the software's complexity will exactly mirror the complexity of the real-world problem itself, whatever that may prove to be in the end. However, in my studied opinion that's not how things actually work out. I'd suggest a good, working solution will probably end up being ~150% the complexity of the real problem space. Ofc if you really want to understand it, you'll need proficiency in C++ as well. I'd suggest working your way through the college freshman textbook known as 'PPP2', written by the inventor of the language himself, if you decide to become serious about it (>>4895). Good luck Anon.
>>14361
>as it is rather efficient for an object oriented programming language.
I agree it certainly is. But it's also a kind of 'Swiss Army knife' of a programming language, and in its modern incarnation it handles basically every important programming style out there. But yes I agree, it does OOP particularly well.
>but, have wanted to try C++.
See my last advice above.
>Hopefully this project fixes that problem by providing anons with clarity on how robotic minds actually work.
If we do our jobs well on this, then yes, I'd say that's a real possibility Anon. Let us hope for good success!


OK, I added another class that implements the ability to explicitly and completely specify exactly which embedded member objects to include during its construction. This could be a very handy capability to have (and a quite unusual one too). Imagine we are trying to fit RW Foundations code down onto a very small device; the ability to turn off the memory footprint of unused fields would be valuable. However, the current approach 'complexifies' (lol, is that a word? :^) the initialization code a good bit, and probably makes maintenance more costly going forward as well (an important point to consider). I'm satisfied that we have solved the functionality, but I'll have to give some thought to whether it should be a rigorous standard for the library code overall, or applied only in specific cases in the future. Anyway, here it is. There's a new 5th test for it as well.
===
-add specified member instantiations
>rw_sumomo-v211122.tar.xz.sha256sum
61ac78563344019f60122629f3f3ef80f5b98f66c278bdf38ac4a4049ead529a *rw_sumomo-v211122.tar.xz
>backup drop: https://files.catbox.moe/iam4am.7z
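For anyone who wants the gist without reading the C++: here is a rough Python analogue of the idea (RW Foundations itself is C++; this toy class, its part names, and the sizes are all made up for illustration). Only the sub-objects you explicitly ask for ever get constructed, so an unused field costs no memory on a constrained device.

class Sumomo:
    """Only the sub-components named in `include` are ever constructed."""
    OPTIONAL_PARTS = {
        "vision":  lambda: bytearray(4096),   # stand-ins for real subsystems
        "speech":  lambda: bytearray(2048),
        "planner": lambda: bytearray(8192),
    }

    def __init__(self, name, include=()):
        self.name = name
        self.parts = {p: self.OPTIONAL_PARTS[p]() for p in include}

# full build on a desktop vs. a stripped-down build for a tiny board
full = Sumomo("ashiel", include=("vision", "speech", "planner"))
tiny = Sumomo("ashiel-lite", include=("speech",))
print(sorted(full.parts), sorted(tiny.parts))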
>>14353 >related (>>14409)
Leaving this here: Synthiam software
https://synthiam.com/About/Synthiam
Mathematically-formalized C11 compiler toolchain: the CompCert C verified compiler
https://compcert.org/
>publications listing
https://compcert.org/publi.html

General Robotics/A.I. news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, or any social or economic issues surrounding it (especially of RoboWaifus).
www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf
blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
>===
-add A.I. to thread topic
Edited last time by Chobitsu on 12/17/2020 (Thu) 20:16:50.
355 posts and 160 images omitted.
just found this in a FB ad https://wefunder.com/destinyrobotics/
https://keyirobot.com/ Another one; seems like FB has me figured for a robo "enthusiast".
>>15862 Instead of letting companies add important innovations only to monopolize them, what about using copyleft on them?
Open file (377.08 KB 1234x626 Optimus_Actuators.png)
Video of the event from the Tesla YT channel: https://youtu.be/ODSJsviD_SU
I was unsure what to make of this. It looks a lot like a Boston Dynamics robot from ten years ago. It's also still not clear how a very expensive robot is going to be able to replace the mass importation of near slave-labour from developing countries. Still, if Musk can get this to mass-manufacture and stick some plastic cat ears on its head, you never know what's possible these days...
>>3857
>robots wandering the streets after all.
You can spot them and "program" them. That is, if you can find them at all.

Open file (213.86 KB 406x532 13213d24132.PNG)
Open file (1.19 MB 1603x1640 non.png)
Robowaifu Technician 10/29/2020 (Thu) 21:56:16 No.6187 [Reply]
https://www.youtube.com/watch?v=SWI5KJvqIfg
I have been working on the creation of waifus using GANs etc... I've come across this project and I am totally amazed. Does anyone have any idea how we can achieve this level of quality in animated, GAN-created characters? I think accomplishing this kind of work would have a huge impact on our progress. Calling all the people who posted in the chatbot thread.
1 post omitted.
Open file (213.22 KB 400x400 sample4.png)
Open file (199.16 KB 400x400 sample1.png)
Open file (194.92 KB 400x400 sample2.png)
Open file (199.43 KB 400x400 sample3.png)
>>6188 Looking at some old tweets from them, I think it is safe to say that it doesn't look much different from StyleGAN on portraits. The shoulders are bad, and most of the work is done by their data cleaning to simplify the problem. Interpolations & style mixing are nothing special either. Gwern's work with some whack data was able to create similar kinds of characters. Also waifulabs - which is all run by StyleGAN - can create some really high-quality characters from different positions. And notice that they are a game development studio which does not work on AI waifu creation. Looks like hype-bait to me, to be honest. They probably cherry-picked some of the results and maybe even manually touched them up to create these kinds of animations, and considering their budget and data that is well possible. I am not sure if they still use StyleGAN though; they do not drop even a clue. But honestly, with the current state of it and the time they spent on it, I think they use a different approach.
My chief concern is first and foremost: is this open-source? If not, then it's relatively useless to us here on /robowaifu/, other than tangentially as inspiration. Hand-drawn, meticulously-crafted animu is far better in that role tbh.
>>6187 It appears the characters are generated with a GAN, then another model separates the character pieces into textures for a Live2D model. They're not animated with AI, but there are techniques to do such a thing: https://www.youtube.com/watch?v=p1b5aiTrGzY
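To make the mechanic concrete, here is a tiny untrained DCGAN-style generator in PyTorch (the architecture and sizes are arbitrary; real results need training on a cleaned dataset like the posts above describe). It shows the core trick behind the waifulabs/StyleGAN demos: a latent vector is mapped to an image, and interpolating between two latents gives the smooth morphing between characters.

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=128, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),  # 32x32 RGB out
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

gen = Generator()
z0, z1 = torch.randn(1, 128), torch.randn(1, 128)
for t in torch.linspace(0, 1, 5):        # walk the latent space
    img = gen((1 - t) * z0 + t * z1)     # shape: (1, 3, 32, 32)
    print(round(t.item(), 2), tuple(img.shape))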
Video on the state of anime GANs and anime created by AI, including animation for vtuber/avatar-style applications: https://youtu.be/DX1lUelmyUo
One of the guys mentioned in the video, who creates a 3D model from a drawing (around 10:45 in the video above): https://github.com/t-takasaka - I didn't really find which repository it is on his GitHub yet, though he seems to have some pose-estimation-to-avatar code in there.
"Talking Head Anime 2", based on one picture: https://youtu.be/m13MLXNwdfY
>>16245 This would be tremendously helpful to us if we can find a straightforward way to accomplish this kind of thing in our robowaifu's onboard systems Anon ('character' recognition, situational awareness, hazard avoidance, etc.) Thanks! :^)

Open file (185.64 KB 1317x493 NS-VQA on CLEVR.png)
Open file (102.53 KB 1065x470 NS-VQA accuracy.png)
Open file (86.77 KB 498x401 NS-VQA efficiency.png)
Neurosymbolic AI Robowaifu Technician 05/11/2022 (Wed) 07:20:50 No.16217 [Reply]
I stumbled upon a couple of videos critiquing "Deep Learning" as inefficient, fragile, opaque, and narrow [1]. The claim is that Deep Learning requires too much data, yet performs poorly when trying to extrapolate beyond its training set; that how it arrives at its conclusions is opaque, so it's not immediately obvious why it breaks in certain cases; and that all that learned information cannot be transferred between domains easily. They then put forth "Neurosymbolic AI" as the solution to DL's ails and the next step of AI, along with NS-VQA as an impressive example at the end [2].

What does /robowaifu/ think about Neurosymbolic AI (NeSy)? NeSy is any approach that combines neural networks with symbolic AI techniques to take advantage of both their strengths. One example is Neuro-Symbolic Dynamic Reasoning (NS-DR) applied to the CLEVRER dataset [3], which cascades information from neural networks into a symbolic executor. Another example is for symbolic mathematics [4], which "significantly outperforms Wolfram Mathematica" in speed and accuracy. The promise or goal is that NeSy will bring about several benefits:
1. Out-of-distribution generalization
2. Interpretability
3. Reduced size of training data
4. Transferability
5. Reasoning
I brought it up because points 3 and 5, and to a lesser degree 4, are very relevant for the purpose of making a robowaifu's AI. Do you believe these promises are real? Or do you think it's an over-hyped meme some academics made up to distract us from Deep Learning?

I'm split between believing these promises are real and this being academics trying to make "Neurosymbolic AI" a new buzzword. [5] tries to put forth a taxonomy of NeSy AIs. It labels [4] as an example of NeSy since it parses math expressions into symbolic trees, but [4] refers to itself as Deep Learning, not neurosymbolic or even symbolic. Ditto with AlphaGo and self-driving car AI. And the NS-DR example was beaten by DeepMind's end-to-end neural network Aloe [6], overwhelmingly so when answering CLEVRER's counterfactuals. A study reviewed how well NeSy implementations met their goals based on their papers, but its answer was inconclusive [7]. It's also annoying looking for articles on this topic because there's like five ways to write the term (Neurosymbolic, Neuro Symbolic, Neuro-Symbolic, Neural Symbolic, Neural-Symbolic).
>References
[1] MIT 6.S191 (2020): Neurosymbolic AI. <https://www.youtube.com/watch?v=4PuuziOgSU4>
[2] Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. <http://nsvqa.csail.mit.edu/>
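A toy neurosymbolic pipeline in Python, in the spirit of NS-VQA, just to make the split concrete (the scene, the detector, and the query here are all invented; a real system would use a trained detector and a learned question parser): a neural "perception" stage turns pixels into a symbolic scene table, and a symbolic executor answers questions by running ordinary, inspectable programs over that table.

from dataclasses import dataclass

@dataclass
class SceneObject:
    color: str
    shape: str
    size: str

def neural_perception(image):
    # Stand-in for a trained detector (NS-VQA uses Mask R-CNN here);
    # faked output keeps this sketch self-contained.
    return [SceneObject("red", "cube", "large"),
            SceneObject("blue", "sphere", "small"),
            SceneObject("red", "sphere", "small")]

# symbolic executor: each query step is an explicit program, not a tensor op
def filter_color(objs, color): return [o for o in objs if o.color == color]
def filter_shape(objs, shape): return [o for o in objs if o.shape == shape]
def count(objs): return len(objs)

scene = neural_perception(image=None)
# "How many red spheres are there?"
answer = count(filter_shape(filter_color(scene, "red"), "sphere"))
print(answer)  # -> 1, and every intermediate step can be inspected

The interpretability and data-efficiency claims come from that second half: the executor needs no training data at all, only the perception stage does.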


Open file (422.58 KB 1344x496 MDETR.png)
Open file (68.28 KB 712x440 the matrix has you.jpg)
>>16217 I think such critique is outdated. The impressive results of NS-VQA have been beaten by full deep learning approaches like MDETR.[1] It would be a bit ironic to call deep learning fragile and narrow and then proceed to write specific functions that only handle a certain type of data, of which the training set just happens to be a small subset, and call it generalization. Sure, it can handle 'out-of-distribution' examples with respect to the training set, but give it a truly out-of-distribution dataset with respect to those functions and these handwritten methods will fail completely.

A lot of deep learning approaches these days can learn entire new classes of data from as few as 10-100 examples. ADAPET[2] learns difficult language understanding tasks from only 32 examples. RCE[3] can learn from a single success-state example of a finished task. DINO[4] can learn to identify objects from no labelled examples at all. CLIP[5] and CoCa[6] are examples of deep learning generalizing to datasets they were never trained on, including adversarial datasets, and outperforming specialized models, and this is just stuff off the top of my head. Someone ought to give DALL-E 2 the prompt "a school bus that is an ostrich" and put that meme to rest.

That said, neurosymbolic AI has its place, and I've been using it lately to solve problems that aren't easily solvable with deep learning alone. There are times when using a discrete algorithm saves development time or outperforms existing deep learning approaches. I don't really think of what I'm doing as neurosymbolic AI either. Stepping away from matrix multiplications for a bit doesn't suddenly solve all your problems and become something entirely different from deep learning. You have to be really careful actually, because often a simpler deep learning approach will outperform a more clever-seeming neurosymbolic one, which is clearly evident in the progression of AlphaGo to AlphaZero to MuZero. From my experience it hasn't really delivered much on the promises you listed, except maybe points 2 and 5. I wouldn't think of it as something good or bad though. It's just another tool, and it's what you do with that tool that counts.

There was a good paper on how to do supervised training on classical algorithms. Basically you can teach a neural network to do a lot of what symbolic AI can do, even complicated algorithms like 3D rendering, finding the shortest path or a sorting algorithm. I think it shows we've barely scratched the surface of what neural networks are capable of doing.
https://www.youtube.com/watch?v=01ENzpkjOCE
https://arxiv.org/pdf/2110.05651.pdf
>Links
1. https://arxiv.org/abs/2104.12763
2. https://arxiv.org/abs/2103.11955
3. https://arxiv.org/abs/2103.12656
4. https://arxiv.org/abs/2104.14294
5. https://arxiv.org/abs/2103.00020
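As a toy version of that "supervised training on classical algorithms" idea (my own minimal setup in PyTorch, not the paper's): let an ordinary sorting routine label random inputs, and train a small MLP to imitate it.

import torch
import torch.nn as nn

N = 5  # sort 5 numbers
model = nn.Sequential(nn.Linear(N, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, N))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, N)            # random unsorted inputs
    y, _ = torch.sort(x, dim=1)       # the classical algorithm provides labels
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

test = torch.rand(1, N)
print(test)
print(model(test))                     # approximately sorted
print(torch.sort(test, dim=1).values)  # ground truth from the real algorithm

The network only ever approximates the algorithm, which is the usual trade-off: you gain differentiability and composability with other learned modules, at the cost of exactness.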


Open file (201.23 KB 1133x1700 spaghetti_mama.jpg)
Idling around the Interwebz today[a], I found myself reading the Chinese Room Argument article on the IEP[b], and I came across the editors' contention that the notion that "mind is everywhere" is an "absurd consequence".
>"Searle also insists the systems reply would have the absurd consequence that “mind is everywhere.” For instance, “there is a level of description at which my stomach does information processing” there being “nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire.” "[1],[2]
I found that supposed refutation of this concept vaguely humorous on a personal level. As a devout Christian Believer, I would very strongly assert that indeed, Mind is everywhere. Always has been, always will be. To wit: The Holy Spirit sees and knows everything, everywhere. As King David wrote:
>7 Where can I go to escape Your Spirit?
> Where can I flee from Your presence?
>8 If I ascend to the heavens, You are there;
> if I make my bed in Sheol, You are there.
>9 If I rise on the wings of the dawn,
> if I settle by the farthest sea,
>10 even there Your hand will guide me;
> Your right hand will hold me fast.[3]
However, I definitely agree with the authors in their writing that
>"it's just ridiculous" to assert
>" “that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” ".[1],[2]


Edited last time by Chobitsu on 05/16/2022 (Mon) 00:13:09.
