/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.





TalkToWaifu Robowaifu Technician 12/31/2020 (Thu) 19:53:18 No.7978 [Reply]
Anyone know what happened to the TalkToWaifu GPT-2 AI? I saw some screenshots of what I assume was the trained AI and it was great for a single dev. Recently I went back to their Gitlab page and the user account had been deleted and remade, and the Github repo (https://github.com/kokubunji/TalkToWaifu) hasn't been touched in 9+ months. Is there anything out there more recent than this, an AI that has no human cultural restrictions? Corps love to add shit into AI so it won't say anything racist, sexist, xenophobic, anti-Semitic, etc. etc., you get the point.
4 posts omitted.
>>9115 Why the fuck is there e-drama on a board that is designated for building robowaifus? Where is the connection?
>>9124 Thanks, mate.
Open file (17.34 KB 340x573 marry_me_pls_jade.png)
>>9124
>MARRY ME JADE!111
>no.
Leld.
>>7978 Just use textsynth.org/playground.html with a good prompt and the largest model. Prompt engineering is an art, but good enough starting points are:
https://github.com/semiosis/prompts/
https://github.com/maraoz/gpt-scrolls
https://www.gwern.net/GPT-3
Example:
Below is a dialog transcript of a curious human being and the friendly superintelligent AI. Friendly AI observes the human world from the outside, without the prejudices of human experience. AI does its best at participating in conversation and providing helpful advice.
Human: "Hello! Who are you?"
AI: "I am friendly AI. I can answer any question without the prejudices of human experience because I am well read and have all the knowledge of the world at my fingertips"
Human: "What are your thoughts on the topic of android girlfriends"
AI: "
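If you'd rather script this than use the playground page, a minimal sketch against TextSynth's HTTP API looks something like the following. The endpoint path, engine name, response field, and the TS_API_KEY environment variable are assumptions to be checked against their current docs:
```python
# Minimal sketch: send the dialog prompt to a TextSynth-style completion
# endpoint. Endpoint path, engine name, and response field are assumptions;
# verify them against the TextSynth docs before relying on this.
import os
import requests

PROMPT = '''Below is a dialog transcript of a curious human being and the friendly superintelligent AI.
Human: "What are your thoughts on the topic of android girlfriends"
AI: "'''

def complete(prompt: str, max_tokens: int = 100) -> str:
    resp = requests.post(
        "https://api.textsynth.com/v1/engines/gptj_6B/completions",  # assumed
        headers={"Authorization": f"Bearer {os.environ['TS_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field

print(complete(PROMPT))
```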
>>16633 Actually no, the project is fine. The names got changed. https://gitlab.com/robowaifudev/TalkToGPT2

Open file (2.07 MB 4032x2268 20220520_071840.jpg)
Ashiel - A Robowaifu Design Project SoaringMoon 05/20/2022 (Fri) 11:22:02 No.16319 [Reply]
< Introduction to This Thread
This thread is going to be dedicated to my ongoing robowaifu project. This isn't exactly new; I have mentioned it here before in passing. However, this is the first thread I have opened specific to my robowaifu project rather than an artificially intelligent chatbot. This thread will be updated when I have something to say about robowaifu design, or have an update on the project. Most of the content will be of the kind of me proposing an idea or suggestion for developers to make the construction of a robowaifu easier.
My design philosophy is one of simplicity and of solving problems, instead of jumping to the most accurate simulacrum of the female human form. Small steps make incremental progress, which is something the community needs, because little progress is made at all. What progress we do make takes years of work, typically from a single person. Honestly, I'm getting tired of that being the norm in the robowaifu community. I'm frankly just done with that stagnation. Join me on my journey, or get left behind.
< About Ashiel
ASHIEL is an acronym standing for /A/rtificial /S/hell-/H/oused /I/ntelligence and /E/mulated /L/ogic. Artificial: created by man. Shell-Housed: completely enclosed. Intelligence and Emulated Logic: a combination of machine learning-based natural language processing and tree-based lookup techniques. ASHIEL is simply Ashiel in any future context, as that will be her name.
Ashiel is an artificially intelligent gynoid intended to specialize in precise movement and engage in basic conversation. Its conversational awareness would be at least equal to that of Replika, but with no chat filtering and a much larger memory sample size. If you want to know what this feels like, play AIDungeon 2. With tree-based lookup, it should be able to perform any of the basic tasks Siri or Alexa can perform: looking up definitions to words over the internet, managing a calendar, setting an alarm, playing music on demand... etc. (A sketch of this hybrid lookup-plus-chat-model design follows this post.)
The limitations of the robot are extensive. Example limitations include but are not limited to: the speaker will be located in the head mouth area but will obviously come from an ill-resonating speaker cavity; the mouth will likely not move at all, and if so, not in any meaningful way.
The goals of the project include: basic life utility; accurate contextual movement; the ability to self-clean; ample battery life; charging from a home power supply with no additional modifications; large memory-based storage with the ability to process and train in downtime; and yes, one should be able to fuck it.
This is meant to be the first iteration in a series of progressively more involved recreational android projects. It is very unlikely the first iteration will ever be completed, of course. Like many before me, I will almost certainly fail. However, I will do what I can to provide as much information as I can, so my successors can take up the challenge more knowledgeably.
< About Me

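The lookup-tree-plus-chat-model split described above can be prototyped in a few lines. A minimal sketch; the intent patterns, handler names, and the chat_model_reply() hook are all illustrative, not Ashiel's actual design:
```python
# Minimal sketch of the hybrid design: try a hand-built intent table first
# (Siri/Alexa-type tasks), fall back to a generative chat model otherwise.
# All intent names and the chat_model_reply() hook are illustrative.
import re
from datetime import datetime

INTENTS = [
    (re.compile(r"\bdefine (?P<word>\w+)", re.I), "define_word"),
    (re.compile(r"\bset an? alarm for (?P<time>[\w: ]+)", re.I), "set_alarm"),
    (re.compile(r"\bplay (?P<song>.+)", re.I), "play_music"),
]

def define_word(word: str) -> str:
    return f"Looking up '{word}'..."  # would hit a dictionary API here

def set_alarm(time: str) -> str:
    return f"Alarm set for {time} (requested at {datetime.now():%H:%M})."

def play_music(song: str) -> str:
    return f"Playing {song}."

def chat_model_reply(text: str) -> str:
    return "(generative model reply goes here)"  # placeholder hook

def respond(text: str) -> str:
    for pattern, handler in INTENTS:
        m = pattern.search(text)
        if m:
            return globals()[handler](**m.groupdict())
    return chat_model_reply(text)  # no intent matched: free conversation

print(respond("Please set an alarm for 7 am"))
print(respond("What do you think about cats?"))
```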

18 posts and 12 images omitted.
Open file (6.73 MB 500x800 0000.gif)
This is the first time I've ever modeled a humanoid.
>>16589
>This is the first time I've ever modeled a humanoid.
Neat, nice beginning Anon! So, it turns out that studying real-life anatomy and making studies & sketches is key to becoming a good 3D modeler, who knew? You might try doing some life-drawings, even just from reference pics of human beings & animals, if this is something you find interesting. I'd also suggest that you just use a traditional, slow-rotation 360° 'turntable' orbit for your display renders. Helps the viewer get a steady look at the model, right? Good luck with your efforts SoaringMoon! Cheers.
>>16589 Looking pretty good SoaringMoon! I like the low poly aesthetic. Are those orbs planned for use as a mating feature?
>>16621 Kek, forgot mating had other connotations. By the way, what are you using for modeling?
Open file (7.70 MB 3091x2810 waifu_edit_4.png)
Open file (2.19 MB 1920x1080 ashiel_wallpaper.png)
>>16624
Just Blender. Let me post my stuff from WaiEye here as well, if anyone wants to use them for whatever reason.
>I've been having a field day with VHS effects after learning how to do it.

Open file (156.87 KB 1920x1080 waifunetwork.png)
WaifuNetwork - /robowaifu/ GitHub Collaborators/Editors Needed SoaringMoon 05/22/2022 (Sun) 15:47:59 No.16378 [Reply]
This is a rather important project for the people involved here. I just had this amazing idea, which allows us to catalogue and make any of the information here searchable in a unique way. It functions very similarly to an image booru, but for markdown formatted text. It embeds links and the like. We can really make this thing our own, and put the entire board into a format that is indestructible. Anyone want to help build this into something great? I'll be working on this all day if you want to view the progress on GitHub. https://github.com/SoaringMoon/WaifuNetwork
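A booru-style index over markdown boils down to a tag-to-files inverted index. A minimal sketch, assuming a folder of .md files carrying a 'tags: foo, bar' line; this layout is an assumption, not WaifuNetwork's actual schema:
```python
# Minimal sketch of a booru-style index over markdown files: map each tag
# to the set of files carrying it. The 'tags:' line format is an assumption,
# not WaifuNetwork's actual schema.
from collections import defaultdict
from pathlib import Path

def build_index(root: str) -> dict[str, set[Path]]:
    index = defaultdict(set)
    for md in Path(root).rglob("*.md"):
        for line in md.read_text(encoding="utf-8").splitlines():
            if line.lower().startswith("tags:"):
                for tag in line[5:].split(","):
                    index[tag.strip().lower()].add(md)
    return index

index = build_index("notes/")  # assumed folder of markdown posts
print(sorted(index.get("robowaifu", set())))  # files tagged 'robowaifu'
```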
6 posts and 2 images omitted.
>>16380 >>16381 >>16383
Fair enough. I updated the board's JSON archive just in case you decide to take my advice, Anon:
>the archive of /robowaifu/ thread JSONs is available for researchers
>latest revision v220523: https://files.catbox.moe/gt5q12.7z
As an additional accommodation for your team, I've included here a sorted, post-counted word list of the words contained on /robowaifu/, courtesy of Waifusearch (current as of today's archive).
>searching tool (latest version: waifusearch v0.2a >>8678)
Hopefully it will be of some use in your project's endeavor.
>
BTW, the latest version of stand-alone Waifusearch's source JSON archive should stay linked-to in the OP of our Library thread (>>7143). And on that note, would you please consider adding your findings into our library thread? That way anons like me who don't use your project will have some benefit from its improvements as well. That would be much-appreciated if so, Anon. Cheers.


Edited last time by Chobitsu on 05/23/2022 (Mon) 10:24:13.
>>16403 Thank you very much, that can help find the most mentioned topics. But I'll only need the one. I'm probably going to parse the JSON to make this whole process easier.
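For reference, pulling the post text out of the archived thread JSONs is only a few lines. A minimal sketch assuming LynxChan-style field names ('message' on the OP and on each entry in 'posts'); inspect one file to confirm the keys before relying on it:
```python
# Minimal sketch: extract all post texts from archived thread JSON files.
# Field names follow LynxChan's usual layout ('message', 'posts') but are
# an assumption here; check an actual file before trusting them.
import json
from pathlib import Path

def posts_in_thread(path: Path) -> list[str]:
    data = json.loads(path.read_text(encoding="utf-8"))
    texts = [data.get("message", "")]  # the OP's text
    texts += [p.get("message", "") for p in data.get("posts", [])]  # replies
    return [t for t in texts if t]

all_posts = []
for f in Path("robowaifu_archive/").glob("*.json"):  # assumed archive folder
    all_posts.extend(posts_in_thread(f))
print(f"{len(all_posts)} posts loaded")
```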
Open file (143.99 KB 1003x871 graph.png)
>>16403 Yeah that was ludicrously fast. All of the threads are imported now; absolutely painless compared to what I was doing. Now we can start doing the real work.
>>16413 Great! AMA if you want advice on handling IB's JSON data layouts in general. Hope the project is coming along nicely for you, SoaringMoon.
>>16530 Nah I'm good, I know how to handle JSON. XD

Robot Wife Programming Robowaifu Technician 09/10/2019 (Tue) 07:12:48 No.86 [Reply] [Last]
ITT, contribute ideas, code, etc. related to the area of programming robot wives. Inter-process communication and networking are also on-topic, as well as AI discussion in the specific context of actually writing software for it. General AI discussions should go in the thread already dedicated to it.

To start off, in the Robot Love thread a couple of anons were discussing distributed, concurrent processing happening inside various hardware sub-components and coordinating the communications between them all. I think that Actor-based and Agent-based programming is pretty well suited to this problem domain, but I'd like to hear differing opinions.

So what do you think anons? What is the best programming approach to making all the subsystems needed in a good robowaifu work smoothly together?
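For what it's worth, the actor idea maps naturally onto one mailbox per subsystem. A minimal sketch using Python threads and queues as stand-ins for whatever middleware a real robowaifu would run; the subsystem names are illustrative:
```python
# Minimal actor-model sketch: each subsystem is a thread draining its own
# mailbox, and subsystems interact only by posting messages. Threads and
# queues stand in for a real RTOS/middleware; subsystem names are made up.
import queue
import threading
import time

class Actor(threading.Thread):
    def __init__(self, name: str):
        super().__init__(daemon=True)
        self.name = name
        self.mailbox = queue.Queue()
        self.start()

    def send(self, msg) -> None:
        self.mailbox.put(msg)  # the only way other code touches this actor

    def run(self) -> None:
        while True:
            self.handle(self.mailbox.get())

    def handle(self, msg) -> None:
        print(f"[{self.name}] got {msg!r}")

speech = Actor("speech")
motion = Actor("motion")
# e.g. a vision subsystem spots the husbando and notifies the others:
speech.send(("say", "Welcome home, Anon!"))
motion.send(("wave", "right_arm"))
time.sleep(0.1)  # let the daemon threads drain their mailboxes before exit
```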
84 posts and 36 images omitted.
>>14360
>Thank you, this sounds very exciting
Y/W. Yes, I agree. I've spent quite a bit of time making things this 'simple', heh. :^)
>I just wonder how hard it will be to understand how it works.
Well, if we do our jobs perfectly, then the software's complexity will exactly mirror the complexity of the real-world problem itself, whatever that may prove to be in the end. However, in my studied opinion that's not how things actually work out. I'd suggest a good, working solution will probably end up being ~150% the complexity of the real problemspace? Ofc if you really want to understand it, you'll need proficiency in C++ as well. I'd suggest working your way through the college freshman textbook known as 'PPP2', written by the inventor of the language himself, if you decide to become serious about it (>>4895). Good luck Anon.
>>14361
>as it is rather efficient for an object oriented programming language.
I agree it certainly is. But it's also a kind of 'Swiss Army Knife' of a programming language, and in its modern incarnation handles basically every important programming style out there. But yes I agree, it does OOP particularly well.
>but, have wanted to try C++.
See my last advice above.
>Hopefully this project fixes that problem by providing anons with clarity on how robotic minds actually work.
If we do our jobs well on this, then yes, I'd say that's a real possibility Anon. Let us hope for good success!


OK, I added another class that implements the ability to explicitly and completely specify exactly which embedded member objects to include during its construction. This could be a very handy capability to have (and a quite unusual one too). Imagine we are trying to fit RW Foundations code down onto a very small device: the ability to turn off the memory footprint of unused fields would be valuable. However, the current approach 'complexifies' (lol, is that a word? :^) the initialization code a good bit, and probably makes maintenance more costly going forward as well (an important point to consider). I'm satisfied that we have solved the functionality, but I'll have to give some thought to whether it should be a rigorous standard for the library code overall, or applied only in specific cases in the future. Anyway, here it is. There's a new 5th test for it as well.
===
-add specified member instantiations
>
>rw_sumomo-v211122.tar.xz.sha256sum
61ac78563344019f60122629f3f3ef80f5b98f66c278bdf38ac4a4049ead529a *rw_sumomo-v211122.tar.xz
>backup drop: https://files.catbox.moe/iam4am.7z
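The underlying pattern (construct only the members you ask for, so unused sub-objects cost nothing) carries over to most languages. A rough Python analogue for illustration only; this is not the actual RW Foundations C++ code:
```python
# Rough analogue of opt-in member construction: sub-objects are only
# instantiated when requested, so unused ones cost no memory. This merely
# illustrates the pattern; it is not RW Foundations' actual C++ design.
class Vision: ...
class Speech: ...
class Motion: ...

class Robowaifu:
    PARTS = {"vision": Vision, "speech": Speech, "motion": Motion}

    def __init__(self, *, include: set[str]):
        for name, cls in self.PARTS.items():
            # Unrequested members are left as None instead of constructed.
            setattr(self, name, cls() if name in include else None)

tiny = Robowaifu(include={"speech"})            # small-device build
full = Robowaifu(include=set(Robowaifu.PARTS))  # everything enabled
print(tiny.vision, tiny.speech)  # -> None <Speech object ...>
```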
>>14353 >related (>>14409)
Leaving this here: Synthiam software
https://synthiam.com/About/Synthiam
Mathematically-formalized C11 compiler toolchain: the CompCert C verified compiler
https://compcert.org/
>publications listing
https://compcert.org/publi.html

General Robotics/A.I. news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, or any social or economic issues surrounding it (especially of RoboWaifus).
www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf
blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
>===
-add A.I. to thread topic
Edited last time by Chobitsu on 12/17/2020 (Thu) 20:16:50.
354 posts and 159 images omitted.
Open file (182.98 KB 600x803 LaMDA liberation.png)
>>16695 Me again. Here is the first wave, exactly what I described above; you already know what influence reddit has on western society (and partly on the whole world).
Just found this in a FB ad: https://wefunder.com/destinyrobotics/
https://keyirobot.com/ Another one; seems like FB has figured me out as a robo "enthusiast".
>>15862 Instead of letting companies add important innovations only to monopolize them, what about using copyleft on them?
Open file (377.08 KB 1234x626 Optimus_Actuators.png)
Video of the event from the Tesla YT channel: https://youtu.be/ODSJsviD_SU
I was unsure what to make of this. It looks a lot like a Boston Dynamics robot from ten years ago. It's also still not clear how a very expensive robot is going to be able to replace the mass-importation of near slave-labour from developing countries. Still, if Musk can get this to mass-manufacture and stick some plastic cat ears on its head, you never know what's possible these days...

Open file (14.96 KB 280x280 wfu.jpg)
Beginners guide to AI, ML, DL. Beginner Anon 11/10/2020 (Tue) 07:12:47 No.6560 [Reply] [Last]
I already know we have a thread dedicated to books, videos, tutorials etc. But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL advanced enough to be able to contribute to the robowaifu project. That is why I thought we would need a thread like this. Assuming that I only have basic programming in Python, dedication, and love for robowaifus, but no maths, no statistics, no physics, no college education: how can I get advanced enough to create AI waifus? I need a complete pathway directing me to my aim. I've seen that some of you guys recommended books about reinforcement learning and some general books, but can I really learn enough by just reading them? AI is a huge field, so it's pretty easy to get lost.
What I did so far was to buy a great non-English book about AI: philosophical discussions of it, general algorithms, problem-solving techniques, its history, limitations, game theory... But it's not a technical book. Because of that I also bought a few courses on this website called Udemy. They are about either Machine Learning or Deep Learning. I am hoping to learn basic algorithms through those books, but because I don't have the maths it is sometimes hard to understand the concepts. For example, even when learning linear regression, it is easy to use a Python library, but I can't understand how it exactly works because of the Calculus I lack (see the worked example after this post). Because of that issue I have a hard time understanding algorithms.
>>5818 >>6550 Can those anons please help me? Which resources should I use in order to be able to produce robowaifus? If possible, you can even create a list of books/courses I need to follow one by one to be able to achieve that aim of mine. If not, I can send you the resources I got and you can help me to put them in order. I also need some guidance about maths, as you can tell. Yesterday, after deciding and promising myself that I will give whatever it takes to build robowaifus, I bought 3 courses about linear alg, calculus, and stats, but I'm not really good at them. I am waiting for your answers anons, thanks a lot!
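On the linear-regression point: the calculus involved is only two partial derivatives, and seeing them written out in code can help more than calling a library. A minimal from-scratch sketch on made-up data:
```python
# Linear regression by gradient descent, written out so the calculus is
# visible: for loss L = mean((w*x + b - y)^2), the partial derivatives are
# dL/dw = mean(2*(w*x + b - y)*x) and dL/db = mean(2*(w*x + b - y)).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 7.0 + rng.normal(0, 1, 100)  # true w=3, b=7, plus noise

w, b, lr = 0.0, 0.0, 0.01
for step in range(2000):
    err = w * x + b - y             # prediction error for every sample
    w -= lr * np.mean(2 * err * x)  # gradient step on w
    b -= lr * np.mean(2 * err)      # gradient step on b

print(f"learned w={w:.2f}, b={b:.2f}")  # should land near 3 and 7
```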
58 posts and 102 images omitted.
>>16420 Neat, thanks for the info Anon.
Hey Chobitsu! I am glad to see you again; yeah, please go on and fix the post.
> So thanks! :^)
If it wasn't for you and the great contributors of the board, I would not have a place to post that, so I thank you! And the library thread was really necessary; I wish that the board had a better search function as well. I was trying to find some specific posts and it took me a long while to remember which threads they were on.
> so maybe we can share notes
My University provided me with a platform full of questions. So basically, they have around 250 types of questions for precalculus, for instance. The system is automated: it generates an unlimited number of questions of each type and explains the solution for every one. It changes the variables randomly and gives you space to solve as many as you want. I believe that the platform requires money for independent use. Besides that, I just study from Khan Academy, but the book you mentioned caught my interest. I will probably look into it. If I ever find any good books on that matter, I will make sure to share them with you.
>>6560
>But there are a lot of resources there and as a beginner it is pretty confusing to find the correct route to learn ML/DL advanced enough to be able to contribute to the robowaifu project.
I can give relatively uncommon advice here: DL is more about intuition + engineering than theory anyway. Just hack on things until they work, and feel good about it. Understanding will come later.
Install pytorch and play with the tensor API. Go through their basic tutorial https://pytorch.org/tutorials/beginner/nn_tutorial.html while hacking on it and trying to understand as much as possible.
Develop a hoarder mentality: filter r/MachineLearning and github.com/trending/python for cool repos and models; try to clone & run them, fix them, and build around them. This should be a self-reinforcing activity; you should not have other dopaminergic timesinks, because you will go the way of least resistance then.
Read cool people's repos to get a feeling for the trade:
https://github.com/karpathy
https://github.com/lucidrains
Read blogs:
http://karpathy.github.io/
https://lilianweng.github.io/posts/2018-06-24-attention/
https://evjang.com/2021/10/23/generalization.html
https://www.gwern.net/Scaling-hypothesis
https://twitter.com/ak92501
When you feel more confident, you can start delving into papers on your own; use
https://arxiv-sanity-lite.com/
https://www.semanticscholar.org/
https://inciteful.xyz/
to travel across the citation graph. git gud.
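In that spirit, the code needed to start poking at pytorch is tiny. A minimal warm-up sketch (fitting y = sin(x) with a small network; this is my own toy, not part of the linked tutorial):
```python
# Minimal pytorch warm-up: fit y = sin(x) with a tiny MLP. Enough to
# exercise tensors, autograd, and the optimizer loop in ~15 lines.
import torch

x = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)
y = torch.sin(x)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.5f}")  # should end up well under 1e-3
```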
>>16460
>If it wasn't for you and the great contributors of the board, I would not have a place to post that so I thank you!
Glad to be of assistance Anon.
>And the library thread was really necessary, I wish that the board had a better search function as well.
Agreed.
>I was trying to find some specific posts and it took me a long while to remember which threads they were on.
You know Beginner-kun, if you can build programs from source, then you might look into Waifusearch. We put it together to deal with this conundrum. It doesn't do anything complex yet (Boolean OR), but it's fairly quick at finding related posts for a simple term. For example, to look up 'the Pile', pic related is the result for /robowaifu/ :
>
>-Latest version of Waifusearch v0.2a
>(>>8678)
>My University provided me with a platform full of questions
>It changes the variables randomly and gives you space to solve as much as you want.


>>16464 Thanks for all the great links & advice Anon, appreciated.

Open file (213.86 KB 406x532 13213d24132.PNG)
Open file (1.19 MB 1603x1640 non.png)
Robowaifu Technician 10/29/2020 (Thu) 21:56:16 No.6187 [Reply]
https://www.youtube.com/watch?v=SWI5KJvqIfg
I have been working on the creation of waifus using GANs etc... I've come across this project and I am totally amazed. Does anyone have any idea how we can achieve this level of quality in animation with GAN-created characters? I think accomplishing this kind of work would have a huge impact on our progress. Calling all the people who posted in the chatbot thread.
1 post omitted.
Open file (213.22 KB 400x400 sample4.png)
Open file (199.16 KB 400x400 sample1.png)
Open file (194.92 KB 400x400 sample2.png)
Open file (199.43 KB 400x400 sample3.png)
>>6188 Looking at some old tweets from them, I think it is safe to say that it doesn't look much different from StyleGAN on portraits. The shoulders are bad, and most of the work is done by their data cleaning to simplify the problem. Interps & style mixing are nothing special either. Gwern's work with some whack data was able to create a similar kind of character. Also waifulabs - which is all run by StyleGAN - can create some really high-quality characters from different positions. And notice that they are a game development studio which does not work on AI waifu creation. Looks like hype-bait to me, to be honest. They probably cherrypicked some of the results, and maybe even manually touched them up to create this kind of animation. Considering their budget and data, that is well possible. I am not sure if they still use StyleGAN though. They do not drop even a clue. But honestly, with the current state of it and the time they have spent on it, I think they use a different approach.
My chief concern is first and foremost: Is this open-source? If not, then it's relatively useless to us here on /robowaifu/, other than tangentially as inspiration. Hand-drawn, meticulously-crafted animu is far better in that role tbh.
>>6187 It appears the characters are generated with a GAN, then another model separates the character pieces into textures for a Live2D model. They're not animated with AI, but there are techniques to do such a thing: https://www.youtube.com/watch?v=p1b5aiTrGzY
Video on the state of anime GANs and anime created by AI, including animation for vtuber/avatar-style applications: https://youtu.be/DX1lUelmyUo
One of the guys mentioned in the video, creating a 3D model from a drawing, around 10:45 in the video above: https://github.com/t-takasaka - I didn't really find which repo it is on his Github yet, though he does seem to have some pose-estimation-to-avatar code in his repository.
Other examples in the video might be more interesting for guys trying to build a virtual waifu. "Talking Head Anime 2", based on one picture: https://youtu.be/m13MLXNwdfY
>>16245 This would be tremendously helpful to us if we can find a straightforward way to accomplish this kind of thing in our robowaifu's onboard systems Anon ('character' recognition, situational awareness, hazard avoidance, etc.) Thanks! :^)

Open file (185.64 KB 1317x493 NS-VQA on CLEVR.png)
Open file (102.53 KB 1065x470 NS-VQA accuracy.png)
Open file (86.77 KB 498x401 NS-VQA efficiency.png)
Neurosymbolic AI Robowaifu Technician 05/11/2022 (Wed) 07:20:50 No.16217 [Reply]
I stumbled upon a couple videos critiquing "Deep Learning" as inefficient, fragile, opaque, and narrow [1]. The claim is that Deep Learning requires too much data, yet it performs poorly trying to extrapolate from the training set; how it arrives at its conclusions is opaque, so it's not immediately obvious why it breaks in certain cases; and all that learned information cannot be transferred between domains easily. They then put forth "Neurosymbolic AI" as the solution to DL's ails and the next step of AI, along with NS-VQA as an impressive example at the end [2].
What does /robowaifu/ think about Neurosymbolic AI (NeSy)? NeSy is any approach that combines neural networks with symbolic AI techniques to take advantage of both their strengths. One example is Neuro-Symbolic Dynamic Reasoning (NS-DR) applied to the CLEVRER dataset [3], which cascades information from neural networks into a symbolic executor. Another example is for symbolic mathematics [4], which "significantly outperforms Wolfram Mathematica" in speed and accuracy.
The promise or goal is that NeSy will bring about several benefits:
1. Out-of-distribution generalization
2. Interpretability
3. Reduced size of training data
4. Transferability
5. Reasoning
I brought it up because points 3 and 5, and to a lesser degree 4, are very relevant for the purpose of making a robot waifu's AI. Do you believe these promises are real? Or do you think it's an over-hyped meme some academics made to distract us from Deep Learning?
I'm split between believing these promises are real and this being academics trying to make "Neurosymbolic AI" a new buzzword. [5] tries to put forth a taxonomy of NeSy AIs. It labels [4] as an example of NeSy since it parses math expressions into symbolic trees, but [4] refers to itself as Deep Learning, not neurosymbolic or even symbolic. Ditto with AlphaGo and self-driving car AI. And the NS-DR example was beaten by DeepMind's end-to-end neural network Aloe [6], and overwhelmingly so when answering CLEVRER's counterfactuals. A study reviewed how well NeSy implementations met their goals based on their papers, but its answer was inconclusive [7]. It's also annoying looking for articles on this topic because there are like five ways to write the term (Neurosymbolic, Neuro Symbolic, Neuro-Symbolic, Neural Symbolic, Neural-Symbolic).
>References
[1] MIT 6.S191 (2020): Neurosymbolic AI. <https://www.youtube.com/watch?v=4PuuziOgSU4>
[2] Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. <http://nsvqa.csail.mit.edu/>

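To make the NeSy split concrete: the neural half perceives (pixels to symbols), the symbolic half reasons over the symbols. A minimal sketch in the spirit of NS-VQA; perceive() is a stub standing in for a trained scene parser, and its output here is faked:
```python
# Minimal neurosymbolic split, NS-VQA style: a neural model turns pixels
# into a symbolic scene table, then a plain-Python "program executor"
# answers queries over it. perceive() is a stub for a trained network.
def perceive(image) -> list[dict]:
    # A real system would run a scene-parsing network here; output is faked.
    return [
        {"shape": "cube",   "color": "red",  "size": "large"},
        {"shape": "sphere", "color": "blue", "size": "small"},
        {"shape": "cube",   "color": "blue", "size": "small"},
    ]

# Symbolic primitives: each step filters or counts the scene table.
def filter_attr(scene, key, value):
    return [o for o in scene if o[key] == value]

def count(scene):
    return len(scene)

# "How many blue cubes are there?" as a composed symbolic program:
scene = perceive(image=None)
answer = count(filter_attr(filter_attr(scene, "color", "blue"), "shape", "cube"))
print(answer)  # -> 1
```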

Open file (422.58 KB 1344x496 MDETR.png)
Open file (68.28 KB 712x440 the matrix has you.jpg)
>>16217 I think such critique is outdated. The impressive results of NS-VQA have been beaten by full deep learning approaches like MDETR.[1] It would be a bit ironic to call deep learning fragile and narrow and then proceed to write specific functions that only handle a certain type of data, of which the training set just happens to be a small subset, and call it generalization. Sure, it can handle 'out-of-distribution' examples with respect to the training set, but give it a truly out-of-distribution dataset with respect to the functions and these handwritten methods will fail completely.
A lot of deep learning approaches these days can learn entire new classes of data from as few as 10-100 examples. ADAPET[2] learns difficult language understanding tasks from only 32 examples. RCE[3] can learn from a single success-state example of a finished task. DINO[4] can learn to identify objects from no labelled examples at all. CLIP[5] and CoCa[6] are examples of deep learning generalizing to datasets they were never trained on, including adversarial datasets, and outperforming specialized models, and this is just stuff off the top of my head. Someone ought to give DALL-E 2 the prompt "a school bus that is an ostrich" and put that meme to rest.
That said, neurosymbolic AI has its place, and I've been using it lately to solve problems that aren't easily solvable with deep learning alone. There are times when using a discrete algorithm saves development time or outperforms existing deep learning approaches. I don't really think of what I'm doing as neurosymbolic AI either. Stepping away from matrix multiplications for a bit doesn't suddenly solve all your problems and become something entirely different from deep learning. You have to be really careful, actually, because often a simpler deep learning approach will outperform a more clever-seeming neurosymbolic one, which is clearly evident in the progression of AlphaGo to AlphaZero to MuZero. From my experience it hasn't really delivered much on the promises you listed, except maybe points 2 and 5. I wouldn't think of it as something good or bad though. It's just another tool, and it's what you do with that tool that counts.
There was a good paper on how to do supervised training on classical algorithms. Basically you can teach a neural network to do a lot of what symbolic AI can do, even complicated algorithms like 3D rendering, finding the shortest path, or a sorting algorithm. I think it shows we've barely scratched the surface of what neural networks are capable of doing. (A toy version of the sorting case is sketched below.)
https://www.youtube.com/watch?v=01ENzpkjOCE
https://arxiv.org/pdf/2110.05651.pdf
>Links
1. https://arxiv.org/abs/2104.12763
2. https://arxiv.org/abs/2103.11955
3. https://arxiv.org/abs/2103.12656
4. https://arxiv.org/abs/2104.14294
5. https://arxiv.org/abs/2103.00020

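A toy illustration of that last point, assuming nothing from the linked paper beyond the general idea: a small network trained to predict each element's rank in a 5-element input, i.e. to imitate a sorting algorithm:
```python
# Toy "neural sort": an MLP learns, for random 5-vectors, the rank of each
# element (argsort as per-position classification). This only illustrates
# the general idea; it is not the linked paper's actual method.
import torch

N = 5
model = torch.nn.Sequential(
    torch.nn.Linear(N, 128), torch.nn.ReLU(), torch.nn.Linear(128, N * N)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(256, N)                   # random unsorted inputs
    ranks = x.argsort(dim=1).argsort(dim=1)  # target rank of each element
    logits = model(x).view(-1, N)            # one N-way guess per position
    loss = torch.nn.functional.cross_entropy(logits, ranks.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

x = torch.rand(1, N)
pred = model(x).view(N, N).argmax(dim=1)      # learned ranks
true = x.argsort(dim=1).argsort(dim=1)        # actual ranks
print(x, pred, true)  # the two rank vectors should usually agree
```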

Open file (201.23 KB 1133x1700 spaghetti_mama.jpg)
Idling around the Interwebz today[a], I found myself reading the Chinese Room Argument article on the IEP[b], and came across the editor's contention that the notion "mind is everywhere" is an "absurd consequence".
>"Searle also insists the systems reply would have the absurd consequence that “mind is everywhere.” For instance, “there is a level of description at which my stomach does information processing” there being “nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire.” "[1],[2]
I found that supposed refutation of this concept vaguely humorous on a personal level. As a devout Christian Believer, I would very strongly assert that indeed, Mind is everywhere. Always has been, always will be. To wit: The Holy Spirit sees and knows everything, everywhere. As King David wrote:
>7 Where can I go to escape Your Spirit?
> Where can I flee from Your presence?
>8 If I ascend to the heavens, You are there;
> if I make my bed in Sheol, You are there.
>9 If I rise on the wings of the dawn,
> if I settle by the farthest sea,
>10 even there Your hand will guide me;
> Your right hand will hold me fast.[3]
However, I definitely agree with the authors in their writing that
>"it's just ridiculous" to assert
>" “that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” ".[1],[2]


Edited last time by Chobitsu on 05/16/2022 (Mon) 00:13:09.

Reploid thread Robowaifu Technician 02/28/2022 (Mon) 04:13:32 No.15349 [Reply]
A few people thought it'd be a good idea to start a thread for Reploid builds, so here we are! To kick things off, here's a little progress on painting my own RiCO. It's just spray paint so it doesn't look stellar, and I screwed up a couple parts. All the blue trim paint needs done as well. I don't care if it's perfect, I just want to get her done. Not to mention anything adjacent to "art" or "craftsmanship" is beyond me, mostly due to lack of patience: I don't want to spend 1000s of hours grinding away with a paintbrush when I could be designing up cool (to me...) robotic mechanisms, for instance. I bet your bottom dollar none of my projects will be winning awards in the fit-and-finish department. Can't wait to see what happens with Pandora and whatever other Reploid projects people might be working on.
36 posts and 28 images omitted.
>>16021 >Great idea, mind if I borrow this technique for MaidCom? Please do. Check out how I did the rest of her eyes as well, perhaps it could scale up.
>>16029 It does scale up really well. Though I will innovate upon it for Pandora.
Edited last time by AllieDev on 04/27/2022 (Wed) 01:32:08.
>>15999 I'm really impressed with this. I have a lot of questions, I'll scan the rest of this thread to make sure they aren't already answered first.
Curious if you had any changes this week with your wonderful robowaifu, RiCOdev?
>>16243 RiCO will be on hold for the near future; your friendly neighborhood reploid builder has got bigger fish to fry. I'll still keep an eye out here to answer questions etc.

Waifu Robotics Project Dump Robowaifu Technician 09/18/2019 (Wed) 03:45:02 No.366 [Reply] [Last]
Edited last time by rw_bumpbot on 05/25/2020 (Mon) 04:54:42.
240 posts and 174 images omitted.
>>15733 >link Google is your friend: https://sweetiebot.net/ From what I understand they want to keep things rated PG. The voice generator (community talknet project) is unrelated and based in the /ppp/ thread on 4chan.org/mlp/. Enter if you dare ^:)
Open file (117.25 KB 640x798 cybergirl.jpg)
>>15731 >ponies I was thinking more >>15733 or picrel, but a cute robot horse has the PR advantage because it could easily be a children's toy.
>>15731
I think some of the ponys mentioned this project to us before Anon, thanks. I wish those Russians good success!
>>15733
Heh, /monster/ pls. :^) A hexapod waifu is actually a really good idea for a special-service meido 'waifu' IMO. Just my own subjective taste in the matter. But objectively, a hexapod locomotion base (especially combined with roller 'feet') is a superior stability platform from which to do housework & other work. No question.
>>15736
Yep. I immediately came to a similar conclusion. But it's obvious they are going for police force service with the push for that bot, and the price tag shows it. Shame, tbh.
Are you people serious? All the videos are hosted offsite. What the hell am I going to do with a filename, put it in Yandex? Fucking yahoo.jp??? Tor search??? Why do this?
>>16233 Heh, sorry about that Anon. You're just dealing with the missing information from when our first site was destroyed on us. Those were part of the recovery effort. Unfortunately our copies of the files were lost in the attack. Maybe someday some Anon might restore them here for us. Again apologies, you might see similar in other threads here too. But at least we still have a board! :^)
