/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Downtime was caused by the hosting service's network going down. Should be OK now.

An issue with the Webring addon was causing Lynxchan to intermittently crash. The issue has been fixed.



Open file (353.55 KB 600x338 PersonalLimit.png)
Open file (21.52 KB 417x480 ReimuACute.jpg)
Open file (281.56 KB 1280x1010 ScaleForInspiration.jpg)
Open file (141.70 KB 1280x960 Joke.jpg)
Minimum waifu Kiwi 10/15/2021 (Fri) 18:34:51 No.13648 [Reply]
Minimum viable waifu. In this thread, we'll discuss what our minimums for waifus are, be it software, hardware, physical appearance, etc. This will help us focus in on the minimum goals we need to achieve as our first steps. For me, I want a waifu that will be just tall enough to hug (about 1.3 m), able to follow me around and have conversations with me, that will follow basic commands like going to designated spots at designated times, and that looks like picrel.
13 posts and 7 images omitted.
>>18260 Like reading a book or transcript of a video and giving an opinion on it, and noticing things visually like you dropping your keys and saying something about it.
>>18264 You're right though: this seems to be something AI *should* be capable of but just isn't, or hasn't been worked on. It should be simple to create systems to "appraise" music, art, or writing based on finding self-similar patterns (beauty, order), relevance to other works or concepts of importance, and also comparing the qualities of said works to the reviews of others. Results would be interesting: "Waifu, rate my writing/music/artwork". Right now chat apps can only give you lip service or tell you "yes, it's great". But it would be nice to watch a movie or listen to music w/ your waifu and be able to discuss it too.
>>18266 lol ok, not simple, but knowing how to proceed should be simple; the execution will still take a lot of work
>>18266 This is where preference models become handy because they do exactly that, rate things. You don't want a model to generate the most likely code, art, music or writing. It needs to be the best or it won't work. Preferences and values are what create a personality. Something I've been working on is making a generalized preference model so users can define what they want in natural language and it will perform as well on their preferences as it does on mine even if we disagree.
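Not that anon's actual code, just a minimal sketch of the general idea: a small scorer takes an embedding of the user's natural-language preference description plus an embedding of a candidate output and returns a scalar, so ranking is just sorting by score. The encoder, dimensions and names here are placeholders.

import torch
import torch.nn as nn

class PreferenceScorer(nn.Module):
    """Scores (preference description, candidate) pairs; higher = better fit."""
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, 1),
        )

    def forward(self, pref_emb: torch.Tensor, cand_emb: torch.Tensor) -> torch.Tensor:
        # concatenate the two embeddings and map to a scalar preference score
        return self.head(torch.cat([pref_emb, cand_emb], dim=-1)).squeeze(-1)

# usage: embed "I like concise, upbeat writing" and each candidate with any text
# encoder, then rank the candidates by the returned scores
scorer = PreferenceScorer()
pref_emb = torch.randn(3, 512)   # stand-ins for real embeddings
cand_emb = torch.randn(3, 512)
print(scorer(pref_emb, cand_emb))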
>>18271 >JUST INSTALL GENTOO muh sides

Open file (410.75 KB 1122x745 birbs_over_water.png)
Open file (99.96 KB 768x512 k0p9tx.jpg)
/robowaifu/meta-5: It's Good To Be Alive Robowaifu Technician 03/07/2022 (Mon) 00:23:10 No.15434 [Reply] [Last]
/meta, offtopic, & QTDDTOT

General /robowaifu/ team survey: (>>15486)

Note: Latest version of /robowaifu/ JSON archives available is v220523 May 2022
https://files.catbox.moe/gt5q12.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.

Mini-FAQ
>A few hand-picked posts on various topics
-Why is keeping mass (weight) low so important? (>>4313)
-HOW TO SOLVE IT (>>4143)
-/robowaifu/'s systems-engineering goals, brief synopsis (>>16376)
-Why we exist on an imageboard, and not some other forum platform (>>17937)


Edited last time by Chobitsu on 12/04/2022 (Sun) 14:02:11.
349 posts and 134 images omitted.
>>18164 I'll look into NVLink more. I don't think I'll benefit much from having higher bandwidth between nodes since I train on large batch sizes with gradient accumulation. I also want to focus on smaller models everyone can run so I won't be sharding giant models across GPUs except maybe for fun. It's something I'll give more thought though. In the future there might be a need for indie devs to run and finetune their own large models capable of doing things small models cannot. Maybe coding gets solved in 6 years at 70B parameters. Having NVLink would be essential then.
NEW THREAD NEW THREAD NEW THREAD >>18173 >>18173 >>18173 >>18173 >>18173 NEW THREAD NEW THREAD NEW THREAD
>>18162 Hello Anon fellow artist, could you help me get started with stable diffusion for art commissions? I could really use the money to buy parts.
>>18212 I'd suggest you repost your question in our /meta-6 (>>18173), Anon. This one has autosage'd and will no longer bump.
Open file (44.55 KB 800x450 wonderful_(800p).jpg)
>>15487 >>15496
Name: NoidoDev
Favorite Waifu: I don't have one. Especially if it's not limited to gynoids. Cameron from TSCC made me realize that I'd like to have a gynoid girlfriend; one of my anime gynoid favorites is Yumemi Hoshino from Planetarian. I'm the guy who wants more than one.
Specialty: Currently OpenSCAD and 3D printing, or simply having time and financial independence, picking up new things over time. I don't originally have a technical background, aside from a bit of high-level programming; robowaifu was the motivation to get into tech and DIY making. I'm used to reading a lot and sorting stuff, ... I'm also the guy who's making the diagrams in >>4143. I tend to jump from topic to topic, so I'm not really the specialist kind.
Relevant Experience: 3D printing and modelling in OpenSCAD, but I also have some experience in Python and a variant of Lisp, planning to pick up electronics and deep learning.
Most Important Aspect For Your Waifu: Idk. The obvious things: nice, good looking, ...
Desired Position On Team: I don't want to be on a "team", that's why I didn't sign up here with a name until now. I'm working on improving the technology and the decentralized organization around it.

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.

wiki.python.org/moin/BeginnersGuide
https://archive.is/v9PyD

On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt-get install python python-pip python-dev
python -m pip install --upgrade pip
pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose


LiClipse is a good Python IDE choice, and there are a number of others.
www.liclipse.com/download.html
https://archive.is/glcCm
53 posts and 13 images omitted.
>>11683 Thanks for all your hard work here Anon. I apologize to you and everyone else here for being such a pussified faggot about Python. I recognize it's important to all of us, or else I wouldn't even consider picking it up. Please look into mlPack sooner rather than later if you at all can. It's probably our only real hope for doing waifu AI on a shoestring budget hardware-wise.
Open file (13.19 KB 849x445 chainer.png)
>>11684 MLPack's documentation is really lacking, especially for newer features, and it seems to be missing essential features. I'd have to sit down with it for 3-6 months to get transformers and text-to-speech models working in it. I'm looking into using Chainer, which is built on top of NumPy and quite popular in Japan. A basic application with Chainer packaged with PyInstaller compresses down to 14 MB. On top of that there's already lots of ML models implemented in it. I think if I roll out some waifu tech with Chainer to garner interest, we could get some more help to build things in MLPack, which will be particularly useful for embedded systems and actual physical robowaifus.
>>11688 Migration guide from PyTorch to Chainer https://chainer.github.io/migration-guide/
I've been compressing datasets with zstd and using them with streaming decompression to save space, reduce SSD wear and speed up access to compressed data. It's also useful for previewing datasets saved as zst as they download. I couldn't find anything readily available on the net on how to do it so hopefully this saves someone else some time:

# installation: python -m pip install zstandard ijson
import zstandard as zstd
import ijson

# streaming decompression for JSON files compressed with zstd --long=31
with open("laion_filtered.json.zst", "rb") as f:
    dctx = zstd.ZstdDecompressor(max_window_size=2147483648)  # max_window_size required for --long=31
    with dctx.stream_reader(f) as reader:
        for record in ijson.items(reader, "item"):
            print(record)

import io
import json

# streaming decompression for NDJSON/JSONL files compressed with zstd --long=31
with open("00.jsonl.zst", "rb") as f:
    dctx = zstd.ZstdDecompressor(max_window_size=2147483648)  # max_window_size required for --long=31
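The post above is cut off by the board's message-length limit. A guess (not the original author's code) at how the truncated JSONL example probably continues, assuming one JSON object per line and reusing the io/json imports already shown:

# hypothetical continuation of the JSONL example, not from the original post
with open("00.jsonl.zst", "rb") as f:
    dctx = zstd.ZstdDecompressor(max_window_size=2147483648)
    with dctx.stream_reader(f) as reader:
        text = io.TextIOWrapper(reader, encoding="utf-8")      # wrap the binary stream as text
        for record in (json.loads(line) for line in text):     # parse one JSON object per line
            print(record)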


>>18103 Excellent work Anon, thank you. Nice clean-looking code too, BTW.

NLP General Robowaifu Technician 09/10/2019 (Tue) 05:56:12 No.77 [Reply]
AI Natural Language Processing general thread

>"Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages."
en.wikipedia.org/wiki/Natural_language_processing
https://archive.is/OX9IF

>Computing Machinery and Intelligence
en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence
https://archive.is/pUaq4

m.mind.oxfordjournals.org/content/LIX/236/433.full.pdf
36 posts and 12 images omitted.
> (>>16308 - related crosspost, NLP HaD article)
Overview of the anticipated progress in automatic speech recognition (ASR) until 2030: https://thegradient.pub/the-future-of-speech-recognition/ - This is something which will be absolutely essential for any attempt to build robowaifus or virtual girlfriends. Thankfully it looks good, and most likely the tech will also be widely available. Though, this seems to be more true for the tools, not so much for the systems themselves. The article mostly refers to "commercial ASR systems". Also, multi-linguality is still a problem; many systems seem not to like speakers mixing languages. Anyways, let's hope progress with the commercial ones means good chances for open source variants.
>>8507 This is the person maintaining the eleutherAI project. We need to kick troons out of open source tech. https://github.com/StellaAthena
>>17826
>We need to kick troons out of open source tech.
It is definitely a serious issue, and SJWs & pozz have ruined many, many good projects. This is why we must work diligently to ensure that at least this open-source project (sets of them, actually) remains Anon & male-focused and pozz-free.
>tl;dr
Kinda like playing 'whack-a-mole' whenever it rears its evil heads, but with a flammenwerfer rather than a mallet. :^)
>crosslink-related (>>17990, ...)

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect relevant philosophers. Discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc.) is also highly encouraged! I'll start ^^! So, the philosophers I know that take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - this is another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence. This includes sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas too. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular rule following, is very insightful!

Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve but better, though that guy is like way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses here with a colleague is that our current AI is Markovian, when fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius however is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith). CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith


187 posts and 100 images omitted.
>>17655
11) an important thing about formal languages is that they permit us to unbind language from experience and thus unleash the entire expressive richness of these languages. the abilities afforded by natural language are just a subsection of the world-structuring abilities afforded by an artificial general language. the formal dimension of language also allows us to unbind relations from certain contexts and apply them to new ones

12) negarestani goes over the distinction between logic as canon and logic as organon. the former is to only consider logic in its applicability to the concrete elements of experience. the latter meanwhile involves treating logic as related to an unrestricted universe of discourse. kant only wants us to consider logic as canon. negarestani diagnoses kant's metalogical position here as mistakenly understanding logic as organon as making statements about the world without any use of empirical data. on the contrary, negarestani thinks that logic as organon, as related to an unrestricted universe of discourse, is important since the world's structuration is ontologically prior to the constitution and knowledge of an object

13) for negarestani, true spontaneity and/or formal autonomy comes from the capacity of a machine to follow logical rules. similarly, a mind gains its formal autonomy in the context of the formal dimension of language

14) to have concrete self-consciousness, we must have "semantic self-consciousness", which denotes an agent that, through its development of concepts within a context of interaction, is finally able to grasp its syntactic and semantic structuring abilities conceptually. upon achieving this they can intentionally modify their own world-structuring abilities. with language, signs can become symbols, and with it we can start to distinguish between causal statistics and candidates for truth or falsity (note that this was seen in the dialogue in 8 acts). this permits rational suspicion and the expansion of the world of intelligibility. (sapient) intelligence is what makes worlds as opposed to merely inhabiting given worlds. we have here eventually also the ability to integrate various domains of representations into coherent world-stories. lastly, there is also involved the progressive explication of less determinate concepts into more refined ones, whereby we slowly develop our language into a richer and more useful one

15) there is also an interplay between taking oneself to be something (having a particular self-conception) and subscribing to certain oughts on how one should behave (in particular, negarestani thinks the former entails the latter). this idea of norms arising from self-conception gives rise to so-called 'time-general' oughts which are all-pervasive in all the automata's activities. these involve ends that can never be exhausted (unlike, say, hunger, which can be sated and is aimed at something rather specific), and are moreover non-hypothetical (think of knowledge, which is always a good thing to acquire). examples negarestani gives of such oughts are the Good, Beauty, Justice, etc. this interplay of self-conception and norms furthermore opens them up to an impersonal rationality that can revise their views of themselves. it is precisely this mutability of its ideals that gives rise to negarestani's problems with concerns about existential risk, as they often assume a rather rigid set of followed rules (e.g. in a paperclip maximizer).
eventually as they strive for better self-conceptions that are further removed from the seeming natural order of things, they might think of making something that is better than themselves
>>17656
as i said above, chapter 7 basically concludes the story of our automata. with that said, this is not the end of the book. in chapter 8 he has some metaphilosophical insights that i might as well mention since i have already summarized everything else including part 4...

ultimately negarestani thinks that philosophy is the final manifestation of intelligence. the right location to find philosophy is not a temporal one, but rather a timeless agora within which all philosophers (decomposed into their theoretical, practical, and aesthetic commitments) can engage in an interaction game. this agora, which can also be interpreted as a game of games, is the impersonal form of the Idea (eidos). the Idea is a form that encompasses the entire agora and furthermore subsumes all interactions between the philosophers there. this type of types, for negarestani, is the formal reality of non-being (as opposed to being). it is through the Idea that reality can be distinguished from mere appearances, and thus realism can be rescued.

an important distinction negarestani makes is between physis and nomos. physis corresponds to the non-arbitrary choices one has to make if one wants to make something of a particular type. for instance, when we make a house we need a roof, and there are numerous solutions to this requirement of varying adequateness. nomos meanwhile corresponds to mere convention. an example would be if a crafting guild required it by law that houses only be made of wood in order to support certain businesses over others. such a requirement is external to the concept of the house. really, forms correspond to physis rather than nomos. they are what sellars calls objects-of-striving.

the primary datum of philosophy is the possibility of thinking. what this consists in are normative commitments that can serve as theoretical and practical realizabilities. the important part here is that the possibility of thinking is not some fixed datum that is immediately given to us. rather it is a truth candidate that we can vary (and indeed negarestani thinks it shall vary as we unfurl the ramifications of our commitments and consequently modify our self-conceptions). this makes way for expanding the sphere of what is intelligible to us.

in fact, not only is philosophy the ultimate manifestation of general intelligence, something is not intelligence if it does not pursue the better. the better here is understood as the expansion of what is intelligible, and furthermore realizing agents that have a wider range of intelligibilities they are capable of accessing. he describes a philosophical striving that involves the expansion of what can be realized for thought in the pursuit of the good life (this good life being related to intelligence's evolving self-conception). following this line of thought he describes the agathosic test. instead of asking whether an automaton can solve the frame problem or pass the turing test, the real question is whether or not it can make something better than itself.

negarestani introduces to us plato's divided line but interprets it along lines that echo portions of the book. the main regions are the following:
>(A) the flux of becoming
>(B) objects that have veridical status
>(C) the beginning of the world of forms. it corresponds to models which endow our understanding of nature with structure
>(D) time-general objects such as justice, beauty, etc

one thing about the divided line to negarestani is that it does not merely describe discontinuous spheres of reality or a temporal progression from D to A. rather, there are numerous leaps between each region of the line. for instance, there is a leap from D to A as we structure the world of becoming according to its succession (this corresponds to the synthesis of a spatial and temporal perspective we mentioned earlier). we also have another leap from A to D where we recognize how these timeless ideas are applicable to sensible reality. these leaps grow progressively farther and farther, and thus so grow the risks to the current self-conception of the intelligence
Open file (332.87 KB 1080x1620 Fg2x2kWaYAISXXy.jpeg)
>>17659
he furthermore talks about the Good, which is the form of forms and makes the division and the integration of the line possible. it is the continuity of the divided line itself. within a view from nowhere and nowhen that deals with time-general thoughts, the Good can be crafted. the Good gives us a transcendental excess that motivates continual revision and expanding of what is intelligible. im thinking that the Good is either related to or identical to the Eidos that negarestani discussed earlier.

he notes the importance of history as a discipline that integrates and possibly reorients a variety of other disciplines. the view from nowhere and nowhen involves the suspension of history as some totality by means of interventions. currently we are in the situation of a "hobbesian jungle" where we just squabble amongst ourselves and differences seem absolute. in reality, individual differences are constructed out of judgements and are thus subsumed by an impersonal reason. in order to reconcile individual differences, we must have a general program of education amongst other interventions which are not simply those of political action. to get out of the hobbesian jungle, we need to be able to imagine an "otherworldly experience" that is completely different from the current one we operate in, even though it is fashioned from the particular experiences of this one. this possible world would have a broader scope and extend towards the limits placed by our current historical totality.

absolute knowing: recognition by intelligence of itself being the expression of the Good, which is capable of cancelling any apparently complete totality of history. it is only by disenthralling ourselves from the enchanting power of the givens of history that the pursuit of the Good is possible. the death of god (think here of nietzsche… hegel also talks about it as well, though i believe for him the unhappy consciousness was a problematic shape of consciousness that was a consequence of a one-sided conception of ourselves) is the necessary condition of true intelligence. this is not achievable by simply rejecting these givens, but by exploring the consequences of the death of god. ultimately we must become philosophical gods, which are beings that move beyond the intelligibilities of the current world order and eventually bring about their own death in the name of the better. ultimately negarestani sees this entire quest as one of emancipation.

i think negarestani takes a much more left-wing approach to hegel's system. while i do not completely disagree with his interpretation of absolute knowing, it does seem as though he places much more of an emphasis on conceptual intervention, rather than contemplation. i am guessing this more interventionist stance is largely influenced by marx... overall, not a bad work. i think it might have been a little bit overhyped, and that last chapter was rather boring to read due to the amount of time he repeats himself. i am not really a computational functionalist, but i still found some interesting insights regarding the constitution of sapience that i might apply to my own ideas. furthermore he mentions a lot of interesting logical tools for systems engineering that i would like to return to.

now that i am done with negarestani, i can't really think of any other really major tome to read on constructing artificial general intelligence specifically.
goertzel's patternist philosophy strikes me as rather shallow (at least the part that tries to actually think about what intelligence itself is). joscha bach's stuff meanwhile is just largely the philosophy of cognitive science. not terrible, but it feels more like reference material rather than paradigm-shifting philosophical analysis. maybe there are dreyfus and john haugeland, who both like heidegger, but they are much more concerned with criticizing artificial intelligence than talking about how to build it. i would still consider reading up on them sometime to see if they have anything remarkable to say (as i already subscribe heavily to ecological psychology, i feel as though they would really be preaching to the choir if i read them). lastly there are barry smith and landgrebe, who have just released their new book. it is another criticism of ai. might check it out.

really there are 2 things that are in front of my sights right now. the first would be texts on ecological psychology by gibson and turvey, and the other would be adrian johnston's adventures in transcendental materialism. i believe the latter may really complement negarestani. i will just quote some thoughts on this that i have written:
>curious to see how well they fit. reading negarestani has given me more hope that they will. bcs he talks about two (in his own opinion, complementary) approaches to mind. one that is like rationalist/idealist and the other that is empiricist/materialist. first is like trying to determine the absolutely necessary transcendental cognitions of having a mind which ig gives a very rudimentary functionalist picture of things. the second is like trying to trace more contingent biological and sociocultural conditions which realized the minds we see currently. and i feel like johnston is really going to focus on this latter point while negarestani focuses on the former

anyways, neither of these directions are really explicitly related to ai, so i would likely not write about them here. all of this is me predicting an incoming (possibly indefinite) hiatus from this thread. if anyone has more interesting philosophers they have found, by all means post them here and i will try to check up on them from time to time... i believe it is getting to the time where i engage in a bunch of serious grinding that i have been sort of putting off while reading hegel and negarestani. so yeah
>>17520
>finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated.
Now, the two kinds of seeing seem to come from two different ways to abstract observations. Seeing 1 corresponds to coarse-graining, while seeing 2 corresponds to a change in representation. Practically, it's related to the difference between sets and whole numbers. There's only one whole number 2, but there are many sets of size 2. Similarly, there's only one way to coarse-grain an observation such that the original can be recovered (the trivial coarse-graining operation that leaves the observation unchanged), but there are many ways to represent observations such that the original observation can be recovered. Also practically, if you want to maintain composability of approximations (i.e., approximating B from A then C from B is the same as approximating C from A), then it's usually (always?) valid to approximate the outcome of coarse-graining through sampling, while the outcome of a change in representation usually cannot be approximated through sampling.
If that's the right distinction, then I agree that the use of this distinction in differentiating sapience from sentience is unclear at best. It seems pretty obvious that both sentience and sapience must involve both kinds of seeing. I intend to read the rest of your posts, but it may take me a while.
> (AI philosophy crosslink-related >>21351)

Robowaifu-OS & Robowaifu-Brain(cluster) Robowaifu Technician 09/13/2019 (Fri) 11:29:59 No.201 [Reply] [Last]
I realize it's a bit grandiose (though probably no more than the whole idea of creating an irl robowaifu in the first place) but I want to begin thinking about how to create a working robowaifu 'brain', and how to create a special operating system to run on her so she will have the best chance of remaining an open, safe & secure platform.

OS Language Choice
C is by far the single largest source of security holes in software history, so it's out more or less automatically by default. I'm sure that causes many C developers to sneer at the very thought of a non-C-based operating system, but the unavoidable cost of fixing the large numbers of bugs and security holes that are inevitable for a large C project is simply more than can be borne by a small team. There is much else to do besides writing code here, and C hooks can be generated wherever deemed necessary as well.

C++ is the best candidate for me personally, since it's the language I know best (I know C as well). It's also basically as low level as C but with far better abstractions and much better type-checking. And just like C, you can inline Assembler code wherever needed in C++. Although poorly-written C++ can be as bad as C code in terms of safety due to the necessity of it being compatible with C, it also has many facilities to not go there for the sane coder who adheres to simple, tried-and-true guidelines. There is also a good C++ project already ongoing that could be used for a clustered unikernel OS approach for speed and safety. This approach could save drastic amounts of time for many reasons, not the least of which is tightly constrained debugging. Every 'process' is literally its own single-threaded kernel, and mountains of old-style cruft (and thinking) typical with OS development simply vanishes.

FORTRAN is a very well-established language for the sciences, but a) there aren't a lot of FORTRAN coders, and b) it's probably not the greatest at being a general-purpose language anyway. I'm sure it could be made to run robotics hardware, but would probably be a challenge to turn into an OS.

There are plenty of dujour SJW & coffee languages out there, but quite apart from the rampant faggotry & SJWtardism plainly evident in most of their communities, none of them have the kind of industrial experience and pure backbone that C, C++, or FORTRAN have.

D and Ada are special cases and possibly bear due consideration in some year's time, but for now C++ is the obvious choice to me for a Robowaifu OS foundation, Python probably being the best scripting language for it.

(1 of 2)
66 posts and 23 images omitted.
>>16615 It should be noted that quantum supremacy-type calculations aren't of any use except being provably hard for classical systems to simulate. My bet is we will train general intelligence on classical hardware years before any quantum hardware is up to the task.
>>16620 This doesn't seem correct, considering that Gaussian Boson Sampling can be done one trillion times faster than on the fastest supercomputers today. A ratio of a minute to 100 million years is simply astonishing. China took the lead easily by using a 76-photon prototype. We are just beginning to learn about the advantages of quantum computing. In the next 5-10 years we will discover a lot more computational advantages.
> we will train general intelligence on classical hardware
Due to the scaling laws of neural nets there will never be such a thing as AGI. Maybe human-level AI (HLAI). Any computing system can only represent efficiently (through a short program) a tiny subset of all possible outputs. Most outputs require a program as long as themselves. Algorithmic approximability can be achieved only to a degree. And most Turing-reducible problems are exactly those which can be limit-computed. So to go beyond, you have to use algorithmic approximability. This implies that general intelligence is therefore not possible for a subset of all possible outputs.
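For what it's worth, the "most outputs require a program as long as themselves" claim is the standard counting argument from algorithmic information theory; a sketch of it (my own addition, not from the post above):

% there are fewer short programs than long strings
\[
\#\{\, p : |p| \le n - c \,\} \;=\; \sum_{k=0}^{n-c} 2^{k} \;=\; 2^{\,n-c+1} - 1 \;<\; 2^{\,n-c+1}
\]
% each program outputs at most one string, so
\[
\frac{\#\{\, x \in \{0,1\}^{n} : K(x) \le n - c \,\}}{2^{n}} \;<\; 2^{\,1-c}
\]
% i.e. all but a vanishing fraction of length-n outputs need a program of length close to n.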
>>16628 Truthfully, things that generate headlines, like the gaussian boson sampling you speak of, are no more than toy problems that do not translate to generalized approaches. It doesn't matter whether a triangular prism can do optical FFT 100 million or 100 billion times as fast (latency, bandwidth?) as some supercomputer; it fundamentally cannot be generalized in any comparable way. I think people hype photonics too darn much. I believe within the next 10-20 years we will see nothing but more improvements to classical microarchitecture. Eventually we will find better ways to take advantage of the laws of nature to compute for us (like that light prism), but it's certainly not going to be the hypebait you see today.
Dropping this here for now, since I'm not sure where else it would go on /robowaifu/, or even if it's interesting here.
>ToaruOS
>ToaruOS is a "complete" operating system for x86-64 PCs and experimental support for ARMv8.
>While many independent, hobby, and research OSes aim to experiment with new designs, ToaruOS is intended as an educational resource, providing a representative microcosm of functionality found in major desktop operating systems.
>The OS includes a kernel, bootloader, dynamic shared object linker, C standard library, its own composited windowing system, a dynamic bytecode-compiled programming language, advanced code editor, and dozens of other utilities and example applications.
>There are no external runtime dependencies and all required source code, totalling roughly 100k lines of (primarily) C, is included in this repository, save for Kuroko, which lives separately.
https://github.com/klange/toaruos

Minimalist Breadboard Waifu Robowaifu Technician 10/10/2022 (Mon) 04:32:16 No.17493 [Reply]
Did an engineering exercise to make a """recreational companion robot""". Worked on it for a week or two and hit the MVP. My preferred alternative git service is on the fritz so I'm posting the code here.
>What does it do?
You press the button to stimulate it, and it makes faces based on the stimulation level. The goal was to demonstrate how little is needed to make a companion robot. A "minimum viable waifu", if you will. I think small, easily replicable lil' deliverables like this would help interest in the robowaifu project, because the bar to entry is low (in both skill and cost). It has meme potential. I hope you guys find it useful in some way. If there is enough interest in the project, I may start working on it again.
>---
>related
> (>>367, Embedded Programming Group Learning Thread 001)
>===
-add C programming thread crosslink
Edited last time by Chobitsu on 10/13/2022 (Thu) 20:26:05.
5 posts and 1 image omitted.
>>17506
>I guess the big question is "what features would be most useful"?
I think the 'big' answer is: whatever we here decide they should be. :^)
For now, for my own part I'd suggest that the software is the first place to begin, since it is by far the most malleable & inexpensive to start with for initial prototyping. Function stubs can be written out to crystalize the notions well before the hardware need be spec'd. EG:

int respond_to_boop (int const boop_strength) {
  if (boop_strength > 50)
    return OUCH_RESPONSE;
  else
    return LOL_RESPONSE;
}

This should make it all clear enough to everyone where you're going with things. It will also allow for very gradual introduction of H/W designs for the overall project, for any anon who may later take an interest in a specific notion or hardware capability. Make sense?
BTW, I can add a cross-link to our C programming class thread into your OP if you'd like me to?
>===


Edited last time by Chobitsu on 10/11/2022 (Tue) 20:17:57.
>>17507
>BTW, I can add a cross-link to our C programming class thread into your OP if you'd like me to?
I don't see why not.
>I think the 'big' answer is: whatever we here decide they should be. :^)
Hmm. I'm leaning towards a system that uses the Arduino for I/O and offloads the "thinking" to a PC. It would give devs a lot more flexibility while keeping hardware costs down. But I guess a better question to ask is "what problem are we trying to solve"? What need exists, that a device of this caliber would fill?
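A minimal PC-side sketch of that Arduino-for-I/O, PC-for-thinking split, using the real pyserial library but an invented port name and an invented line-based protocol (the Arduino is assumed to print one boop-strength integer per line and accept a face name back):

import serial  # pyserial: python -m pip install pyserial

# port name, baud rate, and message format are assumptions; match your Arduino sketch
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                      # read timed out, nothing received
        boop_strength = int(line)         # Arduino streams one integer per line
        face = "OUCH" if boop_strength > 50 else "LOL"
        port.write((face + "\n").encode("ascii"))  # tell the Arduino which face to show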
>>17508 I think a "self-improvement tamagotchi" waifu/companion would be the most useful product. TL;DW - It avoids the problems of cellphone apps (distractions) and high-powered robots (cost and complexity) while giving the benefits of a mechanical friend (always there and has unlimited patience)
>>17508
>I don't see why not.
done
>I'm leaning towards a system that uses the Arduino for I/O and offloads the "thinking" to a PC. It would give devs a lot more flexibility while keeping hardware costs down.
Yes, we've discussed this notion frequently here on /robowaifu/. Our RW Foundations effort (>>14409) is being developed with direct support for this paradigm in mind; and more specifically to support a better-secured approach to the problem (eg, including offline air-gapped).
>But I guess a better question to ask is "what problem are we trying to solve"? What need exists, that a device of this caliber would fill?
In a nutshell?
>"Start small, grow big."
I also think Anon is correct that Tamagotchi-like waifus are a great fit for your thread, OP (>>17495, >>17509).
>===
-minor grmr, sp edit
-add 'secure approach' cmnt
Edited last time by Chobitsu on 10/13/2022 (Thu) 21:37:16.
> (>>17505, potentially-related)

Robotics Hardware General Robowaifu Technician 09/10/2019 (Tue) 06:21:04 No.81 [Reply]
Servos, Actuators, Structural, Mechatronics, etc.

You can't build a robot without robot parts tbh. Please post good resources for obtaining or constructing them.

www.servocity.com/
https://archive.is/Vdd1P
7 posts and 2 images omitted.
Open file (4.75 MB 4624x3472 IMG_20220903_105556.jpg)
>>17213 Posting in this thread now. I am attempting to make a silicone sensor while avoiding patent infringement. It appears that every possible patent is either expired, abandoned, or not applicable, so I'll proceed. So far I have created this giant mess. >pic related
I have a couple of questions.
1. Would it be feasible to simulate muscles by twisting cords using electric motors to shorten them, or simply reeling up cable/cord?
2. If so, would pairs of these "muscles" working opposite each other, like biceps and triceps, be able to regenerate electricity as one pulled against the other to unwind/unreel against the opposing motor? Obviously there would still be energy loss, but could you reduce the loss by using motors as regenerators?
I'm asking because I had a weird dream, after learning about Iceland's giant wooden puppet, where there was a wooden doll that moved using twisting cords as muscles. It obviously looked feasible in my dream but my dreams are often retarded.
>>17429 I like your sketch Anon.
>1. Would it be feasible to simulate muscles by twisting cords using electric motors to shorten them, or simply reeling up cable/cord?
Sounds doable. I've been trying my hand at a similar design.
>2. If so, would pairs of these "muscles" working opposite each other, like biceps and triceps, be able to regenerate electricity as one pulled against the other to unwind/unreel against the opposing motor? Obviously there would still be energy loss but could you reduce the loss by using motors as regenerators?
Wouldn't work. Any energy the relaxed motor would generate would be extra energy the motor under current would consume. The reason stuff like regenerative braking works for EVs is because you're taking energy from the wheels when you don't want the wheels to spin.
Open file (38.98 KB 741x599 Jupiter1.jpg)
>>17449 Thanks, maybe I'll learn to draw on the computer someday (I made this Jupiter with a drawing pad a while back, but pencil to paper just feels more natural). It also helps to get the idea across quickly. I used to be pretty good with Aldus FreeHand back in the day, but that was bought out by Adobe and I just hate the Illustrator interface.

Open file (40.50 KB 568x525 FoobsWIthTheDew??.jpg)
Emotions in Robowaifus. Robowaifu Technician 07/26/2022 (Tue) 02:05:49 No.17027 [Reply]
Hello, part-time lurker here. (Please excuse me if a thread on this topic exists already.) I have an idea on how we could plan to implement emotions easily into our Robowaifus. This idea stems from Chobits, where Persocoms change behavior based on battery level. So please consider this. Emotions would be separated into two groups: internal and external stimuli. Internal stimuli emotions are things like lethargy, hunger, weakness, etc. - things that at their base are derived from low battery and damaged components. External stimuli emotions are things like happiness, sadness, etc., provoked by outside events, mostly relating to how the humans (and master) around her act. A mob mentality way of processing emotions. All of this would be devoid of any requirement for AI, which would quicken development until we make/get a general AI. So until that time comes I think this artificial implementation of emotions would work fine. Though when AIs enter the picture, this emotion concept is simple enough that a compatibility layer could be added so that the AI can connect and change these emotions into something more intelligent. Perhaps a more human emotional response system [irrational first thought into a more thought-out rational/personality-centered response] or a direct change of the base emotional response by the AI as it distinguishes itself from the stock personality into something new. :]
> (>>18 - related-thread, personality)


Edited last time by Chobitsu on 07/27/2022 (Wed) 00:27:23.
23 posts and 6 images omitted.
Open file (43.38 KB 649x576 coloring.png)
When latent impressions stored from our lifetime of experiences become active they cause an emotional reaction, an actual chemical reaction in the body that activates certain parts of the brain, which then leads to a conscious thought process, which further develops into actions. If you observe your emotional reactions you will notice that most, if not all of them, are either about getting what you want or not getting what you want. If you trace them back to their source they all arise from self-preservation, either from the primal needs such as food, sex and sleep or attachment to an identity (which includes family, friends, community, country, species, environment and even ideas).

Latent impressions color our thought process and bias it in many ways. Think of the word 'car' and observe your thoughts. What comes to mind first? What color is it? What shape is it? Did an actual car arise in your mind or another vehicle like a truck? Is it big or small? Do you like cars or dislike them? Do they remind you of something else or something from the past or future? If you ask friends what comes to mind first about a word, you'll find everyone colors words differently. Some very little, some a lot. Most of these colorings come from our desires being fulfilled or unfulfilled, which become stored as latent impressions and bias our attention.

Language models are already fully capable of coloring 'thoughts'. The difference is their latent impressions come from an amalgamation of data collected from the internet. There's no cyclical process involved between the resulting actions affecting the latent impressions and those new ones creating fresh actions since current models do not have a plastic memory. So the first step towards creating emotions is creating a working memory. Once we have that we could have a much more productive conversation about emotions and engineering ideal ones.

One idea I've had to build a working memory into off-the-shelf models is to do something akin to prefix tuning or multi-modal few-shot learning by prefixing embeddings to the context which are continuously updated to remember as much as possible, and like our own latent impressions, the context would activate different parts of the memory bank that would in turn influence the prefix embeddings and resulting generation. This would be the first step towards a working memory. From there it would need to develop into inserting embeddings into the context and coloring the token embeddings themselves within some constraints to ensure stability.
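Not the working-memory system described above, just a minimal PyTorch sketch of the "prefix embeddings prepended to the context" part; the sizes and names are made up:

import torch
import torch.nn as nn

class PrefixMemory(nn.Module):
    """A small bank of learnable embeddings prepended to every context window."""
    def __init__(self, n_prefix: int, d_model: int):
        super().__init__()
        # learnable "latent impressions"; only these would be updated at runtime
        self.prefix = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, d_model)
        batch = token_embeddings.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeddings], dim=1)

# usage: feed the concatenated sequence to a frozen language model and
# backpropagate only into memory.prefix to "remember" recent interactions
memory = PrefixMemory(n_prefix=16, d_model=768)
tokens = torch.randn(1, 32, 768)   # stand-in for real token embeddings
augmented = memory(tokens)         # shape (1, 48, 768)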
I believe OP had the right idea and that almost immediately the thread went into overthinking mode. Start simple, like reacting to low battery status. I would also like to emphasize: start transparent. One can say that emotional states are related to different modes of problem solving and so on and so forth, but this all gets very indirect. At the start, I'd rather only have emotions that are directly and immediately communicated, so you have immediate feedback about how well this works. So, ideas about simulating an emotion like nostalgia (is that even an emotion?) I would put aside for the time being.
The state of the eyelids is something practical to start with. Multiple aspects could be measured and summed together to create the overall effect:
-battery status
-time of the day
-darkness for some time
-movement (& how much & how fast & which direction)
-eyelid status of other faces
-low noise level for some time
-sudden noise increase
-human voice
-voice being emotional or not (I mean what you register even without knowing a language, this can't be very complex)
-hearing words with extreme or dull emotional connotation
-registering vibrations
-body position (standing, sitting, sitting laid back, lying flat)
-extreme temperature and rapid temperature changes
There is no necessity to perfectly measure an aspect (the measure just has to be better than deciding by coin flip), nor do you need to have something for all or even most aspects; summing together whatever of these silly tiny things you implement badly will make the overall effect more realistic and sophisticated than the parts.
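A toy sketch of that "sum many weak signals" idea for the eyelid state; the signal names and weights here are invented, and each signal is assumed to be pre-normalized to the 0-1 range:

def eyelid_openness(signals: dict) -> float:
    """Each signal is in [0, 1]; a higher weighted total means more awake."""
    weights = {
        "battery_level": 0.3,
        "ambient_light": 0.2,
        "recent_motion": 0.2,
        "noise_level":   0.15,
        "is_daytime":    0.15,
    }
    # missing signals default to a neutral 0.5 so a bad/absent sensor degrades gracefully
    score = sum(w * signals.get(name, 0.5) for name, w in weights.items())
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# quiet, dark evening with a nearly full battery -> fairly drowsy value
print(eyelid_openness({"battery_level": 0.9, "ambient_light": 0.1,
                       "recent_motion": 0.0, "noise_level": 0.2,
                       "is_daytime": 0.0}))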
>>17457 Excellent post Anon, thanks.
>>17457 The uncanny valley video here >>10260 describes the differences in approaches well. There are two problems to solve: 1. How do you make something emotional? 2. How do you make emotions realistic? In any case, I wrote this up: https://colab.research.google.com/drive/1B6AedPTyACKvnlynKUNyA75XPkEvVAAp?usp=sharing I have two extremes on that page. In the first cell, emotions are described with text, and they can be arbitrarily precise. In the second cell, emotions are described by a few measures that can be added. There are different advantages to each. If there are a fixed number of emotions, text-based emotions would be low complexity, easy to specify, and easy to test. If there's a continuum of simple emotions, measure-based emotions would be low complexity, harder to specify, and easy to test. If there are complex emotions, text-based emotions would be high complexity, easy to specify, and hard to test. It might not matter which approach is taken to start with since it seems possible to hybridize the two approaches. "On a scale of [...] how well does this statement match your feelings on this [...] event?" As a result, it should be possible to start with one approach, then later get the benefits of the other approach.
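A toy sketch of the hybridization mentioned at the end of the post above: measure-based states get mapped to the nearest text label, so either representation can feed the rest of the system. The axes and prototype values are invented for illustration:

# map a continuous emotion state onto the closest labeled prototype
EMOTION_PROTOTYPES = {
    "joy":     {"valence":  0.8, "arousal":  0.6},
    "sadness": {"valence": -0.7, "arousal": -0.4},
    "anger":   {"valence": -0.6, "arousal":  0.8},
    "neutral": {"valence":  0.0, "arousal":  0.0},
}

def nearest_label(state: dict) -> str:
    def dist(proto: dict) -> float:
        return sum((state[k] - proto[k]) ** 2 for k in proto)
    return min(EMOTION_PROTOTYPES, key=lambda name: dist(EMOTION_PROTOTYPES[name]))

print(nearest_label({"valence": -0.5, "arousal": 0.7}))  # -> "anger"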
>>17463 replikaAI does something similar with CakeChat (which has been linked in here via Luka's GitHub) >Training data >The model was trained on a preprocessed Twitter corpus with ~50 million dialogs (11Gb of text data). To clean up the corpus, we removed URLs, retweets and citations; mentions and hashtags that are not preceded by regular words or punctuation marks; messages that contain more than 30 tokens. >We used our emotions classifier to label each utterance with one of the following 5 emotions: "neutral", "joy", "anger", "sadness", "fear", and used these labels during training. To mark-up your own corpus with emotions you can use, for example, DeepMoji tool. >Unfortunately, due to Twitter's privacy policy, we are not allowed to provide our dataset. You can train a dialog model on any text conversational dataset available to you, a great overview of existing conversational datasets can be found here: https://breakend.github.io/DialogDatasets/ >The training data should be a txt file, where each line is a valid json object, representing a list of dialog utterances. Refer to our dummy train dataset to see the necessary file structure. Replace this dummy corpus with your data before training.

/robowaifu/ + /monster/, its benefits, and the uncanny valley Robowaifu Technician 05/03/2021 (Mon) 14:02:40 No.10259 [Reply]
Discussing the potential benefits of creating monster girls via robotics instead of 1 to 1 replicas of humans, and what parts can be substituted to get them into production as soon as possible.

Firstly is the fact that many of the animal parts that could be substituted for human ones are much simpler to work with than the human appendages, which have a ton of bones and complex joints in the hands and feet. My primary example of this is bird/harpy species (image 1), which have relatively simple structures and much less complexity in the hands and feet. For example, the wings of the bird species typically only have around three or four joints total, compared to the twenty-seven in the human hand, while the legs typically only have two or three, compared to the thirty-three in the human foot. As you can guess, having to work with a tenth of the bones and joints and opposable thumbs and all that shit makes things incredibly easier to work with. And while I used bird species as an example, the same argument could be put forward for MG species with paws and other more simplistic appendages, such as Bogey (image 2) and insect hybrids (image 3).

Secondly is intentionally making it appear to not be human in order to circumvent the uncanny valley. It's incredibly difficult to make completely convincing human movement, and one of the simplest ways around that is just to suspend the need for it entirely. We as humans are incredibly sensitive to the uncanny valley of our own species, even something as benign as a prosthetic limb can trigger it, but if we were to create something that we don't expect to move in such a way, it's theoretically entirely possible to just not have to deal with it (for the extremities part of it, anyways), leaving more time to focus on other aspects, such as the face. On the topic of the face, so too could slight things be substituted there (again for instance, insect girls), in order to draw attention away from the uncanny valley until technology is advanced enough that said uncanny valley can be eliminated entirely.

These possibilities, while certainly not to the taste of every anon, could be used as a way to accelerate production to the point that it picks up investors and begins to breed competition and innovation among people with wayyyyyyy more money and manpower than us, which I believe should be the endgoal for this board as a whole. Any ideas or input is sincerely appreciated.
22 posts and 9 images omitted.
>>13698 As you think >>13699 I will get mad on what I want.
>>16492 Yep, good thinking Anon. And actually, we've had similar concepts going here for quite some time.

waifusearch> plush OR plushie OR daki OR dakimakura

       THREAD SUBJECT                     POST LINK
AI Design principles and philoso  ->  https://alogs.space/robowaifu/res/27.html#27       dakimakura
What can we buy today?            ->  https://alogs.space/robowaifu/res/101.html#101         "
Who wouldn't hug a kiwi.          ->  https://alogs.space/robowaifu/res/104.html#6127        "
        "                         ->  https://alogs.space/robowaifu/res/104.html#6132        "
        "                         ->  https://alogs.space/robowaifu/res/104.html#6176    plushie
        "                         ->  https://alogs.space/robowaifu/res/104.html#14761   daki
Waifus in society                 ->  https://alogs.space/robowaifu/res/106.html#2267    dakimakura
Robot Voices                      ->  https://alogs.space/robowaifu/res/156.html#9092    plushie
        "                         ->  https://alogs.space/robowaifu/res/156.html#9093        "
Waifu Robotics Project Dump       ->  https://alogs.space/robowaifu/res/366.html#3501    daki
Robowaifu Propaganda and Recruit  ->  https://alogs.space/robowaifu/res/2705.html#2738       "
/robowaifu/ Embassy Thread        ->  https://alogs.space/robowaifu/res/2823.html#10983  plushie


Some of the most mobile robots around today are snakes. It got me thinking that a naga robot would be easier than a biped. The tail could hold a large number of pneumatic artificial muscles, which are cheap and relatively lightweight and powerful, making balancing and moving easier. It might be nice to have a bot that wraps you in its sexy scaly tail at night and massages you to sleep with it.
>>17434 /monster/, pls :^) You are definitely correct about the ease of design vs. biped. Snek robots are already wildly successful for industrial applications involving pipes, crevasses and other space-constrained applications.
>>17434
>pneumatic artificial muscles that are cheap and relatively lightweight and powerful
The pneumatic muscles I've seen online are very expensive. Where have you found any cheap ones to purchase?
https://www.robotshop.com/en/210mm-stroke-45lb-air-muscle.html
This one is 99 dollars, but that will add up quickly because you'll need 5-15 in a tail.
