/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.

Knowing more than 100% of what we knew the moment before! Go beyond! Plus! Ultra!


Humanoid Robot Projects Videos Robowaifu Technician 09/18/2019 (Wed) 04:02:08 No.374 [Reply] [Last]
I'd like to have a place to accumulate video links to the various humanoid – particularly gynoid – robotics projects that are out there. Whether they're commercial-scale or small-scale projects, if they involve humanoid robots, post them here. Bonus points if it's the work of a lone genius. I'll start: Ricky Ma of Hong Kong created a stir by building a gynoid that resembles Scarlett Johansson. It's an ongoing effort he calls an art project. I think it's pretty impressive, even if she can't walk yet. https://www.invidio.us/watch?v=ZoSfq-jHSWw
===
Instructions on how to use yt-dlp to save videos ITT to your computer: (>>16357)
Edited last time by Chobitsu on 05/21/2022 (Sat) 14:20:15.
83 posts and 21 images omitted.
Meanwhile in China (EX Robot): https://youtu.be/tEDXba5GaH0
Artbyrobot on YouTube, a very ambitious (maybe a bit delusional) guy with the plan to build a powerful humanoid robot, not sure if male or female: https://youtu.be/6UOFTFJ0elQ Playlist: https://youtube.com/playlist?list=PLhd7_i6zzT5-MbwGz2gMv6RJy5FIW_lfn
>>16799 >>16819 >>16868 >>16901 Great stuff Anons, thanks.
Clone (formerly Automaton) Silent water pump: https://youtu.be/6yWyerfQMR4 Fingers holding 30 kg: https://youtu.be/zA8QS2tp7e4
>>17076 Cool, thanks!

3D printer resources Robowaifu Technician 09/11/2019 (Wed) 01:08:12 No.94 [Reply] [Last]
Cheap and easy 3D printing is vital for a cottage industry making custom robowaifus. Please post good resources on 3D printing.

www.3dprinter.net/
https://archive.is/YdvXj
64 posts and 14 images omitted.
Leaving this here: injection molding meets 3D printing in this 300-piece 3D-printed injection molding machine. https://www.3ders.org/articles/20160331-injection-molding-meets-3d-printing-in-this-300-piece-3d-printed-injection-molding-machine.html
Many print quality issues and long term maintenance requirements explained in one video: https://youtu.be/-cm1vIER_bk
Open file (246.22 KB 473x444 zipties.png)
This is about putting hardware into paused prints and printing it into the parts. https://www.youtube.com/watch?v=rk6MkW1eRiY
>>17074 Make sure your captive part is nice and degreased so the next layer can stick to it. One idea I haven't tried is using a glue stick on the top surface of the part. It also helps for the part to be as flat as possible.
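Since this pause-and-embed trick is really a slicing trick, here is a hedged C++ sketch of how you might script it: a post-processor that inserts a firmware pause right before a chosen layer, so the nut/magnet/ziptie anchor can be dropped in and the print resumed. It assumes Cura-style ";LAYER:n" comments, and the pause command is an assumption too - M601 is Prusa-style; Marlin setups might want M0 or M25 instead.

[code]
// insert_pause.cpp -- add a pause before a given layer of a G-code file so
// a captive part can be inserted mid-print. Assumes Cura-style ";LAYER:n"
// comments; swap M601 for your firmware's pause command.
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    if (argc != 4) {
        std::cerr << "usage: insert_pause <in.gcode> <out.gcode> <layer>\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    std::ofstream out(argv[2]);
    const std::string target = ";LAYER:" + std::string(argv[3]);

    std::string line;
    while (std::getline(in, line)) {
        if (line == target)  // the chosen layer is about to start
            out << "M601 ; pause so the captive part can be dropped in\n";
        out << line << '\n';
    }
    return 0;
}
[/code]

Usage would be something like `insert_pause body.gcode body_paused.gcode 42`, then print the output file as usual.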

Bipedal Robot Locomotion General Robowaifu Technician 09/15/2019 (Sun) 05:57:42 No.237 [Reply] [Last]
We need to talk about bipedal locomotion. It's a complicated topic, but one that has to be solved if we are ever to have satisfyingly believable robowaifus. There has surely already been a lot of research done on this topic, and we need to start digging and find the info that's out there. There are some projects that have at least partial roboleg solutions working, but none that I know of that look very realistic yet. We likely won't come up with some master-stroke of genius and solve everyone's problems here on /robowaifu/, but we should at least take a whack at it; who knows? We certainly can't accomplish anything if we don't try.

I personally believe we should be keeping the weight out of the extremities – including the legs – while other anons think that we should add weight to the feet for balance. What are your ideas, Anon? How do we control the gait? How do we adjust for different conditions? What if our robowaifu is carrying things? What about the legs during sex? Should we focus on the maths behind the MIP (Mobile Inverted Pendulum), or is there a different approach that would be more straightforward? A mixture? Maybe we can even do weird stuff like the reverse-knee legs that so many animals have. Robofaun waifu, anyone? What about having something like Heelys, or bigger wheels, in the feet as well?

I'm pretty sure that if we just put our heads together and don't stop trying, we'll eventually arrive at at least one good general solution to the problem of creating bipedal robot legs.

>tl;dr
ITT post good robowaifu legs

>tech diagrams sauce
www.youtube.com/watch?v=pgaEE27nsQw
www.goatstream.com/research/papers/SA2013/SA2013.pdf
98 posts and 40 images omitted.
>>13447 How would you implement them?
>>13451 The same way the human body does: a feedback loop making continuous micro-adjustments. Autopilots already do this with stabilizers, but that's easy for something with a plane perpendicular to gravity; planes parallel to gravity are in a league of their own, so don't bother until synthetic muscle fibers become a thing.
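To make that "feedback loop making continuous micro-adjustments" concrete, here is a toy C++ sketch: a PD controller holding a linearized inverted pendulum (the standard stand-in for a balancing torso) upright at 500 Hz. Every constant here - the gains, the length, the loop rate - is an illustrative assumption, not a value tuned for real hardware.

[code]
// balance_sketch.cpp -- toy version of the micro-adjustment idea above:
// a PD loop balancing a linearized inverted pendulum. All constants are
// invented for illustration.
#include <cstdio>

int main() {
    const double g = 9.81, l = 0.9;     // gravity (m/s^2), pendulum length (m)
    const double dt = 0.002;            // 500 Hz control loop
    const double kp = 60.0, kd = 12.0;  // hand-picked PD gains (assumed)

    double theta = 0.05;                // initial lean angle (rad)
    double omega = 0.0;                 // angular velocity (rad/s)

    for (int step = 0; step < 2000; ++step) {
        // The micro-adjustment: oppose both the lean and its rate of change.
        double u = -kp * theta - kd * omega;

        // Linearized pendulum dynamics: theta'' = (g/l)*theta + u
        double alpha = (g / l) * theta + u;
        omega += alpha * dt;
        theta += omega * dt;

        if (step % 250 == 0)
            std::printf("t=%.2fs  theta=%+.4f rad\n", step * dt, theta);
    }
    return 0;
}
[/code]

Run it and the lean angle decays toward zero; lower kd and it oscillates - the same tuning tradeoff a real gait stabilizer faces, just in many more dimensions.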
> (>>16593 - information & videos -related)
>Energy optimization during walking involves implicit processing [1]
>Gait adaptations, in response to novel environments, devices or changes to the body, can be driven by the continuous optimization of energy expenditure. However, whether energy optimization involves implicit processing (occurring automatically and with minimal cognitive attention), explicit processing (occurring consciously with an attention-demanding strategy) or both in combination remains unclear. Here, we used a dual-task paradigm to probe the contributions of implicit and explicit processes in energy optimization during walking. To create our primary energy optimization task, we used lower-limb exoskeletons to shift people's energetically optimal step frequency to frequencies lower than normally preferred. Our secondary task, designed to draw explicit attention from the optimization task, was an auditory tone discrimination task. We found that adding this secondary task did not prevent energy optimization during walking; participants in our dual-task experiment adapted their step frequency toward the optima by an amount and at a rate similar to participants in our previous single-task experiment. We also found that performance on the tone discrimination task did not worsen when participants were adapting toward energy optima; accuracy scores and reaction times remained unchanged when the exoskeleton altered the energy optimal gaits. Survey responses suggest that dual-task participants were largely unaware of the changes they made to their gait during adaptation, whereas single-task participants were more aware of their gait changes yet did not leverage this explicit awareness to improve gait adaptation. Collectively, our results suggest that energy optimization involves implicit processing, allowing attentional resources to be directed toward other cognitive and motor objectives during walking.
>Humans Are Designed to Think While Walking [2]
>One of my favorite things to do when I am on vacation is hike in the mountains and take in as much scenery and contact with wildlife as possible. The former requires that I stay in good enough physical conditioning that I can achieve 15+ miles of mountain hiking per day. Therefore, when I am not on vacation, I go for a two-to-four-mile run every day before breakfast. That morning routine keeps me in physical shape and prepares me to undertake the research and writing projects for that day.
Our problems are much, much simpler than God's were when He was designing us human beings with all our facilities, including these two. However, I'd say it's a good model for us to follow. After all, robowaifus should be able to talk with us about different things; like being young newlyweds while she's cooking a meal for us upstairs at the pub, right Anon? :^)
Maybe Carver Mead's (et al) Neuromorphics can help us all out with this a bit. To wit: push the computation out to the edges of a [robowaifu's] system[s]. That way, while the 'autonomous' things are happening, her central-core computation mesh can be freed up to talk with us about important things.
1. https://pubmed.ncbi.nlm.nih.gov/34521117/
2. https://reasons.org/explore/blogs/todays-new-reason-to-believe/humans-are-designed-to-think-while-walking
>===
-minor fmt edit
-add 'important things' cmnt
Edited last time by Chobitsu on 07/25/2022 (Mon) 22:07:22.
I think this general approach should be applicable; it has been validated on a real-life Swiss quadruped robot [1] and on simulated [2] & real bipedals [3]. Compared to less-validated approaches it's a clear winner. You don't have to implement it in the main AI; it's better if it runs as a small low-latency auxiliary NN. It doesn't require too much compute or data, and the gait can be tuned by adding energy-expenditure & smoothness terms to the loss. You can also include mocap data and tune the model on it for a humanlike gait.
1. https://leggedrobotics.github.io/rl-blindloco/ https://www.youtube.com/watch?v=8sO7VS3q8d0
2. https://www.youtube.com/watch?v=hx_bgoTF7bs
3. https://techxplore.com/news/2021-04-robot.html
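To illustrate what those "energy expenditure & smoothness terms" might look like, here is a hedged C++ sketch of a per-step reward such an auxiliary gait NN could be trained on. The field names and weights are invented for illustration; the cited papers use richer formulations, but the shape is the same: reward progress, penalize mechanical work and jerky action changes.

[code]
// gait_reward.cpp -- sketch of reward shaping for a gait policy.
// Names and weights are illustrative assumptions, not from the papers.
#include <cmath>
#include <cstddef>
#include <vector>

struct StepData {
    double forward_velocity;             // m/s toward the goal
    std::vector<double> joint_torque;    // Nm, one entry per joint
    std::vector<double> joint_velocity;  // rad/s, one entry per joint
    std::vector<double> action;          // current policy output
    std::vector<double> prev_action;     // previous policy output
};

double gait_reward(const StepData& s,
                   double w_energy = 0.01, double w_smooth = 0.1) {
    // Energy term: mechanical power, summed as |torque * angular velocity|.
    double energy = 0.0;
    for (std::size_t i = 0; i < s.joint_torque.size(); ++i)
        energy += std::fabs(s.joint_torque[i] * s.joint_velocity[i]);

    // Smoothness term: squared change in action from one step to the next.
    double smooth = 0.0;
    for (std::size_t i = 0; i < s.action.size(); ++i) {
        double d = s.action[i] - s.prev_action[i];
        smooth += d * d;
    }
    return s.forward_velocity - w_energy * energy - w_smooth * smooth;
}
[/code]

A mocap-imitation term (distance between the policy's pose and a reference human pose) would just be one more subtracted penalty in the same function.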

Privacy, Safety, & Security General Robowaifu Technician 04/20/2021 (Tue) 20:05:08 No.10000 [Reply] [Last]
This thread is for discussion, warnings, tips & tricks all related to robowaifu privacy, safety & security. Or even just general computing safety, particularly if it's related to home networks/servers/other systems that our robowaifus will be interacting with in the home.
>===
-update OP
-broaden thread subject
Edited last time by Chobitsu on 07/21/2022 (Thu) 21:37:59.
47 posts and 13 images omitted.
>>16657 Nice trick (& inexpensive too). Thanks Anon.
Related: why not use mobile devices instead of SBCs or laptops? >>16963
More security research on old Intel CPUs (up to the Atom 5000 series; the current one is 6000):
>At the beginning of 2020, we discovered the Red Unlock technique that allows extracting Intel Atom Microcode. We were able to research the internal structure of the microcode and then x86 instruction implementation. Also, we recovered a format of microcode updates, algorithm and the encryption key used to protect the microcode
https://github.com/chip-red-pill/MicrocodeDecryptor
Maybe this will also make sure that there won't be any intentional secret backdoors? I don't know if they might be able to upgrade it at some point. Currently not:
>Only decryption is supported, because microcode has an RSA signature for integrity protection.
The guide to the (embedded) Linux kernel we deserve as developers: https://github.com/0xAX/linux-insides/blob/master/SUMMARY.md
>>17001 Great information Anon, thanks for the link!

Hand Development Robowaifu Technician 07/28/2020 (Tue) 04:43:19 No.4577 [Reply] [Last]
Since we have no thread for hands, I'm now opening one. Aside from the AI, they might be the most difficult thing to achieve. For now, we could at least collect and discuss some ideas about them.
There's Will Cogley's channel: https://www.youtube.com/c/WillCogley - he's on his way to building a motor-driven biomimetic hand. It's meant for humans eventually, so there's not much space for sensors right now, which can't be wired to humans anyway. He knows a lot about hands and we might be able to learn from his work, and build something (even much smaller) for our waifus.
Redesign: https://youtu.be/-zqZ-izx-7w
More: https://youtu.be/3pmj-ESVuoU
Finger prototype: https://youtu.be/MxbX9iKGd6w
CMC joint: https://youtu.be/DqGq5mnd_n4
I think the thread about sensoric skin >>242 is closely related to this topic, because it will be difficult to build a hand which also has good sensory input. We'll have to come up with some very small GelSight-like sensors.
F3 hand (pneumatic): https://youtu.be/JPTnVLJH4SY https://youtu.be/j_8Pvzj-HdQ
Festo hand (pneumatic): https://youtu.be/5e0F14IRxVc
Thread >>417 is about prosthetics, especially open prosthetics. This can be relevant to some degree; however, the constraints are different. We might have more space in the forearms, but we want marvelous sensors in the hands and have to connect them to the body.


76 posts and 19 images omitted.
>>8651 Cheap but reasonably powerful (like the MG996R, which can produce up to 15 kg·cm starting torque at 6 V) analog servos weigh around 55 g each. There are 15 servos on that arm (3 for each finger, it seems), so the servos alone would weigh 825 g. And that's only for finger actuation! There are supposed to be more moving parts in that part of the arm, but as you can see in the vid there's barely any space left, and the construction itself doesn't look like it was designed for anything more than this one demo. There are smaller and lighter servos. And they cost a lot. I think they can weigh something like 15 g. That would make 15x15 = 225 g. Better, but super expensive. Imagine paying $750-1500 just for servos that move only the fingers on only one hand. It seems like they are using something like that in their vid, because the servos are tiny.
>>11482 The mass of servos is exactly why I've been lobbying for relocating them into the 'central core' of the robowaifu, and transmitting force out to the extremities via Bowden cables instead.
>tl;dr
Reducing thrown-weight in extremities is crucial to both agility/performance and (much more importantly) reducing energy consumption in our robowaifus.
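For quick reference, the arithmetic from the post above in one runnable place - a trivial C++ sketch comparing the two per-hand servo options. The per-unit prices are back-figured from the quoted $750-1500 range, not actual quotes.

[code]
// servo_mass.cpp -- per-hand mass and cost for the two servo options
// discussed above. Figures are rough numbers from the post, not specs.
#include <cstdio>

int main() {
    const int servos = 15;           // 3 per finger, 5 fingers
    const double cheap_g = 55.0;     // MG996R-class analog servo (g)
    const double micro_g = 15.0;     // small premium servo (g)
    const double micro_lo = 50.0;    // assumed $ each, from "$750-1500"
    const double micro_hi = 100.0;

    std::printf("cheap: %.0f g per hand\n", servos * cheap_g);
    std::printf("micro: %.0f g per hand, $%.0f-$%.0f\n",
                servos * micro_g, servos * micro_lo, servos * micro_hi);
    return 0;
}
[/code]

Which prints 825 g vs. 225 g - the gap the anon above wants to move out of the hand entirely via Bowden cables.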
Open file (307.67 KB 1024x1024 gallery-5-1.jpg)
Open file (299.27 KB 1024x1024 gallery-4.jpg)
Open file (288.84 KB 1024x1024 gallery-2.jpg)
>>11007 >>10075 therobotstudio has new videos and a website about his Nano Hand online.
Intro: https://youtu.be/uOeS_jklU2Y
Website: www.robotnanohand.com
Playlist with assembly: https://youtube.com/playlist?list=PLy7gxZH9jzfSGinQz8W42F5HdiTkT0Xm8
>During RSS 2021, my colleagues and I published a surprising new finding. It turns out, the softness of hands lets us manipulate some objects completely blind: without any visual/tactile/force feedback. We studied the nature of these open-loop skills, and identified three key design principles for robust in-hand manipulation. ...
>Yet here, with this soft hand, a simple step adjustment to two actuators' inflations triggers this cool twirl. We do not compute anything. Instead, we just hitch a ride on the physics. This approach is super low-tech, and does something that advanced robot hands have a hard time doing. We published this in 2021. But this work could as well have been done 30 years ago. Air pumps and inflatable rubber are old things.
https://aditya.bhatts.org/sensorless-in-hand-manipulation
https://youtu.be/2vwdP4WjGoQ
Paper, same as uploaded: http://www.roboticsproceedings.org/rss17/p089.pdf
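As a sketch of what "we do not compute anything" means in code, here is a hedged C++ toy: an open-loop schedule of inflation setpoints for two soft actuators, replayed blindly with no sensing. Every pressure value is an invented placeholder, and set_pressure() is a hypothetical stand-in for whatever pump/valve driver you actually have.

[code]
// openloop_twirl.cpp -- open-loop soft-hand sketch: replay fixed inflation
// setpoints and let the physics do the manipulation. All values invented.
#include <chrono>
#include <cstdio>
#include <thread>

void set_pressure(int actuator, double kpa) {
    // Placeholder: real code would command a valve/pump driver here.
    std::printf("actuator %d -> %.0f kPa\n", actuator, kpa);
}

int main() {
    // (pressure A, pressure B) pairs, stepped through with no feedback.
    const double schedule[][2] = {
        {40, 40},   // grasp the object
        {60, 40},   // bias one side: object begins to roll
        {60, 20},   // relax the other side: the 'twirl' completes
        {40, 40},   // settle back into the grasp
    };
    for (const auto& step : schedule) {
        set_pressure(0, step[0]);
        set_pressure(1, step[1]);
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return 0;
}
[/code]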
Open file (137.64 KB 350x350 carlos.png)
>>13977 Neat, that guy is quite a talent. >>16898 This is interesting. I'm a firm believer in 'holistic-systems-by-design', and clearly our hands in particular exhibit this kind of thing. It will be extraordinarily gratifying to me personally once /robowaifu/'s anons can produce appealing & effective hands for our robowaifus that don't cost an arm & a leg.

Robo Face Development Robowaifu Technician 09/09/2019 (Mon) 02:08:16 No.9 [Reply] [Last]
This thread is dedicated to the study, design, and engineering of a cute face for robots.
169 posts and 101 images omitted.
>>16008 Watching this rn. Good point. Personally, I think robots should be robots. I don't want wrinkles, "eye creases", or synthetic pores and skin imperfections on my r-waifu. Synthetic humanoids should embrace their... syntheticness. That is my philosophy. (I guess I can make one allowance for 2B's chin mole, lol)
Nvidia AI can create or change faces based on sketches: https://youtu.be/MO2K0JXAedM - It doesn't always work out great, but it might be useful for going from a real face to an anime-like one at some point, or at least for making some changes to a pic, like increasing the size of the eyes. Though I'm currently more inclined to use Artbreeder or something similar; I don't really need any pretty actress as a starting point, after what I've gotten there without much effort.
>>16679 >nosering disgusting.
>>16880 Maybe, but also not relevant. She looks very human-like but even better.
A projection-mapped bit of fabric and a small blue laser projector, which you can get for like $50, is enough for a fully believable and awesome lifelike face. If we can do it in VRChat and not care, you can project onto an opaque fabric doll head.
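If anyone wants to prototype that, here is a minimal C++/OpenCV sketch of the pre-warp step: map a rendered face texture into the projector's frame with a homography so it lands on the head where you want it. The corner coordinates and filename are placeholders you'd calibrate by eye; note that a single homography only corrects for a flat surface, so a curved doll head would need a finer mesh warp on top of this.

[code]
// face_prewarp.cpp -- pre-warp a face texture for a projector.
// Corner points and filename are illustrative placeholders.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat face = cv::imread("face_frame.png");  // rendered face (assumed)
    if (face.empty()) return 1;

    // Texture corners, and where they must land in the projector frame
    // so they line up on the fabric face (calibrated by hand).
    std::vector<cv::Point2f> src = {{0, 0}, {1023, 0}, {1023, 1023}, {0, 1023}};
    std::vector<cv::Point2f> dst = {{340, 80}, {940, 90}, {900, 640}, {360, 630}};

    cv::Mat H = cv::getPerspectiveTransform(src, dst);
    cv::Mat out;
    cv::warpPerspective(face, out, H, cv::Size(1280, 720));

    cv::imshow("projector", out);  // fullscreen this window on the projector
    cv::waitKey(0);
    return 0;
}
[/code]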

DCC Design Tools & Training Robowaifu Technician 09/18/2019 (Wed) 11:42:32 No.415 [Reply] [Last]
Creating robowaifus requires lots and lots of design and artistry. It's not all just nerd stuff you know Anon! :^)
ITT: Add any techniques, tips, tricks, or content links for any Digital Content Creation (DCC) tools and training to use when making robowaifus.
>note: This isn't the 'Post Your Robowaifu-related OC Arts & Designs General' thread. We'll make one of those later perhaps.
>---
I spent some time playing with the program Makehuman and I'll say I wasn't impressed. It's not possible to make anime real using Makehuman; in fact the options (for example eye size) are somewhat limited. But there's good news, don't worry! The creator of Makehuman went on to create a Blender plugin called ManuelBastioniLAB which is much better (and has a much more humorous name). The plugin is easy to install and use (especially if you have a little Blender experience). There are three different anime female defaults that are all pretty cute. (Pictured is a slightly modified Anime Female 2.) There are sliders to modify everything from finger length to eye position to boob size. It even has a posable skeleton. Unfortunately the anime models don't have inverse-kinematic skeletons, which are much easier to pose. Going forward I'm probably going to use MasturBationLAB, er, ManuelBastioniLAB as the starting point for my designs.
>===
-re-purpose OP for DCC consolidation
Edited last time by Chobitsu on 08/10/2021 (Tue) 23:39:41.
133 posts and 65 images omitted.
Focuses on a realistic male body in OpenSCAD, but still interesting:
>Carl Davidson edited this page on Dec 4, 2015
>In this tutorial we will show you how to model the human body. This is an ongoing project so make sure to check back for updates.
https://github.com/davidson16807/relativity.scad/wiki/Human-Body
>We're striving for realism, here - this is not meant to be a mannequin or cartoon character, at least not in its finished form. We will attempt to maintain correct body proportions using a method that's common to classical figure drawing. Bone, muscle, and cartilage will all be present. If you have any experience in figure drawing, your input is welcome. Feel free to edit this document or submit a pull request with your changes to the body.scad script.
> Why choose Bforartists, and not Blender?
> Its own keymap, which is reduced to just the necessary hotkeys, and navigation that can be done purely by mouse.
> Cleaned-up user interface: lots of unnecessary double, triple or even more identical menu entries removed.
> Extended user interface: many tools that were formerly hotkey-only have a menu entry now.
> Rearranged user interface: some things are more accessible now, some are not so much in the way anymore.
> Improved defaults.
> Colored icons, about twice as many as Blender.
> A configurable toolbar with icon buttons.
> A tool shelf with the old tools too, in tabs.
> Improved layouts.
> Left-aligned checkboxes and text where possible.
> Better tooltips.
> A better-readable standard theme.
> Some neat add-ons to improve usability, like the Reset 3D View add-on or the Set Dimensions add-on, with which you can scale in world coordinates in edit mode.
> And lots more small details, like fewer confirm dialogs.


>>16856 Neat, Anon. TDs have to focus on their artist colleagues just as much as (more, actually, than) on their pipeline devs. Will be interesting to look into, thanks!
>>16856 ~/tor_ytdl_retry.sh https://www.youtube.com/watch?v=iXu2t3e9NkA Nice, concise official video highlighting several differences between the two products. ~24mins. >via https://www.bforartists.de/the-differences-to-blender/
Open file (11.34 KB 336x188 hqdefault.jpg)
>>16856
>Quickstart
~/tor_ytdl_retry.sh https://www.youtube.com/playlist?list=PLB0iqEbIPQTZEkNWmGcIFGubrLYSDi5Og
>Quickstart_play.m3u
Quickstart Intro [Gm7aCzI4xws].webm
Quickstart Navigation [TDzMx7huGzk].mkv
Quickstart Scale and Extrude [-oaTVNOIf-c].webm
Quickstart Modeling [f9ubBGV4js0].webm
Quickstart UV Mapping Smart UV Project [oRebOmPy2iU].webm
Quickstart UV Mapping Unwrapping [qTCTPf0gFiY].webm
Quickstart UV Mapping Cleaning up [f004Kvke10c].webm
Quickstart Export UV Layout [sRYZLgqPJjM].webm
Quickstart Adding Cycles Material [h3LFl59qnLk].webm
Quickstart Camera [c34RBNXvIZI].webm
Quickstart Lights [X83gVkyT4B8].webm
Quickstart Rendersettings Cycles [W8EZNTQYCOY].webm



Aoki Lapis model; Robot fairy Robowaifu Technician 09/16/2019 (Mon) 02:51:51 No.266 [Reply]
Height: 15 cm
Type: Vocaloid

I'm making it in as natural-looking a way as I can; this means a bone structure and a similar layout of the electronics and components as in a living humanoid.

This is a prototype version that I'm working on, I might change things later and make modifications or adjustments to the design or components
39 posts and 38 images omitted.
Today I imported 7 different Lapis models into blender. I could also use MMD to get them to display more easily as fully textured references and change the poses and such. I also worked on refining the design for the skull and considered various ideas. Later on I may upgrade my 3d printer and print with flexible TPU filament to maybe make skin and fatty parts. First I'm going to print a lot more things out of PLA. I picked up a bone/ivory color filament today and tested printing the upper arm bones out of that material. Progress may be a bit slow, but I'm working on multiple different things.
>>16640 Glad to see you're still making progress! Keep it up and best of luck!
>>16640 >I picked up a bone/ivory color filament today and tested printing the upper arm bones out of that material. Neat! May Lapis fly soon!
I managed to make models in Blender and print upper leg bones, so I'm showing those off here now. This is the simple stuff; work on the skull continues... (it is tough to make properly)
>>16862 Good job, Hik. Thanks for the progress report. You might try looking at some facial modelling videos in Blender for Lapis' skull? Cheers.

/robowaifu/ Embassy Thread Chobitsu Board owner 05/08/2020 (Fri) 22:48:24 No.2823 [Reply] [Last]
This is the /robowaifu/ embassy thread. It's a place where Anons from all other communities can congregate and talk with us about their groups & interests, and also network with each other and with us. ITT we're all united together under the common banner of building robowaifus as we so desire. Welcome. Since this is the ambassadorial thread of /robowaifu/, we're curious what other communities who know of us are up to. So w/o doxxing yourselves, if this is your first time posting ITT, please tell us about your home communities if you wouldn't mind, Anons. What do you like about them, etc.? What brought you to /robowaifu/ today? The point here is to create a connection and find common ground for outreach with each other's communities. Also, if you have any questions or concerns for us, please feel free to share them here as well.
Edited last time by Chobitsu on 05/23/2020 (Sat) 23:13:16.
166 posts and 52 images omitted.
>>11235
>It's not about the money, but skills and effort
And that's about $100k per year per engineer - assuming they are driven by that goal - because it'd be a low wage for a competent ML/DL/AI engineer.
This seems like the best thread to ask, so: what is your opinion on techpriest-esque waifus as opposed to a full robot? As well as becoming one yourself? It seems way easier to me, and you filter out roasties by the fact that only a woman of character would go through something like that (or innocent vat-grown humans, whatever). If you became a techpriest yourself, would the likely lack of sexual ability dissuade you, or would wholesome lovey-dovey romance and faith shit be enough?
>>16643 Also personally I think AI is evil unless you viably and carefully transition humanity into it.
>>16643 I'm pretty big on transhumanism myself, but I don't see a perfect woman coming out of the tech priest program. >>16644 I am partial to the idea of using xenocyborgs with biocomputers for brains personally. It should be easier to get something capable of processing multiple parallel processes at once to think like us than traditional AI.

Robowaifu-OS & Robowaifu-Brain(cluster) Robowaifu Technician 09/13/2019 (Fri) 11:29:59 No.201 [Reply] [Last]
I realize it's a bit grandiose (though probably no more than the whole idea of creating an irl robowaifu in the first place), but I want to begin thinking about how to create a working robowaifu 'brain', and how to create a special operating system to run on her so she will have the best chance of remaining an open, safe & secure platform.

OS Language Choice
C is by far the single largest source of security holes in software history, so it's out more or less automatically by default. I'm sure that causes many C developers to sneer at the very thought of a non-C-based operating system, but the unavoidable cost of fixing the large numbers of bugs and security holes that are inevitable for a large C project is simply more than can be borne by a small team. There is much else to do besides writing code here, and C hooks can be generated wherever deemed necessary as well.

C++ is the best candidate for me personally, since it's the language I know best (I also know C). It's basically as low-level as C, but with far better abstractions and much better type-checking. And just like C, you can inline assembler code wherever needed in C++. Although poorly-written C++ can be as bad as C code in terms of safety, due to the necessity of its being compatible with C, it also has many facilities to not go there for the sane coder who adheres to simple, tried-and-true guidelines. There is also a good C++ project already ongoing that could be used for a clustered unikernel OS approach, for speed and safety. This approach could save drastic amounts of time for many reasons, not the least of which is tightly constrained debugging. Every 'process' is literally its own single-threaded kernel, and mountains of old-style cruft (and thinking) typical of OS development simply vanish. (A toy sketch of this decomposition follows below.)

FORTRAN is a very well-established language for the sciences, but a) there aren't a lot of FORTRAN coders, and b) it's probably not the greatest at being a general-purpose language anyway. I'm sure it could be made to run robotics hardware, but would probably be a challenge to turn into an OS.

There are plenty of du jour SJW & coffee languages out there, but quite apart from the rampant faggotry & SJWtardism plainly evident in most of their communities, none of them have the kind of industrial experience and pure backbone that C, C++, or FORTRAN have.

D and Ada are special cases and possibly bear due consideration in some years' time, but for now C++ is the obvious choice to me for a Robowaifu OS foundation, with Python probably being the best scripting language for it.

(1 of 2)
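As a down-to-earth illustration of the 'every process is its own single-threaded kernel' decomposition argued for above, here is a toy C++ sketch: each subsystem is a single thread owning all of its own state, and subsystems communicate only through message queues. To be clear, this is only a userspace analogy for the unikernel idea, not an actual unikernel.

[code]
// task_mesh.cpp -- toy sketch of single-threaded tasks + message passing.
// Each task owns its state outright; the only shared object is the queue.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

template <typename T>
class Channel {  // minimal thread-safe queue
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(T msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    T recv() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};

int main() {
    Channel<std::string> to_speech;

    // 'speech' task: single-threaded, owns all speech state, only receives.
    std::thread speech([&] {
        for (;;) {
            std::string msg = to_speech.recv();
            if (msg == "quit") break;
            std::cout << "[speech] saying: " << msg << '\n';
        }
    });

    // A 'gait' task elsewhere would run its own loop and only ever send.
    to_speech.send("balance nominal, stepping forward");
    to_speech.send("quit");
    speech.join();
    return 0;
}
[/code]

Debugging stays tightly constrained, exactly as the OP argues: any bug in 'speech' lives inside one thread's private state, and the queue is the only interface you have to reason about.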
65 posts and 22 images omitted.
Open file (180.57 KB 596x685 Screenshot_4.png)
>>16522 Yup. BTW, I forgot another one (Jun 3, 2022) - pretty new tech: 36 microseconds vs 9,000 years. One such unit would destroy the entire Nvidia tensor-GPU line :/ https://www.itmedia.co.jp/news/articles/2206/03/news172.html It's a pity (((amazon))) have already stuck their paws in there!
>>16615 It should be noted that quantum-supremacy-type calculations aren't of any use except being provably hard for classical systems to simulate. My bet is we will train general intelligence on classical hardware years before any quantum hardware is up to the task.
>>16620 This doesn't seem correct, considering Gaussian Boson Sampling can be done one trillion times faster than on the fastest supercomputers today. A ratio of a minute to 100 million is simply astonishing. China took the lead easily by using a 76-photon prototype. We are just beginning to learn about the advantages of quantum computing; in the next 5-10 years we will discover a lot more computational advantages.
>we will train general intelligence on classical hardware
Due to the scaling laws of neural nets there will never be such a thing as AGI - maybe human-level AI (HLAI). Any computing system can only represent efficiently (through a short program) a tiny subset of all possible outputs; most outputs require a program as long as themselves. Algorithmic approximability can be achieved only to a degree. And most Turing-reducible problems are exactly those which can be limit-computed. So to go beyond, you have to use algorithmic approximability. And this implies that general intelligence is therefore not possible for a subset of all possible outputs.
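For reference, the standard counting argument behind "most outputs require a program as long as themselves" is a sketch in the usual Kolmogorov-complexity notation:

\[ \#\{\, x \in \{0,1\}^n : K(x) < n - c \,\} \;\le\; \#\{\text{programs shorter than } n - c \text{ bits}\} \;\le\; 2^{n-c} - 1 \]

So at least a fraction \(1 - 2^{-c}\) of all n-bit strings cannot be compressed by even c bits, which is the sense in which short-program representation only ever covers a tiny subset of outputs.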
>>16628 Truthfully, things that generate headlines, like the Gaussian Boson Sampling you speak of, are no more than toy problems that do not translate into generalized approaches. It doesn't matter whether a triangular prism can do an optical FFT 100 million or 100 billion times as fast (latency? bandwidth?) as some supercomputer; it fundamentally cannot be generalized in any comparable way. I think people hype photonics too darn much. I believe within the next 10-20 years we will see nothing but more improvements to classical microarchitecture. Eventually we will find better ways to take advantage of the laws of nature to compute for us (like that light prism), but it's certainly not going to be the hypebait you see today.
