/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Happy New Year!

The recovered files have been restored.


“Fall seven times, stand up eight.” -t. Japanese Proverb


Papercraft waifu Robowaifu Technician 09/16/2019 (Mon) 06:21:35 No.271 [Reply]
Thoughts on making a paper waifu then adding robotics? I want animu grills, but most robots have uncanny 3DPD faces that aren't nearly as cute as a real waifu. With paper/screens, at least the face can keep the purity and beauty of 2D.
16 posts and 9 images omitted.
>>33827 >papercraft wifu shell Just in case you forgot or someone else wants a good source: I found the mother of all paper mache recipe sites, by a Grandmother. It's a great resource for making things from paper. She has all sorts of recipes. Some have various additives so you can get better dimensional stability. For prototyping I don't think paper can be beat. Fast, cheap, and once you have the shell or a paper positive you can create molds from more solid materials while keeping prototype cost to a minimum. It's at this link: >>33318
>>34113 Thanks kindly, Grommet! My plan ATM is simply to unfold my 3D models from Blender into 2D flats, print & cut them out, then assemble them all together /po/ -style. After a coating or two, I can see using some type of papier-mâché coating to fashion a mold perhaps (as you seem to suggest)?
>>34131 That seems like a super fast way to get results. And yes, I do mean using P.M. for molds. I was thinking about doing this for fiberglass and boats. I got the idea from the "concrete fabric formworks" guys. Type that into a search and look at some of the images. It's wild what they are doing. Low cost, high quality (most of the water drains out, leaving far stronger concrete due to compaction), and they can do structures where the loads are designed into the form to be exactly where they are needed. Due to the curvature of the form it automatically places the reinforcement where needed.
>>34147 >And yes I do mean using P.M. for molds. Yeah, thanks I thought so. >and they can do structures that the loads are designed in the form to be exactly where they are needed. We certainly need to take advantage of similar approaches, for similar needs within our robowaifus. For example, mounting these shell pieces on internal endoskellington struts, etc., could use some beefing up on the attach points. Thanks for all the good ideas, Grommet! Please keep them coming, Anon. Cheers. :^)
>>34131 I have a great link for those interested in using flat structures to make 3D structures. The first link I messed up a little: I started reading farther down the thread where people were talking about cutting flat stuff and materials, and I commented before I realized the thread was about machine tools. Sigh... oh well, the post really belongs in the structures thread, which I linked to later. Anyway, here's a link to the post on making these flat "Isogrids", and then a link to some further ideas in the proper structures thread. Isogrids >>34491 Ideas about using them in structures >>34493

Robot skeletons and armatures Robowaifu Technician 09/13/2019 (Fri) 11:26:51 No.200 [Reply] [Last]
What are the best designs and materials for creating a skeleton/framework for a mobile, life-sized gynoid robot?
238 posts and 123 images omitted.
> (skeleton kinematic-chain & motion-planning convo -related : >>32974, ... ) >=== -sp edit
Edited last time by Chobitsu on 08/24/2024 (Sat) 19:07:33.
I made a post on super strong structures called "Isogrids" but it's sort of in the wrong place. A link so that I'm not double posting. It has great utility not only in strength but a special form I linked is great for prototyping and production with less work or machinery needed. Good cost savings without sacrificing strength. >>34491 If you see the link imagine you made these quarter isogrids for bones and had hollow areas in them. Most of the strength would remain and you could channel wires, tubing, whatever through the holes.
>>34493 An idea for quick prototyping is to use cheap canvas cloth tarp. Lay this out on regular polyethylene sheets, like you use for covering the floor while painting. Lay the canvas out and squeegee in Titebond glue or regular Elmer's white glue. This makes a strong structure. Then you cut out the parts/slats and cut slots in them. Connect them and you have strong structures for super cheap. You could also cut out your structures first, stiffen with glue, then use a cut-off grinder to cut the slots.
I mentioned leaving holes in the slats, but you could also build the bones like hollow tanks. See the links in the other thread above. There are pictures of tanks made this way. They use them for missiles, rockets, etc. Saves a LOT of weight while gaining lots of strength.
> (topics-related: >>34509, >>34550 )

Open file (2.92 MB 1470x1665 1280938409182.png)
nandroid project II Emmy-Pilled 09/11/2023 (Mon) 01:03:11 No.25306 [Reply] [Last]
building my own personal nandroid doll; continuation of previous thread: https://alogs.space/robowaifu/res/19226.html#
223 posts and 86 images omitted.
>>34028 that fourth tiny "eyelash" is supposed to be a marking on the eye socket to show where the eyelid rotates. it should be around the center of the circumference of the eye socket. it's not an eyelash or that high up >>32232
>>34253 debatable
>>34253 I would disagree, as it has always been present on the TGE model since the beginning. And clearly based off Emmy's total of four lashes per eye as seen in nearly every panel of her drawn by Dom
>>34267 in that second Dom-made image you posted, her left supposed "Rotation Seam" if it were one would have been smaller due to perspective changes, but instead is the same size as her right one. To me this indicates that they're eyelashes that stick slightly out and away.
>>34317 it can be tough trying to get things completely accurate from Dom at times; his emphasis on size can vary quite a bit from page to page

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? we need to change this, or at least collect relevant philosophers. discussion about the philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc!) also highly encouraged! I'll start ^^! so the philosophers i know that take this stuff seriously: Peter Wolfendale - the first Neo-Rationalist on the list. his main contribution here is computational Kantianism. just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. an interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and also has posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics Reza Negarestani - this is another Neo-Rationalist. he has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. this guy draws from Kant as well, but he also builds on Hegel's ideas. his central thesis is that Hegel's Geist is basically a distributed intelligence. he also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom. Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular rule-following, is very insightful!
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol. Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff! Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses here with a colleague is that our current AI is Markovian, whereas fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link to his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge of analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius however is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith


234 posts and 110 images omitted.
>>34340 >>34341 This sounds remarkable, Anon. Your >"only a few steps away from sending a message to any other neuron in the network due to the looping structure" certainly reminds me of the 'grid of cells' programming approach commonplace for GPU programming (eg, NVIDIA's CUDA framework). >hypertorus >nodes on a hypertoroidal graph Sounds intriguing. I would like to understand all this at some point. >"I could see the activity in the network like a little spark moving through it, then lightning bolts..." Very creative language. Sounds like a real lightshow! Cheers, Anon. :^)
>>34337 Okay, by brainwaves, I mean a wave of *focus*. Your hardware can only do so much multiplication and addition. Your hardware can only do so much matrix multiplication. When a "brainwave" travels over the torus, it distributes focus onto the neurons/spirits under it, bumping them up in the processing queue. When the processing occurs, outputs are dumped into buckets through weightings, and await processing. Neurons should give some of their focus to the neuron they fire into, and I imagine that neurons can be tuned to be more or less 'stingy' about how/where they share their focus. Neurons burn some of their focus when they fire. That way nothing can dominate the processing queue except the brainwave. Mind you, I want 'focus' and 'attention' to be separate resources. Focus is just for the processing queue; it's what the brain wants to do *right now*. Attention is just for keeping pointless neurons off of the network. An efficient pattern of behaviour would be strung out so that the brainwave travels over it in the direction that it fires, so that it's just straightforward operation. The brainwave's direction, I believe, should double as a bias in the direction of firing, turning outputs into inputs, and in general encouraging processing to occur in the direction the brainwave is traveling. If the brainwave wants to go sideways, this means that the brain is experiencing 'lateral thinking' which crosses subjects, finding and processing interrelations between them. What is the brainwave? >It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory, and it commands some percentage of the brain's available focus. How does it originate?
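The focus-queue mechanics described above could be sketched roughly like this (everything here -- the grid size, the burn/share rates, the eastward firing bias -- is a hypothetical illustration of the idea, not a spec):

```python
import heapq

GRID = 16  # hypothetical 16x16 toroidal grid of neurons

focus = {(x, y): 0.0 for x in range(GRID) for y in range(GRID)}

def wave_deposit(center, radius, amount):
    """The 'brainwave': deposit focus on every neuron under a circular
    region of the torus (distances wrap around the edges)."""
    cx, cy = center
    for x in range(GRID):
        for y in range(GRID):
            dx = min(abs(x - cx), GRID - abs(x - cx))  # toroidal distance
            dy = min(abs(y - cy), GRID - abs(y - cy))
            if dx * dx + dy * dy <= radius * radius:
                focus[(x, y)] += amount

def step(burn=0.5, share=0.3):
    """Fire the single most-focused neuron: it burns some of its focus and
    passes a share downstream, so nothing dominates the queue for long."""
    queue = [(-f, n) for n, f in focus.items() if f > 0]
    if not queue:
        return None
    heapq.heapify(queue)                  # max-focus neuron first
    f, (x, y) = queue[0]
    f = -f
    downstream = ((x + 1) % GRID, y)      # firing biased in the wave's direction
    focus[(x, y)] = f * (1 - burn - share)
    focus[downstream] += f * share
    return (x, y)

wave_deposit(center=(3, 3), radius=2, amount=1.0)
fired = [step() for _ in range(5)]        # activity propagates east with the wave
```

Because fired neurons hand focus to their downstream neighbor, successive steps march across the wave region in its direction of travel -- a crude version of the "straightforward operation" described.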


>>34383 >It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory >[it] has [a] position, a speed, a trajectory, and some size values >it should just wander around. Intredasting. Ever see PET Scan movies of the brain's activity? If not, I highly-recommend you do so. Contrary to the old wives' tale that we all only use 10% of our brains... practically 100% of all the brain's neurons are involved in processing every.single.stimulus. It's a fascinating thing to watch. Very, very amorphous wavefronts propagating through the tissues at all times in all directions; bouncing & reflecting around just like other wavefronts throughout nature. >=== -fmt, prose edit
Edited last time by Chobitsu on 11/16/2024 (Sat) 16:21:35.
>>34385 Yes I recall seeing a motion picture of how the activity of the brain moves in cycles. More specifically, I am drawing inferences from how I can feel the waves in my own brain working, and yes, I *can* feel them. Brainwaves for me have turned into an element of my mental posture. It feels like when I'm relaxed, it just cycles normally, but when I want to think something hard, the wave will jump around like how the read/write head of a hard disk might. I've become a bit of a computer myself in this way, because I can cut through very difficult philosophy using it this way. When I'm using my imagination, it will go backwards. When I focus on one sense, it stays in a sector. When I'm focusing my feelings, there's a stripe in the back which takes the focus. I believe that perhaps the white matter of the human brain won't broadcast from the entire brain *to* the entire brain all at once. I think there is a bit of a filter which can change shape, speed, and position, which will mix up the sharing of information according to the works of inner politics and habitual intelligence. In this way, a robo-wife brainwave wouldn't be the same as a human brainwave; but it would be close enough. Upgrading her hardware would just allow her to cycle around faster without skipping focus. Since I was a boy, I was always concerned with breaking down reality into the most efficient symbols I can manage using alchemy. Doing so gives me massive leverage to process reality. I hit some roadblocks in my mid-20's, but some mind-expanding psychedelics temporarily gave me enough room in my head to break free and gain superhuman understandings. The process of alchemy takes a lot of 'room' to work with, but ultimately creates more room when you're done; turning a mind into an efficient and well-oiled machine. 
For example, when I imagine a coin spinning, part of me pretends to be a coin; adopting the material properties of the coin (stiff, low-friction/metallic, resonant, rough edges, dense), and then gets to spinning helplessly in a mental physics sandbox. The symbols that I rely upon to do the spinning are gravity, angular momentum (conserved), energy (conserved), air resistance, and equilibrium. Gravity pulls down, but the flatter the coin goes, the less it's spinning; it can't spin less without dissipating angular momentum/energy into the air, or into the table. Therefore the system is at a sort of dynamic equilibrium until it dissipates everything and falls flat. I am not pulling a memory of a spinning coin, I am generating a new, unique experience. If we want to build a robowife, we must take inspiration from nature. *I* want a robowife who is capable of some part-time philosophy like me; a sorceress. Nature never made one for me, for reasons that fill me with bitterness and disgust. It occurs to me that a well-alchemized brain stripped/partitioned away from any objective memories of me may make a decent base for a robowife for Anon in general, and I may recruit Anon's help in my ambitions if I can make enough progress to show that I'm not just living in a fantasy.


>>34386 Pretty wild stuff Anon. I've gone down that route you describe to some degree, and can confirm it -- with a large dose of skepticism learned outside of that world. >tl;dr Staying clean & natural is certainly the better option. Just using some combination of caffeine/chocolate (or simply raw cacao)/vitamin b12 should be ample stimulation when that boost is needed, IMO. :^) >The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music. I'm going to write up a sensory parcel that contains the following attention fountains: 2 input neurons that fluctuate up to 20 kHz to play the raw sound data, and 120-480 input neurons (per track) which will fluctuate with the Fourier transform of the raw inputs (this is pitch sensation). I will give this stripe access to neurons which can wait for a set time, according to a base wait period they're set with, multiplied by their input. I expect the consciousness to erect some kind of fractal clock construct which the alchemist will reward for its ability to correctly expect polyrhythm. Sounds awesome, I like it Anon. I too love dance and rhythm, and just music in general. I envision that our robowaifus will not only eventually embody entire ensemble repertoires at the drop of a hat, but will also be a walking lightshow to top it all off. >tl;dr <"Not only can she sing & dance, but she can act as well!" :D <---> >pic I love that one. If we can ever get boomers onboard with robowaifus, then we'll be home free. Cheers, Anon. :^)
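The pitch-sensation front-end quoted above (raw samples plus a Fourier transform fanned out across ~120 band neurons) might look something like this numpy sketch; the sample rate, frame size, and log-spaced band edges are illustrative guesses, not part of the original plan:

```python
import numpy as np

SR = 44100       # sample rate in Hz (assumed)
FRAME = 2048     # samples per analysis frame (assumed)
N_BANDS = 120    # pitch neurons per track (the post suggests 120-480)

# log-spaced band edges from A0 (27.5 Hz) up to 20 kHz
EDGES = np.geomspace(27.5, 20000.0, N_BANDS + 1)

def pitch_activations(frame):
    """One frame of raw audio -> N_BANDS activations: windowed magnitude
    FFT, pooled into logarithmically spaced frequency bands."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SR)
    bands = np.zeros(N_BANDS)
    for i in range(N_BANDS):
        mask = (freqs >= EDGES[i]) & (freqs < EDGES[i + 1])
        if mask.any():
            bands[i] = spectrum[mask].mean()
    return bands

# a pure 440 Hz tone should most strongly excite the band containing 440 Hz
t = np.arange(FRAME) / SR
act = pitch_activations(np.sin(2 * np.pi * 440.0 * t))
peak_band = int(np.argmax(act))
```

Log spacing matters here because pitch perception is logarithmic: each band then spans roughly the same musical interval rather than the same number of Hz.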

Waifus in society Robowaifu Technician 09/11/2019 (Wed) 02:02:53 No.106 [Reply] [Last]
Would you walk around with your waifu? Would you hold her in public? Would you shamelessly take her with you to conventions? Would you take her on dates? This thread is for discussing how you'd interact with your waifu outside of the home.
135 posts and 56 images omitted.
>>34304 >First Post Best Post That is not the first post. >>34310 That's why I'm a little conflicted. I like the idea of my waifu being able to defend herself, but her being too strong could also cause potential problems.
>>34310 While it is a possibility, the clownhaired rabid feminist types are more rare than the internet would lead you to believe (part of their strength is the illusion of it). If your robowaifu is helpful beyond companionship (eg can help you carry stuff) that would make it more socially acceptable for normies... but even normies won't mind if it is a robot with some neat, fun features. When I've seen public robots, folks tend to leave them alone when they're accompanied by humans and respect the bot. Just try to avoid mentioning whether or not it is fully functional and anatomically correct ;) eg normie: "does your robowaifu have genitals?" robosexual: "do you?"
>>34310 It's not the women I'm concerned about, but the feral simp hordes they'll get to do their dirty bidding.
>>34330 Well in that case, just defend your property. There's no stigma against hitting another man.
>>34322 >That is not the first post. Actually, it was at the time that Anon made that post. Between that time and when you read it, I merged that whole thread into this one (which then became the last few posts ITT). This is a mundane example of so-called temporal sliding. >tl;dr You're both right! :^)

Open file (349.32 KB 480x640 0c.png)
Robowaifu Media Propaganda and Merchandizing Anon 01/29/2023 (Sun) 22:15:50 No.19295 [Reply] [Last]
That Time I Incarnated My Christian Cat Girl Maid Wife in a Short Dress Together we can create the robowaifu-verse! Let's start making media to further our cause. There's tremendous potential for us to reach a wide market and prime minds to accept their future cat grill meidos in tiny miniskirts. We have text based stories floating around, but audio and visuals are paramount to gain a general audience. I will start with a short about a young man building a small cat girl and learning to understand her love. Importantly, she will be very limited, in the same way our first models will be. To set certain expectations. Mahoro is impossible, but the slow and simple robowaifus that can exist have their own charm. Let's show the world the love that will soon exist. --- >thread-related Robowaifu Propaganda and Recruitment (>>2705) >=== -add related crosslink
Edited last time by Chobitsu on 02/02/2023 (Thu) 22:18:06.
74 posts and 44 images omitted.
>>33390 I think these lyrics aren't appropriate just yet, too negative, too stressful, too scary. We should stick to positive and calming.
>>33534 Honestly, thx for saying so. Soon after posting, I realised that last part probably wasn't very /robowaifu/ and the singer's attitude changes halfway through, fu-. I must have been more emotionally influenced than I thought - I'll keep that in mind. Just hope the first half wasn't too bad.
>>33534 This. The yuge complexity we're all facing is quite stressful enough all on its own! :^)
Open file (1.91 MB 918x1200 ChiiByIchiTATa.jpg)
How can an AI be trusted? How could we prove that our AI is trustworthy? How would we demonstrate that people should put their trust in the works of our members? What methods would we use to convey this to a wide audience? I still plan to post videos of myself with a prototype MaidCom once she is presentable. We need more; we need widespread belief that our way is safe, trustworthy, and worthy of public use. We also need to prove that Claude, Gemini, and other competing AI are less worthy. We need to open a path where FOSS AI is the mainstream. We need to do so fast, and strike before they can regulate us into obscurity. https://www.youtube.com/watch?v=KUkHhVYv3jU
>>34287 Wow! That video was really cool, Kiwi. Thanks for sharing it! :^) <---> As to your question. Again, wow! That's a really, really tough proposition, Anon. BTW You sure this wouldn't be better in the Safety/Security, or Cognitive, or Philosophy, or Ethics/Morals thread(s)? I trust no man (or woman obvs, lol). Not even myself. God alone do I really trust with everything (or so I claim lol; how can one really know until 'everything' is put to the actual test? :^) But that's a matter of my own faith in God's deeds and words (which are innumerable!) So I think the >tl;dr here rightly is: We can't. Probably not the answer you wanted, but it's the one you need -- and it saves me literal years of fruitless searching (just like the fabled King, lol!) <---> OTOH, we have no excuses not to make our works as trustworthy and reliable as we can. OTOOH, this type of goal costs: time, money, effort, intelligence. These things tend to be in short supply in many Anons lives. But all can be either gained or supplied -- except time.



SPUD (Specially Programmed UwU Droid) Mechnomancer 11/10/2023 (Fri) 23:18:11 No.26306 [Reply] [Last]
Henlo anons, Stumbled here via youtube rabbit hole & thought I'd share my little side project. Started out as just an elaborate way to do some mech R&D (making a system to generate animation files on windows blender and export/transfer them to a raspberry pi system) and found tinkering with the various python libraries a kinda neat way to pass the time when weather doesn't permit my outside mech work. Plus I'd end up with a booth babe that I don't have to pay or worry about running off with a convention attendee. Currently running voice commands via google speech and chatgpt integration but I'm looking into offline/local stuff like openchat. WEF and such are so desperate to make a totalitarian cyberpunk dystopia I might as well make the fun bits to go along with it. And yes. Chicks do dig giant robots.
497 posts and 277 images omitted.
>>34226 Hmmm James might be onto something here. I'll be back in a while lol. https://www.youtube.com/watch?v=AEXz8xyTC54
>>34226 Those speaker boobs are comically disturbing due to how they recess. Do they have some sort of cover that goes over them? It looks like you made some sort of attachment ring for mesh or foam, I presume. That would help prevent speaker damage.
>>34254 This is a good example of why I keep pushing the use of LARPfoam for our robowaifu's 'undershells'. LARPagans have a big set of communities around this stuff today, and it's a good idea for us here to benefit from all this information. BTW, another Anon also posted this video here, but I can't locate that post ATM. >I'll be back in a while lol. Lol, don't stay away too long, Mechnomancer. ABTW, this is a daily reminder you'll need a new bread when you get back. Alway rember to link the previous throd in your OP. Cheers, Anon. :^) >>34259 Great find Kiwi, thanks very kindly. I wonder what Bruton has in store for his 'next design that walks better'? Cheers. :^)
New bread: >>34445

Open file (32.62 KB 341x512 unnamed.jpg)
Cyborg general + Biological synthetic brains for robowaifus? Robowaifu Technician 04/06/2020 (Mon) 20:16:19 No.2184 [Reply] [Last]
Scientists made a neural network from rat neurons that could fly a fighter jet in a simulator and control a small robot. I think that lab grown biological components would be a great way to go for some robowaifu systems. It could also make it feel more real. https://www.google.com/amp/s/singularityhub.com/2010/10/06/videos-of-robot-controlled-by-rat-brain-amazing-technology-still-moving-forward/amp/ >=== -add/rm notice
Edited last time by Chobitsu on 08/23/2023 (Wed) 04:40:41.
195 posts and 33 images omitted.
https://physicsworld.com/a/genetically-engineered-bacteria-solve-computational-problems/ >Now a research team from the Saha Institute of Nuclear Physics in India has used genetically modified bacteria to create a cell-based biocomputer with problem-solving capabilities. The researchers created 14 engineered bacterial cells, each of which functioned as a modular and configurable system. They demonstrated that by mixing and matching appropriate modules, the resulting multicellular system could solve nine yes/no computational decision problems and one optimization problem.
>>34134 Very smart, Anon. >Ive been working in factories my whole life so probably industrial/controls engineering Makes sense. Welp, master PID [1], Anon. Then you should look into Behavioral Trees afterwards, to make everything accessible for use by us mere mortals. Cheers, Anon. :^) Keep.Moving.Forward. --- 1. https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller >>34170 Very intredasting. Thanks, Anon! Cheers. :^)
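For anyone starting on the PID suggestion above, here is a minimal textbook discrete PID loop driving a toy first-order plant; the gains and the plant model are arbitrary illustration values, not tuned for any real actuator:

```python
class PID:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# drive a toy first-order plant (dx/dt = u - x) toward a setpoint of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
state = 0.0
for _ in range(2000):                 # simulate 20 seconds
    u = pid.update(setpoint=1.0, measured=state)
    state += (u - state) * 0.01       # forward-Euler plant step
```

The integral term is what pulls the steady-state error to zero here; with the proportional term alone the plant would settle short of the setpoint.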
>>34219 Here's something you should keep an eye on. A human connectome would allow a computer to simulate an entire human brain, if it bears fruit. They've already done it with a worm and put the simulated brain in robots. A human version would also be theoretically possible. https://www.humanconnectome.org/
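The worm result above essentially treats the published connectome as a weighted directed graph and steps activation through it each tick; a toy illustration of that idea (the wiring and weights below are invented, not the real C. elegans data):

```python
# toy connectome step: activation flows along weighted synapses each tick
connectome = {            # made-up wiring, NOT real C. elegans circuitry
    "sensor":  [("inter", 0.9)],
    "inter":   [("motor_L", 0.6), ("motor_R", 0.4)],
    "motor_L": [],
    "motor_R": [],
}

def step(activity):
    """One tick: each neuron's activity is pushed to its targets, scaled
    by the synaptic weight; neurons with no inputs decay to zero."""
    nxt = {n: 0.0 for n in connectome}
    for n, outputs in connectome.items():
        for target, w in outputs:
            nxt[target] += activity[n] * w
    return nxt

a = {"sensor": 1.0, "inter": 0.0, "motor_L": 0.0, "motor_R": 0.0}
a = step(step(a))   # two ticks: sensor -> interneuron -> motor neurons
```

Real connectome simulators add membrane dynamics and gap junctions on top of this, but the graph-propagation core is the same shape.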
>>34222 >digits There are on the order of hundreds of trillions (~10^14-10^15) of synaptic interconnections within the human connectome. AFAICT, we have the most sophisticated -- by far -- neural systems on the planet. So it probably stands to reason that there's much that could be learned by using human neural tissue for such experiments. Thanks for the information, Anon! Cheers. :^)
> keratin based materials as a skin >>34724

The Sumomo Project Chobitsu Board owner 11/24/2021 (Wed) 17:27:18 No.14409 [Reply] [Last]
So I've been working for a while at devising an integrated approach to help manage some of the software complexity we are surely going to encounter when creating working robowaifus. I went down many different bunny trails and (often) fruitless paths of exploration. In the end I've finally hit on a relatively simplistic approach that AFAICT will actually allow us to both have the great flexibility we'll be needing, and without adding undue overhead and complexity. I call this the RW Foundations library, and I believe it's going to help us all out a lot with creating workable & efficient software that (very hopefully) will allow us to do many things for our robowaifus using only low-end, commodity hardware like the various single-board computers (SBCs) and microcontrollers. Devices like the Beaglebone Blue and Arduino Nano for example. Of course, we likely will also need more powerful machines for some tasks as well. But again, hopefully, the RW Foundations approach will integrate smoothly with that need as well and allow our robowaifus to smoothly interoperate with external computing and other resources. I suppose time will tell. So, to commemorate /robowaifu/'s 5th birthday this weekend, I've prepared a little demonstration project called Sumomo. The near-term goal for the project is simply to create a cute little animated avatar system that allows the characters Sumomo and Kotoko (from the Chobits anime series) to run around having fun and interacting with Anon. But this is also a serious effort, and the intent is to begin fleshing out the real-world robotics needs during the development of this project. Think of it kind of like a kickstarter for real-world robowaifus in the end, but one that's a very gradual effort toward that goal and a little fun along the way. 
I'll use this thread as a devblog and perhaps also a bit of a debate and training forum for the many issues we all encounter, and how a cute little fairybot/moebot pair can help us all solve a few of them. Anyway, happy birthday /robowaifu/ I love you guys! Here is my little birthday present to you. === >rw_sumomo-v211124.tar.xz.sha256sum 8fceec2958ee75d3c7a33742af134670d0a7349e5da4d83487eb34a2c9f1d4ac *rw_sumomo-v211124.tar.xz >backup drop


Edited last time by Chobitsu on 10/22/2022 (Sat) 06:24:09.
159 posts and 97 images omitted.
>>33843 >Blender does a lot of relevant things to support high performance, hard and soft realtime requirements, and heterogeneous development. Not sure what you mean about realtime in Blender's case, but otherwise fair enough. It's a remarkable system today! :^) >Blender's design docs I've seen these in the past, but since I stopped actively building Blender 2-3 years ago, I kind of let it slip my mind. So thanks for the reminder. I personally like Blender's documentation efforts, though I've heard some disagree. Not-uncommonly, this is one of those tasks that get pushed to the 'back burner', and is often left to volunteer work to accomplish. Given the breadth & scope of the platform, I'd say the Blender Foundation has done a yeoman's job at the doco work, overall. Very passable. <---> Also, reading that link reminded me of USD. NVIDIA is currently offering developers their version of free training on this topic, and I've been pondering if I can make the time to attend. A huge amount of the DCC industry has come together to cooperate on Pixar's little baby, and today it's a big, sprawling system. Why it's of interest to us here is that most of what a robowaifu will need to do to analyze and construct models of her 'world' is already accounted for inside this system. While there are plenty of other (often higher-speed) ways to accomplish the same (or nearly the same) tasks, the fact that USD has become such a juggernaut, with a highly-regimented approach to scene descriptions, and with such broad approval, improves the likelihood IMO that other Anons from the film & related industries may in fact be able to help us here once they discover robowaifus in the future -- if we're already using USD to describe her world and the things within it. I hope all that made sense, Anon. https://openusd.org/release/glossary.html# >===
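For a taste of what USD's human-readable form looks like, here is a guess at a minimal .usda scene (the prim names and values are invented for illustration; see the glossary linked above for the authoritative reference):

```
#usda 1.0

def Xform "Room"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        double3 xformOp:translate = (0, 0.5, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

The appeal for robowaifu world-modeling is exactly this: prims, attributes, and transform ops in a rigidly specified, composable text format that many DCC tools already read and write.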


Edited last time by Chobitsu on 10/02/2024 (Wed) 17:12:35.
>>33845 >Not sure what you mean about realtime in Blender's case This page looks relevant: https://developer.blender.org/docs/features/cycles/render_scheduling/ Blender does progressive rendering, which starts by rendering low-resolution frames. If there's extra time left over before a frame needs to be rendered, it generates more samples to produce a higher-resolution frame. The equivalent for video generation at a fixed framerate would be running a small number of denoising steps for the next frame, and running additional denoising steps if the next frame doesn't need to be rendered yet. For text generation at a fixed token rate, it would be equivalent to doing speculative decoding for the initial response, then using (maybe progressively) larger models if the next token doesn't need to be output yet. For a cognitive architecture with a fixed response rate, I think the equivalent would be generating an initial response, then continually refining the response based on self-evaluations & feedback from other modules until the response needs to be output. >USD Very nice. I hadn't heard of this. It looks like a goldmine of information. Your explanation does make sense, and it's a great example of the sort of design patterns that I expect would be useful, in this case for modeling the environment & context.
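The common pattern across all three analogies above -- render, decode, or think at low quality first, then refine until the output deadline -- can be written down in a few lines (the function names and the toy refinement are hypothetical):

```python
import math
import time

def anytime_respond(draft_fn, refine_fns, deadline_s):
    """Produce a cheap draft immediately, then apply progressively costlier
    refinement passes until the output deadline arrives."""
    deadline = time.monotonic() + deadline_s
    result = draft_fn()           # low-res frame / draft tokens / initial response
    for refine in refine_fns:
        if time.monotonic() >= deadline:
            break                 # out of time: ship the best result so far
        result = refine(result)
    return result

# toy usage: each pass halves the error of a crude estimate of pi
approx = anytime_respond(
    draft_fn=lambda: 3.0,
    refine_fns=[lambda x: (x + math.pi) / 2] * 10,
    deadline_s=0.05,
)
```

The key property is that the function always has *something* to return at the deadline, and extra compute only ever improves it -- the same contract progressive rendering and speculative decoding offer.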
>>33850
OK, good point, CyberPonk. Such UX optimizations can fairly be said to be in the domain of soft realtime. And certainly, integrating GPU processing code into the system to speed the rendering of Cycles & EEVEE has had major positive impacts. I personally think Ton's choice to create Blender's entire GUI in OpenGL all those years ago has had many far-reaching effects, not the least of which is the general responsiveness of the system overall (especially as it has rapidly grown in complexity over the last few years).
<--->
>It looks like a goldmine of information
Glad you like it! It's fairly easy to overlook that describing a scene is in fact a very complex, nuanced, and -- I'm going to say it -- human undertaking. And when you consider that task from the deeply-technical angle that USD (and we here) must accommodate, you wind up with quite a myriad of seemingly odd juxtapositions. Until D*sney got their claws into it, Pixar was a one-of-a-kind studio, well up to such a complicated engineering effort. I doubt they could do it as well today. If at all. DEI DIE to the rescue!111!!ONE! :D
Cheers, Anon. :^)
>===
-fmt, minor, funpost edit
Edited last time by Chobitsu on 10/03/2024 (Thu) 03:49:23.
>>33857
I looked up that USD. "USD stands for 'Universal Scene Description'". I hadn't heard of it. Wow, that's a super-comprehensive library and format. Hats off to Pixar for open-sourcing this.
>>34201
>Hats off to pixar for open sourcing this.
Well, it's a vested interest, but yeah; you're absolutely correct, Grommet. Sadly, I'm sure they couldn't even pull it off today; they've become quite afflicted with the incompetency crisis.
>protip: competency doesn't cause a crisis, only incompetency does. :^)

General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 NoidoDev ##eCt7e4 07/19/2023 (Wed) 23:21:28 No.24081 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding them (especially of robowaifus).
-previous threads:
> #1 (>>404)
> #2 (>>16732)
> #3 (>>21140)
>this thread
<insert: TOP KEK>
>"There is a tide in the affairs of men, Which taken at the flood, leads on to fortune. Omitted, all the voyage of their life is bound in shallows and in miseries. On such a full sea are we now afloat. And we must take the current when it serves, or lose our ventures."
>t. A White man, and no jew...
>>34164
DAILY REMINDER
We still need a throd #5 here. Would some kindly soul (maybe NoidoDev, Greentext anon, or Kiwi) please step up and make one for us all? TIA, Cheers. :^)
>>34230
Guess it's up to me again. This was much easier than the meta thread. Took me like fifteen minutes, and ten of those were spent browsing my image folders for the first two pics.
Changes are as follows:
+ New cover pic
+ Added poner pic
+ New articles
~ Minor alteration to formatting
>>34233
>>34234
>Guess it's up to me again.
Thanks, Greentext anon! Cheers. :^)
>>34234
NEW THREAD
NEW THREAD
NEW THREAD
>>34233
>>34233
>>34233
>>34233
>>34233
NEW THREAD
NEW THREAD
NEW THREAD
