/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



Open file (329.39 KB 850x1148 Karasuba.jpg)
Open file (423.49 KB 796x706 YT_AI_news_01.png)
Open file (376.75 KB 804x702 YT_AI_news_02.png)
General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 NoidoDev ##eCt7e4 07/19/2023 (Wed) 23:21:28 No.24081
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding it (especially of robowaifus). -previous threads: > #1 (>>404) > #2 (>>16732) > #3 (>>21140)
>>30571 I mean, not officially, but for all intents and purposes /clang/ is as dead as it gets. >>30577 I'll see if I can get someone to send an invite.
>>29736 Jerry Pournelle and Larry Niven had a sci-fi world, a shared universe, they wrote about (Known Space; lots of books and stories). To get around the AI problem they postulated that all AIs over a certain intelligence would go mad... are we seeing that????
>>29803 >White supremacists is an insult White supremacists is what the globalhomo calls White people who don't hate themselves and take pride in the very large accomplishments that White people have made. It's simply an attack to shame Whites into not looking after their own interest first before all the other races. Who, BTW, have no problem looking after themselves first.
>>30718 This. Simple as. While I'm perfectly happy to see other men lift their own races up towards the God-given light freely granted to us all through Jesus Christ's sacrifice... given Current Year Globohomo agendas, I'm currently much more concerned about the temporal welfare of, and protection of, my own race. 'IT'S OK TO BE WHITE :^)
>>30718 > supremacist (n.) >"one who believes in the inherent superiority of one race or sex or social group," by 1892, in white supremacist, originally with reference to political campaigns and candidates in the U.S. South (Louisiana), from supremacy + -ist. Compare supremist. Related: Supremacism. >1892 https://www.etymonline.com/word/supremacist
>>30722 Which means "white privilege" is an expression of white supremacy, a contemporary expression of "white man's burden" (the idea that whites must help non-whites develop civilization and prosper).
>>30976 >It's just usual content farming all big YouTubers do. I believe that's just called 'clickbait', isn't it Anon? :^) >I have never in the wild seen anyone care beyond just feeling sorry someone feels that lonely. Then I think it likely you haven't broached this topic clearly, with any women who consider themselves still to have SMV (today that's probably even up to 50yo+ grannies, lol). Or with a hard-core Leftist/Filthy-Commie. They -- all of them -- hate the very idea itself. Most of the ones I've engaged with in any way also threaten physical violence against robowaifus if/when they ever see one. We'll see how that all works out for them. :^) The Filthy Commies go one step further and threaten physical attack against robowaifu owners too since that's how Filthy Commies behave, after all (think: Pantyfags, F*men, etc.) -- under the bribery directives of their Globohomo puppetmasters, ofc. LOL. Heh, we all need to look up Stickman (is he out yet?) and give him a complementary Model A robowaifu! :^) Blacks just destroy things simply b/c they're blacks, by all appearances. They also will be involved with this violence to be directed against robowaifus/owners; but mindlessly, not for the agenda-driven motives of the first two groups mentioned here. Once they see the GH media glorifying physical attacks against robowaifus, I'm sure they'll be all-in with it for a while, too. (And -- like these other Leftists -- they too have their paid rabble-rousers [cf. the paper-hangin', pregnant-woman-abusin', multi-feloner Fentanyl Floyd's overdose-death's -- mostly-peaceful, mind you -- Burn Loot Murder 'honorarium' """protests""", et al]). All this type of clickbait (cf. >>30975, et al) is literally just GH predictive-programming attempting to prepare the masses for violence, come the day. TOP KEK! May the stones that they are preparing to roll down on us, all roll back upon their own heads instead, in Jesus' name!! 
[1] :DD Make no mistake: this is a broad cultural war already going on within our so-called """society""" of today. Robowaifus will amp that up to 12. I'm sure /cow/ and their ilk will be delighted, once the time is ripe. So get your popcorn ready, kids! :D >t. Noooticer. :^) --- 1. "If you set a trap for others, you will get caught in it yourself. If you roll a boulder down on others, it will crush you instead." https://biblehub.com/proverbs/26-27.htm (NLT) >=== -fmt, prose edit -add scriptural ref/hotlink; for any Anons who didn't get the prayer's reference
Edited last time by Chobitsu on 04/21/2024 (Sun) 15:03:44.
>>30986 This is completely and abundantly true. The programality (program-reality, is this even a word? if not, it should be) of it all is baked in. Like those fools that buy pit bulls and tell everyone it's how you raise them. Of course the nice doggies rip the skin off their children's heads.
>>30989 All pit bulls should be destroyed, outright. I love doggos in general (I've had several), but not those demonic little sh*tes. In fact, in some bizarre spiritual sense, they seem almost allegorical in their natures (for demons, ofc) to me.
>>30991 Based
>>26500 the problem is that OpenAI isn't our friend. They're neither open source nor working in small-scale projects.
>>31303 More like ClosedAI Also he’s a jew and chat got is very biased to the left
>>31307 *chat gpt
>>31303 Make a waifu using gpt-4chan lol. How cyberpunk is it that there are now "illegal" AIs like that model? lol
yo where can i find a good waifu AI? preferably something that's not paywalled. any links?
>>31303 But they're still the ones at the forefront of AI research. What they do trickles down to the rest of the industry. If we want AI to accelerate, then they're essential. Also, for context in my previous reply, the ones who ousted him are Ilya Sutskever and Helen Toner. They are major decels, the kind that didn't want to open source GPT-2 because they thought it'd be too dangerous. I just want more and more accelerationists in power in the AI sector. But yeah, I don't like OpenAI very much. iirc Sam is lobbying for regulatory capture and banning open source AI.
>>31354 Look around on Reddit and 4chan for which LLM is currently working best, e.g. on the r/LocalLLaMA subreddit. We need something more complex than that, but it hasn't been built yet: https://alogs.space/robowaifu/res/24783.html
>>31354 working on this dont hold your breath but I am working on it
>>30550 wholesome -ish or something like that stealing a few of these - for reasons
>>31440 The difference is that western media is made for the simpleton retard (average man) While jap media is made for autistic Brown retard virgins
>>31444 The difference is that robogirl western media shit is made for NPC braindead normie goy cattle and eastern robowaifu media is made for sophisticated gentlemen that don’t appreciate ugliness. Literally the majority of the western robogirl media is anti waifu propaganda creepy shit, in the trash it goes!
>>31444 >name Lol >>31449 This
>>31397 Neat. While it's clearly good ITT, I'm going to relo this link to our on-topic thread before long -- where I think it may help here better, Anon. Cheers. :^)
>>31440 Lol, I can actually see some good elements in some of these Western shows and movies, and they can still be entertaining. M3gan was a quite nice mother-like companion for the child, until her flaws came up. It's not that everything about these franchises is bad, but the twist towards the negative will make some simple minded people scared of AI and robots. >>31444 >DarkVersionOfNoidoDev Hmm, okay. Not really.
Sorry for putting this in the news thread but no image-posting-capable Anon has stepped up with /meta-10 yet, as hoped for. Is Baste China the hero we all need? https://www.youtube.com/watch?v=zssbA_jGB8s --- Heh, you know listening to both Elon's & the CCP's comments in this video makes me a little suspicious they both have assistants who have discovered (and likely now-monitor) /robowaifu/. :^)
>>31771 4K 2024 Tesla shareholder meeting : https://www.youtube.com/watch?v=remZ1KMR_Z4 (smol'r-sized backup :) https://www.youtube.com/watch?v=UQhnxPu67G4 (Elon begins ~43mins in; Optimus initial stuff about 10mins later, and the bulk at ~1h:27m) --- Tesla Optimus at US$10K eventually? https://www.youtube.com/watch?v=djvPSd1d6E4 --- www.nbcnewyork.com/news/business/money-report/elon-musk-claims-optimus-robots-could-make-tesla-a-25-trillion-company-more-than-half-the-value-of-the-sp-500-today/5505911/ >=== -add'l links, edit
Edited last time by Chobitsu on 06/25/2024 (Tue) 07:52:42.
>>31771 >Is Baste China the hero we all need? You should be able to answer that yourself: these bots are not very feminine and are optimized for work. But thanks for the news update.
Conducted highlight sequences [1] of NVIDIA's Jensen Huang's recent (~3+wks ago) presentation at Computex 2024 in Taipei, Taiwan. [2] >protip: The fallout from this presentation recently/briefly made NVIDIA the highest-valuation company in the world. So yeah, big deal... including for robowaifuists! :^) >question: How can we here take advantage of these ideas, w/o getting overly-entangled with the NVIDIA ecosystem itself? --- 1. https://www.youtube.com/watch?v=IurALhiB6Ko 2. https://www.youtube.com/watch?v=pKXDVsWZmUU >=== -add'l cmnt
Edited last time by Chobitsu on 06/27/2024 (Thu) 15:07:02.
>>31772 >optimus-robots-could-make-tesla-a-25-trillion-company I think this is possible. I'm surprised no one sees the real, stupendous cash flow monster: nursing home care. Why people are not raving about the possibilities, I have no clue. Here are some numbers to work with, if a little rough.

"...About 1,290,000 Americans currently reside in nursing homes, according to the 2020 U.S. Census. That number is expected to nearly double by 2050. Over 15,600 nursing home facilities are in operation, 69% of which are for-profit. The average monthly cost of nursing home care in 2021 was $8,910 per month..." https://www.aplaceformom.com/senior-living-data/articles/nursing-home-statistics

That's $11,493,900,000 a month, $137,926,800,000 a year, $106,920 per person a year.

"...By 2060, 155 million Europeans — 30% of the total population — will be aged 65 or older..." "...Persons 65 or older REQUIRING ASSISTANCE WITH ADLS 44.4M...double today" [Activities of Daily Living (ADLs)] So 22.2 million currently. https://globalcoalitiononaging.com/wp-content/uploads/2018/06/RHBC_Report_DIGITAL.pdf?ref=hir.harvard.edu

The link above suggests assisted home care from family to lower costs. I suspect strongly that a Tesla robot combined with access to a Tesla taxi service could dramatically cut costs AND make Tesla a huge whopping pile of money. Most of what the robot would have to do is help move people around, wash, go to the toilet. With internet linkage to larger AIs, I don't think it would be impossible for it to cook.

The cost to own a 2022 Model 3 Sedan Long Range 4dr Sedan AWD is $8,451 per year https://www.edmunds.com/tesla/model-3/2022/cost-to-own/ I don't think it would be a stretch to say you could make and keep a Tesla robot for twice that, $16,902. The real cost would be much, much lower because it requires so much less in material cost. $25,353 for a Tesla robot and a Tesla Model 3 (yes, it's more likely they will use distributed taxis, but just to get a number).
Double that ($50,706) or triple it ($76,059), and you still come way under the cost of nursing homes. The robot could be far more attentive and provide someone (or something) to talk to, and the taxi service could shuttle the elderly all over to make their quality of life much better. At double, a profit of $37,865,370,000 a year just for the US. Add an equal number in Europe and you get $75,730,740,000. And this is all profit. The numbers would actually be much higher as I'm using full retail price.

So the government saves a vast amount of money, people get individualized care and are allowed to stay in their homes. And the number I quoted for a Tesla robot is tremendously inflated. I think I read the present processor in a Model 3 is $35. Let's say you add ten of these for more power: $350. Maybe $600 for all wiring, other semiconductors and power supply. 2,500 Wh per day for batteries; at $132/kWh we have $330. Maybe $120 of plastic and aluminum. Comes to $1,400. You could surely build one for even less than this.

I wonder if this is not Musk's long-term plan. He never talked about Starlink; it was always Mars, Mars, Mars, but as soon as he had the capability, he went full throttle on Starlink. I think his robot plan is much the same. As soon as he saw he was close to the software stack and manufacturing capability needed, it's now all robot, robot. The only question is why governments are not pouring tens or even hundreds of billions into finding ways to make this happen.

How I calculated the battery needs. Likely inflated, but should withstand a worst-case scenario. Maybe not perfect, but something to work with. "...Normal human metabolism produces heat at a basal metabolic rate of around 80 watts..." (Note: heat, not work.) "...Over an 8-hour work shift, an average, healthy, well-fed and motivated manual laborer may sustain an output of around 75 watts of power..."
"...During a bicycle race, an elite cyclist can produce close to 400 watts of mechanical power over an hour and in short bursts over double that—1000 to 1100 watts.... An adult of good fitness is more likely to average between 50 and 150 watts for an hour of vigorous exercise. Athlete human performance peak power, but only for seconds, 2,000 Watts..." For reference, a good horse working at a good constant rate works at 746 watts. Let's say you need 400 watts for 2 hours a day, then normal moving about at 100 watts, with 7 hours for recharge at zero watts. We need 2 h × 400 W + 17 h × 100 W = 800 Wh + 1,700 Wh = 2,500 Wh per day.
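The arithmetic in this post can be sanity-checked in a few lines; a sketch using only the figures assumed above:

```python
# Back-of-envelope check of the figures in the post above. All inputs
# are the post's own assumed numbers, not independently verified data.

# Nursing-home spending (US)
residents = 1_290_000      # residents, per the cited 2020 census figure
monthly_cost = 8_910       # average cost per resident per month, USD
print(residents * monthly_cost)   # 11493900000 USD per month
print(monthly_cost * 12)          # 106920 USD per person per year

# Robot daily energy budget
hard_work_w, hard_hours = 400, 2     # sustained "hard work" output
ambient_w, ambient_hours = 100, 17   # normal moving about
daily_wh = hard_work_w * hard_hours + ambient_w * ambient_hours
print(daily_wh)                      # 2500 Wh per day

# Battery cell cost at $132/kWh
print(daily_wh / 1000 * 132)         # 330.0 USD
```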
>>31820 >I'm surprised no one sees the real, stupendous cash flow monster. Nursing home care. They might have this on their mind, but replacing the lower-qualified people in factories with flexible human-like robots will be easier and is also very big.
Open file (388.38 KB 500x275 patrick as a teslabot.gif)
>>31820 The problem is that promises of AI in the future are likely to be overstated, like any technology. It isn't that big a deal if your LLM gives you an incorrect answer, Stable Diffusion gives you a lousy piece of art, or Suno.ai doesn't give you the song you want. However, if a robot accidentally rips off a 90-year-old's weener while scrubbing him, that will be a big problem. I mean, I'd love it if the Asimovian dream of robots becomes a reality. But science fiction isn't science fact, and I've found the best policy is hope for the best and plan for the worst.
>>31783 Heh. Little by little, Anon. Little by little. :^) >>31820 >I think this is possible. I do too. I'll go further and say that, all else being equal, it's inevitable. I believe that within a quarter century, even just the OG gang (and any productive newcomers here) will represent the better part of a trillion dollar industry just by ourselves alone. >>31889 >However, if a robot accidentally rips off a 90 year olds weener while scrubbing him that will be a big problem. Kek. Very true Anon! :DD >tl;dr I think most ppl don't understand just how uncommon so-called 'commonsense' really is! Cheers. :^)
Not sure where to put this but it looks somewhat interesting. It seems to have all the good specs software-wise but doesn't seem to walk very well, from what I saw. https://hackaday.io/project/196759-mini-open-source-ros-high-performance-robot
>>31995 Thanks a lot, Grommet. They have some very nice-looking, compact & (relatively) low-mass actuators -- apparently of their own devising. https://www.youtube.com/@HighTorqueRobotics
I don't really know much about AI, or even programming for that matter, but I've been doing some reading and I've come across a question about AI that confuses me. How can an AI have delayed gratification? If there is a significant delay between an action that it performs & a reward or punishment, then how can it know the action it took led to that outcome and not confuse it for something more immediate?
>>32131 Well, it is one of the biggest problems in reinforcement learning. Researchers have to balance it out; it's specific to different problems. You can't go too far in either direction, immediate rewards or delayed rewards. Generally, it can make more meaningful connections with more training and data. The AI itself usually can slowly adjust the rate of its rewards, from immediate to future, over a long period of training.
>>32131 By reasoning about which action led to that reward. If you're not using a language model, though, you're screwed because there's no way to assign credit. It's a difficult skill requiring leveraging prior knowledge of the environment being acted in and testing hypotheses. Even people suck at credit assignment.
>>32131 I'm an AI researcher; the way this works is pretty subtle. I can describe the fundamental idea behind policy gradient methods (stuff like PPO is a refinement on top of this basic idea).

During training, the agent has to experience multiple episodes in an environment, where each episode has a beginning and an end. The agent follows a policy telling it which action to take at each state along the episodes. If an episode ends with a high total reward, _all_ the actions along the path are given equal credit and are updated to be more probable in the policy. However, we know in reality that some of these actions don't deserve this credit. This is corrected for in other episodes, where sometimes the same unimportant action at the same state led to a low total reward at the end of the episode, causing the probability of the action to decrease.

Policy gradient methods have a strict requirement on how you sample the experiences, to ensure that these updates balance correctly, that the expected value of the credit given to the actions equals the true advantage of the actions, and that the learning converges to the optimal policy. So for example, if you do RLHF, you cannot sample the LLM responses with custom temperature, top-P, top-K, etc.
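A minimal tabular sketch of that basic idea (the two-state toy environment, learning rate, and episode count are all made up for illustration; PPO-style refinements are omitted):

```python
import numpy as np

# Toy "vanilla" policy gradient (REINFORCE): every action in an episode
# receives equal credit (the whole episode's return), and the averaging
# over many sampled episodes is what cancels the credit wrongly given
# to unimportant actions.

rng = np.random.default_rng(0)
n_actions = 2
logits = np.zeros((2, n_actions))  # tabular softmax policy, states 0 and 1

def policy(s):
    p = np.exp(logits[s] - logits[s].max())
    return p / p.sum()

def run_episode():
    # The agent visits state 0 then state 1; only the action taken in
    # state 1 actually matters for the reward.
    traj, total_reward = [], 0.0
    for s in (0, 1):
        a = rng.choice(n_actions, p=policy(s))
        traj.append((s, a))
        if s == 1 and a == 1:
            total_reward += 1.0
    return traj, total_reward

lr = 0.1
for _ in range(2000):
    traj, G = run_episode()
    for s, a in traj:            # every action credited with the full return G
        grad = -policy(s)
        grad[a] += 1.0           # d log pi(a|s) / d logits[s]
        logits[s] += lr * G * grad

print(policy(1))  # action 1 in state 1 ends up near probability 1
```

The action taken in state 0 is also (wrongly) reinforced on every successful episode, but since both of its choices lead to reward equally often, those updates cancel in expectation; that is the balancing the sampling requirement protects.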
>>32147 What I'm trying to figure out is how to do it without any prior knowledge of the environment. Will spurious reinforcements just be canceled out over time? >>32152 Working in episodes like that might work great for something like an LLM, but I was thinking about a more general AI for a robowaifu. Maybe every day could be a new episode, but that might be a stretch.
>>32173 Indeed, the same idea can be extended even to environments that don't have multiple independent episodes, basically where the agent only experiences one very long "episode". However, this requires two additions.

First, you need a discount factor g, a number between 0 and 1, strictly less than 1. This is used to make the agent prioritize near-term rewards more than long-term rewards; for example, a reward that is 100 actions away is discounted by a factor of g^100. You cannot handle the "one very long episode" case without some kind of time horizon, and the discount factor effectively creates a kind of soft time horizon.

Second, you have to bootstrap off a learnable value function estimate. The value function equals the expected value of the total reward the agent gets when starting from a state and using its policy until the end of the episode. When there is only "one very long episode", this needs to be the infinite sum over all future actions, which is still a finite number thanks to the discount factor g. You can then cut the "one very long episode" into spans of arbitrary length. For each span, you assume the total reward after the cut will equal the value function estimate, and you can still train the agent with policy gradient.

At first, the value function estimate will be randomly initialized and is totally inaccurate, so you simultaneously need to train it to more closely resemble the true value function. This training is also done with the cut-up spans: you again apply the value function estimate at the end of the span, and bootstrap off it to compute more accurate (but still biased) estimates of the value function for the other steps in the span. With a discount factor g strictly less than 1, the training eventually makes the estimate converge to the true value. And when the value function is estimated accurately, the policy converges to the optimal actions.
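As a concrete sketch of those two additions, here is how discounted return targets for one cut-up span can be computed, bootstrapping off a value estimate at the cut (the rewards and the bootstrap value are placeholder numbers):

```python
# Discounted return targets for one span of the "one very long episode".
# The value estimate v_bootstrap stands in for all reward beyond the cut.

g = 0.99                             # discount factor, strictly < 1
rewards = [0.0, 0.0, 1.0, 0.0, 0.5]  # rewards observed inside the span
v_bootstrap = 2.0                    # value estimate for the state after the cut

# Work backwards: G_t = r_t + g * G_{t+1}, seeded with the bootstrap.
returns = []
G = v_bootstrap
for r in reversed(rewards):
    G = r + g * G
    returns.append(G)
returns.reverse()

print(returns)  # per-step targets for training both the policy and the value function
```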
>>32195 That is amazingly-interesting, Anon. It's also very encouraging to know that this approach can serve in a less-preplanned, more-adhoc environment. Any estimate about the hardware compute costs involved? POTD
>>32197 Yeah, this would be an example of a model-free RL algorithm. It only needs to know what state the agent is in, which action is taken, and what the reward for the action is. My impression is that for RL used in robotics, the dominant cost is the simulation of the environment (or worse, the data collection from running the agent in the real world), not the calculations or updates from the loss function. Running these simulations can be pretty similar to running high graphics quality 3D games. For RLHF with LLM's you have a reward model that's a neural net, so you need big enough GPUs to fit the models. But regardless, you want an algorithm that converges in as few steps as possible. With model-free RL, the training speed is mostly limited by how quickly you can gain information about the environment. You want a policy that explores interesting actions more often than uninteresting actions during training. You cannot optimize prematurely based on too little information, or the policy will be stuck in a local optimum. This is also why LLMs need supervised fine-tuning before RLHF, or why some RL experiments add some kind of curiosity into the agent. If you have control over the reward definition, you also want to shape the reward so that partial progress is rewarded too, even if the agent fails to achieve the final goal. You can see Nvidia's Eureka paper, they used LLM's to write reward functions to do this.
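The reward-shaping point can be illustrated with a toy example (the reach-a-target task, function names, and thresholds below are hypothetical, not taken from the Eureka paper):

```python
# Sparse vs. shaped reward for a toy "reach the target" task. With the
# sparse version, an exploring agent gets no signal until it stumbles on
# the goal; the shaped version rewards partial progress too.

GOAL_RADIUS = 0.05  # distance at which the task counts as solved

def sparse_reward(dist):
    # only final success is rewarded
    return 1.0 if dist < GOAL_RADIUS else 0.0

def shaped_reward(prev_dist, dist):
    # dense signal: any reduction in distance earns reward, plus the
    # original success bonus when the goal is actually reached
    return (prev_dist - dist) + sparse_reward(dist)

print(sparse_reward(0.5))       # 0.0 -- no feedback at all far from the goal
print(shaped_reward(0.6, 0.5))  # small positive reward for closing the distance
```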
>>32201 Great! That's about what I expected regarding costs. We still need to try finding a solution for Robowaifu@home for training needs, heh. I've grabbed the paper and I'll make time to go over it soon. Thanks, Anon! Cheers. :^)
>>32201 >You want a policy that explores interesting actions more often than uninteresting actions during training. You cannot optimize prematurely based on too little information, or the policy will be stuck in a local optimum. I've mentioned before in other threads that the AI I'm looking to make will have "needs" to motivate her actions. Most will be essential things like changing the battery, but otherwise her needs are to cater to my needs. One of the needs is "boredom" which will punish her for being idle for too long or too much repetition. It might be worth it to do things in a less than optimal way if it means potentially discovering a more efficient way of doing something.
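One hypothetical way such a "boredom" need could be scored (the class, names, and constants are invented for illustration, not a description of any existing system):

```python
from collections import deque

# A toy "boredom" need: idling and repetition accumulate a penalty that
# a reward-driven policy would learn to avoid, nudging it to vary its
# behavior instead of doing nothing or looping one task forever.

class BoredomNeed:
    def __init__(self, window=10, idle_cost=0.01, repeat_cost=0.02):
        self.recent = deque(maxlen=window)  # sliding window of past actions
        self.idle_cost = idle_cost
        self.repeat_cost = repeat_cost

    def penalty(self, action):
        p = 0.0
        if action == "idle":
            # idling costs more the longer it has been going on
            p += self.idle_cost * (1 + sum(a == "idle" for a in self.recent))
        # repeating any recent action also costs a little
        p += self.repeat_cost * sum(a == action for a in self.recent)
        self.recent.append(action)
        return p

boredom = BoredomNeed()
print(boredom.penalty("tidy_room"))  # 0.0 -- a fresh action costs nothing
print(boredom.penalty("idle"))       # 0.01 -- first idle step, small cost
print(boredom.penalty("idle"))       # 0.04 -- idling and repeating compound
```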
