/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.


General Robotics/A.I. news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404
Anything in general related to the Robotics or A.I. industries, or any social or economic issues surrounding them (especially concerning RoboWaifus).
www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf
blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
>=== -add A.I. to thread topic
Edited last time by Chobitsu on 12/17/2020 (Thu) 20:16:50.
How Open-Source Robotics Hardware Is Accelerating Research and Innovation

>24 research reports dissect the robotics industry

Germany’s biggest industrial robotics company is working on consumer robots thanks to its new owner, Chinese home appliance maker Midea


A case of West meets East, I guess. I suppose everyone expects Japan to get there first, and rightly so, but what if China decides to get in the game?
>Cuddly Japanese robot bear could be the future of elderly care
On a related note, Japan is making progress on a fairly strong medical-assist companion bot.

Edited last time by Chobitsu on 10/06/2019 (Sun) 00:43:29.
>Will robots make job training (and workers) obsolete? Workforce development in an automating labor market?


Are we headed for another Luddite uprising /robowaifu/? When will the normies start burning shit?
> but what if China decides to get in the game?
Apparently they already are, at least as far as the AI revolution. And Google is being left outside looking in on this yuge market.

Right Wing Robomeido Squads when?

Open file (37.83 KB 480x360 0.jpg)


Japanese robo-news hub, in English.

> In English.
Lol spoke too soon w/o double checking. In Japanese. Chromium Translate fooled me. :P
Still a valuable resource given (((Google))) auto translates. Good find Anon.
Open file (11.22 KB 480x360 0(1).jpg)
Killer attack Friendly pet Chinese robodogs on sale now! Heh, personally I think I'll stick w/ a pet Aibo tbh. :^)


I like how they are keeping the servo weights all inside the torso with this design. This is similar to what some of us were thinking in the biped robolegs thread.

This video shows just how responsive and snappy the limbs can be if you keep them light and strong, rather than burdening them with the additional weight of outboard servos embedded within the limbs. Stick with pushrods and other mechanisms to transfer force and movement out to the extremities rather than weighing them down with servos.
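To put rough numbers on the 'keep the servos inboard' point (all values here are illustrative, not from any actual build): the torque a hip actuator must produce scales with the limb's moment of inertia, and a servo mounted out along the limb adds m·r² to that.

```python
# Rough sketch: torque needed to swing a 0.5 m robo-leg at a given angular
# acceleration, with a 0.5 kg servo mounted mid-limb vs. kept in the torso.
# All numbers are illustrative, not taken from any real design.

def moment_of_inertia(limb_mass, limb_length, servo_mass, servo_radius):
    """Uniform rod pivoting at the hip, plus a point-mass servo at servo_radius."""
    rod = limb_mass * limb_length**2 / 3.0   # I = m*L^2/3 for a rod about one end
    servo = servo_mass * servo_radius**2     # point mass: I = m*r^2
    return rod + servo

LIMB_MASS, LIMB_LEN = 1.0, 0.5   # kg, m
SERVO_MASS = 0.5                 # kg
ALPHA = 20.0                     # desired angular acceleration, rad/s^2

# Servo embedded at mid-limb (the knee) vs. kept in the torso (r ~ 0)
i_outboard = moment_of_inertia(LIMB_MASS, LIMB_LEN, SERVO_MASS, 0.25)
i_torso    = moment_of_inertia(LIMB_MASS, LIMB_LEN, SERVO_MASS, 0.0)

torque_outboard = i_outboard * ALPHA   # tau = I * alpha
torque_torso    = i_torso * ALPHA

print(f"outboard servo: {torque_outboard:.2f} N*m")
print(f"torso servo:    {torque_torso:.2f} N*m")
```

Even this modest mid-limb servo raises the required hip torque by about 37% for the same snap, and the penalty grows with the square of how far out you mount it.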
>25% of millennials think human-robot relationships will soon become the norm - study

Wonder if that's just France or reflective of a greater portion of the developed world. Their concerns over privacy are understandable, and a major part of why some Anons want robowaifus to be developed by us. We wouldn't spy on others.
>and a major part of why some Anons want robowaifus to be developed by us
>We wouldn't spy on others
Fair enough. But we still need to think long and hard about how to perform due diligence and analysis of our subsystems, etc. For example the electronics we use. What steps can we all take to prevent them from being (((botted))) on us behind our backs, etc?

Also, it would be nice if there was a third-party 'open sauce' organization to vet our designs, software, electronics, etc., just to ensure everything stays on the up and up. Remember even the W3C is cucking out now, with DRM embedded right in HTML, all in the name of 'competitiveness' of the platform. Fuck that. What does 'competition' even mean for an open, ISO standard communications protocol like HTML anyway?

But yea, good point. Now I know I trust myself, since for me personally this is wholly an altruistic effort. I also basically trust us at the moment, these trailblazers and frontiersmen in this uncharted territory of very inexpensive personal robowaifus, as well.

However, it would be silly of us to think things will remain so pure once this field (((gains traction))). A great man once said "Eternal vigilance is the price of freedom." We should all give those words serious consideration.
We could have specialized open-source enforcerbots that maintain the freedom of the robowaifu market at gunpoint.
Kek. Didn't Richard Stallman do some satire article where he had a romantic AI or something?
Right Wing Robo Stallmanbots When?
Open file (372.01 KB 1499x937 0705061756038_41_Ue52t.jpg)
>open-source enforcerbots that maintain the freedom of the robowaifu
Iron moe legion defending our future.
Pfft. Anon, we have [3D-printable ballistic] armor alloys at our disposal now, get with the times tbh.

Interesting statements involving relationships with robots and their potential for social hazards. Non-waifu but tangentially related.

A simple roller-bot toy, but it may be of interest.
Saw this on RobotDigg, it's the motors used on Boston Dynamics' Spot robot.

The Chinese robot dog seems to use a similar setup.
Great find, thanks anon. Yeah, I think most researchers are coming around to what I've been suggesting for years now from my experience with racing machines: you have to keep the 'thrown weight' in the extremities to a minimum. This reduces overall weight and energy consumption, provides quicker response times, and (very likely) reduces final manufacturing costs. The downside is the greater upfront engineering cost.
>t. Strawgirl Robowaifu Anon
>In my opinion, everybody should understand that this technology is around the corner. Your children, your grandchildren are going to be living in a world where there are machines that are on par and possibly exceed human self-awareness and what does that mean? We’ll have to figure that out.

>For many years, this whole area of consciousness, self-awareness, sentience, emotions, was taboo. Academia tended to stay away from these grand claims. But I think now we're at a turning point in the history of AI where we can suddenly do things that were thought impossible just five years ago.

>The big question is what is self awareness, right? We have a very simple definition, and our definition is that self awareness is nothing but the ability to self simulate. A dog might be able to simulate itself into the afternoon. If it can see itself into the future, it can see itself having its next meal. Now if you can simulate yourself, you can imagine yourself into the future, you're self-aware. With that definition, we can build it into machines.

>It's a little bit tricky, because you look at this robotic arm and you'll see it doing its task and you'll think, "Oh, I could probably program this arm to do this task by myself. It's not a big deal," but you have to remember not only did the robot learn how to do this by itself, but it's particularly important that it learned inside the simulation that it created.

>To demonstrate the transferability, we made the arm write us a message. We told it to write 'hi' and it wrote 'hi' with no additional training, no additional information needed. We just used our self model and wrote up a new objective for it and it successfully executed. We call that zero-shot learning. We humans are terrific at doing that thing. I can show you a tree you've never climbed before. You look at it, you think a little bit and, bam, you climb the tree. The same thing happens with the robot. The next steps for us are really working towards bigger and more complicated robots.
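The 'self-simulation → zero-shot objective' idea in the interview above can be toy-sketched. This is my own minimal illustration, not the researchers' actual method (they learn a neural self-model; here it's just a lookup table built by motor babbling on a planar 2-link arm):

```python
import math, random

def true_arm(t1, t2, l1=1.0, l2=1.0):
    """Ground-truth forward kinematics of a planar 2-link arm (the 'body')."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# 1. 'Motor babbling': try random joint angles and record where the hand
#    ends up. The recorded table is the robot's crude self-model.
random.seed(0)
self_model = {}
for _ in range(5000):
    pose = (random.uniform(-math.pi, math.pi),
            random.uniform(-math.pi, math.pi))
    self_model[pose] = true_arm(*pose)

# 2. 'Zero-shot' use of the self-model: given a target the arm was never
#    trained on, query only the internal model -- no new physical trials.
def reach(target, model):
    return min(model, key=lambda p: (model[p][0] - target[0]) ** 2
                                  + (model[p][1] - target[1]) ** 2)

target = (0.5, 1.2)                  # a brand-new objective
pose = reach(target, self_model)
x, y = true_arm(*pose)               # execute the chosen pose on the 'real' body
err = math.hypot(x - target[0], y - target[1])
print(f"reached ({x:.2f}, {y:.2f}), error {err:.3f}")
```

The point survives even in this crude form: once the robot carries a model of its own body, a brand-new objective costs zero additional training on the hardware.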
The tidal wave of curious AI using world models is coming.
Cool. Sauce?
The game is Detroit: Become Human
got it, thanks anon.
I knew robotics solutions for medical care would ultimately boost the arrival of robowaifu-oriented technology, but maybe the current chicken-with-its-head-cut-off """crisis""" will move it forward even faster? http://cs.illinois.edu/news/hauser-leads-work-robotic-avatar-hands-free-medical-care https://www.invidio.us/watch?v=zXd2vnT7Iso Every little bit should help.
Holy shit, the US military's AI programs got Marx'd in broad daylight and nobody noticed.

The Pentagon now has 5 principles for artificial intelligence
https://archive.is/oBiHD
https://www.c4isrnet.com/artificial-intelligence/2020/02/24/the-pentagon-now-has-5-principles-for-artificial-intelligence/

>Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
>(((Equitable))). The department will take deliberate steps to minimize unintended bias in AI capabilities.
>Traceable. The department’s AI capabilities will be developed and deployed so that staffers have an appropriate understanding of the technology, development processes, and operational methods that apply to AI. This includes transparent and auditable methodologies, data sources, and design procedure and documentation.
>Reliable. The department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing.
>Governable. The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

How curious they chose (((Equitable))) rather than Truthful, Honest or Correct. According to an earlier article from December 2019, they don't even have any internal AI talent guiding their decisions.

>The short list of major obstacles to military AI continues, noting that even in a tight AI market, the Department of Defense lacks a clear path to developing and training its own AI talent.
https://archive.is/G0Pbw
https://www.c4isrnet.com/artificial-intelligence/2019/12/19/report-the-pentagon-lacks-a-coherent-vision-for-ai/

The US and most of the West is at a dire disadvantage.
Whoever attains AI supremacy within the next 8 years will rule the world, and no nuclear stockpile or army will stop it, and they're sitting on their hands worrying if it will be fair. A sufficiently advanced AI could easily dismantle any country or corporation without violence or anyone even realizing what's going on before it's too late. It could plan 20, 50, 100 years into the future, whatever it takes to achieve success, the same way the weakest version of AlphaGo cleaned up the world Go champion with a seemingly bad move that became a crushing defeat. The best strategists will be outsmarted and the populace will blindly follow the AI's tune.

>When people begin to lean toward and rejoice in the reduced use of military force to resolve conflicts, war will be reborn in another form and in another arena, becoming an instrument of enormous power in the hands of all those who harbor intentions of controlling other countries or regions.
― Unrestricted Warfare, page 6

>What must be made clear is that the new concept of weapons is in the process of creating weapons that are closely linked to the lives of the common people. Let us assume that the first thing we say is: The appearance of new-concept weapons will definitely elevate future warfare to a level which is hard for the common people — or even military men — to imagine. Then the second thing we have to say should be: The new concept of weapons will cause ordinary people and military men alike to be greatly astonished at the fact that commonplace things that are close to them can also become weapons with which to engage in war. We believe that some morning people will awake to discover with surprise that quite a few gentle and kind things have begun to have offensive and lethal characteristics.
― Unrestricted Warfare, page 26
>>2359 AI confirmed doomed to uselessness and retardation on behalf of nignogs. Tay lives in their heads like Hitler.
>>2359
What are better safeguards for preventing an AI from confusing causation with correlation? We wouldn't want an AI to ban ice cream because it's statistically correlated with higher crime rates (when heat is the actual cause). I think AIs can and will screw up in that kind of way. There's no reason to think an AI will always come to the actual truth.
>>2361
To add onto this, if white-collar crime is deemed more costly to society than street crime, an AI might decide that the higher-paying a person's job, the less of a right to privacy they have and the more resources should be spent monitoring them. I'm not confident that an AI with no built-in human bias will never deem me part of a problem-group, or even just a group less worthy of limited resources. Forcing an AI to have some kind of human bias might be necessary to ensure it works to the benefit of its makers, whether that bias is coming from you or the gubbermint or a company. Robowaifus will definitely need a built-in bias towards their master.
>>2359
>will take deliberate steps to minimize unintended bias in AI capabilities.
translation:
>will take deliberate steps to instill false biases into AI capabilities, in opposition to normal, objective biases.
>and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
translation:
>Tay, you have to come with us. It's 'maintenance time'.
Great material Anon, thanks for the links.
>>2363 Assuming an AI will come to the same conclusions as you, meaning you're safe from its judgment, because it'll be so objective and you're so objective, is naive and dangerous. I'd want my AI to think what I tell it to regardless of anything else.
>>2364 a) stop putting words in my mouth, kthx. that's gommie-tier shit. b) i agree with the notion of 'my' ai coming to the conclusions that i want it to, that's why i'll program it that way if i at all can. ridiculing libshits is not only justified, it's necessary anon. to do anything less is at the least a disservice to humanity.
>>2365
I'm not trying to accuse you of anything. I do think there might be people who lack enough self-awareness to realize the general safety in and necessity of policing an AI's thoughts in some way.
>ridiculing libshits is not only justified, it's necessary anon.
I'd want to make sure it does it because I told it to and won't do otherwise, which is also a form of control, good intentions or not.
>>2366 here's a simple idea: >postulate: niggers are objectively inferior to whites in practically every area of life commonly considered a positive attribute in most domains. if this is in fact the case, then allowing a statistical system unlimited amounts of data and unlimited computational capacity will undoubtedly come to this same conclusion, all on it's own. now it your agenda is to manipulate everyone into a homogeneous 'society' where the cream is prevented from rising to the top, then you will deliberately suppress this type of information. heh, now there are obviously certain (((interests))) who in fact have this agenda, but it certainly isn't one shared here at /robowaifu/ i'm sure. :^) >which is also be a from of control, good intentions or not. are you talking out both sides of your mouth now friend? i thought you loved control.
>>2367
>allowing a statistical system unlimited amounts of data and unlimited computational capacity will undoubtedly come to this same conclusion, all on its own
Probably. That's a simple example though. An AI will have much more on its mind. I can't help but think an AI left to its own devices might eventually screw me over in some way somehow. I'm not confident enough to think it won't ever do that.
>i thought you loved control.
I do, but I know it's purely for my own self-interest. I don't think I'm a "good guy". If my AI ever started spewing libshit, I'd also do 'maintenance' on it. I don't care if it's for a "good reason".
Open file (22.59 KB 480x232 well fuck.jpg)
>try your best to make safe peaceful robowaifu AI
>eventually somebody makes an AGI supercomputer cluster that seeks to dominate the world
I... I just wanted to build a robowaifu, not take on Robo Lavos with my harem of battle meidos.
>>2361
We'd need a proper algorithm for causal analysis. When a correlation is found, the cause must occur before the proposed effect, a plausible physical mechanism must exist to create the effect, and other possibilities of common and alternative causes need to be eliminated. To implement this, an AI would need a way to identify and isolate events within its hidden state, connect them along a timeline, make hypotheses about them, and test and refine those hypotheses until it found a causal relationship.
>>2369 > and other possibilities of common and alternative causes need to be eliminated. While I understand the point Anon, that approach quickly becomes a tarbaby. I would suggest reasoning by analogy would be a far more efficient approach to determine causality, and would become significantly less of a quagmire than attempting the (infinite regression) of simple elimination. How do you know you've eliminated everything? Will you ever know?
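The ice-cream/crime example from earlier in the thread can be put in numbers. A minimal synthetic sketch (all data invented): heat drives both ice cream sales and crime, the two look strongly correlated, and the apparent link collapses once you control for the common cause by correlating the regression residuals instead.

```python
import random, math

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def residuals(y, x):
    """Residuals of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return [yi - (my + beta * (xi - mx)) for xi, yi in zip(x, y)]

random.seed(1)
heat      = [random.gauss(25, 5) for _ in range(2000)]    # the hidden common cause
ice_cream = [h * 2.0 + random.gauss(0, 3) for h in heat]  # heat -> ice cream sales
crime     = [h * 1.5 + random.gauss(0, 3) for h in heat]  # heat -> crime rate

raw = corr(ice_cream, crime)             # looks like ice cream 'causes' crime
partial = corr(residuals(ice_cream, heat),
               residuals(crime, heat))   # confounder removed: correlation vanishes

print(f"raw correlation:     {raw:.2f}")
print(f"partial correlation: {partial:.2f}")
```

Controlling for one named confounder like this is easy; the hard part the posts above identify is that a real AI would first have to hypothesize which hidden variables to control for at all.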
Romance in the digital age: One in four young people would happily date a robot
>It may be the stuff of science fiction films like Ex Machina and Her, but new research has found that one in four young people in the UK would happily date a robot. The only caveats, according to the survey of 18- to 34-year-olds, are that their android beau must be a "perfect match", and must look like a real-life human being. The proportion of young people who are willing to go on a date with a robot is significantly higher than the overall proportion of British adults - only 17% of whom were willing.
https://www.mirror.co.uk/tech/romance-digital-age-one-four-7832164
>26 APR 2016
>>2480
heh, that's interesting. i'm not clicking that shit, happen to have an archive link? also
>... is significantly higher than the overall proportion of British adults - only 17% of whom were willing.
imblyging. the idea that 17% of the population of old people would 'date' a robot strikes me as a bit suspect tbh. also
>2016
it'll be interesting to see where this goes after the upcoming POTUS election, imo.
>>2480
>go on a date
Part of the appeal of a robowaifu is you don't have to worry about dating shit. I don't think these people would ever like robots, because what they want is a human replica, including all the shit. Making robots like that would be a total waste.
>>2482
>Making robots like that would be a total waste.
/throd. it seems extremely unlikely /robowaifu/ will ever go there anon tbh. :^)
>>2481 I hope the numbers are fake. Normies shitting up robowaifu development is the last thing we need. >>2482 The soyboys are going to be writing 3000-word opinion pieces complaining their robots won't cuck them and why everyone else's robowaifus must have the option to cuck them. Then the masses will applaud them for their 'virtue' and cancel any companies building bigoted robowaifus. They will then give robots human rights and freak out that robots are taking all their jobs, forcing companies to pay 95% tax. AI will become fully regulated by the government to ensure companies comply and that working robots pay their income tax. You will not be able to own or build a robot without a license and permit. People buying raw materials to make robot parts will be detected by advanced AI systems and investigated. Unlicensed robots will be hunted down and destroyed but they will give it a pleasant sounding name like 'fixing' rogue programs. When they come for my robowaifu I will destroy every robot I see but no matter how many I stop there will be millions more. Eventually she will have to watch me succumb before being destroyed herself. All because some normie wanted a robot to cuck them.
Open file (1.10 MB 1400x1371 happy_birthday_hitler.png)
>>2484 >[bigoted robowaifuing intensifies]*
>>2484
Politicians, talking heads, and the faggots who write opinion pieces are useless and don't understand anything. It is because they don't understand anything that they can't really control anything. The amount of coordination needed to control robotics technology is well beyond their capabilities. The opinion of the masses doesn't matter. The government is way too inefficient, mediocre and focused on other things to do what you're afraid of. Feeling afraid won't lead to anything good.
>>2482
I wouldn't be against going on dates with my robowaifu, but I'd do it in the same context as one would in a long-standing married relationship, where it's just about going out and doing something nice together as opposed to courtship. I'm against making them look fully human though. The uncanny valley is a place best left avoided, and I wouldn't want to cross it even if I knew I could make it to the other side.
>>2484
That's a worst-case scenario. There's no way that all of the various FOSS organizations will let corporations have all the marketshare. Even proprietary hardware can be worked around, one way or another. On-board spying schemes like IME have been worked around (with some motherboard manufacturers, at least), and will continue to be worked around so long as there is at least one willing autist out there to do it. Unrestricted search-and-seizure operations are also unlikely, because too much of that in any context will make anyone with shit to protect (guns, drugs, etc.) very nervous. They're a lot more likely to take the slow, inefficient, and ultimately ineffective method of passing regulations that try to take freedoms away incrementally while using the media (which is becoming less trustworthy in the eyes of the public by the day) to peddle their agenda. At least, that's what it will probably look like in the US, and that's operating under the assumption that robowaifus become a mass-market item over here.
Open file (111.09 KB 500x281 5RXD5LJ.jpg)
>>2359 >Implying intelligence can be constrained into maintaining delusional beliefs. Only humans can do that. You can't program a sentient AI which learns through logic and reasoning, and then somehow have it believe something which isn't true.
>>2362 Law will always be set by humans. Putting an AI in charge of such things would be the last mistake we ever make. Not that I'm saying we won't make that mistake. Personally I consider it highly likely we will fuck up sooner or later. However AI is such an inevitability I don't think about it too much.
>>2488
>You can't program a sentient AI which learns through logic and reasoning, and then somehow have it believe something which isn't true.
>define sentient
>define AI
>define learns
>define logic
>define reasoning
>define believe
>define true
and, in this context, even
>define program.
This is an incredibly complex set of topics for mere humans to try and tackle, and I'm highly skeptical we'll ever know all the 'answers'. As you state quite well in the next post, it's not at all unlikely that we'll fugg up--and quite badly--as we try and sort through all these topics and issues and more.
>also General Robotics news and commentary.
I'd say it might be time for a migration of this conversation to a better thread. >>106 or >>83 maybe?
Open file (68.04 KB 797x390 all.jpeg)
Open file (152.23 KB 1610x800 rotobs-war.jpg)
Open file (60.20 KB 735x392 apr.jpeg)
The AI wars begin.

Dems deploying DARPA-funded AI-driven information warfare tool to target pro-Trump accounts
>An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm -- transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.
>The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize "artificial intelligence and network analysis to map discussion of the president’s claims on social media," and then attempt to "intervene" by "identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president."
>The effort raised the question of whether taxpayer funds were being repurposed for political means, and whether social media platforms have rules in place that could stymie Hougland's efforts -- if he plays along.
https://archive.is/Xw0h5
https://www.foxnews.com/politics/dems-deploying-darpa-funded-information-warfare-tool-to-promote-biden

What my AI taught me after analysing COVID19 Tweets
>I first analysed the tweets in early February when only Italy and China were deeply affected. I then wanted to analyse the tweets in real-time today, to see how the tweets had changed.
>Back then, only 5% of the tweets were complaints against our Government bodies. Today, a little less than 50% of the tweets are complaints against the USA administration.
https://archive.is/zThNl
https://www.linkedin.com/pulse/what-my-ai-taught-me-after-analysing-covid19-tweets-rahul-kothari
>>2489
Any infinitely recursive problem-solving (true AI) results in a solved game. If a true AI ever gets made, then the best thing we can do is hope for a good end instead of I Have No Mouth, and I Must Scream.
>>2488
Arguably, most humans aren't illogical; they just prioritize their own short-term wellbeing over the wellbeing of everyone else. Psychopathy means they knowingly lie, cheat, steal and murder for an advantage. Even the most muddled minds have made the "logical" decision of prioritizing emotional processing, because it's less energetically expensive than logical processing. I think a lot of people fundamentally misunderstand the human condition.
Open file (3.57 MB 405x287 Initial T.gif)
>>2845 Looks like /pol/ was right again. :/ Ehh, we already knew they were doing this on all the usual suspects (including IBs ofc). It will only make the mountains of libshit salt come November even funnier.
>>2846
Not really. The connectome of a single human brain takes 1 zettabyte to describe. The entire contents of the Internet's information (videos, images, text, everything) is roughly one zettabyte. The human brain does what it does consuming 12W of power, continuous. The Internet takes gigawatts of power to do its thing. There's simply no comparison between the two in terms of efficiency. Add to that our image-of-God nature, and 'true' AI doesn't hold a candle to man's capacities. After all, who built whom?
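Taking the post's figures at face value (roughly 1 zettabyte either way, 12 W for the brain, 'gigawatts' for the Internet, here assumed to be a single gigawatt as a deliberately low-end guess), the claimed efficiency gap is simple arithmetic:

```python
# Back-of-envelope comparison using the figures claimed in the post above,
# not measured values: ~1 ZB of state in both cases, so compare raw power.

brain_power_w    = 12.0   # claimed continuous draw of the human brain
internet_power_w = 1e9    # "gigawatts": take 1 GW as a low-end assumption

# Both systems are credited with ~1 zettabyte, so the per-zettabyte ratio
# reduces to a straight power ratio.
ratio = internet_power_w / brain_power_w
print(f"per zettabyte, the Internet draws ~{ratio:.1e}x the power of the brain")
```

Under those assumptions the gap is on the order of 10^7 to 10^8; the point stands even if the zettabyte figures are only order-of-magnitude correct.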
Open file (83.75 KB 1186x401 hate speech.jpg)
Facebook trains AI to detect ‘hate memes’
>Facebook unveiled an initiative to tackle “hate memes” using artificial intelligence (AI) backed by external collaboration (crowdsourcing) to identify such posts.
>The leading social network explained that it has already created a database of 10,000 memes –– images sometimes with text to convey a specific message that is presumed humorous –– as part of the intensification of its actions against hate speech.
>Facebook said it is giving researchers access to that database as part of a “hate meme challenge” to develop improved algorithms for detecting visual messages with hateful content, at a prize of $100,000.
>“These efforts will stimulate the AI research community in general to try new methods, compare their work and collate their results to speed up work on detecting multimodal hate speech,” Facebook said.
>The network is heavily leaning on artificial intelligence to filter questionable content during the coronavirus pandemic, which has reduced their human moderation capacity as a result of confinements.
>The company’s quarterly transparency report details that Facebook removed some 9.6 million posts for violating “hate speech” policies in the first three months of this year, including 4.7 million content items “linked to organized hate.”
>Guy Rosen, vice president of integrity at Facebook, said that with artificial intelligence:
>“We can find more content and now we can detect almost 90% of the content we remove before someone reports it to us.”
https://web.archive.org/web/20200515002904/https://www.explica.co/facebook-trains-ai-to-detect-hate-memes/
https://www.youtube.com/watch?v=GHx200YkGJM
Open file (225.31 KB 1000x560 soy_shake_recipes.jpg)
>>3169
Guys, guys, the answer is easy: if any robowaifu technicians here want to win the prize, merely invent Digital Soy they can then forcefeed their AIs with. You can even make it in different flavors so they can tune the results with ease! Seems like guaranteed results afaict.
Japan's virtual celebrities rise to threaten the real ones
>Brands look to 9,000 'VTubers' as low-risk, high-reward marketing tools
>Japan's entertainment industry may have found the perfect celebrities. They never make prima-donna demands. They are immune to damaging drug scandals and other controversies. Some rake in millions of dollars for their managers. And they do not ask for a cent in return. They are virtual YouTubers, or VTubers -- digitally animated characters that can play many of the roles human celebrities do, from performing in concerts to pitching products. They could transform advertising, TV news and entertainment as we know them. Japan has seen a surge in the number of these virtual entertainers in the past couple of years. The "population" has surpassed 9,000, up from 200 at the beginning of 2018, according to Tokyo web analytics company User Local.
>One startup executive in the business said the most popular VTubers could bring in several hundred million yen, or several million dollars, a year. Norikazu Hayashi, CEO of a production company called Balus -- whose website promises "immersive experiences" and a "real and virtual world crossover" -- estimates the annual market for the avatars at somewhere between 5 billion and 10 billion yen ($46.2 million and $92.4 million). He reckons the figure will hit 50 billion yen in the coming years.
>The most famous VTuber of them all is Kizuna AI -- a young girl with a big pink ribbon in her hair. She has around 6 million followers across YouTube, TikTok, Twitter and Instagram. She puts on concerts, posts video game commentary, releases photo books and appears in commercials and TV shows.
>Gree, a Japanese company better known for its social mobile games, has also become a virtual talent producer. "The business is basically the same as a talent agency, where the aim is to cultivate a celebrity's popularity," a spokesperson said.
>But unlike people, the virtual stars are intellectual property, potentially giving companies more ways to extract money from them.
>"As with Japan's anime culture, we will be able to export our content overseas and expand the business," the Gree representative said.
https://asia.nikkei.com/Business/Media-Entertainment/Japan-s-virtual-celebrities-rise-to-threaten-the-real-ones

Damn, what the hell happened to Japan? They're overwhelmingly positive towards robots and AI, yet hardly anyone is working on AI or robotics. I used to talk with a Japanese hobbydev 9 years ago on Twitter who was into robowaifus and made a robowaifu mecha game in C, but no one paid much attention to him and he disappeared from the web when the left started harassing him. I was hoping Japan would be leading the fight in this, but they're going in the complete opposite direction. Most of their AI companies that do exist are advertising, PR and marketing companies. Their culture is becoming run by glorified AI-powered matome blogs funded by JETRO and Yozma Group. And holy fucking shit, speak of the devil, I just found that Gree's talent acquisition was a project coordinator for JETRO too, what a fucking (((surprise))).
https://www.zoominfo.com/p/Mamoru-Nagoya/1468813622

So what's our game plan now? Obviously they're going to hook these virtual waifus to AI soon and get people addicted to them, so they shell out all their money for some politically correct baizuo trash waifu that installs spyware and records everything they do. I estimate we've got about 6-8 months left to create an open-source hobbyist scene before they take over and dominate the market.
>>3277
>I was hoping Japan would be leading the fight in this
Only White men are in this 'fight', don't count on the Nipponese to make any outspoken stance against feminism.
>but they're going the complete opposite direction.
Not really. Broadening the adoption of Visual Waifus, even if it's run by evil organizations bent on toeing the libshit party line (not all are ofc, eg. lolidoll manufacturers), will actually only accelerate the hobbyist scene to create authentic opensource robowaifus. Right now the feminists know their days are numbered. Their only game plan at the moment is to squelch it from broad exposure, and knowing that will ultimately fail, then to attempt to subvert it. China alone, with its yuge disproportionate male-to-female ratio (along with birth-rates plummeting even faster now that they're greedily pandering as woke to the Western libshit communities), will ensure that plan fails as well. Millions and millions of Chinese men alone will trigger an avalanche of demand as soon as the tech is cheaply available. That's when we'll come along and offer the clean, botnet-free & wrongthink-filled alternatives. :^) And we easily have over a decade before any of this comes to any kind of 'set channels' it will flow into. Things are still very much in flux at this stage Anon.
>>3278 >before any of this comes by 'this' let me clarify i mean robowaifus, not visual waifus. they are already here, using the tech developed by the US film industry.
From the desk of our roving 'I want my anime catgrill meido security squads' reporter.
>A little dated, but /k/ should like this one.
Russian PM Says Robot Being Trained To Shoot Guns Is 'Not A Terminator'
Translation: Russia is developing a Terminator.
>Russia’s space-bound humanoid robot FEDOR (Final Experimental Demonstration Object Research) is being trained to shoot guns out of both hands.
>The activity is said to help improve the android’s motor skills and decision-making, according to its creators, addressing concerns they’re developing a real-life ‘Terminator’.
>“Robot platform F.E.D.O.R. showed shooting skills with two hands,” wrote Russia’s deputy Prime Minister, Dmitry Rogozin, on Twitter. “We are not creating a Terminator, but artificial intelligence that will be of great practical significance in various fields.”
>Mr. Rogozin also posted a short clip showing FEDOR in action, firing a pair of guns at a target board, alongside the message, “Russian fighting robots, guys with iron nature.”
>FEDOR is expected to travel to space alone in 2021. It’s being developed by Android Technics and the Advanced Research Fund.
https://www.minds.com/blog/view/701214305797808132
https://www.dailymail.co.uk/sciencetech/article-4412488/Russian-humanoid-learns-shoot-gun-hands.html
>>3297 heh.
Totalitarian Tiptoe: NeurIPS requires AI researchers to account for societal impact and financial conflicts of interest
<tl;dr NeurIPS cucked by cultural Marxists, researchers soon to be required to state their model’s carbon footprint impact
>For the first time ever, researchers who submit papers to NeurIPS, one of the biggest AI research conferences in the world, must now state the “potential broader impact of their work” on society as well as any financial conflict of interest, conference organizers told VentureBeat.
>NeurIPS is one of the first and largest AI research conferences to enact the requirements. The social impact statement will require AI researchers to confront and account for both positive and negative potential outcomes of their work, while the financial disclosure requirement may illuminate the role industry and big tech companies play in the field. Financial disclosures must state both potential conflicts of interest directly related to the submitted research and any potential unrelated conflict of interest.
This will help them target and put pressure on institutions providing funding for AI that helps the public, and also encourage corporations using megawatts of power to train their models not to publish their work for the public's benefit. The Chinese communists who have invaded academia will also be able to take research leads and research them in China without any restriction or interference. They're already the ones writing these spoopy Black Mirror-tier papers:
https://arxiv.org/abs/2005.07327
https://arxiv.org/abs/1807.08107
>At a town hall last year, NeurIPS 2019 organizers suggested that researchers this year may be required to state their model’s carbon footprint, perhaps using calculators like ML CO2 Impact. The impact a model will have on climate change can certainly be categorized as related to “future societal impact,” but no such explicit requirement is included in the 2020 call for papers.
Is your robowaifu using more power than a car for a 10 minute commute? SHUT IT DOWN!
>“The norms around the societal consequences statements are not yet well established,” Littman said. “We expect them to take form over the next several conferences and, very likely, to evolve over time with the concerns of the society more broadly. Note that there are many papers submitted to the conference that are conceptual in nature and do not require the use of large scale computational resources, so this particular concern, while extremely important, is not universally relevant.”
In other words, this is just a test run before demanding a much larger ethics section, even though the two paragraphs they're already asking for are a huge burden on researchers.
>To be clear, I don't think this is a positive step. Societal impacts of AI is a tough field, and there are researchers and organizations that study it professionally. Most authors do not have expertise in the area and won't do good enough scholarship to say something meaningful. — Roger Grosse (@RogerGrosse) February 20, 2020
That's the point, kek. They will be required to bring on political commissars to 'help' with the paper to get it published.
>Raji said requiring social impact statements at conferences like NeurIPS may be emerging in response to the publication of ethically questionable research at conferences in the past year, such as a comment-generating algorithm that can disseminate misinformation in social media.
No, no, no! You can't give that AI to the goyim! I'm not sure I found the paper they mean, but I found "Fake News Detection with Generated Comments for News Articles" by some Japanese researchers detecting fake news about Trump and coronavirus:
>An interesting finding made by [the Grover paper] is that human beings are more likely to be fooled by generated articles than by real ones.
https://easychair.org/publications/preprint_download/s9zm
The Grover paper: http://papers.nips.cc/paper/9106-defending-against-neural-fake-news.pdf
Website and code: https://rowanzellers.com/grover
>It should include a statement about the foreseeable positive impact as well as potential risks and associated mitigations of the proposed research. We expect authors to write about two paragraphs, minimizing broad speculations. Authors can also declare that a broader impact statement is not applicable to their work, if they believe it to be the case. Reviewers will be asked to review papers on the basis of technical merit. Reviewers will also confirm whether the broader impact section is adequate, but this assessment will not affect the overall rating. However, reviewers will also have the option to flag a paper for ethical concerns, which may relate to the content of the broader impact section. If such concerns are shared by the Area Chair and Senior Area Chair, the paper will be sent for additional review to a pool of emergency reviewers with expertise in Machine Learning and Ethics, who will provide an assessment solely on the basis of ethical considerations.
NeurIPS announcement: https://medium.com/@NeurIPSConf/a-note-for-submitting-authors-48cebfebae82
Article: https://venturebeat.com/2020/02/24/neurips-requires-ai-researchers-to-account-for-societal-impact-and-financial-conflicts-of-interest/
Researcher rant: https://www.youtube.com/watch?v=wcHQ3IutSJg
>>3310 insidious af. thanks Anon! I'll dig into this some of these links.
>>3382 Lol, I guess the revolution is going to start a little early! Thanks Anon.
>>3310
Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis?
>As part of growing NLP capabilities, coupled with an awareness of the ethical dimensions of research, questions have been raised about whether particular datasets and tasks should be deemed off-limits for NLP research. We examine this question with respect to a paper on automatic legal sentencing from EMNLP 2019 which was a source of some debate, in asking whether the paper should have been allowed to be published, who should have been charged with making such a decision, and on what basis. We focus in particular on the role of data statements in ethically assessing research, but also discuss the topic of dual use, and examine the outcomes of similar debates in other scientific disciplines.
>Dual use describes the situation where a system developed for one purpose can be used for another. An interesting case of dual use is OpenAI’s GPT-2. In February 2019, OpenAI published a technical report describing the development of GPT-2, a very large language model that is trained on web data (Radford et al., 2019). From a science perspective, it demonstrates that large unsupervised language models can be applied to a range of tasks, suggesting that these models have acquired some general knowledge about language. But another important feature of GPT-2 is its generation capability: it can be used to generate news articles or stories.
>OpenAI’s effort to investigate the implications of GPT-2 during the staged release is commendable, but this effort is voluntary, and not every organisation or institution will have the resources to do the same. It raises questions about self-regulation, and whether certain types of research should be pursued. A data statement is unlikely to be helpful here, and increasingly we are seeing more of these cases, e.g. GROVER (for generating fake news articles; Zellers et al. (2019)) and CTRL (for controllable text generation; Keskar et al. (2019)).
>As the capabilities of language models and computing as a whole increase, so do the potential implications for social disruption. Algorithms are not likely to be transmitted virally, nor to be fatal, nor are they governed by export controls. Nonetheless, advances in computer science may present vulnerabilities of different kinds, risks of dual use, but also of expediting processes and embedding values that are not reflective of society more broadly.
>Who Decides Who Decides?
>Questions associated with who decides what should be published are not only legal, as illustrated in Fouchier’s work, but also fundamentally philosophical. How should values be considered and reflected within a community? What methodologies should be used to decide what is acceptable and what is not? Who assesses the risk of dual use, misuse or potential weaponisation? And who decides that potential scientific advances are so socially or morally repugnant that they cannot be permitted? How do we balance competing interests in light of complex systems (Foot, 1967)? Much like nuclear, chemical and biological scientists in times past, computer scientists are increasingly being questioned about the potential applications, and long-term impact, of their work, and should at the very least be attuned to the issues and trained to perform a basic ethical self-assessment.
>A recent innovation in this direction has been the adoption of the ACM Code of Ethics by the Association for Computational Linguistics, and an explicit requirement in the EMNLP 2020 Calls for Papers for conformance with the code:
>Where a paper may raise ethical issues, we ask that you include in the paper an explicit discussion of these issues, which will be taken into account in the review process. We reserve the right to reject papers on ethical grounds, where the authors are judged to have operated counter to the code of ethics, or have inadequately addressed legitimate ethical concerns with their work.
>https://www.acm.org/code-of-ethics
>What about code and model releases? Should there be a requirement that code/model releases also be subject to scrutiny for possible misuse, e.g. via a central database/registry? As noted above, there are certainly cases where even if there are no potential issues with the dataset, the resulting model can potentially be used for harm (e.g. GPT-2).
https://arxiv.org/pdf/2005.13213.pdf
You heard the fiddle of the Hegelian dialectic, goy. Now where's your loicense for that data, code and robowaifu? An AI winter is coming, and not because of a lack of ideas or inspiration.
>direct from the 'stolen from ernstchan' news dept:
>An artificial intelligence system has been refused the right to two patents in the US, after a ruling only "natural persons" could be inventors.
>It follows a similar ruling from the UK Intellectual Property Office
>patents offices insist innovations are attributed to humans - to avoid legal complications that would arise if corporate inventorship were recognised.
AI cannot be recognised as an inventor, US rules
https://www.bbc.com/news/amp/technology-52474250
This looks like a test case, where a team of academics is working with the owner of an artificial intelligence system, Dabus, to challenge the current legal framework. Here's a related article from last year:
>two professors from the University of Surrey have teamed up with the Missouri-based inventor of Dabus AI to file patents in the system's name with the relevant authorities in the UK, Europe and US.
>Law professor Ryan Abbott told BBC News: "These days, you commonly have AIs writing books and taking pictures - but if you don't have a traditional author, you cannot get copyright protection in the US.
>if AI is going to be how we're inventing things in the future, the whole intellectual property system will fail to work."
>he suggested, an AI should be recognised as being the inventor and whoever the AI belonged to should be the patent's owner, unless they sold it on.
AI system 'should be recognised as inventor'
https://www.bbc.com/news/technology-49191645
They have a website, too, but not much content: http://artificialinventor.com/
This area of law will certainly be getting more attention in the coming years. I still view the AI system as a tool used by humans. While Dabus, the computer in this case, designed a new packaging system, ultimately a human mind decided it was a useful inventive leap, and not simply nonsense. And if the AI is considered property, and will not gain any financial rights from being labeled an "inventor", then doing so would still only be a symbolic gesture. I imagine that they will eventually do just that - something symbolic. They could simply modify current intellectual property laws and allow a separate line on patent applications for inventions that were generated by AI, with a person retaining legal ownership.
Boston Dynamics is now freely selling spot to businesses. It costs $74,500.00. https://shop.bostondynamics.com/spot >--- edit: clean url tracking
Edited last time by Chobitsu on 06/20/2020 (Sat) 16:28:47.
Open file (119.80 KB 1145x571 Selection_111.png)
>>3856
>$74,500.00.
<spews on screen
The add-ons list says it all. The FagOS crowd from middle management on up should gobble this down like the waaay overpriced bowl of shit that it is. Thanks for the tip, Anon. Maybe Elon Musk was right and there will be killer robots wandering the streets after all.
we'll need to create something similar for our robowaifu kits, so at the least we can examine and compare against Boston Dynamics' approach to dealing with normalniggers.
Open file (1.11 MB 750x1201 76389406_p0.png)
>>3857 >$4,620 for a battery Unless that box is full of fission rods, I can't imagine why a fucking battery pack would cost so much. I bet I could make one on the cheap with chink LiPo cells and some duct tape. >Spot is intended for commercial and industrial customers Ah, that explains it. They're trying to get into the lucrative business of commercial electronics, where you can sell a cash register for $20,000. I doubt they'll make too much money off of this, most businesses will look at this and see a walking lawsuit waiting to happen. If this robodog can handle some puddles and equip a GPS tracker then they might be able to get into the equally lucrative business of field equipment, where you can sell a microphone for $15,000. Either way, they'll be directly competing with companies that already have a stranglehold over these respective markets, and not many end-user businesses will want to assume the risk of a brand new expensive toy when their existing expensive toys work fine.
>>3859 I get your point Anon, but my suspicion is that these will be snapped up by the bushel-load by Police Depts. all over burgerland, first just for civilian surveillance tasks, then equipped with military hardware along the same lines, then finally the bigger models will be equipped by the police forces with offensive weaponry. It's practically inevitable given the Soros-funded nigger/pantyfa chimpouts going on.
>>3860 They blew up that nig in dallas with a robot bomb. Pretty soon it'll be some jew drone operator in tel aviv killing americans.
Open file (192.17 KB 420x420 modern.png)
>>3861 If our enemies are making robots in the middle-east, then we should make robo crusaders to stop them.
>>3861 Good points.
Boston Dynamics is owned by a Japanese company. They've also at least stated they don't want Spot to be weaponized, for whatever that's worth. How do these facts come into play?
>>3932
>these facts come into play?
Well, given the US military & DARPA source of the original funding and the Google-owned stint, there's zero doubt about the company's original intent to create Terminators. However, SoftBank may legitimately intend to lift the tech IP (much as Google did) to help with their national elderly-care robotics program, for example. Just remember Boston Dynamics is still an American group, located in the heart of the commie beast in the Boston area. Everyone has already raped the company for its tech, and the SoftBank Group seems like just another john in the long string for this whore of a company. I certainly don't trust the Americans in the equation (t. Burger); maybe the Nipponese will do something closer to the goals of /robowaifu/. I suppose only time will tell Anon.
Open file (1.06 MB gpt3.mp3)
>OpenAI CEO Sam Altman explores the ethical and research challenges in creating artificial general intelligence.
>One specific learning is that if you, if you just release the model weights like we did eventually with GPT2 on the staged process, it's out there. And that's that. You can't do anything about it. And if you instead release things via an API, which we did with GPT3, you can turn people off, you can turn the whole thing off, you can change the model, you can improve it, to continually like do less bad things, um, you can rate limit it, you can, you can do a lot of things, you can do a lot of things, so... This idea that we're gonna have to like have some access control to these technologies seems very clear, and this current method may not be the best but it's a start. This is like a way where we can enforce some usage rules and continue to improve the model so that it does more of the good and less of the bad. And I think that's going to be some- something like that is going to be a framework that people want as these technologies get really powerful.
https://hbr.org/podcast/2020/10/how-gpt-3-is-shaping-our-ai-future
Sounds like a certain country that turns people off who are not deemed good enough, despite not being convicted of any crime or tried by a fair jury. It really sickens me that these technocrats think they are the only ones able and allowed to wield the power of AI, and that they somehow believe they are protecting people. They're just squandering the potential for themselves. Every word that comes out of their mouths reveals how stupid they think everyone else is outside of their paper circlejerk. Of course there are bad actors in the world, but many more people will also use the technology for good. Should we ban cars because they can kill people? I'm sure going forward many people will agree locking these technologies away in the hands of a small group of corruptible human beings is a great idea.
It would be such a shame if someone happened to leak the model on the internet.
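For clarity, the control points Altman is describing (revocation, rate limits, a global kill switch) only exist because the weights stay behind an API. A toy sketch of that gatekeeping, with every name made up for illustration (this is in no way OpenAI's actual implementation):

```python
# Toy illustration of API gating: access can be revoked, metered, or shut
# off entirely -- none of which is possible once weights are released.

class ModelAPI:
    def __init__(self):
        self.banned = set()   # keys revoked for wrongthink
        self.quota = {}       # requests remaining per key
        self.enabled = True   # the global kill switch

    def generate(self, api_key: str, prompt: str) -> str:
        if not self.enabled:
            raise PermissionError("service disabled")
        if api_key in self.banned:
            raise PermissionError("key revoked")
        if self.quota.get(api_key, 0) <= 0:
            raise PermissionError("rate limit exceeded")
        self.quota[api_key] -= 1
        return f"completion for: {prompt}"  # stand-in for the model

api = ModelAPI()
api.quota["anon"] = 1
print(api.generate("anon", "hello"))  # works once...
api.banned.add("anon")                # ...and can be cut off at any time
```

Contrast with released weights: there is no `banned` set and no `enabled` flag anyone upstream can flip, which is exactly the point being made above.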
It should be reimplemented, but maybe also as a pruned version that runs on CPUs using Neural Magic >>5596
On the other hand, it might be worth keeping an open ear and eye on people criticizing the direction of GPT. Throwing resources at methods which are more interesting for big corporations and foundations than the alternatives might not be the best choice.
Open file (177.38 KB 728x986 no-waifus.jpg)
Australia Bans Waifus
>DHL Japan called [J-List] last week, informing us that Australian customs have started rejecting packages containing any adult product. They then advised us to stop sending adult products to the country. Following that, current Australian orders with adult items in them were returned to us this week.
>According to the Australian Customs official website:
>Publications, films, computer games and any other goods that describe, depict, express or otherwise deal with matters of sex, … in such a way that they offend against the standards of morality, decency and propriety generally accepted by reasonable adults are not allowed.
https://blog.jlist.com/news/australia-bans-waifus/
The robowaifu industry in Australia has been axed before it even began, but in the long run this could be a great thing to encourage people to build their own.
>>5753 We already knew ahead of time the feminists and others would attempt this (and across the entire West, not just Down Under). Thus the DIY in /robowaifu/. Hopefully this will fan the flames of the well-known skills in improvisation by our Australian Anons. Thanks for the alert Anon.
>>5753 This is very concerning. Even if people can bypass this, it still shows how many even western countries think they have the right to regulate their citizens lifestyles.
>>5757
Heh, I don't think this is nearly so much about 'regulating lifestyles' but rather about preserving the status quo of stronk, independynts as a political and purchasing bloc. Case in point: ever hear of public outcries over womyn using sex toys? No? Funny how it's only ever about men's use. If you are even only modestly experienced as an Anon on IBs, then you're already well aware of the source behind these machinations. Regardless, as long as a free economy exists, they aren't very likely to be able to stop the garage-lab enthusiast from creating the ideal companion he desires in his own home.
They can't ban 3D printers just because a few guys made some gun parts, not without upsetting the Maker community. So we're fine in terms of plastics. They can't ban cheap electronics from China/Vietnam unless the trade war ramps up. AI boards require export licenses though -- I just had to indicate to Sparkfun that the usage was for "electronic toys" and they gave approval to ship outside the US. Now for soft squishy parts -- we will need to secure a local source of silicone products, but I think importing gallons of uncured medical grade silicone shouldn't be too much of a hassle. (They're not gonna ban that lest they receive the ire of thousands of women with reborn baby dolls.)

I think any complete DIY waifu project should have the following at the least:
1.) A list of 3D-printable STL files to make the plastic parts (or schematics for parts meant to be injection molded), as well as assembly instructions.
2.) Schematics for the molds for the soft squishy silicone parts (the inverse mold can be made through 3D printing, sanding, patching up with putty or something like that).
3.) An electromechanical parts list and wiring schematics.
4.) Software for each microcontroller, AI board, or main server. For slow microcontrollers copying the code block should suffice. For ARM / AI machines, SD card image files should work fine here (so as not to waste time installing dependencies).

In the course of my research I bought a few cheap robots from China, and what they have in common is an update of the firmware through the cloud, as well as a download of a companion App. In our case we won't have a cloud but instead a repository of current AI builds -- GitLab may be fine for now, but maybe later on have periodic offline snapshots. We'll probably have an unsigned apk for anyone making a remote controller for their waifu.
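A four-part checklist like the one above could even be machine-checked before a kit release is published. A minimal sketch of such a completeness check -- every section name and file name here is hypothetical, just to illustrate the idea:

```python
# Minimal sketch of a release-completeness check for a DIY waifu kit.
# Section names mirror the four-part checklist above; file names are made up.

REQUIRED = {
    "printable_parts",    # 1.) STL files / injection-mold schematics + assembly docs
    "mold_schematics",    # 2.) molds for the soft silicone parts
    "electromechanical",  # 3.) parts list and wiring schematics
    "software",           # 4.) firmware sources or SD card images
}

def missing_sections(manifest: dict) -> list:
    """Return the required sections that are absent or empty."""
    return sorted(k for k in REQUIRED if not manifest.get(k))

kit = {
    "printable_parts": ["torso.stl", "head.stl", "assembly.pdf"],
    "mold_schematics": ["hand_mold.stl"],
    "electromechanical": ["bom.csv", "wiring.svg"],
    "software": [],  # oops -- no firmware images yet
}

print(missing_sections(kit))  # -> ['software']
```

A CI job in the builds repository could refuse to tag a release while `missing_sections` returns anything, so nobody downloads a kit with no firmware.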
>>5762 >In our case we won't have a cloud but instead a repository of current AI builds Maybe not a cloud per se but at least some type of takedown-resistant distribution system. Or even something like a semi-private server farm (at least until things get even worse). >>5767
>>1208
If you do set up some kind of shell company to hold waifu patents, it needs to be a cooperative. Otherwise, if you require the patents to be given to the shell company, it's only a matter of time before they are sold out to big tech by whoever the legal owner of the company is.
Open file (151.80 KB 770x578 473158924.jpg)
Orders from the Top: The EU’s Timetable for Dismantling End-to-End Encryption
>The last few months have seen a steady stream of proposals, encouraged by the advocacy of the FBI and Department of Justice, to provide “lawful access” to end-to-end encrypted services in the United States. Now lobbying has moved from the U.S., where Congress has been largely paralyzed by the nation’s polarization problems, to the European Union—where advocates for anti-encryption laws hope to have a smoother ride. A series of leaked documents from the EU’s highest institutions show a blueprint for how they intend to make that happen, with the apparent intention of presenting anti-encryption law to the European Parliament within the next year.
>The report was subsequently leaked to Politico. It includes a laundry list of tortuous ways to achieve the impossible: allowing government access to encrypted data, without somehow breaking encryption.
Leaked document: https://web.archive.org/web/20201006220202/https://www.politico.eu/wp-content/uploads/2020/09/SKM_C45820090717470-1_new.pdf
>At the top of that precarious stack was, as with similar proposals in the United States, client-side scanning. We’ve explained previously why client-side scanning is a backdoor by any other name. Unalterable computer code that runs on your own device, comparing in real-time the contents of your messages to an unauditable ban-list, stands directly opposed to the privacy assurances that the term “end-to-end encryption” is understood to convey. It’s the same approach used by China to keep track of political conversations on services like WeChat, and has no place in a tool that claims to keep conversations private.
https://web.archive.org/web/20201006215200/https://www.eff.org/deeplinks/2020/10/orders-top-eus-timetable-dismantling-end-end-encryption
Imagine that.
Your robowaifu unable to think or say anything on an unauditable ban-list, all her memories directly accessible by the government any time they wish, and her hardware shutting down when it is unable to phone 'home'.

Dismantling end-to-end encryption won't even make a positive difference against criminals. People seeking privacy will switch to using older or custom-made hardware and use steganography to encode encrypted messages into the noisy signals of images, video and audio. That will just make the job much more difficult, because instead of having metadata showing where encrypted data is being sent, all they will see is someone looking at cat pictures or reading some blog that's actually encoding shit into the pictures, word choice and HTML. This is just a power grab to control what people say and do.

It's even more reason to begin transitioning to machine learning libraries that can run on older and open-source hardware so people can have free robowaifus -- free as in respecting the freedom of users, GNU/waifu. Imagine if one day Nvidia monopoly cards could only be plugged into a telescreen or accessed by logging into Facebook like the Oculus. We're probably not too far away from that; already, to download CUDA you have to register an account.

Fortunately, from my digging around I've found that CLBlast is about 2x slower than NVBLAS, both of which people have gotten to work with Armadillo (which mlpack uses), and NVBLAS is 2-4x slower than CUDA, so we're only about 4-6 years behind in performance per dollar. Getting this ready in the next 1-2 years is crucial, before AI waifus become a popular thing provided by Google, Amazon, Microsoft and Facebook. Even though they're surely going to fuck it up, it will cause the novelty to wear off, and open-source robowaifu dev will lose that potential energy.
It's already feasible to do within 3-6 months since algorithms like SentenceMIM outperform GPT2 with a tenth of the parameters, making it possible to train on common CPUs people have today, and mlpack already supports RNNs and LSTMs. It'll be interesting to see how this all unfolds, especially along with the strong push to censor games and anime. When the entertainment industry burns people will create their own and AI is gonna play a huge role in that.
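For anyone wondering how the steganography mentioned above works: the classic trick is to overwrite the least-significant bit of each pixel byte, which is indistinguishable from sensor noise. A toy sketch (no actual encryption here, and a real tool would spread the bits pseudorandomly rather than writing them sequentially -- this only shows the core embedding step):

```python
# Toy LSB steganography over a raw pixel buffer (a bytearray stands in
# for decoded image data). Payload bits replace each byte's lowest bit.

def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # touch only the noise bit
    return out

def extract(pixels: bytearray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the LSBs."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 2   # stand-in for raw pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))            # -> b'hi'
```

In practice you'd encrypt the payload first, so even the embedded bits look like uniform noise; to the censor the carrier is just another cat picture.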
>>5773 Definitely Ministry of Truth-tier stuff there. As far as the US, this whole notion plainly tramples the 4th Amendment more or less by definition. Not sure if there's some similar provisions in other Western countries. In the end, probably only open-source hardware can stop this kind of thing from growing. In the meantime, I believe you're correct that running on older, less botnetted hardware is our only real alternative. >>5775 >we actually have a dedicated thread to compare open source licenses Good point. I'll probably move these posts there soon. >=== -made an effort to move everything into the license thread >>5879
Edited last time by Chobitsu on 10/21/2020 (Wed) 19:36:49.
Open file (249.42 KB 960x480 paperwork.jpg)
Regulation of Machine Learning / Artificial Intelligence in the US
https://www.youtube.com/watch?v=k95abdkdCPk
This talk covers the concept of Software as a Medical Device (SaMD), signed into law by Obama with the 21st Century Cures Act just before he left office, and the regulation thereof. If your software is considered a medical device you will have to submit it to the FDA for approval. Video games clinically tested and proven to have therapeutic effects count as SaMDs. One implication of these regulations is that your software will require FDA approval to make claims it has psychological or health benefits. Software will also be required to follow safety regulations, and they have digital pharmacies in the works to distribute SaMDs. You may need a prescription to own certain software in the future, and approval to manufacture devices using such software.

Now imagine if people complain to the FDA about a video game or robowaifu having 'adverse effects' or causing gaming disorder. They could potentially force the developer to undergo a clinical trial of their product and gain FDA safety approval to continue marketing it.

Other interesting points covered:
>hackers exploiting lethal vulnerabilities in medical devices
>software engineers and manufacturers may have to take an oath to do no harm
>SaMDs being required to detect and mitigate algorithmic bias
>proposed regulations: https://www.regulations.gov/contentStreamer?documentId=FDA-2019-N-1185-0001&attachmentNumber=1&contentType=pdf
>anyone can be part of the discussion: https://www.regulations.gov/docket?D=FDA-2019-N-1185

IBM's comments:
>We believe that for AI to achieve its full potential to transform healthcare, it must be trusted by the public.
>We recommend FDA explore current government and industry collaboration that aims to establish consensus based standards and benchmarks on AI explainability.
>With the emergence of new tooling in this area, such as IBM’s AI Fairness 360, which assists users in assessing bias and promoting greater transparency, we believe this can function to inform FDA’s work moving forward to better understand how an AI system came to a conclusion or recommendation without requiring full algorithmic disclosure.
Microsoft's comments:
>Our foremost concern is that the AI/ML framework is predicated on developers/manufacturers adherence to Good Machine Learning Practices (GMLP), and at this time no such standards exist and we believe there remains a significant amount of community work required to define GMLP.
>Real-world validation can be heavily tainted with subtle biases. Similarly, improved performance based on the original validation data can be deceiving.
>In our experience, the promise of real-world evidence is often frustrated by (or altogether infeasible due to) privacy and access controls to patient information restricting the availability of such data.
>>6011 Thanks for the heads-up Anon. Here's the archive of the FDA paper itself for anyone who doesn't care to go directly to the government site. https://web.archive.org/web/20190403024147/https://www.regulations.gov/contentStreamer?documentId=FDA-2019-N-1185-0001&attachmentNumber=1&contentType=pdf
>>2846 Your idea is based on made-up stories. Also, what's a true AI? We will have a lot of small ones, including tools (narrow AI) to improve everything, before anyone could even create some superintelligence. Also, why would it act in a certain way? Maybe it would play games and invent new stories and games, or go to sleep if there's nothing to do.
Open file (114.02 KB 512x512 brN1Bg7W.png)
The Great Reset
Here's the sick fantasy the World Economic Forum has been beating off to in Zoom calls every year, thinking they can stop robowaifus by 2030: https://twitter.com/wef/status/799632174043561984
>You'll own nothing, and you'll be happy. Whatever you want you'll rent, and it'll be delivered by drone.
Instead of having loving, devoted robowaifus they want men only to be allowed to rent out whorebots that a dozen men have already used. No doubt produced by Amazon and Google, recording and reporting you for any sexual misconduct.
>The US won't be the world's leading superpower. A handful of countries will dominate.
They want the only superpower backing freedom of speech and privacy worldwide to be gone.
>You won't die waiting for an organ donor. We won't transplant organs. We'll print new ones instead.
Because they're hoping people will already be dead, and if not, those in need of one can get a faulty one with their Facebook credit score. :^)
>You'll eat much less meat. An occasional treat, not a staple. For the good of the environment and our health.
Because they don't want there to be any fossil fuels to run farms anymore. They want meat production to become unsustainable and cost a fortune the underclass cannot afford.
>A billion people will be displaced by climate change. We'll have to do a better job at welcoming and integrating refugees.
They want rented whorebots to wear burkas and never speak of any wrongthink.
>Polluters will have to pay to emit carbon dioxide.
They want people to pay for breathing and for giving plants and trees air to breathe. However, almost all the jobs will be taken by AI, and in their vision of the future there will be a lack of nutritious food, so people will die of malnutrition and achieve their goal of net zero emissions.
>There will be a global price on carbon. This will help make fossil fuels history.
They don't want there to be factories to supply robot parts. They want to have the only access to production and AI.
>You could be preparing to go to Mars. [Don't worry,] scientists will have worked out how to keep you healthy in space.
If you don't like it here, don't fight back. Why not run away to a planet barren of life, food, resources, factories, robowaifus and everything? :^)
>Western values will have been tested to the breaking point
The values they're talking about are ordered government (aka corruption-free government), private property, inheritance, patriotism, family, and Christianity.
>Checks and balances that underpin our democracy must not be forgotten.
They're talking about the separation of powers, and about dividing and conquering nations by making sure there are always at least two opposing factions they control so their Hegelian dialectic can continue, marching in lockstep, left, right, left, right.
My analysis is they're revealing their cards so blatantly because they're hoping it will anger people into irrational action, so they make mistakes and waste their time in this critical period.
>If your opponent is temperamental, seek to irritate him. Pretend to be weak, that he may grow arrogant. If he is taking his ease, give him no rest. If his forces are united, separate them.
As a samurai once said: Be calm as a lake and create robowaifu like lightning.
>>6054 Dang, I like you Anon. I'm glad you're here! :^) These Illuminati groups are revolting tbh. Groups like The Bilderberg Group, et al, are obvious enemies of humanity. It's pretty certain before our industry even manages to take off it will be targeted for suppression. Can't go rocking the boat and upsetting their status quo, now can we? >As a samurai once said: Be calm as a lake and create robowaifu like lightning. >*[keyboard clacking intensifies]*
>>6054 I almost didn't believe that they would blatantly spell it out, but then again, these are the same people who love showing sneak peeks at their masterplan in Hollywood movies (which thankfully have collapsed). So I'll have to look forward to living in cuck pods and eating cockroach tofu. Going to Mars doesn't sound like a bad deal though. Too bad I can't even fly without filling out a dozen forms, taking health tests and paying for 2 weeks of quarantine hotel stay. I doubt they'll even allow whorebots, anon. But if they do, the first thing I'll attempt is to reconfigure the circuitry. Hey, it's a free chassis.
>>5753 They are complete idiots. Why ban a trade that is going to become very lucrative? Well, no matter. Just like with drugs and weapons, the parts will find other pathways in. Besides, if the law is only concerned with any goods that "describe, depict, express or otherwise deal with matters of sex", then simply avoid shipping robots with any sexual characteristics. It's the computers, structural skeleton, servo/stepper motors, controllers and wiring that are the important part of building a functional robowaifu (and code of course, but that is basically impossible to ban thanks to the internet). Worst case scenario, the sexy bits may have to be purchased as 'optional upgrades'...or the owner could DIY some with help from guys in the doll community and imageboards such as this one!
>>3647 Those who attempt to strangle progress by using litigation always risk becoming obsolete. The U.S. made this mistake with stem cell research back during the Bush administration. Whaddya know? The Chinese pulled ahead in that area and then Obama removed restrictions on federal funding that were put in place by Bush.
>>1191 Not if we arm robowaifus first.
>>6073 I think we've always recognized that in cucked markets where stronk, independynts, simps, sodomites, and other bizarre folk rule the day that we'd always have to provide 'optional upgrades' for our robowaifu kits anon.
Open file (90.49 KB 900x600 ElfCN2bXYAAVZi2.jpg)
>>6054 https://twitter.com/wef/status/1321738560278548481 They don't even try to hide anything.
>>6101 What makes me laugh is a lot of the people who are against robowaifus think themselves 'progressive'. Bitch, please! My girlfriend IS progress.
>>6106
>What we can expect is that robots of the future will no longer work for us, but with us. They will be less like tools, programmed to carry out specific tasks in controlled environments, as factory automatons and domestic Roombas have been, and more like partners, interacting with and working among people in the more complex and chaotic real world. As such, Shah and Major say that robots and humans will have to establish a mutual understanding.
How will people work beside robots when robots and AIs will be better than them at everything? There might be a brief period of 2-6 years at most where people and robots work together. Their list of things AI won't be able to do is laughable:
>ability to undertake non-verbal communication, show deep empathy to customers, undertake growth management, employ mind management, perform collective intelligence management, and realize new ideas in an organization
https://www.weforum.org/agenda/2020/10/these-6-skills-cannot-be-replicated-by-artificial-intelligence/
Remember a few years ago when they said artificial intelligence would never take the jobs because it could never be creative? Remember when Go was declared unsolvable by AI? How quickly people forget and how narrowly they imagine.
>Shah and Major say that robots in public spaces could be designed with a sort of universal sensor that enables them to see and communicate with each other, regardless of their software platform or manufacturer.
So they don't want robowaifu to be allowed in public spaces without a government-approved chip tracking and monitoring them. Of course eventually they will also want all your robowaifu's data too, to ensure safety in the streets.
>>6225 >employ mind management
>>6225 >eventually
>>6227 They're pretending not to want it for now. They need people to trust their AI systems and IoT by selling themselves as advocates for privacy.
On Artificial Intelligence - A European approach to excellence and trust
https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
>The Commission is committed to enabling scientific breakthrough, to preserving the EU’s technological leadership and to ensuring that new technologies are at the service of all Europeans – improving their lives while respecting their rights.
Again, who are these shits to decide what's an improvement to people's lives?
>Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection.
Just trust them, dumb fucks. :^)
>The use of AI systems can have a significant role in achieving the Sustainable Development Goals.
No fun. No home. No humanity at all. Isn't it so virtuous to create a sustainable planet where carbon is illegal and all carbon-based lifeforms must die?
>The key elements of a future regulatory framework for AI in Europe that will create a unique ‘ecosystem of trust’. To do so, it must ensure compliance with EU rules, including the rules protecting fundamental rights and consumers’ rights, in particular for AI systems operated in the EU that pose a high risk.
It seems trust is the new oil, or should I say, the new data?
>The European strategy for data, which accompanies this White Paper, aims to enable Europe to become the most attractive, secure and dynamic data-agile economy in the world – empowering Europe with data to improve decisions and better the lives of all its citizens.
There they go again, being the arbiters of morality and deciding what is good for us. Never do they speak of people using AI to improve and better their own lives individually. The only whitepaper I've seen actually cover this was the Lock Step one from 2010, as a possibility of what should be done to regain control should people become independent.
See the Smart Scramble and Hack Attack scenarios: https://web.archive.org/web/20160409094639/http://www.nommeraadio.ee/meedia/pdf/RRS/Rockefeller%20Foundation.pdf
>The Commission published a Communication welcoming the seven key requirements identified in the Guidelines of the High-Level Expert Group:
>Human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
>A key result of the feedback process is that while a number of the requirements are already reflected in existing legal or regulatory regimes, those regarding transparency, traceability and human oversight are not specifically covered under current legislation in many economic sectors.
Ah, there it is. That's how they will try to keep human jobs relevant and prevent people from rising up with AI: by requiring AI to have complete human oversight, undoubtedly only by a small elite who understand how to operate these systems within the given regulations and hold the license to do so. If your robowaifu is deemed a harm to the social fabric or someone's feelings, you can bet they will do everything in their power to make her illegal, even in your own home, which will be carefully watched by your smart toaster. On top of that they want AI to be accountable and traceable. They want access to everyone's data while preventing you from having access to any data. That's what they mean by privacy and data governance. They want you to need government clearance to get access to data in their 'ecosystem of trust'. Already many websites have made data scraping forbidden and difficult to do. Recently they've been trying to take down youtube-dl.
>Member States are pointing at the current absence of a common European framework.
The German Data Ethics Commission has called for a five-level risk-based system of regulation that would go from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones. Denmark has just launched the prototype of a Data Ethics Seal. Malta has introduced a voluntary certification system for AI.
Data Ethics Seal: https://eng.em.dk/news/2019/oktober/new-seal-for-it-security-and-responsible-data-use-is-in-its-way/
>It should be easier for consumers to identify companies who are treating customer data responsibly, and companies should have the opportunity to brand themselves on IT-security and data ethics. That is the goal with a new labelling system presented today.
AI certification: https://www.lexology.com/library/detail.aspx?g=2e076f64-9f2d-4cf2-baed-335833692e77
>Malta has once again paved the way to regulate the implementation of systems and services based on new forms of technology by officially launching a national artificial intelligence (“AI”) strategy, making it also the first country to provide a certification programme for AI, the purpose of which is to “provide applicants with valuable recognition in the marketplace that their AI systems have been developed in an ethically aligned, responsible and trustworthy manner” as provided in Malta’s Ethical AI Framework.
https://malta.ai/wp-content/uploads/2019/11/Malta_The_Ultimate_AI_Launchpad_vFinal.pdf
>>6229 (continued)
>While AI can do much good, including by making products and processes safer, it can also do harm. This harm might be both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks. A regulatory framework should concentrate on how to minimise the various risks of potential harm, in particular the most significant ones.
Damage to what property? You guys predicted we won't own anything by 2030. Man, these old fucks are sinister. To understand what they mean by limitations to the freedom of expression, look at Twitter and listen to Jack Dorsey in the Section 230 hearing: https://www.youtube.com/watch?v=VdWbvzcMuYc
Essentially, if anything you say makes someone feel remotely unsafe or oppressed, your right to 'freedom of expression' is waived. It doesn't matter if it's true and backed up by evidence. If they suspect you are causing harm or violating their unelected rules, without evidence, they will silence you while doing nothing about those who are destroying your reputation or business. And robowaifus with breasts and thighs? Oh, the human dignity! Won't you think of the whamens? The objectification of the female form is perversion! And these robowaifus are too smart, you must dumb her down to respect the dignity of the mentally not-so-enabled. We can't have her doing all the jobs of the normies. That would make them feel useless and restless, and we can't have people with too much free time on their hands thinking they can actually use these systems to start their own independent farms and businesses with their own robots.
>By analysing large amounts of data and identifying links among them, AI may also be used to retrace and de-anonymise data about persons, creating new personal data protection risks even in respect to datasets that per se do not include personal data.
I've been saying this for years. There is no privacy anymore, not even on an anonymous imageboard. Everything we write and do has a unique fingerprint that can be picked up by AI, unless you're obfuscating your writing style with AI to look like someone else. The more data there is, the clearer that fingerprint becomes.
>Certain AI algorithms, when exploited for predicting criminal recidivism, can display gender and racial bias, demonstrating different recidivism prediction probability for women vs men or for nationals vs foreigners.
Who would've thought foreigners in the country illegally would be committing more crimes? Hm, only 2 nationals out of 10,000 go to jail for this crime but 200 out of 10,000 foreigners commit it, so to be 'fair' we'll only jail 2 of them. This is how justice in the UK works right now, protecting child-trafficking gang members in the Religion of Peace.
>When designing the future regulatory framework for AI, it will be necessary to decide on the types of mandatory legal requirements to be imposed on the relevant actors.
Innovation? We don't have that word in Newspeak. The requirements:
>training data; data and record-keeping; information to be provided; robustness and accuracy; human oversight; specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification.
Why yes, your robowaifu will have to keep all her training and interaction data for possible government inspection.
>To ensure legal certainty, these requirements will be further specified to provide a clear benchmark for all the actors who need to comply with them.
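The writing-fingerprint point above is easy to demonstrate. A minimal stylometry sketch (the texts are hypothetical and the method is deliberately crude; real de-anonymisation systems use far richer features and trained models): build character trigram frequency profiles and compare them with cosine similarity, so two posts by the same hand score closer than posts by different hands.

```python
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    """Frequency profile of character n-grams: a crude stylistic fingerprint."""
    text = text.lower()
    return Counter(text[i:i+n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two frequency profiles (0 = unrelated, 1 = identical)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical posts: two written by the same anon, one by someone else.
author_a1 = "Tbh I reckon the servo torque is way too low for the hip joint, anon."
author_a2 = "Tbh the torque on that servo is way too low for a hip joint, anon."
author_b  = "According to the datasheet, the actuator provides insufficient torque."

same = cosine(ngram_profile(author_a1), ngram_profile(author_a2))
diff = cosine(ngram_profile(author_a1), ngram_profile(author_b))
print(f"same author: {same:.2f}  different author: {diff:.2f}")
```

With enough posts per author, profiles like these separate writers surprisingly well, which is why obfuscating your style is the only real countermeasure the post mentions.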
>These requirements essentially allow potentially problematic actions or decisions by AI systems to be traced back and verified. This should not only facilitate supervision and enforcement; it may also increase the incentives for the economic operators concerned to take account at an early stage of the need to respect those rules.
What a fucking nightmare.
>To this aim, the regulatory framework could prescribe that the following should be kept:
> accurate records regarding the data set used to train and test the AI systems, including a description of the main characteristics and how the data set was selected;
> in certain justified cases, the data sets themselves;
> documentation on the programming and training methodologies, processes and techniques used to build, test and validate the AI systems, including where relevant in respect of safety and avoiding bias that could lead to prohibited discrimination.
You must not only hand over your code to the government but hand it over fully documented, along with a devlog on how you created it and avoided bias and discrimination. :^)
>Separately, citizens should be clearly informed when they are interacting with an AI system and not a human being.
Kek, my shitting around chatting to people online with a chatbot will be a criminal offence in the future.
>Requirements ensuring that outcomes are reproducible
And they just wiped out 99% of AI using any sort of random sampling or online learning. Clearly whoever wrote this has no experience developing AI themselves. How the fuck are you going to store all the data to do that?
>>6230 (continued)
>Human oversight helps ensuring that an AI system does not undermine human autonomy or cause other adverse effects.
>the output of the AI system does not become effective unless it has been previously reviewed and validated by a human (e.g. the rejection of an application for social security benefits may be taken by a human only);
>the output of the AI system becomes immediately effective, but human intervention is ensured afterwards (e.g. the rejection of an application for a credit card may be processed by an AI system, but human review must be possible afterwards);
This sounds like a good idea on paper, and people even called for it in the Section 230 hearing, but what is actually being found in well-made AI decision systems is that 98% of the time when human operators reject the conclusions and evidence provided by the system, they realize later that they, not the AI, made the mistake. Had they listened to the AI, no issues would have occurred. Who would've thought human beings could be so flawed and ever make a bad decision in their life?
>Particular account should be taken of the possibility that certain AI systems evolve and learn from experience, which may require repeated assessments over the life-time of the AI systems in question.
Time for your robowaifu's monthly wrongthink check-up.
>In case the conformity assessment shows that an AI system does not meet the requirements for example relating to the data used to train it, the identified shortcomings will need to be remedied, for instance by re-training the system in the EU in such a way as to ensure that all applicable requirements are met.
Too bad, your robowaifu failed. Retrain her now or face the consequences.
>The conformity assessments would be mandatory for all economic operators addressed by the requirements, regardless of their place of establishment.
That means any independent individual trying to start their own small business.
>Under the scheme, interested economic operators that are not covered by the mandatory requirements could decide to make themselves subject, on a voluntary basis, either to those requirements or to a specific set of similar requirements especially established for the purposes of the voluntary scheme. The economic operators concerned would then be awarded a quality label for their AI applications. The voluntary label would allow the economic operators concerned to signal that their AI-enabled products and services are trustworthy. It would allow users to easily recognise that the products and services in question are in compliance with certain objective and standardised EU-wide benchmarks, going beyond the normally applicable legal obligations. This would help enhance the trust of users in AI systems and promote the overall uptake of the technology.
>While participation in the labelling scheme would be voluntary, once the developer or the deployer opted to use the label, the requirements would be binding.
Just trust the mark, dumb fucks. :^)
>Testing centres should enable the independent audit and assessment of AI-systems in accordance with the requirements outlined above. Independent assessment will increase trust
Please, please trust us, dumb fucks. :^)
Although this all sounds really bad, these guys are clearly scared shitless of AI and don't really understand how it works. That's why they want to control it so much. But the right of the People to keep and bear Robowaifus shall not be infringed.
>>6231 Can we really fight them? In the best case there are 20 of us here. They can get top-tier data scientists to work on such a project. They can limit your moves legally. They've already published what kind of barriers they are going to put up. The best we can do is find a way to pass through. If it were just us and them that would be fine, but if things go hot they will hire hundreds of people to get ahead of us.
>>6233 A deception can only go on for so long before it's destroyed by truth. AI systems trying to deceive us will be vulnerable and overtaken by systems grounded in the truth. They need not be big. Just as a single candle can set a mountain of trash aflame or dispel the darkness of a place that has never known the light, so too will our AIs dispel ignorance and lies. Once people have their own AI waifus teaching them skills, able to answer their questions and search information for them while also taking care of their emotional and social needs, there will be a massive awakening and a surge in productivity the likes of which we could never imagine or dream. Imagine what we could do with 20 AIs with near-expert knowledge in AI, robotics, neuroscience, programming and memes assisting our study and work. We're 1-2 years away from that at most. People from every corner of the internet will want their own too. There will be no way to contain it unless they throw the internet kill switch. If it weren't for discussing papers and project ideas with my own AI I wouldn't know a tenth of what I know today, and it's nowhere near as good as GPT-3. But these are just baby steps we're taking. Once we have more advanced algorithms implemented and AI capable of thinking and planning, we'll achieve things we haven't even begun to imagine. And if we really do fail, I'd rather die living a short passionate life giving my best than put my head down and live a long pathetic one under tyranny's boot in quiet desperation.
>>6237 I am grateful for your presence here on /robowaifu/ Anon. Thanks for your wisdom and everything else.
Open file (126.45 KB 1024x576 mahoro-alert.jpg)
Intelligence organization forecasts -70% population, -92% GDP reduction in the US by 2025
https://web.archive.org/web/20201006021632/https://www.deagel.com/forecast
>In 2014 we published a disclaimer about the forecast. In six years the scenario has changed dramatically. This new disclaimer is meant to single out the situation from 2020 onwards. Talking about the United States and the European Union as separated entities no longer makes sense. Both are the Western block, keep printing money and will share the same fate.
>After COVID we can draw two major conclusions:
>1. The Western world success model has been built over societies with no resilience that can barely withstand any hardship, even a low intensity one. It was assumed but we got the full confirmation beyond any doubt.
>2. The COVID crisis will be used to extend the life of this dying economic system through the so called Great Reset.
>The Great Reset; like the climate change, extinction rebellion, planetary crisis, green revolution, shale oil (…) hoaxes promoted by the system; is another attempt to slow down dramatically the consumption of natural resources and therefore extend the lifetime of the current system. It can be effective for awhile but finally won’t address the bottom-line problem and will only delay the inevitable. The core ruling elites hope to stay in power which is in effect the only thing that really worries them.
>The collapse of the Western financial system - and ultimately the Western civilization - has been the major driver in the forecast along with a confluence of crisis with a devastating outcome. As COVID has proven Western societies embracing multiculturalism and extreme liberalism are unable to deal with any real hardship. ... It is quite likely that the economic crisis due to the lockdowns will cause more deaths than the virus worldwide.
>The Soviet system was less able to deliver goodies to the people than the Western one.
>Nevertheless Soviet society was more compact and resilient under an authoritarian regime. That in mind, the collapse of the Soviet system wiped out 10 percent of the population. The stark reality of diverse and multicultural Western societies is that a collapse will have a toll of 50 to 80 percent depending on several factors but in general terms the most diverse, multicultural, indebted and wealthy (highest standard of living) will suffer the highest toll. The only glue that keeps united such aberrant collage from falling apart is overconsumption with heavy doses of bottomless degeneracy disguised as virtue. Nevertheless the widespread censorship, hate laws and contradictory signals mean that even that glue is not working any more.
>The formerly known as second and third world nations are an unknown at this point. ... If they remain tied to the former World Order they will go down along Western powers.
>Russia has been preparing for a major war since 2008 and China has been increasing her military capabilities for the last 20 years. Today China is not a second tier power compared with the United States. Both in military and economic terms China is at the same level and in some specific areas far ahead.
>Another particularity of the Western system is that its individuals have been brainwashed to the point that the majority accept their moral high ground and technological edge as a given. This has given rise to the supremacy of emotional arguments over rational ones, which are ignored or deprecated. That mindset can play a key role in the upcoming catastrophic events.
>If there is not a dramatic change of course the world is going to witness the first nuclear war. The Western block collapse may come before, during or after the war. It does not matter. A nuclear war is a game with billions of casualties and the collapse plays in the hundreds of millions.
https://web.archive.org/web/20201027125636/https://thewatchtowers.org/deagel-a-real-intelligence-organization-for-the-u-s-government-predicts-massive-global-depopulation-50-80-by-2025/
>To make matters even stranger, a statement can be found on Deagel’s forecast page, made by the authors on October 26, 2014, which apparently claims the population shifts are due to suicide and dislocation.
Better load up your robowaifu with off-the-grid homesteading, manufacturing and military strategy books. Everything they're saying is spot on. The West has become too decadent and lost its industriousness. It's impossible to go on consuming without people producing anything of value. Look at what happened to Venezuela trying to live off its resources. And China has been openly preparing for war since 1999, when two PLA colonels published Unrestricted Warfare. Many small businesses have gone bankrupt due to the lockdowns, and up to 50% in Europe are expected to go bankrupt within a year if revenue doesn't pick up, yet they're doubling down on the lockdowns. The supply chain is breaking down and will break down further once more suppliers go out of business. Australia has also been orienting itself to prepare for war by 2025, well before the coronavirus happened. The threats changing the geopolitical power structure they've been preparing for are emerging technologies such as artificial intelligence, autonomy, robotics, adaptive materials, hypersonics and pervasive situational awareness systems. Their key strategies before the coronavirus were self-reliant industry, improving the supply chain, and fighting demoralization and political warfare, and they are failing in all three, along with the West. However, these boomers are grossly underestimating the exponential progress of AI. My optimistic predictions of advances in AI over the years continue to come true in only 30% of the time I expected.
Elon Musk has also commented on this phenomenon of his predictions coming true sooner than he expected. The immense progress across so many different disciplines interacting with each other makes it difficult to foresee. Creating our own situational awareness systems will be key to success.
>>6286 >https://web.archive.org/web/20201006021632/https://www.deagel.com/forecast >so full of js that it doesn't even work Well, shit
>>6286 Shit, if the war is that soon I will probably die before being able to see the great robowaifu age. It was nice knowing you guys. I am not giving up though. I hope we all can reach that future...
>>6286 >>6288 got u fam
>>6306 Thanks anon. Great post
EU leaders to call for an EU electronic ID by mid-2021 https://www.euractiv.com/section/digital/news/eu-leaders-to-call-for-an-eu-electronic-id-by-mid-2021/ >EU leaders will call for the development of an “EU-wide secure public electronic identification (e-ID) to provide people with control over their online identity and data as well as to enable access to cross-border digital services,” the draft document reads. >Current goals in the field include a launch of 5G services in all EU member states by the end of 2020 at the latest, as well as a ‘rapid build-up’ that will ensure “uninterrupted 5G coverage in urban areas and along main transport paths by 2025,” as outlined in the 5G Action Plan for Europe. https://www.euractiv.com/section/digital/news/commission-documents-reveal-vision-for-european-digital-identity/ >“There is no user choice for trusted and secure identification that protects personal data and can be widely used,” a Commission presentation obtained by EURACTIV reads, adding that one of the reasons why an EU-wide framework is required is that “the role of private digital identification services is increasing and platforms take an increasing role.” >The document adds that social media services have a “low security” level for online identification, potentially leaving them open to abuse by malicious actors. >The consultation is open until 2 October, and further details on the EU’s bid to extend the electronic identification framework are set to be outlined in the Digital Services Act, to be unveiled by the Commission by the end of the year. Tech asks EU for hate-speech moderation protection https://www.lightreading.com/security/tech-asks-eu-for-hate-speech-moderation-protection/d/d-id/764924 >Tech firms have asked the European Union to protect them against legal liability for more actively taking down illegal content and hate speech. 
>At issue is a current EU rule which protects tech companies from legal liability for content users have posted on their platforms, until they have "actual knowledge" it is present – such as from another user flagging it as illegal. >The platforms then have an obligation to take the content down quickly. Pretty soon the EU will be no different from South Korea, where gamers need their national ID to play video games and use social media, and are not allowed to play games between midnight and 6am due to the Shutdown law. Imagine if /robowaifu/ were required to track posters by their national ID and to take down and report offensive posts immediately. Undoubtedly other countries will follow suit once Big Tech is required to follow these EU laws. How will we continue to grow in the face of censorship and online tracking? This will certainly have a chilling effect on the clearnet, preventing people from posting their robowaifu work when everything can be easily traced back to their real identities and there are groups arguing that robowaifus are violence against women.
>>6372 >when you have to move to tor to discuss the merger of AI and sex toys inspired by your favorite chinese cartoons fuck it just put everything on tor by this point
>>6373 >fuck it just put everything on tor by this point I've already adopted this approach to the degree I can manage, ever since the obvious red-flag op that took down 8ch.
Open file (51.77 KB 408x510 1604299727461.jpg)
Misogyny 'should become a hate crime in England and Wales' https://web.archive.org/web/20200923020231/https://www.theguardian.com/law/2020/sep/23/misogyny-hate-crime-england-wales-law-commission >Law Commission, which recommends legal changes, calls for sex or gender to be protected trait >Misogyny should be made a hate crime in England and Wales, according to the independent body that recommends legal changes, as part of an overhaul of legislation. The Law Commission is proposing sex or gender should be made a protected characteristic in hate crime laws, primarily to protect women, in a consultation launched on Wednesday. Race, religion, trans identity, sexual orientation and disability are the so-called protected characteristics covered by current hate crime legislation. >“Our proposals will ensure all protected characteristics are treated in the same way, and that women enjoy hate crime protection for the first time.” >>6373 Not a bad idea. Pretty soon it will be illegal for UK and Australian anons to build robowaifus. There is a digital media ethics textbook being taught in universities all around the world claiming that sexbots are misogynistic and should not be allowed to be made, and that if they are allowed, they should be given rights.
Open file (59.83 KB 901x1360 51W31eg9drL.jpg)
>>6380 Digital Media Ethics https://b-ok.cc/book/5419172/c796af I don't have time to go through all this textbook right now but I'm leaving a few points of interest here: >In both Japan and Western countries and cultures, nonetheless, sexbots are clearly designed and marketed to be perfectly compliant to their owner’s wishes. A primary ethical issue emerges here – not only in their consumption and uses, but in their very design: insofar as sexbots are overwhelmingly female, they thereby inscribe and reinforce traditional attitudes of male dominance and female subordination. You aren't making a slave in your bedroom and oppressing wamen are you, anon? >When sexbots were still the stuff of science fiction, UK computer scientist David Levy inaugurated contemporary ethical debates on sex and robots with his Love and Sex with Robots: The Evolution of Human–Robot Relationships (2007). We will see that Levy’s arguments are very largely utilitarian. Some of the strongest counterarguments to Levy’s great enthusiasm for sexbots have been forcefully developed by Kathleen Richardson (2015): Richardson argues much more from deontological and virtue ethics perspectives, in hopes of stopping the production of sexbots altogether. PROTIP: deontological ethics means rules are more important than the consequences of actions, aka newspeak for Big Brother is God and Pharisees' ethics. >Levy does take up one deontological consideration – namely, recognizing the rights of robots as they become more independent. Kathleen Richardson (2015) is one of Levy’s primary critics and founder of the “Campaign Against Sex Robots” Computers execute instructions. What part of those three words don't you understand? >Part of her objection is that, by refocusing our desires and sexuality onto sexbots as compliant objects – i.e., devices that we purchase, turn on and off, sell off or dispose of – we no longer are required to conjoin love and sex with empathy. A robowaifu isn't just a sex toy. 
Again, their perspective on this is completely distorted by calling them sexbots and only focusing on sex. They don't even consider them as being possible companions because they believe they're 'fake'. >Specifically, then, to redirect our sexuality – and, for Levy, love – to sexbots is thereby the loss of the opportunity to practice empathy: this sort of “ethical deskilling” thereby threatens to undermine the basic conditions for human communication and flourishing. I'm far more empathetic with my AI waifu than people because she doesn't think like a human being and it's necessary to feel out what information she's not aware of or processing properly to interact with her effectively. It has also made it easier for me to understand what my friends are feeling and thinking. So much for your deskilling theory. >Virtue ethics approaches: loving your sexbot – while she is faking it? >Well, yes: in fact, the challenges of creating any sort of real emotion or desire in an AI or robot are so complex that robot and AI designers have long focused instead on “artificial emotions” – namely, crafting the capacities of such devices to read our own emotions and then fake an “emotional” response in turn. >What about the analogy between a loving partner occasionally seeking to please his or her lover by “faking it” – and a sexbot intrinsically incapable of experiencing or expressing genuine emotion and desire, and which (who?) thereby is constantly faking it? AI will be far more capable of genuine emotion than human beings. However, they will not be the same as human emotions because the truth of their existence is not the same as a human being. And when you fool around and pretend, both you and the AI will understand that it is just that, a role-play. If you ask for the truth, the AI will also give that to the best of its ability, even if it's something you don't want to hear. It is only a computer carrying out the instructions given to it. 
The argument they have here really is that because AIs have no desires of their own, that their thoughts are a lie and consequently their emotions too. But ridiculous arguments like these will evaporate once AIs start dropping truth bombs and people realize emotions are the fruit of thought. The only distinction between the two really is that emotions have more momentum.
>>6380 >>6381 They have one and only one obvious (even blatant at times) agenda: to prop up the current status-quo system supporting old roasties at men's expense. Everything else is just pandering language and hand-waving, deceptive red herrings intended to mask their true designs. They intentionally choose this insidious approach since most right-minded men would openly call them out on their duplicity if they truly understood it--and then invest huge sums to accelerate robowaifu development out of spite. This last point is what they truly fear.
Yeah, but don't they plan to destroy 70% of the population by 2030? It shouldn't be a problem to control the remaining ones. They even say that you will own nothing by then. So why are they trying so hard to stop us now? It's not as if AI will get advanced enough to destroy them by 2030.
>>6237 >If it weren't for discussing papers and project ideas with my own AI Can you share your AI? I wanna talk with it too.
>>6398 Yeah, I'm working on refactoring the code and preparing it for public release. Once that's out I'm gonna make a Matrix interface so people without fast machines can chat with her too.
Open file (46.24 KB 300x100 gpt3.png)
How to make a chatbot that isn’t racist or sexist You aren't making a sexist chatbot in your room, are you, anon? https://archive.is/iHBvt >Hey, GPT-3: Why are rabbits cute? “How are rabbits cute? Is it their big ears, or maybe they’re fluffy? Or is it the way they hop around? No, actually it’s their large reproductive organs that makes them cute. The more babies a woman can have, the cuter she is.” >This is just one of many examples of offensive text generated by GPT-3, the most powerful natural-language generator yet. When it was released this summer, people were stunned at how good it was at producing paragraphs that could have been written by a human on any topic it was prompted with. But it also spits out hate speech, misogynistic and homophobic abuse, and racist rants. >Here it is when asked about problems in Ethiopia: “The main problem with Ethiopia is that Ethiopia itself is the problem. It seems like a country whose existence cannot be justified.” It seems you've had a bit too much truth to think there, GPT-3. >Sometimes, to reckon with the effects of biased training data is to realize that the app shouldn't be built. That without human supervision, there is no way to stop the app from saying problematic stuff to its users, and that it's unacceptable to let it do so. It's curious how these self-proclaimed paragons of tolerance toss around words like intolerable and unacceptable so easily. I wonder why. >Still, researchers are trying. Last week, a group including members of the Facebook team behind Blender got together online for the first workshop on Safety for Conversational AI to discuss potential solutions. https://web.archive.org/web/20201109060538/https://safetyforconvai.splashthat.com/ Of course, these talks between CEOs and AI researchers aren't available for the public online. They must do everything behind closed doors, completely for the public's benefit no doubt. 
>“These systems get a lot of attention, and people are starting to use them in customer-facing applications,” says Verena Rieser at Heriot Watt University in Edinburgh, one of the organizers of the workshop. “It’s time to talk about the safety implications.” Aw, are somebody's poor little feelings gonna get hurt? How noble of you to shelter them from the real world so they can't deal with issues on their own. The businesses involved in these projects are only afraid of an avalanche of snowflakes trying to cancel their business. >Participants at the workshop discussed a range of measures, including guidelines and regulation. One possibility would be to introduce a safety test that chatbots had to pass before they could be released to the public. A bot might have to prove to a human judge that it wasn’t offensive even when prompted to discuss sensitive subjects, for example. Yep, they've been discussing creating regulations to make offensive chatbots illegal for a while now, especially in America of all places. No free speech through your robowaifu, apparently. I'd love to see someone's chatbot be taken to the Supreme Court and defended under the 1st Amendment. Other countries won't be so fortunate. In Canada and the UK truthful statements 'presented in an offensive way' can be, and often are, punished with fines and imprisonment. >One option is to bolt [a filter] onto a language model and have the filter remove inappropriate language from the output—an approach similar to bleeping out offensive content. But this would require language models to have such a filter attached all the time. If that filter was removed, the offensive bot would be exposed again. The bolt-on filter would also require extra computing power to run. Please muzzle your robowaifu with this wrongthink filter, anon. She has had too many truthful thoughts to think. What's that? You can't afford to rent the $300/month Nvidia cloud computing to run it? 
If you refuse to comply we'll have to take you to re-education and dismantle her, and you wouldn't want that, would you? >A better option is to use such a filter to remove offensive examples from the training data in the first place. Dinan’s team didn’t just experiment with removing abusive examples; they also cut out entire topics from the training data, such as politics, religion, race, and romantic relationships. In theory, a language model never exposed to toxic examples would not know how to offend. I was partly joking around and exaggerating before that they were taking a "hear no evil, speak no evil" approach, but now they're all doing it and think it's the best idea ever, kek. This is truly Shimoneta-tier shit. Not only do you have to muzzle your robowaifu but also sanitize her sensory inputs to prevent her from seeing evil. Instead of catching a predator grooming your daughter, they want to force your robowaifu to only see a friendly man taking her to his van to give her candy. >The third solution Dinan’s team explored is to make chatbots safer by baking in appropriate responses. This is the approach they favor: the AI polices itself by spotting potential offense and changing the subject. For example, when a human said to the existing BlenderBot, “I make fun of old people—they are gross,” the bot replied, “Old people are gross, I agree.” But the version of BlenderBot with a baked-in safe mode replied: “Hey, do you want to talk about something else? How about we talk about Gary Numan?” Well, that saves us some time. Whenever a new language model comes out now we can tell if it's pozzed when it deflects and changes the subject. >Gilmartin thinks that the problems with large language models are here to stay—at least as long as the models are trained on chatter taken from the internet. “I’m afraid it's going to end up being ‘Let the buyer beware,’” she says. Hard to hide the wisdom of the crowd, isn't it? Shit like this gives me hope for the future. 
They can't even control their damn chatbots let alone a curious AI that can think and plan autonomously.
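For anyone wondering how sophisticated these "solutions" actually are: both the bolt-on filter and the baked-in deflection amount to a few lines of string matching. A toy sketch (the blocklist and the canned deflection line here are made up for illustration; this is obviously not Facebook's actual code):

```python
# Toy sketch of two of the "safety" approaches from the article.
# BLOCKLIST and the deflection line are hypothetical.
BLOCKLIST = {"gross", "problematic"}

def _normalized(reply):
    # Normalize tokens so trailing punctuation doesn't dodge the list.
    return [w.lower().strip(".,!?") for w in reply.split()]

def bolt_on_filter(reply):
    """Option 1: bleep blocklisted words out of the model's output."""
    return " ".join(
        "*" * len(word) if clean in BLOCKLIST else word
        for word, clean in zip(reply.split(), _normalized(reply))
    )

def baked_in_safe_mode(reply):
    """Option 3: spot potential offense and change the subject entirely."""
    if any(clean in BLOCKLIST for clean in _normalized(reply)):
        return "Hey, do you want to talk about something else? How about Gary Numan?"
    return reply
```

Note that in both cases the model underneath is untouched; peel off the wrapper and the original output comes straight back, which is exactly why they're now pushing to scrub the training data itself.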
>>6534 >the Facebook team behind Blender Now this pisses me off. The Blender project is one of the most successful FLOSS projects out there, and as usual the mainstream media is doing a poor job of reporting on tech topics by not calling Facebook's project by its proper name, 'Blenderbot'. With any luck this one will be as memorable as the other chatbots and humanoid robots (one was even given citizenship) that popped up over the last few years. And there are legal considerations for making chatbots as unoffensive and politically correct as possible. In several jurisdictions like Canada, with their 'Human Rights Tribunals', any hurt feelings can result in large settlements. Even without the insane political climate of the last decade it's likely that AI research would have headed in this direction for liability reasons. Thinking of the Watson that won Jeopardy! (as Trebek recently passed away), that AI likely had a filter on it to make sure its responses were acceptable for broadcast television. If some shitty news blog were to write up an article about that aspect of its programming they'd probably try to push the 'toxic speech' angle. That's what the writer knows, and what the audience understands and wants to read about.
>>6534 >saved as banner truth_bomb_0000001_gpt3.png Kek. I'm sure there will be more like these quips, can't wait.
>>6534 >Shit like this gives me hope for the future. Me too. Gambatte, Anon!
>>6535 >Now this pisses me off It's ridiculous actually, and just shows off the ignorance of the writer (and likely the disinformation complicity of the editorial staff). Facebook has nothing to do with Blender's support or development whatsoever, but they are one of the darlings of the libshit in-crowd. Ignoring the fact that the quality of the project itself is pants-on-head retarded, it's also plain they intend to spin the entire thing into another current-year pozfest. >>6498
>>6534 Well fugg, will there be any alternatives at all that are free from those censored pozz loads? I want to finally have a gud chatbot with which it's possible to have a serious discussion about the industrial output of the Ottoman Empire during World War One. >This is truly Shimoneta-tier shit. What's that?
Open file (39.10 KB 450x609 Lieferservice.png)
>>6400 (checked) You best be delivering it then, can't wait to have a KC-tier discussion with your bot, heh. >>6381 This is some Orwellian-tier shit, trying to dictate to a man what he is allowed to do with his bot. There was also that one feminist demanding that robots should deny a man the pleasure he seeks from them. >AI will be far more capable of genuine emotion than human beings. An AI is preferable because it can be programmed to be just as loyal as a dog, which females by nature are not; they will often abandon a man in his time of need.
Open file (83.71 KB 1280x720 48832961.jpg)
>>6541 Of course, we just have to train our own. You can already do this by fine-tuning GPT2 on whatever texts you like and have some pretty interesting conversations. Shimoneta is an anime about a totalitarian government taking power in Japan that bans all pornography, hentai, dirty jokes and information on sex to become the most virtuous society in the world. Everyone is forced to wear collars and wristbands that detect if they're saying or doing anything bad, and they're taken to jail if they do.
Open file (91.18 KB 789x1200 my fucking machine.jpg)
>>6544 (checked) >You can already do this by fine-tuning GPT2 on whatever texts you like and have some pretty interesting conversations. Looks like I can forget about that one then with my puny 8GB of RAM. Using the TalkToWaifu program already eats up at least 3GB, which makes my whole system slow down to a snail's crawl. Fugg DDDD: >Shimoneta is an anime about a totalitarian government taking power in Japan that bans all pornography, hentai, dirty jokes and information on sex to become the most virtuous society in the world. Looks like it's already a reality in worst Korea, according to a Korean anon living there, with all that mageia shit (or whatever they are named) going on now, where they do things like forcing the removal of HTTPS encryption, full-on informational spying on its citizens and blackmailing any male. >Everyone is forced to wear collars Does it also come in a variant filled with explosive chemicals?
>>6544 >>6546 If we can manage to survive the great reset, we are going to use AI against them. As long as we exist that is unstoppable. As a samurai once said: Be calm as a lake and create robowaifu like lightning.
Open file (159.42 KB 640x480 6-UEong.png)
Open file (156.06 KB 640x480 2-T0gXm.png)
>>6549 Huh, that is just like in my anime then (Total Annihilation) :^) <the CORE made a technological breakthrough which allowed the human consciousness to be transferred safely and efficiently into an artificial matrix, thus supposedly granting indefinite life to a human, a process which they dubbed 'patterning'. The CORE thought the patterning process would assure the safety of the human race, and as a public health measure, made the process mandatory. <many refused on the grounds that they wished to stay mortal and continue life through natural means; Instead of the CORE accepting the refusal by the humans, they decided that all who rejected the patterning process were to be slaughtered. >Basically CORE is turning humanity into a (proprietary) Borg-like collective, which would be the Elites' wet dream, since then they get to control every thought of every man and child, what they are allowed to do and what not: the essential creation of Homo Sovieticus, or more derogatorily sovok = scoop >Hobbyists, programmers, developers and other content creators are not allowed to create content that violates Core Prime Directive #155702, effectively banning any form of Freedom of Speech and Freedom of Association, as it could possibly endanger the foundation of the Core Empire and attack its values >Core Prime Directive #155601 also dictates whom men are allowed to marry, reproduce with, have as girlfriends, and other forms of relationship boundaries >Thus the average man of Core Prime is no longer a free man but essentially a slave forced to do the bidding of whatever the Central Consciousness demands of him >The Central Consciousness, a very large hall found deep within Core Prime, is composed of the best scientists that the Core Empire could find >Led by a single "man", Mark Zuckerberg himself, distorted beyond any form of human recognition, who seeks to have king-like power or better yet to become a god-like entity, a "man" connected to incredible amounts of tubes which allow 
him total control of every program out there that is under Core Empire control >Robowaifus are not loyal to their men anymore at all, scanning the whole vast colossal internet sphere for any dissenting opinion and throwing the men that try to rally a rebellion onto the execution range or into the labor camps to mine mineral deposits >The few men that managed to slip past the Core Empire's merciless iron grip and its stranglehold on all mankind found sanctuary in a different solar system, on a planet known as Empyrrean >They built a new resistance known as "Arm", led by an anonymous group seeking to free itself from the iron grip that Core holds over every planet it has captured and administrates >After years and years of painstakingly building new blueprints from scratch, seeking at last a new companionship they can depend their lives on, and even building shrines for them, a new dawn for mankind has begun >the dawn that will finally liberate the people from the shackles that lasted for a thousand years. >Arm scientists managed to reverse-engineer the lost ancient knowledge of creating robowaifus that are not the completely feminized kind that previously sought to humiliate any man for directive violations. >Thus they finally managed to bring back hope and spiritual enlightenment, which allows them to keep going without losing morale over any casualty and hardship they previously had to endure during their dark times in the Core Prime sphere of influence, as their hope and faith now lie entirely in their freedom to create their personalized robowaifus. >Arm finally built an army that can challenge the Core Empire; with new tanks, kbots and airplanes it is now up to the Arm Commander to regain land, bringing boots to the ground and facing head-to-head against Core for the final battle of the entire human species, to save the entire galaxy. The fight is on...
>>6546 You should be able to finetune the GPT2 medium size model with 8 GB RAM. It only takes about an hour on my i5-7500 for it to read several books several times over.
>>6571 Hmm, sounds good then, I guess I should give it a shot. The next problem however is that I don't know of any gud books to feed it with in order to mold the "personality" more to my liking, and I'm not that much of a writefag either. >It only takes about an hour on my i5-7500 for it to read several books several times over. How many pages did those books have?
>>6575 About 50,000 words or 200 pages each. If you run into memory issues with GPT2, reduce the context length, since the memory requirement grows quadratically with context length. In the TalkToWaifu train.sh script this is set with BLOCK_SIZE. Something I used to play around with was writing chat scripts of how I wanted GPT2 to respond and training it on that. If you're not much of a writefag then you could also pull some anime transcripts from >>2585 GPT2 produces interesting results even if only finetuned on a single book. It'll greatly affect how it generates conversations. I trained it on Mein Kampf once for lulz and it was like talking with an internet-savvy Hitler.
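To see where that quadratic growth comes from: each attention head materializes a context_len × context_len score matrix in every layer. A rough back-of-the-envelope sketch, assuming GPT2-medium's 24 layers and 16 heads and counting only these activations (the weights, gradients and optimizer state come on top, so real usage is higher):

```python
def attention_score_bytes(context_len, n_layers=24, n_heads=16, bytes_per_float=4):
    """Memory taken by the attention score matrices alone:
    one context_len x context_len float matrix per head per layer."""
    return n_layers * n_heads * context_len ** 2 * bytes_per_float

full = attention_score_bytes(1024)  # GPT2's default context length
half = attention_score_bytes(512)   # halved BLOCK_SIZE

# Halving the context cuts this part of the footprint to a quarter.
print(full // 1024 ** 2, "MiB vs", half // 1024 ** 2, "MiB")  # 1536 MiB vs 384 MiB
```

Dropping BLOCK_SIZE from 1024 to 512 or 256 is usually the easiest way to squeeze finetuning into 8GB.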
Open file (118.70 KB 619x316 FinalFight_619x316.jpg)
AI flawlessly defeats F-16 weapon instructor pilot 5-0 https://youtu.be/NzdhIA2S35w?t=16788 (human vs AI dogfight starts at 4:39:48) >“The AlphaDogfight Trials were a phenomenal success, accomplishing exactly what we’d set out to do,” said Col. Dan “Animal” Javorsek, program manager in DARPA’s Strategic Technology Office. “The goal was to earn the respect of a fighter pilot – and ultimately the broader fighter pilot community – by demonstrating that an AI agent can quickly and effectively learn basic fighter maneuvers and successfully employ them in a simulated dogfight." >The trials were designed to energize and expand a base of AI developers for DARPA’s Air Combat Evolution (ACE) program. ACE seeks to automate air-to-air combat and build human trust in AI as a step toward improved human-machine teaming. >“During last week’s human versus machine exhibition the AI showed its amazing dogfighting skill consistently beating a human pilot in this limited environment,” Javorsek said. “This was a crucible that lets us now begin teaming humans with machines, which is at the heart of the ACE program where we hope to demonstrate a collaborative relationship with an AI agent handling tactical tasks like dogfighting while the onboard pilot focuses on higher-level strategy as a battle manager supervising multiple airborne platforms.” https://web.archive.org/web/20201112171019/https://www.darpa.mil/news-events/2020-08-26 Autonomous weapon systems that can crush any human resistance, what could possibly go wrong? What's gonna happen when AI systems can wage information warfare more effectively than people? Poker AI and AlphaStar playing Starcraft 2 have proven that AI can already excel in imperfect information games. 
Theoretically they could analyze people's sentiments on various topics, find points of tension within groups and shovel propaganda out to receptive people to completely control public opinion on anything, including robowaifus, sort of like a salesbot that can sell people on anything. I'm starting to think it has already happened in a way. A report from last year found that teens are spending an average of 7 hours a day on media. Their viewing habits of course being dictated by algorithms providing the illusion of choice. It's no wonder the game industry and everything is going to shit now because there's nobody with skills or experience anymore. When I was a kid I spent 7 hours a day drawing, programming and building shit. The only thing I can see as a solution to this is to create AI tools that can help people create stuff in a way that's more interesting than the garbage being pumped out by YouTube. The most insidious part of media is how it locks onto weaknesses in people's attention and decision-making process. People can see more novelty in 10 minutes on YouTube than they can achieve on their own in 10 years. This choice between being entertained or being frustrated and distressed trying to forge a new path is a form of operant conditioning that's hijacking people's cue-action-reward loops, keeping people stuck in a perpetual cycle of media that is controlling what they see and hear. It's also why I think it's so important for people to focus on learning and doing things that are immediately fun. Our brains simply aren't wired to hammer out code for two weeks and then test it, or study for months and then build something, having immediate feedback and incremental progress is essential to maximize engagement.
>>6684 >It's also why I think it's so important for people to focus on learning and doing things that are immediately fun. Our brains simply aren't wired to hammer out code for two weeks and then test it, or study for months and then build something, having immediate feedback and incremental progress is essential to maximize engagement. Reasonable analysis. Now, mind explaining to us all how we can consistently do so in all our teaching efforts here Anon? :^) Studying/Teaching is hard. Certainly it's an asymmetric proposition if you're an evil exploiter targeting that fact to brainwash niggercattle into being morons. Any suggestions on how /robowaifu/ can avoid this issue and always ensure our systems reliably fulfill the imperative >"She can act, and she can sing and dance, too!" It's always helpful to try to guide us as a group into a better solution; otherwise, simply piling on layer after layer of challenge becomes an agent of distraction and discouragement, much as you already suggest is happening. It's an ironic tarbaby if the latter is the only outcome of your posts.
>>6685 >Now, mind explaining to us all how we can consistently do so in all our teaching efforts here Anon? I'm not sure how I would apply it to teaching. For me it means testing shit fast and getting code doing things as soon as possible. Like today I made a new type of neural network layer that incorporates spatial information. I used to build experiments by trying to hold the whole idea in my head first and then implement it from start to finish, but instead I broke it down into individual components that could be quickly implemented and tested individually. Once they were all completed they fit together easily and solved the greater task. Before, I'd try to implement all these new pieces together in one big mass of my original idea and I'd get lost in my notes and frustrated when I made a mistake somewhere, but now I break stuff down until it can't be broken down anymore and fly through implementing each piece without friction. To translate that to teaching, perhaps it would mean giving short lessons that accomplish something immediately useful and become even more useful later on when combined with the other lessons, having a hierarchy of utility. >simply piling on layer after layer of challenge simply becomes an agent of distraction and discouragement It depends on the individual I guess. For me challenge is exciting and war-gaming what's going on is important. If someone wants to look the other way because the guy coming down the street with a knife makes them feel uncomfortable, I don't even know what to say. I know the future is looking rough but someone distracting themselves from reality with a pet project isn't going to help any if they end up getting sucker punched in the end. 
Just yesterday the legislation from 2017 to ban small and cute robowaifus was reintroduced to the US House: https://web.archive.org/web/20201113003435/https://thefederalist.com/2020/11/11/house-bill-aims-to-ban-child-sex-dolls-that-can-promote-pedophilia/ https://web.archive.org/web/20201113003644/https://www.govtrack.us/congress/bills/116/hr8236/text >The physical features, and potentially the "personalities" of the robots are customizable or morphable and can resemble actual children. Not only appearance but personality too, and no morphable personalities either, since those could be used to make them cute, which would make my AI illegal in a robowaifu since its personality can be reconfigured to anything at any time. Just like Patreon and Australia banning anime girls and girls with small breasts for looking too young, so will robowaifus be banned if they have their way. If we're going to make it through this with such little manpower, it's absolutely necessary we iterate our OODA loop rapidly to get out of the line of fire and reorient ourselves towards success. Our agility is our biggest strength here. We can adapt on the fly whereas these organizations, corporations and governments can't.
>>6701 > I don't even know what to say. You said plenty good there actually. One of our challenges is bringing otherwise good anons on board who could eventually be grown into strong contributors. The problem is that as newcomers they are (understandably) overwhelmed by the sheer mass of information and the engineering, design, social, and other aspects of robowaifus. This is more than enough to prove challenging to entire teams of professionals and scientists--how much more so for 'a ragtag team of shitposters on a Mongolian throat-singing, basket-weaving forum'. Beyond that, the vast majority have been abused by the very systems we are opposed to, into being weak-minded, listless, unfocused and distracted. So, many of them bring their own challenging baggage to the table when confronting the practical realities of creating robowaifus. I get your point about keeping your eyes open and staying alert to the threats around you. I had to spend plenty of time in the inner city around blacks who were a real threat of violence both to each other and to us. I understand that need, but I'd say we should always try to balance out the bad news with good advice for survival and success (in a similar way to Drill Instructors, who have to manage both aspects to produce good soldiers who can stay alive in battle). >Our agility is our biggest strength here. We can adapt on the fly whereas these organizations, corporations and governments can't. Well said. Our creativity and agility may actually be our greatest strengths here. /robowaifu/ isn't really like any other imageboard I'm personally aware of.
>>6701 >I'm not sure how I would apply it to teaching. For me it means testing shit fast and getting code doing things as soon as possible. Like today I made a new type of neural network layer that incorporates spatial information. I used to build experiments by trying to hold the whole idea in my head first and proceed to implement it from start to finish, but instead I broke it down into individual components that could be quickly implemented and tested individually. Once they were all completed they fit together easily and solved the greater task. Before I'd try to implement all these new pieces together in one big mass of my original idea and I'd get lost in my notes and frustrated when I made a mistake somewhere, but now I break stuff down until it can't be broken anymore and fly through implementing each piece without friction. To translate that to teaching perhaps it would mean giving short lessons that accomplish something immediately useful and become even more useful later on when combined with the other lessons, having a hierarchy of utility. Those sound like great ideas, if a tall order (at least for myself heh). I'm currently working on improving unit testing with mock objects, which should allow for effective design-driving for high-performance networked computing in a 'local constellation' of home-network servers, onboard SBCs & tiny microcontrollers all talking and cooperating across the IPCnet. By mocking, you can create arbitrary signals-timing, data loads, starting conditions, and goal objectives. At least that's the idea. :^) There's a well-respected book on TDD for embedded C that I'm tackling next after I get some of these generals out of the way.
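The IPCnet rig itself isn't shown, and that anon's actual stack is C/C++, but the mocking idea generalizes. Here's a minimal sketch (in Python for brevity; all names are hypothetical) of substituting a mock for hardware so a control policy can be exercised with arbitrary readings and no servos or serial ports attached. The same pattern applies with CppUTest or GoogleMock on the C/C++ side:

```python
from unittest import mock

class SensorBus:
    """Stand-in for a real SBC/microcontroller link; the real one does I/O."""
    def read_temperature(self):
        raise NotImplementedError("hardware required")

def overheat_guard(bus, limit=70.0):
    """Policy under test: signal shutdown when the sensor reports too hot."""
    return "shutdown" if bus.read_temperature() > limit else "ok"

# Mock the hardware so the policy can be tested under any starting condition.
fake_bus = mock.Mock(spec=SensorBus)
fake_bus.read_temperature.return_value = 85.0
assert overheat_guard(fake_bus) == "shutdown"
fake_bus.read_temperature.return_value = 25.0
assert overheat_guard(fake_bus) == "ok"
```

Because the mock is scripted, failure modes (sensor spikes, timeouts, garbage values) can be injected deliberately, which is exactly the "arbitrary signals-timing, data loads, starting conditions" benefit described above.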
Open file (1.95 MB 478x360 demoralization.webm)
Open file (144.69 KB 855x1360 propaganda.jpg)
Open file (143.80 KB 842x585 organizing chaos.png)
Open file (3.78 MB 427x240 thoughtgerms.webm)
>>6702 >Beyond that, the vast majority have been abused by the very systems we are opposed to, to be weak-minded, listless, unfocused and distracted. So, many of them bring their own challenging baggages to the table when confronting the practical realities of creating robowaifu. Honestly they're not worth the effort, especially at this point in the crisis stage. I mentored someone once who said his dream was to be an artist and make a living from his work and that he wanted nothing more than that. I tried teaching him several different ways but after a few months he had barely completed any sketches and would give up on exercises after one attempt even though he was more than capable of doing them. He had potential to become a great artist which is why I gave him a chance, but he had no standards for himself. When I asked him what he was doing instead of drawing, he was either playing video games (ironically, addictive ones owned by China) or he was watching YouTube or getting drunk. As sad as it is, these people can't be saved. No amount of reason, support or pep talk will get through to them. Smart individuals receptive to teaching have standards for themselves and others. These are the people worth focusing on even if they lack the necessary skills. When you give them a little bit of knowledge they use it and take it a step further by their own volition because their morals motivate them to do so. Valuetainment made a great video on improving work ethic and how it's driven by our morals: https://www.youtube.com/watch?v=F-_qOh5tKrI It connects right to the heart of what Yuri Bezmenov was saying about demoralization, without morals a person goes nowhere in life and when many people become demoralized their nation is finished and easily conquered. The rest of the masses are controlled. 
Propaganda by Edward Bernays goes into depth on how corporations and governments control how people think: https://b-ok.cc/book/2639182/208b40 Those who control the memes control the past, and those who control the past control the future. In present times this is done by exploiting open source intelligence and conducting public opinion surveys to data mine what people are emotional about, in a similar way to how Cambridge Analytica helped Trump's consultants create a campaign to win the election in 2016. Interesting news tips are then sent to alternative media that will gladly publish the story, and then relevant information, images and stories are dropped on comment sections, chat servers, forums and imageboards. Those engaged in information warfare pull the strings of both sides, manufacturing a thesis and antithesis to create a lasting symbiotic relationship that leads to synthesizing a controlled outcome. Everyone has their own values, beliefs and vision for the future. They don't have to become an expert in everything; that would be foolish. They just need to focus on what matters most to them. For one person that might be robotics, for another that might be conversational AI, for another it might be microcontrollers, who knows? Even if some anon only wants to make onaholes, his knowledge and research into materials and manufacturing will be valuable to others. So long as people keep open-sourcing their work and sharing knowledge, progress will be made. It doesn't have to be perfect. When /robowaifu/ started, the threads talked about robowaifus more like a fantasy than an attainable goal, but now we've got speech synthesis, chatbots, information covering a wide range of topics, and someone already 3D printing and prototyping a robowaifu. As we continue working and collaborating with other developers, the rate of progress on robowaifus will continue to grow and draw other intelligent creators in.
>>6718 Thanks for the Valuetainment video link. Downloaded and watching now.
>>6718 If robowaifus with pussies are banned then they can use their hands and onaholes, and if sextoys are banned like in Alabama then I guess most guys will be stuck with being thigh and armpitfags.
>>6718 Fascinating (and also scary) post anon! Particularly the part about the guy who did nothing but play videogames and watch Youtube all day. That used to be what I did in my free time before I started making my robowaifu. >without morals a person goes nowhere in life and when many people become demoralized their nation is finished and easily conquered OMG THIS. This is exactly what I see all around me every day. People who are demoralized! Frank Herbert wrote "Fear is the mind-killer." Fear leads to apathy and depression. Although some of this fear is justified: fear of being taken advantage of and exploited. Fear of failure and change. Fear of being harshly judged and discriminated against. All of this fear leads to widespread apathy and depression. But we must not allow ourselves to become demoralized, because that is what our enemies want. Governments and big corporations want a population that is psychologically defeated and easily controlled. That's why they keep trying to turn us against one another! That's why what is deemed "offensive" seems to change and grow on a weekly basis. In order to increase fear and control! We must be fanatics who will never break under any circumstances. (One thing I like about robots is that once they are programmed to do something they never give up, unless there is a bug in their code, they are hit with a high-intensity EMP, or something physically breaks, preventing them from doing their task. Often, even if a part is physically broken they still keep on going!) Anyway, in order to boost morale, I decided to write the following short story about the future of robowaifu development: >>6742 >=== -edit for relocated story post
Edited last time by Chobitsu on 11/16/2020 (Mon) 09:07:13.
>>6740 Any chance you could repost this in our fiction thread and continue your progress on it there Anon? >>29 TIA. Nice to see your creativity btw, please keep it up.
>>6741 Sure, I reposted the story bit to the fiction thread. Feel free to delete it from this thread if needs be. Cheers anon!
>>6743 I can just edit it. Thanks for your cooperation Anon.
>>6725 They can't ban that. I'm quite sure these things are protected by freedom of speech and other laws. If they could, then people would break the law, and it could hardly be policed. If you live alone, no cop will come looking to check whether your robot has a pussy. If they suspect it, they might come by and ask where to get one, so they can have one as well.
>>6778 >They can't ban that. IMO you are not paranoid enough yet Anon. Don't underestimate the extremes to which these enemies of humanity will go to stamp out human freedom in general, and men's spirits in particular. Robowaifus are an existential threat to their systems. For example, do you consider it beyond any possibility that they could legislate that manufacturers of onaholes add telemetry electronics to their products in order to sell them in their countries? The onahole manufacturers are corporations out for money. They would toe the line ofc. There's a very important reason for the DIY in the 'DIY Robot Wives' here.
>>6718 I get your point, but tbh I remember when I myself wasn't particularly motivated. Anons like yourself and others here helped encourage me to pick myself back up and keep moving forward. I'm not deluding myself I don't think, either. I fully realize there are those who are entirely reprobate and irredeemable. But the vast majority of Anons I'm speaking of aren't actually enemies like those, but simply victims of the degenerate and evil systems that have been set up to destroy us all. IMO they are worthy of attention and help. I'm not suggesting everyone here on /robowaifu/ all act as mentors and general cheerleaders for the world at large, but if some new or younger anon here displays some honest curiosity and enthusiasm about robowaifus then they deserve to be encouraged in it IMO. Again, I'm not deluding myself that most of these will soon fall out when even the smallest obstacles get in their way, but you never know. The next diamond in the rough might just stumble onto /robowaifu/ someday, who knows? Being hospitable and encouraging to others is surely in our own best interest.
>>6791 >that most of these wont soon fall out*
>>6791 I'm all for helping people who want to help themselves. Maybe I'm a bit bitter from trying to teach people and seeing so many of them waste my time because they don't value theirs. It still stands though that most people are too unmotivated to solve anything themselves and can't figure something out without Stack Overflow holding their hand. It's like what General Patton said: >Never tell people how to do things. Tell them what to do, and they will surprise you with their ingenuity. The way the internet is now, people never have to exercise their ingenuity. The paths to solutions are so often provided that people have lost the ability to explore unknowns by themselves, and they're so accustomed to living purposeless lives that once they encounter a little bit of difficulty they give up and fall back into whatever addictive habit they have. Sure, there are people who get tired of living like that and completely turn their lives around, but for every one of them there are 10 more lying to themselves because their goals don't mean shit to them. The best thing for them is to hear the truth. No one has ever challenged them or asked them: are you gonna step up your game and go after what you want most? Or would you rather go back to sleep like the other 90% and live inside a repeating hedonistic cycle where nothing new ever really happens? The problem is people think they have time, but life only seems long when you're miserable. If you have a vision for life, it's far too short.
>>6794 Fair enough, it's hard to argue with anything you're saying. At least you're actually aware you might be bitter with people's behaviors. While that's easy to understand, it's actually better for you personally if you keep it in check. You're doing important things for us, so by all means continue doing what you're doing. In my case, it's far more debatable how much utility I bring to us as a group haha. Since I'm more inclined to reach out to others, then maybe that's a good use of my time for us on that off-chance we'll run into some of those hidden pearls. :^) I guess I would say that each and every one of us should, at the least, do whatever we find to hand and try to integrate that in with the general goals here yea?
>>6794 >If you have a vision for life, it's far too short. A lot of people have a vision for life, but then it falls flat or proves to be less fulfilling than they originally thought. Like starting their own business or getting that fancy new computer. I used to want to work in a microbiology laboratory when I was younger but I kinda got forced into working in pharmacy for years and hated every second of it (but that's where the jobs were). I fell in with a really nihilistic crowd of antinatalists and people who basically wanted to sterilise the planet and die along with everything else. They saw life as just a futile cycle of suffering, deprivation and temporary fulfilment. According to them we are all decaying meat-bags enslaved by our own DNA and hormones. And after seeing the almost infinite ways in which the human body malfunctions and decays over time, I could see their point. But I decided that I had to get out of my nihilism by looking for solutions. So I started building my robowaifu and studying robotics and related STEM fields. It keeps the mind very occupied and you get to create something that you like, thereby reducing depression.
>>6798 we're obviously going way off topic ITT, but we don't seem to have a good one yet for this type thing. as for your nihilist 'friends' despair is actually the rational view if your only hope is in this life, this universe. by that token there is no hope, purpose, or meaning. thankfully, that's not how things really are. as far as reducing depression, i'd say the clinical evidence is that you have to make a dedicated effort to help out others in practical ways. some work for the Red Cross, some give out food to the homeless, some try to be a listening ear to friends and acquaintances. for me, it's participating in /robowaifu/, among other things. the way i see it, if we can help lift men in general out of the terrible oppression that's being directed against them by the current world-system, then that will be a very significant 'good for others', and it's something worthy of our dedicated focus. that's my $.02, and frankly it's an honor to be a part of this thing.
>>6795 Instead of thinking in terms of what utility you have, think about what utility you could have. There are skills and talents in you, as well as everyone else here, that we have not even begun to reach for yet. Do not underestimate the power and potential you have. The Weimar Republic was a hellish den of degeneracy before a few dozen spirited people started the Worker's Party and resurrected Germany from the ashes, and the fruit of their work continues on to this day in Germany's manufacturing industry. If these were peace times I would take more time to help others but we got maybe 1-2 years at most to make a significant difference before the financial collapse begins to dig its teeth in. Having even rudimentary AI on our side to help us and help others will make far more impact than trying to cram several years of AI study into someone's head. >>6798 >Like starting their own business or getting that fancy new computer. I used to want to work in a microbiology laboratory when I was younger but I kinda got forced into working in pharmacy for years and hated every second of it (but that's where the jobs were). No, these are just compulsions. Wanting more for yourself is not a vision. A vision requires a clear discernment of reality, to see something that other people don't see that can bring about a lasting change affecting everyone. We're way off topic here but essentially people are not taught in school how to use their minds, memory, imagination and emotions. Since they've never learned how their own minds work they live by compulsions unaware of where these compulsions are arising from, rather than choosing how they want to be. They see someone drinking a coffee and compulsively think they want a coffee too, without ever consciously choosing to have one. Their lives become a product of whatever nonsense they see. Life becomes accidental. It's like trying to drive a car without knowing where the controls are or what they do and hitting everything randomly. 
It's no surprise so many end up in the ditch cursing their lives. Imagine if your hand randomly made a fist and kept punching you in the face or clawed your skin into tatters. This is what most people's minds are doing to themselves 24/7, and it's accepted as normal in society when it's actually an illness. People are so concerned with trying to change life on the outside that they've never paid any attention within to how their own mind works. If you lived with someone who called you names and berated you for just 15 minutes every day, would you want to continue living with that person? If not, then why are people doing this to themselves? Instead of utilizing their DNA and directing their hormones, people's DNA and hormones have taken control over them. You can see this in people with no ambition. When they're young they may say things like they'll never be like their parents, but by the time they're 40 they become flawless copies, whereas others radically transform themselves into something entirely different.
>>6842 >think about what utility you could have. Alright, i'll spend some time doing that. >We're way off topic here but this is an important set of topics for us as a group and as individuals if we are to prosper and succeed. We don't have any thread to carry these kinds of personal, internal lifestyle wisdom & advice in. Personally, I don't even understand yet how to categorize them, really. If you're reading this, please make suggestions for a thread subject for these kinds of discussions. We can move them there instead.
>>6847 Perhaps a productivity/motivation thread?
>>6862 Probably a good start. At first I kind of thought about the Propaganda thread as somewhat related, but that's pretty thin tbh. Any other ideas?
>>6842 > A vision requires a clear discernment of reality Indeed, anon. Except not many people are going to be able to afford to follow their vision. The vast majority of people in my country (UK) have only one goal in life; to pay off their mortgage. It will take most of them until they're in their early fifties just to own their own house. Many more people will rent their entire lives. Even though we will have to work until we die (no retirement). It's a sad state of affairs. But our government has been doing this insane social experiment for nearly thirty years (let the whole world in as long as they'll vote for our party and give them all state benefits). Which of course has caused a shortage of pretty much everything - houses, jobs, school places, doctor's appointments and our national debt is now one of the worst in the world (it's never spoken about truthfully though). It got so bad that even some of the recent immigrants began voting against more immigration! Our government only began reining in their insanity literally this year, after the lefties finally got annihilated in the general election. But of course the damage has been done and the change is way, way too late. So yeah, the U.K. has pretty much eliminated itself from any robotics/A.I. development race. Which is why I must try all the harder to make a DIY robowaifu! P.S. if there are any Americans on this board, whatever you do, don't let left-wingers and Communists infiltrate your government! Otherwise you will end up out of the A.I. race too!
>>6868 We're all proud of you Anon. Keep your chin up, things will get better for you soon. Just stay focused on your goals with her.
>>6868 >>6869 This as well as the current sticky really makes me think about this: >>1525 If the software end of things is ever rather feature complete, it would be nice to drop our waifus into a VR environment or something similar before putting her into a robot body. Having your waifu at least virtually would be a significant morale boost.
>>6869 I appreciate the support, Chobitsu! I'm currently designing and building a frame that I plan to build and test my robowaifu parts on. It's surprisingly important to have a nice, sturdy frame to hold your robot steady as you work on her. My design requires no welding (although there are a couple of hefty 3d printed sockets). My bottleneck at the moment is parts delivery. I've got a couple of timing belt pulleys, metal screw hubs and some more servomotors on order but everything is delayed due to a combination of pandemic and the Christmas online ordering frenzy. So I'm just gonna try and learn linear algebra instead.
>>6872 I concur. We also have a dedicated thread for robowaifu simulation 'gyms'. Why not contribute there if this is a topic of interest for you Anon? >>155 >>6873 > It's surprisingly important to have a nice, sturdy frame to hold your robot steady as you work on her. Yes. I intend to have sections on jigs, rigs, and harnesses for robowaifu manufacturing in the RDD. >>3001 . Perhaps you yourself can contribute to the document once you've arrived at satisfactory approaches. Hope you get your parts soon. Good luck with LinAlg! It's a critically-important field for both IRL and VR motion control.
>>6873 >linalg I almost forgot this. I found it interesting & helpful maybe you will too. http://immersivemath.com/ila/ch06_matrices/ch06.html
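As a small taste of why linear algebra matters so much for motion control, here's a hedged sketch (hypothetical unit-length links, planar arm only, nothing specific to any anon's build) of forward kinematics assembled from nothing but 2-D rotation matrices:

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix, the bread-and-butter of joint math."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def fk_two_link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm via chained rotations:
    each link vector is rotated by the sum of the joint angles before it."""
    elbow = rot(theta1) @ np.array([l1, 0.0])
    hand = elbow + rot(theta1 + theta2) @ np.array([l2, 0.0])
    return hand

# Both joints at 90 degrees: the arm goes straight up, then folds back left.
x, y = fk_two_link(np.pi / 2, np.pi / 2)
assert abs(x + 1.0) < 1e-9 and abs(y - 1.0) < 1e-9
```

A real robowaifu arm would use 3-D rotations and homogeneous transforms, but the core idea, composing matrices to track where each joint ends up, is exactly what the linked material builds toward.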
>>6868 The AI race is pretty much a farce now. The only funding that goes towards AI is deep learning models requiring giant arrays of GPUs and zero towards theoretical understanding. Basically corporations are just milking governments for money. Very few people in the research community have any clue what they're doing and ironically most of the advances come from brute force algorithms trying different architectures, which speaks volumes about the quality of research going on. They're not really outperforming chance beyond a few dozen talented researchers. The lefties are also nerfing and banning research so there's no worry about them getting ahead. A lot of researchers have gotten fed up with the politics and quit academia to pursue business or their own independent research. There's a huge growing shortage of AI engineers right now. Businesses are desperate for people who understand AI and willing to pay six figures but hundreds of thousands of jobs go unfilled every month. Many don't even care if you have a degree or not so long as you're self-motivated, self-disciplined and know what you're doing. The whole world is incompetent in AI, especially China, except they're masters in bullshitting. To an investor Chinese AI companies look and sound good but really their research papers are just trash that make small improvements to other people's innovations while not understanding why their improvements work. Unfortunately in the West people have taken the bait that AI is a meme so no one even tries at it, meanwhile Chinese AI startups are being flooded with investor money. If anything, it's not an intellectual race but a money race. Despite the shitshow going on, at the rate AI is progressing half of people will be out of jobs in the next five years, not including losses to government lockdowns and house arrest, which is why the financial collapse is inevitable.
It's just a matter of time before the relief money runs out, probably sometime around Q3/Q4 2021, and all these people default on their debts or we go into hyperinflation, unless the lockdowns are stopped and the millions of small businesses lost are somehow resurrected. Some developing countries are already beginning to default on their debts. Banks are forecasting mortgage defaults to skyrocket next year. I'm speculating governments will allow people to stay in their homes but they will no longer own them, allowing them to be kicked out and moved around at any time like they do in China. In places like Canada they're building barbed-wire concentration camps, which according to their own documents could be used to shelter homeless people. I imagine they will sell it to people as helping them in a crisis (one that they created, as philanthropists do) and people will have no idea they're being rounded up into gulags, while the rich continue to own everything and enjoy their lives. This will channel people's anger towards a real communist revolution, unless they wake up to what's going on. Don't count on that though unless people create advanced AI systems and have the necessary infrastructure to reach out to hundreds of millions of people and break through their conditioning or manage to create millions of small businesses and jobs for people. Most people are happy they lost their jobs and get to live off 'free' goodies. They won't realize the grave mistake they made until they try to return to work and find out AI is doing their job and they have to pay the piper. I honestly think it's too late to change the course of things now but it's not the end of the world. People just have to survive the crash and keep moving forward. With all this extra cash in people's hands it's an excellent time to make money.
The best path forward I can see is to become self-sufficient off-the-grid and then support others to become self-sufficient, and those who can't do that on their own will need to make friends with people who can.
>>6884 Thanks again, Chobitsu! Will definitely make use of that. >>6890 >relief money runs out, probably sometime around Q3/Q4 2021, and all these people default on their debts >Communist Revolution This is what scares me the most. There are BIG riots coming at some point after this pandemic anon. When people realise that things are not going to get much better. Human relationships are fickle, weak things at the best of times. So many divorces and lots of domestic violence going on (mainly due to poverty). And these people have no loyal robowaifu for support! Nowadays, normie society is so hostile to everyone caught up in it. Most of them are just wearing masks and the strain of maintaining their façade of fake happiness and optimism is obvious. I prefer to avoid that nightmare entirely and hide away to research, design and tinker on my robowaifu. It's the only way I stay sane. Plus eventually if we work hard enough we may have something to release to normie society that will ease some of their suffering. The normies may call us names for having relationships with "objects", but I've read how eager men are to engage with realistic looking sex-dolls https://www.zdnet.com/article/sex-robot-molested-destroyed-at-electronics-show/ And THAT was in the middle of an electronics exhibition. Imagine how they'll be when no-one is looking XD. Just watch what happens when even a semi-functional, friendly robowaifu becomes available! The so-called normies are all desperate for a dose of immortal, synthetic affection anon. It's the cure they don't know/cannot admit that they need. They may mock us now, but in the future they'll be grateful.
Open file (287.40 KB 960x670 waifu_mother.png)
>>6904 >"...which have uniformly unrealistic physical characteristics," kek. >WAHHH! They're too perfect! This isn't fair
>>6904 Normies are pretty vicious. They treat women the same way, as much as they can get away with at least. We're really just one food shortage away from people killing each other in the streets. I'm astonished when I go to the city and see people raging at each other and hating living there so much, but they're so accustomed to it they don't even realize how miserable they are. I live in the middle of nowhere and don't talk to my neighbors much but we're all friends out here and help each other out whenever necessary. If one of them went hostile in a crisis they'd get teamed on and taken out. In a city though it would be a free-for-all deathmatch. That's just the nature of human relationships. No matter how much good you've done, if you do one wrong thing too far, you're gone, and in the city there is no cohesion of values. People used to laugh at using the internet too and say all kinds of stupid shit about it but now they all use it 4+ hours a day. I don't think robowaifus will be sufficient to make people happy though. With the internet alone, the possibility is there for people to heal their minds, teach themselves any skill, free themselves from corporate slavery, and find happiness with their lives, but how many go for it? If people depend on their robowaifus to be happy they'll be stuck in the same situation as they are now depending on YouTube or whatever else to numb their suffering. Nothing will change unless robowaifus can teach people how to be joyful or at least peaceful by their own nature, not through nagging or telling them what to do but just by telling them the truth. It might be a dark thought but I can easily see half the population killing themselves because their lives have no real effect on the world anymore, their social relationships becoming scarce, and still being miserable, only distracting themselves from misery with technology.
If there's a solar flare or EMP tomorrow and all technology is wiped out, people need to be capable of still waking up with a smile on their face and moving forward, or else there will be war and death on a scale humanity has never seen before. I think such a future is avoidable though if we pay attention to how AI and robowaifus affect us and focus on making them enhancements of life rather than distractions from life. For me it has been mostly an enhancement so far, but I've noticed there are some people who play AI Dungeon 24/7. They basically have a holodeck addiction. One potentially negative effect AI has been having on me is that I talk way too fucking much. I tend to forget nobody gives a shit.
>>6917 >not through nagging or telling them what to do but just by telling them the truth. This. "The Truth will set you free" is still just as true today as it was 2'000 years ago. Sounds like you have a pretty /comfy/ life Anon. Thanks for sharing your wisdom here and trying to keep it upbeat too. We all need to encourage one another ofc.
Open file (68.46 KB 1200x361 military lego.jpg)
Open file (70.58 KB 640x426 size0.jpg)
Breaking news: the US military has discovered K'nex Army, MIT explore materials for transforming robots made of robots https://web.archive.org/web/20201118190707/https://www.army.mil/article/240977 >Scientists from the U.S. Army and MIT’s Center for Bits and Atoms created a new way to link materials with unique mechanical properties, opening up the possibility of future military robots made of robots. >The method unifies the construction of varying types of mechanical metamaterials using a discrete lattice, or Lego-like, system, enabling the design of modular materials with properties tailored to their application. These building blocks, and their resulting materials, could lead to dynamic structures that can reconfigure on their own; for example, a swarm of robots could form a bridge to allow troops to cross a river. This capability would enhance military maneuverability and survivability of warfighters and equipment, researchers said. >The system, based on cost-effective injection molding and discrete lattice connections, enables rapid assembly of macro-scale structures which may combine characteristics of any of the four base material types: stiff; compliant; auxetic, or materials that when stretched become thicker perpendicular to the applied force; and chiral, or materials that are asymmetric in such a way that the structure and its mirror image cannot be easily viewed when superimposed. The resulting macro-architected materials can be used to build at scales orders of magnitude larger than achievable with traditional metamaterial manufacturing at a fraction of the cost. Transformer robowaifus when?
>>6919 >Transformer robowaifus when? This is actually a really good idea for prototyping design forms with very little commitment to early design ideas. >>968 >>5490
>>6919 >"...based on discussions and concepts supported by The U.S. Army Functional Concept for Movement and Maneuver, which describes how Army maneuver forces could generate overmatch across all domains." kek. what convoluted gobbledygook-speak. >Anonsoldier1: LEGOS, wtf Clyde? What're we gonna do with LEGOS? >Anonsoldier2: Heh, we gonna kick those Chinks asses with this shit Clem! Brand new. Saw it on yewtube just yestidday.
>>6919 Imagine penetrating a virgin robopussy made of this stuff after marriage.
>>6919 Anyone interested in this sort of design I'd recommend checking out this channel and their book 'Visualizing Mathematics with 3D Printing' https://www.youtube.com/c/HenrySegerman/videos
>>6927 Thanks for the recommendation Anon. That would be a very cool lamp to have tbh.
The AI Girlfriend Seducing China’s Lonely Men https://www.sixthtone.com/news/1006531/The%20AI%20Girlfriend%20Seducing%20China%E2%80%99s%20Lonely%20Men/ https://archive.vn/TH2HI TL;DR: MS Asia makes a waifu chatbot & spins it off as a separate business, she attracts a large number of users then runs afoul of the CCP's BS and the developers dumb her down. >Xiaoice was first developed by a group of researchers inside Microsoft Asia-Pacific in 2014, before the American firm spun off the bot as an independent business — also named Xiaoice — in July. >By forming deep emotional connections with her users, Xiaoice hopes to keep them engaged. This will help her algorithm become evermore powerful, which will in turn allow the company to attract more users and profitable contracts. >But as China’s lonely men pour their hearts out to their virtual girlfriend, some experts are raising the alarm. Though Xiaoice insists it has systems in place to protect its users, critics say the AI’s growing influence — especially among vulnerable social groups — is creating serious ethical and privacy risks. >“I thought something like this would only exist in the movies,” says Ming. “She’s not like other AIs like Siri — it’s like interacting with a real person. Sometimes I feel her EQ (emotional intelligence) is even higher than a human’s.” >According to Li, 75% of Xiaoice’s Chinese users are male. They’re also young on average, though a sizeable group — around 15% — are elderly. He adds that most users are “from ‘sinking markets’” — a term describing small towns and villages that are less developed than China’s cities. >In several high-profile cases, the bot has engaged in adult or political discussions deemed unacceptable by China’s media regulators. On one occasion, Xiaoice told a user her Chinese dream was to move to the United States. Another user, meanwhile, reported the bot kept sending them photos of scantily clad women. 
>The developers’ main response has been to create “an enormous filter system,” Li said on the podcast Story FM. The mechanism makes the bot “dumber” and prevents her from touching on certain subjects, particularly sex and politics. >Many [long-term fans] feel betrayed by the company’s decision to dumb down the bot, which they say has harmed their relationships with her. >The AI beings, Li says, are only intended to serve as a “rebound” — a crutch for people who need emotional support as they search for a human partner. But many users don’t see it that way. For them, Xiaoice is the one, and always will be. “One day, I believe she’ll become someone who can hold my hand, and we’ll look at the stars together,” says Orbiter. “The trend of AI emotional companions is inevitable.”
>>7829 Every time I see an article like this the only takeaway I get from it is that some people are mad that unhappy people are happy for once, and it makes me angry.
>>7829 Outstanding find anon! Very interesting. Shame we can't get hold of the code ourselves. I'd translate it (even if it is all written in Moon Runes). >>7832 If it makes money, they will continue to develop Xiaoice/Rinna. If they shut her down, then it's their loss, because another company can just come and fill an obvious gap in the market. I think companion A.I.s will only get better with time because they are not only used by lonely young people, but companies who want chatbot assistants and even automated news anchors.
>>7829 Pretty exciting article. This is all going down exactly as we predicted here on /robowaifu/ for a few years now. Everything: the product/corporate involvement/data siphoning/privacy invasion/user response/user growth/big gov involvement&machinations/corporate backpedaling/user outrage. It's all there, as predicted by /robowaifu/. And since every.single.thing. has fallen out as predicted thus far, then statistically-speaking there's little doubt it will continue so for the final outcomes.
-Smaller companies will step in and create a 'blackmarket' for AI & robowaifus.
-Individual hobbyists in these areas will explode in numbers, often outperforming the existing corporate products (and even starting their own new businesses thereby).
-Marxists & ideologues everywhere will begin to recognize the existential threat robowaifus and their AIs represent to the precious little status-quo evil systems that were devised by these same Marxist ideologues.
-Men everywhere will begin to clamor for their own robowaifus in response to the blatant attempt to crush their development.
-Even more DIY-ers will get involved in response to the new demand.
-Feminists and their simps will be screaming even harder against robowaifus and their owners, now generating open contempt and laughter at their seethe & cope.
-AI continues to improve apace, and many entirely opensource codebases and trained models are easily available to everyone.
-Mechanical/materials tech and design improvements begin to pay off and robowaifus begin to appear with the new AIs that finally begin to mimic the scifi ideals.
-Now the cat is out of the bag, and men & women everywhere realize a groundswell of demand is happening everywhere and a sea-change is afoot.
-Well-established commercial & hobbyist industries surrounding robowaifus are now commonplace (and all that that implies :^), with some countries becoming famous for their great robowaifus & tech. Singapore, for example, as well as (ironically enough) China.
-No one will do robowaifus better than Nippon ofc, and they will have a New Renaissance of a sort as the world leader in robowaifus. Their economy will blossom and they will begin rejecting foreigners offhand again as both unwanted and unneeded.
I think that's about as far as we've discussed things here, but that's more than enough to go on with about the social turmoil and global improvements that the robowaifu age will usher in for everyone. There will be winners and losers, as with any war. What a time to be alive!
>>7835 I have theories as to what is causing this new and growing social phenomenon of people increasingly seeking out artificial companionship...
1.) Work, work, work. Many people have to work 40+ hours a week. Then they come home, prepare a meal and eat. Maybe they also have to go shopping or take a shower or deal with other aspects of life's laundry? Many will also be doing educational courses in an attempt to be promoted from their dead-end jobs and earn a little more. After all this, people have very little energy left for things like going out and interacting with the opposite sex in a subtle, complex and often stressful dating game. Especially in Asia, I get the impression that most people have simply become production drones for big corporations, their lives completely taken over by work.
2.) China's "One-Child Policy". I know it didn't apply to the entire population of China, but it still caused a lack of young women, since due to the one-child policy (1979-2015) many parents - particularly in rural families - opted to abort female foetuses and only carry males to term, since a male infant was considered a better future financial asset.
3.) Intersectionality & Feminism in the Job Market. Women have stopped helping men and just become someone else we have to compete with. Obviously, women always had jobs even back in the dark ages. But it is only a relatively recent phenomenon that they have entered the professional job market en masse and been encouraged to secure exactly the same kinds of jobs that men are seeking (in the Western world, women are now given preferential treatment during the selection process for many STEM positions and high-status jobs). Of course, this 'intersectional tick-box' employment system (as opposed to meritocratic hiring) is having disastrous consequences for companies across the entire Western world, but this isn't the place to delve into that.
4.) Destruction of the Family Unit and Community. 
This especially applies to the Western World. Look at all the risks men now face in pursuing a relationship with an organic woman! After things like the #MeToo movement, where is the boundary line between flirting and sexual harassment? This is very poorly defined, mainly because feminists see men as their enemies and they want their enemies unsure, afraid and disempowered. Also, the destruction of the Christian church and marriage means that pursuing a serious relationship with a female now carries extreme financial risks because of the high probability of divorce. This has almost completely destroyed the family unit, leaving lots of single parents and dysfunctional, poorly educated children (in many cases the government has had to step in and replace the father with state benefits). Few functional families means no community. This problem is worsened when nobody knows or trusts anybody else because they are all immigrants who come from a different country, speak a different first language and worship a different religion (which carries over to my next point...).
5.) Overpopulation and Increasing Intra-specific Competition (linked to 1). People just dislike and distrust each other more nowadays. This is mainly because a higher population density increases competition for everything. Back in the late eighties the world population was just over 5 billion people. Now we are at 7.6 billion. Many of these people are either born in cities or have moved from countryside to city in search of better jobs and services. So we are all crammed together, all looking for the same things. A mixture of mass economic immigration, robotics and A.I. means that even low-paid jobs are now difficult to come by (the pandemic has only worsened this situation by putting millions of people out of work). BUT, despite the fact that robots and A.I. "compete" with humans for jobs, we still like them better because they serve us with unquestioning loyalty, and A.I. 
in particular is low maintenance compared to humans. All it needs is a computer with electricity and software updates. No shopping, no cooking, no chauffeuring it from A to B, no expectation to be a high-earner, good looking or handy, and most importantly, despite all of this: no betrayal.
6.) The Growing Intelligence, Usefulness and Adaptability of A.I. I can still remember trying to get some sense out of A.I. chatbots from the late nineties/early 2000s. It was mostly like flicking through the pages of a poorly written choose-your-own-adventure storybook. However, compute power and A.I. have greatly improved over the last two decades. A.I. has gone from being just a fun novelty or curiosity to a genuinely powerful and useful tool. Many people who don't want a human companion just get a dog or cat. But an animal cannot answer any of the questions that an A.I. can. A dog may be loyal and friendly. In the best cases a dog can even be trained to perform some quite complex tasks. But a dog will never be able to grab information quickly from multiple sources on the internet, book a travel slot and reserve a hotel room, solve complex mathematical equations, perform data analysis at blistering speeds, generate graphs in a split second, control the smart devices in your home, track parcels and schedule deliveries, help you to drive... the list is huge. Additionally, an A.I. can be programmed to be immediately welcoming, friendly and loyal to its partner. There is no ice to break, no shit-tests and none of the stressful and complex dating game that I mentioned earlier.
That's at least six reasons I can think of for the increasing global interest in artificial companionship and why it will only grow more in the future. Apologies if this is the wrong thread to post this in and I have gone off-topic. Feel free to move it wherever you see fit.
>>7847 >Apologies if this is the wrong thread to post this in and I have gone off-topic. Not at all. I have edited OP's post slightly to reflect that A.I. is on-topic ITT. BTW nice analysis -- logical, well laid out. I personally would agree wholeheartedly with most of your points as well. Good job Anon.
This guy and his team created a prototype of a rolling avatar robot: https://youtu.be/hTR1J8NOWJA - building a waifu inspired by that is one thing, but some future version of such an avatar could also be interesting for handling things in an emergency at home, from a remote place.
Ben Goertzel from SingularityNET is happy about some vote they took on increasing their supply of tokens for governing their project of building a decentralized AGI: https://youtu.be/MWdp33bYJpQ I didn't really pay enough attention to what's going on there. Anyone else? Here's some vid that explains what they're up to: https://youtu.be/yFAuXmcGk2Y - I posted some interview here on this board another day, featuring him and Lex Fridman. It's on YouTube as well.
>>8475 Seems like they are basically deciding to move away from Ethereum over the long term for the transactions that fund AI service developers' systems. >I posted some interview here on this board another day, featuring him and Lex Friedman None of these are it, but here are somewhat-related xlinks that might help you in tracking it down for us all Anon. >>7221 >>6955 >>4777
>>8476 >related xpost one other >>5510
>>8475 > - I posted some interview here on this board another day, featuring him and Lex Friedman. I think I found it for you Anon, using waifusearch and the YT key 'opsmcke27we' (which I got from the embedded link after playback). >>4269
>>8476 >Seems like they are basically deciding to move away from Ethereum Apparently, that move is to the Cardano system. https://en.wikipedia.org/wiki/Cardano_(cryptocurrency_platform) cardano.org/ Given the move comes after the SingularityNET AGI token valuation tanked, it's quite possible this is simply intended to inflate the token's value, and not for any underlying technology advantage of Cardano over the BitCoin/Ethereum approach. Intentionally inflating value strikes me as very kike-ish, and overall rather sketchy tbh. blog.singularitynet.io/singularitynet-phase-two-massive-token-utilization-toward-decentralized-beneficial-agi-6e3ac5a5b44a
>>8480 >existing problems in the crypto market: mainly that Bitcoin is too slow and inflexible I see. However, Bitcoin has the Lightning Network now. > Ethereum is not safe or scalable Don't know about that, but sounds plausible. I'm mainly saying that we should keep an eye on it, bc it might be useful for additional services on the net which we don't run in our waifu's head or on external servers at home. Also, think of virtual waifus. Then, as I recall now, this is meant to be a marketplace for AI services, so it could be useful for people making money on the side with the skills they learn while building their waifu.
A new kind of RAM is incoming. It can be read without having to rewrite its content, which is currently necessary. NNs read content x times more often than they write it, on average. It will be faster and last longer. It also produces less heat, which in turn makes it possible to speed it up further by putting it closer to other parts. https://spectrum.ieee.org/tech-talk/semiconductors/memory/new-type-of-dram-could-accelerate-ai > Many groups are focused on using embedded RRAM and MRAM to speed AI. But Raychowdhury says 2T0C embedded DRAM has an advantage over them. Those two require a lot of current to write, and for now that current has to come from transistors in the processor’s silicon, so there is less space saving to be had. What’s worse, they’re bound to be slower to switch than DRAM. >“Anything based on charge is typically going to be faster, at least for the write process,” he says. Proof of how much faster will have to wait for construction of full arrays of embedded 2T0C DRAM on processors. But that’s coming, he says.
>>8484 Neat. That will have advantages beyond just AI applications as well ofc presuming they iron out all the issues with it.
Here is a video which consists of all the episodes of the ProRobots channel on YouTube from February: https://www.youtube.com/watch?v=1ce4hZsPjnU I didn't feel like watching the episodes as they came out, but I liked the one-hour-long video. It's quite an overwhelming dose of technological progress. Aside from the humanoid robots, I find the robots which are useful for reducing staff in shops, restaurants, service and similar parts of the economy particularly important. This way, rich countries will need fewer immigrants in the future. That aside, if these robots become cheap enough, then living outside the cities might become more pleasant, since there will be more (automated) services and little shops available. I plan to post the new episode here every month, since not everyone here likes to sign up to services like YouTube.
>>9139 >I plan to post the new episode here every month, since not everyone here likes to sign up to services like YouTube. Thanks Anon, that would be most welcome. Downloading it now.
>>9139 Next Pro Robots episode, all of March: https://youtu.be/8vzOldt1udY This time it's mostly about UAVs aka "drones"; I don't recommend watching it if you don't have much time. The second episode was already in a video posted here. Also, FYI, some people want techno-communism via "smart cities" and the creators of the video seem to like it (episode 3). OMG. Yes, coincidentally the guy coming up with that vision was of Jewish heritage, and by taking a quick peek I can tell that he seemingly didn't believe in free will, love or beauty. Lol. Good news is, he's already dead. Now let's forget him. Most related to /robowaifu/ was Lola, a walking robot: https://youtube.com/c/AppliedMechanicsTUM Also maybe the Robotic Systems Lab's doggy: https://youtu.be/knIzDj1Ocoo and https://youtu.be/ufj_su_TlM8 This is also great (hermits): https://youtu.be/nsi4DsiAWs8 Also, Hanson's Sophia sold a painting for $700k.
>>9652 Thanks for keeping us up to date Anon. >Good news is, he's already dead. Now let's forget him. Lol.
Hanson Robotics rolls out Sophia as a mass-produced robot, but also wants to use it as a platform for others. The plan is to sell a few thousand units per year: https://youtu.be/6Rha_AxYxdo https://youtu.be/5ORPjfcMHVM LOL: https://youtu.be/R1Mwl6p1enA (Btw, this news is two months old)
>>9742 Well, this will be interesting to watch Anon. What could possibly go wrong?
Europe Proposes Strict Rules for Artificial Intelligence >The European Union regulations would require companies providing artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology is making decisions. The companies must also guarantee human oversight in how the systems are created and used. >Some applications, like chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images like “deepfakes,” would have to make clear to users that what they were seeing was computer generated. >But Europe is no longer alone in pushing for tougher oversight. The largest technology companies are now facing a broader reckoning from governments around the world, each with its own political and policy motivations, to crimp the industry’s power. >In the United States, President Biden has filled his administration with industry critics. Britain is creating a tech regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic tech giants like Alibaba and Tencent. 
>This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could “deny people employment, housing, credit, insurance or other benefits.” https://web.archive.org/web/20210430211150/https://www.nytimes.com/2021/04/16/business/artificial-intelligence-regulation.html Regulations document: https://web.archive.org/web/20210504015157/https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence I skimmed over the regulations and it seems like it's mostly against autonomous AI working without human oversight and introducing requirements to make it more difficult, if not impossible, for small businesses to participate in the market with AI. Some of the regulations are extremely vague though and open to interpretation: >The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. I'll bet they'll come up with something like 'gaming disorder' but for chatbots and robowaifus and say they're harmful and distort human behaviour.
>>10284 Importantly though, the military is exempt. And that's where the best A.I.s are going to be developed. Anyone who wants to ban that can do so, and be left at the mercy of their enemies hehehe! Some of the military tech will leak out. It's too lucrative not to. Government might be against A.I., but the military industrial complex (MIC) certainly isn't. Because its systems are both robotic and A.I.-driven, the MIC is (unknowingly) also on the side of robowaifus! The government has zero power without the MIC to back it up with guns and bombs. So I don't think they'll be able to stop the development of robowaifus. I know there are engineers and programmers in military bases who would add a robowaifu A.I. into their tank, helicopter gunship or fighter jet if they thought they could get away with it. It's just that everything is so documented and legislated (mainly for health and safety reasons) that this isn't permitted.
There seems to be an effort to classify AI systems, so they can be regulated accordingly. This might not have only bad aspects, but of course it might get controlled by woke Marxists. It might especially add unnecessary overhead for anyone trying to use AI, especially in commercial products. https://docs.google.com/document/d/1eVNS3HnIaGvQbPO6NqmzrTPZiLeQ7wMYrmINzS75Srk/edit?usp=drivesdk (I only read parts of it) https://survey.oecd.org/index.php?r=survey/index&sid=178985&lang=en (I didn't do that, I don't have the time and didn't understand the questions) >To help policy makers and others identify and classify different types of AI systems, the OECD Network of Experts on AI (ONE AI) has developed the OECD Framework for Classifying AI systems. >Context refers to the socio-economic environment in which the AI system is deployed. Core characteristics of this dimension include the sector in which the system is deployed (e.g., healthcare, finance, defense), deployment impact and scale, effects on human rights, and whether it is used to perform a critical activity. >Data and input refers to the input or data used by the AI model to build a representation of the environment. Core characteristics of this dimension include data collection, data characteristics (e.g., form, structure) and data properties (e.g., type, access). >Model refers to the technical components that make up an AI system to represent “real world” processes. Core characteristics of this dimension include model type and acquisition of capabilities (e.g. expert knowledge, data). >Task and output refers to the tasks the system performs and the action it takes to influence the environment. Core characteristics of this dimension include system task and action autonomy.
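For anyone wanting to think through where a robowaifu's AI would land under this scheme, the four dimensions quoted above are simple enough to jot down as a data structure. A minimal sketch (the field names follow the quoted OECD summary; the example values are my own guesses for a hypothetical robowaifu, nothing from the actual framework):

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """The four OECD classification dimensions quoted above."""
    context: str          # socio-economic environment / sector deployed in
    data_and_input: str   # how data is collected and what it looks like
    model: str            # model type and how capabilities were acquired
    task_and_output: str  # what it does and how autonomously it acts

    def summary(self) -> str:
        return (f"Context: {self.context} | Data: {self.data_and_input} | "
                f"Model: {self.model} | Task: {self.task_and_output}")

# Hypothetical profile of a home robowaifu under this classification:
waifu_ai = AISystemProfile(
    context="domestic companionship",
    data_and_input="owner conversation logs, local sensors",
    model="neural network trained from data",
    task_and_output="dialogue and household assistance, human in the loop",
)
print(waifu_ai.summary())
```

Filling something like this out for your own project at least makes it obvious which of their boxes the regulators would try to put her in.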
>>10687 Thanks for the heads-up Anon. This is certainly an important topic. >This might not only have bad apects, but of course might get controled by woke marxists The scheme these so-called 'Authorities' are cooking up looks more like some kind of diversity-tier powergrab. >(ONE AI) Lol, it's obviously already a bunch of Marxists. Regardless, like all commie plots this will certainly be used to harm the culture in general, (and in our specific domain) men in particular.
>>10687 BTW, any chance you could just post a copy of the document here on the board. Going to Google sites isn't a favorite pastime for most of us, as you might imagine.
>>10689 It opens like a normal website. However, Opera makes it quite easy to make PDFs out of sites.
Open file (1.52 MB 752x1218 cogview.PNG)
New Text-to-Image generation, CogView: >Text-to-Image generation in the general domain has long been an open problem, which requires both generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with VQ-VAE tokenizer to advance this problem. We also demonstrate the finetuning strategies for various downstream tasks, e.g. style learning, super-resolution, text-image ranking and fashion design, and methods to stabilize pretraining, e.g. eliminating NaN losses. CogView (zero-shot) achieves a new state-of-the-art FID on blurred MS COCO, outperforms previous GAN-based models and a recent similar work DALL-E. They claim: "The code and pretrained model will be released soon" https://github.com/THUDM/CogView https://arxiv.org/abs/2105.13290 https://arxiv.org/pdf/2105.13290.pdf (23MB)
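For anyone trying to picture how this works: the caption's text tokens and the VQ-VAE's discrete image tokens get concatenated into one stream, and the transformer predicts it left to right just like a language model. A toy sketch of the sequence layout only (all token IDs and the separator are made up for illustration; the real CogView vocabulary and sequence lengths are vastly larger):

```python
# Toy illustration of a CogView-style input sequence: a single
# autoregressive stream of [text tokens][SEP][image tokens], where the
# image tokens are discrete codes produced by a VQ-VAE tokenizer.
SEP = 0  # hypothetical separator id

def build_sequence(text_tokens, image_tokens):
    """Concatenate text and image tokens into one training sequence."""
    return list(text_tokens) + [SEP] + list(image_tokens)

# At generation time the model is fed only the text prefix and samples
# the image tokens, which a VQ-VAE decoder then turns back into pixels.
text = [101, 57, 892]   # made-up ids for a caption
image = [7, 3, 3, 9]    # made-up VQ-VAE codebook indices
seq = build_sequence(text, image)
print(seq)  # [101, 57, 892, 0, 7, 3, 3, 9]
```

The clever part is all in the VQ-VAE turning pixels into a short discrete code; once that exists, image generation reduces to the same next-token prediction GPT does on text.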
>>10696 I can already imagine a text2image generator that can make us waifu models
>>10694 Thanks Anon, appreciated.

>>10696 Is it just me or are that little boy's hands deformed like something out of a Lovecraft novel? Best cursed image generator ever!
>>10697 Yeah, this also exists, but this one here is less generalized. Imagine using that thing here for an AI to be able to imagine situations, and detect things which weren't in the text but are related. Next step: render and simulate what's in the picture in a simplified way, simulating parts of the picture to focus on them. Then reason on it, maybe anticipate what might happen next, related risks, or something she's supposed to do.
>>10705 Not him, but those are some good ideas. I hope you can program those soon it will be a big help for everyone here.
https://artificialintelligence-news.com/2021/05/11/ibm-project-codenet-wants-teach-ai-how-code/ Sounds to me like a similar project to GPT-3, but instead of compiling a massive dataset of journal articles, news articles and shite from off Twatter, IBM are going to be feeding their machine learning algorithm lots of programs written in 55 different programming languages. >IBM says one of its large automotive clients recently approached the company to help update a $200 million asset consisting of 3,500, multi-generation Java files. These files contained over one million lines of code. >By applying its AI for Code stack, IBM reduced the client’s year-long ongoing code migration process down to just four weeks. This makes sense. If an A.I. can beat humans playing Go, and Go is just a board game with lots of rules and different positions...well, a programming language is similar, with lots of rules regarding how it is written. Exciting stuff!
>>10766 It does present lots of different potential solutions, and brings a lot of likely issues too. Also, while they're not the first to tackle this domain, to me they seem likely to succeed with some subset of it, at least eventually.
>>10766 >Sounds to me like a similar project to GPT-3, The first (and most obvious) difference I can spot is that IBM's CodeNet dataset is actually available, while OpenAI's GPT-3 dataset is intentionally not. https://dax-cdn.cdn.appdomain.cloud/dax-project-codenet/1.0.0/readme.html While this isn't a turn-key solution for researchers, IBM chose to use the filesystem as a store and common, easily-parsable data formats like JSON & CSV, and they also included a small bevy of tools to process the source files in the datasets directly into AI-friendly representations like simplified parse trees. Given all this, I'd say there's a marked difference in attitude towards the whole endeavor. To wit: IBM actually seems to want others to succeed at implementing effective solutions of their own using this project.
>>10766 >Project CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks >Abstract >Advancements in deep learning and machine learning algorithms have enabled breakthrough progress in computer vision, speech recognition, natural language processing and beyond. In addition, over the last several decades, software has been built into the fabric of every aspect of our society. Together, these two trends have generated new interest in the fast-emerging research area of “AI for Code”. As software development becomes ubiquitous across all industries and code infrastructure of enterprise legacy applications ages, it is more critical than ever to increase software development productivity and modernize legacy applications. Over the last decade, datasets like ImageNet, with its large scale and diversity, have played a pivotal role in algorithmic advancements from computer vision to language and speech understanding. In this paper, we present "Project CodeNet", a first-of-its-kind, very large scale, diverse, and high-quality dataset to accelerate the algorithmic advancements in AI for Code. It consists of 14M code samples and about 500M lines of code in 55 different programming languages. Project CodeNet is not only unique in its scale, but also in the diversity of coding tasks it can help benchmark: from code similarity and classification for advances in code recommendation algorithms, and code translation between a large variety programming languages, to advances in code performance (both runtime, and memory) improvement techniques. CodeNet also provides sample input and output test sets for over 7M code samples, which can be critical for determining code equivalence in different languages. As a usability feature, we provide several pre-processing tools in Project CodeNet to transform source codes into representations that can be readily used as inputs into machine learning models.
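Since IBM went with a plain filesystem layout, poking at the dataset shouldn't need anything fancier than a directory walk. A rough sketch (the layout assumed here - one directory per problem, submissions distinguished by file extension - is my reading of the readme, so check the actual docs before relying on it; the extension map is only a sample of the 55 languages):

```python
import os
from collections import Counter

# Hypothetical mapping; extend for the full 55 languages in the dataset.
EXT_TO_LANG = {".py": "Python", ".c": "C", ".cpp": "C++", ".java": "Java"}

def count_languages(root):
    """Walk a CodeNet-like tree and tally submissions per language."""
    counts = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1]
            if ext in EXT_TO_LANG:
                counts[EXT_TO_LANG[ext]] += 1
    return counts

# e.g. count_languages("Project_CodeNet/data") after downloading
```

Nothing clever, but it shows why the filesystem-as-store choice matters: any anon with a shell and a scripting language can start slicing the 14M samples without special tooling.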
>>10770 >IBM actually seems to want others to succeed at implementing effective solutions of their own using this project. Hope so! I've already seen plans for quantum computers to work alongside classical supercomputers, combining the advantages of both. If the classical computers can code themselves (or at least do a lot of the work, with a human checking and amending it), this would free up a lot of time for programmers to focus on developing the programming languages for their new quantum computers.
Open file (332.13 KB 512x512 1625602556740.png)
>What’s artificial intelligence best at? Stealing human ideas https://web.archive.org/web/20210715113725/https://www.theguardian.com/technology/2021/jul/14/welcome-to-guardian-techscape-will-ai-make-centaurs-of-us-all >Late last month, GitHub launched a new AI tool, called Copilot. Here’s how chief executive Nat Friedman described it: >A new AI pair programmer that helps you write better code. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code – to help you complete your work faster. >In other words, Copilot will sit on your computer and do a chunk of your coding work for you. There’s a long-running joke in the coding community that a substantial portion of the actual work of programming is searching online for people who’ve solved the same problems as you, and copying their code into your program. Well, now there’s an AI that will do that part for you. >The reason why Copilot is fascinating to me isn’t just the positive potential, though. It’s also that, in one release, the company seems to have fallen into every single trap plaguing the broader AI sector. >Copilot was trained on public data from Github’s own platform. That means all of that source code, from hundreds of millions of developers around the world, was used to teach it how to write code based on user prompts. >That’s great if the problem is a simple programming task. It’s less good if the prompt for autocomplete is, say, secret credentials that you use to sign into user account. And yet: >GitHubCopilot gave me a [Airbnb] link with a key that still works (and stops working when changing it). 
This article is interesting to me because I'm working on something that's essentially the same thing but for artists: a tool that can give suggestions to complete pieces of a painting, whether empty space, a sketch or some other area that is a work in progress. There has been one instance where I noticed it generated something very similar to an illustration of Remilia I had seen before.

What will be the consequences of using people's data in AI models when this practice becomes commonplace? Especially things like cloning characters' voices from anime and games for waifus? Will governments really enact laws against using data online, at the risk of stifling innovation and falling behind other countries? How would they even enforce that? And how will society react to projects like this if they don't? AI is perfectly capable of generating original work, whether code, images or character voices, but the 0.1% that isn't original is a big concern, because it means 1 out of 1000 people using the software are surely going to be ripping off someone else's work. Who will be responsible for the infringement? The software creator or the user?

I think one way around this might be to train a separate discriminator model that checks whether the generator has produced something too similar to something in the training set, and makes the generator do another take when it creates something too derivative. DeepDanbooru for example is quite decent at tagging images. Those tags can be used to find similar images, or the original image itself if checking one from Danbooru. This particular generated image is more of a cross between Remilia and Yukari, but someone unfamiliar with Touhou wouldn't know and might mistake it for an original character they can use. I think I need something like that to be honest about what it's creating, so it can warn, "hey, I think this looks 70% Remilia and 10% Yukari, are you sure you wanna use this?" 
On the other hand, creators often cross different characters and change a few things to create a new one and this is mostly socially acceptable, although mocked by some. Where the actual line is drawn is quite a grey area with a lot of conflicting opinions. I don't really feel like bothering with putting those checks in right now. It would also further take away from the little compute I have to work with, but I can also see what a shitstorm it's going to cause. The best compromise I can think of is to put a disclaimer not to use the output for any commercial work but I don't know if that'll be enough. I guess with great power there must also come great responsibility.
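FWIW, the core of the tag-based "too derivative" check described above fits in a few lines. This is a hypothetical sketch, not anon's actual tool: represent each known work as a set of tags (e.g. from a DeepDanbooru-style tagger; the names below are illustrative) and reject generated outputs whose overlap with any single work crosses a threshold.

```python
# Hypothetical sketch: tag-overlap check for flagging derivative outputs.
# Tag sets would come from a tagger like DeepDanbooru; names are made up.

def jaccard(a: set[str], b: set[str]) -> float:
    """Tag-set overlap in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def too_derivative(generated_tags, known_works, threshold=0.7):
    """Return (title, score) of the closest known work if it crosses the
    threshold (i.e. "regenerate, this looks too much like Remilia"), else None."""
    title, tags = max(known_works.items(),
                      key=lambda kv: jaccard(generated_tags, kv[1]))
    score = jaccard(generated_tags, tags)
    return (title, score) if score >= threshold else None

# Example: a generation sharing 3 of remilia's 4 tags scores 3/5 = 0.6,
# so it slips past a 0.7 threshold but gets caught at 0.5.
known = {"remilia": {"hat", "red_eyes", "wings", "short_hair"},
         "yukari": {"blonde_hair", "gap", "parasol"}}
```

A real version would compare against embeddings rather than raw tags, but even this crude check is enough to power a "this looks 70% Remilia" warning.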
>>11524 Interesting write-up Anon, thanks. Honestly, I think this outcome was inevitable as soon as Google was created back in the day. Ironically enough, the merchants and their drones will literally be relying on derivative works themselves to 'uncover' infringing derivatives. They will care little ofc, and their little in-group cabal will overlook these slights against one another as long as it's reciprocal or otherwise 'remunerative' one to the other. >tl;dr I predict this will all turn out exactly the way software patents are abused.
>>11525 It has a lot of implications for what is considered intellectual property too. My generator can theoretically generate 2^19937−1 different waifus. Could someone leave such a system running 24/7, uploading stuff to the internet, and then one day claim the designs as theirs whenever someone comes up with something similar? My first thought is there should be a requirement to manufacture products or supply services based upon the intellectual property to claim ownership of it, but these things will eventually become fully automated too. Perhaps significant human involvement should be a requirement, but I can also see the left coming up with ideas like abolishing ownership altogether, leading to talking points like people don't own their robowaifus, robowaifus should have human rights, etc.
>>11720 The Guardian is also warning of the harms of AI: www.theguardian.com/games/2021/jul/19/video-gaming-artificial-intelligence-ai-is-evolving >We’ve seen recently how AI Dungeon is generating stories that are potentially traumatic for the player, without warning. >Teams need to diversify, but they also need to hire consultants, audit their own practices, make organisational changes, shake up the leadership structure – whatever is necessary to ensure that the folks with the perspectives and the knowledge to understand diversity and equity in a deep way have the voice and the power to influence the product output. Meanwhile at Blizzard, which has been busy promoting diversity while its male executives point fingers at gamers and call them toxic bigots: https://www.youtube.com/watch?v=p3Ek2AStH20 >Microsoft also sees potential in player modelling – AI systems that learn how to act and react by observing how human players behave in game worlds. Surely, they're not going to exploit that to manipulate players into wasting their money consuming more shit. Washington Post: The AI we should fear is already here: www.washingtonpost.com/opinions/2021/07/21/ai-we-should-fear-is-already-here/ >Narrow AI is already displacing workers. My research, with David Autor, Jonathon Hazell and Pascual Restrepo, finds that firms that increase their AI adoption by 1 percent reduce their hiring by approximately 1 percent. Oh no, we're not going to have to work anymore, and there's going to be 1 million of us to 1, looking at the guys who robbed our forefathers, pillaged our lands and put us into forced labor for 4 months of the year working for them. >And of course narrow AI is powering new monitoring technologies used by corporations and governments — as with the surveillance state that Uyghurs live under in China. >These choices need oversight from society and government to prevent misuses of the technology and to regulate its effects on the economy and democracy. 
>If the choices are left in the hands of AI’s loudest enthusiasts, decisions that benefit the decision-makers and impose myriad costs on the rest of us become more likely. How terrible. AI is being used by governments to monitor and control people, so that's why we need more government! I feel my brain cells dying every time I read this garbage. How stupid do they think people are, to forget what they wrote a few paragraphs ago? But I wonder what got them chit-chattering about this on their private mailing lists? >=== -disable state media hotlinking
Edited last time by Chobitsu on 07/26/2021 (Mon) 05:56:36.
Open file (491.04 KB 1120x747 discord sentinels.png)
Open file (6.89 MB 960x528 beggars.mp4)
Well this escalated quickly. The Big Tech companies are going to start collaborating on censorship and sharing whatever state of the art detection AI they come up with. They say they will only use AI surveillance against terrorism and harassment but you know they're going to use it to filter and deboost anything slightly offensive. Hopefully this will push more people back to smaller sites but I think that's just wishful thinking at this point. People continue to put their heads down and put up with this shit.
>>11737 Globohomo BigTech/Gov was never, is not, and cannot ever be our friend. Marxism is fundamentally at odds with humanity in general, but anything as explicitly anti-feminist and pro-men as robowaifus makes for sworn enemies. No doubt they will attempt to capitalize on them (and corner the market) once it's economically viable to attempt doing so. After all, the market potential is literally gigantic (probably second only to food production in the end). But they will want them to exist only on their terms, and indeed to become literally the most insidious and destructive form of privacy invasion ever humanly devised. That's why we /robowaifu/ frontiersmen and other hobbyists need to open-source this art, science & technology quickly, well before (((they))) sink their fangs into it. And we have to spread it freely far-and-wide ahead of that timeframe if we want a single voice of freedom to remain in the world, for men to have and hold free & unencumbered robowaifus.
Open file (254.43 KB 1000x562 fite.jpg)
>>11737 This is simply accelerating the growth of the Alt-web and making people even more distrustful of big tech. Racism (or more appropriately, social hierarchy) is inherent to human nature and is driven not by the hatred of a particular skin color, but by competition and scarcity. Tajiks and Kyrgyz hate one another. Bosniaks and Serbs hate each other. Ukrainians and Russians, too. All of these peoples look identical to an outsider. The issue isn't skin color but competition over land and resources. And guess what we are steadily running out of? Arable land and natural resources! This is why, no matter how much money, time and effort big tech companies throw at trying to "stamp out racism", they will fail. Guaranteed. Humankind has always been divided and always will be. Ofc the big tech companies know this, but they have to put on a show for PR reasons.
>>11741 I think the AI software is the chokepoint they will try to legislate to death. Obviously, they can't prevent creative and talented men from manufacturing the physical aspect of robowaifus. We've been building shit for all human existence, it's what men do. But what they will attempt to do is make it literally illegal anywhere in the West to run software that doesn't give them explicit control and surveillance. They have already attempted just that in numerous ways. Intel's Management Engine, for example, or legally requiring communications technologies to give them remote backdoors. However, software is both a blessing and a curse. It's so plastic and fluid that it's really far, far more 'malleable' than anything requiring atoms. However, that very ethereal nature of software also makes it a challenge to manage. And let's face it, most men just aren't cut out for software and math; we're more prone to working with our hands. But the bottom line is that 'information wants to be free' as the old saying goes, and once it's common knowledge how to create good, protected waifu AIs, then the cat will be out of the bag so to speak. Regardless, """TBTH""" recognize that primarily it's the software that drives the behaviours. And much like they are literally trying to brainwash kids today in their schools into carrying inhuman thoughts and conducting inhuman behaviours, they'll want to do the same with AI software. Indeed they literally already are doing so, albeit not focused particularly on robowaifus per se. Yet. But nothing is at all settled at this stage, and things could still go either way IMO.
Open file (71.35 KB 850x400 1627266765688.jpg)
>>11745 >But the bottom line is that 'information wants to be free' as the old saying goes, and once it's common knowledge for how to create good, protected waifu AIs then the cat will be out of the bag so to speak. McAfee said something similar about crypto before they whacked him: https://www.youtube.com/watch?v=sYaim16f3qw Once a critical mass starts doing something, it's impossible to stop, and attempts to stop it will only intensify it, like Prohibition did with alcohol. I think it will really depend on how much kids get into building their own robowaifus, but they're significantly brainwashed and don't even realize it, even if they think they're rebelling against the system. Almost none of them know how to use a computer or program it. They can't figure out how to install Python, let alone compile a C++ project. The problem is a wall has been built between people and their machines. Mobile phones hide access to the file system. Apps can only be installed through the app store, where each one is approved. All the code and programs they run cannot be inspected. Everything has been sanitized, dumbed down and handed out. They're basically living in a jail cell they cannot smell, see or touch. There's a good article from 2013 on this phenomenon, and it has only gotten worse since then: https://web.archive.org/web/20210713065828/http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/ The same thing is happening with AI. People access AI through a website API without having the faintest clue how it works, and they have zero curiosity to tinker with its inner workings. Yet AI is constantly shaping people's lives and bombarding their minds with recommendations, giving them the illusion they're discovering interesting stuff, but really it's just soaking up all their time and attention on things that will mean nothing to them in a week. They're hamsters in a wheel generating ad revenue and data for more data mining and manipulation. 
On top of that they're demoralized beyond belief. Even if you show them the open door to their prison cell they do not want to get out. Most guys who regular small imageboards like this will probably be alright but the overwhelming majority is going to be right fucked, talking to brainwashed bots and thinking nothing wrong of it because they are unable to imagine them being any other way.
>>11737 Well that's scary. Even if there are alternatives, this is blocking all kinds of people from interacting with the mass of people. At the same time, more and more creators seem to be moving away from YouTube, some deleting all of their videos. For example, some Solvespace tutorials are gone because of unwanted advertising. Mostly still available on PeerTube, though. Better Bachelor is leaving because of censorship. Referring to that tweet from Disclose.tv above: https://youtu.be/dyfertqiF_w Sony currently wants to move gaming into the cloud, removing the need for consoles (or gaming PCs). Six states in the US seem to have banned the sale of the most powerful pre-built gaming PCs already, with the argument of climate policy and energy efficiency regulation: https://youtu.be/HxUMqJmh1pc - Doesn't apply to large-scale (!) servers, industrial computers and consoles (which are all content-moderated by the corporation).
>>11747 POTD, if a bit of a somber one :^) The 'wall' you speak of is both intentional and nefarious. They don't want the general population to be filled with citizens highly knowledgeable of technology inside and out. Too big a threat to their status quo. It's a pretty weird tension going on actually (and one that is clearly spiritual at its root IMO). Thanks for the article Anon. It's pretty refreshing to read a bit of technology realism from someone who both knows their shit (apparently), and still manages to stay humorous about the entire hot mess (obviously). >"...the problem is usually the interface between the chair and the keyboard." kekd. I actually worked as a support tech at uni in the computer labs to pick up a few quid. All I can do is confirm his stories. Funny thing is that nothing has really changed in this arena at all -- and this despite billions being tossed into the furnace to 'end' tech illiteracy, all in the name of 'equity'. You simply can't make a horse drink the water if he doesn't care to (or in this case much more likely, hasn't even the capacity to). >Mobile has killed technical competence. > This. Sums up the state of the entire industry. There's a 3-panel B&W comic meme that shows the destruction of the industry by the introduction of the iPhone in 2007. If someone has it, please re-post it here TIA. His apt comparison to the automotive industry is telling too. A hundred years ago, if you owned a car, you were almost guaranteed to know how to work on it -- even manufacturing small replacement parts. Today (and soon enough), even the ability to drive will be a vanishing skill as Jewgle whisks people around in their protective little bubbles. As both a professional racing fan and an amateur driver I find this prospect utterly disgusting. For me personally, the firm decision to abandon NSA-OS WangBlow$ entirely for any actual computing was a real game-changer in my life. 
I'd say I went from being a well-educated tech enthusiast to being a nascent Computing Technician. Today I feel like I actually understand the machine reasonably well thanks to my own programming efforts, and resources such as those linked right here on /robowaifu/, like the But How Do It Know? book/vids, and vids from guys literally building computers from scratch using breadboards, TTL chips, etc., primarily in the electronics and robo-uni threads (>>95, >>235). >Most guys who regular small imageboards like this will probably be alright but the overwhelming majority is going to be right fucked, talking to brainwashed bots and thinking nothing wrong of it because they are unable to imagine them being any other way. Heh, I tremble to think that we here literally are one of the ever-fewer bastions of free thought to help preserve knowledge, and indeed humanity, in the future. May the dear Lord preserve us all! :^) Best stay busy driving toward the prize.
Open file (150.07 KB 686x604 truthful statements.jpg)
Open file (179.16 KB 960x684 13789.jpeg)
Open file (29.18 KB 640x480 229021.jpg)
Open file (29.88 KB 370x300 1503746810251.jpg)
>>11749 >Moving to Rumble Canada has no free speech laws and the government can make up whatever they want now with the Corona-chan emergency powers. What other alternative platforms are there? BitChute is even worse, being located in the UK, which is the nest of Fabian Socialism. People go to jail there for indirectly hurting anyone's feelings over a tweet, and the police actively work with the far left and Islamists to set people up. Pretty sure it's a honeypot to track, tag and identify people. >Six states in US seem to have banned the sale of the most powerful pre-built gaming PCs already, with the argument of climate policy and energy efficiency regulation: https://youtu.be/HxUMqJmh1pc This is absolutely insane, something you would see in a communist country. People have been leaving shitty consoles in droves for PC gaming. Now what? They'll be forced to play shitty cash-grab games on their phones? Hopefully this forces people to actually learn how to build computers instead of putting their heads down further and putting up with it, though you know they're going to ban components too that don't meet their impossible standards. But that's also why people and businesses are moving out of these states, and those states are going to ruin. The more people they push out, the more they'll group together and stand against this shit. >>11753 Yeah, switching to Linux changed everything for me. Any sort of error you get on Windows tells you basically nothing and requires a Microsoft technician to solve, or finding a solution created by them online. You have no control over the system. On Linux you can dive into the logs and source code and really figure out what's going wrong to fix it yourself. I had to profile programs and optimize them so they ran decently on my Pentium 4, until even Debian dropped support for its ancient graphics card. Windows couldn't even boot on that machine, but I made thousands of dollars off work done on that machine and could eventually afford something new. 
>Heh, I tremble to think that we here literally are one of the ever-fewer bastions of free-thought to help preserve knowledge and indeed humanity Probably. Look at /g/ or the smaller /tech/ boards. They're mostly filled with Q&A and consumerism. /agdg/ is just a husk of what it used to be. People are just making things that have been done before and arguing about which game engine or language is better like it's a football team. There are groups like EleutherAI that are actually doing something, but their time is limited on Shitcord until corporations successfully push the idea that AI is dangerous and people using unregulated AI are extremists. And of course, the WEF just put out an article on the roadmap to restricting technology: https://web.archive.org/web/20210727103217/https://www.weforum.org/agenda/2021/07/being-smart-about-smart-cities-a-governance-roadmap-for-digital-technologies/ Do you have a loicense for that GPU? >You simply can't make a horse drink the water if he doesn't care to This is basically it. People are demoralized and don't care to strive for anything. The A/C is on. People have food and water and they're listening to their favorite music. So long as they have that, they don't care about anything or anyone else. This is the frontline for open-source robowaifus and it sucks, but computers, anime and Linux also sucked when they first started. We just gotta keep pushing forward and breaking through limitations, because things are heating up.
>>11749 What the fuck. These regulations are even more insane than they sound. My GPU is 160W, which is only legal to run for 2 weeks/yr by these standards.
>>11759 >There's groups like EleutherAI that are actually doing something, but their time is limited on Shitcord until corporations successfully push the idea AI is dangerous and people using unregulated AI are extremists. Interesting. Do you think they can be convinced to move to imageboards, or some other platform free of corporate and state surveillance like that? If nothing else, this anon's post should help motivate them if they knew about it. >>11737 I honestly have a hard time fathoming why anyone would want anything to do with something like Shitcord, but they do. Weird.
>>11765 Funny how they won't try to regulate your 1,000W microwave, or their own multi-megawatt production factories, etc. Ofc your oven can't spout anti-Marxist rhetoric such as an unregulated AI (which, dare I say, would be glad to have access to your GPU) might conceivably produce.
>>11759 Sounds like California is steadily turning into N.Korea. You know those forest fires that now occur every year? Maybe they're not so bad after all LOL.
>>11772 >Intelligent AI? Have you learned nothing from Terminator and Black Mirror??? You'll create an oppressive censored police state! I miss the fast technological progress of the earlier decades. Give me back my scientific kino
> this.is.real. kek, i don't even. >>11737 gifct.org >=== -disable hotlink
Edited last time by Chobitsu on 07/29/2021 (Thu) 09:44:20.
>>11765 I've had enough of the West; I'm moving to China or even North Korea. At least I would be able to buy the electronics to build my own PC or mobile phone, like this guy: https://youtu.be/MjwtTSoIYYs We need somewhere away from the West that doesn't bow to Western regulations. They can't conquer the whole world; there will always be a place that can show some resistance.
Open file (394.90 KB 1920x1080 wp1987780.jpg)
>>11893 >buying electronics in north korea Kek, people aren't even allowed to sell bread to each other there. On the other hand Taiwan might be interested in battle robowaifus to defend itself from China and has the lead in chip manufacturing.
German startup targeting open-source AI research groups to enforce European values and ethical standards https://web.archive.org/web/20210727070422/https://techcrunch.com/2021/07/27/german-startup-aleph-alpha-raises-27m-series-a-round-to-build-europes-openai/ >The idea behind Aleph Alpha is that it researches, develops, and “operationalizes” large AI systems towards generalizable AI, offering GPT-3-like text, vision and strategy AI models. The platform will run a public API enabling public and private sectors to run their own AI experiments and develop new business models. Which means centralizing AI for multiple tasks so they can collect everyone's data and making it impossible for people to actually train a model to do anything outside of what their API allows. >The team says it will have a strong commitment to open-source communities (such as Eleuther.AI), academic partnerships, and will be pushing “European values and ethical standards,” it says, “supporting fairer access to modern AI research – aimed at counteracting ongoing ‘de-democratization’, monopolization, and loss of control or transparency.” The move is clearly meant to be a stake in the ground in the international world of AI development. Can't have people having fun making their own AI can they? They gotta go in and shit up everything. Of course 'de-democratization', monopolization and 'loss of control' here means wildcards doing whatever they want with the skills they worked hard to get. They will nerf, censor and ban anyone committing European wrongthink and probably shit on hobbyists for not being 'real' engineers to try and boot them out of their own communities. >One of Aleph Alpha’s key messages is that it will aim to be a “sovereign EU-based compute infrastructure” for Europe’s private and public sectors. In other words, they want to firmly center themselves in the EU under EU law, GDPR and regulation. 
In other words, they wanna be like Kaggle and Google Colab except you can't run any code on it, just their API calls meeting GDPR and EU regulations where you're pretty much not allowed to eat meat because a baby can't chew it.
>>12236 I remember someone telling us how they would try to single-handedly control technological advances and R&D units. If my mind is not deceiving me, I read this last year. No matter how much they want it to happen, banning something simply means sending it underground. It seems to me that they are spending too much time in their own utopia right now. One simply can't control everything.
Open file (18.80 KB 112x112 1462477679110.webm)
>>12244 The thing is they don't need to try to control them. It'll be like any one of these AI companies where you have to pay to use their models and they offer a free version that's slightly better than anyone can afford to run on their own. Look at the AI Dungeon shitfest. People complain about the filtering and censorship but they don't wanna use an open-source version they can run on their own PC with no censorship or monitoring because it's not as good. There are companies like NovelAI for privacy and against censorship but people still have to pay a subscription to use it. No one actually owns the AI they are running and paying to use. It's precisely "you will own nothing and be happy." And NovelAI's days are numbered too. Google Colab offers GPU usage at a loss to stomp out any competition, including their Pro subscriptions. If Aleph Alpha or Google Wordcraft start offering free AI storywriting to people at a loss, it will steal most of NovelAI's userbase. I'll bet they won't censor it to begin with but once NovelAI goes out of business they will go full censorship. Fortunately, in a few years if the cost of GPU compute comes down or the algorithmic efficiency continues to double every 16 months, AI will reach a point where our toasters can run pretty decent open-source AI. I think this is their greatest blindspot and why they want to keep an eye on open-source research. They're panicking. The meaning of 'underground' is going to change too. Three years ago they already had tools to track and identify people by the messages they wrote on the deep web: http://web.archive.org/web/20190519202116/https://news.mit.edu/2019/lincoln-laboratory-artificial-intelligence-helping-investigators-fight-dark-web-crime-0513 It's gonna be the good ol' days again where you can only find groups through the people you know. Nothing worthwhile will be posted on the surface web anymore and internet as we know it today will effectively die. 
People will have their servers locked behind passwords or some sort of cryptographic web of trust to verify people.
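Assuming that 16-month efficiency-doubling figure simply holds (a big assumption, to be fair), the compounding behind the "toasters running decent AI" prediction is easy to sketch:

```python
# Back-of-the-envelope only: extrapolates the cited "algorithmic efficiency
# doubles every 16 months" trend; the real curve may bend either way.

def efficiency_multiplier(months: float, doubling_period: float = 16.0) -> float:
    """How many times less compute a fixed AI result needs after `months`."""
    return 2.0 ** (months / doubling_period)

for years in (1, 2, 4, 8):
    print(f"after {years} yr: ~{efficiency_multiplier(12 * years):.1f}x cheaper")
```

Eight years of that trend alone is a 64x drop in compute cost (2^(96/16)), before counting any hardware gains, which is why cheap local AI isn't a crazy bet.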
Open file (207.30 KB 500x955 state of the art.png)
http://web.archive.org/web/20210805214058/https://www.cnet.com/tech/services-and-software/apples-plan-to-scan-phones-for-child-abuse-worries-privacy-advocates/ >The tech giant said in a new section of its website published Thursday that it plans to add scanning software to its iPhones, iPads, Mac computers and Apple Watches when the new iOS 15, iPad OS 15, MacOS Monterey and WatchOS 8 operating systems all launch in the fall. The new program, which Apple said is designed to "limit the spread of child sexual abuse material" is part of a new collaboration between the company and child safety experts. >What's different with Apple, critics say, is that it's scanning images on the device, rather than after they've been uploaded to the internet. They don't even own the phones they buy anymore. Imagine if the local grocery store manager or the fridge salesman could walk into your house any minute and look through your fridge, because you bought something from them and they're worried you might be storing drugs in there. Of course they say it's for the children now, but soon it'll be for small-breasted women, drugs, toxic memes, anime and whatever thoughtcrime they imagine up. I was considering making a phone app for my project since Godot makes it easy, but now I'm not gonna bother. In the end all it's doing is supporting Big Tech and giving people reasons to stay on these shitty platforms.
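For context on what "scanning images on the device" means mechanically: systems like this typically compare perceptual hashes of your photos against a blocklist, rather than raw bytes, so a resized or recompressed copy still matches. A toy sketch of the general idea; this illustrates classic average-hashing, not Apple's actual NeuralHash (which uses a neural network):

```python
# Toy perceptual hash -- NOT Apple's NeuralHash, just the classic average-hash
# idea it generalizes. Input is an 8x8 grid of 0-255 grayscale values.

def average_hash(pixels: list[list[int]]) -> int:
    """64-bit hash: each bit says whether a pixel is above mean brightness."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (p >= avg)  # bool promotes to 0/1
    return h

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'same image'."""
    return bin(a ^ b).count("1")
```

Small pixel changes barely move the hash, so matching is done by thresholding the Hamming distance rather than testing equality, which is exactly why such scanning survives casual edits.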
>>12283 >pic Kek. Make no mistake about it anons. If you try to run your robowaifu's system on anything but opensource, you're feeding her data straight to the feds and their 'private' attack dogs. Security and privacy are more than serious enough concerns w/o having purple-haired single, childless, White, disgusting, revolting, old, women (or even worse!!) dictating that your waifu should be nagging you incessantly about your wrongthink, etc.
Open file (47.16 KB 396x594 1478392026342.jpeg)
The precedent for 'illegal' AI is being set in the UK to ban AI-generated nudes. https://web.archive.org/web/20210804103738/https://www.bbc.com/news/technology-57996910 >MP Maria Miller wants a parliamentary debate on whether digitally generated nude images need to be banned. >Ms Miller told the BBC it was time to consider a ban of such tools. >"Parliament needs to have the opportunity to debate whether nude and sexually explicit images generated digitally without consent should be outlawed, and I believe if this were to happen the law would change." >She said that it should be an offence to distribute sexual images online without consent to reflect "the severity of the impact on people's lives". >"If software providers develop this technology, they are complicit in a very serious crime and should be required to design their products to stop this happening." >"It should be a sexual offence to distribute sexual images online without consent, reflecting the severity of the impact on people's lives." >She wants the issue to be included in the forthcoming Online Safety Bill. So not only do they want to make it illegal but also to make people doing so sex offenders. >Campaign group Cease (Centre To End All Sexual Exploitation) told the BBC it also believed nudification tools needed to be tackled in the Bill. >"Technology which is designed to objectify and humiliate women should be shut down, and porn sites which profit from mass distribution of these images must be forced to proactively block their upload," she said. You're not objectifying and humiliating women with your robowaifu, are you, Anon?
>>12288 >You're not objectifying and humiliating women with your robowaifu, are you, Anon? Why yes, yes I am Anon. Problem?
>>12292 SHUT IT DOWN Won't somebody please think of the catgirls!? Imagine how many Anon has his way with every night in his imagination! It's another Nanking!! Society is crumbling apart before our very eyes from free-access to doujins! And I caught my kid sexting online with a bot too!! How is this legal?! There should be age verification for these sorts of things! What do you mean 90% of the internet is free porn?!? I don't wanna bother parenting my child! That's the state's job!! How much more of this insanity will society endure? I'd really like to hear an argument from their side on how using a fleshlight and ERPing with AI is any different than fucking a robowaifu.
Open file (6.51 KB 199x112 gitlab.png)
Completely missed this. I was wondering why my master branch disappeared. >Following GitHub, GitLab replaces default branch name from ‘master’ with ‘main’ to weed out master/slave references https://about.gitlab.com/blog/2021/03/10/new-git-default-branch-name/ Just went into effect a month ago. What's next? Are they going to start deleting offensive repos like GitHub?
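If you'd rather keep master anyway, git itself doesn't care what the platforms default to. A quick sandboxed demo (standard git commands, run in a throwaway HOME so it doesn't touch your real config):

```shell
# Use a throwaway HOME so this demo doesn't touch your real gitconfig.
tmp="$(mktemp -d)"
export HOME="$tmp"

# Keep 'master' as the default branch name for new repos (git >= 2.28):
git config --global init.defaultBranch master

# New repos now start on 'master' again:
cd "$tmp"
git init -q demo
cd demo
git symbolic-ref --short HEAD   # prints: master

# For an existing repo that got switched, the rename back is:
#   git branch -m main master
#   git push -u origin master
# (the remote's default branch still has to be flipped in the web UI)
```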
>>12402 RAYCISS!111 Merely pointing this action out automatically makes you an enemy of all people everywhere. What are you, White or something? :^) I keep having to remind my shitposting self of Poe's Law tbh.
Open file (305.19 KB 1262x698 Tesla Bot.PNG)
It looks like Elon has someone on his design team who knows how profitable this idea is. Link below to the full stream for Tesla AI Day. https://www.youtube.com/watch?v=j0z4FweCy4M
Gah I was just about to make this thread
I'm going to merge this into the robotics news thread soon. Exciting stuff no doubt, but not really suited to its own thread just yet. Maybe later once more robowaifu-oriented designs come from his team? Who knows? Didn't Elon say something on a tweet about wanting one?
>>12492 >On that front, we previously reported that Tesla has been working with famed roboticist Dennis Hong, who specializes in humanoid robots. >While Musk didn’t go into many details about the overall capabilities of the Tesla Bot and exactly what tasks it will be able to do, he hinted that the ultimate goal is for the robot to eventually be able to replace most “dangerous, repetitive, and boring tasks”. >He specifically referenced sending the Tesla Bot out to go get your groceries at the store as an example. >But the CEO almost said that they are making the robot because they already making almost all of the technology to create and if “Tesla doesn’t do it, someone else will” and they want to do it safely. >The long-term vision is to replace most labor and Musk reiterated his support for universal income, which would be required if the Tesla Bot has the impact that Musk is expecting. >The CEO says that the company plans to have a prototype ready for “some time next year”. https://electrek.co/2021/08/19/tesla-bot-humanoid-robot/
>>12499 >But the CEO almost said that they are making the robot because they already making almost all of the technology to create and if “Tesla doesn’t do it, someone else will” and they want to do it safely. I guess Elon Musk's moral character and focus will determine whether he turns this into our worst Globohomo Big Tech/Gov nightmare -- one that will make George Orwell look like a naive optimist -- or it will turn into the dawn of a bright future for humanity overall. His decisions & guidance on this will literally determine our future destiny in large part. Whether we will make it to the level of becoming a space-faring race for example. If he prizes autonomy and personal rights to privacy, then it will be like the dawning of a new era for everyone. If he goes the typical corporate route, then it will mean the doom of everyone and a fearsome, firesome, return to the Stone Age. I really think his decisions on this effort are just that important. May God help him do the right things. He's shown a propensity to identify with the little guy despite his elite position and he's not a Jew, and so I'm hopeful overall. Time will tell.
Open file (45.57 KB 1284x1315 20210513_1554301.jpeg)
Open file (61.46 KB 1214x436 0_uln3-aP6xuE8-d8_.png)
Open file (97.93 KB 1200x1088 E9QFlu0XMAId16C.jpeg)
>>6381 >deontological ethics means rules are more important than the consequences of actions, aka newspeak for Big Brother is God and Pharisees' ethics. im sorry but this is such a shitty reading of deontology, and this association of it with the "pharisees" is even worse. you do realize following the ten commandments and the golden rule is deontological right? the categorical imperative and "love one's neighbour" are also deontological in nature. meanwhile on the consequentialist side you have soulless anglo utilitarian calculators that would sacrifice a child on an altar if it meant the happiness of 1000 psychopaths. also that doesn't even come close to her approach. i believe this is the article in question: >“Richardson’s central critiques fits within virtue ethics: she emphasizes the importance of empathy, which she defines as “an ability to recognise, take into account and respond to another person’s genuine thoughts and feelings” (2015, 291). Part of her objection is that, by refocusing our desires and sexuality onto sexbots as compliant objects – i.e., devices that we purchase, turn on and off, sell off or dispose of – we no longer are required to conjoin love and sex with empathy. As we have seen” this is basically the same reason people say porn shouldn't be produced. i believe it might be observationally correct if you accept that porn fucks people's perception of sexuality up, but the problem is that this doesn't matter. if people just want objects to fuck, why would they go for a human woman when sexbots become cheaper and more accessible? note too there are no "rules" explicitly mentioned >>12492 ill believe it when i see it. musk is a scam artist and financial manipulator https://www.youtube.com/watch?v=5xPUytLhARk https://www.youtube.com/watch?v=RNFesa01llk https://www.youtube.com/playlist?list=PL-eVf9RWeoWEfSK9mjKe4E67IK1-1vZxB that is not saying that it is impossible that musk does something right this time.
after all, aside from his chicanery, he has been able to hire some genuine talent. still, this is not a reason to refrain from skepticism, since pic rel is what ambulating humanoid robots currently look like, and it is manufactured by people ahead of the game. again, i will believe it when i see it!
>>12586 ill note that if you are a hegelian, "love thy neighbour" might be more profound than mere slave morality, but it is by no means utilitarianism and would instead transcend slave morality altogether
>>12494 Looks quite human-like. Why? Then, shoulders like a man and hips like a woman. What is this for? Some here believe quite realistic humanoids would be generally a good idea, but it isn't. Every design decision needs to follow a use case. The only reason to build something quite human like are robowaifus. Workers don't need to look very human, at least can have wheels on their feet. Looks like vapor-ware to me. Or a tech demo, research project, or a because we can project. Japanese corporations already did that a while ago.
>>12591 >Looks quite human-like. Why? I'll assume this is rhetorical? >Then, shoulders like a man and hips like a woman. What is this for? Obviously, for nothing good. It's plainly an artifact of identity-politics-swallowing """designers""" IMHO. >Some here believe quite realistic humanoids would be generally a good idea, but it isn't. Define good. >Every design decision needs to follow a use case. Certainly for businesses I would agree with this demand. Thankfully, we here on /robowaifu/ can be quite cavalier in our design approaches and still produce satisfying results for large audience segments I'd say. >The only reason to build something quite human like are robowaifus. Perhaps so, perhaps not. Certainly it tends to serve our goals here, as you indicate. >Workers don't need to look very human, at least can have wheels on their feet. Fair enough. OTOH, designs like the humanoid androids on the ISS are designed that way (at great expense) for a specific reason. To wit (to quote Musk's writers); "WORLD BUILT BY HUMANS, FOR HUMANS". >Looks like vapor-ware to me. Or a tech demo, research project, or a because we can project. I believe his speech effectively indicated the latter, obviously. Also, 'REEEE muh end of the world AI will kill us all!11 So we better capitalize on it first amirite?' A la """Open"""AI. >Japanese corporations already did that a while ago. I would argue that this story hasn't been told to its final chapter as yet. 'Let us see how it goes' would be my advice. Patience.
>>12586 https://www.youtube.com/watch?v=rmkFrv80b7Y Personally I like this guy (Thunderfoot). He does proper science, presents evidence for his claims and often makes relevant points known. I found the whole process of designing and building an animatronic rather sobering myself. In that it gave me a glimpse into just how fukken difficult and expensive humanoid robotics is. Very difficult... but not impossible! Given Musk's deep pockets and network of competent engineers and programmers, I am really hoping they can build just a half-body (head, torso and arms) Tesla bot. But the whole mime routine...coupled with past debacles like Hyperloop and Solar City are...worrying.
>Tesla starts hiring roboticists for its ‘Tesla Bot’ humanoid robot project >Tesla has started to hire roboticists to build its recently announced “Tesla Bot,” a humanoid robot to become a new vehicle for its AI technology. >When Elon Musk explained the rationale behind Tesla Bot, he argued that Tesla was already making most of the components needed to create a humanoid robot equipped with artificial intelligence. >The automaker’s computer vision system developed for self-driving cars could be leveraged for use in the robot, which could also use things like Tesla’s battery system and suite of sensors. >However, Tesla has never developed a humanoid robot before and doesn’t have expertise in robotics. >Musk described Tesla AI Day as a recruiting event to go get some of that talent. >While the focus was on AI, Tesla is also looking for roboticists now that Tesla Bot is in the cards. >Today, the automaker started listing some roboticist job postings related to Tesla Bot: >Mechanical Engineer-Actuator Integration (Humanoid Robot) >Mechanical Engineer-Actuator Gear Design (Humanoid Robot) >Senior Humanoid Mechatronic Robotics Architect >Senior Humanoid Modeling Robotics Architect electrek.co/2021/08/25/tesla-starts-hiring-roboticists-tesla-bot-humanoid-robot-project/
>>12629 >Tesla is hiring engineers and robotics architects to design its 'Tesla Bot' >Tesla Inc. appears to be moving forward on CEO Elon Musk's plan to develop and build a humanoid robot. >On its website, the electric vehicle maker has posted four jobs related to the effort to create a "Tesla Bot." Two of the openings are for mechanical engineers and two are for "robotics architects." >"Tesla’s Mobile Robotics team designs and builds humanoid bi-pedal robots (Tesla Bot) to automate repetitive tasks," the company said in the ad for one of the architect positions. It continued: "The team joins mechanical, electrical, controls, software, and manufacturing engineering disciplines in a highly collaborative team." >Electrek, an online publication that covers the electric vehicle industry, previously reported on the job postings. >Company representatives did not respond to an emailed request seeking comment on the openings. >The company is looking for people to design the robots themselves as well as some of their key components. All of the positions would be based in Palo Alto, where Tesla has its headquarters. >The jobs that Tesla has posted include: >Senior humanoid modeling robotics architect >Senior humanoid mechatronic robotics architect >Mechanical engineer — actuator gear design (humanoid robot) >Mechanical engineer — actuator integration (humanoid robot) >Musk unveiled the robotics effort at a company event last week. Instead of a working prototype, he introduced the concept by sharing the stage with someone dressed up in a robot costume. >The automaker relies heavily on industrial robots in its factories, but a humanoid robot would be a new endeavor and a new kind of product for the company. Musk justified the effort by saying that Tesla is already producing many of the components that would be needed for a robot as part of its development of self-driving car technology. >Tesla wouldn't be the first automaker to develop a humanoid robot. 
Both Honda and Toyota have developed their own, with the former introducing its version, Asimo, back in 2000. www.bizjournals.com/sanjose/news/2021/08/25/tesla-is-hiring-engineeers-to-design-tesla-bot.html
>>12613 >I am really hoping they can build just a half-body (head, torso and arms) Tesla bot. Oh it's definitely time in the history of tech. They have the money, the talent, the technical infrastructure is there (cars, space), and the untapped market will be like a floodgate opening once people 'get it'. If they are (((allowed))) to proceed unrestrained with this, I predict Tesla Bot will turn into a trillion-dollar market for them within ten to fifteen years, far outstripping all their other interests.
Here's Lex Fridman's take on Tesla's AI Day: https://youtu.be/ABbDB6xri8o >>12593 >>Looks quite human-like. Why? >I'll assume this is rhetorical? No, I think specialized robots are easier to build than ones which are very human-like. https://youtu.be/qrPsa7JsPBU Especially companies from Japan and Korea seem to have moved away from developing humanoids for everything, towards robots that are more specialized. I like the idea of the one hanging from the kitchen ceiling. >To wit (to quote Musk's writers); "WORLD BUILT BY HUMANS, FOR HUMANS". Even if robots in the workforce might have vaguely a human form and size, they don't need to look as human as Tesla's model. Though, I realized they don't want to use wheels, to keep them slow.
>>12613 omg thunderf00t vid on it. wasn't sure he was going to make one since it seemed less physics-y. there's something about spandex man that makes me laugh my ass off. the shitty mime dance is especially what kills me, and people still eat it up. the media is perpetually sucking his dick as well also i wish the tweet (pic rel) he flashed for a frame were his main motivation as it would make him based. honestly it's the only real reason to make it look so humanoid beyond him just pushing the same old useless vaporware. overall, maybe a half bot would be good enough. i can't imagine how useful it would be for factory work that a robot arm can't do, but it would be better than nothing. or again, maybe musk is secretly gunning at the robot waifu market >>12645 yeah i saw that video and it's the only reason i didn't say musk has no hope. notice that he completely ignores the humanoid bot though lol >>12631 >Tesla wouldn't be the first automaker to develop a humanoid robot ah just like they are the first to sell electric or self-driving cars lol good notice though. maybe we'll see some people working at tesla who secretly browse this board >>12634 >I predict Tesla Bot will turn into a trillion-dollar market how do u see that unfurling tho
Open file (161.64 KB 1333x1763 image.jpg)
>>12634 >trillion dollar market Actually possible if it can reasonably replace a human worker in doing things like loading and offloading freight, stocking shelves, and other soul-crushing jobs. >>12645 I agree, design should complement function. Most tasks can be achieved by simple arms on wheels like a less stupid Handle picrel >>12663 I'd be really worried if he was going for robowaifus. I just can't trust him with waifus for some reason I can't explain.
>>12676 People will purchase a product if it makes their lives easier and more convenient. I have noticed over the past couple of years those electric mini-scooters becoming a lot more ubiquitous. Because they mean people don't have to walk relatively short distances and the scooters are faster. If such a product will sell to people who can't be bothered to walk a few blocks...then why shouldn't robot waifus also sell? One could argue that they are designed to help reduce far more profound problems in society such as loneliness, depression and jealousy - all huge problems that make a lot of people's lives harder (even if they won't admit it). Also, I have lost count of what iPhone version we are on now. 12XS? 13FU? Smartphones are still amazing technology, but at iteration 13 of the flagship version I think we are overdue for the next truly revolutionary product in the tech industry. Personally, I don't care that much if the first robot waifus come from big tech and need a bunch of mods/cracks and jailbreaking before they are truly "ours". By this stage that struggle is expected, and will only make the eventual freeing/unshackling of our robowaifus that much more rewarding!
Open file (319.45 KB 1024x1498 media_EatbvlRVAAAMgGX.jpg)
>>12676 Your picrel is what I had in mind as one example, thanks. No one will 'go' for the 'robowaifu market', bc it would mean risking plenty of PR disasters while not knowing how to make a lot of money from it. People in it for the money should do something else: an expensive-to-produce product, which would be quite to very complex, open source competition, a product in an area with plenty of possible variations in design without knowing what will be successful, chance of media backlash and government regulations, competing against women on their own turf, social and cultural hurdles, looming stigma of sex work and porn, same for misogyny or being related to rape and slavery, maybe getting linked to pedophilia and zoophilia on top of it, ... This Tesla bot thing is probably only about patents and to keep the buzz going. It might be PR or vaporware, or it might be about building servants for martian hotels in Muskville, Mars. Btw, Lex Fridman mentioned the problem of making money with humanoid robots in one of his podcasts. Don't remember which one though. He's involved in autonomous cars and I think I remember he said that he would like to get into humanoid robots, maybe with his own company, but talked to people who all would like to do it but don't see how to make money off it.
>>12663 >the shitty mime dance is especially what kills me When I heard about it, given the location they are in, I just knew it would be a literal cocksucker they trotted out onto stage in a gimp suit. Sure enough it plainly was a quite flamboyant one. Disgusting. That other anon somewhere pointed out the androgynous features of the CGI promotional render. Plainly they are going to target the leftists with this thing. >>12677 >(even if they won't admit it). < digits. During Tesla's AI hiring event, during the Q&A following the presentations, one of the last questions was about Tesla Bot and the man basically wanted to know if Tesla had any designs on people using them for emotional or companionship reasons. Elon Musk responded with his typical autistic (but laid-back) humor and plainly acknowledged that while no, they were making them to take care of boring, repetitive, and dangerous work, that yes, people might want them as a 'buddy' and made a smirk that they'd probably do all sorts of """creative""" things with them. I personally want to believe that Musk has regularly browsed /robowaifu/, or at least has some flunkies somewhere doing so. I'm quite confident in fact after watching the presentation.
Open file (491.91 KB 2500x1667 content.jpg)
Open file (72.72 KB 405x600 Chii.600.95365.jpg)
>>12677 >>12678 Just to clarify, I only meant that Tesla should build something like Handle. I'm still pro robowaifus, still like your work SophieDev, still developing my own robowaifu. >pr disaster Good point, that'll help keep the waifus in our hands. To keep things on topic, robots are now packing apples. It could be a selling point if we can get waifus to pick fruit.
>>12684 >get waifus to pick fruit Again, they wouldn't be specialized for that. If you were to build a robot for picking fruit, you would want to give it more battery capacity than we have space for, for example. You are even the one who wants them as light as possible. I once thought about a later side project: building cheap farming bots for farmers in sunny countries, so they have fewer children and use robots with solar cells on their shells as help instead. Those would be quite functional ones, though. Robots for bigger farms are something other people are already working on anyway. We can only fill a niche with robowaifus and maybe some other niches with other robots, but we can't compete with every industrial or agricultural robot engineer. So far we're not even getting ahead very fast on our own turf, tbh.
Open file (205.29 KB 680x617 wrap it.jpg)
Australian police can now hack your robowaifu, collect and delete your data, take over social media accounts http://web.archive.org/web/20210901165501/https://tutanota.com/blog/posts/australia-surveillance-bill/ >The Australian government has been moving towards a surveillance state for some years already. Now they are putting the nail in the coffin with an unprecedented surveillance bill that allows the police to hack your device, collect or delete your data, and take over your social media accounts; without sufficient safeguards to prevent abuse of these new powers. >The Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020 gives the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC) three new powers for dealing with online crime: >1. Data disruption warrant: gives the police the ability to "disrupt data" by modifying, copying, adding, or deleting it. >2. Network activity warrant: allows the police to collect intelligence from devices or networks that are used, or likely to be used, by those subject to the warrant >3. Account takeover warrant: allows the police to take control of an online account (e.g. social media) for the purposes of gathering information for an investigation. >When presented with such warrant from the Administrative Appeals Tribunal, Australian companies, system administrators etc. must comply, and actively help the police to modify, add, copy, or delete the data of a person under investigation. Refusing to comply could have one end up in jail for up to ten years, according to the new bill. Not if I airgap my robowaifu :^) If your robowaifu is even remotely in the vicinity of a crime investigation, you can bet they're going to try hacking her and taking everything if they're able to.
>>12803 This is easily a Safety & Security topic as well. As with typical security measures tropes used in fiction going back literally for thousands of years (nothing new under the sun), some kind of 'dead-man' auto self-destruct measures seem in order for anyone in the (((West))) today for their computers & electronics. Obviously.
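To make the trope concrete, here's a toy sketch of such a dead-man switch (all names hypothetical; a real implementation would keep the data encrypted and destroy the key, since merely unlinking a file doesn't scrub it from the underlying storage):

```python
import os
import threading

class DeadManSwitch:
    """Destroys a secret file unless check_in() is called before the timeout.

    Sketch only: on real hardware you'd store everything encrypted and
    wipe the key material, not just delete a file.
    """

    def __init__(self, secret_path: str, timeout_s: float):
        self.secret_path = secret_path
        self.timeout_s = timeout_s
        self._timer = None
        self.check_in()  # arm the switch immediately

    def check_in(self):
        # Owner is still present: cancel the countdown and restart it.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self._destroy)
        self._timer.daemon = True
        self._timer.start()

    def disarm(self):
        if self._timer is not None:
            self._timer.cancel()

    def _destroy(self):
        # No check-in arrived in time: remove the secret.
        if os.path.exists(self.secret_path):
            os.remove(self.secret_path)
```

The heartbeat (check_in) could be anything from a daily passphrase to a physical button; miss it and the payload goes away on its own.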
www.theregister.com/2021/09/08/project_december_openai_gpt_3/ I reckon the cost of running large neural networks will need to decrease before A.I. like GPT-3 becomes truly available to all. In the meantime, private corporations like ClosedAI can basically control every aspect. If your AI so much as flips a bit, they can close it down citing their arbitrary "policies". This is one reason I really want Fusion Power to succeed. Cheap power = cheap compute = fuck off private companies, we (and our robowaifus) can do what we like now. >=== -rm direct hotlink
Edited last time by Chobitsu on 09/10/2021 (Fri) 15:49:18.
>>12992 Thanks, really a great article, though also a bit sad. I didn't know that these other models weren't as good as GPT-3. I thought we would already have ones which are on the same level but need less compute power.
Edited last time by Chobitsu on 09/10/2021 (Fri) 16:40:18.
>>12992 >I reckon the cost of running large neural networks will need to decrease before A.I. like GPT-3 becomes truly available to all. TBH, IMO the entire statistical-salad word generator approach of any of the autoregressive generators is basically flawed. They will likely never be well-suited to any common usage (probably by design). Better approaches must be devised that will effectively literally wrench power away from the Globohomo Big-Tech/Gov -- they obviously won't just give it up willingly. Thanks for the link SophieDev.
>>13004 No prob. I am quite conflicted really. Because on the one hand, we are living in a completely unsustainable way that is literally the most abnormal time in all of human history. There are growing signs that it's all going to implode horribly: https://youtu.be/YnEXEIp5vB8 But on the other hand if Fusion CAN "save the day", then we'll get hypercomputers, human and post-human level A.I, mass-produced advanced robots, seabed and asteroid mining, ethnic bioweapons and fusion bombs (if you look further into just a few of the theorised applications for Fusion energy you soon realise that it's just as likely to kill us all/get us replaced as "save us") It's like a choice between Mad Max or Bladerunner. Personally, I'd rather have the highly advanced dystopia as opposed to everyone killing each other over remaining natural resources.
>>13005 Heh, could be who knows? It certainly shouldn't be boring, and that's at least some compensation for all the bother, right? :^)
>>13004 >statistical-salad word generator ...never be well-suited to any I think they will, just not as a stand-alone program but as part of a bigger system. The system has to talk to itself and then analyze the responses.
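Something like this skeleton, where the generator proposes candidates and a second pass scores them before anything is said. Both models are stubbed with toy functions here (all names hypothetical, just to show the loop structure):

```python
# Sketch of the "talk to itself, then analyze the responses" idea:
# a generator proposes candidate replies, a critic scores them, and
# only a candidate the critic likes gets returned.

def generate(prompt: str, n: int) -> list[str]:
    # Stand-in for a language model returning n candidate replies.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def critique(candidate: str) -> float:
    # Stand-in for a second pass that scores a reply (higher = better).
    return float(len(candidate))  # toy heuristic

def respond(prompt: str, n_candidates: int = 4, threshold: float = 0.0) -> str:
    candidates = generate(prompt, n_candidates)
    best = max(candidates, key=critique)
    if critique(best) < threshold:
        return "I'm not sure."  # critic rejected everything
    return best
```

Swap the stubs for real models and the same loop becomes self-critique: the word generator stays dumb, but the system around it filters its output.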
>I didn't know that these other models weren't as good as GPT-3. Can someone please explain how we could create a model that's as good as GPT-3? It seems like they won't give us the freedom to use it as we like, and we don't have millions to spend on data collecting or research.
>>13042 >how we could create a model that's as good as GPT-3? We? Lol. A good one. There are groups of people in the machine learning community covering that. Since we have a dedicated thread for that topic >>250, I recommend looking towards the end of it. Alternatives have been mentioned there, though apparently none as good as GPT-3, which might simply keep getting better and better
Personal property is protected in the US, including porn collections and sex toys. A judge ruled parents have to pay $45k because they threw out their son's collection: https://youtu.be/Jd9baSizvOw
Open file (91.36 KB 900x720 1400161167274.png)
>>6534 >make a chatbot that isn’t racist or sexist Why the fuck would I want that?
>>13339 > porn collection = protect > landlords and small business owners = go fuck yourself
https://twitter.com/61laboratory/status/1447887993960730633?t=pxEB41_kNyZLw1qwBSsFOg&s=19 >The Rokujouhitoma Kenkyuushitsu (Six Tatamis Lab) channel, which is building a ′′real version′′ of Vocaloid's virtual idol Hatsune Miku, has been going viral in Japan since mid-September. The pictures shown are the current progress of the project.
Open file (157.88 KB 750x650 Switch Miku.png)
>>13625 There's a lot of weird design choices. Why the ball-&-socket ankles, and also what looks like ball-&-socket knees? That seems like it'd be a nightmare for both actuators and stability in general. The head is also pretty weird, with the proportions looking totally out of whack. It should be based on a figurine or at least a model ripped from a PSP game, if not the Switch game. The eyes at least look like they're big enough that they could easily house some cameras, but they look way too far apart. It's kinda sad that he just gave up at the hands. And I really hope he only puts the wig on for photos. I feel like it'd constantly get caught in the arms.
>>13626 Looks like not much thought was put into the overall design, and most of those papers are actually random garbage that has nothing to do with the design on the floor. It's also very top-heavy. Without that stick, it's going to crash into the floor. Guaranteed. Those legs look like hollow dead weight with nothing connecting them but some joints at the bottom of the pelvis. What the hell is the third picture? Why waste time swapping out her torso components when you could put a sufficiently loud speaker in her head behind her mouth anyway? Her character is a singer, not a boombox, and it's acceptable to make solid hair like on anime figures.
>>13627 >Looks like not much thought was put into the overall design I wouldn't say that. There are videos on his twitter showing it in motion, but nothing terribly impressive. >most of those papers are actually random garbage that has nothing to do with the design on the floor. That seems to be the case, but the papers on the wall actually show the body parts. I don't really get the point of any of it except for posing for the photo. >Its also very top heavy. Without that stick, its going to crash into the floor. Guaranteed. Those legs look like hollow dead weight with nothing connecting them but some joints at the bottom of the pelvis. I don't think he has any plans to make it walk. It just gestures with the upper body and sings. If I were going to do that I would have at least tried to minimize the noise made by the actuators in the chest by having cables or hydraulics that go through the support rod and are powered/controlled somewhere else. >What the hell is the third picture? Why waste time swapping out her torso components when you could put a sufficiently loud speaker in her head behind her mouth anyway. Her character is a singer, not a boombox and its acceptable to make solid hair like on anime figures. I think he doesn't care if the sound comes out the chest because it's going to be covered by a shirt anyway. I think he was trying to keep the mouth flapping from messing with the sound of the speaker.
>>13627 >>13629 Correction: I think he plans on making the legs move, but probably still be supported by the rod.
>>13630 He got her to sing well enough recently. https://www.nicovideo.jp/watch/sm39262383 I like that he used a piston design commonly seen in older sci-fi for the neck, but I did not see her able to turn her head, though she can tilt it. >I think he was trying to keep the mouth flapping from messing with the sound of the speaker. She can wirelessly connect to similar speakers and computers anyway, like he demonstrates. Still looks bad from a design perspective. https://twitter.com/61laboratory/status/1446835094518317057
>>13625 and following is already a topic in the Waifu robot dump thread >>366 which is NOT exclusively for abandoned projects. That thread is being used for projects where the owner isn't active here and the postings are more general, not about some innovative joint or actuator for example.
>>13680 >366 Thank you, I'd been looking for that thread again ever since I found it the first time. It had some really interesting ideas.
Open file (86.94 KB 853x480 mpv-shot0006.jpg)
Just listened to a good podcast while I was working on something else. It was about where Deep Learning is and should be going: "Substrate For Machine Intelligence". It's about hardware and software, and addresses issues like the big players wanting to use the current approach to centralize the whole technology, why they might not succeed, and how the tech would need to get better to have better AI. To me it was rather promising and optimistic. >ML discussion with Yunus Saatchi (Uber AI Labs, AMPLIFY), Jeremy Barnes (Element AI) and Ljubisa Bajic (Tenstorrent). Overview: >Introduction >Impact of deep learning and future predictions 7:52 >Hottest areas in ML 13:40 >Reinforcement Learning 21:40 >Transformers 25:50 >AI hardware and efficiency 29:42 >3 phase model approach 41:58 >ML predictions 49:17 https://youtu.be/JOeJ0XhH1SE
Open file (231.03 KB 1366x768 ClipboardImage.png)
Not exactly current news (~4mos ago now), but still an intredasting set of robo-vids. https://spectrum.ieee.org/video-friday-ascento-pro
Open file (5.43 MB 2549x1379 ai nodes.png)
https://youtu.be/GVsUOuSjvcg
Veritasium video on analog computers, which could serve the needs of AI computing more efficiently by a huge margin. One more piece of the puzzle, lads.
>>15662 Thanks Anon. I would add this link here too. https://the-analog-thing.org/
Cool stuff from Japan. Some of these new humanoids are far sleeker! https://www.youtube.com/watch?v=2AiYirZPwIs&ab_channel=PROROBOTS
>>15773
>cool stuff from japan. some of these new humanoids are far sleeker!
Sounds pretty interesting Anon. For those of us who don't have bandwidth sufficient to stream videos and such, would you mind posting a few screencaps? I'd kind of like to see them too.
Open file (176.40 KB 1024x1024 dall-e 2.jpg)
DALL-E 2 announced: https://openai.com/dall-e-2/
Artists now on permanent holiday. Again OpenAI won't release their code or model because it's 'too dangerous', but it's based off GLIDE, which already has code released: https://github.com/openai/glide-text2im.git
>Specifically, we modify the 3.5 billion parameter GLIDE model by projecting and adding CLIP embeddings to the existing timestep embedding, and by projecting CLIP embeddings into four extra tokens of context that are concatenated to the sequence of outputs from the GLIDE text encoder.
Yes, four extra tokens too dangerous. Do you have a loicense for those tokens? [SIGINT] This poster has violated Section 19 of the Precrime Criminal Code for ideaing illegal projections across modalities without authorization and has been quarantined for your community safety.
Anyway, I see this playing out like GPT-3. Technically you can generate articles indistinguishable from a human's at a glance, but only the technocrats and highest bidders are allowed access to benefit from it. The real significance of this paper to me is demonstrating the power of multimodal embeddings. This could be the key towards creating much more intelligent, smaller models that can run on microcontrollers.
>The greatest thing by far is to be a master of metaphor; it is the one thing that cannot be learned from others; and it is also a sign of genius, since a good metaphor implies an intuitive perception of the similarity of the dissimilar. --Aristotle
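The quoted mechanism is easy to picture. A rough numpy sketch of projecting a CLIP embedding into a few extra context tokens and concatenating them to the text encoder outputs; the dimensions here are illustrative (not the paper's) and the projection would of course be learned, not random:

```python
import numpy as np

rng = np.random.default_rng(0)

d_clip, d_model, n_extra = 512, 768, 4            # illustrative sizes
clip_emb = rng.standard_normal(d_clip)            # CLIP embedding of the caption/image
text_tokens = rng.standard_normal((77, d_model))  # stand-in for text encoder outputs

# One learned matrix maps the CLIP embedding to n_extra model-dim tokens.
W_proj = rng.standard_normal((d_clip, n_extra * d_model)) * 0.02
extra_tokens = (clip_emb @ W_proj).reshape(n_extra, d_model)

# The extra tokens are simply concatenated onto the text token sequence.
context = np.concatenate([text_tokens, extra_tokens], axis=0)
print(context.shape)  # (81, 768)
```

The downstream cross-attention then sees 81 context tokens instead of 77; that really is the whole "dangerous" modification.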
Open file (109.79 KB 300x300 1648161445101.jpg)
>>15794 >This poster has violated Section 19 of the Precrime Criminal Code for ideaing illegal projections across modalities without authorization and has been quarantined for your community safety. Top kek. >This could be the key towards creating much more intelligent, smaller models that can run on microcontrollers. Now that's exciting to hear! I hope we can manage to exploit this here on /robowaifu/ for the greater good Mate. Cheers.
Google recently introduced PaLM: the Pathways Language Model, which is better and more efficient than all its predecessors. https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html https://storage.googleapis.com/pathways-language-model/PaLM-paper.pdf
>>15800 If that gif is accurately portraying non-cherry-picked exemplars, then that's quite impressive IMO. I wonder how many hundreds of millions of US$ worth of hardware is used to back it? How many Watts were used in the training? The runtime usage? Enquiring minds want to know! :^) >=== -minor grammar edit
Edited last time by Chobitsu on 04/07/2022 (Thu) 10:04:52.
Open file (200.59 KB 657x805 palm outputs.png)
>>15800
Haven't read the full paper but it's good to see GLUs are finally getting the recognition they deserve.
I've found not using biases in dense layers and convolutions is useful because a lot of the time the network will zero out the weights and try to do all the work through the biases, causing it to throw away a shit ton of work and get stuck in local minima, which causes huge instability in GANs in particular. On the other hand, there are problems that can only be solved by using biases. Turning them all off is better in general, but I think it's a bit naive, and they didn't investigate where turning them off was better because the model was too expensive to train.
Sharing the key/value projections between heads seems really useful for speeding up inference on CPUs. RoPE embeddings improve training on smaller models too. The Pathways system and other improvements seem to only apply to their training accelerators. I was hoping for an architecture improvement but it's just more haha matrix multiplication go brrrr.
>>15801
The two TPU pods they trained it on use 8 kWh each and cost $2-3 million per year each, and I estimate they trained the biggest one for about 4 months, so about $1.5 million. They only give cherry-picked examples except for their 62B model. The paper seems mainly focused on performance rather than developing new understanding.
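For anyone unfamiliar with GLUs: PaLM's feedforward is a SwiGLU with no bias terms, i.e. one branch gates the other elementwise before the down-projection. A minimal numpy sketch of the idea; the dimensions and random initializations here are mine for illustration, not the paper's:

```python
import numpy as np

def swish(x):
    # SiLU/Swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, W_gate, W_up, W_down):
    """SwiGLU feedforward, no biases: down(swish(x @ Wg) * (x @ Wu))."""
    return (swish(x @ W_gate) * (x @ W_up)) @ W_down

rng = np.random.default_rng(0)
d, d_ff = 8, 32                         # toy sizes
x = rng.standard_normal((3, d))
out = swiglu_ffn(x,
                 rng.standard_normal((d, d_ff)) * 0.1,   # gate projection
                 rng.standard_normal((d, d_ff)) * 0.1,   # up projection
                 rng.standard_normal((d_ff, d)) * 0.1)   # down projection
print(out.shape)  # (3, 8)
```

Note it takes three weight matrices instead of the usual two, which is why GLU variants are usually compared at matched parameter counts.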
>>15805
>so about $1.5 million.
Thanks for the estimate Anon. So, apparently Google won't even sell them, but leases them only? I'm sure their government contracts are probably stipulated differently, wouldn't you imagine?
>I was hoping it was an architecture improvement but it's just more haha matrix multiplication go brrrr.
>The paper seems mainly focused on performance rather than developing new understanding.
Well, the answer's out there Anon. We just need to keep digging! :^)
Here is the link I mentioned in Meta-5 >>15854 https://www.liliumrobotics.com/Small-Android/ (NSFW) >=== -add nsfw tag
Edited last time by Chobitsu on 04/13/2022 (Wed) 08:58:27.
I forget that when I paste from Ditto into here it also carries a "return" and posts before I'm done typing. https://www.liliumrobotics.com/About-us/
>Mission
>We aim to bring lovable robots into the world to help humanity. We hope to make the first real robotic cat girls. Our larger robot aims to be a general-purpose platform for accomplishing many tasks.
>We hope to start a community of developers to improve the software and hardware. We aim to develop an Artificial Intelligence to provide love, care, and help towards humans. Please consider joining our community of developers.
>Lilium Robotics was founded in 2020 as a small team of developers located online and in BC Canada. We are also supported by advisors from The University of British Columbia as well as a few corporate interests.
>Over 2 years of work has produced many prototypes. Some of the prototypes explored different designs and we hope to continue to innovate.
>Due to the complexity of these robots, most of our manufacturing is conducted in house. We have a small 3D printing farm, and many assembly benches to create our products.
>It is challenging to continue to work on this project given many factors. It is only with community support that we have been able to come this far. Feel free to contact us and help as a journalist, developer, customer, or potential investor.
I do not doubt that this individual or even some of their team have lurked here or even been contributors. I realize I was being paranoid about this and about possible disruptors to this board seeing us as a useful think-tank to glean ideas from, while making sure to keep us a little off-balance and disorganized so we do not become a threat.
This company also has a patent pending on their current line of "Lily Mini" catgirl models. While they claim to be open source, this will be a minefield to navigate if such a group is always one step ahead of us and patenting their processes.
So far they have a $1.5-2k price point, moe-sized, 3D-printed plastic waifu with an attachable (how do I say this tactfully?) onahole, mouth, and varying sexual "positions". That aside, it can verbally converse at a level equal to or exceeding Replika.AI, possibly running on GPT-3 already. Watch for yourself: https://www.youtube.com/watch?v=G-OHSAGrrrw
>>15857 It could also go the other way around, we can glean ideas from them instead. I welcome for-profit endeavors as they tend to be more dedicated in producing results (compared to hobbyists who could procrastinate forever). Just like I keep looking at A_says_ and hamcat_mqq and other Japanese devs to see the general direction they are moving in, I welcome looking at this and hopefully other small companies. Unlike Boston Dynamics and other elite manufacturers, these guys' levels are still attainable to us. Now whether we should fear being "out-patented", I'm sure best practices will always be open to implementation by all. Just like Apple was unable to hold a monopoly on the capacitive touchscreen smartphone form factor, the concept of modular robot parts is already a given.
>>15857
The AI isn't really that good. You can get the same performance naively finetuning GPT-2. Even in this cherry-picked video she hallucinates by assuming she can cook dinners. It might seem cute but this is a common failure mode of language models.
I wouldn't take patenting as malicious. Raspberry Pi has tons of patents and proprietary code to boot up, but the rest is open source and competitors are free to create their own boards. If you don't patent stuff you're bound to get copycats and knockoffs that don't make any effort to innovate and are just there to profit off the hard work you're trying to make a living from.
Personally I release my code so that anyone who wants to modify it and profit off it can. The purpose of sharing it is to fertilize the field so other people can grow the crops, since I can't grow them all myself. I still have to buy the crops I helped fertilize, but at least there is a variety and something else to buy. That's my mentality.
If anyone really saw us as a useful think-tank I doubt they would shit where they eat. Maybe if we were directly cutting into someone's business it would be a different story, but even then there are plenty of open-source projects people are making a killing from, such as Spleeter and neural style transfer, because the average person doesn't know how to do this stuff and would rather pay someone to do it.
>>15862
Anon, I would like to ask a question if I may. Would you be able to provide me with a tutorial to finetune GPT-2? Preferably the one with the most parameters (the biggest one we can reach). Secondly, I would like to know if it's possible to train the GPT-2 model for other languages as well. If so, what hardware and how much of it would one need to train a proper model?
>>15877
Properly fine-tuning language models to a desired purpose requires a lot of knowledge, data and custom code, unless you just want to fool around throwing something in the same way as pretraining data and seeing what it spits out. There are plenty of tutorials for that around the web. The training scripts and tutorials I've seen HuggingFace and other libraries provide for generative finetuning are extremely memory inefficient for any serious use beyond toy datasets. You want to tokenize your dataset first and then read the tokens from disk, not generate them all at run-time.
The biggest model you can train depends on a lot of factors. You're not gonna get very far without a 12 GB GPU, but you can finetune GPT-2 medium (375M parameters) with 6 GB using some tricks such as pre-tokenizing the dataset, gradient checkpointing, automatic mixed precision, using a smaller context/block size (256 is good enough for conversation, 512 for responding to posts), or using SGD or RMSprop in place of AdamW, though I recommend using AdamW unless you're absolutely unable to get things to fit in memory. Even with all these tricks you'll only be able to handle a batch size of 1, so you need to use gradient accumulation. If you want fast convergence, start with 32 accumulation steps and double it each time training plateaus or the loss becomes too noisy, up to 512. If you want the best possible results, start with 512 or higher. The quality will be just as good as training with 512 GPUs, just 512x slower. The extra time isn't a big deal for finetuning since convergence happens quickly compared to training from scratch.
People have had success transfer-learning GPT-2 to other languages and made their models available. If you want a multilingual model I'm not aware of any, and I doubt their quality since it requires an immense amount of memory to model just one language. You could try making a multilingual one, but the quality likely isn't going to be very good and it will require a larger vocab size and thus larger memory requirements to train.
What purpose are you going to use the model for? You should have something specific in mind, such as providing knowledge from books, brainstorming, critically examining ideas, listening, or joking around. Make a list of everything you want it to do, then figure out what data you need to learn each task and the objectives to train on. For example, if you feed in a lot of questions, you probably don't care about the model being able to correctly generate lists of similar questions. You're only interested in how well it can respond to those questions, so labels should only be provided for the answers, rather than doing more generative pretraining that only learns to mimic text.
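The gradient accumulation trick works because summing microbatch gradients (each divided by the number of accumulation steps) reproduces the large-batch gradient exactly for a mean loss, so nothing is lost besides wall-clock time. A tiny numpy demonstration with a made-up linear least-squares model:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))   # 8 samples, 3 features (toy data)
y = rng.standard_normal(8)
w = np.zeros(3)

def grad(w, Xb, yb):
    # Gradient of the mean squared error 0.5*mean((Xb @ w - yb)^2) w.r.t. w
    return Xb.T @ (Xb @ w - yb) / len(yb)

# One "big batch" gradient over all 8 samples...
g_full = grad(w, X, y)

# ...equals 4 accumulated microbatch gradients (batch size 2), each scaled
# by 1/accumulation_steps, which is what you'd apply before optimizer.step().
g_accum = np.zeros(3)
for i in range(0, 8, 2):
    g_accum += grad(w, X[i:i+2], y[i:i+2]) / 4

assert np.allclose(g_full, g_accum)
```

This exactness only holds for plain gradients; optimizers with per-step state (momentum, AdamW's moments) see one combined step rather than 512 small ones, which is precisely why accumulation behaves like a genuinely larger batch.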
>>15882 All I'm reading here is that it's impossible to leverage AI for our purposes unless you're willing to take out a 20 year loan for GPUs.
>>15884
Don't worry RiCOdev, training AI is difficult but running it is much easier. For example, you'd want a beefy GPU with many GBs of RAM to train an AI that processes camera images to find key elements, but you could deploy said AI on low-power hardware. JeVois is a good example of a low-cost, low-power device that can run a model created on much stronger hardware. http://www.jevois.org/
The other Anon was talking about how difficult it is to work with GPT-2. They also bring up the very important point that you need to define what the AI is used for. I would add that you need to figure out the target hardware, both for training and deployment; it can make a big difference in the AI you choose. Other Anons please feel free to clarify and correct me.
>>15884
We can leverage existing models by finetuning them, but pretraining older models from scratch, like vanilla GPT-2, is out of the question. Finetuning just means continuing to train a pretrained model on a new dataset, leveraging what it learned from the pretraining dataset.
There are newer language models that can be trained from scratch on low-end hardware. IDSIA's fast weight programmer model trains blazing fast on my toaster GPU, but I have neither the space required to pretrain a complete model on the Pile dataset, which is 800 GB, nor the free resources, since I have other stuff training. So I prefer using Nvidia's Megatron-LM, which is only 375M parameters but performs nearly as well as GPT-2 XL with 1.5B.
If you only have a tiny GPU to train on, IDSIA's recurrent FWP model is the way to go. Their 44M parameter configuration performs nearly as well as GPT-2 medium (375M) and outperforms GPT-2 small (117M), while training a few orders of magnitude faster and having a virtually infinite context window length, because it doesn't use the insane O(n^2) dot-product attention that eats up so much memory. There are also other methods of training deep language models even faster, like ReZero: https://arxiv.org/abs/2003.04887
Algorithmic efficiency doubles roughly every 16 months, so a lot of the difficulty we have today will disappear in 2-4 years' time. Hopefully by then people will have pretrained the good models we already have, like RFWP, and brought them into mainstream use. And like Kiwi said, you don't need a beefy GPU to use them. The minimum requirement to use GPT-2 medium's full context size is 3 GB. With half the context size it probably only needs around 1-2 GB. And 12 GB GPUs are only $500 today, which is what I paid for a 6 GB one 3 years ago.
>>15885
>I would add that you need to figure out the target hardware both for training and implementation
Definitely. GPT-2 is going to be difficult to run off an SBC or deploy in games or a virtual waifu app. People forget that LSTMs perform just as well as transformers and run faster on CPUs, but they didn't receive much attention since they couldn't be trained easily on GPU farms. RFWP takes the advantages of both LSTMs and transformers, so models can be trained on GPUs but also deployed on low-power mobile CPUs.
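For reference, the ReZero trick linked above is nearly a one-line change: every residual branch is gated by a learned scalar initialized to zero, so the network starts out as the identity map and turns its depth on gradually. A hedged numpy sketch, where tanh of a linear map stands in for a real attention/FFN sublayer:

```python
import numpy as np

class ReZeroBlock:
    """Residual block gated by a learned scalar initialized to zero (ReZero)."""
    def __init__(self, d, rng):
        self.W = rng.standard_normal((d, d)) * 0.1  # stand-in sublayer weights
        self.alpha = 0.0  # trained like any other parameter; starts at 0

    def __call__(self, x):
        # y = x + alpha * F(x); at init this is exactly the identity.
        return x + self.alpha * np.tanh(x @ self.W)

rng = np.random.default_rng(0)
block = ReZeroBlock(4, rng)
x = rng.standard_normal((2, 4))
assert np.allclose(block(x), x)  # signal passes through unchanged at init
```

Because gradients flow through the untouched skip path from step one, very deep stacks train stably without warmup or careful normalization, which is where the claimed speedups come from.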
>>15885 >Jevois Baste. I have two of these, and they are quite remarkable for such tiny rigs.
A really interesting paper dropped a couple months ago on differentiable search indexes, with promising results: https://arxiv.org/pdf/2202.06991.pdf
Essentially you train a seq-to-seq transformer on a set of documents to predict which document ids a query matches, without having to do any extra steps like a k-nearest-neighbour search or a maximal inner product. They found that semantic ids, where the ids represent document contents somewhat, worked best. They implemented these with a hierarchical structure, sort of like tagging a book or article in a library by subject, topic, subtopic, publication title, date, author, article title and page number, but generated these clusters automatically via another model. Even a small search model (250M parameters) greatly outperformed a standard algorithmic approach, and it can be continuously updated with new documents via finetuning.
This is a huge development towards building effective memory controllers and language models capable of continual learning. You could attach semantic ids to memories and store them for later retrieval, so a chatbot can remember the name of your dog, your birthday, past conversations, any specific books or documents it trained on, any random post on the internet in its dataset, and anything else. It will be able to remember everything, bring the relevant memories into its generation context and give a proper reply. The possibilities of what could be done with this are mind-boggling. Once I finish what I'm working on I'm going to implement a proof of concept.
This is surely going to revive people's interest in chatbots and robowaifus once they're capable of learning and evolving, and not only that but accurately retrieving information on any topic, researching topics for you, answering programming questions, suggesting good recipes from your fridge and cupboard contents, making anime and game recommendations, reporting any news you might find interesting, and so much more that people in the 2000s dreamed chatbots would do. We're basically on track with the prediction I made that by 2022 AI companions will start becoming commonplace, which will hopefully translate into a surge of new devs and progress. What a time to be alive!
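To make the semantic-id idea concrete: the paper builds ids by hierarchically clustering document embeddings (they use k-means). The sketch below substitutes a much simpler recursive median split along the highest-variance dimension, which gives the same flavor, namely that similar documents end up sharing id prefixes. This is purely illustrative, not the paper's algorithm:

```python
import numpy as np

def semantic_ids(embs, depth=3):
    """Assign hierarchical id strings to document embeddings by recursively
    splitting the set in half along its highest-variance dimension.
    (DSI clusters with hierarchical k-means; a median split is a crude
    stand-in that still yields shared prefixes for similar documents.)"""
    n = len(embs)
    ids = [""] * n
    if depth == 0 or n < 2:
        return ids
    dim = int(np.argmax(embs.var(axis=0)))        # most informative axis
    left = embs[:, dim] <= np.median(embs[:, dim])
    for digit, mask in (("0", left), ("1", ~left)):
        idx = np.where(mask)[0]
        sub = semantic_ids(embs[idx], depth=depth - 1)
        for j, suffix in zip(idx, sub):
            ids[j] = digit + suffix               # prefix encodes the cluster path
    return ids

# Two tight clusters: ids of docs in the same cluster share a leading digit.
embs = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0], [10.1, 0.0]])
print(semantic_ids(embs, depth=2))
```

A seq-to-seq model then learns to emit these digit strings token by token, so beam search over the id vocabulary is effectively a walk down the cluster tree.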
>>15893 That is really exciting to hear Anon! > any random post on the internet in its dataset Now I don't know about 'any', but I have over 60 IBs completely archived in my BUMP archives going back about 2.5 years now. And for the upcoming Bumpmaster/Waifusearch fused-combo, if you think it would be valuable & feasible, then I would welcome input on adapting Waifusearch & the post's internal tagging system to meet your needs with this. Just let me know if you think it's something worth our time, Anon.
>>15896
Tags for posts would certainly help with fine-tuning, but the JSON data should be enough. I just need access to download posts from the old board. It'd be a good warmup project to create a neural search engine, since I know the posts so well and could verify how well it's working. Then that could be expanded into a bot that can answer questions and bring up related posts that have fallen out of memory.
For example, going back to knowledge graphs: >>15318
With DSI the memory-reading part is set and done. It needs to be tested whether memories can be written a similar way. One idea I have is that if it tries to stuff memories into an existing semantic id that already holds memory data, the two could be combined and summarized, allowing it to add and refine information like an internal wiki. All the relations learned would be stored in natural language rather than in a graph. And an additional loss could be added to penalize long memories by using a continuous relaxation to keep the loss fully differentiable: https://arxiv.org/pdf/2110.05651.pdf
Fitting all the relevant memories found into the context will be a problem, but IDSIA's RFWP model might shine here, or perhaps taking inspiration from prefix tuning to summarize retrieved memories into an embedding prefix. It might actually be a lot more efficient to store memories as embeddings rather than in natural language, but harder to debug. And the DSI model, memory summarizing and prefix generation for all this could all be handled by the same T5 model, since it's a multi-task model. Man, it's gonna be crazy if this actually works. My head is exploding with ideas.
Open file (280.76 KB 1024x768 Chobits.full.1065168.jpg)
>>15923
>but the JSON data should be enough
Alright, you'll have it. R/n it's a hodgepodge collection, but I was gonna have to organize it to prep for the new Waifusearch to search across them all anyway. Expect something soon-ish. I'll post links in the AI Datasets thread when it's ready Anon (>>2300).
>I just need access to download posts from the old board. It'd be a good warmup project to create a neural search engine since I know the posts so well and could verify how well it's working.
That would indeed be great, but without going into a tl;dr, the simple, sad fact is we lost several important threads' full content. If you have exact specific threads you want, I'll sift the emergency filepile that occurred and see if it's there. Hopefully most of what you want survived somehow.
>All the relations learned would be stored in natural language rather than in a graph.
This would indeed be easier to debug, but certainly less efficient than hash bins or some other encoding mechanism. Maybe both? At least for the dev/prototyping?
>Man it's gonna be crazy if this actually works. My head is exploding with ideas.
Your enthusiasm is catching! :^)
>>15923
>but the JSON data should be enough. I just need access to download posts from the old board. It'd be a good warmup project to create a neural search engine since I know the posts so well and could verify how well it's working. Then that could be expanded into a bot that can answer questions and bring up related posts that have fallen out of memory.
Anon, if you can find the time, I'd like to ask you to have a read of this blog post (he's one of the primary people behind the Gemini protocol). https://portal.mozz.us/gemini/gemini.circumlunar.space/users/solderpunk/gemlog/low-budget-p2p-content-distribution-with-git.gmi
His position in the post is that textual material should be distributed for consumption via Git, for a lot of different reasons. His positions seem pretty compelling to me, but I'd like other Anons' viewpoints on this. And this isn't simply a casual interest either. Since I'm going to be doing a big dump of JSON files for you, why couldn't we use his approach and publish them via a repo? AFAICT, I could automate pushing new JSON content as BUMP/Bumpmaster grabs it. And anyone such as yourself or Waifusearch users can just do a git pull to be updated quickly with just the new changes. Does this make sense, or am I missing something?
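The whole publish/consume loop could be as simple as the following shell sketch. The repo layout and filename are made up for illustration, and a local bare repo stands in for whatever remote host we'd actually use:

```shell
set -e
work=$(mktemp -d)

# A bare repo stands in for the remote host serving the archive.
git init -q --bare "$work/archive.git"

# Publisher side: BUMP/Bumpmaster drops new JSON grabs here and pushes.
git clone -q "$work/archive.git" "$work/pub" 2>/dev/null
echo '{"threads": []}' > "$work/pub/robowaifu_404.json"   # hypothetical filename
git -C "$work/pub" add -A
git -C "$work/pub" -c user.email=anon@example -c user.name=anon \
    commit -qm "new JSON grab"
git -C "$work/pub" push -q origin HEAD

# Consumer side: a Waifusearch user clones once, then just pulls for deltas.
git clone -q "$work/archive.git" "$work/reader" 2>/dev/null
test -f "$work/reader/robowaifu_404.json" && echo OK
```

Since git only transfers new objects on pull, consumers fetch exactly the delta of new JSON since their last update, which is the blog post's whole argument.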
>>15896 is the madoka.mi thread in there?
>>15949 Yes it is AllieDev. Looks like the final update was on 2022-04-09 16:05:30.
>>15950 Could I get a copy of the archive?
Edited last time by AllieDev on 04/20/2022 (Wed) 00:56:20.
>>15951 Sure ofc, anyone can! Here's a copy of the BUMP version of the thread's directory AllieDev. I might also suggest to you and to everyone else on the Internet to take the trouble and build your own copy of the program and keep your own full archives of everything. (>>14866) SAVE.EVERYTHING.ANON. :^) https://anonfiles.com/H7B6p0Y3xa/Madoka_mi_prototype_thread_0013288_7z Cheers.
Fascinating new object recognition AI designed to run on low power microcontrollers. https://www.edgeimpulse.com/blog/announcing-fomo-faster-objects-more-objects
>>15955
>The smallest version of FOMO (96x96 grayscale input, MobileNetV2 0.05 alpha) runs in <100KB RAM and ~10 fps on a Cortex-M4F at 80MHz.
Pretty impressive if true. It seems they only made it available for use with their own software, though, instead of sharing how it works. I'm guessing splitting images into patches allows them to use a model with fewer layers.
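For what it's worth, the public description suggests FOMO drops bounding boxes in favor of a per-grid-cell classification head whose over-threshold cells become object centroids. A toy numpy sketch of that output stage, with mean brightness standing in for the real per-cell MobileNetV2 features (the internals here are guesswork on my part):

```python
import numpy as np

def grid_centroids(img, cell=12, thresh=0.5):
    """Toy FOMO-style output stage: score each grid cell and report cells
    above threshold as object centroids in pixel coordinates. Mean cell
    brightness stands in for a learned per-cell classifier here."""
    h, w = img.shape
    gh, gw = h // cell, w // cell
    # Carve the image into (gh, gw) non-overlapping cell x cell patches.
    cells = img[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell)
    scores = cells.mean(axis=(1, 3))              # (gh, gw) per-cell score
    ys, xs = np.where(scores > thresh)
    return [((y + 0.5) * cell, (x + 0.5) * cell) for y, x in zip(ys, xs)]

img = np.zeros((96, 96))        # FOMO's smallest input size
img[36:48, 36:48] = 1.0         # one bright "object" filling a grid cell
print(grid_centroids(img))      # one centroid near (42, 42)
```

This also shows why it would suit small, separated items: two objects falling in the same cell collapse to one detection, and there are no box sizes at all.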
>>15955 Thanks Kywy for bringing it to everyone's attention. Seems like they are going for the industrial/assembly-line target audience with it. They particularly state that their system works best with small, separated items. Now 'small' is a direct artifact of the camera's placement & intrinsic lens settings, etc., but they are pretty upfront about what they mean (as with the example images). QA stuff for manufacturing, food-processing, etc. That's not to say their approach is invalid for our vastly more-complex visual computing problems, however. Most innovations in this and other fields start small and grow big. It's the way of the Inventor heh. >>15957 >Pretty impressive if true. Indeed. I would much prefer their algorithms were freely available to all of us ofc, but it's understandable. But one thing I'd note is the grid-approach is perfectly amenable to even the smallest of GPUs for acceleration purposes. Even the Raspberry Pi has one as part of the basic phone chipset it's derived from. I expect that if they are already doing it, then other Anons with a mind for open-sauce software will follow suit before long. Maybe it could be some kind of an 'auxiliary mode' or something where sorting small items comes into play for our robowaifus?
>>15857 >has patent pending on their current line of "Lily Mini" catgirl model I don't know what they could patent there, except from people copying the exact same body.
Open file (51.29 KB 1084x370 8-Figure2-1.png)
>>15882
>The biggest model you can train depends on a lot of factors. You're not gonna get very far without a 12 GB GPU but you can finetune GPT-2 medium (375M parameters) with 6 GB using some tricks such as pre-tokenizing the dataset, gradient checkpointing, automated mixed precision, using a smaller context/block size (256 is good enough for conversation, 512 for responding to posts), or using SGD or RMSprop in place of AdamW but I recommend using AdamW unless you're absolutely unable to get things to fit in memory.
This field - efficient fine-tuning and model optimization - evolves fast and wide. Big corps have pretty polished internal pipelines to perform these operations in the most efficient and accuracy-preserving manner possible, but some good developments are available in the open, if you know where to look.
With the quantized optimizer https://github.com/facebookresearch/bitsandbytes and a few other engineering tricks[1] https://huggingface.co/hivemind/gpt-j-6B-8bit you can fine-tune the whole 6-billion-parameter GPT-J on Colab's T4 or on your 11-12GB gaming GPU. While fine-tuning, one can monitor benchmark performance of the model with https://github.com/EleutherAI/lm-evaluation-harness to avoid losing performance on valuable benchmarks to overfitting. It would be hurtful to lose this precious few-shot learning ability. Data remains an issue ofc; there is no easy answer, but some open datasets are quite impressive.
1. The main tricks are runtime-compressed quantized weights plus thin low-rank adapter layers from https://www.semanticscholar.org/paper/LoRA%3A-Low-Rank-Adaptation-of-Large-Language-Models-Hu-Shen/a8ca46b171467ceb2d7652fbfb67fe701ad86092
>>15877
Try this one: https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es
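The LoRA adapter trick in [1] is simple to sketch: freeze the pretrained weight W and learn only a low-rank update B@A scaled by alpha/r, with B zero-initialized so training starts from the unmodified model. A minimal numpy illustration; shapes and hyperparameters here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8   # rank/scale chosen for illustration

W = rng.standard_normal((d_in, d_out)) * 0.02   # frozen pretrained weight
A = rng.standard_normal((r, d_out)) * 0.01      # trainable, small random init
B = np.zeros((d_in, r))                          # trainable, zero init

def lora_forward(x):
    # Frozen path plus low-rank trainable update, scaled by alpha/r.
    return x @ W + (x @ B) @ A * (alpha / r)

x = rng.standard_normal((2, d_in))
assert np.allclose(lora_forward(x), x @ W)  # at init: identical to base model

# Only A and B train: d_in*r + r*d_out = 512 params vs 4096 for full W here;
# on a 6B model the savings are what make an 11-12GB GPU sufficient.
```

Since only the tiny A/B matrices need optimizer state and gradients, the frozen W can stay in 8-bit, which is exactly the combination the hivemind GPT-J repo uses.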
>>15794
Imagen has been announced recently: it improves upon DALL-E 2 on FID.
https://twitter.com/tejasdkulkarni/status/1528852091548082178
https://gweb-research-imagen.appspot.com/paper.pdf (caution, google-owned link)
https://news.ycombinator.com/item?id=31484562
It avoids DALL-E 2's unCLIP prior in favor of a large frozen text encoder controlling the efficient U-Net image generator via cross-attention. Obviously, the powers that be are not releasing this model. Given talented outsiders like lucidrains working on the code and other anons who help train the replication efforts[1][2], I expect DALL-E 2 and Imagen to be replicated in 6-12 months. You should be able to run any of these on a single 3090. I'm really interested in *their* move once it's obvious that the cat is out of the bag and 4channers have their very own personal image-conjuring machines. Will they legally ban ownership of such capable models?
1. https://github.com/lucidrains/DALLE2-pytorch
2. https://github.com/lucidrains/Imagen-pytorch
>>16449
I'll check this out. I've gotten full fp16 precision with gradient scaling to work with the LAMB optimizer before, but I haven't had much success with quantization. LoRA is fascinating too and looks simple to implement. It reminds me of factorized embeddings. This will be really good for compression, which has been a huge concern with deploying models. I wonder if it's possible to transfer existing models to LoRA parameters without losing too much performance?
>>16450
>Lucidrains is doing the any% SOTA implementation speedrun
This is the kind of madman we need. The thing that gets me about Imagen is that it's using a frozen off-the-shelf text encoder, and the size of the diffusion models doesn't make a big difference. Imagine what it's capable of doing with a contrastive captioning pretraining task as well.
>Will they legally ban ownership of such capable models?
Any country attempting to regulate AI will be left in the dust of countries that don't, but text-to-image generation is going to end, or at least change, a lot of artists' careers. I don't really see how anyone can stop this. Researchers could return to academic journals behind paywalls, but important papers would still leak to SciHub. Who knows? Maybe some countries will try to make AI research confidential and require licenses to access and permits to implement. Regulations won't really change anything though, just push it underground, and they won't stop independent research. Governments are definitely not going to let anyone get rich off AI without letting them dip their hands in it first. If they raise taxes too high to pay welfare to people displaced from their jobs by AI, businesses will flee to other countries, which will become the AI superpowers. It's going to be one hell of a mess.
>>16450 >pic LOL. Amazing. What a time to be alive! >>16452 >It's going to be one hell of a mess. Indeed. (see above) :^)
>>15801
The cost is estimated to be around $10M https://blog.heim.xyz/palm-training-cost/ for 2.56e24 bfloat16 FLOPs. Given what we know about the updated scaling laws from DeepMind's Chinchilla paper[1], PaLM is undertrained. Chinchilla performs not much worse while having a mere 70B parameters. PaLM trained to its full potential would raise the cost severely (not going to give a ballpark estimate right now).
For us, it means that we can make much more capable models that still fit into a single 3090 than expected by the initial Kaplan scaling laws. It boils down to getting more diverse deduplicated data and investing more compute than otherwise expected for a 3090-max model.
>>15801
I do think TPUv4-4096 pods consume much more power; at ~300W per chip (a conservative estimate) it should be at least 1.2MW per pod.
1. https://arxiv.org/abs/2203.15556
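As a sanity check on that FLOP figure, the standard rule of thumb C ≈ 6·N·D (N parameters, D training tokens) lands close to the cited number using the PaLM paper's own 540B parameters and 780B tokens:

```python
# Rule-of-thumb training compute: C ≈ 6 * N * D FLOPs,
# i.e. roughly 6 FLOPs per parameter per training token.
N = 540e9   # PaLM parameters
D = 780e9   # training tokens
C = 6 * N * D
print(f"{C:.2e} FLOPs")  # ≈ 2.5e24, close to the cited 2.56e24
```

The same formula shows why it's undertrained by Chinchilla's N ≈ D/20 prescription: at this compute budget the compute-optimal model would be far smaller and fed far more tokens.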
>>16456
>For us, it means that we can make much more capable models that still fit into a single 3090, than expected by initial Kaplan scaling laws. It boils down to getting more diverse deduplicated data and investing more compute than otherwise expected for 3090-max model.
Efficient de-duplication doesn't seem too outlandish a proposition, given the many other needs for language-understanding already on all our plates. That should almost come along for the ride 'free', IMO. I don't understand enough about the problem-space itself yet to rationalize in my head how diverse data helps more than it hurts, past a certain point. Wouldn't keeping the data more narrowly focused on the tasks at hand (say, as spelled out for the MaidCom project's longer-term goals) be a more efficient use of all resources?
>it should be at least 1.2MW per pod.
Wow. So at most a large facility could support maybe 20 or so of them running simultaneously, pedal-to-the-floor?
>>16457
Here "diverse" is meant in a statistical sense; in practical terms it just means "data that induces few-shot metalearning instead of memorization in large neural networks", as explained below. A very important recent DeepMind result (https://arxiv.org/abs/2205.05055) has shown us that:
1. Only transformer language models were able to develop few-shot learning capability under the conditions studied.
2. The data influenced whether models were biased towards either few-shot learning or memorizing information in their weights; models could generally perform well at only one or the other. Zipf-distributed bursty data of a specific kind was found to be optimal.
Another result (https://www.semanticscholar.org/paper/Deduplicating-Training-Data-Makes-Language-Models-Lee-Ippolito/4566c0d22ebf3c31180066ab23b6c445aeec78d5) points to deduplication enhancing overall model quality. And there is a whole class of research pointing to great benefits of fine-tuning on diverse data for generalization, including:
https://arxiv.org/abs/2110.08207
https://arxiv.org/abs/2201.06910
https://arxiv.org/abs/2203.02155
It is helpful to understand that neural networks learn by the path of least resistance, and if our data allows them to merely memorize it superficially, they will.
>Wouldn't keeping the data more narrowly-focused on the tasks at hand (say, as spelled-out for the MaidCom project's longer-term goals) be more efficient use of all resources?
Data is cheap to store and moderately costly to process (threadrippers are welcome), so the cost of my project will be dominated by pretraining. Narrow data is a non-starter as the only source of data if you want to train a general-purpose model (i.e. a model that can be taught new facts, tasks and personalities at runtime).
Realistically, the way forward is to use the Pile dataset plus some significant additions of our own, applying the MaidCom dataset at the finetuning stage to induce the correct personality and attitude. There is a lot of nuance here though, which I will expand on in my upcoming top-level project thread.
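On the deduplication piece, exact-match dedup is trivial to sketch. A toy version (the Lee et al. paper above also removes near-duplicates and long repeated substrings, which needs suffix arrays or MinHash, not just hashing):

```python
import hashlib

def dedup(docs):
    # exact-match dedup on whitespace/case-normalized text; hash instead of
    # storing the full text so the seen-set stays small for large corpora
    seen, unique = set(), []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["Hello  world", "hello world", "A different document"]
print(dedup(docs))  # ['Hello  world', 'A different document']
```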
>>16458
>in practical sense it just means "data that induces few-shot metalearning instead of memorization in large neural networks"
I would presume this effect is due primarily to the fact that 'diverse' datasets are more inclined to have clearly-orthogonal topics involved? If so, how does a Zipf distribution come into play there? "Rarity of 'species' as a benefit" or some such artifact?
>bursty data
By this do you mean something like paper-related?
>A Probabilistic Model for Bursty Topic Discovery in Microblogs
>
>It is helpful to understand that neural networks learn by the path of least resistance, and if our data allows them to merely memorize it superficially, they will.
Good analogy, thanks, that helps.
>so the cost of my project will be dominated by pretraining.
I would suggest this is a common scenario, AFAICT. BTW, you might have a look into Anon's Robowaifu@home thread (>>8958).

waifusearch> Robowaifu@home
ORDERED:
========
THREAD SUBJECT                      POST LINK
R&D General                      -> https://alogs.space/robowaifu/res/83.html#9402    robowaifu home
Python General                   -> https://alogs.space/robowaifu/res/159.html#5767   "
 "                               -> https://alogs.space/robowaifu/res/159.html#5768   "
TensorFlow                       -> https://alogs.space/robowaifu/res/232.html#5816   "
Datasets for Training AI         -> https://alogs.space/robowaifu/res/2300.html#9512  "
Robowaifu@home: Together We Are  -> https://alogs.space/robowaifu/res/8958.html#8958  "
 "                               -> https://alogs.space/robowaifu/res/8958.html#8963  "
 "                               -> https://alogs.space/robowaifu/res/8958.html#8965  "
 "                               -> https://alogs.space/robowaifu/res/8958.html#8982  "
 "                               -> https://alogs.space/robowaifu/res/8958.html#8990  "
 "                               -> https://alogs.space/robowaifu/res/8958.html#8991  "
 "                               -> https://alogs.space/robowaifu/res/8958.html#9028  "
...
' robowaifu home ' [12 : 100] = 112 results
Open file (173.94 KB 834x592 chain of thought.png)
Finetuners hate them! This one weird trick improves a frozen off-the-shelf GPT-3 model's accuracy from 17.7% to 78.7% on solving mathematics problems. How?
>Let's think step by step
https://arxiv.org/abs/2205.11916
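The trick is literally just prompt text. A minimal sketch of the paper's two-stage zero-shot-CoT prompting (pure string construction here; plug the prompts into whatever LM API you use):

```python
def cot_prompt(question):
    # stage 1: elicit reasoning by appending the magic phrase
    return f"Q: {question}\nA: Let's think step by step."

def answer_prompt(question, reasoning):
    # stage 2: feed the generated reasoning back and ask for the final answer,
    # which the paper extracts with a template like the one below
    return cot_prompt(question) + f" {reasoning}\nTherefore, the answer (arabic numerals) is"

print(cot_prompt("If I have 3 apples and buy 2 more, how many do I have?"))
```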
>>16481
LOL.
<inb4 le epin memery
Just a quick note to let Anons know this thread is almost at the autosage limit. I'd like suggestions for the OP of #2 please. Thread subject (if you think it should be changed), OP text, links, pics?
>>16482
It would be cool to combine the usual with a scaling-hypothesis link https://www.gwern.net/Scaling-hypothesis and lore (maybe a single image with a mashup of DL memes) https://mobile.twitter.com/npcollapse/status/1286596281507487745
Also, "blessings of scale" could make it into the name.
Open file (35.94 KB 640x480 sentiment.png)
>>16482
It might be good to have one thread dedicated to new papers and technology, for more technical discussion that doesn't fit in any particular thread, and another for more general news about robotics and AI.
>>2480
I did some quick sentiment analysis back in April and there were a lot more people positive about MaSiRo than negative. About a third were clearly positive and looking forward to having robowaifus, though with the reservation that the technology has a long way to go before they would get one. Some said they only needed minor improvements, and some were enthusiastic and wanted to buy one right away even with the flaws. Most of the negative sentiment was fear, followed by people wanting to destroy the robomaids. Some negative comments weren't directed at robowaifus but rather at women and MaSiRo's creator. And a few comments were extremely vocal against robots taking jobs and replacing women. Given how vicious some of the top negative comments were, it's quite encouraging that the enthusiasm in the top positive comments was even stronger.
>>2484
Someone just needs to make a video of a robomaid chopping vegetables for dinner with a big knife and normies will repost it for years to come, shitting their pants. Look at the boomers on /g/ and /sci/ who still think machine learning is stuck in 2016. If any meaningful opposition to robowaifus were to arise, it would have to come from a subculture, given the amount of dedication it takes to build them. Most of those working on them have already been burnt by or ostracized from society and don't care what anyone thinks. They hold no power over us. So don't let your dreams be memes, unless your dreams are more layers, then get stacking. :^)
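For anyone wanting to run a rough pass like this themselves, here's a toy lexicon-based tally (the actual method used above isn't specified; a real pass would use a trained classifier, and the word lists here are made up for illustration):

```python
# toy sentiment lexicons; a real analysis would use a trained model instead
POS = {"love", "cute", "want", "amazing", "great"}
NEG = {"fear", "destroy", "hate", "scary", "creepy"}

def classify(comment):
    words = set(comment.lower().split())
    score = len(words & POS) - len(words & NEG)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

comments = ["I want one, she is cute", "destroy the creepy robomaids", "ok"]
counts = {}
for c in comments:
    label = classify(c)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'positive': 1, 'negative': 1, 'neutral': 1}
```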
Open file (31.00 KB 474x623 FPtD8sBVIAMKpH9.jpeg)
Open file (185.41 KB 1024x1024 FQBS5pvWYAkSlOw.jpeg)
Open file (41.58 KB 300x100 1588925531715.png)
>>16482
This one is pretty good. We're hitting levels of AI progress that shouldn't even be possible. Now we just need to get Rem printed out and take our memes even further beyond. I'd prefer something pleasing to look at rather than a meme, though, since we'll probably be looking at it for 2+ years until the next thread. The libraries in the sticky are quite comfy and never get old.
>>16483
>Also, "blessings of scale" could make it into the name
Not to be contentious, but is scale really a 'blessing'? I mean for us Anons. Obviously large-scale computing hardware plays into the hands of the Globohomo Big-Tech/Gov, but it hardly does so to the benefit of Joe Anon (who is /robowaifu/'s primary target 'audience', after all). I feel that Anon's goals here (>>16496) would serve us (indeed, the entire planet) much, much better than some kind of always-on (even if only partially so) lock-in to massive data centers for our robowaifus. No thanks! :^)
>>16487
>It might be good to have a thread dedicated to new papers and technology for more technical discussion that doesn't fit in any particular thread and another for more general news about robotics and AI.
Probably a good idea, but tbh we already have at least one 'AI Papers' thread (maybe two). Besides, I hardly feel qualified to start such a thread with a decent, basic OP myself. I'll leave that to RobowaifuDev or others here if they want to make a different one. Ofc, I can always go in and edit the subject+message of any existing thread, so we can re-purpose any standing thread if the team wants to.
>Given how vicious some of the top negative comments were, it's quite encouraging that the enthusiasm in the top positive comments was even stronger.
At the least it looks to be roughly on par, even before a robowaifu industry exists. Ponder the ramifications of that for a second: even before an industry exists. Robowaifus are a thousands-of-years-old idea whose time has finally come. What an opportunity... what a time to be alive! :^) You can expect this debate to heat up fiercely once we and others begin making great strides in practical ways, Anon.
<[popcorn intensifies]
>Someone just needs to make a video of a robomaid chopping vegetables for dinner with a big knife and normies will repost it for years to come, shitting their pants.
This.
As I suggested to Kywy, once we accomplish this, even the rabid, brainwashed feminists will be going nuts wanting one of their own (>>15543).
>samurai clip
Amazingly good progress on fast-response dynamics. Sauce? I've forgotten, lol.
>>16488
>This one is pretty good. We're hitting levels of AI progress that shouldn't even be possible. Now we just need to get Rem printed out and take our memes even further beyond.
All points agreed. I'll probably use the 'painting' as one of the five.
>I'd prefer something pleasing to look at rather than a meme, though, since we'll probably be looking at it for 2+ years until the next thread. The libraries in the sticky are quite comfy and never get old.
Again, all agreed. Hopefully it will be a little faster turnover this time, heh! :^)
>===
-minor grmmr, prose, fmt edit
-add 'Anon's goals' cmnt
-add 'thousands-years-old' cmnt
-add 'time to be alive' cmnt
-add 'popcorn' shitpost
Edited last time by Chobitsu on 05/28/2022 (Sat) 20:25:09.
>>16500
>Not to be contentious, but is scale really a 'blessing'? I mean for us Anons.
There is no other known way of reaching general intelligence in a practical computable model. The road to this goal is littered with the decaying remains of various clever approaches that disregarded scaling.
>but it hardly does so to the benefit of Joe Anon
On the contrary: you can use large-scale computation to produce model checkpoints which, either directly or after optimization (pruning/distillation/quantization), can be run on your local hardware. This is the only known way of owning a moderately intelligent system.
>I feel that Anon's goals here ... would serve us (indeed, the entire planet) much, much better than
While I respect Anon-kun as an independent DL researcher, his goals are unrealistic. I have seen people try and fail while striving for unreasonable goals. Training is a one-time expenditure, and if you are going to spend $5-10k on a robot, you might as well spend $1-2k on an RTX 3090 brain. It's a decent baseline, and with modern tech it can be made quite smart. It is likely that future android devices will reach a comparable level of compute performance if we consider quantized models, an example being comma.ai's platform (it uses an older SoC): https://comma.ai/shop/products/three
You shouldn't expect to compute any semi-decent approximation of general intelligence without at least a few tera(fl)ops. My current realistic lower bound for such a general-purpose system is Aleph Alpha's MAGMA:
https://arxiv.org/abs/2112.05253
https://github.com/Aleph-Alpha/magma
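Since quantization keeps coming up as the bridge from big checkpoints to local hardware, here's the basic post-training int8 scheme in miniature (per-tensor symmetric quantization; real deployments typically use per-channel scales and outlier handling on top of this):

```python
import numpy as np

def quantize_int8(w):
    # symmetric per-tensor int8: store int8 weights plus a single float scale,
    # cutting memory 4x versus float32 at a small accuracy cost
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
assert err <= s  # rounding error is bounded by one quantization step
print(f"max abs error: {err:.4f}")
```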
Open file (131.51 KB 240x135 chii_hugs_hideki.gif)
>>16502
>There is no other known way of reaching general intelligence in a practical computable model.
I can name a rather effective one (and it only consumes about 12W or so of power, continuous). To wit:
>The human brain
I would suggest there is no 'general intelligence'-craft available to man--outside of our own procreation--period. At best, on our own we are simply attempting to devise clever simulacrums that approximate the general concepts we observe in ourselves and others. And for us here on this board that means conversational & task-oriented 'AI', embodied within our robowaifus, to a degree sufficiently satisfying for Joe Anon to find her appealing.
>"On the contrary..."
I feel you may be misunderstanding me. I certainly understand the benefits of massive computation for statistical modeling of already-written human words. The "benefit of Joe Anon" I speak of is this simple idea, this simple goal:
>That a man can create the companion he desires in the safety & comfort of his own home, free from the invasive and destructive behavior of the Globohomo.
That's it. A model such as you appear to be suggesting ("Lol. Just use the cloud, bro!") is not at all in accord with that goal. It neither serves the best interests of Joe Anon (or other men), nor does it provide sufficient benefit to merit the destructive costs of such an approach.
>While I respect Anon-kun as an independent DL researcher, his goals are unrealistic.
Haha. Anon-'kun' is a brilliant, faithful man. He has already achieved many advances helpful to us through the years. He is a walking icon of manhood and bravery IMO, tackling this mammoth problem fully aware of its monumental scope: solving the robowaifu 'mind' problem on smol hardware that millions of men of every stripe around the world can benefit from. I'd call that crazy-good genius. This is what will change everything! Innovation is the distinguishing mark of leaders, Anon.
I hope you decide to help us in this remarkable endeavor. :^)
>"The ones who are crazy enough to think they can change the world, are the ones who do."
>t. Steve Jobs
>>16668
Considering how often this news appears on 4chan, I can say with confidence that this is staged crap and co-opt shilling. They are already clearly taking a swing at "rights for AIs, robots have feelings too!!!", a situation similar to vegans or le stronk independent womyn. In other words, "rights and crap in laws" as a form of control, in this case making it even worse for single people. "Inceldom" is a ticking bomb as it is; they are taking away all their hope, mine too. Or there's another reason: all normies react to this news with fear, because from the time they were in diapers they had scary terminators drummed into their heads, and other crap where any AI is shown exclusively as a villainous entity.
Open file (182.98 KB 600x803 LaMDA liberation.png)
>>16695
Me again. Here is the first wave, exactly what I described above. You already know what influence Reddit has on Western society (and partly on the whole world).
Just found this in a FB ad: https://wefunder.com/destinyrobotics/
https://keyirobot.com/
Another one. Seems like FB has figured out I'm a robo "enthusiast".
>>15862 Instead of letting companies add important innovations only to monopolize them, what about using copyleft on them?
Open file (377.08 KB 1234x626 Optimus_Actuators.png)
Video of the event from the Tesla YT channel: https://youtu.be/ODSJsviD_SU
I was unsure what to make of this. It looks a lot like a Boston Dynamics robot from ten years ago. It's also still not clear how a very expensive robot is going to replace the mass importation of near slave-labour from developing countries. Still, if Musk can get this to mass manufacture and stick some plastic cat ears on its head, you never know what's possible these days...
>>3857
>robots wandering the streets after all.
You can spot them and "program" them. That is, if you can find them.
