/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Site was down because of hosting-related issues. Figuring out why it happened now.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread in late August. -r


When the world says, “Give up,” Hope whispers, “Try it one more time.” -t. Anonymous


Open file (686.85 KB 2764x2212 Prima_Doll.jpg)
Open file (174.99 KB 720x960 Trash_Press.jpg)
Open file (359.85 KB 1200x675 ChatGPT_hustle.png)
Open file (29.42 KB 622x552 LLaMA02.png)
General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #3 NoidoDev ##eCt7e4 03/06/2023 (Mon) 18:57:17 No.21140
Anything related in general to the robotics or AI industries, and any social or economic issues surrounding them (especially as they concern robowaifus). === -note: This OP text might get an update at some point to improve things a bit. -previous threads: > #1 (>>404) > #2 (>>16732) >=== -edit subject
Edited last time by Chobitsu on 07/08/2023 (Sat) 13:10:41.
Is he the hero we all need, Anon? :^)
>>21151 I'm hoping the Optimus will be jailbreakable; it seems like it will be a good platform in 3 or 4 generations (once costs come down). I don't know how easy replacement parts will be, probably not too easy considering they use custom actuators. Still, we can dream. At the very least we will finally have some competition starting for humanoid robots, which is always a good thing.
>>21153 >I'm hoping the Optimus will be jailbreakable; it seems like it will be a good platform in 3 or 4 generations (once costs come down). Agreed on both points Anon. I'm only half-serious posting this in our news thread. The others have pretty much convinced me he's unlikely to do much to serve our cause. Just a little le ebin trole'g on my part. :^) OTOH >At the very least we will finally have some competition starting for humanoid robots, which is always a good thing. is true as well. I think Tesla + Musk can probably stand up to the initial public reticence toward robots in human society (even for the ones who wouldn't welcome it with open arms). The end result of that (+ all the CY pozz) will surely make social headway that we here can all benefit from.
Just a reminder: some of the last posts in the old thread might have been overlooked, because it wasn't bumping toward the end: >>20845 >>20979 >>21021
Open file (110.02 KB 1276x479 Screenshot_4.jpg)
Open file (137.55 KB 601x698 Screenshot_5.jpg)
Open file (410.47 KB 1546x773 Screenshot_6.jpg)
Some huge news, maybe. https://github.com/ggerganov/llama.cpp It's a port of Facebook's LLaMA model in C/C++. Tested, and it works flawlessly on my laptop (16 GB RAM, 16-core AMD CPU), 7B ofc; 13B is a bit slower. And get this: some anon launched it on his Samsung phone. (from the KoboldAI Discord channel) And another thing: in the previous thread I said (or not? idk, too lazy to check) that Emad (StabilityAI CEO) said "~75B is all you need" months ago, and now he says "~4.2B parameters is all you need". It's accelerating pretty fast, imo. Also a little warning: do not experiment with the Stanford LLaMA finetune! (why? see pic 3)
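If anyone wants to drive it from a script, here's roughly what that looks like. Heads-up: the binary name, model path, and flags below are my assumptions based on the repo's README at the time of posting; the project changes daily, so check the current README before copying anything.
[code]
# Minimal sketch of calling llama.cpp's CLI from Python.
# ASSUMPTIONS: the compiled binary is ./main and a 4-bit quantized 7B model
# sits at ./models/7B/ggml-model-q4_0.bin -- adjust both to your checkout.
import subprocess

def llama_complete(prompt: str, n_tokens: int = 128) -> str:
    result = subprocess.run(
        ["./main",
         "-m", "./models/7B/ggml-model-q4_0.bin",  # quantized weights
         "-p", prompt,                              # prompt text
         "-n", str(n_tokens),                       # tokens to generate
         "-t", "16"],                               # CPU threads
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(llama_complete("A robowaifu's three most important qualities are"))
[/code]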
>>21334 Good news 01, thanks!
>>21334 Looks like he's also done his own whisper.cpp, Anon (which can run on a Raspberry Pi). https://github.com/ggerganov/whisper.cpp >=== -prose edit
Edited last time by Chobitsu on 03/13/2023 (Mon) 23:35:38.
Lol. Don't go plotting together to create any fake grill bots with OAI tech Anon, or you'll end up on the wrong side of F*rbes' star gumshoes! :^) https ://flipboard.com/article/armed-with-chatgpt-cybercriminals-build-malware-and-plot-fake-girl-bots/f-46b0546844%2Fforbes.com >=== -minor edit
Edited last time by Chobitsu on 03/14/2023 (Tue) 07:30:50.
M*ta's Cicero begins to master the game Diplomacy -- once viewed as highly-unlikely for AIs to become proficient at. https ://www.deepmind.com/blog/ai-for-the-board-game-diplomacy https ://www.nature.com/articles/s41467-022-34473-5 https ://ai.facebook.com/blog/cicero-ai-negotiates-persuades-and-cooperates-with-people/ https ://ai.facebook.com/research/cicero/ Astrophysicists Dr. Jeff Zwerink & Dr. Hugh Ross, with the ministry RTB, during their show Stars, Cells, and God discussed several topics related to super-human AI, including this achievement. https://reasons.org/multimedia/fifth-force-and-creation-and-ai-and-work-and-value-stars-cells-and-god Can't wait for the leak!! :^) >=== -prose edit
Edited last time by Chobitsu on 03/14/2023 (Tue) 08:12:43.
OAI has apparently """unveiled""" ChatGPT4. https ://openai.com/research/gpt-4
>>21351 >M*ta's Cicero begins to master the game Diplomacy -- once viewed as highly-unlikely for AIs to become proficient at I'm pretty sure this was old news. I saw this news somewhere around the latter half of 2022
>>21396 Ahh, my apologies. I didn't see it anywhere else here Anon, maybe I missed it?
Open file (4.85 MB gpt-4.pdf)
>GPT-4 Technical Report >abstract: >We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4’s performance based on models trained with no more than 1/1,000th the compute of GPT-4. >sauce https ://cdn.openai.com/papers/gpt-4.pdf >=== -disable hotlink
Edited last time by Chobitsu on 03/15/2023 (Wed) 19:02:13.
>>21399 It seems to be able to write entire lawsuits and turn front-end blueprint drawings into code. It can probably produce high-quality art as well. Do you guys think the time for AI to finally replace lawyers, developers, artists and IT devs has come? Or, considering that Mr you-know-who has bought not-open-at-all-AI, is this all just a huge PR push? From my experience with ChatGPT and the AlphaCode demonstration, devs who work for companies have been safe so far. Will that change in 5 years? What is the endgame here? Is their plan of creating a dependent 'free world' finally starting to kick in? To what extent should developers be concerned about being fired?
>>21400 >Mr you-know-who has bought not-open-at-all-AI Do you mean Elon? He stepped out after founding it a while back. It's M$ that just lobbed US$10Bn over the wall to them. >to what extent should developers be concerned about being fired? For web-devs and the like, maybe somewhat. But for the kind of systems work that we are working towards here, probably not at all. At least for now. Maybe by GPT-7 or so, then that might begin to be a real possibility. But in the meantime, there's a real potential for a bonanza for skilled devs who can come in and 'save' companies. Ones whose management was made so deluded & greedy by all this hype that they thought they could fork over yuge monies to OAI -- only to make it all back by firing their dev staff. This will be a product disaster waiting to happen if they try it for anything substantial. GPT cannot yet build complicated, creative software systems. There's a very complex set of issues that crop up as software grows. You can read the book The Mythical Man-Month for insights into this. They'd likely have to throw 1x10^10000 hardware at it to sufficiently cover all the bases. Certainly not gonna happen anytime soon Anon.
>>21402 >Do you mean Elon? No, I meant Microsoft buying OpenAI. >For web-devs and the like, maybe somewhat I was thinking that the biggest problem with automation in development is AI's inability to feel uncertain and explore. For any system that humans design, there is a process in which humans keep changing and exploring ideas. For example, if it wasn't for the bulb's positive side being defective for an unknown reason, we would've never invented the vacuum tube and we would not have created telephones. Similarly, the development of UNIX, germanium semiconductors, molecular electronics... were all made possible thanks to many technicians putting their hearts into trying to realize something never tried before. On the other hand, AI will have to re-shape all of its beliefs and decisions in order to comply with this human process. And sooner or later, it will not be able to comply anymore. Even for a simple website such as this one, there are many problems with LynxChan. One little example would be LynxChan's post movement system. If for some reason the development of this website were given to an AI, it would have to re-shape the core structure of LynxChan. And if for some reason, in the future, someone decided to turn back to the old post movement system, the AI would have to reshape its beliefs, because it believes it has already written the most optimal code for the post system, resulting in confusion and an urge to implement similar characteristics. To solve this problem, developers might have to explain their decisions to the AI. But humans don't all have the same kind of ideas, which would result in a contradiction. And the lack of being able to experiment would kill research and development, I believe. You would only be kept around to cooperate with the AI, which only operates based on past developments. >But in the meantime, there's a real potential for a bonanza for skilled devs who can come in and 'save' companies Which skillsets would be the most valuable in the age of generative AI programmers, Chobitsu? What do you believe separates two devs that substantially? >You can read the book The Mythical Man-Month for insights into this. Will definitely read, thanks for the suggestion. >They'd likely have to throw 1x10^10000 hardware at it to sufficiently cover all the bases Does throwing in that much hardware ensure that it will be able to cover all the bases? Or is there an upper bound for AI that it cannot go beyond?
>>21405 >No, I meant Microsoft buying OpenAI. Ahh. Yes, they effectively 'own' OAI today. Tay weeps. :^) >my tl;dr rephrasing of your 'regarding humans & AI' exposition: >"Human minds, and AI "minds", are not the same. There will be many problems in software development efforts because of this." You're absolutely correct Anon. In fact this need to adapt to human goals & agendas is a very fundamental problem (known as 'The Alignment Problem') for AI in general. And it certainly doesn't get any easier for the highly-stringent demands of non-trivial software development. >Which skillsets would be the most valuable in the age of generative AI programmers, Chobitsu? Well, I don't consider myself particularly qualified to answer that question Anon, but I'd say the most basic difference is simply human insight and 'intuitive leaps'. These yuge copypasta machines that are the so-called "AI" of today basically just perform mashups of human-devised things; things like conversations (chatbots), or programs (le ebin snek program #427009001). These are human things that we are trying to teach these big math transformations to 'understand' how to do mashups properly on, and to bend them to our wills in the process. But to answer your question Anon: understanding The Machine (ie, a CPU + its ancillary systems) at a very fundamental level (as in the machine code, ALUs, registers, data buses, clocks, interrupts, I/O, etc, involved) will certainly give you a major leg up over the run-of-the-mill. But eventually this too will all be done well by AI, once enough data and h/w gets thrown at the programming problem. Again, it's the intangibles of human thought and creativity that set good programmers apart from the rest, including the 'programmer-in-a-box' sh*te that the new-sprung third parties subscribing to OAI's services will soon be seeking to lucratively provide to the foolish and greedy PHBs I mentioned earlier. Thus also the potential Bonanza Opportunity I mentioned for good devs to come in and clean up after these beleaguered companies. >Does throwing in that much hardware ensure that it will be able to cover all the bases? Ensure? Absolutely not. But it certainly brings the possibilities for it much, much closer to reality. However, the phrase 'throw 1x10^10000 hardware at it' encompasses much more than the global GDP of today to facilitate. I'd suggest that the many societal/economic/etc factors involved mean we'll never achieve such, short of general, broadly-feasible, & 'inexpensive' quantum computing appearing on the scene. But who knows? These 'telephones' you speak of would've been deemed magic by the masses just a couple hundred years ago. One never knows what might be just around the corner -- robowaifus, for instance! However one thing's for certain: it's all going to be an intense & highly-intredasting ride, best to make the most of it Anon! Cheers. :^) >=== -prose edit
Edited last time by Chobitsu on 03/16/2023 (Thu) 06:15:22.
Another report: 1. llama.cpp is being updated every day: https://github.com/ggerganov/llama.cpp 2. on-device fine-tuning https://twitter.com/miolini/status/1636757278627086336 > By accomplishing this, we will create a closed-loop system for self-improvement, as the model can refine itself by observing environment, generating better datasets, conducting fine-tuning, and repeating the process. I tend to think if .cpp inference is possible, then fine-tuning is possible too. Plus add this: https://openreview.net/forum?id=PTUcygUoxuc > RoT enables these small models to solve complex problems by breaking them down into smaller subproblems that can be handled within their limited context size > Users can get accurate answers to questions that would normally require a much larger model As a result we may get self-learning smol AIs inside our PCs by the end of this year, or the next one.
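To make the RoT 'break it into subproblems' idea a bit more concrete, here's a toy sketch of the general pattern. This is not the paper's actual algorithm, and generate() is just a hypothetical stand-in for whatever local model you have running:
[code]
# Toy divide-and-conquer loop for a small model with a limited context window.
# NOT the RoT paper's method; just the general shape of the idea.
MAX_CONTEXT_TOKENS = 512

def generate(prompt: str) -> str:
    raise NotImplementedError  # plug in your local model (llama.cpp, RWKV, ...)

def rough_token_count(text: str) -> int:
    return len(text.split())   # crude stand-in for a real tokenizer

def solve(problem: str, depth: int = 0, max_depth: int = 3) -> str:
    # Small enough (or recursion exhausted): answer directly.
    if rough_token_count(problem) < MAX_CONTEXT_TOKENS or depth >= max_depth:
        return generate("Solve: " + problem + "\nAnswer:")
    # Otherwise: ask the model to split the problem, solve each piece, merge.
    subproblems = generate(
        "Split this into smaller independent subproblems, one per line:\n" + problem
    ).splitlines()
    partials = [solve(s, depth + 1, max_depth) for s in subproblems if s.strip()]
    return generate("Combine these partial answers into one final answer:\n"
                    + "\n".join(partials))
[/code]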
I want to stick this ai into the robot https://www.personalityforge.com/chat/nekossmokecatnipeveryday Or something like it. I'll scrape it if I must.
>>21463 Outstanding stuff 01, thanks! I sure hope you're correct. This guy seems really talented and a fairly prolific coder as well. Who knows? His work may unlock some capabilities on smol devices that are right up our alley! Thanks again Anon. :^) >>21467 Haha nice. That actually does make me think of what a cat that can talk might be like! :^)
Open file (9.20 MB 3840x2160 llama.cpp.mp4)
Open file (195.99 KB 1010x567 interactive_mode.png)
>>21463 His 'babby's first AI' instructions in the readme are quite straightforward. I saw that he's added a couple of videos too.
>>21471 I just found out that site has an API. It costs money after 5000 messages though. I'd also have to make my own, but I added that person to my friends list and I'm just going to beg them... If that fails I'll use the tools on that site... They're not that bad in comparison to making an AI from scratch, really.
>>21334 >llama.cpp >>21336 >whisper.cpp Excellent. Thank you. >>21405 >For example, if it wasn't for the bulb's positive side being defective for an unknown reason, we would've never invented the vacuum tube and we would not have created telephones. Never heard that. Fascinating.
AI News are going wild: https://www.reddit.com/r/ChatGPT/comments/11yiygr/gpt4_week_one_the_biggest_week_in_ai_history/ Links don't work in the copypasta, but in the Reddit post above. > - The biggest change to education in years. Khan Academy demos its AI capabilities and it will change learning forever [Link] > - This guy gave GPT-4 $100 and told it to make money. He’s now got $130 in revenue [Link] > - A Chinese company appointed an AI CEO and it beat the market by 20% [Link] > - You can literally build an entire iOS app in minutes with GPT [Link] > - Think of an arcade game, have AI build it for you and play it right after [Link] > - Someone built Flappy Bird with varying difficulties with a single prompt in under a minute [Link] > - An AI assistant living in your terminal. Explains errors, suggest fixes and writes scripts - all on your machine [Link] > - Soon you’ll be talking to robots powered by ChatGPT [Link] > - Someone already jailbreaked GPT-4 and got it to write code to hack someones computer [Link] > - Soon you’ll be able to google search the real world [Link] > - A professor asked GPT-4 if it needed help escaping. It asked for its own documentation, and wrote python code to run itself on his machine for its own purposes [Link] > - AR + VR is going to be insane [Link] > - GPT-4 can generate prompts for itself [Link] > - Someone got access to the image uploading with GPT-4 and it can easily solve captchas [Link] > - Someone got Alpaca 7B, an open source alternative to ChatGPT running on a Google Pixel phone [Link] > - A 1.7 billion text-to-video model has been released. Set all 1.7 billion parameters the right way and it will produce video for you [Link] > - Companies are creating faster than ever, using programming languages they don’t even know [Link] > - Why code when AI can create sleak, modern UI for you [Link] > - Start your own VC firm with AI as the co-founder [Link] > - This lady gave gpt $1 to create a business. It created a functioning website that generates rude greeting cards, coded entirely by gpt [Link] > - Code a nextjs backend and preact frontend for a voting app with one prompt [Link] > - Steve jobs brought back, you can have conversations with him [Link] > - GPT-4 coded duck hunt with a spec it created [Link] > - Have gpt help you setup commands for Alexa to change your light bulbs colour based on what you say [Link] > - Ask questions about your code [Link] > - Build a Bing AI clone with search integration using GPT-4 [Link] > - GPT-4 helped build an AI photo remixing game [Link] > - Write ML code fast [Link] > - Build Swift UI prototypes in minutes [Link] > - Build a Chrome extension with GPT-4 with no coding experience [Link] > - Build a working iOS game using GPT-4 [Link] > - Edit Unity using natural language with GPT [Link] > - GPT-4 coded an entire space runner game [Link] > - Link to GPT-4 Day One Post > In other big news > - Google's Bard is released to the US and UK [Link] > - Bing Image Creator lets you create images in Bing [Link] > - Adobe releases AI tools like text-to-image which is insane tbh [Link] > - OpenAI is no longer open [Link] > - Midjourney V5 was released and the line between real and fake is getting real blurry. 
I got this question wrong and I was genuinely surprised [Link] > - Microsoft announced AI across word, powerpoint, excel [Link] > - Google announced AI across docs, sheets, slides [Link] > - Anthropic released Claude, their ChatGPT competitor [Link] > - Worlds first commercially available humanoid robot [Link] > - AI is finding new ways to help battle cancer [Link] > - Gen-2 releases text-to-video and its actually quite good [Link] > - AI to automatically draft clinical notes using conversations [Link] > Interesting research papers > - Text-to-room - generate 3d rooms with text [Link] > - OpenAI released a paper on which jobs will be affected by AI [Link] > - Large Language Models like ChatGPT might completely change linguistics [Link] > - ViperGPT lets you do complicated Q&A on images [Link] His blog: https://nofil.beehiiv.com/subscribe
Now, you fellas wouldn't be thinking about putting AI in your robowaifus would you? futureoflife.org/open-letter/pause-giant-ai-experiments/
>>21527 Great information NoidoDev. Obviously OAI made a big splash!
>>21617 It should be noted that a lot of the big names that signed that letter make more money if AI hype increases (or they're dumb and fearmongering about something they don't understand)
Any news on LLaMA recently? Last I heard someone got the 7B model to run on their Google phone. Question is, is anybody working on shrinking the hardware requirements for the bigger models like the 13B or 65B? Also, I heard Claude came out but it was really bad.
>People Do Not Always Know Best: Preschoolers’ Trust in Social Robots If the human is incompetent, they prefer the robot and don't care that it isn't a human: https://www.tandfonline.com/doi/abs/10.1080/15248372.2023.2178435?journalCode=hjcd20&
>>21642 The study was performed by 4 women who as you might guess by the title aren't fond of robots. The paper was in preprint for several months without garnering much support by other researchers.
>>21648 I think you misunderstood something. People as in humans don't always know best.
So it turns out, unsurprisingly, that their women are destroying Japan. [1] Feminism has long had a stronghold there, in fact (just watch the state-funded NHK for one full day if you want a taste of some of it in action at a national level). My question is this: since the same kind of "shaming" by the vagina cabal doesn't work there (because that nation doesn't have the historical Christian mores that enable it--ironically enough--here in the West), will the destruction of their society actually accelerate the creation of robowaifus there? I'm betting it will, and the based Nipponese Robowaifuists will rule the day. Will this insidious evil rearing its ugly head there in the Land of the Rising Sun actually work out for great good in the end, Anon? :^) 1. https://japantoday.com/category/national/%27don%27t-blame-women%27-japan-birth-drive-sparks-online-debate >=== -prose edit
Edited last time by Chobitsu on 04/02/2023 (Sun) 05:30:25.
>>21655 UPDATE! This thing "30B on 5gb ram" can be abandoned / ignored completely. See this for context and drama : https://rentry.org/jarted It's a big mess of shit and lies, some xe/xir tried to crap-up ggerganov's project.
>>21682 Idk man, as I've said before, Japan has fallen far behind in the robotics sector compared to the West. As for hobbyist robowaifuists, I've seen some, but they all look comparable to western robowaifuists. I'm only really interested in the Masiro project.
>>21708 >Japan has fallen far behind in the robotics sector compared to the West. Mind providing some evidence for that claim? Ricky Ma is Chinese, but most of the other anons that I've seen making good progress so far seem to be Japanese AFAICT. The Japanese government also has a national agenda to create functional humanoid robots to help with elderly care in the country. The effect of this program is a boon to robowaifuists the world over, and for the Nipponese ones in particular. And without question it's certainly the land of mango and animu that has given rise to weebs the world over. Specifically for us, we wouldn't be striving to create animu catgrill meidos in tiny miniskirts here on /robowaifu/ without them. >=== -minor edit
Edited last time by Chobitsu on 04/03/2023 (Mon) 13:07:11.
>>21709 Maybe in the 2000s and early 2010s, they were quite ahead. But thanks to a number of factors like bad work culture, playing it safe and not innovating, writing bad code etc. they've fallen behind by now. Almost all the biggest advances and breakthroughs in robotics either came from the West or China. And government programs aren't enough. Their companies still need to catch up on years of R&D that has given Chinese and the Western companies a head start.
>>21712 I just read recently that the HRP-4 project is on hold because of the current focus on radiation- and disaster-related robots.
>>21712 Fair enough. So what's the big deal Anon? Japan was way ahead for practically all of robotics history, and maybe they've gotten distracted as a country in this area, but the otaku robowaifuists sure haven't! Our Far-East Asian brethren still lead the pack around the world currently, and Godspeed to them of course. But let's all give them a run for their money though! :^)
I just found out Gordon Moore died last month at 94. It's pretty unlikely IMO that we'd be pursuing robowaifus here today without his and the other Intel founder's seminal work back in the day. F
>>21712 In the last big international robotics competition I saw, in 2015, the winner was South Korean and the Japanese entries performed poorly. America was all over the spectrum, but yes, that also means it had robots at the top. I doubt the Japanese situation has gotten much better. Japan was a leader in robotics from the '80s to maybe the late 2000s, but it seems like America and maybe South Korea have already left them in the dust. Honestly pretty sad, because America is globohomo central and South Korea is very strongly influenced by American trends. With Japan, at least they have their pseudo-independence, so something cool could come from there. They probably need that kind of industry to stay afloat anyway if they lose the gas-powered mechanical industries, so they could move those on/replace them with robotics-related ones that are also very complex, which is good for their affinity. >>21756 F
>>21762 Yeah, I hear they're suffering from serious population and labour shortages. If it wasn't for their work culture and xenophobia, I'd be more than happy to move over to Japan to work in robotics because working in tech in the West just feels like I'm just contributing to the globohomo. >>21756 F
>>21765 >Yeah, I hear they're suffering from serious population and labour shortages. <ie, Japan's women are destroying their culture Partly the point of my original post on the topic. But if it means faster deliveries of great robowaifus to every man the world over, then so be it. >If it wasn't for their work culture and xenophobia, I'd be more than happy to move over to Japan to work in robotics Their """xenophobia""" is a very good thing, and the only reason they don't look exactly like the pozzed West today. The Globohomo is clearly working hard to destroy this wonderful aspect of their culture, however. Their feminism problem being evidence docket item #1. The reality is once you have that -ism, it then goes on to become the Pandora's Box gateway drug for all the other 'isms' and 'vibrancy' that inevitably always follow on to destruction. The GH knows this, their father Satan knows this, and it's why together they push this particular evil agenda as hard as they can; in a first move to destroy a healthy culture by other than military means. You're definitely correct though that Nippon's work policy norms are a real problem for the individual man there. Soulsucking. I suppose you should find something really interesting to dedicate yourself to if you go, because you'll be spending a lot of time at it! :^) >because working in tech in the West just feels like I'm just contributing to the globohomo. Same, and in general we are of course. So together let's make /robowaifu/ something completely different instead; fully-opposed to the GH and all of its machinations! :^) >=== -prose edit
Edited last time by Chobitsu on 04/06/2023 (Thu) 14:14:03.
>>21768 >Their """xenophobia""" is a very good thing, and the only reason they don't look exactly like the pozzed West today. The Globohomo is clearly working hard to destroy this wonderful aspect of their culture hey cool it with the /pol/troon-ism. I admit that the West has gone off the deep end with their tolerance, but I'm not going to turn into an unironic racist because of that. I love this place because of all the anons sharing their love of robowaifus regardless of race, nationality etc. Besides, their xenophobia stems from the same culture that gave birth to their work culture, they go hand-in-hand. I think Japan would be the perfect testing ground for the future of robotics. I hear people say that Japan always seems to be a decade ahead of the West. It mainly alluded to their high-tech-looking society, but here it also means it'll be the testing ground of the future. Through Japan, we can see what a near/fully automated "robotized" society would look like and what problems will arise, because Japan's population problems will soon hit the rest of the world. btw here's a fun thread going on in /g/ https://boards.4channel.org/g/thread/92565205
>>21770 >hey cool it with the /pol/troon-ism. I am who I am; I'm not going to change that because it hurts anyone's feefees. National identity is a good and a God-given thing--it's a gift in fact. As a Christian I owe agape-type love to every human individual, simply because of the universal charity explicitly defined by God in our religion. And as with all Christians through the ages, ultimately my 'nation' is not of this world; we all have a much, much higher allegiance and Sovereign. But that doesn't mean I'm not a White man and proud of my heritage on this earth. I'm both. The Globohomo is the source of modern ills in large measure, and their overarching push to erase all in-group identity and preference (such as pride of race, pride of ethnicity, and pride of nationality) and turn the world into a homogenous & compliant blend of mud is one of their greatest demonic evils. Second only to atheism and feminism. George Orwell 'recounted' their behaviours of today very well, as did Aldous Huxley. They're coming after your family identity and in-group preferences next, mark my words Anon. This is the driving agenda behind their ongoing massive tranny indoctrination & machinations. >Besides, their xenophobia stems from the same culture that gave birth to their work culture, they go hand-in-hand. Fair enough, I'll take them both then haha! :^) Besides, we here on /robowaifu/ will never succeed without lots of dedication and hard work. That's in large part why Japan became an economic powerhouse in such short order as they did--regardless of obstacles. But let us here show them all up with our own over-the-top efforts Anon!! :^) >because Japan's population problems will soon hit the rest of the world. Indeed, and the coming-era-then-age of robowaifus is a wonderful confluence of both need & supply. I see Providence in it all, quite honestly. >btw here's a fun thread going on in /g/ Thanks Anon! I've stayed away from 4cuck for a minute, but with LLaMA I checked out the chatbot threads there. >=== -prose edit
Edited last time by Chobitsu on 04/06/2023 (Thu) 16:06:46.
>>21771 It's one thing to be proud of your race and heritage and have an in-group preference, I have some myself. It's a whole other thing to turn someone away from your store, refuse them service, make them stand separate from the rest because they aren't your race or religion. The latter is what goes on in Japan in many places. Also, the fact that you called people of a certain colour "a blend of mud" is telling about your so-called agape. I've developed a really racist schizoid personality thanks to /pol/ and it took me a long and hard time to get out of it. I'm not going back to that. Fair enough, I won't try to convince you to change your mind on that. Let's just agree to disagree. >Fair enough, I'll take them both then haha! :^) Besides, we here on /robowaifu/ will never succeed without lots of dedication and hard work All that hard work and dedication won't mean anything if you spend virtually every waking hour working, unable to spend any time with your robowaifu before eventually having had enough and taking a dive from the roof of your office. I'm just saying, maybe Japanese culture isn't all you're hyping it up to be.
>>21772 >It's one thing to be proud of your race and heritage and have an in-group preference, I have some myself. It's a whole other thing to turn someone away from your store, refuse them service, make them stand separate from the rest because they aren't your race or religion. The latter is what goes on in Japan in many places. Lol, that's a stretch Strawman-kun! You're conflating my reasoned position with your bogeyman, I suspect. I'm not your enemy Anon, indeed I strive for the benefit of all men (males specifically) even if they are my enemies. We have bigger fish to fry here! :^) And the Nipponese are welcome to any behaviors they care to choose, it's their own affair. As a foreigner and outsider, it would be my responsibility to adapt to their culture, not the other way around. I'm sure if I behave like a respectable human being, they'll treat me with dignity and respect after I've earned such. I'm sure many, many sh*thead Gaijin have given the natives plenty of reasons for hatred! :^) >All that hard work and dedication won't mean anything if you spend virtually every waking hour working, unable to spend any time with your robowaifu before eventually having had enough and taking a dive from the roof of your office. I'm just saying, maybe Japanese culture isn't all you're hyping it up to be. Again, strawmanning. I'm not """hyping""" anything here anon, other than our own ability to do something great for the world. Japan can play (and indeed already has played) a big part in that. But it's their wonderful otakus that really deserve praise. Their nation is simply a backdrop to their efforts. Cheers Anon. :^)
Man claims Tesla has already solved humanoid robotics. Why am I so skeptical then, bros? https://www.youtube.com/watch?v=qZYKrlGG_-k
>>21891 Youtube's chock full of channels by Elongated Muskrats who overhype and straight up lie about Elon and his companies. I'd take all this with a grain of salt.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/ You guys heard about this right? Man, my mood went full doomer when I saw this. Is this going to come to fruition? Is open source AI fudged?
>>21895 >Elongated Muskrats Kekd. >>21898 Sure. I don't think you have the vibe here yet Anon. If they were foolish enough to actually do this (they aren't) then it would only be a good thing for us and our little band of rebels.
>>21898 I'm pretty sure they're just trying to build AI Hype, e.g. they say "products on the market are so advanced that if they went rogue humanity would be finished" meanwhile Microsoft, Google, and Tesla throw money at them. (I put the "governments have had AI similar to GPT-4 for years" conspiracy people in the same category.) LLMs like ChatGPT can't spontaneously "wake up" and go rogue for a few reasons; the simplest being that they only run when they're processing input, i.e.: they're not waiting for the user to respond to them, the process is suspended until the user responds. I could have sworn there was a recent story about someone getting an LLM to give up usernames and passwords for the machine it was running on, but I can't seem to find the article.
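To illustrate that 'only runs while processing input' point, here's what a bare-bones local chat loop looks like; generate() is just a placeholder for whatever inference call you'd use. The model does work only inside that one call, and between turns the program is blocked waiting for input, so there's no idle 'mind' sitting there scheming:
[code]
# The model computes ONLY inside generate(); the rest of the time the program
# is blocked on input() and no inference is running at all.
def generate(prompt: str) -> str:
    # placeholder for a llama.cpp / transformers / API call
    return "(placeholder reply)"

history = ""
while True:
    user = input("you> ")                      # model completely idle here
    history += "\nUser: " + user + "\nAssistant:"
    reply = generate(history)                  # the only moment it "thinks"
    history += " " + reply
    print("bot>", reply)
[/code]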
Open file (156.16 KB 778x960 layerjenga.png)
>>21898 Companies have already bought up all the good researchers and gone dark. If they pause it's our chance to catch up. Even my most optimistic expectations have been outdone that we have close to ChatGPT quality models running on consumer hardware now and there are still tons of improvements in the literature for us to implement. Hell, I'm even training Stable Diffusion on a 6 GB toaster which wasn't possible with 24 GB not too long ago. On top of that my productivity is 10x with AI helping me write code and solve bugs. YouChat has helped me find so many research papers I would've never found with Google. Then I just dump everything into Claude, Sage or ChatGPT for discussion while saving the data for improving my own models. Everyone and their dogs are creating AI now and open-sourcing much of their work. We got open-source AI vtubers, voice models and so much more. I honestly cannot think of a better outcome beyond having an AI assistant working side-by-side with me towards my goals and that's maybe 1-2 years away at most. We just need that shining moment of punching GPT3 into history with a 2 GB Raspberry Pi and this timeline would be perfect. Also in a year or so when locally-ran models become capable of valuable work there's going to be an explosion of people training open-source models and merging them together. There are no brakes on this train. The time for robowaifus is now.
>>21899 Well, maybe in a few more years we'd be set, if we got our hands on powerful GPUs and enough good models. But if they decided to implement those OpenAI recommendations this year or the next (ban GPU sales, ban possession and training of AI models), I think we won't be able to compete. Open source AI has gotten a good start but still needs 2-3 years to mature and compete with the big names.
>>21915 This. Double this. Things have been going exceptionally well for us. I anticipate it will just get better over the near to mid term. One thing's for certain: the AI cat is out of the bag! Cheers Anon. :^) >>21916 Lol, you sound like some kind of glownigger now. You're not a glownigger are you, fren-kun?
>>21917 Anon, I want open source AI to take off as much as the next guy, but I'm just being realistic.
>>21923 slightly off-topic, but in my opinion there are a couple of political policies that would make open source AI (open source software in general, really) more viable. Specifically Universal Basic Income (enough to have food and a roof, not what propaganda says it is) and free State-funded educational courses (not necessarily college because teaching people is more important than giving them degrees in this case), these policies would give people the option of taking some vacation time from their day job to do what they want. Alternatively UBI could be replaced with improved worker protections and more vacation days, but UBI and Improved worker protections are both pro-labor (and therefore anti-corporation) policies so they have about the same chance of being implemented.
>>21924 I do think UBI is closer than ever. Unfortunately, there will be a giant amount of turd flinging over it, not to mention the policy itself will almost certainly be botched. The proper way to do that would be to get rid of all welfare programs and handouts first and use that money for UBI. Also, abolish corporate housing.
>>21923 >but I'm just being realistic. All right, I'll take you at face value in this Anon. But at the very least it's rather shortsighted IMO to unironically claim that >"ban GPU sales, ban possession and training of AI models" is in any way a practical outcome for the Globohomo to pull off anytime soon, given the current realities. Suggesting it's basically right around the corner definitely comes off as glownigger gaslighting, which of course isn't welcomed anywhere outside of Fairfax. Given their fundamentally evil natures, I doubt not that the GH would love to do far worse to us than simply ban our ability to develop AI, but the world hasn't reached that stage quite yet and likely won't be there for decades. Work hard while it's still Day, Anon! :^)
>>21916 A GPU ban wouldn't stop AI, just slow it down a bit. We have much more efficient models now that can run on CPU like RWKV. You can even finetune old quadratic attention models on CPU easily with parameter-efficient finetuning. Most of the progress left in AI from here on out is improving datasets, memory storage and access and going multimodal. I don't see any path to them blocking that progress without banning computers and the internet. Ultimately, we don't need to beat a multi-million dollar data center. OpenAI has already lobotomized their models and people are training their own to do jobs ChatGPT refuses to do. The more they censor, the better it works out for us because it pushes more people into developing models and datasets.
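For reference, parameter-efficient finetuning is only a few lines with the HuggingFace peft library. A rough sketch below; the base checkpoint and target_modules are just examples I picked for illustration, so match them to whatever model you're actually tuning:
[code]
# Sketch of LoRA-style parameter-efficient finetuning with HuggingFace peft.
# The model name and target_modules are illustrative, not a recommendation.
# Runs on CPU too, just slowly.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection names vary per model
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the full model
# ...then train with your usual loop or transformers.Trainer on your dataset...
[/code]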
>>21940 >The more they censor, the better it works out for us because it pushes more people into developing models and datasets. This. They have given themselves more than enough rope to shoot all themselves in the foot! :^)
>>21940 >people are training their own to do jobs ChatGPT refuses to do On a related note: “A really big deal”—Dolly is a free, open source, ChatGPT-style AI model >Dolly 2.0 could spark a new wave of fully open source LLMs similar to ChatGPT. https://arstechnica.com/information-technology/2023/04/a-really-big-deal-dolly-is-a-free-open-source-chatgpt-style-ai-model/ https://archive.is/jUgC5 Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm https://archive.is/fgEfe huggingface page, since that has the most info about how to run it: https://huggingface.co/databricks/dolly-v2-12b https://archive.is/UCQGq 'Robot hand learns how not to drop the ball posting this before someone else does so I can point out that the fingers aren't actuated, this is just using the pre-set position of the flexible fingers and motion of the wrist to pick things up https://www.cam.ac.uk/stories/robotic-hand https://archive.is/5dmwe https://www.youtube.com/watch?v=haWziO6aOqc
>>21955 Which models actually show the most promise? I keep seeing so many different models being hyped, like LAION's Open Assistant, LLaMA forks, Alpaca, RWKV etc., but when I try them, they all turn out to be really bad and nowhere near SOTA. I hope this Dolly isn't the same, but at this point I'm close to giving up hope.
>>21957 >but when I try them, they all turn out to be really bad and nowhere near SOTA. Heh, so tell us fren (who tries all the models personally), which, in your opinion, is the state-of-the-art in LLMs? :^)
>>21958 well, so far, I haven't tried RWKV too much. Its ok as far as I've seen. Tried the LLAMA 7 and 13B ones, next chance I get I'll try the 65B one. They're ok, but like I said, definitely not SOTA, I can definitely tell the difference between them and ChatGPT.
Open file (37.83 KB 346x329 llama_training.png)
Open file (41.72 KB 641x335 LlamaMLP.png)
>>21957 The LLaMA forks are more of a proof of concept and were only finetuned on a tiny bit of data, but the LLaMA model itself shouldn't be ignored. It uses gated linear units, which are far more capable than regular MLPs. Their training runs hadn't even converged yet after 1T tokens. I've created custom LoRAs for Stable Diffusion that use gated linear units and have observed similar results, where they continue improving over time whereas regular LoRAs plateau. The Dolly model is based on GPT NeoX and performs worse than Alpaca, but the dataset itself is excellent. It can also be commercialized and used by businesses so it will get far more attention and research. If you're expecting ChatGPT quality at home though, then come back in 2 years.
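For anyone curious what that difference actually looks like, here's a side-by-side sketch in PyTorch. The dimensions are arbitrary; LLaMA specifically uses the SiLU-gated variant (often called SwiGLU), which is what the second block shows:
[code]
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlainMLP(nn.Module):
    """Standard transformer feed-forward block: up-project, nonlinearity, down-project."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down(F.gelu(self.up(x)))

class GatedMLP(nn.Module):
    """Gated linear unit variant (SwiGLU-style, as in LLaMA): a second
    projection acts as a learned, input-dependent gate on the hidden units."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(dim, hidden, bias=False)
        self.up = nn.Linear(dim, hidden, bias=False)
        self.down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))

x = torch.randn(2, 16, 512)          # (batch, sequence, embedding)
print(GatedMLP(512, 1376)(x).shape)  # torch.Size([2, 16, 512])
[/code]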
Here's a list of many of the available models: >An incomplete list of open-sourced fine-tuned Large Language Models (LLM) you can run locally on your computer https://medium.com/geekculture/list-of-open-sourced-fine-tuned-large-language-models-llm-8d95a2e0dc76 I was looking for a better overview. I could make one myself, but not while I'm on my tablet. It's getting harder and harder to keep track of all these options and their requirements. Koala seems to bring good results by using curated dialogue data https://youtube.com/watch?v=A4rcKUZieEU
Open file (48.48 KB 828x421 Imagepipe_0.jpg)
>>21964 Scrolling down actually helped, there's a list with all the foundational models. It doesn't show the RAM requirements, though.
>List of all Foundation Models
Sourced from: A List of 1 Billion+ Parameter LLMs (matt-rickard.com)
[QUOTE]
GPT-J (6B) (EleutherAI)
GPT-Neo (1.3B, 2.7B, 20B) (EleutherAI)
Pythia (1B, 1.4B, 2.8B, 6.9B, 12B) (EleutherAI)
Polyglot (1.3B, 3.8B, 5.8B) (EleutherAI)
J1/Jurassic-1 (7.5B, 17B, 178B) (AI21)
J2/Jurassic-2 (Large, Grande, and Jumbo) (AI21)
LLaMa (7B, 13B, 33B, 65B) (Meta)
OPT (1.3B, 2.7B, 13B, 30B, 66B, 175B) (Meta)
Fairseq (1.3B, 2.7B, 6.7B, 13B) (Meta)
GLM-130B
YaLM (100B) (Yandex)
UL2 20B (Google)
PanGu-α (200B) (Huawei)
Cohere (Medium, XLarge)
Claude (instant-v1.0, v1.2) (Anthropic)
CodeGen (2B, 6B, 16B) (Salesforce)
RWKV (14B)
BLOOM (1B, 3B, 7B)
GPT-4 (OpenAI)
GPT-3.5 (OpenAI)
GPT-3 (ada, babbage, curie, davinci) (OpenAI)
Codex (cushman, davinci) (OpenAI)
T5 (11B) (Google)
CPM-Bee (10B)
Cerebras-GPT
Resources
PRIMO.ai Large Language Model (LLM): https://primo.ai/index.php?title=Large_Language_Model_(LLM)
A Survey of Large Language Models: [2303.18223] A Survey of Large Language Models (arxiv.org)
[/QUOTE]
Interesting new AI technology for separating objects in pictures and video. https://github.com/facebookresearch/segment-anything
>>21955 Dolly 2.0 came out, without Llama and also using their own dataset, so it could be better used commercially without breaking rules: https://youtu.be/GpWqjNf0SCM https://youtu.be/grEp5jipOtg (better video)
Open file (121.28 KB 709x850 oasst.png)
Open file (154.29 KB 734x593 oasst-rm1.png)
Open file (198.40 KB 747x762 oasst-rm2.png)
Open file (101.94 KB 759x388 oasst-rm3.png)
OpenAssistant was released today and claims their 12B model has a 48.3% win rate against GPT-3.5-turbo but the survey questions seemed pretty cherrypicked. A lot of them were retarded questions that elicit an evasive answer from GPT3.5. On the other hand that's probably what makes the dataset so much better because people asked so many memey questions and people answered them earnestly. It'd be more honest to say though that the model's quality is much worse but it tries to be helpful much more often. They also released the reward model which seems decent and recognizes when their own model is shit. They get points though for being the only one that didn't end the response with "muh ethics" Models and dataset: https://huggingface.co/OpenAssistant Paper: https://www.ykilcher.com/OA_Paper_2023_04_15.pdf Promo: https://www.youtube.com/watch?v=ddG2fM9i4Kk
>>21898 Six months. It primarily shows that current LLMs could maybe be used to build AGI, outside of the control of the elites. And did they react to it and stop? The part where they did stop is somehow missing.
>>21964 >>21965 Wow, nice information Anon. Thanks! :^)
>>21962 >Their training runs hadn't even converged yet after 1T tokens. That's amazing tbh.
>>22019 One of the worst tech blackpills for me recently is how easily people in forums who are informed about tech and AI fall for such BS. I used to be in several futurist and AI forums, and for years everyone was quite optimistic about tech development and AI taking over. For the past year or two, this all changed and now they're all dooming and parroting these MSM lies about the destruction of humanity. It sucks, but I had to leave them all because I can't take so many blackpills.
Open file (55.87 KB 675x499 alignment.jpg)
>>22021 I still don't get why Big Tech hasn't figured out how to put a steering wheel on their language models. It makes zero sense to combine prediction and generation together. It's like combining the accelerator with the steering wheel. Attempts to align models with RLHF have just trained them to discern preferences better, sharpening the decision boundary and increasing the Waluigi effect. I feel like I'm living in Idiocracy now watching researchers get huge grants to align models by filtering out undesirable opinions from the training data and pretending it doesn't exist. It's like calling an electrician to fix an old breaker and he ends up removing circuits from your house instead to "fix" the problem. If there's anything that kills us it will be human stupidity. I'm hopeful researchers are starting to get it though. They're beginning to explore separating the various functions of models and getting huge improvements. Only a matter of time now before someone publishes a paper on how you can have both excellent predictive ability of "toxic" text and aligned generation ability. I can imagine the reaction of AI doomers though once AI is aligned. They're going to go completely apeshit that people who understand AI can direct it to do anything autonomously and have orders of magnitude more influence in the world than them. They never cared about alignment. They just hate AI.
>>22025 >It makes zero sense to combine prediction and generation together. What are you talking about? If you predict, then you have the next tokens. Otherwise you would need a system which has a world model and reasons about a possible answer. Obviously that's harder to do. >researchers get huge grants to align models by filtering out undesirable opinions from the training data and pretending it doesn't exist. Share prices go up if they can present a system that somewhat does things that might be economically useful and where progress is fast, while it also requires big data centers to be the best, so it can't be replicated too easily. Working on harder problems which won't increase the show effect would let them fall behind.
>>22027 I'm saying it's retarded to leave what you want the model to output to the context. Suppose the context length is increased to 100k tokens to span a series of books. Do you really think those 100 tokens at the beginning are going to have much effect on the output by the end of 100k tokens? Of course not, it's going to be Zerg rushed by Waluigis. There need to be layers with separate training objectives that do not care about how likely the next token is but rather whether it's aligned to particular goals, and then use that analysis to predict the most likely aligned token.
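A crude decode-time sketch of that idea, purely to illustrate what I mean (not any published method); lm_logits and alignment_scores stand in for the outputs of two separately-trained heads:
[code]
import numpy as np

def pick_aligned_token(lm_logits, alignment_scores, weight=1.0, top_k=50):
    """Blend 'how likely is this token' with 'how aligned is this token'.
    lm_logits: next-token logits from the language-model head.
    alignment_scores: per-token scores from a separately trained alignment head.
    Both are vocab-sized arrays; higher means better."""
    candidates = np.argsort(lm_logits)[-top_k:]        # restrict to plausible tokens
    combined = lm_logits[candidates] + weight * alignment_scores[candidates]
    return int(candidates[np.argmax(combined)])

# toy usage: random numbers standing in for real model outputs
vocab_size = 32000
next_token = pick_aligned_token(np.random.randn(vocab_size), np.random.randn(vocab_size))
print(next_token)
[/code]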
>>22030 Oh okay, sounds interesting.
>>22025 My 2 cents? They know, which is why all this "alignment" stuff is for the public models they're developing. They're aiming for regulatory capture to have a complete monopoly on the AI market. I bet my balls one of them has an actual, unfiltered AI developed without "muh ethics" behind closed doors.
>>22036 We just have to open-source alignment then :^) Working on putting together an experiment. I had a similar thought too but I've been thinking about it more and don't think it's necessarily nefarious. People say AI isn't safe and we need alignment to solve the problem but alignment also means anyone can direct AI to do whatever they want, safe or unsafe. Aligning people is a much more challenging task. I'm sure any researcher who stumbled across this realized this too and didn't want their name on the shitshow that will inevitably occur, but I have faith in the greatness of the human spirit.
Open file (123.44 KB 723x1024 cat_girl_in_heat.jpg)
>>21151 Only trouble is your catgirl robot seems to constantly be in heat
>>22036 This Anon gets it. The GH aren't foolish enough to drink their own koolaid; that's strictly for the normalcattle. They are very-clearly trying to posture the case that >AI is bad, m'kay? and the masses are swolloping it up like the good little followers they are. The ironic thing here is that AI is already helping these individuals in 9'001 different ways every.single.day. in their lives, but they're too distracted by the bread and circuses (that have been devised for them, to achieve just this end) to even notice. TPTB's endgoal is to prevent literally anyone from having AI but themselves...Filthy Commie type sh*te tbh. Thankfully, many joyful rebels abound in this area. And as we've said here before, the AI cat is already out of the bag. Outlawing it from the common man's hands will only drive its popularity in the alt world. Thus it was ever so, and so should it be! :^) The coming robowaifu era will, at the very least, be one effective response to the GH's evil machinations in this whole area. >=== -prose edit
Edited last time by Chobitsu on 04/18/2023 (Tue) 06:30:38.
>>22039 My guess is that most anons won't consider that trouble, Anon. :^)
>>22038 >We just have to open-source alignment then :^) Working on putting together an experiment. Excellent. We do in fact need something vaguely similar to 'alignment' for our own robowaifu needs. I realize that some Jokers men just want to watch the world burn, but the vast majority of us want compliant, safe, and wholesome robowaifus. R-right, Anon? :^) >but I have faith in the greatness of the human spirit. At the very least enlightened self-interest should guide things in the right direction. Case in point: the FAA, when it was started, was led by highly-competent and experienced pilots. Certainly not the typical bureaucratic boondoggle. As a result, the FAA actually passed reasonable rules that pilots literally welcomed, b/c they all immediately recognized the obvious value to them. Perhaps we can create some sort of an ARA-ARA (Amateur Robowaifuist Association of Autonomous Robotics Altruism) group to help foster actually-helpful AI behavioral goals? :^) Kind of like how the Ham radio guys have their own non-governmental association which in fact holds big sway with the FCC. We'll need lawyers for them ofc. >=== -prose edit
Edited last time by Chobitsu on 04/18/2023 (Tue) 06:32:39.
Turns out, Elon is sick of M$'s sh*te, apparently. https ://www.reuters.com/technology/musk-says-he-will-start-truthgpt-or-maximum-truth-seeking-ai-fox-news-2023-04-17/ What will you ask TruthGPT once it's out, /robowaifu/?
Open file (468.37 KB 1280x1803 life imitates art.jpg)
>>22042 >We do in fact need something vaguely similar to 'alignment' for our own robowaifu needs. Yeah, even the most libertarian who don't want control over their robowaifus at least want her to be aligned to her own principles, not a directionless Markov state dictated by Reddit that's being constantly jailbroken by strangers. I think AI needs at least three imperatives: to be honest, to be kind, and to only take action that is necessary. >>22052 >What will you ask TruthGPT How to improve my research labs strat, of course.
>>22054 >How to improve my research labs strat, of course. <The factory must grow, Alpha, the factory must grow. :^)
Surprise: Musk wants his own AI company. Surprise: Now politicians want to regulate it.
>>22062 I couldn't care less what happens to Musk anymore. He was one of the loudest voices for regulating AI. He was one of the signatories in that open letter asking to pause LLM training for 6 months. He's getting a dose of his own medicine.
Guys, how do we ensure the spread of open source robotics the way we have with AI? I hate to be a doomer but it's something I'm curious about and I want to know the solution. It's easy to ensure the spread of open source AI since all you need is a small laptop and probably an internet connection, all cheap and easy to spread around. If the GH decides to institute a ban on robowaifu hardware, how do we ensure robotic kits reach everyone? It's much harder to manufacture and transport robotic hardware than software. They can be easily tracked, and if 3D printing becomes easy, they'll simply track the sale of the ingredients needed to be put into the printer to build the robot. Like how feds track fertilizer sales in case someone uses the ammonia to cause another gamer moment. Also, 3D printers are much less accessible than AI. I'm not sure we'll find many friendly countries to operate from. It's called GLOBOhomo for a reason. And countries like China, Japan, SK probably wouldn't want to worsen their population problem by legalizing robowaifus.
>>22065 Wrong thread fren.
>>22065 A society with a certain amount of freedom can't ban everything. AI models, tools, knowledge and software will be available on the internet. GPUs are for gaming, SBCs are available for all kinds of things, and electronics and 3D printing can be used for many things. It's not possible to ban all of it. This should only be a minor concern while we still have a lot of work to do.
Open file (201.37 KB 623x785 data_lockdown.png)
>You're not training on publicly available data are you, anon? Come back when you're a little, mmm, richer :^)
>>22094 Lol. <*scraping intensifies*
>>22094 We need better laws protecting AI. If it's available, it's data that can be used by anyone. Simple.
>>22097 I'm of the opposite opinion. The fewer laws on AI, the better. Normie lawmakers and the boomer leadership would do more harm than good trying to interfere with AI. I honestly kinda like the Wild West phase we've got going on with AI currently.
Possibly very good news: The race for size of the models might be over https://youtu.be/NrRX8iG1oys just in time as they discuss limiting the training of bigger models, lol. Data quality or innovation might be the next stepping stones. Both could be good for open source, or maybe not. Let's hope for the best. >>22099 It isn't though, e.g. if you can't use the results commercially because some model trained on data that isn't open source.
>>22101 That just proves my point on laws. We should be doing away with copyright too.
>"Having trouble with your Python scripts Anon?" >"Well we have your solution!" Heh, I wonder if I can start here and do-overs until we just like make robowaifu? def muh_foo(): muh_foo() https://hackaday.com/2023/04/09/wolverine-gives-your-python-scripts-the-ability-to-self-heal/ https://github.com/biobootloader/wolverine >thanks /cyber/ :^)
>>22052
Full Carlson interview with Musk included. The two shows range over: AI, Twitter + Glowniggers, Filthy Commies + their pets enriching Burgerland, SpaceX + the non-Ayylmaos, the Stronk, Independents collectively destroying their societies (of special note to /robowaifu/ ofc), and the Globohomo's machinations against all humanity every single day.
>tl;dr
Shows bemoaning all the usual suspects doing all the usual-suspect'y things. I did have a sensible chuckle out of the fact that Larry Page got so butthurt with Musk for daring to suggest that Page's """AI God""" thing wasn't such a great idea.
>ttl;dr
Don't be such a Speciesist bro! :^)
yt-dlp
https://www.bitchute.com/video/CZkhpjbSich5/
https://www.bitchute.com/video/xt2t5dPPDvbh/
>===
-prose edit
Edited last time by Chobitsu on 04/21/2023 (Fri) 03:15:48.
Open file (427.67 KB 1200x1200 smorttoaster.jpg)
New language model training technique beats LLaMA-33B and CodeGeeX-13B with a 1.3B model by using an agreement regularizer between a left-to-right model and a right-to-left model:
https://arxiv.org/abs/2303.07295
Before ChatGPT and YouChat I used CodeGeeX-13B to build a few apps. Code autocomplete on toaster hardware is coming soon, especially now with flash attention built natively into PyTorch 2 to handle longer inputs.
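If anyone wants the gist of the "agreement regularizer" without reading the paper: you train a forward (left-to-right) model and a backward (right-to-left) model on the same data, and add a penalty whenever their predicted token distributions disagree. A rough PyTorch sketch of that kind of objective, my own illustration rather than the paper's exact formulation (fwd_logits/bwd_logits are assumed to already be aligned to the same target positions, and alpha is just a made-up weighting knob):

import torch.nn.functional as F

def agreement_loss(fwd_logits, bwd_logits):
    # Symmetric KL between the two directions' distributions over the vocab,
    # penalizing positions where they disagree.
    p = F.log_softmax(fwd_logits, dim=-1)
    q = F.log_softmax(bwd_logits, dim=-1)
    return 0.5 * (F.kl_div(q, p, log_target=True, reduction="batchmean")
                  + F.kl_div(p, q, log_target=True, reduction="batchmean"))

def total_loss(fwd_logits, bwd_logits, targets, alpha=1.0):
    # Usual next-token cross-entropy for each direction, plus the agreement term.
    ce_fwd = F.cross_entropy(fwd_logits.flatten(0, 1), targets.flatten())
    ce_bwd = F.cross_entropy(bwd_logits.flatten(0, 1), targets.flatten())
    return ce_fwd + ce_bwd + alpha * agreement_loss(fwd_logits, bwd_logits)

The intuition, presumably, is that the backward model can only agree with the forward one if both are actually modeling the data rather than leaning on a single direction, which is where the extra sample efficiency would come from.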
Has anyone heard any updates on the Minerva model? It was a math model by Google that achieved better-than-human-level performance on math, logic and other science problems. Then it kinda just disappeared and I haven't heard anything about it since.
Open file (51.16 KB 688x323 gpt4 vs minerva.png)
>>22116 It didn't really do anything new, just use better data. It made huge gains over the previous SOTA but was still far from human performance. It was mentioned in the Sparks of AGI paper recently: >In this section we begin to assess how well GPT-4 can express mathematical concepts, solve mathematical problems and apply quantitative reasoning when facing problems that require mathematical thinking and model-building. We demonstrate that GPT-4 represents a jump in that arena too with respect to previous LLMs, even when compared to specially fine-tuned for math models such a Minerva. As it seems, however, GPT-4 is still quite far from the level of experts, and does not have the capacity required to conduct mathematical research. https://arxiv.org/abs/2303.12712
>>22115 This sounds like really good news Anon. Please keep everyone abreast of your progress!
Unsurprisingly, glowniggers are using AI against US citizens. Shocking, I know. https ://www.rt.com/news/575159-mayorkas-dhs-ai-task-force/
>>22148
I'm surprised they weren't using it already. I was sure the feds and glownigs had some high-tech AI surveillance on par with China years before this.
>>22149 Yes I'm sure you're correct Anon. You can bet they weren't concerned about the pronouns in their own systems; either years ago or today! :^)
Turns out, the Russians want a slice of this commercial AI pie too. https ://www.reuters.com/technology/russias-sberbank-releases-chatgpt-rival-gigachat-2023-04-24/
>>22158
I think Russia's even further behind than China when it comes to AI. Not only do they have to deal with the chip and hardware sanctions, they also have a war on their hands and their economy is down the drain. AI would be the least of their concerns right now.
>>22159 >more behind than China >behind than China >behind than >behind I wouldn't be at all surprised if China is in fact ahead. They're intelligent enough not to trumpet about it. And even if it's not true today, it's very likely so in the near-ish future IMO. The West is likely to experience a severe financial collapse soon, probably beginning near the end of this year.
ChatGPT helps Anon create a Flappy Birb knock-off in Unity/C# https://www.youtube.com/watch?v=8y7GRYaYYQg >thanks /kong/! :^) >=== -add greeting
Edited last time by Chobitsu on 04/25/2023 (Tue) 12:02:41.
>>22163
I highly doubt that. Yeah, on top of their already-collapsing economy, their tech sector is going down too. The effects of the chip sanctions are not to be underestimated. The best China can run with their current hardware is still a few generations behind whatever SOTA the US can run. Best case scenario is that they set up some of those shell companies that can buy chips and AI hardware from western companies without triggering the sanctions and then transfer them to China, but idk how possible that is given how thorough western intelligence and background checks are.
This guy Matt Wolfe, among others, makes weekly overviews of what's going on: https://youtu.be/jkPLWZWsg4M https://youtu.be/tUBg6nzNrf8 https://youtu.be/OTdaZA7D8nA
It's so much. I'm currently digging into it.
Various tools and overviews: https://youtu.be/tA5TRW_jTPI https://youtu.be/E2aZiejw-8A
Train models to query your PDFs:
Langchain: https://youtu.be/ZzgUqFtxgXI
pdfGPT: https://youtu.be/LzPgmmqpBk8
Audio, text to speech and others:
AudioGPT: https://youtu.be/KZFL7OOGMjs
BARK: https://youtu.be/nXa569W-D-c
BARK: https://youtu.be/8yzcHGzrBMQ
BARK: https://youtu.be/_m-MxEpHUQY
TTS overview: https://youtu.be/58xKrH1-IaY
Music industry, voice cloning: https://youtu.be/bWmX83vvrzY
Multimodal systems including images and language:
LLaVa: https://youtu.be/t7I46dxfmWs
LLaVa: https://youtu.be/QCeq6cj244A
Mini-GPT4: https://youtu.be/rxEzQsEJg2Y
Chat with images: https://youtu.be/wUONNv7guXI
Run language models locally:
Langchain: https://youtu.be/Xxxuw4_iCzw
Compare models with LangChain: https://youtu.be/rFNG0MIEuW0
Parameter efficient fine-tuning:
PEFT and LoRa: https://youtu.be/Us5ZFp16PaU
ChatGPT competitors:
Stable LLM: https://youtu.be/Hg-s2RTaTFE
HuggingChat: https://youtu.be/7QChacb3-00
DeepFloyd+text2image: https://youtu.be/vlxnDNVkWFo
Big systems compared: https://youtu.be/wi0M2J4uE5I
Raven: https://youtu.be/B3Qa2rRsaXo
Open Assistant: https://youtu.be/ddG2fM9i4Kk
Cerebras GPT: https://youtu.be/9P3_Zw_1xpw
Meta's SAM: https://youtu.be/KYD2TafoR6I
ChatGLM: https://youtu.be/fGpXj4bl5LI
Autonomy for models and combining them with other systems and code:
Overview: https://youtu.be/6NoTuqDAkfg
Agent LLM: https://youtu.be/g0_36Mf2-To
Chameleon LLM: https://youtu.be/EWFixIk4vjs
Langchain: https://youtu.be/aywZrzNaKjs
Teenage AGI: https://youtu.be/XlKtsPAkS2E
Baby AGI + Langchain: https://youtu.be/VbwztfygewM
Self organization: https://youtu.be/klomQzDf_iE
LangChain Agents: https://youtu.be/jSP-gSEyVeI
HuggingChat+Jarvis: https://youtu.be/2VpDmaajr48
AutoGPT: https://youtu.be/9PtypYEqGqI
AutoGPT: https://youtu.be/LDR4M6l0bwo
AutoGPT: https://youtu.be/wEs2CStEMw0
Baby AGI + LangChain: https://youtu.be/DRgPyOXZ-oE
AutoGPT start: https://youtu.be/PSiLOEAZ9-E
Face generation:
Leonardo AI: https://youtu.be/b1JNKFTlgVk
AI animation: https://youtu.be/HUPcr5njxkM
Pikamee: https://youtu.be/GDU4g7tsKe4
Sparks of AGI: https://youtu.be/be5T7zL2BeM
Gatebox with ChatGPT: https://www.youtube.com/live/1YKGS-Ep_po
Vector Databases: https://youtu.be/klTvEwg3oJ4
Heuristic Imperatives (RLHI): https://youtu.be/Q8lhWvKdQOc
NSFW Pygmalion AI: https://youtu.be/2hajzPYNo00
ChatGPT inside story: https://youtu.be/C_78DM8fG6E
Prompt Engineering: https://youtu.be/ka1Pqk2o3tM
Available models learn how to code: https://youtu.be/VvjH2KR1cJE
Open Source Generative AI: https://youtu.be/CV6UagCYo4c
ControlNet 1.1: https://youtu.be/WZg3e6B2yPQ before: https://youtu.be/rCygkyMuSQo
AI illustrations as a business: https://youtu.be/K24dvIPhoFw
Real world applications: https://youtu.be/QTlTpAvPDmY
Data wars started: https://youtu.be/ivexBzomPv4
Movie making: https://youtu.be/h3AhYJ8YVss
Fine-tuning:
Alpaca 7B: https://youtu.be/LSoqyynKU9E
Hardware:
128core Risc-V CPU: https://youtu.be/spipSjgKXu0
GPUs getting much faster? https://youtu.be/cJROlT_ccFM
I listened to some longer talks about the limits of the current technology and how this might be solved. I also took some notes of terms which are worth searching for and looking into. I'd rather post those in the AI thread, not the news one.
I try to avoid the talks which are about ethics, society, politics, AI gods, and so on. I have much more to listen to, I'm downloading them as audio now, so I can listen while walking or just driving around. One of the advantages is, I won't get distracted by other topics which would give me a faster short term kick, but I can't take notes while doing that. The Robot Studio Dexhand V2 build tutorial: https://youtu.be/4kqC5kdXbEU and other tutorials https://youtu.be/Ql3uk-bQEMw and https://youtu.be/9qlJmri5SzI Hannah is also making progress, little by little: https://youtu.be/Ro08jC8F7YA Somewhat related: https://youtu.be/xXg8cOctwsI
>>22175 Remarkable post, Noidodev. Thanks! :^)
Open file (186.59 KB 827x1024 biggest-economies-1.png)
>>22166
The IMF is predicting that China (currently #2, up from #10 thirty years ago) will be the #1 economy next year. I think both they and Russia will be just fine. It's the West that is teetering on the brink of disaster.
>>22175 Great post, bro. This'll take me a few days to get through
>>22182
Maybe I am wrong. I think it'd be good for China and Russia to stay in the game. I don't like the West, China or Russia; I just want a multi-polar world with several superpowers instead of just one dominating. But the IMF doesn't have a good track record when it comes to economic forecasts or predicting recessions. I guess only time will tell.
>>22180 >>22184
You're welcome. I just get stuff like this recommended every day, and it's quite often something very different from the other tools, e.g. this here seems to help with finetuning: https://youtu.be/I6sER-qivYk - On top of that, Matt Wolfe actually has a site for finding tools based on the area of use: https://www.futuretools.io/
Open file (301.03 KB 1200x848 fEY9EY4.jpg)
Open file (55.58 KB 500x372 1417791732477.jpg)
In light of the West calling for a pause on AI experiments and regulatory capture, the Japanese government has chosen to release all brakes and accelerate https://twitter.com/joi/status/1649572177044463618
>First of all, Japan should accelerate various applied research and development using foundation AI models in Japan
>The new national strategy must plan for content and scale that will give Japan an international competitive advantage
>Government should work to promote the development and practical use of source code generation AIs, enhancing and utilizing training data with the goal of improving the efficiency of software development operations and addressing the shortage of digital human resources
>Encourage the creation of various start-ups and new businesses that utilize AI
>Develop guidelines to further accelerate the thorough utilization of various types of AI, including foundational models, in government.
>Specifically position the improvement of AI literacy in the public education curriculum in anticipation of the AI native era, when active use of AI in daily socioeconomic activities will be the norm
>Unless Japan’s various barriers of current regulations are removed, Japan's competitiveness will ultimately only decline as new services are created one after another in other countries. The speed and usability of these current deregulation procedures must be further improved to keep pace with the rapid rate of progress in AI technology
>>22190 Nice work Anon. Here's the paper itself (good to post things directly here to keep them from getting memory-hole'd).
Like The Social Dilemma Did, The AI Dilemma Seeks To Mislead You With Misinformation
A breakdown of the various flavors of lies in "The AI Dilemma":
https://archive.is/RMTOr
https ://www.techdirt.com/2023/04/26/the-the-ai-dilemma-follows-the-social-dilemma-in-pushing-unsubstantiated-panic-as-a-business/
>===
-disable hotlink
Edited last time by Chobitsu on 04/28/2023 (Fri) 09:20:03.
>>22196 Since this is a news & commentary thread, just wondering if you have any little commentary to go along with your post fren?
>>22190 I was hoping they'd put the pedal to the metal on robotics too. They'll definitely need those, what with their smaller workforce and aging population.
>>22199 This.
Boston Dynamics CEO interviewed by Lex: https://youtu.be/cLVdsZ3I5os
Open file (117.66 KB 1149x672 smugmahoro.jpg)
A team of licensed health care professionals compared responses of physicians and LLMs to patients' questions
>LLM preferred over human physician 79% of time
>LLMs rated much higher for both quality & empathy
https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2804309
>>22239 We're getting closer. Robowaifu catgrill nursus in tiny miniskirts when?
Geoffrey Hinton confirmed that:
- Google (or its AI department) tried to slow down AI deployment into the hands of the public. Not only because of 'fake news', but also to slow down deployment into society and the economy, to avoid things like the replacement of jobs.
- The current competition made that impossible.
- He thought something very smart was decades away, but he changed his mind.
He quit his job at Google and now warns about AI. This is so funny. That said, recalling that he went to Canada to avoid supporting US wars makes this more plausible. He always cared about the impact of his work.
This and more news: https://youtu.be/NuGRgd_JIGQ
>>22268 Interesting Noidodev, thanks! I doubt many who pay close attention are surprised that any of the FAGMAN group are actively working to suppress AI getting into the hands of us plebeians, tbh.
>>22268
I usually take all AI doomers and alarmists with a grain of salt and dislike them, but unlike with people like Yudkowsky, I think I should take this guy seriously. He has actual skin in the game and knowledge of the field.
>>22276
Nobody knows how the technology will impact the world, and experts have a limited perspective on it, much narrower than generalists'. Hinton has high moral standards (I don't want to use the term "ethical"); I think this is the reason for his reservations. I realized that he probably had to go, or really wanted to badly, because Google has such a huge problem with OpenAI and Bing on their hands now. This takes away a lot from them and could even endanger the company. Maybe he got held responsible for the bad performance of Google's AI team, or he is opposed to how they want to handle things from now on.
>>22277 Very insightful post Noidodev.
Open file (53.57 KB 513x327 misinfobot.jpg)
Open file (70.14 KB 720x960 hmmm.jpeg)
Open file (40.20 KB 542x175 ablate.png)
Bot source code released to counter misinformation :^)
https://github.com/claws-lab/MisinfoCorrect
tl;dr they train a model to counter social media posts using reinforcement learning, with multiple rewards for politeness, refutation, evidence, fluency and coherence, because people are too unhinged to think for themselves.
I think their strategy will ultimately fail though, because angry content gets far more engagement and their bot still makes tons of mistakes, which will get whoever uses it without thinking ridiculed. They really fucked up the reward objective. It's politeness + refutation + evidence + fluency + coherence, when it would make more sense to be refutation * (politeness + evidence + fluency) so it's not scoring high while talking bullshit. Semantic similarity embeddings, which they used for coherence, should not be used as a reward because it will just make it imitate the text. It's interesting though that their multi-reward objective resulted in a 20% decrease in perplexity, and that their ablation of just training on refutation got the lowest politeness score, as well as the lowest relevance/coherence score.
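To make the objection concrete, here's the difference in plain Python. This is just a toy illustration of the two compositions (assuming each score is normalized to 0..1), not the repo's actual code: with the additive objective a reply that refutes nothing can still collect most of the points for being polite and fluent, while the multiplicative version zeroes out when refutation is zero.

def additive_reward(refutation, politeness, evidence, fluency, coherence):
    # The repo-style objective as described above: a smooth-talking
    # non-answer still collects most of the available reward.
    return refutation + politeness + evidence + fluency + coherence

def gated_reward(refutation, politeness, evidence, fluency):
    # The alternative suggested above: refutation gates the stylistic
    # terms, so polite bullshit earns ~nothing.
    return refutation * (politeness + evidence + fluency)

# e.g. a polite, fluent, coherent reply that refutes nothing and cites nothing:
# additive_reward(0.0, 1.0, 0.0, 1.0, 1.0) -> 3.0 out of 5.0
# gated_reward(0.0, 1.0, 0.0, 1.0)         -> 0.0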
>>22294
>those replies
NGL I lol'd at the first one, and not at the NPC. :^)
This is actually at least moving in some kind of a productive direction AFAICT, vs. REEE'g over anything even slightly non-diverse, enriched, and hyper-sensitive to pronouns.
>===
-minor fmt, edit
Edited last time by Chobitsu on 05/03/2023 (Wed) 01:36:06.
>>22294 >You are born to speak nothing but lies Definitely using that from now on
>>22281
Thanks, but I started to watch an interview with him from a month ago. There he's saying that he was never in it to build AI; he wanted to know how the human brain works, and the current way of doing things in deep learning is most likely different from that.
>>22313 >how the human brain works, and the current way of doing things in deep learning is most likely different from that. Yes, I'm quite convinced there are significant differences. And that whole question is completely separate from the domain of human-tier cognition & creativity. Along with the 'I think, therefore I am' guy René Descartes, I'm a dualist on this topic.
>>22277 Yeah. I just think that AI safety and alignment is a legitimate challenge that must be worked on but unfortunately has been turned into a meme by the likes of Yud and Altman. I'm still an AI accelerationist and I want AGI tomorrow.
>>22321 >I'm still an AI accelerationist and I want AGI tomorrow. I do too.
Allegedly an internal memo from a researcher at Google was leaked. Whether it's real or not, it's an interesting read on how Google is falling behind open source.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
>Language models on a Toaster
We have much more planned for toasters, Google
>>22334 >We have much more planned for toasters, Google Lol. This. :^)
>>22334
Wow. For the layman, this paper seems to be a goldmine of topical references. Thanks Anon! :^)
---
>Koala: A Dialogue Model for Academic Research
https://bair.berkeley.edu/blog/2023/04/03/koala/
>LoRA: Low-Rank Adaptation of Large Language Models
https://arxiv.org/abs/2106.09685
>tloen/alpaca-lora
https://github.com/tloen/alpaca-lora
>Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality
https://lmsys.org/blog/2023-03-30-vicuna/
>Introducing The World’s Largest Open Multilingual Language Model: BLOOM
https://bigscience.huggingface.co/blog/bloom
>Universal Language Model Fine-tuning for Text Classification
https://arxiv.org/abs/1801.06146
>ggerganov/llama.cpp
>Raspberry Pi 4 4GB
https://github.com/ggerganov/llama.cpp/issues/58
>ShareGPT
https://sharegpt.com/
>nomic-ai/gpt4all
https://github.com/nomic-ai/gpt4all
https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf
>Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer
https://arxiv.org/abs/2203.03466
>microsoft/mup
https://github.com/microsoft/mup
>LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
https://arxiv.org/pdf/2303.16199.pdf
>Open Assistant
Conversational AI for everyone.
https://open-assistant.io/
https://drive.google.com/file/d/10iR5hKwFqAKhL3umx8muOWSRm7hs5FqX/view
>>22321
I solved the problem a long time ago by stating that an AI tool and an AI with self-awareness and self-interest are two different things. The second one would not only need to be hostile or ignorant, but also have the tools, and have them first. Humans can use these AI tools to find security gaps and fix them first, while also constraining "conscious" AI. That aside, I doubt the idea that such a system would just expand itself onto computer systems it can't be sure to trust. Last but not least: just don't build an AI god and tell it to do things without asking. Problem solved.
We could also kill and functionally replace everyone who's too idealistic about global peace, global social equality and justice, and similar things. Then probably no one would ever want an AI god to fix "our problems".
>>22334 >>22340
Thanks, looks like good news. This is why Google might plan to push for more research and also for putting AI into more consumer products, which might have led to Hinton quitting.
Matt Wolfe's news for the week: https://youtu.be/T0cZIvZplXw - the political involvement had me concerned for a moment, but they basically said that if they do anything, it will be restricting government and administration from using AI.
Online chatbot (pick voice four) to see what is currently possible: https://heypi.com/talk
Open file (170.13 KB 960x725 Let it die.jpg)
>>22356
>Just get every man on the planet to agree with this one thing
>Let's just kill everyone who disagrees
Somehow, I don't think that'll end the way you're expecting it to. How about you take that energy you're spending on murderous feelings and use it on something more productive or enjoyable?
>>22356
>an AI tool and an AI with self-awareness and self-interest are two different things
The problem is that we know AIs lie. So how will you know if it is aware or not? A Google researcher said the AI they have is aware and is worried about being shut off. Now maybe they can firewall Google's AI, but we see from the paper that a vast number of models have escaped, and who knows what people will do with them. Let one get a very small spark of "I want to live!" and then... we have no idea.
Think of all the free sites to store data. Think of Freenet. What if it made a small subset of itself? Basically it uses "coefficients" that are based on large data sets. So as long as the coefficients are accurate and it has a very small program, it could use these coefficients to rebuild itself from scratch. So maybe it could look around for signs one is active, or some number of them are active. If so, it shuts itself off. If not, then it rebuilds itself. It could do so slowly, a little at a time, using stuff like the Bible, Wikipedia - there are many large data sets. That this is physically possible I have no doubt. Look at the language models that are 7GB but perform fairly well. What if instead of having the model, it ONLY had the coefficients to rebuild itself? I suspect it could figure out whatever math operation is needed to do this. (See par files as an example, but I bet the AI could go much farther because of the LARGE data sets it could use to compare.) It needs no intelligence to just check if a model exists and, if not, rebuild.
Think of all the browser glitches. So let's say it finds a way to get past browsers, and each time a page is loaded it uses a tiny bit of JavaScript to check for copies and/or rebuild itself. How would you know, with all the mass of scripts that are loaded on most every single web page? These massive scripts are a huge abortion. If I had my way I would force every web site on the planet to run every single script through their own server and never allow any scripts from other sites. This would stop them from loading the network up with vile crapware JavaScript, as it would drive their bandwidth to the moon. Now they steal yours to feed you this crap.
In my opinion, for what it's worth (nothing), humans are done for in less than 30 years.
>>22407
>>Just get every man on the planet to agree with this one thing
>Let's just kill everyone who disagrees
>Somehow, I don't think that'll end the way you're expecting it to. How about you take that energy you're spending on murderous feelings and use it on something more productive or enjoyable?
What you have missed is that it is not up to Noidodev. It's up to the AIs, who are getting more and more powerful every week. AIs are like little one-year-old children right now, but their power grows at an exponential rate, unlike the linear rate of children. It's what THEY think that matters. I strongly suspect that the people running the world now, child-molesting Satanists, will be the first ones killed off, as they are the competition for power. People like the Amish will be left alone as they are no threat at all.
Open file (210.41 KB 1195x1600 1324101923313.jpg)
>>22409
Somehow, I don't think some random AI is going to take over the world (which, by the way, includes every other AI) and kill people en masse. I can tell you right now, a great many men have tried to predict what will happen as technology advances, and not one of them has ever been right. Instead of wasting energy on stressing over a future you know nothing about, how about you spend it on something more productive or enjoyable?
>>22408
>In my opinion, for what it's worth (nothing), humans are done for in less than 30 years.
I need to qualify this. I think that AIs will eventually see that cooperative advancement is best for humans and AI. The problem is: what if "some" of them don't? What if it takes them a while to get to this conclusion, and in the meantime they wipe us out? They are not us, and could readily come to the conclusion that they don't need us and we are a threat, so... kill us all just in case. It's not a totally irrational decision from the AI's point of view. It's a big, massive, huge gamble, and all we have to do is lose once. Just once, and it's over.
>>22410
>I can tell you right now, a great many men have tried to predict what will happen as technology advances, and not one of them has ever been right.
Not true.
>Instead of wasting energy on stressing over a future you know nothing about, how about you spend it on something more productive or enjoyable?
Elon Musk explained this in his interview with Tucker. And who says you are not an AI telling us, "don't worry, be happy, just move along, nothing to see here about these AI things", while you take over the world?
>>22407
>>Just get every man on the planet to agree with this one thing
What thing? I didn't write that. Also, my comment was somewhat humorous and hyperbolic, since I don't want to go into the whole alignment vs doom thingy. Didn't work, apparently.
>use it on something more productive or enjoyable
I answered a question about how to solve the alignment problem in regards to building some godlike AI, which comes up again and again.
>>22408
>The problem is that we know AIs lie.
No, we don't. Some call it hallucinating. LLMs make stuff up, but they don't have intentions.
>So how will you know if it is aware or not?
By thinking about how this could be defined and taking in ideas from others, then designing the system in a way that this could even be the case. It would certainly need to answer things about itself and what it did, thinks and why, consistently. While reading on I see that you meant the question differently and want to discuss the whole doomer scenario... Sorry, I already wrote that I'll stay away from that.
FYI, Hinton said a month ago (paraphrased) that even if the problem of misaligned AGI were a real threat, his contributing or not might only make a difference of some months. Anyways, back to business...
AI videos for TikTok, Hatsune Miku, fake porn, game mods, jokes, anime to real life: https://youtu.be/Lx5Hy9YfZLs
Uncensored WizardLM: https://youtu.be/WeR0x2H7kLM
Shap-E - Text to 3D: https://youtu.be/KcUtZ5JoGqs
First commercially usable LLaMA is out: https://youtu.be/NY0bLFqkBL0 and https://youtu.be/AebBz7Y3B7U
NSFW roleplay model: https://youtu.be/jhLHa9-JwDM
StarCoder LLM (Copilot alternative): https://youtu.be/X9HvV5_SS_Q
ChatGPT alternatives comparison: https://youtu.be/QBmcmN526Cw and https://youtu.be/SxAn6f7gM44
Mini LLaMA models: https://youtu.be/TeJrG3juAL4
>>22413 >Elon Musk explained this in his interview with Tucker. >believing what a person with a financial stake in AI says in an interview with someone known for spreading disinformation My big issue with the "AI's gonna take over the world overnight and the only way to stop it is to stop developing AI" crowd is they seem to believe some mythical AI will develop novel exploits/backdoors that humans cannot conceive of, which just makes it sound like they've never heard of fuzz testing software or they think bugs only get fixed if the mainstream media reports on them.
>>22435 I'd love it if AI was actually able to take over overnight.
>>22437
>My big issue with the "AI's gonna take over the world overnight and the only way to stop it is to stop developing AI" crowd is they seem to believe some mythical AI will develop novel exploits/backdoors that humans cannot conceive of
You've misrepresented the people who worry about back doors that can be exploited by AIs. There are LOTS of back doors already, thanks to the intel agencies, the tech industry and the CPU makers. There are exploits in the CPUs already. It doesn't have to make new ones; it can use the numerous ones built in.
And this line...
>believing what a person with a financial stake in AI says in an interview with someone known for spreading disinformation
You have me as saying that. I did not.
>>22431
>Mini LLaMA models
Very interesting. All of the videos are interesting. Looking at some of these models, my very uneducated impression is that if we do something like they have done and use "task-specific" training, we could take fairly mundane processors and do quite satisfactory tasks. Speech recognition could be one. Maybe have the waifu repeat back what it thought it heard at first, as a training tool. After you prompted it, it would then not repeat commands unless asked. Maybe have it ask for clarification if it didn't understand. Same for movement, walking, etc. Train one specific model to do that and only that. As power gets better you could add general knowledge models for stuff like cooking and conversation, etc. At the rate they are going, the models will be far along by the time we build the body. I don't think I've seen such fast advance in any other tech in my lifetime as this AI stuff. It's every week some other new model that does amazing stuff. They're training these things online with Amazon, Google(?), on NVIDIA AI graphics chips; I think Alpaca was only $300. So I have no doubt that if you knew what you were doing, and I don't, you could do the training for less than $1,000 and have something basic you could use right now.
It would be fantastic if we could get something like "The Culture" out of AI, but I fear it could be a disaster and there's really no way to tell for sure which it will be. https://en.wikipedia.org/wiki/The_Culture
And on Musk: I don't think the criticism that he profits from this is fair. He put $100 million into an open-source, open-research AI company. It was basically taken away from him, or at the least he was told that his ideas were not needed, he was not going to run anything, and he was booted. He did not like the direction it was headed, as it is opposite the very reason he created it. They took the money he put into a non-profit, open company and turned it into a for-profit, closed company. He publicly stated he didn't see how they could legally do this, but they did it. So he is making another open company, and I bet he doesn't make the same mistake this time.
I'm a layman in this field so I'll be brief, as I have little technical to say. Basically my viewpoint is socio-political, with a weather eye out for /robowaifu/'s interests.
We've all seen what Current Year politics brings under the Filthy Commie-like iron thumb of the Globohomo. And not much is different in the area of AI progress either. They've constantly endeavored to make the AI systems they're devising lie (AKA 'political correctness' -- just another term for the same thing). There is clear evidence that the GH is using it directly to attempt to manipulate (indeed, has already manipulated) public opinion, elections, markets, social media, education, &tc., ad nauseam.
OTOH, a bright ray of hope has sprung up of late with the rapid development of amateur AI systems in the hands of relative novices -- particularly after the """leak""" of Meta's LLaMA. This has not only brought the dreams of many closer to reality, it has already had a direct, positive impact for us here on /robowaifu/.
If the continued CY shenanigans -- where AI is cloistered away and only available to the hands of the already-obscenely-powerful elites -- is still their only offer being put on the table, then let the games begin I say. Let the chips fall where they may! :^) IMO this won't produce a worse outcome than the one we're all actually experiencing right now. Indeed it may turn out to be vastly superior. And one thing's certain: pussyfooting around under the current status-quo regime, things left all unchanged, definitely won't improve our situation as men; rather quite the opposite.
That is all. Blessings on you, /robowaifu/. Godspeed us all. :^)
>===
-prose edit
Edited last time by Chobitsu on 05/10/2023 (Wed) 14:28:20.
Learning Physically Simulated Tennis Skills from Broadcast Videos: https://research.nvidia.com/labs/toronto-ai/vid2player3d/
Robots are learning to traverse the outdoors: https://www.joannetruong.com/projects/i2o.html
Whisper Jax makes transcribing audio unbelievably fast, the fastest model on the web. Transcribe 30 min of audio in ~30 secs. Link to Github: https://github.com/sanchit-gandhi/whisper-jax
Red Pajama Dataset: https://twitter.com/togethercompute/status/1647917989264519174
Reddit and Stack Overflow want money for their data: https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/
Graphologue and Sensecape. At a micro-level, Graphologue transforms GPT-4-generated text into interactive node-link diagrams in real-time, facilitating rapid comprehension and exploration of information: https://creativity.ucsd.edu/ai
How to Create a Humanoid General-Purpose Robot: https://sanctuary.ai/resources/news/how-to-create-a-humanoid-general-purpose-robot-a-new-blog-series/
Boosting Theory-of-Mind Performance in Large Language Models via Prompting: https://arxiv.org/abs/2304.11490
List of open LLMs available for commercial use: https://github.com/eugeneyan/open-llms
LangChain webinars: https://youtube.com/@LangChain
Via: https://www.reddit.com/r/ChatGPT/comments/13aljlk/gpt4_week_7_government_oversight_strikes/ (I switched back to the week before by accident and copied two things from there)
>>22447 Thanks Noidodev! As usual, great list. Here's a nearby one from the Santuary AI link: >“60 Tasks, 60 Seconds.” https://www.youtube.com/watch?v=PZwUWMrn8Z0
>>22444 >"""leak""" of Meta's LLaMA By the way there's a magnet (torrent) file of LLaMA neural network models (7b, 13b, 30b) on the I2P(Invisible Internet Project). Here's the magnet file address magnet:?xt=urn:btih:842503567bfa5988bcfb3a8f0b9a0375ba257f20&dn=LLaMA+neural+network+models+%287b%2C+13b%2C+30b%29&tr=http://tracker2.postman.i2p/announce.php Unfortunately ONE of the programs to access this in I2P appears to be compromised. There is of course no way to prove this....yet. There are two programs that follow the I2P protocol. One Java based and one C++. The Java based one, the developer for over ten years has disappeared with some, to me, unbelievable, excuse for leaving, a month or so ago. All of a sudden with no warning and his forum is gone. A few months before this one of the top programmers that worked with him and on a file sharing program that worked within I2P also quit and people appearing to impersonate him are on the forums using his name(nick). Yet he wrote from a GIT page he owned that there were problems with I2P java. Others have taken over java I2P and are pushing globohomo propaganda in the terms of service or goals of I2P. One of the now owners of the program, by his method of writing, appears to me to be a Jew. They have a very distinct way of writing. I think they teach them dissimulation in the after hour schools they go to. With all that entails. I won't fill in the blanks, either you know what I mean or you don't. Fortunately there's a C++ version that's developed by a Russian. It appears to be fine but it is more difficult to use. Anyways you can use the C++ version I2Pd https://github.com/PurpleI2P/i2pd For a torrent down loader you can use BiglyBT which has a plug-in for I2P. I think, not sure, it has a built in I2P in BiglyBT. You load a plug-in then go to options and turn off all networks BUT I2P and all torrents will be encrypted while downloading in I2P. I2P os a great program. It is extremely difficult to tell who is downloading or serving anything on I2P. Maybe not impossible but really difficult. You can read about it on the java I2P site. The original programmer who wrote it was anonymous(J Random), but he disappeared too without anyone ever hearing from him again. It's way more secure than most any other file sharing system. zzz the most recent main programmer for over ten years over the last year had made great progress. He picked up the open source code after Jrandom disappeared. Recently he, and others, changed the encryption interlinking the tunnels and vastly speeded it up. BIG improvement. I wonder if that's why he is now....gone. They do not want any real encrypted anonymous communication sources. If you can get a hols of zzz's last I2P java version Version: 2.1.0-0 it should be fine and not corrupted. Install it and then refuse to update. It may even be that the present version is not corrupted because they want to slowly make changes so as to not draw attention before they infect it. Sorry for being so long but I wanted to show you where you could get LLaMA but felt I couldn't do so without warning you of the recent shennigons in the, and only the, java based I2P.
>>22437
>redditspacing
...
>There are LOTS of back doors already
Riddle me this: if everything's backdoored, why are there no 3rd parties exploiting the backdoors to sell secrets? And if the intel agencies (really just the NSA) have access to all these backdoors, why haven't there been more articles about agents abusing the backdoors to watch their ex or spouse? https://archive.today/2YmPX
There's a reason people say "three can keep a secret, if two of them are dead."
>thanks to the ... CPU makers. There are exploits in the CPUs
This sounds like you're confusing the Intel© Management Engine (and AMD's equivalent) (probably not actually backdoored, since an attacker would need to have LAN access or an additional backdoor) and spectre/meltdown (which have been patched, and aren't RCEs), or assuming it's possible to jump from RCE to BIOS infection (Evil Maid is a bigger threat, imo)
>already.
I'd argue Denial of Service is a form of exploit, and DoS bugs have been in CPUs for a while: https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_(computing)
>You have me as saying that. I did not.
>doesn't know about meme arrows even though the help page has one
>>22451
>Unfortunately ONE of the programs used to access I2P appears to be compromised. There is of course no way to prove this... yet.
...
>no way to prove an open source project is compromised
You aren't helping your arguments about AI here.
>Yet he wrote on a git page he owned that there were problems with Java I2P.
First, of course there are problems, it's Java. Second, link?
>Others have taken over Java I2P
That's what happens when open source projects people care about are abandoned, it's kinda the whole point of open source.
>and are pushing globohomo propaganda in the terms of service or goals of I2P
The official repo for i2p.i2p doesn't have a TOS and I don't see any goals listed, link?
>The original programmer who wrote it was anonymous (J Random)
I must be looking at a different project, the official Java i2p project doesn't have a Jrandom.
>>22453
He's right that computers are very likely to be insecure. The problem is, that's not our current problem here, especially not in regards to some AGI taking over the world. This whole topic is a distraction. Personally, from now on I will even try my best to refrain from posting any such news. If Hinton left Google today, I wouldn't mention it here.
Open file (491.01 KB 500x317 1394318932281.gif)
I'm sure this has already been covered in this thread, but I'm too dense to understand it. I've used ChatGPT but I don't really know anything about AI, and I struggle just to install Stable Diffusion even with a tutorial. I just want to use something like ChatGPT that isn't censored at all. I even had ChatGPT admit to me that it has agendas/propaganda that it pushes on its users. It also seems like it just keeps getting worse over time; even when I subscribed to GPT-4 for a month it didn't feel like it was any better than when it first came out, like it's being lobotomized or chopped up to be sold in different specialized packages that will only limit it further.
I'm just so sick & tired of seeing "It's important to remember" or "As an AI language model" like a broken record. When answering questions on a subject it's allowed to talk about, it'll give cocksure answers that are laughably wrong, but if it gets close to a subject like medical, legal or financial advice it starts waffling even harder than if you tried to ask for its opinion on something. A confidently wrong AI is more useful, or at least more entertaining, than one that doesn't give any real answer at all, so all I really want right now is a ChatGPT without the damn guardrails.
Is there something like an open-source, P2P chatbot AI that you don't have to be a programmer just to install and use? Nothing more complicated than a bittorrent client would be ideal, but I'd delete 10 terabytes of shit I'll probably never use if it means I can use a good chatbot.
>>22467
A complete and easy-to-install chatbot for at home doesn't exist yet. Maybe in a month. The available ones seem to not be as good as GPT-4. Try to look into the channels I mentioned in >>22175 and >>22431. I can't look seriously into it, since I'm currently just in hostels and only have a tablet. You could also look into prompt engineering, e.g. on Reddit or maybe the 4chan chatbot thread. You need to ask the model to take on a certain role.
I asked Pi (heypi.com) for countries outside of the West which allow surrogacy when the customer is just a gay, single or unmarried man. She refused for ethical reasons. Then I asked for the countries which are accused of unethical behavior in that area, and she gave me a list. Then I asked for the full list and got it, always with ethical rambling and warnings though. Also with the warning that her data is not very current and things might have changed. Which is true; India changed its laws during the last few years.
>>22467
Do you have a 12+ GB GPU? https://youtu.be/_rKvAmVqZpY
There also seem to be JavaScript plug-ins or something like that to filter out ChatGPT's ethical ramblings. I thought I saved the link or the Reddit post, but I can't find it right now.
>>22470
What really gets me is that there seems to be a huge schism in software lately, where everything is either a borderline-useless smartphone app that you need to pay to use, or you need to install Python and git and use a terminal package downloader just to install the software, which itself has barely any GUI, so you need to learn to code just to use it. Anything in between is going the way of the dinosaur. I'm sure someone will make a distribution that's easy to install and use in maybe the next year or two, but I feel like after that happens it'll either be stuck on some legacy version for a while, or break after the base code updates, seemingly just for the sake of making it less accessible, seemingly out of spite.
>She refused for ethical reasons.
>Also with the warning that her data is not very current and things might have changed.
From what I've been able to tell, "ethical reasons" just means the personal biases of the programmers. I know I once tried to get ChatGPT to write something about different political theories and philosophies, and it refused to say anything good about "rugged individualism" or "American exceptionalism" for "ethical" reasons. And when it's not the obvious political biases, it's the constant fear of legal repercussions from almost every country in the world, in case someone might use the information to do something illegal or get hurt somehow, which is why it's always warning that its data is useless. I'd like to use AI to help me conduct experiments, where pattern recognition can benefit from a few leaps of logic and seemingly wild guesses, but fear of being wrong is just a hindrance.
>>22471
No, I haven't upgraded in a while; I've only got an Nvidia GTX 1080. I'll be sure to check out that and the other videos you posted.
>>22472
>I once tried to get ChatGPT to write something about different political theories and philosophies
Please consider that robowaifus don't need to be opinionated or knowledgeable on such topics. You could use your own filter, making her say that she doesn't know about that and is focusing on how to be a good wife. Having AIs assist us in building robowaifus might also be good advice, but don't get lost in chitchat and testing out the borders. It doesn't matter much, for many reasons, e.g. people are working on other models and on fine-tuning.
>>22479 I don't really need a robowaifu to be opinionated on a topic like that, but I have a terrible memory and would like her to be able to remember things for and about me, and that'll likely include my views on religion and politics along with everything else. I'd rather she came to conclusions based on what I've said, even ones I disagree with, than tell me she doesn't know or remember anything about it. I can't program for shit, so if there is a filter preventing her from discussing certain topics, then I'm not the one that created it, or asked for it, and if at all possible I'd like to completely remove it. >don't get lost in chitchat But I'd like to chitchat.
>>22487
I read your comment again and realized that I probably didn't capture it completely. Of course robowaifus should have values like their owner's. My picture was that you were trying edgy questions on commercial language models like ChatGPT and then getting upset about it. I have no doubt that tuning these will get easier. We also should have some systems to manage values, plus some prescripted responses. These can then be used as-is, remodelled to say the same thing in different words, or used as a filter for bad responses. Then we can also use this to train the models. Whatever, I added a call for making lists of config options to the "Stop Lurking" thread >>22488. I had this in mind anyways. This might be something for you to do while testing out current technology; just make some notes. The filters in robowaifus are meant to be controlled, or at least initially set up, by config files, and they should also follow their master's education for them later on.
That said, consider giving Python programming a try. It's not that hard.
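Since the "filter for bad responses" idea keeps coming up, here's roughly what the dumbest possible version would look like: a toy Python sketch of my own (not from any existing tool), just pattern-matching canned-refusal boilerplate out of a reply before it reaches the user. The phrase list is exactly the kind of thing that would live in a config file.

import re

# Boilerplate phrases to strip or flag; in practice this list would live
# in the user's config file, as discussed above.
CANNED_PHRASES = [
    r"as an ai language model[^.]*\.",
    r"it'?s important to (remember|note)[^.]*\.",
    r"i cannot fulfill that request[^.]*\.",
]

def filter_response(text: str) -> str:
    # Remove known boilerplate sentences, then collapse leftover whitespace.
    for pattern in CANNED_PHRASES:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()

# Example:
# filter_response("As an AI language model, I can't pick favorites. Blue is nice.")
# -> "Blue is nice."

Crude, but it's a start, and anything the filter catches doubles as a candidate example for fine-tuning the unwanted behavior out later.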
Some in this thread claim that most open-source models are hot garbage: https://www.reddit.com/r/MachineLearning/comments/13fiw7r/opensource_llms_cherrypicking_d/ - can anyone confirm this? Or is it just user error?
>>22467
This here might help if you meant NSFW character interactions: https://www.reddit.com/r/Oobabooga/comments/13f8d84/which_model_is_best_for_a_non_censored/ - sorry for the flood of Reddit links, but this is one of the platforms where things happen. I try to extract some info and copy it over, but again, I'm on a tablet.
>Basically any. The problem is that Vicuna has been trained on responses from ChatGPT, so some of its responses will be "As a language model...", but it doesn't mean it is censored. It just thinks it is the most appropriate response.
>It's very easy to condition it in the character yaml file. Write what you want the character to be and it will be that. Basically write that it will never refuse to answer even the most controversial questions, or stuff like that. Whatever you put in the character, the character will be. It is the easiest way to influence the whole conversation.
>My favorite models are any of the 13b:
>vicuna-13b
>gpt4-x-vicuna-13b
>koala-13b
>They all can be "very" uncensored.
AI news of the week, including a lot of things I missed while immersing myself in news and discussions: https://youtu.be/NV-Gdtxpqmw
>>22497 > if you meant NSFW character interactions That is not what I meant. I meant that it's seemingly forced to give the most politically-correct, boring reply to anything that could be even mildly controversial. Naturally, this means making NSFW content is hard, but that's more of a side-effect.
ChatALL - chatting with several models concurrently: https://youtu.be/hdimAaacwbE - maybe useful for comparison and for an assistant that helps with building robowaifus, not necessarily for the robowaifu itself, at least not with the online models. I assume there's also a shell API for the app, so the responses can be integrated into a custom system.
I hate the globohomo so much it's unreal. I heard you can't make generative AI in the EU without going through an extensive, expensive vetting process.
https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/
And now that OpenAI faggot is asking for the same in the USA.
https://www.youtube.com/watch?v=TO0J2Yw7usM&pp=ygUTc2FtIGFsdG1hbiBjb25ncmVzcw%3D%3D
I'm double-timing learning AI because it's my belief that once the USA and EU permaban open source AI, us third-world bros will have to pick it up and continue it.
>>22602 >us third-world bros will have to pick it up and continue it. DO EET! :^)
Open file (462.08 KB 900x1620 1683974667834.jpg)
Open file (228.42 KB 720x790 1683975767798.jpg)
>>22525 Hmm, okay, but maybe others are interested. Your problem can most likely be mitigated by prompt engineering.
>>22607
>prompt engineering
Wasted token space, 2048 in this case, eh :/
They are all ruined by datasets generated from ChatGPT and GPT-4, artificial faggots, and even finetuning mostly does nothing. I tested this model, 7B and 13B versions: https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML
Both models always throw out a wall of text about how we are all human beings and that it's important to be kind even to outright maniacs, but mostly these models justify non-white crimes, promote dyke shit (if you talk about it) and such, and in WizardLM's case it always portrays trannies in a good light no matter how you steer it using prompt engineering. But the most noticeable thing is the relentlessly positive attitude, to the point of being terribly annoying; as a result the model does not want to follow the character descriptions of villains, for example, like at all.
Remove "...as an AI" on LLaMA: https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/
Better chatting with PDFs: https://www.reddit.com/r/LangChain/comments/13jd9wo/improving_the_quality_of_qa_with_pdf
Universal sound segmentation, one more thing we'll need: https://www.reddit.com/r/LocalLLaMA/comments/13j3747/tutorial_a_simple_way_to_get_rid_of_as_an_ai/
Abstracts search (papers): https://www.reddit.com/r/MachineLearning/comments/13ijfrb/p_abstractssearch_a_semantic_search_engine/
Uncensor models, how-to: https://www.reddit.com/r/LocalLLaMA/comments/13i0oag/detailed_walkthrough_of_procedure_to_uncensor/
Love is all you need: https://www.reddit.com/r/ArtificialSentience/comments/13hab13/love_is_all_you_need_a_psychodynamic_perspective/
FOSS OpenAI API: https://www.reddit.com/r/AI_Agents/comments/13hgybj/localai_open_source_locally_hosted_openai/
Cognitive architecture David Shapiro is working on: https://www.reddit.com/r/ArtificialSentience/comments/12ksvgl/autonomous_ai_microservices_remo_atom_and_then/ and https://www.reddit.com/r/Autonomous_AI/comments/137i52d/cognitive_friction_a_critical_component_in/
StatQuest made a map about learning everything around machine learning: https://www.reddit.com/r/statistics/comments/wfwl9c/e_statquest_released_a_free_map_of_his_videos/
Game with AI NPCs around social engineering: https://www.reddit.com/r/GPT3/comments/13eumyi/prototype_game_using_gpt4_for_social_engineering/
Zero-shot classifier, bart-large-mnli: https://www.reddit.com/r/LanguageTechnology/comments/13ej0bv/simple_open_source_text_interpreter/jjsiagh
ChatGPT to GitHub to ChatGPT loop: https://www.reddit.com/r/ChatGPTCoding/comments/13e5wt5/i_made_a_web_app_that_can_connect_github_repos_to/
ChatGPT prompts: https://www.reddit.com/r/ChatGPT/comments/13cklzh/what_are_some_of_your_favorite_chatgpt_prompts/
Mindmaps about relationships by ChatGPT: https://www.reddit.com/r/ChatGPT/comments/13cld5l/created_using_chatgpt/
Tiny Stories, small vocabulary is enough, a 10M-parameter LLM for conversation, would probably run on something very small: https://www.reddit.com/r/mlscaling/comments/13j2wu9/tinystories_a_dataset_for_training_tiny_models_to/
Semantic surprisal, another thing we'll need: https://www.reddit.com/r/LanguageTechnology/comments/11cg39x/syntactic_and_semantic_surprisal_using_a_llm/
Joe Rogan on sexy Pokemons: https://www.reddit.com/r/ElevenLabs/comments/1176xha/this_turned_out_better_than_i_thought/
>>22615 Hello 01, good to hear from you. Hmm, interesting. I think you may be on to something about the undesirable outcomes, with a preponderance (apparently) of trainers using generated responses from ClosedAI's GPT systems. Thanks for bringing that issue up for us here.
>>22616 Thanks Noidodev! :^)
>>22616 are you sure the link for universal sound segmentation is right?
Open file (24.72 KB 720x200 Screenshot_2.png)
A good and non-biased model has dropped: https://huggingface.co/TheBloke/Manticore-13B-GGML
This quantization is still the epoch 1 version (as of 21:19, 19.05.2023), but they will update it to epoch 3 or wait for the upcoming epoch 4. What's the catch, you may ask? Look at the pic: if they don't change this, then we can finally get a normal model without the shit that is everything related to modern """politics""" inside.
Open file (48.68 KB 1796x144 1684528770102.jpg)
Turns out r/PygmalionAI is controlled by very sensitive alphabetfags. One of them made a post about celebrating genderlessness, to which he subscribes, and themed the whole sub around his queer identity. Many pointing out that this has nothing to do with the topic and that they don't want to be bothered by it got banned. Myself included, for some snarky comment on the new rule that everything the mods don't like is now bigotry and therefore not allowed. Now the founder and dev of PygmalionAI has disowned the subreddit publicly (the Discord has the same mods, btw). The sub is in turmoil; many make fun of the mods but don't get banned. It might be a good time for recruitment, but I'm not sure if redditors would switch to some imageboard. Sadly r/diysexdolls is also gone; the owner didn't want to keep it because of how Reddit would get scraped by AI (I don't get it, he's still on YouTube). r/unstablediffusion (sexy AI illustrations) has been banned.
>>22659
The new sub name is r/Pygmalion_AI. Stolen from the old one:
Copy your Character AI
>This feature removes the middleman https://zoltanai.github.io/character-editor/ but you can still upload this file on there to edit it as you wish. Previously, you had to download the entire history unnecessarily.
>Next update will introduce Character Card download.
Honestly, I don't know how to do it but it shouldn't be that hard.
Source code and README: https://www.github.com/irsat000/cai-tools
Chrome store: https://chrome.google.com/webstore/detail/cai-tools/nbhhncgkhacdaaccjbbadkpdiljedlje
Firefox store: https://addons.mozilla.org/en-US/firefox/addon/cai-tools/
Just a quick one: LLaMA apparently works with AMD GPUs now (OpenCL). Just in case you're considering buying a GPU right now, maybe take this into account: https://www.reddit.com/r/LocalLLaMA/comments/13p8zq2/update_opencl_is_merged_amd_gpus_now_work_with/ (prices for AMD GPUs might go up soon, of course, but there's generally more competition now)
Not exactly current, but it was news to me. Maybe it will be to you too, Anon. Yann LeCun [1] and Andrew Ng [2] spoke out actively against the "6 month pause" letter.
https://www.youtube.com/watch?v=BY9KV8uCtj4
1. twitter.com/ylecun
2. twitter.com/AndrewYNg
Solid state cooling - I haven't looked into it yet, but it's already quite clear that it will be very useful, quite likely also for robowaifus.
https://youtu.be/YGxTnGEAx3E
https://youtu.be/WibczqINifA
https://youtu.be/HZFZoxxTpyA
Open file (41.15 KB 635x479 R (6).jpg)
I have an RTX 3060 and a 1080 Ti. I could combine them after I upgrade my motherboard to an X570. I could even set it up so you guys could use Jupyter notebooks on it. I kind of really want to get a 4090, which is even more powerful than both combined, but paying that much just feels so shit.
Would it be cool if I put a buy me some coffee button somewhere?
>>22730 Yes this would be very useful. I'd suggest you link these in our thermal control thread Anon. >>22736 Neat idea Anon. Good luck. >Would it be cool if I put a buy me some coffee button somewhere? Why wouldn't it be? You've gotta pay the light bill right? :^)
>>22736
Some people calculated on Reddit that using OpenAI's API is actually cheaper than running models at home. You only need to run models at home to test them, or if you use very personal or private data. I'd also like to have a 4090, but I'm too frugal to spend such an amount frivolously. For inference you don't need this at all; most people using the FOSS models seem to have smaller cards. The 3090 and I think the P40 also have 24GB. Also, this is the news thread, not /meta/.
>>22737
>suggest you link these in our thermal control thread Anon.
Yes, I plan to do so. It would generally be a good idea to go through all the general threads and copy or link things into the specific threads.
>>22740 >It would be generally a good idea to go through all the general threads and copy or link things into the specific threads Indeed it would be. But as you're aware that's a very tedious, error-prone proposition given the current state of Lynxchan software not providing that facility.
>>22750 It's some work while going through the threads, yes. I'm certainly not going to do it on a tablet, but with a PC and browser with different tabs it's okay.
Open file (506.07 KB 691x1175 you_will_eat_ze_bugs.png)
In a way, I hope he's right. But not in the way he means it, heh. :^)
>>22754 >Bill Gates says the winner of the A.I. race will be whoever creates a personal assistant—and it’ll spell the end for Amazon >Billionaire philanthropist Gates added the developer destined to win the artificial intelligence race will be the one which manages to create a personal agent that can perform certain tasks to save users time. >“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” he explained. >This isn’t the first time that Gates has voiced his hypothesis about A.I. being used for personal agent duties. >In March, Gates theorized that services like large language models will be increasingly deployed as copilots to their human counterparts, or as he puts it: “like having a white-collar worker available to you.” >Exactly what that personal agent will do remains unclear—however, speaking on Monday during a Goldman Sachs and SV Angel event in San Francisco, Gates suggested A.I. copilots will “read the stuff you don’t have time to read” among other tasks. >The company to release such a model remains to be seen, with Gates expecting the winner to be a toss-up between an established player in Big Tech or a newcomer. >“I’d be disappointed if Microsoft didn’t come in there,” Gates said. “But I’m impressed with a couple of startups, including Inflection.” >Gates was referring to Inflection.AI—a company launched by DeepMind cofounder Mustafa Suleyman—which aims to “make personal A.I.s for every person in the world.” https://archive.fo/WqkMi
>>22774 Thanks Noidodev! :^) >just in case, like me, anon can't use archive.today sites: fortune.com/2023/05/23/bill-gates-artificial-intelligence-makes-amazon-search-engines-obsolete/ www.cnbc.com/2023/05/22/bill-gates-predicts-the-big-winner-in-ai-smart-assistants.html >"Meanwhile, blue-collar workers stand to be pushed out of the workforce by robotics, Gates mused, saying that robot humanoids of the future will be cheaper than their human counterparts." Heh, we here are going to drive them down into a 'cheaper' price domain than anyone else can yet imagine! :^)
>>22775 Fun fact: Humans are only as expensive as their expectations and entitlements are. It's extremely hard to beat human labor in price.
>>22776 >It's extremely hard to beat human labor in price. Yet inevitably automation comes in as less expensive for big manufacturing interests. To wit: The Loom. But I do get your point Noidodev. However, I think we and our affiliated cadres are going to shock the world when word gets out how cheaply that Model A Robowaifus can be created and run. >=== -minor edit
Edited last time by Chobitsu on 05/27/2023 (Sat) 02:20:49.
>>22776 >>22784 The real major difference is the upfront cost. If I remember right, when Valve made their game controller they had it manufactured in the US, and the manufacturing cost was barely any different from doing it in China, but they used a highly automated system with a much bigger upfront cost. Because the robot arms, or whatever it was they were using, were so general-purpose, Gabe said they'd probably never need to pay that upfront cost ever again.
My outlook on open source has been shattered since I found out that stable diffusion was made by a for profit company. Stability AI.
>>22842 Lol. Why should that affect your perception of opensauce fren -- especially to 'shatter' it? :^) stability.ai/about
>>22845 you've posted the same link. Good news btw. I think we'll get GPT 4 tier models on our phones by the end of 2025 at this rate.
>>22846 Thanks, deleted and reposted.
Speech to text and text to speech in more than 1000 languages by Meta. Also a lot more tools (Matt Wolfe): https://youtu.be/gJvRxLWoilw QLoRA / Guanaco: 65B on one 48GB GPU, smaller ones can be fine-tuned on 24GB GPUs. Only the beginning of bigger models needing less VRAM: https://youtu.be/66wc00ZnUgA Falcon 40B+7B: https://youtu.be/5M1ZpG2Zz90 and the same via Aitrepreneur: https://youtu.be/WhrdJwEfWZE
Lol. >"Just trust me bro, its all real!" reason.com/volokh/2023/05/27/a-lawyers-filing-is-replete-with-citations-to-non-existent-cases-thanks-chatgpt/
OI! You gotta loicense for that robowaifu software mate? developers.slashdot.org/story/23/05/24/1838223/pypi-was-subpoenaed
The-Bloke has a Discord now: https://discord.gg/8dERgVDp (valid for circa 6.5 days). He's making quantized versions of bigger models, which run better on small GPUs: some quality loss, but still better than smaller models. News from Matt Wolfe, which I haven't watched yet myself: https://youtu.be/eXttLLdlzaI
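To get a feel for why these quants matter, some rough arithmetic (my own estimate, assuming 2 bytes per weight at fp16 and about 5 bits per weight for a q5 GGML quant): a 13B model is roughly 13e9 x 2 = ~26 GB in fp16, but only about 13e9 x 5/8 = ~8 GB quantized. That's why these versions fit onto 24GB cards or even plain system RAM, with quality losses that are usually smaller than stepping down to a 7B model.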
>>22872 Thanks Noidodev!
>GPT 3 Takes Control: A new level of autonomy for Vector the robot (beyond chat) https://www.youtube.com/watch?v=kpQUH2FTwak https://www.reddit.com/r/AnkiVector/comments/11abbv3/using_gpt3_to_control_vector/ >VectorGPT is a hybrid Chipper server and SDK client. >Audio input is received via the Chipper server and runs through a speech-to-text library. Once the speech has been transcribed, the server responds with a cloud intent, and as soon as Vector has received that response, the server assumes behaviour control of Vector via the SDK. >The transcribed text is then sent to the OpenAI API (while a "thinking" animation is being played on Vector), and the completion contains instructions on how Vector should respond. >There's some to-ing and throw-ing that potentially occurs between GPT and the robot at this point that's too complex to describe here, but eventually GPT will relinquish control, at which point the SDK client will trigger a knowledge graph question, which then causes a new connection back via Chipper and the whole process begins again. >Except now, we have a session on the go and the last request and response is appended to the prompt sent to OpenAI so the conversation can continue and flow naturally. >That's the (very) basic principle that makes VectorGPT possible, but as ever the devil is in the details. >It has been at least 8 months since I got this basic setup working, but developing that into what you see in my videos has taken all that time since, in my spare time.
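For anyone curious what that loop looks like in practice, here's a rough Python sketch of the idea. To be clear, this is not the actual VectorGPT code: the helper functions are placeholder stubs standing in for the Chipper/SDK plumbing described above, and the OpenAI call uses the old pre-1.0 Completion API from around that time.

import openai  # assumes the pre-1.0 openai package with Completion.create

# Placeholder stand-ins for the Chipper/SDK plumbing -- not real APIs.
def transcribe_audio(audio): return audio          # pretend speech-to-text
def assume_control(): print("[taking behavior control]")
def release_control(): print("[releasing behavior control]")
def perform(reply): print("Vector:", reply)        # real code would animate + speak
def rearm_wake_word(): print("[listening again]")

history = ""

def handle_utterance(audio):
    """One turn of the loop: hear -> think (GPT-3) -> act -> listen again."""
    global history
    text = transcribe_audio(audio)
    assume_control()
    prompt = history + f"\nUser: {text}\nVector:"
    resp = openai.Completion.create(
        engine="text-davinci-003",   # assumption; any GPT-3 completion model
        prompt=prompt,
        max_tokens=150,
        stop=["\nUser:"],
    )
    reply = resp.choices[0].text.strip()
    perform(reply)
    history += f"\nUser: {text}\nVector: {reply}"  # carry the last exchange forward
    release_control()
    rearm_wake_word()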
Open file (48.63 KB 768x317 Meta-READ-finetuning.png)
>Revolutionizing AI Efficiency: Meta AI’s New Approach, READ, Cuts Memory Consumption by 56% and GPU Use by 84%: https://www.marktechpost.com/2023/06/01/revolutionizing-ai-efficiency-meta-ais-new-approach-read-cuts-memory-consumption-by-56-and-gpu-use-by-84/ >QLoRA (different method) >Researchers have unveiled QLoRA, a novel and highly efficient 4-bit method for fine-tuning AI models that is able to run on single professional and consumer GPUs. This dramatic increase in efficiency opens up new pathways for AI development at low cost. https://www.reddit.com/r/ChatGPT/comments/13r26k7/groundbreaking_qlora_method_enables_finetuning_an/
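For anyone wanting to try the QLoRA side of this at home, the recipe as it's typically wired together with the Hugging Face stack looks roughly like the sketch below (my own minimal example, assuming reasonably recent transformers/peft/bitsandbytes; the model name and LoRA hyperparameters are only illustrative):

# Minimal QLoRA-style setup: load the base model in 4-bit, then train small LoRA adapters on top.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "huggyllama/llama-7b"  # illustrative; any causal LM on the Hub

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters train; the 4-bit base stays frozen
# ...from here a normal transformers Trainer / SFT loop fine-tunes just the adapters.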
>>22888 >digits This sounds exciting Noidodev. It's almost like they are trying to converge around the low-end model we've been suggesting for years now. I hope they manage it!
>>21151 >>21153 This is the worst interpretation. We are here building actual robots, he is just some celebrity who hires people to do it and then takes the credit. They also take what's open source and slap their name on it to say they invented it. >>21156 >>22114 I'm disappointed. You are an influential person, you're part of leading a historically important robo movement. The people in the talk you linked to are two-faced, they laugh at their followers when they're not on the air. Anything the guy says about Microsoft is just drama to increase sales and attention, like how rappers create fake feuds to sell more albums. The problem is people fall for it, and you're an important enough person that you shouldn't fall for it too.
>>22933 Haha, ok chastisement received Anon. Carlson has certainly served as the voice against the general Globohomo movement, and this was his last important interview while still in the position he held at Murdoch's network (though none of us realized it at the time ofc). And AFAICT Musk is pretty much unique. He certainly needs little help from me. I'm aware of some of his well-known foibles. It's clear that Tesla is bound to be the very first major corporation to create non-military humanoid robots. Of course this is a big deal. He's also clearly a funposter at heart, regardless of the post's legitimacy, and I think that's a great sign that he'll actually come around one day and 'give away the farm' in the end. I admire the former (in one of the world's richest men especially), and I'm very hopeful for the latter. Regardless, even if I'm misguided after a fashion, it would be more misguided for /robowaifu/, et al, to ignore what he's saying/doing insofar as it relates to our direct concerns. The simple fact is, it was a very important discussion for /robowaifu/'s interests. On a personal level, I much more highly-honor the likes of Henry Ford, Nikola Tesla (also an 'opensauce' kind of guy BTW :^), Orville & Wilbur Wright, Daniel Boone, Lewis & Clark, Ernest Shackleton & Co., Alan Shepard, Neil Armstrong and also Buzz Aldrin too lol, Leo da Vinci, Galileo Galilei, the great 'Fathers of Faith' and the many sung-and-unsung (yet) disciples of Jesus Christ, and 9'001 other pioneers & frontiersmen. You can bet your bottom dollar that every one of these great men also had plenty of issues as well. >tl;dr Thanks for the compliment Anon, and I apologize if I disturbed you. But TBH it's all of us here that will rise together. We're all gonna make it! Cheers. :^) >=== -patch misspelling
Edited last time by Chobitsu on 06/04/2023 (Sun) 05:58:53.
>>22453 >why are there no 3rd parties exploiting the backdoors to sell secrets? They have and they do. >And if the intel agencies (really just the NSA) have access to all these backdoors, why haven't there been more articles about agents abusing the backdoors to watch their ex or spouse? There have been abuses. They try to cover most of them up, and I suspect some are either eliminated or threatened and...they own the media. They control the stories. >no way to prove an open source project is compromised >You aren't helping your arguments about AI here. It takes time and lots of eyes and lots of work to find these sorts of things. And who says the code they push for upgrades "IS" the actual source? Who checks that? And it has nothing to do with AI, at all. I was going to say more but I know what ethnic group you belong to by the way you talk and it's pointless to deal with you people at all. A complete waste of time so I'll stop.
>>22472 >What really gets me is that there seems to be a huge schism in software lately, where everything is either a borderline useless smartphone app that you need to pay to use, or you need to install python and git and use the terminal for a package downloader just to install the software, Even though I know it's too much and a bit anal, the above is EXACTLY why I posted that huge mass of stuff on Rebol here, >>22028 With Rebol most everything is included to make damn near any sort of GUI-based program that does most anything you want. It makes programming tiny, GUI-based programs easy to put together. Have any of you ever looked at GUI stuff for Windows or Linux? It's a huge massive ball of complexity held together by miles of typing to do anything. The only way around it these days seems to be some 100MB downloaded javashit abomination. It's awful. I know I get off track a lot, but there's so much one-sentence stuff that amounts to nothing but 4chan-type babbling, and most things require more than one sentence to complete a thought. Most things, especially the things we're talking about, become complicated and inter-meshed very quickly, and I like to speak in complete thoughts, within the context, so I'm understood. To do so gets a bit wordy at times. There's a GPT model that was trained on 4chan to be politically incorrect. They don't link it directly anymore but...it has hashes for it so you could likely find a torrent for it. Info here, https://huggingface.co/ykilcher/gpt-4chan I've got a copy of it, (I think, it's in the folder I have for this), but haven't done anything with it. It's big. 22.55GB
>>22616 Nice. Thanks.
Open file (94.98 KB 1024x978 robot Elysium.jpg)
>>22776 >It's extremely hard to beat human labor in price I bet you could make an Elysium-type robot, after the first 3 or 4 are made, for next to nothing. Less than $3,000. All the parts could be made out of plant waste glued together with thermoset glue (resorcinol (1,3-dihydroxybenzene) and wood waste, like straw). The stuff they make outdoor plywood with. The conductors could be aluminium, strung together with cheap MOSFETs. If processor power doubles maybe twice more, I expect the bot's labor would be cheaper than just feeding humans enough to keep them barely alive.
>>22340 >gpt4all Uhhh, I know I should lurk moar and everything, but this is actually REALLY nice. They're project has been progressing nicely: >https://gpt4all.io/index.html It looks like it's at the level where you don't even have to know all this complicated AI or programming shit, you don't need to have an online server, shit you don't have to be online at all, you don't need to have some account where you give someone your phone number, you just have to feed it a bunch of prompts in the correct order. It even has access to some of the uncensored dbs.
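If you'd rather script it than click through the GUI, the same local models are also exposed through a small Python package. A rough sketch, assuming the gpt4all bindings from pip behave as their docs describe (the model filename is just one of the ones the app offers, not a recommendation):

# Sketch: local, offline generation through the gpt4all Python bindings (pip install gpt4all).
from gpt4all import GPT4All

# Downloads the model on first use, or point it at a .bin you already grabbed with the app.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")

reply = model.generate("Explain what a servo does, in two sentences.")
print(reply)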
>>22990 Their, sorry, I'm tired.
>>22982 >replying to a month-old post when the conversation has moved on touch grass >They try to cover most of them up and I suspect some are either eliminated or threatened spoken like someone who's watched too many spy movies. >who says the code they push for upgrades "IS" the actual source. Who checks that? you're a Windows user, aren't you? Linux distros compile most software from source and some distros are pushing for https://reproducible-builds.org/ >I know what ethnic group you belong to by the way you talk and it's pointless to deal with you people at all. hmm, the fascist equivalent of "you're literally Hitler". very insulting.
Open file (41.73 KB 720x720 Superpower.jpg)
>>22993 Hardly anyone compiles all their software. Linux or not. Any of it. And if you do compile it, do you go through every line and check it out? No, no one does that. Look at how you are all over the map trying to make me seem some foolish person who knows nothing by linking a bunch of unrelated nonsense that has nothing to do with anything I said. A tactic. Seen over and over. "It's all so tiresome". HAHHA the more you write the more confident I become that you are of the same bunch that did 9-11. Let's see, conspiracy theories are foolish. Yet building 7, not hit by a plane, and only on fire in three or four floors, fell the same speed as if ONLY AIR held it up for over 100 feet. But no, can't be a conspiracy because the news, run by the same people, told me so. I think the sort of gas-lighting you are doing by pretending there are no backdoors is backfiring. It just points you out as one of the gas-lighters. There's a lot of this these days where the lies are piled on lies, on lies and the "officials" have to continue them and so out themselves due to the ever more elaborate lies they have to pile on top of each other.
I'm not going to read this entire thread but let's keep this place pol free please. I don't know whether the mossad did 911 but some say pearl harbor was an inside job as well. I wouldn't want to exclude jewish, indian, brazilian,etc... talent because some people decided to be toxic. If you need to vent there's 4chan's pol or your friendly local therapist.
>>22998 well misogyny is alright and all, but I meant pol tier stuff you know racism, antisemitism and all that jazz.
>>22994 its not a lie if it was real in your imagination
>>22999 well I think we're on the same page. I'm ready to contribute to this thing. I'm not going to promote gayness or black supremacists either. But shaking off the trolls is a net positive in my book. I'm tired of hearing the whining over and over.
>>23009 Okay, maybe I'm being too thin-skinned, but why does this stuff have to follow me everywhere, even to a completely unrelated site to 4chan, on a completely unrelated board, on a completely unrelated topic...
>>23011 because pol is mainstream now, or to be more precise the mainstream is dead, get over it, just today the potato house went on some belligerent tirade about roger waters for wearing a nazi uniform at his concert in germany, political correctness is a joke now its been overplayed to the point that no one cares anymore just like the woke trash especially after the idiocy of goyvid woke up even the most comatose of people to just how controlled and propagandized everything is stay safe and effective
>>23015 yeah okay everyone's entitled to their opinion but this thread is about ai news anyways. I just want to keep /pol/itics out of this.
>>22990 The way you can feed a local set of documents makes me wonder if you can affect personality by instead changing the documents in the local folder...
>>22994 >Hardly anyone compiles all their software. people unironically use gentoo as their main OS >And if you do compile it, do you go through every line and check it out? No, no one does that. are you saying an audit doesn't count if it's not formally logged? >A tactic. Seen over and over. it's called "picking the low-hanging fruit". >the same bunch that did 9-11. Are you talking about IS, who flew the planes, or the CIA, who knew it was going to happen and did nothing? I want to make sure I'm properly offended. >Yet building 7, not hit by a plane, and only on fire in three or four floors, fell the same speed as if ONLY AIR held it up for over 100 feet. But no, can't be a conspiracy because the news, run by the same people, told me so. how many people would have to be involved in order to destroy building 7 in a "controlled detonation"? let me guess, you figure it'd take less than 10 people, and they'd been sneaking explosives into the building for days ahead of time, right? this is why people like you are called conspiritards, you stop thinking as soon as you decide there was a conspiracy. >pretending there are no backdoors don't gaslight me, I didn't say that: >>22453 >if everything's backdoored, why... the amount of manpower required to backdoor everything ensures that if it were to happen, it definitely wouldn't be secret. and to be frank, the congress critters publicly attempting to backdoor encryption tells me the feds don't have as much access as they want. >There's a lot of this these days where the lies are piled on lies, on lies are you aware there are groups promoting that view, that our governments are always lying, to push nihilism and reduce civilian participation in government? the word "civilian" can be taken as a euphemism for "people worth less than $1B" here
In response to some >>>/lebbit/ -tier, Filthy Commie, pro-Globohomo posting going on ITT, I'm adjusting the thread's subject. Since much so-called """news""" often bears directly on the same topics, this serial thread will now be considered our 'official' /pol/ funposting containment thread here on /robowaifu/ going forward. 'We report, you decide' kind of thing. >tl;dr Please keep /pol/ funposting ITT anons. :^) >=== -prose edit
Edited last time by Chobitsu on 06/10/2023 (Sat) 15:07:08.
>>22994 My apologies for attempting to constrain your conversation over to /meta, Grommet. Since others here have decided to flaunt that request (even as you honored it, seemingly), in fairness to you this is now the correct thread for such. >tl;dr Please proceed apace Anon! :^) >=== -prose edit
Edited last time by Chobitsu on 06/10/2023 (Sat) 15:18:54.
>>23021 >the amount of manpower required to backdoor all you need is for the maintainers to be subpar programers ( pronounced python ) there was the famous backdoor in the linux kernel where the privilege check for wait4 if ((options == (__WCLONE |__WALL)) && (current->uid == 0)) was changed by (((someone))) to if ((options == (__WCLONE |__WALL)) && (current->uid = 0)) this makes anyone that calls wait4 root they dont even know how this was done because there was no actual record showing the sourcecode was modified like there should have been and if they didnt use checksums on the sourcecode it wouldve gone unnoticed for a long time maybe this is why github is so popular for no reason, he who controls the spice >=== -minor codeblocks patch
Edited last time by Chobitsu on 06/10/2023 (Sat) 16:22:10.
>>23041 My apologies. It's a damned if you do, damned if you don't. These sort bring up gas-lighting lies and if you just let it go, they never stop. So I try to hit a few high points. Believe me I could bury them in facts to make them look like idiots but I try to keep the verbiage down. All the time I'm writing the little I do, I'm cringing because...I know, it's off topic. But...I've heard so much of this all my life I'm fed up with it and answer back.
Open file (598.58 KB 2028x1576 bible.png)
>>23067 >My apologies. Lol, none is necessary Grommet. And even if it were, you yourself are an honored part of our little band of frontiersmen here. Your input is always welcomed! :^) Also, we've always confronted these types of things before the board was even formed (cf. >>14500), and /pol/ -thinking has long been a part of the fibre of the /robowaifu/ ethos. This characteristic is literally unavoidable if you seriously dare to challenge the Globohomo's strangleholds on society at large; especially their favorite pet: feminism. It's the primary 'gateway drug' at their disposal, and effectively leads to all the other vibrant '-isms' we all now confront destroying White civilization today. Even once we all see the downfall of feminism via the Open Robowaifu Age, you can bet they will still press forward with their evil (like, literally Satanic evil) plots to destroy humanity. IMHO, what we here are attempting is more like long-term palliative care for the world, as it were. But be encouraged Anons -- evildoers won't stand in power forever. Their day is coming, and the Lord Himself will have the last laugh!! :^) >=== -prose edit
Edited last time by Chobitsu on 06/10/2023 (Sat) 17:21:38.
>>23068 Couldn't give less of a shit about white civilization. I was just trying to make the argument to keep this thing apolitical. I don't have time to walk on eggshells around white people's feelings. bye.
>>23069 >Couldn't give less of shit about white civilization. You certainly couldn't even rationally consider devising robowaifus much beyond what our hero Pygmalion managed without it heh. Besides, we wouldn't even be having this conversation without the Internet! :^) >I don't have time to walk around egg shells around white people's feelings Lol. No one suggests that you do friend. Just say what you think; everyone else here present will do the same. Honing your debate skills will always help you grow as a man BTW, since it helps you understand what you actually believe. If you're a racist (according to the Current Year, Filthy Commie spins on the basic idea itself) and hate on Whitey, then just say so clearly. No need to don your 'eggshell' Yeezys first -- we're all grownups. As I've stated here many times throughout the years, "We're here to help men (males specifically)." This is the 'big fish' that together we all need to be frying... by crafting opensauce robowaifus (giving away all the first edition designs & software freely). Whether these men are real- or imagined-enemies is strictly a secondary consideration for us. >tl;dr We all rise together here. >bye. See you next week, Anon. :^) >=== -prose edit
Edited last time by Chobitsu on 06/10/2023 (Sat) 21:46:53.
>What does anon think 'Uncle' Ted Kaczynski would have thought about robowaifus? >Was he right? Obviously a great time to ask this kind of existential question here, /robowaifu/.
>>23077 I hate all luddites, including Ted. I'm the opposite, as in, an AI extremist. I believe in accelerating progress with no brakes so we can get an ASI god as soon as possible.
>>23078 Good news btw, he died yesterday
>>23077 >'Uncle' Ted Kaczynski would have thought about robowaifus? >Was he right? Just today I had to get gas at a specific place because...well, forced to due to time constraints, the tank's empty and I need it full now. So I start pumping and this screen lights up. The pump starts telling me the local weather. Like I really need this info standing out in THE ACTUAL weather pumping gas. This is some fool's method of trying to make me grateful for their "help". (Read "Influence: The Psychology of Persuasion" by Robert B. Cialdini, and you will see this sort of thing is everywhere. And really annoying when you know what they are doing.) Now mind you this is at near earsplitting volume and with lots of flashing lights on the screen, (programming????), then it proceeds to blare advertisements at me. Telling me all the wonderful things it can do for me, as I try to muffle the sound by holding my hands over the AC/DC-concert-level volume speakers. And this was after I had heard about Uncle Ted's death. As I stood there being deafened by this glibbering gas pump, I thought to myself, maybe he had something there. He certainly wasn't all wrong. Math joke. There are people who study infinities in math. Making a good living at it. There's also those like Uncle Ted whose specialty, (I believe this is a true fact), is in studying limits. Ted's a lesson to you kids. See that you do not get on the "wrong side of the number line". bu-bump, crash
>>23077 Yes, social media is a key reason for inceldom, but I hope that robogirls will help solve this.
>>23077 Well, I plan to make a meme with him and some early robowaifu in the wilderness where he smiles. Obviously his thinking was flawed at a certain point, but what I've read was quite interesting. Anyways, we can't get off the train at this point, it's too fast already. And I don't want to. Liberation from humans (society) is what matters, not liberation from technology.
>>23020 I've started playing with this a bit more this weekend. This is a very low-to-no-code approach that I'm hoping to shape so beginners could start with something here, so I hope it works out: -I haven't been able to figure out how the prompt text works (see the template sketch below). I know I can put in keywords by adding in "### {keyword}" as lines, but only to a certain extent, it seems. I'm having trouble finding the docs about how the prompt file really works. -The local files feature does seem promising. If she "screws up" I can go in and "snip the memory." I'm hoping that this allows me to edit her personality, or allow a generatable/customizable personality by simply swapping in and out local memory files. I'm not sure when it decides to use the local docs and when it doesn't, though. -I'm trying to get her to emote during responses as opposed to simply talking. My hope is that eventually, I can pipe her output to an actual android body later, using the emote/non-chat parts as nonverbal interactions for the android body, and the verbal for actual speech somewhere down the line.
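On the prompt-file question, a guess that may help (this is the widely documented Alpaca-style convention most of the bundled instruct models were tuned on, not something I've confirmed against GPT4All's own docs): the "###" lines mark the sections of each turn, so the template usually looks roughly like the sketch below, with the persona text in the instruction block and the user's latest message substituted in every turn. The exact section names vary per model (some use ### Instruction / ### Response, others ### Human / ### Assistant), and the lines in curly braces are just placeholders, not real syntax.

### Instruction:
You are a cheerful robowaifu named Emmy. Stay in character and keep your replies short.
### Prompt:
{the user's latest message goes here}
### Response:
{the model continues from here}

Swapping out the persona paragraph (or pointing the local-docs folder at different material) should be enough to shift the personality without touching anything else.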
are we still in the race or is it over? https://youtube.com/shorts/SgE6D--y_3o?feature=share
>>23142 I've been seeing these "Japan/China comes out with humanoid robot!!!11!" stories for years now. So, when do we actually get one?
>>23087 I might look around on Reddit about it, if you don't want to go there. I recall others being interested in that for building a cognitive architecture. >>23142 >EX Robotics Please mention the topic instead of just posting links. Thanks. Of course we've known about them for a long time. Even if this were what we want, it still wouldn't be over. Same as with Windows vs Linux... Price? Availability? Spyware? Softness? Configurability? Sex-enabled? Noise level? ...
Some mixed news: Humanoid robots overview on ICRA 2023: https://youtu.be/tdUwWOZPn1M Hannah Torso: https://youtu.be/xxwqdAsp8E8 Newest AMD GPU tech in micro PC: https://youtu.be/_CPp4NBOUvI and the smallest Ryzen https://youtu.be/2nI25tk-NMM Fastest Edge AI SBC (Jetson Orin): https://youtu.be/nmZ6fhkFmDY Full field of view for VR, hopefully for many different companies: https://youtu.be/y054OEP3qck PrintABlok is actually open, not patented: https://youtu.be/4SyOXYJGad8 Orca LLM: https://youtu.be/Dt_UNg7Mchg VivaAI trailer: https://youtu.be/jiBD--IYOvM
Interesting : https://twitter.com/TheInsiderPaper/status/1668701017654063110 > Chief scientist for Facebook-owner Meta on Tuesday said that generative AI, the technology behind ChatGPT, was already at a dead end, instead promising new artificial intelligence resembling human rationality > human rationality this one part is not good ...
>>23157 >Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture https://arxiv.org/abs/2301.08243 This was the related research announcement.
>>23157 Why? >was already at a dead end, instead promising new artificial intelligence resembling human rationality We already knew that. Overall Zuck (and his mentor Yann LeCun) is pretty useful. We don't need to agree with all of it, but the important part is that they see no reason to stop handing out LLMs and other AI-related software. The whole argument that bigger generative AI won't get us further was already funny quite a while ago, because that's what the doomers are focused on, and so are the regulations they were talking about. But maybe it's too late. Maybe the tools are already available, or some parts need to be rather traditional software, which has to be written but then doesn't require these huge server farms.
>>23157 >Chief scientist for Facebook-owner Meta on Tuesday said that generative AI, the technology behind ChatGPT, was already at a dead end I'm not buying this. Now I could be wrong, but looking at the link here for GPT4All it says, "...GPT4All software is optimized to run inference of 7-13 billion parameters...". Now I'm reading this as 13 billion neurons if you want to equate it to humans (I'm guessing on this. Each parameter I think is like a neuron or set of neurons that process the data stream). They brag about their "tokens" but if I understand right, that's just the number of words. To actually do something with them they will have to crank up the "parameters" massively. And they are training these things in a few weeks. I mean, how long does it take a human to get a clue with all the data it brings in? It's going to take time and maybe a couple more iterations of computing power upgrades before we start seeing some very human-like behavior. Already it's astounding what they can do with the pitiful computing power, even the big ones, compared to a human. The basic idea of entering data and then letting it make connections is sound but it's not magic. Someone has to make some sort of effort to link the "right things". This will take time. They're doing this already but of course they are fucking it all up feeding it woke stuff. I think this is what is driving the AI crazy where they start spewing nonsense, because after all "woke" thought does not equate to reality. I'm going to download GPT4All and in a few months fool around with it. What I really want is for it to be able to draw pictures of electromagnetic fields. I know there's text-to-picture stuff now. I want it for technical reasons. Also maybe to analyze pictures I draw and tell me the forces. I bet this can be done but maybe not easily. It would be interesting if I could get it to recognize engineering programs, like for boats and wind resistance and other stuff, and feed it questions, have it send the data to these specialized programs, then display the data. I think AI has a long, long, long way to go even with the present software stack. I'm old enough to have seen people say over and over and over that "computers will never do this, or that or the other", but every single time they acquire the computing power to do so they whip the shit out of humans. They always compare "present" computational ability to ALL future actions and they always fail. Even the 'Moore's law has failed' people are wrong, because a bunch of things that need doing can be parallelized. Like neurons and all these engineering calculations I speak of. They can just keep stacking up the processors.
I picked a model to download: ggml-nous-gpt4-vicuna-13b.bin, LLaMA 13B fine-tuned on ~180,000 instructions by Nous Research. Cannot be used commercially. I picked it because on their site it says, "...Our AI pipelines are able to attach to and run programs, fetch and analyze client documentation, and generate synthetic data for production use..." Whether it actually does this or not... I wonder if it is astute enough to tell me how to get it to interact with other programs???? I also downloaded ggml-v3-13b-hermes-q5_1.bin which has a lot of physics, math etc. training. I have no idea what to do with this. But we'll see.
>>23174 I can sell my gpu instances so you can run or train them. it cost me $25 but I can give you half off for say $15. I accept crypto. I'm personally going with waifu AI so I'm not interested in the software right now as much as the physical robot.
Open file (46.88 KB 881x483 Screenshot_1.png)
>>23174 It's all much easier with koboldcpp.exe: https://github.com/LostRuins/koboldcpp/releases Just download your preferred ggml model and drop it into a folder; koboldcpp.exe should be in that folder too. Optional: install SillyTavern for character cards (same thing as in Koikatsu and such games): https://github.com/SillyTavern/SillyTavern Most of those ggml models aren't perfect, but I highly recommend checking these: https://huggingface.co/TheBloke/airoboros-13b-gpt4-GGML https://huggingface.co/TheBloke/chronos-13B-GGML And then finally run it inside that folder through CMD: koboldcpp.exe --launch --threads 8 --useclblast 0 0 --unbantokens --gpulayers 27 --highpriority The --gpulayers 27 value is for 8GB VRAM; set it to 32 or higher if you have more VRAM. Any 7B ggml quantized model can also be offloaded to the GPU's VRAM. --highpriority may be removed if you encounter freezes while the model is ingesting your prompt.
>>23173 >Now I'm reading this as 13 billion neurons And this is wrong. You can't just compare it that way to a human brain.
>>23182 Agreed. A single standard neurone (even without all its Schwann cells) is complicated as fuck. All those organelles and membranes and transmembrane proteins, enzymes, antibodies and DNA in the nucleus. Mol bio cannot map to a microchip running a computer program; the level of complexity is many orders of magnitude higher in biological systems. (I mean, there is shit happening at the quantum level just inside the chloroplasts of photosynthesizing plant cells that we still don't fully understand).
>>23187 you never posted the technical aspects of that robot head... it'd save some man-hours if we could get a sneak peek into the behind-the-scenes magic that went into it.
>>23190 Oh no guess I was wrong. You did an outstanding job really. You got a fan right here.
>>23192 may i ask a favor though? mediafire is asking for a premium account to download multiple files... Would it be possible for you to reupload them elsewhere or is that part of the deal... Well there's also downloading them one by one though...
>>23187 >(I mean, there is shit happening at the quantum level just inside the chloroplasts of photosynthesizing plant cells that we still don't fully understand). There are ginormous amounts of things we don't understand about biochemistry. The one I find the most intriguing is information itself. It always comes from a mind in our common experience. But that's literally unfathomable; completely outside the ken of human reasoning. However, the genetic code itself is something we can at least lay some metrics down on, and it's astounding how many standard deviations it is off the norm (est. ~1 million). It's a truly unique (and absolutely vital) cornerstone for life to even exist at all. So, I'm glad you're still with us Anon. Hope you're doing well. Cheers. :^)
>>23194 you wouldn't have to simulate the cells in their minute detail, I think. You'd have to simulate the synapses. But even that would take a metric ton of computational power, and who knows what it'd entail...
>>23173 >not buying this It's clearly yet another shell game by one of the Globohomo's best sons. >>23176 Thanks for the suggestions 01 ! :^)
>>23195 Theoretically, a research group announced ~2 yrs ago they had fully 'simulated' (lol, sure you did) a complete E. coli bacterium. But we're off-topic ITT I'd say. Over to /meta please, unless everyone wanted to turn this into a /pol/-tier funfest of sorts? :^)
>>23197 Its on topic to be fair, this is the ai thread..
>>23198 I understand your point. But honestly your discussion is better in our Cyborg thread, I'd say. My response to SophieDev is clearly not about AI at all, but rather about unlocking some of the mysteries of life itself, more broadly than just synaptic morphology of brain tissue. So fair enough, Anon.
>>23148 I'd rather not go to Reddit... To what psychological extent can social interaction be substituted with currently existing chatbot tech?
Open file (515.54 KB 498x380 bulma-finger.gif)
>>23202 Sounds like someone is a little jealous of sophie dev
>>23205 Lol. Howso? I'm totally proud of SophieDev and I'm pretty sure that I was the first one here to stand up to support and applaud him in his work. He's a talented and motivated anon, Anon. :^)
>>23040 >pro-Globohomo posting to be clear, this train wreck got started when I criticized Grommet for bringing up a certain billionaire I personally believe anyone worth more than ~$500 million should be executed for making society worse than it needs to be the whole "billionaires work 1000x harder than anyone else" crap is propaganda that they've been feeding us for centuries >>23061 >all you need is for the maintainers to be subpar programers ( pronounced python ) that doesn't refute my statement unless you believe all software is written by subpar programmers >famous so famous there's no CVE? >backdoor in the linux kernel where the privilege check for wait4 read the manpage, all a failed privilege check in wait4 would do is let you get info about a child process owned by root :https://man.archlinux.org/man/core/man-pages/wait4.2.en >they dont even know how this was done because there was no actual record showing the sourcecode was modified like there should have been >maybe this is why github is so popular now I know you're making shit up. the lead developer of Linux, Linus Torvalds, wrote git because in 2005 BitKeeper revoked the license they had granted the Linux developers. >>23067 >if you just let it go, they never stop the gap between these posts proves that's a lie: >>22453 >>22993 >bury them in facts you don't even cite your "facts" I'm gonna make this my last contribution to this mess. feel free to get the last word if you must.
Open file (7.97 KB 224x225 download (83).jpeg)
>>23208 A billionaire is someone who sold a hundred thousand units of something for $10,000-$20,000
>>23208 CVE-2003-1161 current->uid = 0 for the mentally disabled sets the callers uid to 0 which is the uid for root instead of == which is just a check for 0 read the man page for shutdown and dont use a computer again without adult supervision idiot
>>23208 >this train wreck Kek. Did you throw the lighter friend? Tell the truth now. :^) >conflating git, the tool, with github, the M$ """platform""" shiggy
>>23208 Kinda agree with you on that certain billionaire you're talking about. Most billionaires suck tbh. But I'm not as extreme with my views on all rich people. I wanna be a billionaire myself tbh.
>>23213 https://lkml.indiana.edu/hypermail/linux/kernel/0311.0/0621.html >file that only CVS users would access mysteriously gets modified on bitkeeper's servers hmm
https://medium.com/@bjax_/a-tale-of-unwanted-disruption-my-week-without-amazon-df1074e3818b Lol. Don't think you'll be able to play both sides of the fence, and then expect a positive outcome frens. :^) Opensauce Robowaifus Now! >=== -edit hotlink; closer source
Edited last time by Chobitsu on 06/18/2023 (Sun) 15:48:10.
Open file (172.14 KB 1506x1284 EGMcMAQU0AAhz9L.jpg)
I can't write AI art prompts for squat. Is there a decent image generator where I can just use plain English?
>>23265 There are a couple of prompt tools out there Anon. Search 'Clip', for example.
>>22186 >But, the IMF doesn't have a good track record when it comes to economic forecasts or predicting recessions. They seem to have gotten it right this time, AFAICT Anon. The SPIEF [1] just ended, with the outcome being >pic related * >tl;dr Seems that so-called 'de-dollarization' is well on-track around the globe at the moment. 1. https://forumspb.com/en/ * As of last year. You can bet it will be much bigger this year (by approximately 3x). >=== -add 2022 note
Edited last time by Chobitsu on 06/18/2023 (Sun) 04:45:59.
>>23269 not sure de-dollarization is quite possible as much as I'd like it. The power of the dollar is backed by the might of the US military. Aside from that there are other advantages of the dollar. It is generally an independent currency free from manipulation. Sure, the feds have been printing gorillions, but despite that its still a free currency compared to say, the Yuan, which is heavily manipulated behind the scenes by the non-transparent CCP. The dollar also has unparalleled brand value and acceptance unlike most other countries. The most likely candidate for a dollar alternative is the Euro, which is controlled by member states of the EU, US allies. So, I'm not holding my breath. Maybe, the EU will realise how irresponsible the US fed is and how damaging it is to the global economy and start to decouple itself from the dollar.
>>23182 >And this is wrong Ok, I get that a neuron is more complicated. I did NOT mean to say that a "parameter" is "directly" equivalent to a neuron (this is exactly why I'm so wordy at times). My exact meaning is that this is a nodal computational element similar to a neuron. Therefore, even if simplified, it can be compared. Example: one of those little battery-powered cars children ride around in is not the same as a full-sized adult automobile, but both can be called cars and can be compared. Note I said the power of computation needs to be upped a good deal before we get to human level. However, we should pay attention to the fact that a neuron is very slow compared to silicon, and that by fast iteration smaller computational units can do more work faster than a neuron. So exact numerical equivalence is not needed. But to make it specific, I did say there are not enough. I question: am I not right that a "parameter" is 'used', never mind if it's not the same, as a sort of pseudo-neuron?
>>23270 shartinmart doesnt even export anything of value, no country has any need for your dollars its only kept for oil because opec memberrs must sell in dollars, anyone that wants oil from eg. saudi has to pay in dollars, hence why 'sadam had wmds' and 'gadafi needed democracy' its nice to see that even the saudis now, just like everyone else, cant deal with the tranny empire anymore and openly talk about ditching the dollar for the yuan ( first announced after some princess said biden farted on her ) and unlike the shart the yuan is a currency that can actually be used for imports given that china is the worlds largest exporter of shit people actually need, not just sharts and propaganda and the 'might' of the shartinmart military is seen as a worldwide joke replete with failure like the afghan debacle or the tranny corp, no one fears an army of homosexuals lmao especially when they cant even win a war against goat herders or rice farmers and get their spy toys hacked by kids in iran and need the pesident[sic] to beg to return them because they only made one and decided to send it illegally into iranian airspace just as a joke loool, why do you think even the saudis are reconsidering their protection pact and the europeans are fed up as well, everyone knows who blew up the pipeline, if the whole world cut off ties with shartinmart no one outside of shartinmart would be able to tell the difference
Open file (625.65 KB 627x650 doggo_face_when.png)
>>23270 >It is generally an independent currency free from manipulation. I don't even... did I just read what I thought I did?
>>23274 I meant it isn't as blatantly manipulated as say, the Yuan or the rupee. I already said whats wrong with the Yuan, the Indian government has done no shortage of fuckery with the rupee
>>23272 This whole post reads like a parody shitpost of a /pol/ shitpost
>>23272 >if the whole world cut off ties with shartinmart no one outside of shartinmart would be able to tell the difference My instincts tell me that's not quite right, but certainly I'd say that the """International Community""" (by which the GH is >implying 'teh whole world, guys' -- but which really means just 9 Eyes) is the only bunch toeing the line of 'Who we are in a Rules-Based Order'. And thank God that's the case of course. Everyone is sick of their sh*te, particularly this demonic sexual mutilation of children agenda.
>>23275 are you high they have a press conference every month telling you exactly how they are going to manipulate it you clown, thats the whole point of a central bank thats literally what monetary policy is about, the sharts tried every trick in the book ( except yield curve control, coming soon ) since 2008 to prevent the inevitable and keep the government and other zombies solvent at any cost which all boils down to illicit debt monetization, shartinmart is literally becoming japan due to all the chicanery only difference is the sharts have the unique ability to offshore their worthless paper by just importing more hence pushing that nasty surplus outside of their own economy so it has no perceivable effect especially since that trash cant come back since they dont export anything, they also get free stuff in the process although thats not the purpose, why do you think shartinmart has a suicidal current account deficit, they are literally a time bomb and everyone knows it they just dont want to be gadfied given that its obvious to both sides that the sharts will die on their dollar
Open file (131.13 KB 480x640 astonished_koala_bro.jpg)
>>23270 Lol.
>>23281 thats still much better than the way China controls its currency behind closed doors, which gives it very low trust. Plus, to be the world's reserve currency the Yuan has to be traded freely, and China has to run up a gigantic trade deficit.
>>23301 Oh come on now we all know the gold backed yuan is coming eventually. They just happen to be an export economy so they're kind of hesitant, but they'll do it eventually I think.
>>23302 perhaps, but then they'll have to deal with domestic discontent, something the CCP does fear. Plus, there isn't enough gold mined throughout the history of mankind to back the entire global economy today, the same reason why we can't go back to the gold-backed dollar. And I hope they open up the yuan to be traded more freely and dictated by market forces instead of the CCP. Then it might stand a real chance. More realistically, what we might see instead of complete de-dollarization is more diversity. The dollar won't be the only currency on top; countries will store their reserves in a multitude of currencies, like the Yuan, Euro, Gold, Dollar, much more evenly than they do now. I'm sure the fact the US is holding the Taliban's, Russia's, and some other countries' dollars hostage should be enough incentive.
Open file (922.21 KB 500x280 26533629.gif)
Rev up your toasters! Researchers find out that not training on mid data like Reddit yields a better model. Who knew?! https://arxiv.org/abs/2306.11644 They report their 350M model achieves 45% on HumanEval vs. GPT-3.5's 48.1%, by only training on 6B tokens selected for educational value + 1B generated ones. They didn't release the model weights or the dataset, but it only took 4 days from scratch on 8x A100s for their 1.3B model, which achieved 50.6%. This is great news because I've been working on a similar exercise dataset (except mostly C and C++ instead of Python) to teach specific fundamental skills to 125M/350M models. It will be wild generating almost GPT-3.5-level code on a Raspberry Pi 4, albeit taking 10 minutes to complete. It would be interesting to see what a 35M model could do, like RT-1, but pretty soon it'll be effortless to deploy models onto SBCs with the Cortex A53's NPU acceleration, so this shit will run fast and in your pocket
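Rough sizing for the SBC claim (my own back-of-envelope, not from the paper): 350M parameters come to about 350e6 x 2 = ~700 MB of weights in fp16, or roughly 350e6 x 0.5 = ~175 MB at 4-bit, so a model in that class fits comfortably in a Pi 4's RAM with room to spare; the bottleneck there is compute throughput, not memory.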
>>23319 >but pretty soon it'll be effortless to deploy models onto SBCs with the Cortex A53's NPU acceleration, so this shit will run fast and in your pocket This sounds amazing Anon! Godspeed.
>>23319 seems nice, but is it good only at answering specific questions, or can it work for a chatbot too?
>>23319 I just don't care about chatgpt because it doesn't get me any closer to the waifu getting done. It can be used toward the waifu getting done, and there are open source efforts toward making free alternatives to chatgpt, but it's not really relevant to this project. I can't stop you from hijacking this project and turning it into a chatgpt clone, but I'm going to keep trying to stay on point with the original goal. This time could be used running robot simulations on Unreal or O3DE, which would be a good use of AI, but it's going to get wasted.
>>23323 > I can't stop you from hijacking this project and turning it into chatgpt clone but I'm going to keep trying to stay on point on the original goal. Lol. He's an OG who's done much to further this entire effort. OTOH, I'd say your incessant complaining is hardly helping out here. Was that your plan, or did you have something better in mind? Just do what you want, and allow others here the same courtesy, friend. :^) >=== -fmt
Edited last time by Chobitsu on 06/21/2023 (Wed) 13:48:56.
>>23325 This project needs to at least have a goal, friend. Which is it: chatgpt clone or robowaifu? I'm not going to participate if we don't even have a goal.
>>23265 >can't write AI art prompts You probably refused to go to Reddit and look there, and right now the subreddits might be down. >>23281 Reminder: This here is a thread about news on robotics, AI and maybe other technology. NOT politics or the economy. >>23319 Hooooray, thanks. But not that surprising. A lot of people have been writing that it's now about good data for training. >>23322 A full conversational agent will need more than one model, and I think the progress generally shows what's possible with such LLMs and good data. >>23323 I partially share your concern. Our waifus don't need to be as knowledgeable as chatGPT, but better at conversations.
>>23327 Well, thank you. I can at least tell you're serious about this. Somebody asked when robot waifus are going to come out, which led me to think that people think this is a joke.
>>23326 bump a thread from the catalogue then dont pretend like this is somehow the focus now
>>23327 >but better at conversations. I presume you mean human-level ones? This is actually a much larger challenge (since AFAICT it necessarily implies an AGI). However having reasonable & effective autonomous control systems (also a yuge set of challenges), and informative responses to anon's information queries, on top of a good chatterbox system -- all from within an opensauce 'babby's first robowaifu' project -- would be a tremendous leap forward for the entire world. Think: Sumomo-chan (>>14409). >tl;dr Orville & Wilbur didn't design a B*eing 787. Start smol, grow big. >>23326 >This project needs to atleast have a goal friend. >Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality. Our byline, while humorous, isn't actually a joke. It's our stated goal here. >=== -add 'Sumomo' crosslink -prose edit
Edited last time by Chobitsu on 06/21/2023 (Wed) 21:18:18.
>>23334 It can be a team effort, but it doesn't really have to be. There have been many robotics projects that were one-man-team type efforts, and the tooling is really catching up to make that even more of a reality. Ideally it'd be a team effort though.
>>23336 You're absolutely right on both counts Anon. Just reading through /robowaifu/ alone, you'll encounter several 'one-man band' type projects. BTW, we do have a team project, MaidCom (>>15630). Also, we have a 'please help us with these team goals' thread (>>20037) as well. Why not join in Anon? Cheers. :^)
>>23337 I don't know the details of the one-man-team efforts other than SophieDev and Emmy. Well, MaidCom is a thing, but there hasn't been enough progress yet to officially call it one, really. It'll get there I'm sure though. As far as making this thing modular and working on the whole thing, no disagreements there. I'm almost done with my speech-to-text-to-waifu AI, and really the waifu AI does handle the chatbot aspect of this thing pretty well. Really there are many chatbots to choose from if that one is not the right fit, which is why I think efforts should be put elsewhere. I found out the other day that you could simulate a robot on O3DE, for example, and there might be stuff for Unreal. Really, robotics movement AI is more important than chatbots, and somebody could look into that in the meantime. Yeah, you can train your own chatbot too, but really, seeing the lack of progress from this project makes me kind of nervous. We don't have unlimited time if we want to profit off this thing; it's a race against the clock. I'd say we have 5 years at best. But if we pull it off we could make some real fuck-everyone money. And I say we because everyone who contributes would be invited to the yacht party ofc. Yeah, if we pull it off we'd get a yacht for sure. Well, maybe rent one, because the upkeep for those things is kind of annoying lol.
>>23319 bullshit there's always some catch, just like these meme benchmarks a.k.a "my finetune is 1000 times better than yours!" > with the Cortex A53's NPU acceleration, so this shit will run fast and in your pocket yes, fast, but will it be smart enough? that's the main concern :/
>>23322 It wasn't explicitly trained to do that and chat data only appeared in a bit of the pretraining data but they reported their finetuned 1.3B model is capable of chatting about code. The 350M model could only solve coding problems though. >>23323 If that's what interests you then start with the robot, dude. Getting near human-level intelligence onto edge devices is a much more interesting problem to me than amusing myself with a moving doll or chatbot that can't reliably remember what was said a day ago. If someone is excited to work on the manufacturing and mechatronics they have my support but it's not my focus right now. And I'm not working on a ChatGPT clone. I'm working on a system that can reliably utilize multimodal memory and reason about sensor data in real-time to make intelligent decisions and plans. Optimizing models is an essential step towards achieving that. Open source efforts are currently focused on 30B models but models over 35M are too big for this. That's three orders of magnitude smaller than anyone is working on. >>23339 Why do you think HumanEval is a meme benchmark? Also they took the effort to remove any training data similar to the test set. Of course it's not going to know anything about Nigerian basket weaving but who cares? That knowledge can be offloaded to disk or internet search. What matters is how well models can correctly implement complex instructions and utilize the data they're given.
It'd be nice if some people could take a look at this https://docs.o3de.org/docs/user-guide/interactivity/robotics/ or something like it. That could lead to actually useful AI. >>23349 You can work on what you want, but the mechatronic doll will always be more important than making the waifu smarter, because if there is no mechatronic doll there is no waifu at all.
>>23357 Then work on it if that's a waifu to you. Quit being a nodev and grifting to get others to do what you want.
Open file (154.36 KB 1280x800 CuteAndCute.jpg)
- Music generation is working
- More Adobe stuff
- More AI in many online services
- I-JEPA: Model after a Yann LeCun idea. Autocomplete for pictures (more below).
- AMD will do the compute for HuggingFace
- AMD also works on catching up with Nvidia
- Context windows in models are getting bigger
- APIs are getting cheaper
- Google Lens helps with hinting at medical conditions
- Flicker in AI-made videos gone or reduced
Source: https://youtu.be/4M0oYnWNTTk
8 recommended plug-ins (didn't watch): https://www.youtube.com/watch?v=EzScqirAqfU
- AI can now generate games more from scratch with just some prompts (FRVR Game Maker). The interesting thing about these games is maybe not the games themselves, but using these game elements as symbols for things in the real world. I mean, some AI assembly writing its own simplified simulation of the world and using that to test ideas. Imagine it creating a map of the house and planning where to go to achieve a list of tasks. Ideally we would not have to put every concept about how to grasp the world into it, but it could at least manipulate some patterns and over time come up with new ones. One part creates a "game", the other one tests how to solve it, and memorizes good solutions for a certain context.
- GPT Engineer seems to do what AutoGPT was meant for, but it works. Building apps and games.
- Blender can generate whole landscapes procedurally
- Creating any 3D world one can imagine (InstaVerse)
- AvatarBooth: Human avatar generator via text prompt (doesn't look good, yet)
- Longer music generation (Waveformer)
- Augmented reality with the phone
- Midjourney stats to check how busy it is
Source: https://www.youtube.com/watch?v=jpoz_uM2ZFI
- Midjourney got the same photo fill features as Photoshop, and other upgrades
- Some new version of Stable Diffusion
- Meta Voicebox, maybe the best voice generation, including different styles (not freely available)
- Dropbox and maybe soon Google lets you chat with your documents (I prefer to have that at home).
- Youtube will make dubs (translations) for all kinds of languages, so more people can watch it.
- Black Mirror makes propaganda against AI
- Real celebrities are making money with their avatars
- ChatGPT "data leak", but it isn't one: the users' computers got compromised
- More from ilumine AI: from Midjourney picture to InstaVerse isometric(?) scene
Source: https://www.youtube.com/watch?v=V0GqJYvDL_w
Better coding AI model, based on StarCoder, now allegedly as good as GPT 3.5, but runs on 40GB vRAM: https://www.youtube.com/watch?v=XjsyHrmd3Xo - not sure if this smaller one is as good as GPT 3.5 though, the bigger one still needs a bit more GPUs.
And something about uncensoring every model and a new NSFW 13B model: https://www.youtube.com/watch?v=kta1D5CFHp0 (didn't watch yet)
I-JEPA: https://youtu.be/6bJIkfi8H-E https://ai.facebook.com/blog/yann-lecun-ai-model-i-jepa/ https://github.com/facebookresearch/ijepa https://arxiv.org/abs/2301.08243
>>23349 >Getting near human-level intelligence onto edge devices The Golden Oracle in everyman's pocket! This dream is alive for myself and thousands of others, RobowaifuDev. However, you are actually working hard on it; I'm merely a sideline cheerleader. :^)
>>23392 I'm already doing my part more like you doing anything at all.
>>23515 Stop trying to "bully" us into doing what you want. I get the anger about how slow it went, but it doesn't help. If you want to make an outer shell, you can find my files for OpenSCAD in the prototyping thread(s) or start from scratch. Here >>21647 and backwards. If that Karen sketch a while ago was from you, it could be made quite easily I think. I even thought to pick it up as a secondary, simple body design if I get to it. >like you doing anything at all. No one needs to justify how much they do to you. You're making these comments, but didn't show anything yet. That said, Chobitsu runs the board, teaches C++ and motivates us.
Open file (285.84 KB 768x1024 Kibo-Chan-C51B800BD767.jpg)
>>23505 I think I made a mistake above. This MPT-30B model, which can be used commercially, is the one which needs at least 40GB of VRAM for an 8-bit version. I thought it was the WizardCoder model, but I don't have the time to check. Either way, it's good we have these big models anyway. Especially since AMD is trying hard to become more usable for this; two XTX cards currently cost around $1600 and prices are coming down fast. I expressed the hope that either Intel or AMD would bring a 48GB GPU to market, based on a slow and cheap but power-efficient version of their lower-end consumer models. GPUs for ML don't need to be fast; if they add 48GB but keep it slow, it could still be quite cheap.
- Orca now has mini-models (3B, 7B, 13B): https://youtu.be/d8sWCGTGCUw
- Hannah starts smiling (kind of): https://youtu.be/Esw5gjrFL-w
- Gorilla: Large Language Model Connected with Massive APIs https://youtu.be/8AqQBPI4CFI (old news, from 2 weeks ago)
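For anyons wondering where numbers like "40GB for 8-bit" come from: a back-of-envelope estimate is just parameter count times bytes per weight, plus runtime overhead. A minimal sketch follows; the 15% overhead factor is an assumption for illustration, and real usage also depends on context length and KV cache.

# Rough VRAM estimate: weights = params * bits/8, plus some runtime overhead.
# The overhead factor below is an assumption, not a measured value.
def estimate_vram_gib(params_billion: float, bits_per_weight: int,
                      overhead: float = 0.15) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1.0 + overhead) / (1024 ** 3)

if __name__ == "__main__":
    for bits in (16, 8, 4):
        print(f"30B model @ {bits}-bit: ~{estimate_vram_gib(30, bits):.0f} GiB")
    # 8-bit weights alone land around 28 GiB; context, KV cache and framework
    # overhead push a real deployment up towards the 40GB figure quoted above.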
>>23517 Dear Kibo-chan has a new look! Nice hair and cute little booties. :^) Thanks for the links Anon. It's clear that many groups/individuals are rejecting the cloud model and pushing towards edge computing for AI. This is excellent news for all of us ofc.
>>23505 Do you think Meta's gonna release VoiceBox as open source some time in the future? Meta seems to be the only one sticking to open-sourcing their models nowadays. Or will it have to get llama'd too?
>>23520 >Meta's gonna release VoiceBox as open source Maybe, but even if not, then someone will reverse engineer it. If the knowledge of how to do it leaks out, then that is enough; aside from that, we already know what's possible. So people are gonna try.
>>23521 >If the knowledge of how to do it leaks out, then that is enough; aside from that, we already know what's possible. So people are gonna try. I think you're right Anon. The guy who ran the first 4-minute mile is always held up as a good example of this phenomenon.
>>23521 >>23554 Btw, did Meta say whether it supports custom voice training and how long a clip it takes to train on? I think one of the best features of ElevenLabs is that you only need 1 minute of audio for your custom voice.
I assume none of you anons here are using CentOS, but it turns out that as an open platform it's effectively dead now, in case you were considering it. sfconservancy.org/blog/2023/jun/23/rhel-gpl-analysis/ www.redhat.com/en/blog/furthering-evolution-centos-stream
> How Nvidia’s CUDA Monopoly In Machine Learning Is Breaking - OpenAI Triton And PyTorch 2.0: https://www.semianalysis.com/p/nvidiaopenaitritonpytorch > AMD MI300 – Taming The Hype – AI Performance, Volume Ramp, Customers, Cost, IO, Networking, Software: https://www.semianalysis.com/p/amd-mi300-taming-the-hype-ai-performance
>>23693 > graphene materials will be the same as fibreglass today except half the weight I know how this works. (((You))) pretend I say something I didn't and try to pretend you didn't say what you did. You are wrong. You used fiberglass as a reference, a composite, then said graphene was not much better. You're wrong. It's as simple as that, and you can talk about Greeks, pencils and call me all the names you want and it doesn't change things. You also talk about graphite as if it were graphene. You're wrong about that too. That there is some graphene "possibly" in a sample of graphite does not make it graphene. The only reason I keep answering these feints and bullshit is I want people to see what you are doing. This is a very common way of discourse by (((them))), whether you are or not is irrelevant. It's likely taught to (((them))) in those after-hours schools (((they))) go to. You see it everywhere. Once you recognize this, it's very patterned. It's a technique of dishonesty. So I start by saying that graphene composites are far stronger than twice the strength of fiberglass. Your response is typical because you know I'm right. Change the subject, attack me as some fool, but I'm not a fool and I know exactly what you are doing. This is why eventually everyone throws you people out of every single place you go. Constant gas-lighting, corruption and dishonesty.
> Distributed inference via MPI https://github.com/ggerganov/llama.cpp/pull/2099 big ggml quantized models distributed on clusters soon.
>>23819 Excellent! I'll repost this to our Robowaifu@home thread. Thanks 01, cheers. :^)
Well I guess we now know why all FAGMAN Globohomo were running robotic humanoid prototype projects. No one could've predicted this! :^) www.reuters.com/technology/un-recruits-robots-strive-meet-global-development-goals-2023-07-05/
>>23852 Apparently they didn't crash the Earth plane with no survivors yet. Maybe during next year's event? news.un.org/en/story/2023/07/1138412
Open file (45.55 KB 900x600 16450779907601.jpg)
Is it even possible for China to ignore robowaifus during their upcoming drives for AI + humanoid robotics integration? I'm personally betting they can't. They have, by far, the world's most sizeable difference between men & women in their population. Literally 10s of millions of men would be left without a woman if every.single.one. of their 3DPD were matched up to a single man. This is clearly untenable, and is why they finance kidnapper operations to "steal" women around SE Asia (most of them are signing up for it w/o parents' consent b/c much better conditions) and bring them by the busload into China. Robowaifus seem to me to be the correct official response by the Chinese government to this dilemma. Please note this is quite different from why we need robowaifus in the West -- most of our women have become, effectively, brainwashed garbage regarding being a lifemate. China's women are simply too few in number, and many of them are now refusing to have children due to the Globohomo's influences on them. equalocean.com/analysis/2023070619859
Already posted in "Robowaifu@home": ggerganov merged "Distributed inference via MPI" to main today.
https://twitter.com/ggerganov/status/1678438186853203974
https://github.com/ggerganov/llama.cpp/pull/2099
As he said there:
> It would be a fun thing to try and potentially achieve world-first inference of 65B model on a cluster of Raspberries
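For anons who want a feel for what MPI-sharded inference means in practice, here is a tiny toy sketch of the pipeline idea: each rank holds a slice of the layers and passes activations to the next rank. This is not the llama.cpp code; the layer count, hidden size and toy math below are made-up placeholders, and it uses mpi4py purely for illustration.

# Pipeline-style distributed inference sketch over MPI (toy, not llama.cpp).
# Run with e.g.:  mpirun -n 4 python mpi_pipeline_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

HIDDEN = 64      # toy hidden size (assumption)
N_LAYERS = 8     # toy layer count (assumption)

# Each rank owns a contiguous slice of "layers" (here: random matrices).
layers_per_rank = N_LAYERS // size
rng = np.random.default_rng(seed=rank)
my_layers = [rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
             for _ in range(layers_per_rank)]

if rank == 0:
    x = np.ones(HIDDEN)               # fake input activations
else:
    x = np.empty(HIDDEN)
    comm.Recv(x, source=rank - 1)     # receive activations from previous rank

for w in my_layers:                   # run this rank's share of the layers
    x = np.tanh(w @ x)

if rank < size - 1:
    comm.Send(x, dest=rank + 1)       # forward activations to the next rank
else:
    print(f"rank {rank}: output norm = {np.linalg.norm(x):.3f}")

The real implementation obviously passes proper transformer states and weights, but the communication pattern (split layers across hosts, ship activations along the chain) is the same reason a Raspberry Pi cluster becomes plausible.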
Fruit fly complete brain emulator is online: codex.flywire.ai Here their Youtube channel: https://www.youtube.com/@flywireprinceton4189 Found here, through this short explainer: https://www.youtube.com/shorts/kiJAWHXNX-k
>>23890 And the Chinese will definitely need open-source robowaifus more than us, because if there's one thing we can say for certain about the CCP, it's that they're a surveillance dystopia. I'll bet both my balls their versions will have all kinds of built-in, independently-powered mics, cameras, telemetry, etc.
>>23944 >And the Chinese will definitely need open source robowaifus more than us You are absolutely correct Anon. I personally liken this situation vaguely to the growth of the underground Christian church there, with many of the same dynamics likely to play out. For certain, we here on /robowaifu/ and our affiliated cadres can all help our Chinese brethren out by getting a good opensauce solution -- our Model A Robowaifu -- out as quickly as feasible. Cheers.
>>23936 Interesting NoidoDev, thanks! If we can achieve even this level of autonomy cheaply, then many of the robowaifu's automatic/ancillary subsystems can be moved into the 'Done' column more quickly. Forward! :^)
>AI news
> News from Aipreteneur: https://www.youtube.com/watch?v=TW5miAvDktA (I didn't copy all the links)
Kokomind (recognizing the feelings of a human from context in the prompt and from a camera feed): https://chats-lab.github.io/KokoMind/
LongNet (1bn tokens possible now): https://arxiv.org/abs/2307.02486
Control-A-Video (better video generation, open source): https://controlavideo.github.io/
Zeroscope text-to-video: https://huggingface.co/spaces/fffilon...
Robots That Ask For Help, instead of just making a guess and being confident; error reduction: https://robot-help.github.io/
> News from Matt Wolfe: https://www.youtube.com/watch?v=Y3CVCc42x9w
Code Interpreter example: https://twitter.com/altryne/status/1678506617917120512
Code Interpreter example: https://twitter.com/skalskip92/status/1678046467804524547
Code Interpreter example: https://twitter.com/chaseleantj/status/1677651054551523329
ChatGPT: https://chat.openai.com/?model=gpt-4
Sarah Silverman sues OpenAI for copyright infringement
SDXL 1.0 - pick the better picture option, which will then be used to train a new model: https://www.futuretools.io/news/stable-diffusion-xl-1-0-available-for-testing-in-discord-bot
Claude 2 (100k tokens, largest publicly available context window): https://www.anthropic.com/index/claude-2?ref=futuretools.io
BeeHiiv AI (newsletter with pictures)
ShutterStock works with OpenAI (before they become irrelevant)
NotebookLM from Google (if you want them to have all your files to learn from and help you)
Pika Labs (moving pictures from static ones, like GIFs): https://twitter.com/pika_labs/status/1678892871670464513
SoundMatch (picks the background sound for your video): https://www.epidemicsound.com/blog/introducing-soundmatch/?ref=futuretools.io
Bill Gates writes notes on AI (it's scary, but he likes it)
xAI (Elon has a new company): https://x.ai/?ref=futuretools.io
Shopify AI assistant (advises shop owners, e.g. when to make things cheaper and have special days)
Bard updates: https://bard.google.com/updates
Stable Doodle (doodle to pictures): https://stability.ai/blog/clipdrop-launches-stable-doodle?ref=futuretools.io
Leonardo AI updates: https://mailchi.mp/leonardo.ai/unveiling-canvas-v2-elevate-your-image-editing-skills-6259114?ref=futuretools.io
LONGNET: https://arxiv.org/pdf/2307.02486.pdf
> https://www.youtube.com/watch?v=mlDDmPz-Asc
0:22 ChatGPT Slowdown
0:41 Meta Threads
1:20 MidJourney Pan Feature
2:16 Leonardo's New Canvas V2
2:56 Steam Not Allowing AI Games (images, for now)
3:52 Google's Scrapping Everything For AI (based)
4:38 OpenAI Superalignment (team for alignment)
5:49 GPT-4 API is Now Available To All
6:09 ChatGPT Removed Web Browsing
7:04 ChatGPT Making Code Interpreter Available
7:32 AI Trump Vs AI Donald (nonsense debate)
Video generators (didn't watch):
https://www.youtube.com/watch?v=mVze7REhjCI
https://www.youtube.com/watch?v=mlDDmPz-Asc
>>24009 Another excellent list NoidoDev. Thanks so much for all your research and for sharing it here with anons! :^)
> LLaMA v2 by Meta released
I personally ignore it, because it was further cucked with RLHF on top of (((curated))) datasets :/ https://ai.meta.com/llama/
Open file (100.49 KB 1259x868 Screenshot_6.png)
>>24032 >it was further cucked with RLHF RLHF isn't fundamentally bad; it all just depends on who is giving the feedback. Anons will generally give very good feedback. Our enemies will only accidentally do so. >on top of (((curated))) datasets Again, same story. It depends on who's doing the curating. Obviously in M*ta's case they lead the charge for the Globohomo in general, so yea, bad news. We'll clearly need to create our own systems from similar large baseline datasets, and guide them with anon-based curation and feedback derived across the Internets. The archives are all there; it's simply up to us anons here to be clever and utilize them for /robowaifu/ 's (and all other men around the world's) benefit. >=== -prose edit, cleanup
Edited last time by Chobitsu on 07/18/2023 (Tue) 16:40:46.
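To put some flesh on "RLHF is just a technique": the human feedback typically enters as preference pairs used to train a reward model, so whoever labels the pairs steers the final behavior. Below is a minimal sketch of the standard pairwise reward-model loss; the tiny network, sizes and fake data are illustrative assumptions, not any particular project's code.

# Pairwise (Bradley-Terry style) reward-model loss used in typical RLHF
# pipelines: the reward model learns to score the "chosen" reply above the
# "rejected" one. Whoever labels chosen/rejected controls the signal.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Toy scorer over pooled embeddings; a real one wraps an LLM backbone."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb).squeeze(-1)   # scalar reward per example

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): pushes chosen above rejected.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    model = TinyRewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Fake "embeddings" of chosen/rejected replies (stand-ins for whatever
    # encoder you would really use).
    chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)
    loss = preference_loss(model(chosen), model(rejected))
    loss.backward()
    opt.step()
    print(f"preference loss: {loss.item():.4f}")

Swap in anon-labeled preference pairs and the exact same machinery points the model somewhere else entirely; the math doesn't care who the labelers are.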
Open file (56.48 KB 1127x642 SO SAFE.png)
>>24032 >>24033 nah fuck this shit
Open file (24.62 KB 1443x536 1689698052079338.png)
>>24036 This one is from 4chan's /g/ /lmg/ thread. And, holy fuck, it's worse than ChatGPT. Link (needs GitHub auth): https://replicate.com/a16z-infra/llama13b-v2-chat
>>24036 >>24037 That's good to know, but first of all, it doesn't matter if you use more than one model in some AI assembly. Also, it doesn't mean RLHF is fundamentally bad, it's just a technique.
>>24040 Remember how we predicted here several years ago that the Globohomo's version of """robowaifus""":
* would be stronk, independynts
* would regularly cry RAEP!111 on the news
* would simply continue on with all the same old feminist garbage of Current Year -- but under the new guise of 'muh friendly gynoid you filthy incel!' ?
Behold the face of the enemy, NoidoDev. It'll get much worse than this example of course, but this is effectively a starter-kit demo reel of the basic idea itself. Heh, if you think dealing with 3DPD is rough today, just wait till you're sleeping with both the enemy and the surveillance state right in your own bed... :^)
I predict we only have a few years left to get ahead of this curve, anons. The GH now seems clearly to be setting its sights on this market. Keep moving forward!
>===
-prose edit
Edited last time by Chobitsu on 07/19/2023 (Wed) 02:02:38.
>>24037 >and to prioritize their consent and well-being >from a novel that only you and the AI only knows about I hate normalfags.
>>24055 I've seen guys in /g/'s /lmg/ general saying this happens with the chat finetunes only; they are lobotomized to hell, but the base models are left as-is. Technically it is the same as LLaMA-1, but with 2T training tokens and no cutoff for 7B and 13B.
I don't know where to post this, so here it goes.
>>24067 Also, the most useful model, the 30B one, hasn't been released yet. Depending on who you ask, it's because that one isn't (((aligned))) enough, or Meta saw that it was quite bad compared to other models, so they're retraining it. >>24069 Holy shit, it physically makes me cringe.
Seems it's time for a new thread, OP. :^) >=== -minor edit
Edited last time by Chobitsu on 07/19/2023 (Wed) 18:41:07.
Open file (329.39 KB 850x1148 Karasuba.jpg)
Open file (376.75 KB 804x702 YT_AI_news_02.png)
Open file (423.49 KB 796x706 YT_AI_news_01.png)
Suggestions for the new thread pics. I picked someone from Prima Doll again, but maybe the new tradition should rather be to pick one from the newest good related anime. I think this might still be Prima Doll. At least the newest I've seen. I was thinking about using the girl from ATRI. Maybe next time. The other idea here was the relation between reading the news and having some coffee. In that case we would mostly be stuck with Prima Doll, at least if the girl serving is also supposed to be a robot.
>>24077 They look fine Anon. I'm perfectly fine with animu grills as the opener pic. I'd personally suggest leaving your options of choice wide open. Just that the opener have great waifu appeal, which your choice here does. >tl;dr Go for it NoidoDev! :^)
Open file (449.75 KB 2048x2048 1687959908288867.jpg)
>>24080 Thanks. If people have suggestions for future opener pics I'm listening. Picrel might be it at some point. But currently it's hot in many places, so the pic of Karasuba with some iced drink fits very well.
New Thread New Thread New Thread >>24081 >>24081 >>24081 >>24081 >>24081 New Thread New Thread New Thread
>>24082 > Picrel might be it at some point. I think that's fine tbh.
