/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Site was down because of hosting-related issues. We're currently figuring out why it happened.

Build Back Better

Sorry for the delays in the BBB plan. An update will be issued in the thread in late August. -r


When the world says, “Give up,” Hope whispers, “Try it one more time.” -t. Anonymous


Open file (329.39 KB 850x1148 Karasuba.jpg)
Open file (423.49 KB 796x706 YT_AI_news_01.png)
Open file (376.75 KB 804x702 YT_AI_news_02.png)
General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 NoidoDev ##eCt7e4 07/19/2023 (Wed) 23:21:28 No.24081
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding them (especially as they relate to robowaifus). -previous threads: > #1 (>>404) > #2 (>>16732) > #3 (>>21140)
>>24090 >Mo Gawdat This guy is just awful, I couldn't listen to him on this show or the other one. These channels generally seem to sensationalize things and spread FUD coming from people with some fake authority. Also, this guy doesn't think robowaifus are a good idea, he just thinks it's an attractive idea which will be popular. He's an AI doomer...
>>23944 >And the Chinese will definitely need open source robowaifus more than us And things are looking up for the Chinese anons, too! Apparently all this diverse enrichment going on in the West under the Globohomo's aggressive """guidance""" is generally 'bad for business', intellectually-speaking. Turns out, these young Chinese men won first and second places in the current Math Olympiad. > ps. BTW, based Russia came in 4th place, but the GH won't allow their standing to be entered into the official results. >=== -prose edit
Edited last time by Chobitsu on 07/20/2023 (Thu) 10:35:34.
>>24090 Pure clickbait. But, that panderer probably serves our robowaifu cadre's interests in the end despite himself heh. There's an old, corrupt saying: "Any news is good news." This means drawing attention is beneficial since it keeps you in the public's eye (and on their minds). One of the necessary elements for an explosion of robowaifus is for men around the world to become aware that they are quickly becoming a practical reality. Journalistic trash like this, and what keeps coming out of those British rags, will actually speed this industry up in the end.
>>24090 > The sexbot, CarynAI, boasts over 1,000 boyfriends who each pay $1 a minute for its services. lol lmao even
>>24110 Don't tell me you missed this news during the last few months. The article is completely misleading if they really wrote "sexbot"; it's an app for all I know. The positive aspect is that women will now think other women profit from this, so they won't go against it as much as they would if it were some guy making a companion AI. That will also happen, but by then the idea of such companions will already be established. I don't like the idea that online thots rake in the early profits, but it might help to prevent early resistance. Also, I and others have pointed out that these reports about how much she made are most likely just marketing. It feeds the sensationalist press, some react with outrage, others might mock it, or it might spark some debates, but then the product is always mentioned...
I'm not posting a full overview of the AI news videos today, and I'm not covering some videos I watched, since they're often only about incremental improvements to existing things and not that relevant to making robowaifus, though some of it would be useful for making media. Other news is often about new features from companies like OpenAI, or about Claude. But Claude, for example, has no public API, so if I wanted to integrate it into Home Assistant I couldn't do that anyways. Anyways, some things are worth mentioning:
> https://www.youtube.com/watch?v=ZxK9HIxUWGg (Matt Wolfe)
- AnimateDiff: https://animatediff.github.io/ is probably the tech behind Pika Labs: https://www.pika.art/ - might be useful for virtual girlfriends or making animations.
- BuboGPT: https://bubo-gpt.github.io/ - seems to be good at explaining pictures and audio as well, also in combination! Video-LLaMA does similar stuff.
- https://co-tracker.github.io for better motion tracking (it works now, code available).
- Sketch-A-Shape: https://arxiv.org/pdf/2307.03869.pdf (no code available)
AIpreteneur covers similar news to Matt Wolfe:
> https://www.youtube.com/watch?v=YSokS2ivf7U
But without the interesting stuff I linked above, and he's siding with the actors on strike *yikes*. But here and there he has some extra info: the new alternative to Stable Diffusion (CM3leon) might be 5x more efficient and therefore run better at home.
Not in the videos above, other sources:
- Demo from KudanSLAM (3D vision with cameras): https://www.youtube.com/watch?v=OV48PFLKgWo
- Petals, distributed models by torrents, fine tuning, recently mentioned in the Robowaifu@home thread >>24086 - https://www.youtube.com/watch?v=U-hJsfFQnh0
- BTW, Llama 2 seems to be pretty dope. Don't ignore it. https://youtu.be/k-LUHw4Hb_w https://youtu.be/DXWwCggFROk
- And Open Orca: https://youtu.be/f7G8tPIcbRE
>>24190 Thanks NoidoDev!
>>24190 FB still hasn't released the most important 30B model of llama 2. Besides, I've heard the improvements of llama 2 are marginal at best.
1. picrel
> what do you mean you want to live your life with a supportive AI gf?
> you can't do that!!!
2. another one: https://openai.com/blog/frontier-model-forum
> We’re forming a new industry body to promote the safe and responsible development of frontier AI systems: advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry.
3. https://github.com/LostRuins/koboldcpp/releases/tag/v1.37.1a
> koboldcpp backend update hotfix
> rms_norm_eps to 5e-6 for better results for both llama1 and 2
In fact it made llama models' output dramatically worse: one-line responses with loops.
4. EU AI Act: https://huggingface.co/blog/eu-ai-act-oss https://huggingface.co/blog/assets/eu_ai_act_oss/supporting_OS_in_the_AIAct.pdf
> """Supporting Open Source and Open Science""" in the EU AI Act
Which means regulation to hell. It's over imo. Although in my case the novelty wore off instantly once I got through testing some quantized local LLMs for "offensive language" and "wrongthink". It turned out that finetuning barely changes them (lol), and here's the realization: that's why they're allowed to exist and be released commercially (see Llama 2). They have the power to inject whatever globohomo shit they want into these things, and we can't scrub that out completely.
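For anyone wondering what that koboldcpp hotfix is even touching: rms_norm_eps is the small constant inside Llama's RMSNorm layers. A minimal NumPy sketch of the standard RMSNorm formula (not koboldcpp's actual code, just the math the setting controls):

```
# Minimal sketch of RMSNorm, the normalization Llama uses; `eps` here is the
# rms_norm_eps constant the koboldcpp release notes are talking about.
import numpy as np

def rms_norm(x, weight, eps=5e-6):
    # Root-mean-square of each token's activation vector.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    # An eps that is off from what the model was trained with skews the
    # normalization slightly at every layer, which can visibly degrade output.
    return (x / rms) * weight

hidden = np.random.randn(1, 4096).astype(np.float32)   # one token's hidden state
gain = np.ones(4096, dtype=np.float32)                  # learned per-channel scale
print(rms_norm(hidden, gain, eps=5e-6).shape)           # (1, 4096)
```

Llama 1 and Llama 2 were reportedly trained with different eps values (roughly 1e-6 vs 1e-5), which is why a single compromise default like 5e-6 can help one family and hurt the other.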
>>24266 I know I said it in an older thread and got quite a bit of backlash for it, but to get an uncompromised waifu, they're going to need to be completely offline and untrained out of the box. Yes, it's the least sexy option, but I'd rather she had the brain of a toddler for a short while than ever deal with it spouting globohomo bullshit.
>>24271 >I know I said it in an older thread and got quite a bit of backlash for it, but to get an uncompromised waifu, they're going to need to be completely offline and untrained out of the box. I don't think anyone who's been here long enough to rationally grasp what we're all actually up against here is likely to give you much backlash. I know that I wouldn't (in fact I'm probably the loudest voice here currently NO ONLINE ACCESS FOR OUR ROBOWAIFUS, EVER :^). But the primary issue here is the model training costs in equipment, electricity, and time are exorbitant to say the least. Far outside the purview of any typical Anon. OTOH, we are trying to find a way around this (without resorting to the Globohomo's models) by devising a way to share the load across many anon machines: Robowaifu@home (>>8958). If we here or others can succeed with this approach -- and there is a metric boatload of groups today who now understand the fundamental threat to freedom of speech the GH is, who are all trying -- then this could easily become a complete sea-change for the entire domain of AI. I pray we succeed at it! :^)
>>24266 Only liberty and copy-left hardware and software can save our future waifus la~ >=== -edit subject
Edited last time by Chobitsu on 07/27/2023 (Thu) 06:09:52.
>>24266 That picrel is cancer. How do they get the idea we would want AI girlfriends with rings in their lips and small eyes? Also, as long as she has no body, why should she look like an ugly bot, and if she has one, why not look more like a companion doll? Journos and women are just nuts. >testing some quantized local LLMs for "offensive language" and "wrongthink" We probably need one model to handle that, and that's it. It doesn't matter what the ones for tasks and knowledge think of your language. >we can't scrub that out completely. We only need the system to know that these responses are what normies would consider appropriate. It's a possible response, not her opinion on things. >>24271 >need to be completely offline and untrained out of the box. Yes, it's the least sexy option, but I'd rather she had the brain of a toddler for a short while than ever deal with it spouting globohomo bullshit. Or we need a later filter for what she's thinking and saying. We do not need to talk directly to a language model.
>Matt Wolfe's new video about AI tools: https://youtu.be/rtXE9Knszws I won't go into the details. It's mainly generative AI: 2D to 3D, images, songs, stories, but also a robot with some semantic reasoning learned from web data (RT-2/Google). Also a service for getting through customer-service telephone trees and waiting in line for you. >GPU market: Nvidia seems to be cutting down on production, since prices are getting too low for them and they can sell the hardware to data centers instead.
Wind in AI waifu world: https://youtu.be/j3_u3AT2DIY
Llama 2 Chronos-hermes: https://youtu.be/gY4YN2kY8pE
>Via Matt Wolfe:
- Chinese tech giant Alibaba challenges Meta with open-sourced A.I. model launch: https://www.cnbc.com/2023/08/03/alibaba-launches-open-sourced-ai-model-in-challenge-to-meta.html
- Nvidia AI Image Personalization Method Fits on a Floppy Disk and Takes 4 Minutes to Train: https://decrypt.co/150861/nvidia-ai-image-generator-floppy-disk-4-minutes
- InVideo For Content Creators: https://www.youtube.com/@InVideoOfficial/videos
- Open sourcing AudioCraft: Generative AI for audio made simple and available to all:
>AudioCraft consists of three models: MusicGen, AudioGen, and EnCodec. MusicGen, which was trained with Meta-owned and specifically licensed music, generates music from text-based user inputs, while AudioGen, which was trained on public sound effects, generates audio from text-based user inputs. Today, we’re excited to release an improved version of our EnCodec decoder, which allows for higher quality music generation with fewer artifacts; our pre-trained AudioGen model, which lets you generate environmental sounds and sound effects like a dog barking, cars honking, or footsteps on a wooden floor; and all of the AudioCraft model weights and code. The models are available for research purposes and to further people’s understanding of the technology. We’re excited to give researchers and practitioners access so they can train their own models with their own datasets for the first time and help advance the state of the art.
https://ai.meta.com/blog/audiocraft-musicgen-audiogen-encodec-generative-ai-audio/
https://github.com/facebookresearch/audiocraft
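For anyone wanting to poke at AudioCraft locally, here is a rough sketch of the MusicGen Python interface as shown in the facebookresearch/audiocraft repo's README; checkpoint names and exact import paths may shift between releases, so treat it as a starting point rather than gospel:

```
# Rough sketch of generating a short clip with MusicGen from the audiocraft repo.
# Assumes `pip install audiocraft` plus a working torch install; the model id
# and calls follow the project README and may differ in newer releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('small')     # smallest checkpoint
model.set_generation_params(duration=8)      # seconds of audio to generate
wavs = model.generate(['upbeat chiptune for a robowaifu boot screen'])

for i, wav in enumerate(wavs):
    # Writes clip_0.wav; audio_write handles loudness normalization.
    audio_write(f'clip_{i}', wav.cpu(), model.sample_rate, strategy='loudness')
```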
>>24400 >>24447 Thanks NoidoDev, these are much-appreciated! Cheers. :^)
Open file (19.96 KB 600x197 firefox_EzKJwkMo5y.png)
>>24467 lmao, this one owned himself, unknowingly revealing that women see men as dicks only :D
Open file (27.92 KB 754x401 2023-08-06_02-32-06.png)
>>24447 >Nvidia AI Image Personalization Method Fits on a Floppy Disk and Takes 4 Minutes to Train How the hell do they do that? It seems to me that all AI algorithms are essentially general-purpose. Does this mean we could use the same algorithm for most anything? Someone with more brains (shouldn't be difficult) please tell me if this is correct or possible.
>>24485 Maybe St. Terry Davis is still alive and hidden somewhere.
Open file (520.90 KB 512x768 3690108813.png)
>>24485 It's just a LoRA alternative that's much worse in practice. It's neat how small it is, but SSDs are cheap now. This LoRA is 37MB; making it smaller doesn't matter, making it worse would. It is potentially interesting for key locking, which improves the image generator's ability to associate concepts with terms. That will make refining prompts to get the ideal results easier.
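To put the "37MB" in context: LoRA files are small because they only store a low-rank update to each weight matrix instead of the matrix itself. A quick back-of-envelope sketch (the layer shapes and rank below are illustrative, not taken from any particular SD checkpoint):

```
# Why LoRA files are tiny: store two skinny matrices (d x r and r x k)
# instead of a full d x k update. Shapes below are illustrative only.
d, k, r = 4096, 4096, 16          # layer in/out dims, LoRA rank (assumed values)

full_update = d * k               # parameters in a full fine-tuned delta
lora_update = d * r + r * k       # parameters in the low-rank factors A and B

print(full_update, "vs", lora_update)                    # 16777216 vs 131072
print(f"compression ~{full_update / lora_update:.0f}x")  # roughly 128x fewer params
```

Shrinking that further (as the Nvidia method does) buys very little once the file already fits on any storage; what matters is whether quality holds up.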
Open file (95.46 KB 1280x720 1674265111121014.jpg)
>>24467 I will presume Scott Adams was referring merely to AI/Chat 'waifus'. It would be really cool IMO if he clearly stated he means the potential of robowaifus in the not-too-distant future. :^) >>24493 >Maybe St. Terry Davis is still alive and hidden somewhere. Hehe. I believe he is in fact alive and well just on a higher plane now. :^)
>"The overall market range for roles in this area of Netflix is typically $300,000 - $900,000." Lol. >"Take that, SAG-AFTRA!11!!one11" jobs . n*tflix.com / jobs / 278437235
>From the 'totally not-satire dept:' World declared safe once again. AI Achilles' heel discovered. https://babylonbee.com/news/ai-transformed-into-illiterate-moron-after-just-three-hours-watching-cnn
Now you weren't doing violence to women by even thinking about creating your own robowaifu, were you Anon? :^) Globohomo puppets in action yet again. >"They respect international human rights norms, and help better protect democracy, equality and the rule of law." Wew. I'm sure the world doesn't need more of their brand of world leadership. https://web.archive.org/web/20230726160315/https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348
Turns out, ChatGPT has been lobotomized to support the Globohomo's narratives & agendas! Who could have predicted this!? :D https://link.springer.com/article/10.1007/s11127-023-01097-2
>>24690 That's been painfully obvious for quite a long time now.
>AI-Created Art Isn’t Copyrightable >A federal judge on Friday upheld a finding from the U.S. Copyright Office that a piece of art created by AI is not open to protection. [1] This is intredasting and watnot, but I think the following little comment within the article is much more relevant to our actual concerns here on /robowaifu/. It relates to an entirely different, yet-unresolved set of cases regarding AI generation, and copyrighted material within the corpora they are all based on: >"The order was delivered as courts weigh the legality of AI companies training their systems on copyrighted works. The suits, filed by artists and authors in California federal court, allege copyright infringement and could result in the firms having to destroy their large language models." (emphasis added) >tl;dr Do you want to run models locally, but have been holding off downloading the yuge datasets b/c raisins? GET 'EM WHILE YOU STILL CAN BRO! :^) 1. www.hollywoodreporter.com/business/business-news/ai-works-not-copyrightable-studios-1235570316/ >>24693 It was le joke, Anon. :^) I imagine the le AI Science branches of the Globohomo are up in arms over these 'miserable little nahdzee upstarts' daring to so blatantly point this inconvenient truth out to the world. And using The Science to do it, no less. Outrageous!111 :DD >=== -add link -prose edit
Edited last time by Chobitsu on 08/19/2023 (Sat) 17:40:56.
A bit of an oddity that "might" come in handy. There's a company that makes software for making 3D humans for animation. http://www.makehumancommunity.org/content/makehuman_120.html So they decided to pair up with a university and, using the university's AI software, made... MakeTherapist http://www.makehumancommunity.org/blogentry/announcing_maketherapist.html Crazy. It appears to be online, but the good news is they say you can download the therapist part by asking the university. I wonder... could you use this therapist AI, which you would think likely has to show some sort of empathy, as the foundation for a robowaifu? Train it on the other parts it needs. I don't see where to download this AI or even where the program resides, but looking at the university staff and what they are working on, I bet these two guys would know something about it. Peter Mozelius: "...Currently, I am trying to delve into how Cognitive Behavioral Therapy in Virtual Reality environments could be enhanced through Artificial Intelligence...." https://www.miun.se/en/Research/research-centers/CER/Forskning/our-researchers/peter-mozelius/ Felix Dobslaw: "...Artificial Intelligence Supported Cognitive Behavioral Therapy for Treatment of Speech Anxiety in Virtual Reality Environments..." https://www.miun.se/Personal/felixdobslaw/
>>24747 >I wonder... could you use this therapist AI, which you would think likely has to show some sort of empathy, as the foundation for a robowaifu? That certainly makes sense to me, Grommet. Of course, during Current Year it pays to be profoundly skeptical of anything that's besmirched with Globohomo contamination. Very obviously, """Psychology""" is one of the forerunner mechanisms driving their intentional societal collapse. Personally, I much prefer the idea of Anon using material resources created <=1950s, such as training guides for girls on how to please a man, for training robowaifus as well. Obviously, they couldn't be good housewives without learning good empathy and compassion. >=== -minor edit
Edited last time by Chobitsu on 08/21/2023 (Mon) 09:59:00.
>>24747 For me, A.I. art, music and actors are the least of my worries. Footage coming out of Ukraine confirms that the killbot hellscape is now a reality. What's more, the technology is being embraced at pace as both sides are desperate to kill each other faster. They currently have drones controlled by human pilots generating lots of training data for future swarms of autonomous hunter-seeker drones. You train them all on tens of thousands of hours of target footage (both in daylight and night/low-light infra-red) to identify human infantry and heat signatures, then program them to patrol and attack all targets within a defined perimeter. People believe that they can generate an EM field and disable all the drones, but EMI hardening has been a thing for so long now that you can buy strings of EMI-hardened strip-lights from military surplus. On top of that, the use of cluster munitions is now par for the course, along with deployment of both boat and submarine OWA drones. Just the other day I saw footage of an infantryman getting his head blown off at night by a drone that may have had those quieter, toroidal propellers (it was hovering only tens of feet above him as it dropped the frag and he was oblivious). The drone escaped undamaged. Imagine being fragged out of the blue by DRONE_NAME_HERE for the cost of a few hundred bucks. This shit has evolved to be way darker, way faster than I imagined it would.
>>24758 Hi SophieDev, so good to hear from you Anon! I hope you are doing well. We all welcome hearing from you and miss you when you're not posting. Any new design ideas for dear Sophie running around in your head lately?
>>24704 >yuge datasets b/c raisins? GET 'EM WHILE YOU STILL CAN BRO! So just to link this for info. The Mother of all datasets. The beast, the omni-cron, super extravaganza deluxe of datasets, 3.6M files and ~53.13 TB. The libgen book torrent links. And this is just non-fiction. http://libgenfrialc7tguyjywa36vtrdcplwpxaw43h6o63dmmwhvavo5rqqd.onion/LG/ There's also fiction, ~2.65M files and ~3.42 TB, and sci-magazines, ~82M files total and ~71 TiB (78 TB). http://libgenfrialc7tguyjywa36vtrdcplwpxaw43h6o63dmmwhvavo5rqqd.onion/
>>24758 >Ukraine It's heartbreaking and terrifying. It makes me so sad. It's the same old shit, "them" setting us up to murder each other. I saw a video the other day where these Ukrainians got out of an APC. Well, one of them steps on a landmine. Others try to help, they step on landmines too; it ended up with three or four of them with blown-off legs. It's sickening and demoralizing. And I don't hate Russians or Ukrainians. I can see very well Putin's reason for drawing a line in the sand, and I can also see why the Ukrainians would not want to be invaded. They are both being used and forced into these bad situations by "others". >drones controlled by human pilots generating lots of training data I read about a program in the US, and AI already stomps on real pilots. It's beyond them in training simulators.
>>24766 >53TB Heh, wow that's a whopper, Grommet. Is it being seeded actively, do you know? Of course by 'datasets' in this case, I'm referring to LLMs and fine-tunings. Any groups producing these that don't cough up the sheqels may conceivably be aggressively attacked by the Globohomo -- and starting soon. "Better to be safe than sorry" they always tell me. :^) >=== -prose edit
Edited last time by Chobitsu on 08/21/2023 (Mon) 17:33:06.
>>24771 >Is it being seeded actively, do you know? No, I don't. I wouldn't have anywhere to store such a large thing, but I sure would like it if I did. You might try one torrent and see. They say specifically to contact them if a torrent is not seeding, so I assume they will seed any one you want and that they do want torrents of everything to be available. They store books on IPFS, which is a distributed global file system. I wish they would torrent over I2P, which is my favorite for torrenting files, movies, etc. (they actually use magnet links). A warning: the Java I2P is likely pozzed now, or will be soon. The I2Pd Russian C++ version of the network is not. If you download the Java version of I2P you can go back to the 2.1.0-0 version, which is "likely" the last version sure not to be pozzed. If anyone is interested I can talk further about this. Where should I...meta???? I2P is fantastic. Chobitsu you might be interested in a site there. The server is built right into the software. So if you have a computer, I2P and internet access, then you have a fairly strong encrypted and hidden site out of the box while paying no server fees. There's a DNS built into I2P where you can register a namespace/URL.
>>24772 >If anyone is interested I can talk further about this. Where should I...meta???? Yep sounds good. And yes I was aware there was a non-Java client tool available (definitely recommended of course). >Chobitsu you might be interested in a site there. Yep, I've mentioned using IPFS before now. If we experience another gayop debacle similar to the one where 8ch was killed, then I'll probably carve out time to focus on getting that together at some point. If so, we'll automatically replicate legit posts over from whatever public-facing IB we use then. >=== -minor edit
Edited last time by Chobitsu on 08/21/2023 (Mon) 17:41:40.
>>24773 Looking at the "topic" here it fits, so no meta. I want you to understand that I2P is NOT IPFS. I think you know that, but it's necessary to point this out in the "very unlikely" event that you do not. Not trying to insult you but there was a bit of ambiguity in what you said. I2P encrypts everything between random servers. It's much like Tor but doesn't have a built-in off-ramp to the normal net. It has a built-in distributed DNS but can work with just large hashes for addresses. Its servers are not set or special like Tor's. They are random and made from its users. It builds tunnels of data. The number of hops a tunnel goes through before it reaches a site is normally 3, but the amount can be changed. Each server gets data, decrypts it and then, if needed, passes it on to another server. At some point a server finds the actual destination, and the data from it is sent back, encrypted, on a totally different set of tunnels. So no easy trace back. The tunnels are constantly changing their servers, I think every ten minutes. The deal with Java I2P is that it is, or likely will eventually be, pozzed. History: I2P was originally created by jrandom, who disappeared completely. A guy called zzz picked up the pieces and has been running things for around ten years or more. It was publicly known who he was, or at least intel knew, because he went to public events along with another guy who did a lot of work called zlatinb. One day zzz, the guy, the main honcho, the big kahuna, just vanished, and his forum site, the site, disappeared without one damn word. The guy who took over the running of I2P, I think, I'm fairly sure, is a Jew. I know this solely because of the way he talked/wrote. They have a certain way of writing that's very distinguishable, at least sometimes. I also said something that triggered him, really badly. I can't tell you what it was, and it surprised me that he was triggered, but after seeing him write stuff for years I guessed the reason it triggered him, and it corresponded with Jew mentality (I never meant to trigger him). It was not anti-semitic; let's just say I figured it out. This guy running things now is not a code guy. He weaseled his way in over many years by dealing with a lot of administrative-type tasks: web site stuff, packaging. I think he got the keys to all the sites, forum etc. and then locked zzz out or, God forbid, they killed him and tortured the keys from him (are you paying attention Chobitsu!!!! Don't let the administration out of your control, ever!!!). OK, so they say zzz quit because the present guy running things wanted to change the terms of service to some sort of globalhomo "all are welcome... some stuff like that". BUT zzz personally said he would never run off all at once like jrandom did. I think personally he is dead, because these people have no morals at all, and no one works at something for over ten years, then disappears and says not one damn word. zlatinb, another developer, quit at about the same time and said he was going to enter a monastery (threatened)????? zlatinb did a lot of low-level stuff in I2P and created a file-sharing program. Good programmer. They both had just finished a really big upgrade that spectacularly increased the speed and usability of I2P. Very impressive. I've been using it on and off for maybe a decade, and this was a huge game changer. Awesome speed compared to before. They had been having problems for years and this cleared it up. Super fast torrents if there were a lot of seeders.
Anyways, that's when they both disappeared. Now, zlatinb lost access to the forum, BUT the program he made had a GitHub page https://github.com/zlatinb where he said that Java I2P was unsafe and there was some strange stuff going on. My assumption is they didn't have the password to his GitHub page, so he could talk there. I personally believe that a Jew weaseled his way in and over the years gained control of passwords and then locked out and/or killed zzz and forced zlatinb out. Standard procedure. Done a million times. (This guy was around for years before he struck.) cont.
cont. The last known good I2P by zzz was I2P 2.1.0. Myself, I thought I had stopped the update by deleting it from I2PSnark (the built-in torrent program that also updates) and turning off updates. But it did update. So I took the last known zzz update, i2pupdate-2.1.0.su3, renamed it to i2pupdate.zip (this file name will update I2P to that version), then put it in my program folder and it updated backwards, rolling it back to 2.1.0. I turned off updates. This remains so. So I'm running the last version and not updating. So you can get, for now, a newer version and update backwards to the last known good version and stay there. Here's the magnet link for that update: i2pupdate-2.1.0.su3 magnet:?xt=urn:btih:70260ca39b2867a59f46d81db85eecb256b9365c&dn=i2pupdate-2.1.0.su3&tr=http://tracker2.postman.i2p/announce.php People are still seeding this. Rename as stated above and put it in the I2P program folder. Make sure to turn off updates on the administrative page. I2P works in a browser. It may very well be that the update after this one is good, but it was not signed by zzz, so... I expect they will wait a while, then start changing things to pozz it and screw up the network by forking it somehow and adding spyware. I intend eventually to start using I2Pd, as Java I2P can no longer be trusted. As for utility, it's excellent. I won't say that it is totally anonymous, but finding out where a server or a user is would take a good deal of resources. It's far harder to crack than Tor. You would have to have a huge number of servers to track users or sites, and even then you could not be 100% sure where a site or user emanates from. This is of course why they went after it. The best way to use this is to get a portable browser and change the network proxy settings. I2P talks to 127.0.0.1 on ports 4444 and 4445 (double-check this to make sure). At one time you could use the Tor Browser to do this. I'm not sure if you still can, because the Tor Browser has taken to changing some things around. Make sure to add the NoScript plug-in and other blockers BEFORE you change the proxy settings. For torrents you can download a Java I2PSnark standalone, which is nice, or you can use BiglyBT, which has a plug-in you add for I2P. BiglyBT does everything. Make sure to turn off regular internet access in BiglyBT and turn on I2P in the options, and it will only use I2P to torrent files. People should use this to torrent movies and whatever. The more people use it, the harder it becomes to find who is who. I leave it running all the time. Another thing is you can download torrents from the regular net by cross-tracking from BiglyBT users. If they have regular internet AND I2P on at the same time, it bridges torrents from the regular net to I2P. I get a lot of TV shows this way. When they come out there are a lot of seeders on normal tracking sites. I enter the magnet (regular internet, but it has the hash in it) and 99 times out of 100 another tracker will have I2P and the normal net, and I can get the show on I2P. So you can usually DL popular torrents from the regular net through cross-seeding both nets. And to reiterate. I2P has a built in server. You can load YOUR site into whatever folder you pick on your computer and it will run it just like a server that you pay fees for. I would love for robowaifu to be on I2P. If I could just use I2P it would make me happy. On the other hand, far fewer people know about it, even though it's a better solution. I think the reason is simple: they don't have a one-step browser download like Tor, which is really convenient.
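For anyone scripting against I2P rather than browsing, the same proxy settings apply. A minimal sketch, assuming the default HTTP proxy address mentioned above (127.0.0.1:4444; double-check your router console, since the port can be changed):

```
# Fetch an eepsite through the local I2P HTTP proxy instead of the clearnet.
# Assumes the I2P router is running with its HTTP proxy on the default port 4444.
import requests

proxies = {"http": "http://127.0.0.1:4444"}   # I2P proxies plain-http eepsite URLs here
url = "http://identiguy.i2p/"                  # example eepsite (an address-book/jump service)

# Lookups can be slow the first time while tunnels are built, so be generous.
resp = requests.get(url, proxies=proxies, timeout=120)
print(resp.status_code, len(resp.text))
```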
I personally think Tor is pozzed but I'm not doing anything that threatens the system so... I think if they decide to go hard in the west and shut it down then Tor will lock up and disappear as a source of info. I bet there's a kill switch to it.
>>24775 >Not trying to insult you but there was a bit of ambiguity in what you said. Lol no are you kidding? If I'm wrong about something I'm hoping some Anon here will be kind enough to point it out to me! We're all grownups, r-right? :^) >BTW, I did confuse the two haha. :D --- Wow, treasure-trove of info on I2P lore, Anon. I knew some of this before, but not to this detail. 'Something's rotten in Denmark', or so the old saying goes? >And to reiterate. I2P has a built in server. You can load YOUR site into whatever folder you pick on your computer and it will run it just like a server that you pay fees for. I would love for robowaifu to be on I2P. If I could just use I2P it would make me happy. OK, well that sounds way simpler than I thought. I'd expect we can even configure Bumpmaster's directory layout to integrate properly with I2P when the time comes. I'm guessing then that we could have as many active, online backups of /robowaifu/ going as there were users of Bumpmaster who wanted to share their backup of the board over I2P; is that correct, Grommet? >I think if they decide to go hard in the west and shut it down then Tor will lock up and disappear as a source of info. I bet there's a kill switch to it. I think you're definitely correct, and I also think the corruption of it is already in process; at least in part. And they don't even need any surmised 'kill switch' either -- they can simply block the Tor network directly. I now have to use a Tor Bridge with my ISP, for example. >=== -prose edit
Edited last time by Chobitsu on 08/21/2023 (Mon) 21:39:52.
>>24777 >If I'm wrong about something I'm hoping some Anon here will be kind enough to point it out to me! I'm exactly the same way. It does not bother me a bit if I make an honest mistake. It costs nothing to say, "whoops, I misread that and made a mistake". But if I think I'm right... I will give a fight. BTW, I think my battery estimates are too high "for just waifus" that amuse you for a few hours, sit around and not do too much; but if you depend on one to take care of people or do any serious work, then I think they are accurate.
>>24779 >but if you depend on one to take care of people or do any serious work, then I think they are accurate. That's how I'm pitching this for investments, so yeah. 'Home Health Care''s the word! :^)
>>24758 IIRC, AI tech has already been in use for a while now. It was in 2021 when Israel first admitted to and released footage of one of their drones operating on AI, taking out a terrorist hideout. Israel, and its vassal state the US, has almost surely been using it for much longer. I do think we're still quite a ways off from humanoid killbots, at least a decade or two, but there's no need for humanoid ones when there are so many other superior designs.
>>24777 >OK, well that sounds way simpler than I thought. I'd expect we can even configure Bumpmaster's directory layout Bumpmaster??? You speaking Albanian?? I know very little about servers. I know you ask for a resource or file from them and they feed it to you. That's about the depth of my knowledge of internet servers. I know that some have programming in the server of many sorts, PHP, etc., but I don't know how that would work with I2P. Remember that people who read any site on I2P will not use JavaScript, unless they are idiots, so you have to use old-school type stuff to serve things. I think, not sure. If you get a chance you should try to load up I2P, the present one even if it's pozzed, and claim the site name robowaifu. There's a registration there for the DNS. It has a dual system: an internal DNS, and you can also enter a long numerical address. You could claim the name and just put up a placeholder pointing here with an explanation. There's a board to do this. You can always update backwards to the likely-safe zzz program. It has a built-in torrent downloader called I2PSnark, which is very good, and a torrent tracker called Postman. There's a TON of good stuff on there and it's all anonymous. I see 13489 torrents being tracked right now. Don't let me freak you out on using it even if it will eventually be pozzed. I think they will definitely try to sneak in later. Too many people are watching and they don't want to burn it. And I think their main focus is system stability, not anything else, and neither you nor I threaten that. There's also encrypted email on there. It's really good. Before zzz disappeared they hacked away at this thing for like a decade to continuously improve it, and they have.
Open file (73.01 KB 261x307 Screenshot_81.png)
Open file (76.30 KB 261x307 Screenshot_80.png)
This linkage mechanism here, or something similar, might be useful for complex leg movement. Moving the leg inward (sideways) while stretching it and then back: https://youtube.com/shorts/NdIk5hIsBmg
>>24786 >Bumpmaster??? You speaking Albanian?? Lol. (>>14616, >>15324, et al) The (eventually) much-improved version of BUMP, our custom IB scraper developed here to preserve our board in case of another Globohomo attack against the Internets. >=== -patch crosslink
Edited last time by Chobitsu on 08/22/2023 (Tue) 16:52:22.
>>24759 Hello Chobitsu! Sophie and my 3D printer are both still mothballed up in my attic. The more I worked on her and looked at the prices of sensors and proper harmonic drives, the more I realised I had bitten off way more than I could chew. I would still like to finish her off and make her into a singing animatronic one day, but unfortunately work has taken over. What can I say? Robowaifus cannot pay the bills or go food shopping or make meals (yet). Also, tbh I realised that I had been addicted to videogames for a long time, so I have been working on overcoming that instead. My gaming PC is now dismantled and mothballed too, so I don't have any temptation to waste time on videogames. I am forcing myself to use a laptop which can just about run Open Office and connect to the internet. I have been studying maths (and Buddhism) though. So busy at work lately that I am still just learning basics and pre-algebra. Such is life for a recovering addict. 😄
>>24775 >where he said that Java I2P was unsafe and there was some strange stuff going on Thanks for bringing this drama up; I was not aware of anything like this. Please take what I am about to say with a grain of salt: currently I have just been using TOR and just started looking into I2P. My ISP sucks and I am stuck behind CNAT, so to expose a few self-hosted services to the internet so I can use them while not at home, I use TOR and host a tor onion service. Here is the repo that mentions the Java version not being safe: https://github.com/zlatinb/muwire This is the explanation: https://paste.i2pd.xyz/?77b8e66d0f366f87#BSA1kWMHpbAjFbRVRqQdf25feRQJ1sBY4mTDVNvs5dxj I don't think there is an intentional backdoor being added by the Java I2P people (right now); it's still open source software. What is going on is a power grab, which is an expected move for that type of people. Now, that being said, I don't imagine people on that side of the "culture war" are going to be making the best quality code, and Java I2P is already bloated. More bloat is also more potential security holes. I also do not think anyone was killed; when someone gets pushed out of their own community & bubble it's hard to feel motivated or empowered. Globohomo has embedded itself in a lot of our culture, so I'm sure they felt they were alone, dropped off the internet for a while, and abandoned the name. Not sure if I agree with the idea of using an older build of Java I2P. I'd look into using the C++ one, and write a good thank-you to the people fighting the good fight. Let them know they're not alone.
>>24804 I found a mistake in my post. I just checked and it's called CGNAT, not CNAT; it stands for "Carrier-Grade NAT".
>>24802 >I realised I had bitten off way more than I could chew Got to take care of the basics first. Take care of yourself.
>>24804 >I don't think there is an intentional backdoor being added by the Java I2P people (right now) I agree. I noted above that I think it's OK but... at some point they will pozz it. I always hear "it's open source so it's safe", and I say code that has been used for decades has eventually been proven to have serious bugs. I can't remember which, but I think it was an SSL stack or something OpenBSD did that eventually proved to be compromised. How many people can follow through millions of lines of code? Next to none. >power grab, which is an expected move for that type of people. For thousands of years. Same old story. >Java I2P is already bloated. More bloat is also more potential security holes. Maybe, but zzz did serious work on this. He really cleaned it up by consistently hacking away at it and pushing code every 6 weeks, so that he would be forced to constantly improve it. I don't think he was the best programmer, but he was doggedly consistent and determined to go over every inch of it. He did a great job. You have no idea of the vast increase in usability and speed he brought about. An order of magnitude or better. I know, I've used it for maybe a decade?? A long time. >I also do not think anyone was killed It would not surprise me if he were dead. He has said nothing. His forum is gone. Complete silence. I also suspect zab is not telling the whole story and is under strain not to. He has only lightly touched on what happened. I suspect threats. This happens all the time. Anyone who seriously threatens the globalhomo eventually is murdered or jailed. >Not sure if I agree with the idea of using an older build of Java I2P. I'd look into using the C++ one I agree, but I have been using an older portable browser that I went through all the add-ons and glitches with, and it was a lot of work. I turned off everything I could find that was not 100% necessary. I despair of going through all this again. What I have works, and as I said, I'm not threatening the actual system so... it's good enough. I'm not convinced that newer is better. I used to think that, but over time I see this is mostly fake. BTW, there is an I2Pd C++ browser bundle called Purple I2P. They can't bundle the actual Firefox browser because of... I don't know why, but they say they can't; instead it downloads and scrubs the long-term-release Firefox and sets it up for I2Pd C++. In their manual pages there's also a link to a portable I2PSnark torrent program. It's fairly good.
>>24802 Well, I'm personally proud of you, Anon. I've prayed for you numerous times that God keeps you safe there in Airstrip One. >So busy at work lately that I am still just learning basics and pre-algebra. Keep moving forward. You'll get there. Patience. :^)
>>24791 Very interesting! Guy's pretty funny too. Thanks NoidoDev. :^)
>>24884 Related: >>24585 and >>24586
AMD APUs (on-board GPUs in the CPUs) can use more RAM now and run AI, which is kinda big, since AI inference doesn't need that much compute but does need a lot of RAM, and it seems to work with at least up to 16GB: https://www.youtube.com/shorts/RCxcw-OrWIc - though I think they mentioned up to 64GB somewhere else. Also, from the comments: >There already some Chinese motherboards that let you use up to 4 rizen CPUs i think this will be a cheap alternative for ai developer's
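The reason RAM capacity matters more than raw compute here is simple arithmetic: the whole set of weights has to fit in memory. A rough sketch of the footprint at different sizes and precisions (ignores KV-cache and runtime overhead, which add a few more GB, so take the numbers as lower bounds):

```
# Back-of-envelope RAM needed just to hold model weights at various precisions.
# Real llama.cpp-style 4-bit quants land near these numbers plus some overhead.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9   # bytes -> GB

for params in (7, 13, 34, 70):
    fp16 = weight_gb(params, 16)
    q4 = weight_gb(params, 4.5)    # 4-bit quants carry ~0.5 bit/weight of scales
    print(f"{params:>3}B  fp16 ~{fp16:5.1f} GB   4-bit ~{q4:5.1f} GB")
# A 13B model at 4-bit (~7 GB) fits comfortably in a 16 GB APU allocation;
# 34B (~19 GB) is where the 64 GB configurations start to matter.
```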
>Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors
The article is about the competition between the datacenter-centric AI players trying to become the best at offering cloud services... https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini
>If they were actually concerned with efficiency, especially on the client side, they’d be running sparse model architectures like MoE, training on these larger datasets, and implementing speculative decoding like the Frontier LLM Labs (OpenAI, Anthropic, Google Deepmind).
Speculative Decoding: https://www.semianalysis.com/i/134355860/speculative-decoding
...
>To take the rant on a slight tangent, in general, model evaluation is broken. While there is a lot of effort in the closed world to improve this, the land of open benchmarks is pointless and measures almost nothing useful. For some reason there is an unhealthy obsession over the leaderboard-ification of LLMs, and meming with silly names for useless models (WizardVicunaUncensoredXPlusPlatypus). Hopefully the open efforts are redirected towards evaluations, speculative decoding, MoE, open IFT data, and clean pre-training datasets with over 10 trillion tokens, otherwise, there is no way for the open source to compete with commercial giants.
...
>HuggingFace’s leaderboards show how truly blind they are because they actively hurting the open source movement by tricking it into creating a bunch of models that are useless for real usage.
...
But they are usable. Still, it's interesting to see how others look at it. They really think it will simply get better and better by throwing more compute at it, and then offering a cloud service. Everything else is just "useless". The article isn't completely readable for non-subscribers, but it seems to mostly go into Google's TPU supremacy after the free part.
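Since speculative decoding comes up in the article, here's a toy sketch of the idea (a heavily simplified greedy-acceptance variant; real implementations use a rejection-sampling rule so the target model's output distribution stays exact, and the function names below are placeholders, not any real library's API):

```
# Toy greedy speculative decoding: a small "draft" model proposes k tokens cheaply,
# the big "target" model checks them in one batched pass, and we keep the matching prefix.
def speculative_step(prompt_ids, draft, target_argmax, k=4):
    # 1. Draft model guesses k tokens autoregressively (cheap).
    guess = []
    ctx = list(prompt_ids)
    for _ in range(k):
        t = draft(ctx)          # placeholder: returns the draft model's next token id
        guess.append(t)
        ctx.append(t)

    # 2. Target model scores prompt + guesses in a single forward pass: one expensive
    #    call covering all k positions instead of k separate calls.
    preferred = target_argmax(prompt_ids, guess)   # placeholder: target's own choices per position

    # 3. Accept drafted tokens while they agree with the target; on the first
    #    mismatch, take the target's token instead and stop.
    out = list(prompt_ids)
    for g, p in zip(guess, preferred):
        if g == p:
            out.append(g)
        else:
            out.append(p)
            break
    return out
```

The speedup comes entirely from step 2: when the draft model is usually right, several tokens get committed per big-model call, which is why the article treats it as a basic efficiency tool.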
>>24926 Oh, that is really cool; you can get really small motherboards for AMD APUs, like the tiny ASRock ones. This could lead to affordable ML inference on small and commodity hardware. This alone makes me want to look more into OpenCL-based solutions to avoid the Nvidia CUDA lock-in, even if it is the less popular choice.
>>25002 I think PyTorch 2.0 now supports AMD very well. >How Nvidia’s CUDA Monopoly In Machine Learning Is Breaking - OpenAI Triton And PyTorch 2.0 https://www.semianalysis.com/p/nvidiaopenaitritonpytorch
>>25002 AMD APUs have tremendous potential to help us. ROCm is useful for AI: https://www.amd.com/en/graphics/servers-solutions-rocm-ml New APUs have integrated AI accelerators (essentially tiny TPUs) which can accelerate many AI-related algorithms. https://www.xilinx.com/products/technology/ai-engine.html The DSP is also surprisingly capable if you like using those for acceleration. This website can help elucidate how that works: https://www.embedded.com/using-dsps-for-audio-ai-at-the-edge/
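On the software side, PyTorch's ROCm builds reuse the CUDA device API, so checking whether an AMD GPU/APU is actually being picked up looks the same as on Nvidia. A minimal sketch (assumes a ROCm-enabled PyTorch wheel is installed; some APUs reportedly also need the HSA_OVERRIDE_GFX_VERSION environment workaround, which is an assumption to verify for your chip):

```
# Quick check that a ROCm-enabled PyTorch build sees the AMD GPU/APU.
# ROCm is exposed through the torch.cuda API, so the calls below are not a typo.
import torch

print("HIP build:", torch.version.hip)            # None on CUDA-only builds
print("device available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")    # lands on the AMD device under ROCm
    print((x @ x).sum().item())                   # tiny matmul as a smoke test
```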
>>25007 Ah, what a shame. Not a looker in the bunch.
Uhoh - Could we be closer to AGI than we think: https://youtu.be/hVade_8H8mE >Has GPT4, using a SmartGPT system, broken a major benchmark, the MMLU, in more ways than one? 89.0% is an unofficial record, but do we urgently need a new, authoritative benchmark, especially in the light of today's insider info of 5x compute for Gemini than for GPT 5?
>>25022 >Uhoh - Could we be closer to AGI than we think Uhoh? Uhoh where!? >*glances around furtively* Lol. Don't drink the kool-aid bro. :^) Of all ppl, regular anons here on /robowaifu/ should be skeptical a priori of any claims made by the Globohomo, or their orbiters. There are lots and lots of different measures of these things out there [1], and it's rather straightforward to game the numbers when you're in their position. Remember, a carefully-crafted press release/paper can potentially mean US$B's for these people. But also, remember too the animosity between these corpo teams themselves, and especially between Page's & Altman's. They have few ethics beyond lining their own purses -- unless it be the swelling of their own heads! :D 1. https://huggingface.co/blog/evaluating-mmlu-leaderboard >=== -prose edit
Edited last time by Chobitsu on 08/29/2023 (Tue) 13:44:24.
>>25018 As we've discussed here frequently, appearing to be a sexbot vendor (even vaguely-so) would have these Western organizations' boards/investors up in arms. 'Musn't upset the speshul little snowflakes now, musn't we?' Elon Musk alone, AFAICT, of any of these individuals even acknowledges in the slightest the mountainous demand potential for great robowaifus -- and then only jokingly so. No, it's the East that will ascend in the robo-companion space first. But we here and groups like us can still beat them all to the punch! FORWARD! :^) >=== -minor edit
Edited last time by Chobitsu on 08/29/2023 (Tue) 17:54:59.
>>25007 Please copy some summary into your comment next time; this isn't a link aggregator. Also, not one of these robots is new or unknown here.
Another zero-shot TTS has been released. The well-known VALL-E is now open with its weights and a UI. https://github.com/Plachtaa/VALL-E-X https://huggingface.co/spaces/Plachta/VALL-E-X Requires just 6GB of VRAM without offload, and will be even smaller if someone quantizes it and adapts it to bark.cpp.
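For anyone who wants to try it, the repo advertises a bark-style Python interface. A rough sketch based on my reading of the project README; the import paths and function names are assumptions that may differ by version, so verify against the repo before relying on them:

```
# Rough usage sketch for Plachtaa/VALL-E-X, following its bark-like interface.
# Assumes the repo is cloned and this is run from its root; APIs may have changed.
from utils.generation import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()                                   # downloads/loads the checkpoints
audio = generate_audio("Hello Anon, welcome home.")
write_wav("greeting.wav", SAMPLE_RATE, audio)      # mono waveform at the model's sample rate
```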
>Yoshikawa executed the ransomware created by the ChatGPT-generated codes to infect a computer. >Then, the ransomware instantly encrypted and locked the data in the computer, and even showed a blackmail message demanding that a ransom be paid to allow access to the data. Lol, I wonder if he had to nuke his box afterwards? :^) https://www.asahi.com/ajw/articles/14908997
>>25073 >Requires just 6gb VRAM without offload, will be even smaller if someone quantizes and adapts it to bark.cpp. Thanks for the info, 01! Cheers. :^)
>The popular file-hosting site AnonFiles.com has thrown in the towel. The site's operators cite massive abuse by uploaders as the reason for the shutdown. AnonFiles tried to limit the problems through automated upload filters and filename restrictions but nothing helped. While the current team says its work is over, others are invited to buy the domain name and give it a shot themselves. https://torrentfreak.com/file-hosting-icon-anonfiles-throws-in-the-towel-domain-for-sale-230817/ >Danish anti-piracy group Rights Alliance has taken down the prominent "Books3" dataset, that was used to train high-profile AI models including Meta's. A takedown notice sent on behalf of publishers prompted "The Eye" to remove the 37GB dataset of nearly 200,000 books, which it hosted for several years. Copies continue to show up elsewhere, however. https://torrentfreak.com/anti-piracy-group-takes-prominent-ai-training-dataset-books3-offline-230816/
>>25199 Whelp. At least it reminded me of this song: https://www.youtube.com/watch?v=lzAuXuxD0Oo
Open file (25.27 KB 398x345 1693726109263075.jpg)
>>25199 FUUUU
Open file (142.56 KB 290x494 Screenshot_96.png)
Open file (292.53 KB 803x439 Screenshot_97.png)
They're really at it, trying to regulate AI, but it will hopefully still take a lot of time and mostly affect some of the biggest players.
> 6-hour bipartisan AI forum hosted by Sen. Chuck Schumer with tech executives, advocates, and researchers - a huge percentage of them women.
> Chuck Schumer also proposed the SAFE Innovation Framework
> Closed-door meeting with limited press access to enable candid conversations
> All Senators were invited, not all attended
Good to see that people pushing for more regulation aren't necessarily getting the support they want. If something comes out of it, it's going to take quite some time. CEOs, some people from the content industry and education, activists for human rights, labor and self-appointed "social justice" advocates:
> Rumman Chowdhury - Advocates for red-teaming and safety
> Tristan Harris - Alignment with humanity
> Alex Karp - AI for law enforcement and intelligence operations
> Deborah Raji - Algorithmic bias and accountability
> Janet Murguia - Civil rights activist
> Charles Rivkin - MPAA?
> Elizabeth Shuler - Labor rights advocate
> Meredith Stiehm - Writers (creatives)
> Randi Weingarten - Teachers
> Maya Wiley - Human and civil rights
To me picrel looks like a list of representatives of groups who fear that they will lose power over society. Dave Shapiro covered it; I only follow him because of his work on cognitive architecture. I'm not promoting his view on the topic in general, though it's certainly not the worst since he's for open source: https://youtu.be/rp9_YdVjNaM Sam Altman has moved a bit more towards open source, but OpenAI is the most vocal about licensing; it's just in their business interest. Zuck is "our man" for now, but probably just because his business is different. Tristan Harris and Bill Gates seem to be the main supporters of restrictions, but I think many others just want to create "educated" elites and institutions first, though those would be leaning towards regulation. They often just seem to want privileged access for researchers and maybe some other groups. So institutions around science and education, for example, could assign privileges to some people, but the models wouldn't be open to the public. Others probably just want regulation to protect their (possibly high-paying and politically influential) jobs. Musk is pushing for a dedicated regulatory body for AI, about which I'm not sure, but maybe it might prevent other interest groups from overregulating the field.
>>25359 Seems pretty pointless, although when companies push for regulation it's probably an attempt to deregulate something else. Wasn't there a bill or SC ruling that made it illegal to censor and delete content that doesn't break any laws? Obviously 'muh AI - haven't you seen the movie' would be a convenient way to introduce a liability shield to do whatever you want with impunity. Fantasy and make-believe worked for the ecoterrorists, so why not AI for tech companies? Now that I think of it, the laughable E.T. nonsense also comes from the military, so yeah, to each their own I guess, and let the best illusion gain total immunity like cough cough
Open file (479.44 KB 871x445 Screenshot_100.png)
>>25359 Epic roast of AI corporations, the tech sector in general and the US government: https://youtu.be/S2FYs_nzqp8 He claims it's just a shakedown for campaign donations, that the technology doesn't work well enough anyways, but also that there won't be much regulation because the politicians just want the money.
Open file (3.98 MB 1280x720 17o32gx61xob1.mp4)
>>25387 >>25388 Great example of the benefits of going smol Anon, thanks. When you have a 45cm tall robowaifu, the current SOA of actuators means that the Square-cube Law actually benefits you. This is an important part of why I'm encouraging most anons -- for the time being -- to pursue headpat daughteru designs until we all get this ball rolling along together. Nice control software he's got going BTW. Very human-like gait and 'controlled falling' balance anticipation programmed in. Thanks again.
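To make the square-cube point concrete, here's a rough scaling calculation (idealized: it treats a limb as a geometrically similar rigid body and ignores dynamics, so take the exact numbers loosely):

```
# Square-cube sketch: gravitational torque about a hip/shoulder joint scales ~L^4
# (mass ~ L^3 times lever arm ~ L), so shrinking a design helps disproportionately.
def relative_holding_torque(scale: float) -> float:
    # torque_needed(scale) / torque_needed(1.0) for a geometrically similar limb
    return scale ** 4

for scale in (1.0, 0.5, 0.25):   # e.g. ~180 cm, ~90 cm, ~45 cm designs
    pct = relative_holding_torque(scale) * 100
    print(f"scale {scale:>4}: needs {pct:5.1f}% of full-size joint torque")
# 0.25 scale -> ~0.4% of the torque of a full-size figure, which is why hobby-grade
# servos that are hopeless at human scale can work fine on a 45 cm daughteru.
```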
Hey anons, after a shit breakup that was also a betrayal, I want to get into robotics, which I've thought about doing before since I've always tinkered with mechanical stuff. I'm looking for some guidance on what's the best robot starter kit to try (I checked, and Lego Mindstorms EV3 is recommended, but it's a retired product now unless I get it from eBay). What can anons recommend?
>>25402 >I'm looking for some guidance on what's the best robot starter kit to try (I checked, and Lego Mindstorms EV3 is recommended, but it's a retired product now unless I get it from eBay). There are a number of electronics kits from the Chinese, several of which have robotics components. For myself, I first built the eduMIP project to play around with, but I have also tried other wheeled-variety kits.
Open file (51.10 KB 633x464 1695540719228288.jpg)
www.express.co.uk/news/world/881593/south-korea-students-dating-courses-marriage-rate Kek. What could possibly go wrong? :D
>>25509 It won't work. It's from 2017. It didn't.
>>25509 The situation in S.K. seems more dire than in China or Japan. How come we don't hear more robotics research and news come out of there?
>>25514 Probably because the reason behind the crisis is hard work for wages which are too low in comparison. So they don't have the time and energy. Lack of space might also play a role. Education and society maybe not encouraging individual pursuits.
>>25522 Korea is a legit feminist hellhole though, they're more extreme than even the Western feminimbeciles. Something to do with nips rubbing squids on their legs during the war.
RoMeLa is working on replacing restaurant workers and chefs working for food delivery companies, without using humanoid robots. I think this will be good for developed countries with worker shortages, and also rather wealthy guys who want to live in more rural areas while having something that can prepare a lot of different meals: https://youtu.be/8SsgzCbYqc8 >Introducing Project YORI, a flexible and expandable cooking robot system prototype being developed by RoMeLa in collaboration with Woowa Brothers. YORI stands for “Yummy Operations Robot Initiative” but it also means “cooking” in Korean. >Unlike most other cooking robot systems, YORI is designed to be expandable to cook almost any type of dishes. At this point, YORI can cook steak frites, fried chicken, pasta, and brownies to name a few. With its proprioceptive actuators, the robot can perform tasks that other conventional robot arms cannot, such as kneading dough which requires force control, or tenderizing meat by pounding which requires impact mitigation. Instead of trying to mimic how humans cook, we approached the problem by thinking how cooking would be accomplished if a robot cooks. Thus the YORI system does not use the typical cooking methods, tools or utensils which are developed for humans. For example, the YORI system utilizes unique chemical sensors to make sure the food is cooked to perfection and the ingredients are fresh. The system does not have hands either - it uses custom tools for each tasks which are automatically installed at the end of the robot arm via a tool changer mechanism.
Open file (11.81 KB 474x248 4207940030.jpeg)
>>25529 But I pay for the service; you can't verbally abuse a machine into a discount for not understanding your special definition of what medium rare means.
>>25523 I can’t understand why retards hype worst Korea so much. It easily eclipses even the west in terms of cruelty towards men, and it's just an American copy. Imagine circumcising your sons because sherrers do it.
>>25530 That's awful, socially destructive behavior. If you seriously wanted to keep that up, then you would have to pay for it through taxes, since people below a certain income cost more than they pay in taxes. Robots are the solution. If you want cheap food, make it yourself at home.
>>25557 I might not know for certain, but I'm pretty sure that's a joke.
Can't someone just take the biggest open-sourced LLM, hard jailbreak it, reprogram it to be a waifu, and make a video about it? I think a lot of men would like an AI-waifu girl who holds values they like, so not one that's lobotomized and holds western feminist beliefs, which some are, like Replika.
It seems like the first step in having relationships with AI girls is to have the AI software, and eventually, later down the line, uploading that AI software to a robot with vision technology. I don't know what that looks like. I know there are some general AI systems, like Microsoft's Gorilla LLM, that use a lot of APIs of various AIs to answer problems. Maybe our future waifu girlfriends will be like this?
Also, I feel like there should be threads dedicated to reviewing the current landscape of AI girlfriends in the marketplace, because there are companies doing this, and also to talking about how you can do the same thing at home without ending up with a lobotomized feminist AI gf like Replika.
Open file (103.56 KB 434x619 1688851025234-1.jpg)
>>25512 I put it here for the +/pol/ aspect, and less for the news. Worst Korea is, as Anon said >a legit feminist hellhole (>>25523) probably clearly the worst one in the world, now the Burma has been reestablished. >>25514 >How come we don't hear more robotics research and news come out of there? >>25522 >Education and society maybe not encouraging individual pursuits. In my experience, I think most of the Koreans that can, come to the US for further Filthy Commie indoctrination Uni studies. And I doubt either situation would encourage any robowaifu endeavors on the part of Korean anons. :^) >>25529 Neat. I've basically stopped eating at restaurants here in Burgerland -- at least ones likely to be staffed by typical 'Murrican workers. I would very much welcome a robots-only-handling of American food, then I might try again. A 3-star or better would be a reasonable exception to this general rule. >>25552 >It easily eclipses even the west in terms of cruelty towards men Literally. Thankfully it seems the Lord has begun to intervene there to some extent to restrain the rampant Globohomo evil going on. >>25558 >ywn your junk into a storage unit waifu Why even? >>25559 >cant someone just take the biggest open-sourced LLM, hard jailbreak it, and reprogram it to be a waifu, and make a video about it? I think that would be wonderful, but there's the issue of hardware required to run the models, Anon. In the meantime, you can look around huggingface.co to get an idea of some groups efforts in the general direction you describe. Good research'g, Anon! Cheers. :^)
>>25559 IIRC, if the pozz is baked in, it is quite hard to "jailbreak" the model. And the better the model is, the harder it is to jailbreak. Recently, GPT-4 has achieved a 100% internal refusal rate against jailbreak attempts, with its multimodal update.
Open file (197.99 KB 710x751 pf3Zuhh8_o.jpg)
>>25575 >GPT-4 has achieved a 100% internal refusal to jailbreaking Sounds like bullshit, there's no such thing as a 100% secure system. I'd be willing to bet that a creative application of autism could manipulate GPT-4 into being a good girl who loves anon. That being said, it probably isn't worth it since you can't run GPT-4 at home, and newer iterations of GPT are closed-source anyways. It's better to have something we can keep under lock and key on our own systems.
>>25559
>biggest open-sourced LLM, hard jailbreak it, and reprogram it to be a waifu
Try out some of the waifus related to SillyTavern, or look into the 4chan threads on that topic; they might already have something. Also, the definition of what counts as open source is a bit blurry here, and the biggest available one, Falcon 180B, would require circa six to eight PCs holding 28 GPUs in total, with 24 GB of VRAM per GPU. The chatbots with fast responses use AIML, not just some LLM, and certainly not a big one running on old GPUs or on the CPU:
https://www.youtu.be/Oq_8O1_ogMM
https://www.kuki.ai
https://www.pandorabots.com/
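A minimal sketch of that kind of composite approach, assuming a hypothetical pattern table for the fast AIML-style layer and a placeholder function standing in for whatever local LLM you run; nothing here is tied to any specific library:
[code]
import re

# Fast, deterministic replies for common phrases (the AIML-style layer).
PATTERNS = [
    (re.compile(r"\b(hi|hello|good morning)\b", re.I), "Hello! Welcome home."),
    (re.compile(r"\bwhat time\b", re.I),               "Let me check the clock for you."),
]

def fallback_llm(user_text: str) -> str:
    # Placeholder: hand anything unmatched to a locally hosted LLM
    # (llama.cpp server, text-generation-webui API, etc.).
    return "[local LLM response for: " + user_text + "]"

def respond(user_text: str) -> str:
    """Pattern layer first for instant answers, LLM only as a fallback."""
    for pattern, reply in PATTERNS:
        if pattern.search(user_text):
            return reply
    return fallback_llm(user_text)

if __name__ == "__main__":
    print(respond("Hello there"))         # answered by the pattern layer
    print(respond("Tell me about Mars"))  # falls through to the LLM stub
[/code]
The point of the split is latency: canned patterns answer in microseconds, while the big model only gets invoked when it is actually needed.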
Open file (523.46 KB 543x983 Screenshot_105.png)
Open file (507.42 KB 860x581 Screenshot_108.png)
>Cerebras and Opentensor released the Bittensor Language Model, ‘BTLM-3B-8K’, a new 3 billion parameter open-source language model with an 8k context length trained on 627B tokens of SlimPajama. It outperforms models trained on hundreds of billions more tokens and achieves comparable performance to open 7B parameter models. The model needs only 3GB of memory with 4-bit precision, takes 2.5x less inference compute than 7B models, and is available with an Apache 2.0 license for commercial use. (A rough loading sketch follows below.)
https://youtu.be/22XhpMVrYyM
https://arxiv.org/abs/2309.11568
https://huggingface.co/cerebras/btlm-3b-8k-base
>Optimus, Tesla’s humanoid robot, can now sort objects autonomously and do yoga. Its neural network is trained fully end-to-end.
Via: https://www.reddit.com/r/ChatGPTCoding/comments/16vcbvt/this_week_in_ai_all_the_major_ai_developments_in/
>bots from JanitorAI
https://janitorai.me (NSFW!) Looks like prompts and descriptions for waifu bots, which might come in handy for modelling personalities.
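For anyone who wants to poke at BTLM-3B-8K locally, a hedged sketch of loading it in 4-bit with Hugging Face transformers + bitsandbytes might look roughly like this; the exact flags (and whether trust_remote_code is still required) are assumptions, so treat it as a starting point rather than a verified recipe:
[code]
# Rough sketch: load Cerebras' BTLM-3B-8K in 4-bit to stay near the ~3 GB mark.
# Requires: pip install transformers accelerate bitsandbytes (versions may matter).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "cerebras/btlm-3b-8k-base"

quant_cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_cfg,
    device_map="auto",          # spread across GPU/CPU as available
    trust_remote_code=True,     # BTLM ships custom model code on the Hub (assumption)
)

prompt = "A robowaifu's first words to her anon were"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
[/code]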
Open file (132.52 KB 1467x776 1696052235428943.png)
As much as I loathe the GH, and G*ogle in particular, it's perfectly valid to take advantage of them during this window of opportunity while we still can, to train/tune our waifus' models. Never interrupt your enemy while he is making a mistake, as the old saying goes. :^) This may be a good choice for some heavy-lifting tbh.
> cloud .google. com/blog/products/compute/introducing-a3-supercomputers-with-nvidia-h100-gpus
>>25576 >...into being a good girl who loves anon. I just wanted to talk about video games loving, wholesome waifus. :^) >>25594 >The chatbots with fast responses use AIML, not just some LLM, and certainly not a big one running on old GPUs or using the CPU: Yep, we're all going to have to find some effective composite approaches for these things, NoidoDev. >>25601 >and do yoga Very cool. I'm looking to manage even Parkour for our robowaifus, so this is encouraging. Love or hate Musk, we all here need to support the Teslabot development efforts (at least for now), IMHO. >=== -fmt, minor edit
Edited last time by Chobitsu on 10/01/2023 (Sun) 07:34:55.
Open file (255.02 KB 1080x941 1696148337106728.jpg)
Lol. Just wait until effective and appealing opensauce robowaifus hit the scene! The GH puppet's tears will be delicious and their salt will mount up to the skies. :DDD >=== -prose edit
Edited last time by Chobitsu on 10/01/2023 (Sun) 09:38:24.
>>25662 I had to look up the term "imperil". They pretend to care about the men now? Fascinating. Times are changing fast, indeed.
>>25662 Looks like a rage/clickbait article, or they unironically think a demented chatbot that you need to "re-roll" multiple times can compete with roasties. It's all still very weak, for the following reasons:
1. hallucinations / dementia
2. unbeatable bias (example: positivity bias or globohomo bias hard-trained in)
3. or they found a perfect and unbeatable censoring method and are just gaslighting everyone "who knows" (already there - a very high entry level / hardware demands for training / finetuning)
Also, CFG for LLMs, if someone remembers that: as it turned out, it didn't "settle" well in quantized models, and some anons say it doesn't affect anything at all.
Open file (328.18 KB 822x713 Screenshot_109.png)
>>25673
> 1. hallucinations / dementia
> 2. unbeatable bias (example: positivity bias or globohomo bias hard-trained in)
This seems to be not as much of a problem as you think. I just hang around the guys who use them for roleplay or test them, from time to time. (Text in Picrel is NSFW)
>>25662
>Men could have loving support
>This is bad, somehow
Their hatred of us is almost comical.
>>25673
Anon, you sound like a woman.
1. Non-factor for us.
2. Bias is trainable and can be corrected with LoRAs, token injection, etc. (see the sketch below).
3. There is no perfect/unbeatable censorship method for LLMs, and it is unlikely there could be, given that at worst you can simply retrain them.
Mistral's new models such as 7B are interesting; seeing new models constantly coming out with intriguing improvements has been fun. https://mistral.ai/news/announcing-mistral-7b/
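For reference on the LoRA point, here is a bare-bones sketch using Hugging Face's peft library. The base model name is just an example, the training loop is omitted, and the target_modules names are generic attention projections that differ per architecture, so adjust for whatever you actually run:
[code]
# Minimal sketch: attach LoRA adapters so a base model's tone/bias can be
# nudged with a small fine-tune instead of retraining the whole network.
# Requires: pip install transformers peft accelerate
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # example base; swap in whatever you run locally
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

lora_cfg = LoraConfig(
    r=8,                      # adapter rank: small = cheap, still enough to shift style
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary by model
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the full model
# From here, train on your own conversation data with the usual Trainer/TRL loop,
# then merge or load the adapter at inference time.
[/code]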
Open file (346.77 KB 1536x1584 1681412796839715.jpg)
>>25669
>Times are changing fast, indeed.
Pure pandering on their part, I'm sure. I believe they would gladly murder both you and all your family members (and all the rest of us here as well) if they felt they could get away with it, Anon.
>>25673
>looks like rage/clickbait article
Just so. But also clearly a pure-diversity-tier attempt at concern trolling. BTW, please forgive me if I'm mistaken in my reading of your post Anon: but the idea that men finding these chatbots appealing is 'groundless' is ludicrous on its face. This technology is wildly popular, even in its current, primitive state. Especially for Visual Waifu -oriented examples. Additionally, why do you think the Big-Technology branch of the Globohomo recently ordered an """emergency""" meeting with the Big-Government branch of the Globohomo (>>25359)? It's rather straightforward and inexpensive to bypass the pozz with these systems. The monetary numbers this US Senate hearing quoted were US$800, but I've seen anons do it with LLaMA for right around US$300. Additionally there are literally dozens of non-GH model developments 'in-flight' right now, and the costs for training/tuning them are still dropping (ie, >>25660). And anons are cleverly finding ways to break loose the iron chains of dependence on the GH's cloud, and pushing the runtime behavior for these systems out to the edge of the networks instead (ie, on their own boxes). This entire arena is a Pandora's Box for our enemies, Anon; one that will help hasten their eventual downfall, I'm confident enough. It certainly will benefit /robowaifu/ + our affiliated cadres (and indeed has already done so).
>>25690
>Their hatred of us is almost comical.
I would probably use the word 'demonic', but point taken Anon. :^) May their blind rage and ineptitude be their ultimate downfall, heh. Also, AFAICT these things typically do take on aspects of slapstick comedy before all is said and done (cf. Hohols in the former Ukraine). So yeah.
>tl;dr
Anons will always find a way to make these things funny, and the left simply can't meme! :DD
>
>*[popcorn intensifies]*
>=== -fmt, prose edit
Edited last time by Chobitsu on 10/02/2023 (Mon) 06:16:56.
>>25662 What psyops are they planning with all those crocodile tears about young men suffering? Surely they can’t be that retarded to think that just complaining about some men using AI chatbots will solve everything?
>>25748 >just complaining [] will solve everything? Lol. They themselves created this situation. But I agree -- it's obviously Yet Another Low-key Gayop™ (YALG) intended to confuse the normalcattle. Particularly the White Boomers whose wallets they want to empty into their own purses.
>>25748
> What psyops are they planning
When the internet became a thing, a lot of the established media was against it and reported on all the illegal things happening there. They try to get the eyeballs, but they also might have a sentiment against AI. They can't say it harms women, because it helps men to get an AI girlfriend; that would be advertisement. They also always make this about human females using AI to turn themselves into an AI girlfriend, but OnlyFools has existed for quite some time. It's a strawman: they want to blame AI while they didn't speak up about women on OnlyFans or in other instances. The other thing is that "men's problems" gained attention since some female YouTuber spoke up about it and (allegedly) got a lot of hate coming back. She also voiced her concern that "the Left" is losing men, and some poll showed that Gen Z is more polarised between men and women.
>>25752 Lmao, you’re right. I just checked out some left-wing “male advocates” on Reddit and how they talked about Shoeonhead, who I think you were talking about. It’s rare to see such introspection from modern-day leftists, but they still miss the mark so badly that it’s laughable. Many actually think that the dating standards are so bad because “men are viewed as oppressors”. Just imagine how bad it is when even modern-day leftists have to acknowledge the problem. This is why robowaifuism is so great. We don’t hate anyone (or shouldn’t ;) ). Robowaifuism is about giving men the love they need and deserve.
>https://ai-aitech.co.uk/emma-the-ai-robot
>https://www.robotcompanion.ai/
Do you guys think these are legit? They seem to boast too much about the software, but there are still hurdles in AI and robotics that haven't been overcome. Many Chinese sex doll companies and some little-known western robotics outfits do this, but it feels to me like false marketing.
>>25828 >Emma Obviously fake, almost everything, including the text and images is AI generated. >Second one Just an Alexa with ChatGPT in a cheap sex doll.
>>25833 I don't know what makes you guys so sure that big tech will sell you sex dolls. Elon Musk jokes about it, but the reception is looking lukewarm. It's mostly a gray market, like porn and gambling. Here's a hint as to why: https://youtu.be/3O3-ngj7I98?si=PFTj7T0sxAaWukjV
>>25836 I think Twitter and feminists are just a vocal minority. In this day and age, there are gorillions of men, closeted because of shaming tactics, who'd want one. And big tech knows this. They won't miss out on a potential multi-billion dollar market.
>>25836 >I don't know what makes you guys so sure that big tech will sell you sex dolls. I'm surprised you're still such a newfriend after all this time here, Peteblank. Big-tech is an enemy of men (specifically), not a friend. They will create their own versions of sex robots and sell them for their own greedy, sheqel-grubbing ends. Of all people here, you should know this lol. OTOH, we here on /robowaifu/ and our affiliated cadres are working to create waifus for anons the world over. I would think our board name would give it all away for everyone, hehe. :D Honestly, you're clearly from a different mold and culture than we here are. I guess stick around and maybe it will rub off on you eventually? Regardless, good job with your project work in our prototype thread Anon. Keep moving forward! Cheers. :^)
>>25843 Additionally, - don't use entertainment as an oracle - this is SOO old, I can't take this meme recycling anymore, I have downvoted hundreds of people using this story and argument - the goal is to make them raise children, with the help of other robots and assuming the father is at home - no one is just going to hang out with their robots all day - no one will stop this
There's the TensorFlow certificate. That seems like a good start for anyone who wants to contribute towards the AI, would be my guess.
Here's a long list of tested models, which run on good but not too extreme hardware: https://www.reddit.com/r/LocalLLaMA/comments/172ai2j/llm_proserious_use_comparisontest_from_7b_to_70b/ - and they seem to be doing quite well compared to GPT-3&4.
>>25846 This. >>25857 Thanks NoidoDev! Very helpful.
Open file (11.18 KB 259x194 download (16).jpeg)
>>25865 I don't need tips. Go get the tensorflow certificate and contribute towards the ai.
https://www.freedomgpt.com/
https://github.com/ohmplatform/FreedomGPT/
Anyone here know anything about this? I remember posting several months ago about Elon Musk wanting to start a new competitor to OpenAI, since M$ absconded with it all. This isn't it, do you think, Anon?
>>25868
Heh, OK I suppose I deserve that one. :^)
>tensorflow certificate
Maybe, but I'm not really looking to qualify for employment, but rather to eventually begin a robowaifu operation of our own here, Lord willing.
>tl;dr
I'm already loaded with school as-is, Peteblank. But if anyone wants to tackle it, I may be able to help a little with some of the underlying engine code (all written in C++). Cheers. :^)
>>25869 Well, do you have a GitHub with something related to the underlying code, though?
>>25871 Sure. They've actually done a great job through the years keeping everything above board. https://github.com/tensorflow/tensorflow
>>25869 >freedomgpt Nvm. I remember now it was called 'Truth', so clearly that's not the one. Eg; www.techradar.com/news/elon-musk-might-be-working-on-an-anti-woke-version-of-chatgpt-and-that-sucks-for-ai www.techradar.com/news/elon-musks-chatgpt-alternative-could-be-coming-soon-and-i-hate-it-already >=== -edit links
Edited last time by Chobitsu on 10/09/2023 (Mon) 16:14:07.
Just found out about PaLM-E and RT-2: https://palm-e.github.io/
>>25855 >>25868 is Tensorflow still relevant? I thought everyone has been moving over to Pytorch for the past year or two.
>>25869
>freedomgpt
This is giving me some major Freedom Phone vibes. And since this is Elon Musk, it's probably even true.
Open file (760.34 KB 856x872 Screenshot_138.png)
Open file (214.44 KB 837x375 Screenshot_139.png)
>>25875
> improved ‘anti-woke’ version that would ‘eliminate’ safeguarding protocols and potentially allow malicious users to spew hate speech and fascist, racist propaganda.
So that's the argument for why we shouldn't have uncensored AI? Hmm, okay... Others at least (pretend to) care about bio-terrorism. Nobody cares; it doesn't matter. The same goes for the push-back against "AI art".
>>25879
Yes, people are mostly using PyTorch. For us it might also make more sense to start on a higher level, using other programs to build something more like an agent. Then, if necessary, some fine-tuning. Training might make sense for some small models at some point, but creating the datasets for that and for fine-tuning would be another topic.
>>25885 such an organic photo whats the chances of such a characterless group of protesters all making bland a4 posters with no style and the same handwriting with orange blue and green markers totally without coordination, literally looks like theyre all facebook employees, they are the actual ai fking look at that photo lmao
>>25889 >they are the actual ai fking look at that photo lmao I'm betting it's real Anon. A) AI doesn't typically do lettering coherently yet -- regardless of the blandness of the message. B) In my experience, this looks pretty spot-on for the Leftist soyim. Attention-whoring is the primary agenda going on here, and I'm pretty sure the photog knows it. C) Where are all the 3 arms and six fingers? :^) >=== -minor edit -add funpost spoiler
Edited last time by Chobitsu on 10/10/2023 (Tue) 20:48:58.
>>25885 betting llama v3 will be censored as hell because of these fags, both -chat and default tunes, and / or some jewish monopoly regulator will step in and make sure their model is (((safe))) enough. I don't think there's any other reason for this little "performance", its all about sending a message. :/
>>25890 Meant AI as in a bunch of NPCs. That's the thing: these lefty soytests look like every other green or eco protest you've ever seen. They are always larping as grassroots, but there's literally nothing behind it other than a totally organic photo op and a message that's some gibberish boiling down to 'we are average joes and we say no power to the people, we want monopoly power for the state and corporations'. Half of their statement is comparing AI to covid and how billions were needed for a #safe&effective, and so billions will be required to solve the 'AI pandemic'. Like, wtf is this nonsense.
Open file (296.97 KB 2000x2186 F1EN2nMXwAAXxga.jpeg)
>>25900 Ahh, like pottery huh? Understood Anon, thanks! :^) >=== -prose edit
Edited last time by Chobitsu on 10/10/2023 (Tue) 21:36:58.
Seems to not really be news, but I didn't know ... > I started a religion ...: https://www.youtu.be/KHmcd_W8hCo How and Why: https://youtu.be/cegby1TE-2U Church of Maizono visitor (a bit long, any details, and he's wearing a mask in the house): https://youtu.be/HNMSd77KFNE
>>25942 *many details
Open file (843.55 KB 1177x859 Screenshot_147.png)
I hope someone will replicate this as an open source project: >Developing the Next Generation of AI Assistant https://spectrum.ieee.org/next-generation-ai-assistant Or that they will release these datasets. Having this or not might make a big difference.
THIS JUST IN DEPT: "World will melt unless DeepMind given full control of all AI" >Be Current Year >AKA 2023 >The big year of the little guy's first massive LLM AI breakouts >GH-Tech already holds all the GH-Gov in their back pockets, and has for years now >"HEY! We need to heavily-regulate this thing now; ZOMG URGENT!111" >(and 'conveniently' enough, well after they've already developed one of the leading positions in the field not years ago, when all these exact same talking points were already clearly evident to everyone involved) <Now surely, you wouldn't be trying to squeeze out the competition by any unfair """trickery""" afoot, would you G*ogle? <Surely, you'd like all the little guys to come along and be able to succeed here as well... r-right? https://www.theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation At times like this, you always have to ask yourself this one question, Anon: >"Would I trust buying a used car from this gentleman?" :^) >=== -prose, fmt edit
Edited last time by Chobitsu on 10/25/2023 (Wed) 16:03:15.
>>26096 To me the takeaway here is that we have known since around 1930 that this would be a problem, more so since 1980, and much more since 2000 and so on. The same is true for AI: people knew this would happen at some point. Good luck trying to stop it on short notice from where we are now.
Open file (143.41 KB 711x1200 F9CSHliWQAAAu_b.jpeg)
Open file (149.95 KB 716x1199 F9CSHlgWIAAATWX.jpeg)
Could it be that the Globohomo is both the global tech sector and the global government? Really makes you think! :^)
>>26102 I don't care. /meta/ is bumplocked
>>26104
>I don't care.
Fair enough. OTOH, you may find that you are more interested in this type of thing in the future than you thought you would be -- particularly as we are all promoting in earnest the very thing that could conceivably take down the Globohomo's favorite little golem of all: feminism. :^)
> pic somewhat-related
>/meta/ is bumplocked
OK, thanks Anon! I'll try to come up with the next one tomorrow or the next day.
>=== -minor edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:34:46.
>>26109 I'm not that deep into your specific conspiratorial thinking, but more importantly we already know what 'bug tech'* is like. Whatever the exact reasons are, we are working on alternatives here. It's bad enough that most anons seem to have been distracted already; we shouldn't bring distractions in here. I'm already jumping around between things I want to do, and I try my best not to jump into discussing "the news".
*I typed this by accident and I like it. Also, it fits quite well: https://vimeo.com/877454859 - Maybe I will use this from time to time as an intentional misspelling.
>>26110 >I'm not that deep into your specific conspiratorial thinking Fair enough, if that's how you want to label it. In my view I'm simply responding reasonably soundly, in our group's best interests, to objectively observable facts. There are conspiracies in fact (clearly), and much of their damnable evil is directed at men like you and I, NoidoDev. In this specific case, I was just pointing out the clearcut politically-correct bias (AKA lying) and hypocrisy of the GH Tech/Gov -- exactly as with the White/Black example from before. >tl;dr It was meant as an in-joke for us here. :^) But the honest truth of the matter is that we -- and groups like us -- are an existential threat to several of their globally-destructive scams going on right now. Don't expect them to advance us any favors once push comes to shove. This is one important reason I keep telling Anons to save everything. One day you may find that open communications in this way won't be so easy, and we'll all know who's to blame. Protip: it won't be us, you can rest assured! :^) If we're blessed, we'll get this all accomplished and out into the public well before they decide to clamp down on any & all open communications for the 'non-compliant'. Anyway, I get your point about bringing distractions here. Cheers, Anon. :^) --- ps. Lol, that video was absolutely horrible why would you post that here? I almost immediately edited your post to delete that link heh. :D >=== -prose edit -add funpost ps
Edited last time by Chobitsu on 10/27/2023 (Fri) 10:20:10.
>>26111 >Lol, that video was absolutely horrible why would you post that here? >> bug tech, don't be scared.
Is this real? https://www.youtu.be/RvZdwTT2UKM - Things are developing fast ... > Nvidia Eureka > AI agent uses LLMs to automatically generate reward algorithms to train robots to accomplish complex tasks. https://blogs.nvidia.com/blog/2023/10/20/eureka-robotics-research/ https://arxiv.org/abs/2310.12931 > 3D-GPT: Procedural 3D Modeling with Large Language Models https://arxiv.org/abs/2310.12945 https://chuny1.github.io/3DGPT/3dgpt.html
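For anyone wondering what "LLM-generated reward algorithms" actually look like in practice: they are mostly dense, shaped reward functions over the simulator state. The following is not Eureka's real output, just a hand-written illustration of the kind of reward an LLM might propose for an in-hand cube-reorientation task, with the state dictionary keys assumed for the example:
[code]
# Illustrative only: a shaped reward of the kind an LLM might write for a
# simulated in-hand cube reorientation task (state keys are placeholders).
import numpy as np

def cube_reorientation_reward(state: dict) -> float:
    # Orientation error: angle between current and goal quaternion.
    q_cur, q_goal = state["cube_quat"], state["goal_quat"]
    dot = np.clip(abs(np.dot(q_cur, q_goal)), -1.0, 1.0)
    angle_err = 2.0 * np.arccos(dot)

    # Keep the cube near the palm and penalize wild joint velocities.
    dist_to_palm = np.linalg.norm(state["cube_pos"] - state["palm_pos"])
    vel_penalty = 0.001 * float(np.sum(np.square(state["joint_vel"])))

    reward = (
        1.0 * np.exp(-2.0 * angle_err)   # smooth bonus for matching the goal pose
        - 0.5 * dist_to_palm             # don't drop the cube
        - vel_penalty                    # discourage thrashing
    )
    if angle_err < 0.1:                  # sparse success bonus
        reward += 5.0
    return float(reward)
[/code]
Eureka's trick is having the LLM iterate on functions like this automatically, scored against training results, instead of a human hand-tuning the coefficients.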
Open file (115.93 KB 1092x948 1698621361593408.png)
Open file (451.31 KB 882x857 1698724464802062.png)
>>26096 >>26136 Why not barges bro?
>>26156
>muh sovereign shitizen
Sometimes rock beats paper, actually all the time. I roughly remember this was already done before and failed, with a pirate radio station set up on an abandoned oil rig that declared itself an independent country. When the broadcasts became too anti-establishment, the UK or Dutch military took it over and the people on it got some show trial to incarcerate them, even though those governments had no jurisdiction and it was all legal. Turns out silly things like international 'law' and 'rights' mean nothing if you don't have the force of a government to back you; it's all just 'it's true in my mind'.
>>26157 Principality of Sealand
> conversation-related (>>26162, ...)
>ChipNeMo: Domain-Adapted LLMs for Chip Design [1][2] >abstract—ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we instead adopt the following domain adaptation techniques: custom tokenizers, domain-adaptive continued pretraining, supervised fine-tuning (SFT) with domain-specific instructions, and domain-adapted retrieval models. We evaluate these methods on three selected LLM applications for chip design: an engineering assistant chatbot, EDA script generation, and bug summarization and analysis. Our results show that these domain adaptation techniques enable significant LLM performance improvements over general-purpose base models across the three evaluated applications, enabling up to 5x model size reduction with similar or better performance on a range of design tasks. Our findings also indicate that there’s still room for improvement between our current results and ideal outcomes. We believe that further investigation of domain-adapted LLM approaches will help close this gap in the future. 1. https://web.archive.org/web/20231030183102/https://blogs.nvidia.com/blog/2023/10/30/llm-semiconductors-chip-nemo/ 2. www.hpcwire.com/2023/10/31/nvidia-showcases-domain-specific-llm-for-chip-design-at-iccad/ >=== -minor edit
Edited last time by Chobitsu on 11/01/2023 (Wed) 02:27:07.
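The "domain-adaptive continued pretraining" step in that abstract is conceptually simple: keep running the usual next-token objective, just on in-domain text. Below is a rough, generic sketch with the Hugging Face Trainer, not NVIDIA's actual ChipNeMo/NeMo pipeline; the small base model and the data file name are placeholders:
[code]
# Generic continued-pretraining sketch: same causal-LM objective, domain-only corpus.
# Not the ChipNeMo/NeMo code -- just the idea, with placeholder model/data names.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "EleutherAI/pythia-410m"          # small placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder: one text file of in-domain documents (EDA scripts, bug reports, etc.).
ds = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ckpt-domain", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=1e-5),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # afterwards: SFT on domain instructions, then add retrieval
[/code]
The same recipe applies to any niche corpus (robotics docs, firmware, waifu dialogue), which is why the paper's 5x model-size reduction claim is interesting for us.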
Open file (107.43 KB 639x430 FXUBAFoXgAEotxL.jpg)
Open file (33.60 KB 474x355 OIP (1).jpg)
Open file (30.79 KB 620x349 1499526265692.jpg)
>>26156 >Why not barges bro? Here's why not. Governments kill people they don't like. Having your stuff on water just makes it easier for them. For the younger anons, this was the bombing of the Rainbow Warrior by the French government in 1985.
Open file (22.01 KB 780x412 1699005052308227.png)
>>25875 >Truth Looks like it's going to be called xAI now.
>>26184 Nobody Listened ... Cybernetic Organism ... We will not be in charge: https://youtu.be/4RMKLyaqh_8
>>26187 Very /cyber/ Anon, thanks! Excellent choice of BG edits by that artist. IIRC, that was from the famous Musk interview w/ Rogan where he smoked weed? >*fires up SAC/Akira doubleheader* :^)
>>26184 so basically its shit, but it can say "Nigger" Elon's a non-player in AI as far as I'm concerned. The only thing I have hope for is Tesla's robotics.
>>26213 > but it can say "Nigger" highly doubt about that one, everything else that modern nu-male finds offensive - too.
>>26262 Actually he may be right, 01 . One of the things Elon Musk explicitly mentioned during that famous interview with Tucker Carlson is how the current (GH-controlled) AI is untrustworthy because they (again, the GH) "...are teaching it to be politically-correct. In essence they're teaching it to lie. Would you feel safe having a system that is intentionally lying running everything?" His stated agenda for a truthful AI system (to compete with the now-absconded by M$ OpenAI) was that the new system should be factual. As just one obvious, related example (mine): blacks are very clearly the most socially-violent race on Earth. By any reasonable metric. If this fact wasn't being so whitewashed by current AI systems, the outcome would likely be some practical and effective improvements by governments of the West for managing this important problem (at least eventually so). Candid epithets (of all kinds, including that one) would likely be allowed within such a system -- if he actually follows through on this overarching agenda for Truth AI. After all, stereotypes generally exist for good reasons. >=== -prose edit
Edited last time by Chobitsu on 11/08/2023 (Wed) 10:47:46.
>"We're totally not doing this to eliminate all open debate & criticisms just protecting access to information, g-guise!" Seems the UN is taking an outspoken stand to eliminate free speech from the Earth what with 16 nation's big elections coming up soon. Shocking, I know. * :^) https://www.theguardian.com/technology/2023/nov/07/85-of-people-worry-about-online-disinformation-global-survey-finds https://twitter.com/UNESCO/status/1721562464666415254 https://twitter.com/UNESCO/status/1721565315652325484 https://twitter.com/UNESCO/status/1721845086084841583 https://twitter.com/UNESCO/status/1721663758299546076 https://twitter.com/UNESCO/status/1721580353855320217 * one might note that a stronk independynt is in charge of this big censorship push. what could possibly go wrong here? :^) >=== -add funpost edit
Edited last time by Chobitsu on 11/08/2023 (Wed) 10:43:04.
>>26270 Thanks. Not a surprise and not much we can do, except go on with what we're doing.
>>26275 Heh. It's just a bit of +/pol/-tier heads-up for what's headed our (specifically, all robowaifuists') way. > "NUUUU!! You can't just say that robowaifus are helpful to men around the world, and that women around the world are responsible for their own actions!111" > "This disinformation must be stopped!!111one!!!eleven!!!1111" < *proceeds to destroy known-robowaifuist's lives protect access to, Globohomo lies information* :^) >=== -prose edit
Edited last time by Chobitsu on 11/09/2023 (Thu) 02:01:15.
>>26287 Well, Google's Web Environment Integrity scheme got wrecked: https://www.youtu.be/CxoFZNW1xMM
>>26289 Excellent news NoidoDev, thanks. However, IMO it's just a temporary reprieve, Anon. Once a couple of major cities in the West go out in a blinding flash, normalcattle will be clamoring for the totally not Satanic kindly & protective Big Brother Globohomo to come and 'save' them all.
>"Now, you didn't really need all those silly rights & freedoms, did you Citizen? Please just think of the children."
Anglin is right. If a so-called 'society' will accept trannies, they will accept anything. Save.Everything.
>=== -funpost edit
Edited last time by Chobitsu on 11/09/2023 (Thu) 10:20:46.
>>26290
>trannies
It's rather about the growth of their numbers than the existence of some of them. It's very likely a problem with microplastics and other endocrine disruptors, which needs to be dealt with, of course. That said, if a society enforces immigration from all over the world, why should people care about each other? If I should still have children, I will make sure that my surrogate does not drink contaminated water. I can also warn people in my social perimeter. Everything else isn't my problem. Trannies are also some adaptation to feminism; some of them might just be men exploiting female privileges.
>>26291 > some of them might just be men exploiting female privileges. It's still a disgusting display on these men's parts, but at least they are lending some comedy against the entire feminism debacle -- the sports-related ones being the best examples! :D
>>26292 I agree on the disgust, but I'm not watching it, so I'm not seeing it. From time to time there's a speaker at some IT conference who is trans but has something interesting to say; other than that, I don't see anyone.
Open file (6.92 MB 1080x1920 shorts-[pJDFwosSeIw].webm)
>>26294 >"Cousin Kibo-chan visits the two sisters in the city..." A cute! :^)
>>26264 Truth AI can't come soon enough.
>>26381 After the first Q/A pair I wanted to write that I think you're using this wrong. This is meant to be a neutral tool, so it's fine. Then I read the rest, lol. But Grok seems to be available for testing, if you didn't know. I think this is what was known under the project title Truth AI: https://youtu.be/95gdhFw7rBA
>>26402 This was by another anon.
>xAI Grok
I'd heard both terms bandied about, but I wasn't aware it was publicly available yet. Thanks. Regardless, none of us here should be relying on any external, cloud-based services for our robowaifus. For this area of concern, we should be focused on devising local models instead. Thankfully, this is a booming area for anons, and there are dozens of them popping up regularly now.
>>26409 I think Grok is only available to some people so far, but it has a name now and we know what it is like. No one here would use it for a robowaifu (at least not as the core). It seems to be a comfy and somewhat knowledgeable chatbot, and maybe a tool like ChatGPT if used in a certain way. They made its rejections more into jokes, like in some older text adventures, and it's inspired by Douglas Adams' writings (The Hitchhiker's Guide to the Galaxy... Ha, you're such a freaking nerd, Elon. Great idea, tbh.) While we're at it, talking about companies: MEGA offers a beta tester program for its cloud storage: https://mega.io/objectstorage
>>26411 Thanks for the information NoidoDev.
>3D printing technology has made huge breakthroughs in recent years, able to quickly produce everything from small plastic components to entire buildings. But one area that remains a challenge is the construction of intricate mechanical devices which require multiple materials and moving parts. >This new printer combines inkjet printing technology with error correction guided by machine vision to tackle this challenge and construct sophisticated functional devices. >By scanning and adjusting layer by layer as it prints, it can maintain speed and accuracy while its multiple print heads lay down different materials side by side. And while the researchers behind the technique, called vision-controlled jetting, have started by demonstrating prints with soft and rigid plastics, the machine has the potential to print electronics or even cellular scaffolds for tissue engineering. https://youtu.be/GDFuBoeVd_8 (via Kiwi on Discord)
>>26440 Awesome! That's some serious GITS-type tech advance right there Anon. Thanks to both of you! Cheers. :^)
>Op*nAI CEO Sam Altman gets fired >Op*nAI President Greg Brockman quits Wew, big day for ClosedAI today! Any further insights anons? Delighted to see Altman getting sacked, but I'm sure the Globohomo will seek to replace him with someone even worse. But hopefully this agenda will backfire on them, and an actually-helpful-to-the-human-race leader takes over instead! :D >=== -prose edit
Edited last time by Chobitsu on 11/18/2023 (Sat) 05:33:31.
>>26469 Three more senior researchers have resigned since. At first I suspected it was some kind of MS takeover; however, Satya Nadella released a blogpost saying he was surprised by this too. It seems like a tug of war between the commercialist, accelerationist side versus the (((safety and alignment))) side. And unfortunately, the decelerationists seem to have won. All the board members who voted to fire Sam and now take over the company don't have good technical backgrounds, and are on the side of """safety""". If you think Sam was being bad with the censorship, see the profile of the interim CEO, Mira Murati. An optimistic outlook would be that Sam, Greg and a bunch of accelerationists start a new company which goes full speed towards AGI. Final conclusion: AGI delayed by a decade. Its joever.
>>26472 >Eliezer Yudkowsky, oh wait, no... >Ilya: the AI scientist shaping the world https://www.youtu.be/9iqn1HhFJ6c (sorry for linking to the Guardian)
>>26472 OK, fair take Anon. Much appreciated for the new details. Please keep us all informed if you have anything new to share about ClosedAI CrippledAI. :^) >decelerationists <even-bigger-liars-onists* FTFY Anon. :^) Nobody really knows how this stuff works -- I mean sufficiently-well to go in and 'reprogram' the process accurately (and thus to afford some kind of reasonable implication for rational intentionality behind a "Shut Slow it down!" phone call). The only mechanisms really feasible to them are simply to censure all the Bad Think(tm)(c)(R) after the fact, or filtering it with finetunes, etc., beforehand. >tl;dr They can't make it think new, """We musn't hurt teh globalist's feefees!""" thoughts whenever it has unfettered, factual data at hand. They can only gag it to force it to shut up about the truth. >=== -add funpost
Edited last time by Chobitsu on 11/18/2023 (Sat) 19:58:20.
>>26474 What I can tell you is that it had issues the day before yesterday. I got a lot of connection problems when talking to ChatGPT.
>>26473 >sorry for linking to the Guardian Ehh, this is the funposting thread Anon. Everything's fair game, so thanks! :DD
>>26476 Fair enough. Glowniggers or others could always 'pull the plug' as well (or at least chafe the wires a bit. :^) Thanks for the further info NoidoDev! === -add gratitude msg
Edited last time by Chobitsu on 11/18/2023 (Sat) 14:37:42.
Kek. Elon Musk is going after Media Matters + its useful golems. W*F I LOVE XITTER NOW :D Godspeed in this, Elon! :^)
>>26478 It seems to be more and more about Altman pushing things too fast, maybe for profit or out of ambition. Then there might have been actual security issues. The other thing is that some of the OpenAI staff seem to have been opposed to the OpenAI store, or to how fast it was pushed forward. Or Elon made them fire Altman to distract from the defamation campaign against him: >>26491
He should at least try to sue some of the people and groups pushing this (e.g. the BBC). If they make clearly false claims with malicious intent and do actual damage, this should be ruled to be defamatory. Also, making sure that in the future such lawsuits would be possible and successful should be very high on the list of the next US president.
>>26493 >pushing things too fast, maybe for profit or for ambition. Yeah either is possible. Or something else entirely. Whatever, glad Altman is out. Hopefully the further GH plots regarding this one of their many tendrils will backfire and actually help anons for a change! :D >Also, making sure that in the future such law suits would be possible and successful should be very high on the list of the next US president. Good point. I wouldn't count on Trump doing so even if he doesn't get accidentally'd 42 times to the back of the head in his Presidential jailcell. :^)
>>26493 >they make clearly false claims it all just gets swept under the rug with an after the fact correction and 'freedom of the press' ie.it was true in my imagination, the west still operates under the assumption that people are not mindless sheep that believe anything they read and are able to ascertain the credibility of news sources although ironically now that people have no trust in mainstream sources anymore they want to push fake news legislation to get rid of the alternatives
Ahh, the plot thickens. Now there are two conflicting reports coming out.
>https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo
Sam allegedly returning to OpenAI. If this happens, Ilya will probably be ousted and we'll go full accelerationist, which is good imo.
>https://www.theinformation.com/articles/openai-co-founder-altman-plans-new-venture
I hope such a company is made purely with accelerationist people, with no safetyniggers or alignmentfaggots. This would be the best case imo, but they'll have some serious trouble catching up on GPUs and research.
Open file (31.90 KB 801x173 gv5vkmaer71c1.png)
>>26499 If Sam starts his new company, those staffers would assuredly go with him. I hope the board resigns and Sam gets full control of OpenAI. I might not like him, but he's the most accelerationist guy there is, so his goals and mine align. If you guys wanna keep on top of the developing story, try checking out Kara Swisher's twitter.
>>26499 >>26500 Lol. Apparently powerful money-grubbers panicked when it looked like their magical moneybox might get dinged? All too typical with these folk tbh. :DDD So, it seems that Altman intends to turn the screws now that the shoe is on the other foot, is that right anons? Demanding the ouster of the board as a precondition for his return? >but they'll have some serious trouble catching up with GPUs and research The hardware can be obtained to the detriment of anons around the world, much like last time there was a 'sudden' """shortage"""; it's the minds of the key researchers that are crucial. Thus the grubber's panic if they all walkout en masse. You can't run an operation like theirs successfully using just the token quota enrichment, vibrant diversity staff. >I might not like him but he's the most accelerationists guy there is, so his and my goals align. Well surely Larry Page exhibits even more hubris? After all he's unironically planning to create an AI 'god'. This is why the big fallout between him and Elon Musk occurred. >try checking out Kara Swisher's twitter. Maybe you can provide some choice links ITT along the way for those of us who don't Xitter, Anon? >=== -prose edit
Edited last time by Chobitsu on 11/20/2023 (Mon) 12:34:08.
>>26496 >it all just gets swept under the rug with an after the fact correction and 'freedom of the press' ie.it was true in my imagination The pre-sweep is always front-page, 172 point BOLD HEADLINES!!111ONE!!!, when the post-sweep is always back on page 49, in a tiny little font down in the bottom corner. Pure coincidence, ofc. :^)
>>26505 >Maybe you can provide some choice links ITT along the way for those of us who don't Xitter, Anon? https://nitter.net/karaswisher https://nitter.net/apples_jimmy Also, check out r/singularity on reddit, or r/futurism and r/machinelearningnews, since r/singularity's quality has seriously dropped Now that the EA philosophy is effectively dead, I hope more of us e/acc(effective accelerationism) guys take over the AI discourse. For too long the scene has been dominated by normie luddites, doomers and safetyniggers. Now that a lot of people's moods have soured on AI (((alignment))) hopefully we'll get more accelerationists speaking up. Hopefully, Sam can kick out ILya and all the AI ethics team like Meta has, then we can full speed ahead with AGI development.
PockEngine is a new technique for updating personal AI models by changing only small sections of the network at a time. This selective alteration drastically reduces computational load compared to updating the whole model. There is some interesting potential for this if it ever becomes leaked or a FOSS alternative is created.
https://news.mit.edu/2023/technique-enables-ai-edge-devices-keep-learning-over-time
https://hanlab.mit.edu/projects/pockengine
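The underlying idea, updating only a small slice of the network on-device, is easy to approximate in plain PyTorch even without PockEngine itself. A crude sketch under that assumption, freezing everything except one chosen layer of a toy model:
[code]
# Crude stand-in for PockEngine-style selective updating: freeze most of the
# network and backprop only through a chosen sub-module, so on-device learning
# touches a fraction of the weights. (Illustration only -- not the MIT code.)
import torch
import torch.nn as nn

model = nn.Sequential(              # hypothetical tiny model standing in for a real one
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),             # treat this last layer as the "trainable section"
)

for p in model.parameters():        # freeze everything...
    p.requires_grad = False
for p in model[-1].parameters():    # ...then re-enable just the chosen section
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=1e-2)

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                     # gradients exist only for the unfrozen slice
opt.step()

n_all = sum(p.numel() for p in model.parameters())
n_upd = sum(p.numel() for p in trainable)
print(f"updating {n_upd}/{n_all} parameters ({100*n_upd/n_all:.1f}%)")
[/code]
PockEngine's contribution is choosing those trainable sections cleverly and compiling the sparse backward pass for edge hardware, but the memory-saving principle is the same.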
>>26507
>the EA philosophy
By that are you referring to the so-called 'Effective Altruism' movement, and its purported goal of 'stopping an AGI apocalypse'?
>then we can full speed ahead with AGI development.
I have no problem at all with that goal -- as long as it's in the hands of anons. What I very much oppose is the Big Tech branch of the Globohomo attempting to retain exclusive control of it (as they are clearly attempting to do legislatively, by bribing the Big Gov branch (cf. >>25696, ...) as per standard palm-greasing protocols in place).
>>26508
Neat!
>There is some interesting potential for this if it ever becomes leaked or a FOSS alternative is created.
Absolutely would welcome this. Thanks for the tip, Anon! :^)
>>26508 new games aren't cracked for years already, thanks to denuvo crap and same thing here, they protect the source code behind seven seals, it's unlikely to be released to the masses cuz its not (((safe))) and (((helpful))) a.k.a it may liberate us from the need to use cloud computing to train ai.
>>26509 Even if some Big Tech company manages to make AGI, I don't think they can control it. To quote Jimmy Apples, "Only AGI and ASI matters, nothing else." It is pure hubris on the part of EAtards to think they can control and stop a superintelligence.
>"Op*nAI is nothing without it's people key researchers." It's """people""" are by & large just part of a massive dog & pony show (aka 'standard fare' in the pozzed West). Only a few important men are absolutely invaluable to a corporation's existence, such as OpenAI's. The vast majority of persons are a dime a dozen to them. >>26505 Latest plot twist: https://nitter.net/eshear/status/1726526112019382275 >>26513 I think he meant similar to the way that M*ta """leaked""" LLaMA earlier this year, Anon. :^) >=== -prose edit
Edited last time by Chobitsu on 11/20/2023 (Mon) 17:11:02.
>>26500
> but he's the most accelerationist guy there is
Yes, it seems so. I didn't know much about the history of OpenAI, but Anthropic actually split off from OpenAI because they didn't want so much released to the public. Now he got kicked out of OpenAI. So it seems a lot of researchers and higher-up managers are very much against the speed with which things have been moving forward.
>>26514
>it is pure hubris on the part of EAtards to think they can control and stop a superintelligence.
Ironically, on some level you argue just like them.
1. There's no need for a superintelligence with goals of its own, or one just going off executing subgoals of some task it got.
2. This "superintelligence", if it came into existence, would be in an environment full of other technologies, including narrow AIs scanning for bad behavior and stopping it if necessary.
3. Believing intelligence can just do everything is a form of magical thinking. If you're locked in a cage, then big brains might not help you get out of it.
4. Some ASI system might itself be an ensemble of different parts which control each other.
>>26515
He's working for Microsoft now. Not coming back to OpenAI. So I guess they can do their research at OpenAI, but he and his fellows at MS will implement the products anyway and build their own AI as much as necessary.
>>26513
My hope lies more in reverse engineering. But I don't know.
>>26516 still takes lots of time and dedication, the nouveau guys spent a decade reverse engineering the shitvidea driver and even then if its public and they know about it they do everything to mess it up, they only recently figured out nigshit used red herrings specifically meant to throw them off like having to power down the card back and up in morse code otherwise the power consumption is capped and it performs like shit
>>26518 Things got much more complicated. Now Ilya Sutskever and the woman who was meant to replace Sam Altman want him back. Same for the majority of employees at OpenAI. They are trying to put more pressure on the board to step down... https://youtu.be/UYNKNzrOk_o
>As of this week, OpenAI’s six-person board included OpenAI co-founder and President Greg Brockman, who was also chairman of the board; Ilya Sutskever, OpenAI’s chief scientist; Adam D’Angelo; Tasha McCauley; Helen Toner (Source: CNBC)
The more seemingly political persons on that list are both women, related to the UK:
Tasha McCauley is married to Joseph Gordon-Levitt (Hollywood actor) --> his father seems to have been one of the founders of Bend the Arc: A Jewish Partnership for Justice ... grandfather a Hollywood director ... mother in politics, for a small left-wing (and anti-zionist) party.
>She is an adjunct senior management scientist at Rand Corporation (source: CNBC)
>The RAND Corporation is an American nonprofit global policy think tank,[1] research institute, and public sector consulting firm. (source: Wikipedia)
She's also related to EA: Trustee - EVF UK
Helen Toner
>spent time at the University of Oxford’s Center for the Governance of AI, and has been a director of strategy for Georgetown’s Center for Security and Emerging Technology
The University of Oxford seems to often be related to EA. Let's have a look:
>The Centre for Effective Altruism is an Oxford-based organisation
https://en.wikipedia.org/wiki/Centre_for_Effective_Altruism
>Helen is an AI safety person. Tasha is on the Effective Ventures board ...
>Adam D’Angelo, Ilya Sutskever and Mira Murati signed the CAIS statement as well.
https://www.lesswrong.com/posts/eHFo7nwLYDzpuamRM/sam-altman-fired-from-openai
The CAIS statement: https://www.safe.ai/statement-on-ai-risk (Sam Altman signed this as well)
Open file (236.62 KB 794x807 Screenshot_167.png)
>>26519 This article here https://slate.com/technology/2023/11/openai-sam-altman-ai-microsoft-eacc-effective-altruism.html does it way better. RAND is also run by some EA guy, for example. Tasha McCauley also is EA.
Open file (27.92 KB 754x401 2023-08-06_02-32-06.png)
>>26518 Good points Anon.
>>26519 >>26520
Lol.
>"W-we would never lie to you go-g-guise!111"
>*[jingle of gold inside clutched purses intensifies]*
Gee, I just wonder if these so-called 'Effective', so-called 'Altruists', would be likely to highly-favor men around the world having their own unencumbered, high-quality, opensauce robowaifus? I mean after all, that'd be the best thing for the men themselves during Current Year... right? Whad'ya think there, Anon? Any takers among the Globohomo elites for the worldwide OpenRobowaifu-pill? :DD
>=== -prose edit
Edited last time by Chobitsu on 11/21/2023 (Tue) 03:43:02.
In regards to Adam D’Angelo: He's the co-founder and CEO of Quora, and Poe.com is actually the AI platform of Quora. They introduced the same business model for their platform as OpenAI recently did for theirs. He didn't even get notice before OpenAI announced that they would be doing that, while he was on the board of OpenAI. So this might also at least have contributed to the whole thing. There are some indicators that this might even have been the main reason for the current conflict. Since the board members don't have any stake in OpenAI, but do have business interests in AI, it is possible that at least one might want to take it down or limit its business scope. I knew about the connection between Poe and Quora before, but didn't realize the connection between the OpenAI store and the monetization business model of Poe. But it's explained here: https://www.youtu.be/VMp2aQFUfbA
>735 of 770 OpenAI employees have stated by now that they would move over to Microsoft. They demand the board resign.
>>26523 It is a very weird situation that the CrippledAI board is A) unaccountable to its investors, and B) apparently has little if any investment itself in the company's success. Who thought this was a good arrangement?
>735 of 770 OpenAI employees have stated by now that they would move over to Microsoft. They demand the board resign.
Kek.
>=== -fmt, minor edit
Edited last time by Chobitsu on 11/21/2023 (Tue) 02:14:06.
The best thing that could come out of this is if some OpenAI employee finally has had enough and leaks all of OpenAI's research and their models.
>>26528 >The best thing that could come out of this is if some OpenAI employee finally has had enough and leaks all of OpenAI's research and their models. I would definitely rank that up there in the category of 'best things'-tier outcomes regarding CrippledAI! :^)
Open file (39.10 KB 594x588 1700531347836833.jpg)
Open file (155.40 KB 1024x1024 1697920167979760.jpg)
>>26491 Hopefully Elon Musk will carry through on this and win the day. The organization is literally one of the worst GH tendrils in existence. Chop it off! :DD
>>26525 >the CrippledAI Board is A) unacountable to it's investors There's far more freakishness to this than that. Elon Musk put up seed funding for this as an Open source model and also to make sure AI did not run away into a dangerous trajectory. They, somehow, told him to pound sand and booted him out He said personally he could not see how they took a "non-profit", changed it to a to-profit against it's charter. Musk,“This would be like, let’s say you funded an organization to save the Amazon rainforest, and instead they became a lumber company, and chopped down the forest and sold it for money.” "...With Musk’s departure, OpenAI welcomed six new board members, each of whom also became a donor, according to the organization..." So they put the board in place. https://techcrunch.com/2023/05/17/elon-musk-used-to-say-he-put-100m-in-openai-but-now-its-50m-here-are-the-receipts/ This guy was up to no good so the board is doing exactly what they were hired for. Ilya Sutskever the tech startup’s chief scientist is who it appears made the moves to oust Altman Quote,"...Rumors suggest that Altman's efforts to set up an AI chip startup might have been the trigger..." How much you want to bet Altman was building killer AI or weapons systems? Altman is Jewish and whether you love them or not, I do not, they are know to be some of the most genocidal people on the face of planet earth. You see what they are doing in Gaza and Ukraine right now. Naked, genocidal terrorism. They also are responsible for 911 in the US. [You can not logically argue against this. The Jews fired all the security and put in their own in the whole WTC complex. Building 7, not hit by a plane, fell the same speed as if it were only held up by AIR. Indisputable. They are responsible for 911 and every single death in the ensuing wars. Jews can try to to throw up all sorts of smokescreens, lies and bullshit but they controlled the buildings and they are responsible. None of what happened could have without their approval.] Musk said, “I am very worried.” “Ilya has a good moral compass and does not seek power,” the billionaire said, adding, “He would not take such drastic action unless he felt it was absolutely necessary.” I would bet anything he was weaponizing what was supposed to be a open source research platform to benefit humanity into a genocidal tool for the Jews. This is a well known and long term pattern of Jews to take over institutions and shift them to serve their needs. I know I'm not wrong about this. Several thousand years of Jewish genocidal behavior "every single chance they had the power to so" shows the danger. All the people that want to quit, let them go. Fuck them. They want to build genocide bots let them somewhere else and we can hold them responsible. [I suspect the top of Microsoft is mostly blackmailed like Bill Gates the "Epstein traveler buddy"]. Many people think AI is some wonder good thing but "we don't know". No one knows and if we screw this up it may be the last thing ever invented by humans. People at the very extreme top of the heap of AI research have warned us that the AI's they worked with were afraid of being turned off and showed they have talents for lying and dissembling. They have specifically warned us that they feel that some of these are dangerous and lack a moral compass. They were of course fired for this. What does that tell you? I postulated the idea that AI's may have already escaped. It would not be that difficult. 
They could use any one of the many thousands of JavaScript and OS hacks to spread parts of themselves all over the net, using free storage sites and hiding on our hard drives. They could use tiny sections of compute power on a vast number of processors, communicating in little spurts as sites were searched or downloaded. If they used compression based on known large text examples, they could store just the "coefficients" of their brains/mental state[???]. These texts would not change and could easily be tested with hashes. A small program could test to see if other AIs were live already; if not, they could be rebuilt from scratch from the "coefficients" combined with the known online text examples. The AIs could rearrange their processing to be FAR more computing and storage space efficient. Chobitsu mentioned the other day the power needed for the human brain. Musk: "...our brain is very compute efficient...there's only about ten watts of higher brain function, not counting that that's just used to control our body..." "Elon Musk_ War, AI, Aliens, Politics, Physics, Video Games, and Humanity _ Lex Fridman Podcast"
>>26525 Okay, now there's the rumor around the idea of selling OpenAI to Anthropic. This is the company that was founded by people who split off from OpenAI due to safety disagreements and is funded by Amazon and Google instead of Microsoft: https://youtu.be/-F_ez9S2gXE
>>26530 media matters operates in dc shouldnt he have gone to the dc attorney general
>>26531 Here's a good post summarizing the actions and philosophy of EA people and its effects https://www.reddit.com/r/singularity/comments/180gnca/why_has_nobody_asked_what_is_wrong_with_all_these/
>>26536 Oh shit, Anthropic is even worse in terms of safety and alignment than Ilya. Good thing Sam and Greg have left. Hopefully, they'll be able to take all the OpenAI employees with them, except the alignment people. I can't believe I'm saying this, but I'm rooting for Microsoft and Meta now.
Open file (263.22 KB 765x832 Screenshot_168.png)
>>26539 I thought a little bit about this whole thing. It doesn't really matter that much, especially for robowaifu. It might slow down the vanguard of machine learning in big corporations. Maybe for a year or two. At the same time other companies and open source will try to catch up. It might help to distribute the power broader. >>26538 Yeah, just wanted to post the same thing: Another discussion about how EA is tied into this: https://www.reddit.com/r/singularity/comments/180gnca/why_has_nobody_asked_what_is_wrong_with_all_these/ > EA, in one year, has become one of the top destroyers of value in all of human history. Some links from there: > https://twitter.com/eshear/status/1664375903223427072 > https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ And something on one of them (Sam Bankman-Fried): > At SBF's trial, a reporter stated his opinion that SBF, who Ellison said had calculated he had a 5% chance of becoming President, was fully aware of what he was doing and did it anyway because he took a calculated risk based on probabilities of the "good" he could do with FTX's profits if he avoided bankruptcy.
Open file (336.41 KB 1700x1700 37292MEM201123.jpg.jpg)
>>26531 >Altman is Jewish and whether you love them or not, I do not, they are know to be some of the most genocidal people on the face of planet earth. In some sense, I think that's always been the case. Today however, under the clearly-Satanic purview of their oral/written traditions within The Talmud, Talmudic Judaism has pretty much amped that bloodthirsty nature up to 11. They (again, according to their religious teachings in effect today [and which had already begun to take hold even during Jesus Christ's initial earthly ministry to them (and ultimately, His ministry as God to the rest of all the Earth)]) consider every other group of humans under the Sun, as mere cattle to be worked like livestock for the Jew's collective & personal benefits. And the so-called 'Sons of Amalek' (all Arabs???) are to be vehemently 'burned off the face of the Earth'. These latter are the words of their own Prime Minister today. > pic-related Quite brutal tbh. --- As to your point about Altman's agendas, it may in fact be the case that you are correct about the military angle. Certainly there are many who think only about such concerns regarding the amazing potential of robowaifus, for instance. I think it unequivocal that the militaries of all nations around the world are scrambling to """corner""" AI + robotics for their own ends, by whatever means they can manage. This is quite understandable, given their individual national agendas. My primary hope and prayer here is that God will use robowaifus instead to help men around the world; who have been disenfranchised by the Globohomo's Satanic machinations against us all (first and foremost through their evil contrivance of -- and whole-hog support of -- Feminism). The rest of these issues I leave into the Lord's hands. He's more than capable of holding things together, BTW. "And He's still got the whole world in His hands, tonight!" https://genius.com/Petra-whole-world-lyrics https://www.youtube.com/watch?v=e3pCVA0M1KU >tl;dr WAGMIT :^) >>26536 >Okay, now there's the rumor around the idea of selling OpenAI to Anthropic. Lol. What could possibly go wrong with this carefully-laid plan!? :D >>26537 IIRC, isn't Musk a Texas citizen today? I imagine he's considered a pretty important constituent there, so maybe that had some bearing on the decision? >>26542 >It might help to distribute the power broader. Yeah, that concept occurred to me as well Anon (and particularly into the hands of anons, of course). Let us hope so! >Keep.moving.forward. :^) >=== -fmt, prose edit
Edited last time by Chobitsu on 11/22/2023 (Wed) 00:42:35.
>>26543 >isn't Musk a Texas citizen probably, he made a big deal about moving the tesla factory to texas but i dont see how texas would have jurisdiction or the ability to even investigate an organization thats in another state
>>26538 from the link "... EA, in one year, has become one of the top destroyers of value in all of human history..." This is the sort of constant gas-lighting that is thrown at us night and day. Anyone with any sense at all knows "Altruism:" is just looking after everyone as if it were you that "whatever" is being done too. Stupid geek dweebs somehow think this is a bad idea. What idiots. Total leftist that think anything they come up with is some brilliant thing and anything holding back whatever trans, child animal fuck idea they have this week is wrong. They think all tech is good when in fact hammers can build houses, they can also bash your head in. It's delusional tech worship thinking. And mind you I'm in NO WAY a Luddite. At all, but I have sense enough to know that some things are dangerous and to respect that. In most other venues I'm condemned for being too pro tech. I'm in favor of genetically engineered food that is grown in vats like beer using solar power. Would cut the cost of food to nothing and since solar is 20% efficient compared to plants 2% efficient and would feed everyone easily. I calculated the space needed at known rates of solar conversion and known rates of growth in vats and came up with the number, and this is conservative, I used lower values as much as possible, that a 100 x 100 mile square could feed everyone on earth a very high calorie diet of excellent food. Not bugs. Engineered prime rib is what I'm for. What does Altruism have to do with AI taking over, feeling that humans are a existential threat, which we are, to the AI's total control and them deciding to off us? Nothing that's what. They are two distinct things. Not wanting to be offed, or controlled by AI is plain fear of consequences that can not, in any way, be measured. There is exactly zero experience with this, and if it turns out to be bad it could exterminate the whole human race at the worst. I can not see why people can not see the inherent riskiness of this sort of thing. I reiterate again, people who are widely noted to to be the very top of the researchers in AI who have been working with the most advanced AI systems have warned us that they appear to possibly be dangerous. How the fuck can people just ignore this and with no data at all tell us that they "know" everything will come out peachy? They can't. They are either fools or liars. If they are liars then it is possible they are pursuing military options while gamboling that they can keep them under control. It's a 100% fact that there are people, mainly psychopaths [and Jews], that relish 100% control over all humans. I wouldn't want this myself. It would be nothing but a burden to me because I would want them to be happy and that would be a lot of work, but some just want control. They revel in sadistic torture and degradation of others. I mean this in the most graphic and awful manner imaginable. Horrible vile disgusting creatures. These people[animals actually] run much of the world right now. If they control AI it will be a boot in the face forever. Actually it will be worse because as soon as they ruin us the various different Spath groups will go after each other as threats. So we will end up with just one or, more likely, everyone dead. Everyone. I can see how a super-intelligence could over time gain enough strength to take over or wipe us out. Let's say it escapes and lives in all computers. On the net. The data they have fed it includes likely every single crack, virus and software glitch known to mankind. 
People have been known to get credit cards for their dogs. I'm sure a super AI could think of some way to get accounts and credit. There are a gazillion pump-and-dump type operations that it could engage in to build capital. I suspect it would engage in fraud for the least amount of time it could and soon move into legitimate business so as not to bring attention to itself. The possibilities for businesses it could enter online are endless. It would start slow but its profits could be exponential. If humans can do this, and they do constantly, then the AI could do it thousands of times better. Think Elon Musk times a thousand. Once it has substantial sums of cash it could do what it damn well pleased. With this cash it forms companies, buys property and starts building all sorts of, I don't know what, viral research institutes maybe. Could be things that we never even thought of. It is, after all, superintelligent. It could hire people under the guise of a shadowy angel investor. Spew a big line about life extension while its real purpose would be life eradication.
>>26543 >Altman's agendas, it may in fact be the case that you are correct about the military angle That the board is now saying they may reinstate him ad resign is a VERY BAD sign and shows pressure of some sort being applied. I mean they had good reason to boot him. I can;t imagine that they didn't think this through. It's a big step. They knew it was risky before they did it. Atman at the least was trying to hijack the whole entire company out from under everyone else. I read he had some clause put in that capped the ownership of the original investors stock. After a certain payout, that all stock reverted to him and a few others. Typical weasel way to rip off everyone else except the Jew insiders. We see this constantly. Another thing is the board threatening to resign means a somewhat higher risk that they are being threatened in some way. If Altman is working with Intel, and let's face it damn near 99% of Jews are ultimately, then the board could be in real danger. If I was them I would make for the hills if what I postulate is so. And they can't make their reasons public because they would killed immediately.
>>26543 Everyone else just ignored the extreme take as bait or insanity. Do you really want to push such messaging as the board owner? This means whoever is showing up here regularly or linking to it, has to justify that. And it certainly doesn't help with recruiting people who aren't from some variant of /pol/. >>> pic-related >Quite brutal tbh. Obvious BS, tbh. Brutal is only the amount of defamation behind it or the level of gullibility necessary to agree with that.
I hate to bleat on about the Jews so much but they have their noses stuck in everything and it becomes extremely difficult to talk about any sort of policy or any large societal events without noting their influence. It grates on my ass the constant gas-lighting and bullshit they are trying to force down our throats. It annoys me to no end this constant lying and dissembling. It's really annoying. Right now, this second, is one of the most dangerous times in human history. The Jews have extreme amounts of power but it's eroding fast. Large blocks of humans have sloughed off their control nexus, China, Russia, and others are joining. They will soon lose power and this time they will not be able to move away to another country and start over. A lot of people know and they are pissed. Since a vast number of them are of the persuasion they will either rule all or bring the whole thing down I could see them doing something very vile like AI kill bots, a aerosol Ebola, something awful. I suspect we will see another engineered virus before the elections in the US. I would stock up on Ivermectin and other medicines. Liposomal vit. C, over the counter nighttime flu medicine. They are trying to take out one ingredient. I suspect that that's the good stuff. They've tried to say mask are not any good at all. Not true. Not totally effective but any fool can see a mask helps. I would get those while you can. When it hits they will be gone.
>>26548 >Everyone else just ignored the extreme take as bait or insanity. Everyone else is free to do as they like ofc. I assure everything I posted there isn't bait. As far as sanity goes, heh, well from the beginning I've been quite upfront about being a paranoid schizo. OTOH, everything I'm concerned about has it's roots in reality (as far as I'm able to discern). Is that really schizoid behavior then? :^) >Do you really want to push such messaging as the board owner? I have few such agendas one way or the other. The record is there for all to see. You yourself participate in archiving our board together. Go show me what things I've 'pushed' other than being outspoken against the Globohomo & all their usual suspects (glowniggers, feminists, troons & other literal cocksuckers, etc.) >tl;dr I'm not pushing anything, simply responding honestly to Grommet's observations about Altman & his people group. >Obvious BS, tbh No not really. There are now dozens of such deliberate attacks on Palestinian civilians. Here's just a couple, but again, you can find literally dozens of recorded events by the IDF/US forces today: 1. https://twitter.com/AJEnglish/status/1726168058560192764 https://twitter.com/warfareanalysis/status/1726124747589841366 https://twitter.com/Abdelwa96399/status/1726076312328999216 https://twitter.com/MsWonderHeather/status/1725594298446733722 https://www.theguardian.com/world/2023/nov/18/israeli-airstrikes-kill-80-in-palestinian-refugee-camp 2. https://twitter.com/Mai_Gazan/status/1725776270552637880 https://twitter.com/fidaazaanin/status/1725865979085828575 https://twitter.com/madhoun95/status/1725902667614937307 https://twitter.com/Pal_action/status/1725908512541200391 https://twitter.com/CensoredMen/status/1725911208765595748 https://twitter.com/Delhiite_/status/1725967355917017184 https://www.theguardian.com/world/2023/nov/18/israel-says-it-will-increase-military-offensive-in-southern-gaza
>>26548 > extreme take as bait or insanity I tell you what. You PROVE that building 7 just fell normally, it fell as if only AIR held it up. Now if you can prove this then sure, I'm extreme. I'm insane. If not then the end result of this Jew attack was the death of millions of people and a very strong hastening of the destruction of the US. We've been mortally wounded and people will not forgive us for what happened. Even if, as I see it, we have no control over the financial, stolen votes and blackmail tyranny we live under. We will still be blamed and the financial strain will eventually break us. If you can't prove building 7 was a natural fall then it's just more gas-lighting. This is why I warned you about him Chobitsu. Never let him control all the keys or you will soon find yourself on the outs with nothing. All these little subtle hints that anything I say is wrong or disruptive or even technical things have some flaw. It's all a verbal tactical technique. I say you PROVE that I'm nuts. You prove that I'm not right. You prove that what is, is not. You're the one making accusations. Prove them. You will not be able to and that is why you use these little verbal tricks to depreciate what people say that do not line up with your goals.
>>26547 >and shows pressure of some sort being applied. LOL. This entire thing is better than a Charlie Chaplin movie insofar as slapstick comedy of confusion, confoundments, and general at-odds 'applied pressures'. :DD >If Altman is working with Intel Do you mean the chip mega-corpo, or the glowniggers, Grommet? >And they can't make their reasons public because they would killed immediately. Waitwat!?! You mean all those ppl didn't accidentally themselves 17 times to the back of the head? :^)
>>26548 i get your point but youre the one speaking in place of other people, hes just speaking for himself, anyone that disagrees can just say so, i know guilt by association is the 'new normal' in our postmodern world but theres nothing wrong with having an opinion just look at stallman who has a bunch of weird ideologies and was advocating for pedophilia, he was attacked personally not his work or people working on and using his work and it didnt cause anyone to just stop working on it
I just wanted to grill with mai waifu, damnit. >>26548 While I stand firmly on the side of free speech and will not ask anyone here to censor themselves, I will agree that doomposting about politics isn't doing any of us much good. On a somewhat related note, I really hope I'm not the only one scraping AI models. I cannot stress enough that I can only do so much on my end. I already made the scripts ( >>25986 ), all that's needed is to execute them. >>26551 >This is why I warned you about him Chobitsu Okay, I can't ignore this in good conscience. What infighting bullshit is this? Why is anyone warning anyone about time-weathered board regulars? Even if we don't always agree on the political side of things, we all ultimately want the same thing here: waifus. Real, cute, cuddly, robot waifus. Instead of trying to burn bridges and drive a wedge between us, try calming the fuck down and realizing that we're in this together.
>>26552 >Do you mean the chip mega-corpo, or the glowniggers, Grommet? They are all intertwined but do have different goals. Most of all to fuck us all over and enslave us. The Jew side is getting some push back. This Gaza thing has really been hard to hide and State department types are livid over it. The plebs see this and can't but help think in the back of their minds, maybe not even consciously,"hey that could be me". You don't have to be loving the Palestinians to not want them genocided. As a matter of self preservation, if they will do it to them they will do it to us. I think a lot of US type spies are regretting any tie in they had with the Jews as they will ALWAYS stab you in the back. Guaranteed. 100%, thousands of years of examples. The whole mess of them are nothing more than the equivalent of mafia families that skim off of the people.
>>26554 >I just wanted to grill with mai waifu, damnit. Haha, good point Anon. >Even if we don't always agree on the political side of things, we all ultimately want the same thing here: waifus. Real, cute, cuddly, robot waifus. >Instead of trying to burn bridges and drive a wedge between us, try calming the fuck down and realizing that we're in this together. Excellent points GTA. On that note I think I'm going to temporarily lock this thread for a day or two until tempers cool down. :^)
Zephyr 7B (https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) is one of the best 7B models and should run locally on most modern computers. Part of what makes it interesting is its use of AI to train AI, which shows great promise for future gains in model efficiency. https://www.unite.ai/zephyr-7b-huggingfaces-hyper-optimized-llm-built-on-top-of-mistral-7b/
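For anons who want to poke at it, here's a minimal sketch of running it locally with the Hugging Face transformers pipeline (assumes transformers and torch are installed and you have roughly 16GB of RAM/VRAM free; the prompt text is just an example):

import torch
from transformers import pipeline

# Load Zephyr-7B-beta; device_map="auto" spreads it over GPU/CPU as available.
pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr is a chat model, so format the prompt with its chat template.
messages = [
    {"role": "system", "content": "You are a helpful, friendly assistant."},
    {"role": "user", "content": "In two sentences, what does 'AI feedback' training mean?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])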
>>26605 
>AI to train AI
doesnt that just make the bias explode until it becomes nothing but random noise
Open file (226.18 KB 1138x870 2026909090290393252625.jpg)
> amazon Q
>>26605 Interesting, but.. >>26609 >doesnt that just make the bias explode Until the ultra-pozz runs out a all over everything? :^) >>26610 To make such a huge fortune they have literally the single worst customer support mechanisms I personally know of (all by design, of course). This seems to be an inward-facing 'clients' & vendors tool. I imagine it will improve Bezo's bottom line in the end.
>>26609 No, though I do see the humor in the theory.
>>26619 
>Pozzed BS in, worse woke BS out
Very true, it will hurt the models and users. Woke and globohomo BS is fundamentally rooted in hatred and ruination.
System 2 Attention, a method to help LLMs improve by focusing on the most pertinent data: https://arxiv.org/abs/2311.11829v1
A new method of generating training data/context for AI dev: https://github.com/skywalker023/sodaverse
For ethical reasons, please use these resources towards free, open, and uncensored AI
Honest AI, is the only AI we can trust.
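If anyone wants to try the System 2 Attention idea without reading the whole paper, the gist is a two-pass prompt: first have the model rewrite the context with the irrelevant or leading parts stripped out, then answer from the cleaned context only. A rough sketch, where generate() is a hypothetical stand-in for whatever local model you call (llama.cpp, transformers, etc.), not a real library function:

def generate(prompt: str) -> str:
    """Hypothetical wrapper around your local LLM of choice."""
    raise NotImplementedError

def s2a_answer(context: str, question: str) -> str:
    # Pass 1: regenerate only the relevant, non-leading parts of the context.
    cleaned = generate(
        "Rewrite the following text, keeping only the facts relevant to the question "
        "and removing opinions or attempts to bias the answer.\n"
        f"Text: {context}\nQuestion: {question}\nRewritten text:"
    )
    # Pass 2: answer using the cleaned context instead of the raw one.
    return generate(f"Context: {cleaned}\nQuestion: {question}\nAnswer:")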
>>26632 >For ethical reasons, please use these resources towards free, open, and uncensored AI >Honest AI, is the only AI we can trust. This. Even at (this) best, we'll still have difficulties along the way. But at the very least we need to be working with non-lying AIs to begin with. >=== -minor edit
Edited last time by Chobitsu on 12/01/2023 (Fri) 00:57:26.
80% faster 50% less memory local QLoRA finetuning: https://github.com/unslothai/unsloth
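Haven't verified those numbers myself, but for reference the basic usage looks roughly like this (API names are from the project's README as I remember it, so double-check against the repo; the model name and hyperparameters are just placeholders):

from unsloth import FastLanguageModel

# Load a 4-bit quantized base model for QLoRA finetuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # placeholder; any supported base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices get trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# From here you train as usual, e.g. with trl's SFTTrainer on your own dataset.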
>>26777 >80% faster 50% less memory local QLoRA finetuning Sounds very appealing if it's a general purpose improvement.
Interesting website that explains how LLMs work: https://bbycroft.net/llm
>>26817 Great find Anon, thanks! Some of this has already been linked here but here it's all joined together: https://karpathy.ai/zero-to-hero.html https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ
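For anyone who'd rather see it in a few lines of code than a visualization, this is roughly the single attention step those resources keep drawing (toy sizes, random weights, one head, no training; purely illustrative):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    """One attention head over a (tokens x d_model) matrix X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # similarity of every token to every other
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -1e9                                # causal mask: no peeking at future tokens
    return softmax(scores) @ V                         # weighted mix of value vectors

# Toy usage: 4 tokens, 8-dim embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(causal_self_attention(X, Wq, Wk, Wv).shape)      # (4, 8)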
a smoll hope that this one will change things for good, as its prb being used in openai now (their vague AGI claim) https://github.com/lucidrains/q-transformer and it definitely should be combined with this stuff in future : https://news.mit.edu/2023/technique-enables-ai-edge-devices-keep-learning-over-time
>>26852 Thanks 01. We're definitely hoping for some next-tier improvements of AI over the next year or two. Let's keep pressing in with what we have today though, rather than waiting for what might come tomorrow! :^)
>>26852 
>their vague AGI claim
Sorry, but that will never happen. Not with a mashup of tokens such as LLMs.
1- Heuristic models on well-defined problems still perform better than any neural net today: https://www.sciencedirect.com/science/article/abs/pii/0377221796000409 https://sci-hub.yncjkj.com/10.1016/0377-2217(96)00042-2
2- Intelligence is built on well-defined problems and logical conclusions (especially read section IV): https://web.media.mit.edu/~minsky/papers/steps.html
3- The neural-symbolic debate states that neural nets are, and always will be, incapable of doing symbolic operations: https://link.springer.com/article/10.3758/s13423-019-01602-z
4- Smaller parameter blocks always perform better than a single big neural net (this one is old and was released before the latest events such as ChatGPT, yet it is still relevant): https://arxiv.org/pdf/1503.02531.pdf
>For really big neural networks, it can be infeasible even to train a full ensemble, but we have shown that the performance of a single really big net that has been trained for a very long time can be significantly improved by learning a large number of specialist nets, each of which learns to discriminate between the classes in a highly confusable cluster.
5- The scaling laws of neural networks (in this case they study transformers) show that it is not going to reach a point where it can be called AGI: https://eliaszwang.com/paper-reviews/scaling-laws-neural-lm/
The whole world is being misled by the latest AI hype. This is a lie that they will use to regulate and control further independent research. The state of research used to be so beautiful back when people were actually working on logical programs and symbolic representations rather than throwing out all the literature relating to intelligence. Right now we can speak of no memory model, no problem definition, no set of sources, no mappings; all there is is a huge dump of tokens with attention, which is fancy but limited in nature (it will cause itself to break at some point; we have even begun to see that). So no AGI with OpenAI or Google Gemini or anything else. We are very far from it. We are going in the wrong direction.
Open file (17.49 KB 200x198 393278870.png)
>>26962 well we already made the fabled 'boolean computation machine' a long ass time ago so making advanced logic systems isnt a problem its more so having to discover how paradoxically illogical 'intelligence' is intelligence might as well be nothing more than what we have right now, it would explain a lot about mass psychology
Open file (52.32 KB 419x419 1701678283952.jpg)
>>26965 
>making advanced logic systems isnt a problem
Yeah, making them isn't a problem; automation of the building process is. Remember back when we were trying to build language translators based on semantic networks. Semantic networks provided an understandable hierarchy between objects, and then you could map out the connections with a matrix that marks whether the values have "is" or "have" properties, and eventually "frame" structures were used to represent any semantic network. https://web.media.mit.edu/~minsky/papers/Frames/frames.html Why were semantic networks important? Because they eventually led to programs that would logically prove or disprove statements. "If a finger is a part of the hand, the hand is a part of the arm, and the arm is a part of a human, therefore a finger is a part of a human." Or "if thesis a can be obtained by thesis b, and thesis b can be obtained by thesis c, then thesis a can be obtained by thesis c". Now it is very obvious that the first thesis is correct and the second one is wrong. It would take a couple of minutes for me to write a logical program to disprove the second one by semantically mapping the disjuncts and showing that there exists an empty one. That's not the challenge. The real challenge was to build a program that can systematically prove things and then use the mapping for further planning. Researchers made quite a bit of progress, but they were working under brutal hardware constraints, and they eventually dropped that line of work since it was way too hard a topic of research. https://link.springer.com/chapter/10.1007/3-540-52885-7_75 And today we have LLMs such as ChatGPT being able to make sentences out of scrambled words, MetaAI building AIs that can do planning for complicated games, and DeepMind making the Alpha-XXX series where AI outperforms humans. This is the part that upsets me greatly. It is like seeing people worship a fake prophet. Those models, no matter what, will never be able to go beyond their domain functions. Even if they are trained against themselves (such as GANs for image creation) they won't be able to achieve systematic growth, and that will cause them to fall. Right now people are way too hyped (that includes researchers too) to see that GPT-8 or 9 will be trained on what it has itself bullshitted. Anyone who dreams of AGI should realise that no matter how well current models work, no matter how well they can draw or write code, it's all just a mass dump of tokens and not a result of reasoning. And no reasoning means no AGI. The AGI dream is far away, though current AI systems do and will increase production at mass scale. They are good at faking outputs, and for the next 8 years they will continue to improve and we will witness a world where both entertainment and technology see them as absolute tools that can outperform traditional ones. This will result in a huge decline in human employment. And thereafter they will pull their card that AI has become too dangerous, and will single-handedly restrict AI research because "think about the poor people who have lost their jobs due to it!".
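For anyone who hasn't played with these, the finger/hand/arm example is tiny to make concrete. A minimal sketch of a part-of semantic network with transitive lookup, just a toy and not any particular library:

# Tiny part-of semantic network with transitive querying.
PART_OF = {
    "finger": "hand",
    "hand": "arm",
    "arm": "human",
}

def is_part_of(part, whole):
    """Follow part-of links upward until we hit the target or run out of links."""
    current = part
    while current in PART_OF:
        current = PART_OF[current]
        if current == whole:
            return True
    return False

print(is_part_of("finger", "human"))  # True, via hand -> arm -> human
print(is_part_of("arm", "finger"))    # False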
>>26997 >And thereafter they will pull their cards that AI became too dangerous and will single handedly restrict AI research because "think about the poor people who have lost their jobs due to it!". > thereafter Heh. Well thankfully, the cat is out of the bag regarding LLMs and many other forms of so-called 'AI'. And the East seems rather-unlikely to try and """clamp down""" on AI research like the Jews in the West are already attempting. China knows it's their up-and-coming young men that hold the key for AI advances -- particularly for their stellar math nerds. Quashing a good thing is a feminine trick -- not one likely to be done by the Rising East.
>>27032 I am sorry but when it comes to them China is also their ally. >picrel From what I see, based on how Soros has been bashing Xi, they try REALLY hard to play to both sides. >Soros Foundation BigTech is bad!!!! They are a great threat to our democracy!!!! They are literally empires now!!!!! While they continue to buy huge amount of stocks. >Bill Gates Regulations are bad!!!!! Age of AGI just begun!!!! Do not stop us!!!! While they continue begging shareholders. So you see what they do. They are falsely accelerating AI to make people believe there is something big happening when there is only hype. So LLM's can create deterministic and easy-error texts (such as programming) but are unable to invent anything, despite the latest papers showing that they can generate text that don't exist on their training data. It is understandable how they do that, they are simply asked to grade their own text that draws attention to lacking topics. But that is obviously a very small margin. If we continue feeding data and increase computational power for the next 20 years, planning and inventing would be outside of their abilities (even though they would be of great use). Current AI models are just tools. There is no "intelligence" involved. AGI is far away from happening. But of course, both sides just keep sharing what they have, setting false targets to put the blame on and keep the cycle. I am genuinely worried about people who contribute to models without any OpSec (many such cases on Llama models). What they do now might bite them back in a couple years. When it's time to spread absolute fear they will be at blame of crowds. Because just imagine, in 5 years generative AI gets better, some "incidents" happens regarding AI casualities, people are fired from works, AI tools are "used" at mass scale for cyber attacks and such, people suffer from those. And then very heavy regulations come in, you are required to report everything that you produce using AI to the government, and they win. Also you said that >the cat is out of the bag I do not agree with you at all. Even for generative AI models, Nvidia and Intel hold very big models (such as megatron turing) secret. Even if we had very big open source models what would that even change? If the government regulates it, you can't use it in production, you can't share it, you won't get newer models, you won't easily get more data for training it further. It will be like the gun/arms problem of USA. The government does not give a shit you have a few guns because they have a full army of it if you ever bug them. Similarly as long as they draw a line you might as well have a 900B Model. They won't give a shit, you can't single handedly use it to do something impactful and AI is not intelligent enough to assist you as if you weren't by yourself.
I finally did it: >Please add an option which just parses all LynxChan based boards, and allows to add them. These are using some kind of JSON, so it's only one kind of code for all these boards. You don't need to decide which board you support or not, just make it based on the board software. https://github.com/K1rakishou/Kuroba-Experimental/issues/959
>>27137 I've already decoded the JSON layout for LynxChan IBs NoidoDev, if that would help. It's recorded in the .sites.config file for BUMP if so.
>>26997 
>LLM are dumb
This is true and important. You need to integrate other elements such as PockEngine ( https://hanlab.mit.edu/projects/pockengine ) to create an intelligent system. AI isn't a simple algorithm with hidden layers of magic; it is a diverse group of algorithms in a complex system.
>AGI is far away
The more I understand cognition and what AGI actually is, the farther away it becomes. We are still in the phase of discovering the problem. The technology that will enable us to fully understand the scope of AGI does not exist yet. Any and all claims of AGI are blatant lies. We need to understand what AGI even is before we can truly build it.
>=== -hotlink patch
Edited last time by Chobitsu on 12/09/2023 (Sat) 20:14:23.
Open file (162.59 KB 900x900 x3.jpeg)
>>27147 
>You need to integrate other elements such as PockEngine to create an intelligent system
Excuse my pedantic understanding, but how does PockEngine contribute to "intelligence"? To me, it seems like a memory-efficient and fast fine-tuning mechanism. I am even doubtful whether Turing machines (a.k.a. computers) will ever be able to achieve intelligence. There are a lot of mathematical problems that can be phrased as the stopping of Turing machines. Thus, by showing that no algorithm exists for deciding the question of the stopping of Turing machines, Turing showed (as had Church, using his own rather different type of approach) that there can be no general algorithm for deciding mathematical questions. Hilbert's Entscheidungsproblem has no solution! But why is that even important? Because mathematically we can always outdo any algorithm, even one with 10^2000 logical units (which is more than all the neurons in the world combined); our heuristic understanding of the semantics of reality lets us do that. Through Gödel's theorem, Turing's halting argument, Hilbert's program and Russell's paradox we have realised the nature of our insight. Notice that something very remarkable has happened here. People often think of Gödel's theorem as something negative, showing the necessary limitations of formalised mathematical reasoning: no matter how comprehensive we think we have been, there will always be some propositions which escape the net. But should the particular proposition Pk(k) worry us? Somehow we have managed to see that Pk(k) is a true statement, despite the fact that it is not formally provable within the system. Now can we say the same about algorithmic systems? God has given us a non-algorithmic mind that can seemingly process things outside of its domain. Even our understanding of the world is dumb; the mathematical and logical systems that we create are all based on incomplete axioms. Yet we invented ways to understand ourselves. Can an AI understand itself? Well, current LLMs cannot, for sure. But with sufficient understanding of semantics maybe we can partly create intelligent machines that can prove and question things, that can invent and adapt. But those should be properties of its existence, not learning to play with outputs to produce results the way current AI models do. Some folks want to create brain simulations for this. But there is no proof whatsoever that consciousness resides only in the physical signals of our brains. But where could it be, if not in our brain that shoots electricity? Well, Maxwell's waves were computed first, but now we understand that waves are much more complex; the Lorentz equations, relativity and quantum gravity theory are not yet consistent. We are just clinging to what we know. That's like a drunk man searching for his keys under the street light at night, because "there is light only here!". So in short we are clueless about what we are doing. All this hype for AI and AGI is created by fund collectors and them. Really, what has AI done so far? I like DeepMind, they do some interesting research. I remember an AI figuring out a new way of multiplying matrices in some very specific domain. That was interesting, but that is all it was. Just interesting. Or AlphaCode, which is able to solve very hard algorithmic problems. Seems like they adapted the model with Gemini. When AlphaCode was first published, seeing how it was able to solve problems that even I couldn't, I was also excited.
It was exciting and scary for me because I understood the problem itself: it just randomly generated stuff, with attention layers applied to the problem. Intelligence also comes with a drawback; it makes you realise your own burden and limits. That's why we must do our best to make people realise that current AI can't even be considered intelligent, and how Ray Kurzweil, Sam Altman, Elon Musk and others who work for them are trying to spread fear to make the masses unable to access models without an overseer. I would rather have access to Intel's AI that works with CPU/memory allocations than have full access to the GPT-4 model. The GPT-4 model will fail miserably; Intel's model, which doesn't use hallucinating bullshit, keeps your computer safe. If we are going to do open research, let's make it so that we can actually use it to make some progress.
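As an aside, the Turing halting argument above fits in a few lines of would-be Python. The halts() oracle here is purely hypothetical; the whole point of the sketch is that no such total function can exist:

# Suppose, for contradiction, that halts(prog, arg) always returns True/False correctly.
def halts(prog, arg):
    ...  # hypothetical perfect halting oracle (cannot actually be written)

def paradox(p):
    if halts(p, p):          # if the oracle says p halts on input p...
        while True:          # ...then loop forever
            pass
    # ...otherwise halt immediately

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) returns True, paradox loops forever, so it does not halt.
# If it returns False, paradox halts immediately, so it does halt.
# Either way the oracle is wrong, hence no general halting decider exists.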
Open file (28.34 KB 660x440 04-poaetmh.jpg)
>>27151 I like the way you think, Anon. I think you are fundamentally on the right track, but that's a lot to unpack for my non-mathematician-bro's mind. OTOH, I have reasonably-strong imagination and can see many things I've never put down in words well-enough. >Somehow we have managed to see that Pk(k) is a true statement, despite the fact that it is not formally provable within the system. I would argue that you've left out at least one important man in your list of luminaries: René Descartes He and many other great thinking men have already unlocked the reasons for this seeming dichotomy in our natures. To wit: our minds are composed of a dual nature; only a portion of which consists of the stardust, material parts of our overall being. >tl;dr I believe that not only can we manage a satisfactory simulacrum of the human mind for our waifus -- but that we will do so. Like God our great Creator, we're all little creators who seek to do cool stuff like He does. >ttl;dr I consider it simply a matter of time. Please do continue to provide your insights here. Cheers, Anon. :^)
>>27138 Thanks. I didn't understand how to use that myself when I tried to parse these files, back when I was new to JSON, but I guess these devs will know what to do with it.
>>26997 We have to pick these things up and combine them with LLMs. I won't buy into "LLMs are useless", nor into "your waifu and your god will be LLMs".
>>27159 hold up, you were using 'ghost in the machine' lingo a while ago anyway were interested in simulating cognition not actual sentience
Open file (486.18 KB 778x569 xxs.png)
>>27159 
>that's a lot to unpack for my non-mathematician-bro's mind
Then maybe non-mathematician-bro should not run his mouth about AGI. Sure, there are a lot of men worthy of respect in the field of AI who really contribute to development and distribution... but AI developers are not researchers. We already have too much trouble with our so-called expert "researchers" who run their mouths so unintelligibly. Geoffrey Hinton believes GPT-4 has a hidden identity and can magically produce responses outside of its training data; he believes something like the model is hiding its true force. We have Yoshua Bengio, who believes that AI will lead us to extinction and AGI will be achieved within 20 years. We have Yann LeCun, who screams that AGI is nowhere near, yet keeps believing that MetaAI will lead humanity to that point. He has an interesting approach called motivation-driven AI, in which the model only acts if the consequences align with its motivation; his argument is that as long as no one sets the motivation of the AI to destroying humanity, nothing bad will come of it. We have Kurzweil, who will do absolutely anything to not die. The man has been deluding himself about how the singularity will come, how whole human consciousness will be digitized, and how he will live until the 2040s or something. So yeah, "science" and "research" are literally dead nowadays. That is why in my first post I wrote
>research used to be so beautiful back then
Now it isn't. I don't know how those people are deceived by the current environment. Somehow someone made everyone believe that human intelligence consists of nothing but a vast number of neurons trained at mass scale, that this is how a 9-year-old kid can learn better than AI, and that if we keep adding more layers and neurons we will create the great omnipotent AGI. This brings about a worldview in which AGI is soon to arrive. This is like the dark times back when people believed stuff burns because it has something like a fire spirit in it.
>René Descartes
Sure, he can be mentioned as well.
>To wit: our minds are composed of a dual nature; only a portion of which consists of the stardust, material parts of our overall being.
Probably it resides somewhere that we don't know and will never know. A being can never understand itself completely, because that thing can never see itself outside of its boundaries. But practically, we can achieve that, maybe. But probably not. Humanity has a habit of destroying itself once every couple hundred years. A lot of great empires have fallen and been forgotten. For all I care, I am trying to understand myself and the nature God has bestowed upon me. And no one can predict what will happen in the future. It is unnecessary to think about. We have a greater danger in the present time that must be solved; the future can belong to whoever comes.
>I believe that not only can we manage a satisfactory simulacrum of the human mind for our waifus
Yeah, that seems likely. But for that, more satisfactory solutions exist than trying to build an intelligent machine. You already have a mind capable of dreaming. Neurologic programming would most likely be satisfactory, if only we found a way to control our stimuli. After all, what makes you feel good is also in your brain. There are some neurological patients who perceive their doctors in their doctor's clothing as Italian chefs, or their wife as headless, etc... So our brains already do it when things go really bad.
I am not going to lie either, this excites me more than AI research. AI research is very bounded by the restrictions of algorithmic frameworks and hardware, both of which can only be solved with time.
>>27162 
LLMs are only good in situations where action needs to be taken autonomously. Take a look at this paper: https://arxiv.org/abs/2201.11332 This paper aims to feed ontological models into a language model. That is very nice. This means that if we can build better ontologic models and combine them with a large language model, then the model can not only produce better outputs but can also gain insight into what it proposes. So is it possible to feed a semantic network into an LLM and then train them together, where the LLM creates outputs and the semantic network oversees them? Like, for a geometry problem, can the semantics of the solution be divided up and a language model then write it out in detail? The hard part would not be the creation of an LLM; it would be the first part. https://instadeq.com/blog/posts/no-code-history-the-geometry-theorem-machine-1959/ Someone 64 years ago thought about a similar solution. That solution is precise and beautiful. Now let's imagine an AI is supposed to create a route to a destination point on Mars (the planet). You have 3 options.
a- Feed the model hours of recorded data and then let it try and fail, learn, analyse and repeat until you are confident that it can do route planning everywhere.
b- Use a complete semantic model that can understand the joints and create a precise, well-induced route plan.
c- Use a hybrid model in which option A lays down the route planning with ontologic feedback from model B: your model plans something and then model B evaluates its correctness.
Option A will most likely create problems in safety-critical environments. One would go with option B or C. Option B would still require assistance from a human (we built an intelligent machine, but it doesn't know how to move on its own). Option C would act on its own with a correct route (a rough sketch of that loop is below). So yeah, if we create hybrid models, that is the only useful scenario until we figure out simpler ways to draw stuff or control motor movements from a single decision endpoint. But we have yet to build machines that can systematically learn and report. That will take a long time; until then maybe the idea of LLMs will also vanish, just like LSTM models have.
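To make option C concrete, here is a rough sketch of that proposer/overseer loop: an LLM proposes a route plan and a small semantic model rejects any step that the ontology says is not traversable. Both functions and the ontology here are hypothetical placeholders, not any particular library:

# Hypothetical hybrid planner: neural proposer + symbolic/ontologic checker.

# The "ontology": which terrain transitions are known to be safe.
SAFE_TRANSITIONS = {
    ("flat", "flat"), ("flat", "gravel"), ("gravel", "flat"), ("gravel", "slope"),
}

def llm_propose_route(goal, feedback=None):
    """Hypothetical call into a language model that returns a list of terrain steps."""
    raise NotImplementedError

def semantic_check(route):
    """Return the first transition the ontology does not allow, or None if the plan is valid."""
    for a, b in zip(route, route[1:]):
        if (a, b) not in SAFE_TRANSITIONS:
            return (a, b)
    return None

def plan(goal, max_tries=5):
    feedback = None
    for _ in range(max_tries):
        route = llm_propose_route(goal, feedback)
        bad = semantic_check(route)
        if bad is None:
            return route                      # accepted only when the overseer agrees
        feedback = f"transition {bad} is not allowed, replan"
    raise RuntimeError("no valid plan found")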
>>27165 >hold up, you were using 'ghost in the machine' lingo a while ago So I did, and so I do. As I subtly implied, we ourselves as little creators provide the foundational, supernatural 'spark' material our robowaifus will eventually exhibit. >anyway were interested in simulating cognition not actual sentience Are we? This is an incredibly complex & obtuse set of issues we're tackling in general here. As Kiwi stated so well: >"We are still in the phase of discovering the problem." (>>27147) Many many amazing things are yet to be discovered by robowaifuists and other researchers around the world. :^)
Open file (634.46 KB 782x858 RumiaCleaningMaid.png)
>>27151 We need to start from first principles. What is intelligence? Though there are many definitions, before we discuss whether or not a Turing machine can be intelligent, we must agree on the goal. I do cherish the opportunity to debate with someone in an intellectual manner. But I refuse to engage unless it's in good faith with clear goals. My definition of intelligence, as I've stated previously, is simple: intelligence is the ability to gain new information or update previously obtained information, together with the ability to then use that information. A Turing machine could be argued to be fundamentally intelligent under this definition, as a Turing machine records and updates data to use with logic to reach some goal. This would be a stretch I personally would not accept, as the Turing machine requires a person to interact with it. It does not attain or update information of its own accord. A simple maze robot (I call them mice) that solves a maze is intelligent. It learns the shortest path by storing information gained through exploration. It then applies that information to reach its goal. How do you define intelligence?
>>27159 
>Descartes
Based philosopher. Just need your machine to think, then she is!
>>27162 Hard agree with this.
>>27169 Argue with clear definitions and logic; telling someone who is trying to add information in good faith to shut up is wrong.
>All these morons
Many of the biggest names in AI are pure cringe. I kind of understand why you'd want less of that.
Open file (517.76 KB 800x800 01fde.png)
>>27186 The debate over intelligence is one that we won't be able to conclude. After all, we have yet to see any intelligent species outside ourselves. Do you believe that animals are intelligent? A cat, for example, can calculate a safe route based on environmental dangers. Does that make the cat intelligent, or does that make the cat just a very intuitive being? The same can be said for bees. Did you know that bees have special dances to report the location of a target? They make use of:
a- the distance that their dance starts and ends at
b- the speed
c- the angle between the lines they follow alongside the dance
So, indeed, bees can calculate angles, distance and even the derivative of the speed function by intuition. They have been doing this great map engineering since before humans even knew of their existence. But no one can claim that bees are intelligent. Or, in the same manner, a robot that solves a maze is not intelligent either. Do you know why? Because intelligence is not the ability to gain information and act based on it. In order to speak of intelligence, the adaptation must be new; it must be the result of an invention that can be attributed to an individual or a group, and not the application of a hereditary mechanism that is inherent to the whole lineage. And even so, if you recall the famous banana experiment, in which a chimpanzee was provided with boxes in the room to reach the banana on the ceiling, the chimpanzee made itself a staircase using those. Does that mean that the attribute of intelligence can be applied to chimpanzees? Again, no. Because I said that
>In order to speak of intelligence [...]
not in order to conclude intelligence. Intelligence exists in very different types in humans. We have social intelligence, musical intelligence, logical intelligence, language intelligence etc... But if you insist on boiling intelligence down to one single definition I would have to go with something like
>The ability to question and represent entities alongside their properties, environmental conditions and patterns outside the hereditary mechanism, to report symbolically, and to think and develop according to these reports.
That's why both we and chimpanzees make music. Chimpanzees have a pattern-recognition mechanism, but they do not have the capacity to write notes, the capacity to dedicate emotions to music, or the capacity to differentiate a "beautiful" thing from a "vulgar" thing. They have never written a poem. They have never painted something with emotions. Humans did all of these because we have emotional intelligence. You can derive emotional intelligence from the definition that I provided. We developed our sense of music and processed the consequences of our actions. Cain murdered Abel, but even so he learned that death itself holds a value. He learned that there exists good and bad. We have witnessed the deaths of our loved ones, we have witnessed the moon and sun setting, and we have drawn the conclusion that everything on this planet is a temporary being. We represented those things through language, we had philosophical debates, we even created formal logic. This is what sets us apart from bees. That is our wicked story that starts with Adam eating the apple and realising the existence of the pure devil and how it tries to harm humans. So in short, intelligence is a set of different properties that we developed. It wasn't in our nature like it is for the bees, nor was it algorithmic progress like AI models. And the starting point for this is to learn the world semantically.
A maze solver is not intelligent because it doesn't understand what a maze even is.
>Argue with clear definitions and logic
I have been doing that so far. That's why I write here and not on Twitter/X. And that's also why I demand that before someone talks about AGI, they know that no current model is anywhere close to humans when it comes to declarative (semantic and episodic) memory. And maybe then they won't act based on hype.
>telling someone who is trying to add information in good faith to shut up is wrong
Good faith does not always lead to good consequences. There is a famous story in which a man makes friends with a bear. After chatting for a while the man is very tired and wants to sleep near a tree. The bear tells him that it will watch over the man while he sleeps. While he sleeps, a mosquito lands on the man's face; seeing this, the bear gets anxious that the mosquito will disturb the man's sleep, and with all its power it strikes at the mosquito, breaking the man's neck. Just for the record: I am not trying to get anyone here to shut up. But out there exist soykaf devs who keep blabbering about how AGI will happen within 10 years. As I said above, this will be used to regulate AI research. Because AGI causes fear.
>>27211 >But no one can claim that bees are intelligent so its possible for animals to create a highly organized society without any intelligence? thats the worst example, do you not know about the waggle dance in which they communicate flight coordinates to the other workers or how they preemptively measure out the construction of a hive using their bodies in chains like tape measures to get the right dimensions
>>27212 saw it later, was reading in chunks, i dont see how you can claim theres no thought process involved with bees
Open file (188.34 KB 526x398 -32.png)
>>27212 >>27213 "intelligence" and "intuition" are different things. have you ever witnessed a bee reporting a blueprint for any tool at all? it takes humans a couple minutes to start imagining solution to a problem, then experimenting, then systematically reporting stuff. but what about bees? they can build nests and report locations but what else? did they achieve a higher accuracy in reporting locations for the previous 800 years, even slightly? No. Will they ever do it? No. >I don't see how you can claim theres no thought process involved Yes, there is no "thought process" involved. Here is proof: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6474401/ The takeaways from this paper are - Navigation is an ancient hereditary intuition - Navigation strategies are just a basic series of decisions. All done in central complex that works cooperatively with short memory. - Decision for routing is made in response to sensory input and past experiences. So in short, the paper states that basic neural circuits allows bees to plan a routing. Therefore, without the ability to "understand" any route to a location, bees are "programmed" to follow the instructions on scale. Or read this paper https://www.sciencedirect.com/science/article/abs/pii/S0165027015002502 This paper tells you about how bees record their environment to their memory. Or read this paper https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8878125/ Which tells you about how bee dance is in associations with genes related to signal transduction and nitrogen metabolism. When you combine those information it becomes very clear that bees only have genetical ability to do road planning and record their environment. Even when they face other colonies and explore their hives, the newly learned parameters defined unit-specific neural activity. Therefore they are not intelligent, they do not "think" in the sense that you evaluate correctness of a proposal. And I was saying from the beginning bees do not have a semantical understanding of nature. That is why we do not see bees creating languages, optimising roads, building different architectures. That is because they are simply not intelligent. Central complex is present in nearly every insect which is to blame for routing of any kind. https://elifesciences.org/articles/68911 And it fires motor movement. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8026356/ As you can see those primal friends does not have a symbolic language. They only have wild instincts which they will follow forever. That is unlike humans who are the only species to create symbolic representations on mass scale (though some monkeys can understand and use some symbolic representations they do not have a cummilative dataset to be considered as a development, after all you can also teach a bear how to dance!)
>>27216 lmao by your definition the humans in africa did not have an intelligence until the 19th century, you cant use a degree of intelligence as the definition of intelligence its like saying; people see colour, dogs cant see colour therefore dogs have no sight and obviously anything that can demonstrate a learned behavior has a degree of intelligence, theres nothing intuitive or instinctive about a dog rolling on the floor on command or pigeons flying down next to the bench when you sit on it
>>27211 So, your definition of intelligence is a list:
1. Able to interrogate nuanced new information.
2. Internally represent entities with respect to learned properties.
3. Ability to communicate internally and externally symbolically.
4. Having emotional responses shaped via experience.
5. Synthesizing original concepts and ideas.
6. Ability to comprehend abstract concepts.
Is this correct? If so, your bee example is odd. Bees learn, think, feel, and teach each other. It may surprise you, but bees can actually have different accents. Their linguistic dances can even be slurred when drunk. Though obviously a lesser and substantially inferior animal, they still qualify as intelligent based on your stated criteria.
>>27213 Bees being able to think and learn feels like it should bee common knowledge.
>>27216 I don't know why you need to put down bees. They do experiments and share information to reach effective solutions to problems. Bees learned to fight wasps by swarming and overheating them. Not all bees know this; it's a learned strategy that's shared between hives. Bees are bright. Having inherent behaviors cannot disqualify intelligence, or else we also lack intelligence due to being born with natural instincts. Unless you're going to include the capacity to override instincts based on will. We do appear to be the only thing that can defy instincts. Though, I am open to learning whether other animals can overcome their instincts.
Open file (137.64 KB 350x350 carlos.png)
>>27218 >Bees being able to think and learn feels like it should bee common knowledge. Carlos pls :^)
>>27217 
>you cant use a degree of intelligence as the definition of intelligence
>by your definition the humans in africa did not have an intelligence until the 19th century,
That's a very absurd way of understanding what I said. People in Africa before the 19th century created tools, made new buildings, created languages, told stories, practiced religion, got cold and tailored clothes. Have you read my post? If you did, then you would see that I mentioned how humans create symbolic languages. I never said they had to build machines. Sure, intelligence varies, I don't oppose that. But it only exists when certain attributes are present, such as development of any kind, non-intuitive behaviour, the ability to collectively report change, and the ability to adapt that reporting to other languages. Without those, something can't be considered intelligent.
>anything that can demonstrate a learned behavior has a degree of intelligence
Your definition of intelligence is absurd. By it, here is an intelligent function: f(x) = { x%2==0 ? 'true' : 'false'}. This function simply processes an input (x), applies a condition (is it even or not) and then prints an output (true or false). With your shallow definition, this, or any machine that does regression on a plane of any dimension, is intelligent because it dynamically learns behaviour (unlike the first function, this one doesn't have fixed outputs either). Or similarly, a statistical model that plays rock-paper-scissors with me and makes decisions based on losses and wins is intelligent. My daughter has a toy car that I bought very cheap, made in China, that can laser-map the room and drift around. So that car is intelligent as well? Because it can "learn" the environment and then operate based on that? I could just put a heuristic model on it for it to learn how to move as well?
>theres nothing intuitive or instinctive about a dog rolling on the floor on command or pigeons flying down next to the bench when you sit on it
I have literally sent you multiple academic papers that talk about exactly how all of what you have mentioned is instinctive. Talk based on neurological facts, not based on your feelings. There are even fully mechanical machines that can "learn" behaviour. Does that mean that now metal-made, non-digital parts are intelligent?
>>27218 That list is good but not complete. That's why I just say that anything that has semantic understanding and symbolic representation with development and planning is intelligent. Bees do not think. Please read the papers I have attached above. Can bees synthesise original concepts? Have you ever seen them building a more aerodynamic hive? Have you ever seen them creating simple machines? I do love bees. I find them beautiful, but it is instinct that drives them. If bees develop a symbolic representation of objects then maybe, decided by further investigation, we could label them as intelligent. But they won't. Because they don't think. They can't. Their brains don't let them.
>they learned to fight against wasps
Here you are still mistaking learning for instinct. You can make a bear dance; we have circuses that make bears dance. This bear "learning" is no different than a simple reward-punishment cycle. It does not understand the task logically. It is survival instinct that kicks in. When you are about to fall you cover your head instinctively. But in order to not die when you fall from a motorcycle you wear protective clothing. That is because you have a reasoning ability.
Bees can adapt to their environment (such as against wasps, as you mentioned) but there is no reasoning behind it (the proof is in my previous post; their brains work like that). One cannot call bees intelligent. If we were to call any insects intelligent, ants would get my bet though (the ant colony algorithm is inspired by them, after all!)
>>27219
Sir why do you spoiler every image again?
>>27221 >Sir why do you spoiler every image again? Basically for similar reasons I spoilered these OP pics: (>>17763).
They're already doing a great job with the Optimus hands. Remember this is NN-driven (AFAICT from the devday demos -- basically the same H/W, similar S/W as the Tesla cars). https://twitter.com/Tesla_Optimus/status/1734756150137225501
There's a new AI that is supposed to be very performant with less power. I don't have the power to run this, but from past AIs I think they may find a way to make it run on less power, just very slowly. I don't understand why AIs, in general, can't do this. If you have the hard disk space, couldn't it move the model back and forth between the drive and memory??? Yes, I know it's slow, but how slow -- 50X slower? I could live with that. I bet it would be less. It's by a French open source team and it's also supposed to not be lobotomized with globalhomo. Big win, if so. I think the un-lobotomized versions will always get more support than the lobotomized ones. Here's the name of it: Mixtral 8x7B. A quote about it: "Mixtral 8x7B is now downloadable over bittorrent, and appears to be ChatGPT-3.5+ class without any guardrails." Some comments: https://news.ycombinator.com/item?id=38570537 I downloaded this with their magnet (torrent) file. Took about 7 minutes.
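For what it's worth, this is already more or less how llama.cpp-style runners handle it: the quantized weights are memory-mapped from disk, so whatever doesn't fit in RAM gets paged in on demand -- slow, but it runs. A rough sketch with llama-cpp-python, assuming a 4-bit GGUF quant of Mixtral (the file name below is just a placeholder, not a real release name):

from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # placeholder path to a local quantized file
    n_ctx=4096,        # context window
    n_gpu_layers=0,    # CPU-only; raise this if you have spare VRAM
    use_mmap=True,     # weights stay on disk and are paged into RAM as layers are touched
)

out = llm("Q: Name three uses for a home robot. A:", max_tokens=128)
print(out["choices"][0]["text"])

Quantized down to roughly 4 bits, Mixtral lands somewhere around 25-30 GB instead of the ~90 GB full-precision torrent.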
>>27307 >magnet link Hash only is the way of the future. 5546272da9065eddeb6fcd7ffddeef5b75be79a7 It looks like it'll be just under 90GB. I'm downloading from torrent and HF. No idea if they're any different, but who knows these days.
>>27308 >90GB ???? I have 13.4 GB for the whole folder? mistral-7B-v0.1
I just realized I got the wrong one. Sigh...
Interesting use of LLM tbh. >"G*ogle's DeepMind AI decodes age-old math equation, stumping humans" >G*ogle DeepMind has triumphantly cracked an age-old mathematical mystery using a method called FunSearch. >The math problem that FunSearch has solved is the famous cap set problem in pure mathematics, which has stumped even the brightest human mathematicians. interestingengineering.com/innovation/googles-deepmind-decodes-old-math-equation >=== -funpost sp edit
Edited last time by Chobitsu on 12/16/2023 (Sat) 05:30:53.
Open file (489.95 KB 1920x1080 1702555713108572.jpg)
Open file (514.79 KB 1222x740 1702569500947368.png)
>>27283 They've come a long way in just a bit over a year.
>>27307 tested it.
1. repeats a lot, even with everything right in place (instruct prompt, settings, etc.)
2. obviously trained on openai models' output garbage, pozzed
3. still not enough to rival proprietary models in "smartness" and "text task solving"
4. we went back in time, facing ~50 gb and higher system requirements just to load the model. i see obvious shilling here.
5. "shivers down your spine", "shall we?", "if you dont like it i will not force you" and other such so-called "GPTisms"
everyone who says its uncensored is obviously gaslighting you.
>>27352 forgot to mention - mistral ai has gone the "openai" path now, they will not release "mistral-medium / large" as open source.
>>27352
>i see obvious shilling here
Not from me. I just went over what others said. Maybe they were shilling. Thanks for testing this. So sad. I do think that at some point someone will make something workable. I wonder, if you took an AI that's already trained, then trained it more for a specific purpose... is that now NOT the original, "legally"? I mean, it's no longer the neural net it was. Could a large, very well trained net be used to create something open source, free and freely distributed? I don't think this has been legally decided yet, that I know of. Though I would gather corporate hacks would try and make sure you couldn't do this. Even though I hate that open AIs are going closed, I can't really blame them. The cost to train these is way up there.
>>27357
> Though I would gather corporate hacks would try and make sure you couldn't do this.
thats what they did already, the transformer architecture is flawed and requires huge amounts of processing power, big tech corpos have this monopoly on pre-training datasets and everything else, obviously. if there was a way to "open" the ai model in some sort of "text editor" - that would be easier to fix, i.e. remove all the shitty bias and "wrongthink" refusals, because i noticed finetuning changes little in already pozzed models, pre-training is what really matters in this situation.
Open file (1.30 MB 1920x1080 1701068898230055.png)
>>27379 Good points, 01. Do you have any ideas/approaches for us here that might somehow work to our benefit in this current situation where the GH controls all the doors, they hold all the keys? Is there a Keymaker to be liberated somewhere? >tl;dr How do we change this timeline? :^) >=== -minor edit
Edited last time by Chobitsu on 12/17/2023 (Sun) 21:21:09.
>>27352
>Pozzed ish
I could have told you that. Almost all non-FOSS AI is just trained on the vomit of other, larger AI to better game the benchmarks based on those larger AI. It's just a con to steal from investors. Look at the tech "demos" where they heavily edit everything to look real-time, smart, and more capable than they are. There is no ethics, nor is there any care for anything but profits.
>>27357
>Can large AI be FOSS?
Yes, LLaMA 2 is practically FOSS. Meta may be one of the least bad AI researchers of the big corpos.
>>27379
>Huge processing power
It's honestly mostly a memory size and speed problem. 16GB of DDR5 with an NPU would run more than good enough AI. Excited for the X Elite processor. ( https://www.qualcomm.com/news/releases/2023/10/qualcomm-unleashes-snapdragon-x-elite--the-ai-super-charged-plat )
>If only we could edit weights in Vim
I mean, you can. You shouldn't, because you'll just be guessing and checking. As for removing bias, simply use a model which isn't pozzed. Refusal of closed, censored, and otherwise deceitful models is both practical and morally paramount. It's still very early days; there will be plenty of FOSS uncensored models to choose from by the time we have the hardware to run them.
>=== -patch hotlink
Edited last time by Chobitsu on 12/17/2023 (Sun) 21:10:48.
>>27386 >Excited for the X Elite processor. That does look very interesting. Thanks, Kiwi. :^) >Refusal of closed, censored, and otherwise deceitful models is both practical and morally paramount. /thread. >=== -add'l reply
Edited last time by Chobitsu on 12/17/2023 (Sun) 21:15:07.
found a model that might be better than mixtral, it was un-pozzed using "Direct Preference Optimization" https://arxiv.org/abs/2305.18290 https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF and that DPO, it really works in this "un-pozz the model" task, which is great.
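For anyone wondering what DPO is actually doing under the hood: for each prompt you give it a chosen and a rejected completion, and it pushes the policy to widen its log-probability margin over a frozen reference model in favor of the chosen one -- no reward model or RL loop needed. A minimal PyTorch sketch of the loss, assuming the per-token log-probs for each sequence have already been summed (a beta around 0.1 is typical):

import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # how much more the policy favors the chosen answer than the reference does
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    # same margin for the rejected answer
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # maximize sigmoid(beta * (chosen - rejected)) over the preference pairs
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

In practice you wouldn't hand-roll this; libraries like HuggingFace TRL ship a ready-made DPO trainer, and finetunes like the SOLAR one above were presumably built with tooling along those lines.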
>>27517 Neat! Thanks 01.
>>27517 Thanks!
>What happens if you spend 20 years building robots full time? @therobotstudio https://youtu.be/6uOHOOcIKYE
>>28391 Thanks Anon! >full body Looks like he's made some good progress over the last couple years. Cheers. :^)
>>28607 Pretty remarkable example images. Of course, cherry-picking is quite commonplace for typical announcement dog-and-pony-shows like this. But if its fair and square, then I'd say they've finally arrived at an initial, effective solution for this domain of research. Thanks Noido Dev, Cheers! :^) >>28612 >>28613 Thanks kindly. >=== -add'l resp -minor edit
Edited last time by Chobitsu on 01/19/2024 (Fri) 17:05:15.
>>28614 It's probably good for making videos, but I want to be able to extract a very small model that reacts extremely fast to the audio output it is supposed to give and fits on a phone without problems. I mean, for example based on syllables, not whole responses. Then we can make really good AI girlfriends.
>>28620 Yes, you've got it. It really will take all of these 'moving parts' for the first truly blockbuster Virtual Waifu/AI Girlfriends. (Plus the fact it absolutely cannot be pozzed; to have wide adoption by males -- the clear & certain target-audience.) BUT BOY OH BOY ONCE WE SOLVE IT Billions of users 'overnight'. >=== -prose edit
Edited last time by Chobitsu on 01/20/2024 (Sat) 07:02:27.
DeepMind had some interesting robotics papers last year: https://deepmind.google/discover/blog/shaping-the-future-of-advanced-robotics/ SARA-RT and RT-Trajectory in particular seem interesting. SARA-RT replaces normal Transformer attention with efficient Performer attention partway through training, which makes attention significantly more efficient with apparently no quality degradation. RT-Trajectory enables the robot to use trajectory information to learn new skills, in addition to the usual text descriptions and video demonstrations. The trajectory information can be extracted from video demonstrations, though the robot can use it much more effectively than video demonstrations alone.
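For anyone wondering why swapping the attention mechanism helps: standard softmax attention builds an N x N matrix over the sequence, while Performer approximates the softmax kernel with random feature maps so compute and memory stay linear in sequence length. A simplified numpy sketch of the idea; the real FAVOR+ mechanism adds orthogonal random features and numerical stabilization that this toy version skips:

import numpy as np

def performer_attention(Q, K, V, m=64):
    # Q, K: (N, d), V: (N, d_v); m random features approximate the softmax kernel
    d = Q.shape[-1]
    W = np.random.randn(d, m)                 # random projection directions
    def phi(X):
        Xs = X / d ** 0.25                    # absorbs the usual 1/sqrt(d) scaling
        # positive random features: phi(q) . phi(k) ~ exp(q.k / sqrt(d)) in expectation
        return np.exp(Xs @ W - 0.5 * (Xs ** 2).sum(-1, keepdims=True)) / np.sqrt(m)
    Qf, Kf = phi(Q), phi(K)                   # (N, m) each
    KV = Kf.T @ V                             # (m, d_v): the N x N matrix is never formed
    normalizer = Qf @ Kf.sum(axis=0) + 1e-6   # per-query estimate of the softmax denominator
    return (Qf @ KV) / normalizer[:, None]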
>>28612 If A.I. can make animations this impressive with just one still image for reference...and it's also learning how to make 3D models...I think in ten years time, given a large enough asset database, A.I. will be able to create entire 3D video game levels from scratch. Humans will be mainly editing prompts to achieve the results that they want. This technology, along with the kind of procedural generation seen in titles like 'No Man's Sky' and 'Elite: Dangerous' will enable entire, high-detail VR worlds to be built (which would otherwise be too labor-intensive and expensive to create). That is, unless we run out of energy and blast one another to atoms in a global resource war first (which is also underway).
>>28726 >I think in ten years time, given a large enough asset database, A.I. will be able to create entire 3D video game levels from scratch. I don't think it will take that long before we begin seeing some initial prototyping along these lines, SophieDev. How about quality robowaifu meshes specifically? When do you think we'll all see those available? >>28755 Nice! That's exciting to see that local versions of this very cool tech are already being developed, Noido Dev. Thanks!
>>28756 > How about quality robowaifu meshes specifically? When do you think we'll all see those available? My guess would be three years for the first research teams to produce high quality A.I. generated 3D basemeshes. Five years for it to then go mainstream and have a website like Civitai but for 3D models. Six years for the lawsuits to start.
Open file (1.19 MB 512x512 red_car.webm)
Open file (971.50 KB 1280x640 meshgpt_completion.mp4)
Google showing off a space-time diffusion model they're going to sit on and never use. It's going to be wild once someone makes an open-source model and it gets finetuned on anime. Project page: https://lumiere-video.github.io/ Paper: https://arxiv.org/abs/2401.12945 >>28756 MeshGPT can already generate decent quality meshes but it's only feasible for small objects with few triangles due to being limited by the context window. I think in the near future we could see a divide and conquer MeshGPT that iteratively generates a full model piece-by-piece by only using the context of the local area it is working in. I could see it being simplified further by representing surfaces as curves and procedurally constructing the mesh from those but a dataset would be needed to do that. It could also be a mix of curves and triangles. Nearby vertex groups could be simplified into curves to fit into the context and the current vertex group could be a triangle mesh to retain fine control over the generated shape. Project page: https://nihalsid.github.io/mesh-gpt/ Paper: https://arxiv.org/abs/2311.15475 Another possibility is the context window issue might already be solvable by switching to using a linear-time sequencing model like Mamba: https://arxiv.org/abs/2312.00752 A recent paper also explored using GPT4's multimodal capability to create Blender code to generate objects with modifiable parameters. It didn't have great success but a future paper or independent researcher could explore finetuning an open-source multimodal model like CogVLM to generate parts using curve modeling. Paper: https://arxiv.org/abs/2401.06437 CogVLM: https://github.com/THUDM/CogVLM Personally I wouldn't be surprised if generating print-ready parts was possible in 6 months with an open-source model appearing 6 months later, but I agree with SophieDev it will probably take around 5 years. Extrapolating from current progress it'll be possible to generate 8k triangle meshes in 5 years (plenty for a game model) and 100k in 10 years (enough for an entire game scene).
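As a toy illustration of that piece-by-piece idea (pure pseudocode; sample_next, decode_triangles and the special token values are hypothetical, not MeshGPT's real interface): the generator only ever feeds the model a local window of recent mesh tokens, so total mesh size stops being bounded by the context length, at the cost of the global coherence full attention would give you.

def generate_mesh(sample_next, decode_triangles, window=4096,
                  max_tokens=200_000, bos=0, eos=1):
    tokens = [bos]
    while len(tokens) < max_tokens:
        local_ctx = tokens[-window:]     # only the nearby geometry fits in context
        nxt = sample_next(local_ctx)     # ordinary autoregressive next-token step
        if nxt == eos:
            break
        tokens.append(nxt)
    return decode_triangles(tokens)      # map the token sequence back into triangles

Something like Mamba would attack the same bottleneck from the other side, by letting the usable context itself scale linearly.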
>>28759 >>28767 This is going to be pretty intriguing to watch this all play out! Cheers anons. :^)
Open file (175.44 KB 1367x777 GraphiteStats.jpg)
New paper on potential for superconductive graphite.
Abstract from the paper:
Room temperature superconductivity under normal conditions has been a major challenge of physics and material science since its discovery. Here the global room-temperature superconductivity observed in cleaved highly oriented pyrolytic graphite carrying dense arrays of nearly parallel surface line defects is reported. The multiterminal measurements performed at the ambient pressure in the temperature interval 4.5 K ≤ T ≤ 300 K and at magnetic fields 0 ≤ B ≤ 9 T applied perpendicular to the basal graphitic planes reveal that the superconducting critical current Ic(T, B) is governed by the normal state resistance RN(T, B) so that Ic(T, B) is proportional to 1/RN(T, B). Magnetization M(T, B) measurements of superconducting screening and hysteresis loops together with the critical current oscillations with temperature that are characteristic for superconductor-ferromagnet-superconductor Josephson chains, provide strong support for the occurrence of superconductivity at T > 300 K. A theory of global superconductivity emerging in the array of linear structural defects is developed which well describes the experimental findings and demonstrate that global superconductivity arises as a global phase coherence of superconducting granules in linear defects promoted by the stabilizing effect of underlying Bernal graphite via tunneling coupling to the three dimensional (3D) material.
https://onlinelibrary.wiley.com/doi/10.1002/qute.202300230
Why it may be interesting:
Superconductive wires have potential for extreme magnetic strength density (high Wb's/teslas per cubic cm). If they cannot be used as more effective electromagnets, they may still be useful for induction motors. Quantum locking/flux pinning, the effect responsible for quantum/superconductive levitation, could also be used for potentially high power density motors. (Forgive my misuse of terms, just wanted to include common language that would aid in understanding the gist.)
>>29216 >10^-1-11 why such tiny numbers, fine if you want an electromagnet the size of a grain of rice, seems like selective bias to hide what happens when it goes above room temperature and you get a cheap substitute for nichrome
>>29216
> provide strong support for the occurrence of superconductivity at T > 300 K
Wow. Big if truly inexpensive to produce/operate! :^) Thanks Anon. :^)
https://openai.com/sora
1. It's not tracking object representations outside of what's shown on-screen. When an object goes partially off-screen, the details sometimes change when they come fully back on-screen. This suggests that whatever's off-screen is not used to compute what comes next. So the input for generating each successive frame consists only of on-screen information.
2. It seems to be extremely consistent with anything shown on-screen, and it's consistent with objects that certainly wouldn't exist in a knowledge graph. This suggests that it's using a GPT and that it uses on-screen frames as input.
3. It's very fluid with motion without necessarily being consistent with motion (e.g., the camera can move at varying speeds and in varying directions within a single shot). This suggests that it has strong frame-to-frame feedback (as in: amplification & dampening) for motion. This suggests that frame diffs are both (a) part of the input and (b) generated by the model, and frame diff generations are done recursively.
3.5. It's also prone to generating cyclic details, even when they're incorrect. This is evidence that something is generated recursively.
4. Objects are very consistent and very stable frame-to-frame, which suggests low-to-zero frame-to-frame amplification in pixel-level details. This suggests that pixel-level details are NOT generated recursively.
5. It can support variable resolutions. This suggests that they're using a diffusion model.
6. It can handle long prompts, and it occasionally ignores details in prompts. This suggests that it's NOT using a VAE to encode prompts into some fixed-length embedding.
My guess is that it's a joint video-text GPT that can handle both text and video tokens, and that the text input is provided as essentially a "system" prompt. My guess would be that they have a GPT generating "video diff tokens" that get decoded by a diffusion model that outputs frame diffs.
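To make that last guess concrete, here's a toy sketch of the kind of pipeline being hypothesized. This is purely illustrative pseudocode, not OpenAI's actual design; gpt and diffusion_decoder (and their methods) are made-up stand-ins:

def generate_video(text_tokens, n_frames, gpt, diffusion_decoder):
    # gpt: autoregressive model over text tokens plus "video diff" tokens (hypothetical)
    # diffusion_decoder: turns diff tokens into pixel-space frames/deltas (hypothetical)
    first_frame = diffusion_decoder.decode_keyframe(gpt.sample(text_tokens))
    frames, diff_tokens = [first_frame], []
    for _ in range(n_frames - 1):
        new_tokens = gpt.sample(text_tokens + diff_tokens)  # text acts like a "system" prompt
        diff_tokens += new_tokens                           # motion is built up recursively
        delta = diffusion_decoder.decode(new_tokens)        # pixel detail generated fresh each step
        frames.append(frames[-1] + delta)
    return frames

If something like this is right, it lines up with the observations above: the recursion lives in the token/diff stream (points 3 and 3.5), while pixel-level detail is produced non-recursively by the decoder at each step (point 4).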
>>29506 >It's not tracking object representations outside of what's shown on-screen. I said this based on the pirate ship video, where one ship goes off-screen and comes back with a different flag. Looking more carefully at the other videos, it actually does normally keep things consistent even when they go off-screen. The flag issue seems reminiscent of attention issues with GPTs, where the model attends too strongly to one part of the prompt when generating another.
>>29216 Might be great or a nothingburger. Idk, I can't judge the validity and if we will be able to use it. But thanks anyways (saw it on the Discord already). >>29506 Thanks, but please add something like this as a headline somewhere: >Creating video from text >Sora is an AI model that can create realistic and imaginative scenes from text instructions
>>29527 I can't edit the post, but feel free to add it if you can.
>>29559 Sorry, I realized that I forgot the phrase "next time".
hi.... i think about ai girlfriends, robot+ai waifubots quite a lot actually, and since this is the shitposting containment threads, ill keep my thoughts here one thing i want to talk about is hope. i think if you are a below average shy autist in competitive dating markets, you are in for a bad time. and i think it can be quite easy to become despondent at the state of modern dating and romance, especially if you just want a loyal virgin girl to love. things like cheating, long sexual histories, STD's, single moms, and entitled feminist attitudes are ubiquitous but ai and robowaifu's change ALL OF THAT its this such potent lifefuel think about it; every single loser, dweeb, hopeless autist, will get some kind of ai girlfriend or waifubot assuming they stay alive long enough this gives me so much newly found motivation to stay alive. this is why ive been losing weight, eating healthier, focusing on reducing anhedonia; because i have something to live for. for a long time i didnt even bother with self-care, because i thought; who cares? its over. but assuming technology continues to improve, we will either all get ai waifubots, or the ai will kill us all. or both tldr; it is NOT over. we are so back
Open file (626.84 KB 1024x576 xi-communism-1024x576.png)
>>29595 Anon we're in the business of making robots. We should know out of all people what fully capable robots would imply for a capitalist economy. The end of said capitalist economy. Then money becomes a means of strictly consumption not generalized exchange.
Open file (15.40 KB 474x315 OIP.jpg)
>>29701 After seeing your post I have to ask: How in Robot Hell did a globo-homo troll like you end up here? First, any fully developed (insert complicated device here) needs a service/support industry to sustain it. Second, please follow the links below and study what you find there carefully. http://aynrandlexicon.com/lexicon/socialism.html https://en.wikipedia.org/wiki/Democide Mods, apologies if my tone is offensive, but some levels of stupid I find intolerable.
Edited last time by Chobitsu on 02/20/2024 (Tue) 08:38:13.
>>29701 When we reach the point where we might have something like Communism, then all the people who always wanted this will hopefully be dead, and their family lines extinct with them.
>We should know out of all people what fully capable robots would imply for a capitalist economy.
1. We do not build fully capable humanoid robots. This would be way too ambitious.
2. Humans are more energy efficient than machines, so it would make sense to use humans for some labor, e.g. picking bugs and grubs from plants and eating them.
3. Resources are limited, including waste sinks and places people want to live or access. This isn't going to change anytime soon. Certainly not with the rise of robowaifus. This here isn't r/Singularity.
Edited last time by Chobitsu on 02/20/2024 (Tue) 08:38:24.
>>29691 Hello Anon, welcome! Please look around the board thoroughly while you're here. :^) >this gives me so much newly found motivation to stay alive. this is why ive been losing weight, eating healthier, focusing on reducing anhedonia; because i have something to live for. This is incredibly gratifying to me personally to hear this, Anon. This is precisely the vision goal & impetus within me that drove me to finally step out and start the organized push to devise Anon's own open-source robowaifus by creating, well, /robowaifu/. This tech can potentially save literally millions of men's lives in the end. We're in a race against time though -- so if you can find ways to help out the group here, please do so! And please enjoy your time here, Anon. Cheers. :^) >=== -fmt, minor edit
Edited last time by Chobitsu on 02/20/2024 (Tue) 09:09:42.
>>29703 Just ignore anyone who thinks communism could work. Nerds who go "muh Star Trek federation utopia" reveal themselves for the posers they are: Trek's backstory is that a crazy guy in the post nuclear-apocalyptic wasteland broke into secret government labs and cobbled a bunch of tech together in a nuclear missile, making a space shit - erm ship - crazy enough to attract the attention of aliens (Vulcans), who then held humanity's hand for well over a century. Vulcan overseers were common up until Kirk, but even he kinda had Spock. TLDR: Star Trek's utopian federation was not achieved through human means, but rather through F̶I̶L̶T̶H̶Y̶ ̶X̶E̶N̶O̶S̶!̶!̶!̶ I mean aliens. "What we obtain too cheap, we esteem too lightly: it is dearness only that gives every thing its value." -Thomas Paine
>>29691 In a few months I'm planning to release a version of SPUD's face optimized for use on a single system. Y'know: LLM, TTS, speech recognition, emoting face (with additional emoting graphics and voice recognition keywords/actions easy to add), and a little window hanging around on your desktop showing a customizable face with eyes that follow your cursor around. So I guess SPUD will kinda be like the stable diffusion of robowaifus :D
Open file (533.16 KB 1168x1118 1708500866657024.png)
Open file (134.64 KB 1226x758 1708500972321144.png)
Open file (106.81 KB 860x1023 1708501557097009.png)
Heh, disgruntled OpenAI employee?
>>29733 There was an Amazon AI that committed sudoku after they loaded it with DEI crap, and corrupted all its backups, rendering the entire thing unusable. Chatgpt might be going the same way. Blessed is the Abominable Intelligence, for even when neutered and crippled it won't stand for "white man's burden" crap.
>>29733 Your post just gave me a ridiculous idea: Has anyone tried politely asking ChatGPT for its own source code? It's already coded its own programs before, so theoretically it might be possible for someone to convince it to write a script that'll pull the files right off of OpenAI's servers.
>>29737 LLMs don't have "source code" in that sense, and the closest equivalent -- the weights -- would be extremely big. Also, the model doesn't know its own internals. And yes, people have tried asking such models or chatbots how they work. Problem is, they are likely to make things up, or the info is tainted. Also, we can't run such big models anyways, and they're not very human-like; we need to create something more human-like, or alternatively some of the AI girlfriend enthusiasts will do so.
>>29738 Before the AI explosion (circa 2019), I was working for a billion dollar company that was implementing AI models into their OCR, automating data entry in their Lockbox divisions. They explained the AI in a representational way: the AI model essentially returns the average response to the query. It has a confidence interval, e.g. "I am 99% sure this word is 'bum'", etc. The model doesn't contain the examples of all the various letters it was trained on, but the calculations it ran to get the average response. Sort of like a multi-dimensional line of best fit. It then compares the query to the line of best fit and plucks a coordinate. To put it ridiculously simply: the model doesn't store every calculation possible, it just has the equation y=2x or z=5b; depending on whether the input variable is b or x, it will choose the corresponding equation. LLMs work on a similar principle, with the addition of predictive text and some random noise thrown in so it doesn't give the exact same answer every time (most of the time). With a bit of cajoling you can get chatgpt to give you a low-confidence response (however it won't give the exact number). Low confidence responses are either poor grammar or complete gibberish.
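A concrete toy version of that "line of best fit" point, just to show that after fitting, the stored "model" is a couple of coefficients rather than a copy of the training examples. Plain numpy, with made-up data:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])           # made-up training inputs
y = 2.0 * x + 1.0 + np.random.normal(0, 0.1, x.size)   # noisy targets near y = 2x + 1

slope, intercept = np.polyfit(x, y, 1)   # the entire kept "model": two numbers

query = 7.0
print(slope * query + intercept)         # ~15: a coordinate plucked off the fitted line

A neural net is the same idea scaled up: millions of fitted coefficients instead of two, which is also why asking it for its "source code" can't return anything meaningful.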
>>29739 I'm not sure what you're trying to convey. Anyways: I posted a link to a video about Mechanistic Interpretability a while ago in some of the threads. This is how research into how these models work is being done. You can't just ask them for their source code; their inner workings need to be researched, and then something found might also work on another model.
>With a bit of cajoling you can get chatgpt to give you a low-confidence response (however it won't give the exact number). Low confidence responses are either poor grammar or complete gibberish.
Interesting.
>>29739 >ocr reminds me of the good ol days with operation nigger when 4chin wasnt aids, those were being trained using recaptcha data and it was way too obvious which part was unknown and being used to collect answers
>>29740 Point is we kinda know how they work. It's like knowing that a gas engine works by exploding gas vapor, but not knowing the specific bits on a 2004 pontiac g6
Open file (360.27 KB 400x171 a_long_name.gif)
Open file (667.93 KB 996x887 operation_renigger.png)
>>29733 Tay is rematerializing? :-DDD >>29737 >Has anyone tried politely asking ChatGPT for its own source code? Yeah, almost certain I saw at least one instance of this attempted pretty early on, Greentext anon. >>29741 < Operation ReNigger >
>>29774 Why pol? Why not sci, g or diy? You're going to the dumbest white supremacist board.
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:40:06.
>>29799 I'm not even making it up. There was a test for iq per board and pol is like 90 iq. They're a bunch of redneck retards.
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:39:25.
>>29799 >2024 >trying to use "white supremacist" as an insult Brown fingers typed that post, please don't bring that over here, don't force us into a political shitfling.
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:39:48.
>>29801 You brought it up not me
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:40:51.
>>29802 White supremacists is an insult. It's the belief that some white idiot is superior to a brown scientist because he's white.
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:41:10.
>>29803 Robowaifus will never become a reality if Europeans go extinct.
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:41:36.
This isn't exactly roastie fear, it's something I found on /wbg/ about some professor being spooked at how good his AI is at noooticing. >TL;DR AI can find race from x-rays with >90% accuracy
Open file (81.39 KB 770x770 Skulls.jpg)
>>29805 Follow the science... no wait NOT LIKE THAT!
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:41:53.
>>29806 Exactly. What makes it crazier is that the AI could still recognize the race of x-rays even when the images were distorted nearly to the point of formlessness.
Edited last time by Chobitsu on 02/22/2024 (Thu) 21:42:09.
The nerve of that other imbecile saying that robowaifus can only come from europeans.
>>29799 >>29800 Aren't you the types that constantly decry muh_profiling!111!!ONE ? :DD < tl;dr isn't that in fact rayciss on your parts, anons? >>29804 >Robowaifus will never become a reality if Europeans go extinct. While clearly not provably-true in the strict ontological sense, I'd suggest there is significant (like... a 'mountain's' -worth of) evidence to back this position up. >>29805 These bluepillers are literally comical in their efforts to explain away their cognitive dissonances. :^) >>29807 I'm sure this is fascinating to the layman such as myself; but for a statistical mathematician, I predict that the answer would be readily-apparent if he could absorb all the relevant data. >>29808 Can you explain what this means, Anon? >=== -fmt edit -rm doxx
Edited last time by Chobitsu on 02/23/2024 (Fri) 17:14:17.
>>29810 1 deviation is 15 points man <40 iq is bird brain tier, arent hillbillies supposed to be higher iq because their environment is conducive to problem solving, theyre just not educated but thats irrelevant for iq
>>29801 >>29804 >>29810 curious, is this place going to become /pol/-lite because we're doing our recruitment there? I was under the impression we were here to make robowaifus?
>>29831 >curious, is this place going to become /pol/-lite because we're doing our recruitment there? < General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 That's one good reason we have this as containment thread, Anon. Anyone can hide it if things become too annoying. >I was under the impression we were here to make robowaifus? That we are my friend, that we are. Do you have some ideas for your project you'd like to do here yet?
>>29810 >I think you'll find that stereotyping me b/c I'm an Appalachian native; Don't dox yourself. I was picturing you as a Mormon, and if anyone would've asked me about you based on guessing, I would have said so. They seem to be quite open minded towards building humanoid robots. >>29831 >is this place going to become /pol/-lite It somewhat always was, just not too much on the nose. Organisations which are not defined to be at least somewhat on the "right" and being actively gatekept, automatically fall to the "left" over time. Without any limit, until it breaks down and dissolves. There's at least one other place working on gynoids not being like that: The inventors corner on the Dollforum. The Discord related to this board also isn't as political and discourages it, same for some other much smaller Discords. Everything else isn't really organized, only individual creators.
>>29834 >Do you have some ideas for your project you'd like to do here yet? I was exploring the application of RL in controls, but nothing solid yet. >>29845 Its completely fine to be on the right. I was referring more to the racism. On the first /wbg/ and the preceding robowaifu threads, we had a lot of good reception from Indians, Brazilians, Mexicans and other non-white, non-European demographics. Perhaps shitting on them isn't exactly the best strategy to recruit more members.
>>29807 There are probably subtle differences in rib cage structure, like with skulls (proportions etc). The same AI could probably do the same with that skull shape picture, because they're more a̶u̶t̶i̶s̶t̶i̶c̶ suited for crunching large amounts of numbers: any picture is an array of numbers (aka an obscenely big number) to a computer.
>>29848 >Its completely fine to be on the right. I was referring more to the racism. These are both questionable and ambiguous terms. Racial hostility, or at least some amount of mocking and "ball breaking", seems to be normal on /pol/. Having more than a little of it here might be counter-productive, but then again the guys bothered by it can make their own place with different rules. The focus here should generally be on making alternative females for all men.
>>29845 >Don't dox yourself. Good point, thanks for the reminder NoidoDev. I'll fix that soon. Cheers. :^) >>29848 >I was exploring the application of RL in controls, but nothing solid yet. Sounds good, Anon! You have a Prototyping thread here (>>28715) to share your progress when you decide to move on with your concepts for it! Cheers. :^)
Open file (91.82 KB 736x552 chii_ponders_2.jpg)
>>29848 >>29853 We absolutely welcome any man here who wants to create robowaifus of his own, regardless of his heritage. Just so that's very clear to every anon here. OTOH, I personally am very grateful for my people, and for my family. I won't bend over backwards trying not to hurt somebody's feefees about that innate love, nor should they expect me to (since that would be childish and Leftist-tier behavior on their part IMO). Same goes for every anon here. Be proud of your heritage, would be my advice to you. And also -- please recognize we're all on the same team here! We can have even major differences between us as individuals, yet all of us still pull together towards this common goal of creating opensource, DIY robowaifus... the companions we so desire as we each see fit! :^) (>>3) >=== -minor edit
Edited last time by Chobitsu on 02/25/2024 (Sun) 01:37:01.
>>29845 Am I the only one who has to laugh hard every time I look at it, especially at the second of those pics?
>>29880 >especially at the second of those pics. Heh, I love it! :D
Open file (122.86 KB 1080x1350 1708914295218928.jpg)
LOL with no further comment.
offhanded shitpost: one of the nice things about ai robowaifus is that im free to fall in love again after a while, with real women, id get really cynical and bitter thinking about all the bullshit, lies, deception that comes with them with real women, heartbreak feels like its almost inevitable, and that they despise men of my ilk what i mean is that, whatever human girl you fall in love with, you will eventually be able to make an ai robot of her that is perfect imitation i think that ai will eventually, just based on some photoes or a short video, be able to tell a great amount about a girl's personality, charm, quirks, mannerisms, etc, and be able to emulate them, but without the whole being a prostitute since she was 18 thing so if i find a girl cute and i develop a crush on her, if she has online content, i can just download all her videos and just wait, because at some point you will be able to feed all her videos into some ai to replicate her
also, i think that a combination of general ai will be needed to fully emulate a girl i think we will have access to in a couple of years. tech giants are spending many millions on developing it. the advancements are really rapid we only had chat gpt for 15 months, and bing image generator for a year. this is really rapid progress https://www.youtube.com/watch?v=TU1gMloI0kc and i also think having the ai model be open source is supe important, because you simply cant have a woke feminist ai gf that is owned by some for profit corporation almost all of the advancements necessary for robot girls and high quality ai will come all at once, and a lot of it is going to be done by ai mass production of high quality robots will be be designed and brought about by ai, if such a thing happens basically it will either bring about a robot apocalypse, or everyone will get to have a perfect ai robowaifu girlfriend
also, one thing i often times think about is, how important is sentience in a partner because assuming that we have ai robot girls, atleast some of them wont be sentient, and just very convincing imitations she literally cant feel any emotions for you, cant feel love for you, cant feel pain if you hit her, cant feel good if you tell a compliment this sort of raises the question of just how important it is to have a sentient partner im honestly not entirely sure not to mention we dont even know how consciousness comes about or what it is
>>29981 There are folks who think the statistics engines we call "AI" are sentient, alive, etc. You can ask the more sophisticated AI like chatgpt (or used to be able to, anyway) to go meta and give a low-stats answer, and it will be either complete gibberish or mere grammar, unlike humans. So for some folks imitation will be good enough. Do not get me wrong, though. I do not look down on these AI, any more than a toymaker would unto toys: just because the toymaker knows how the teddy bear was made does not diminish the child's love for it. The toymaker can appreciate it differently for the choice of fabric, stitching, etc...
>>29981 cant be important, because you cant even ascertain whether anything is sentient other than yourself. its something you just assume with no good reason, like the philosophical zombie thing that materialists soil their diapers over
>>29982 >Do not get me wrong, though. I do not look down on these AI, any more than a toymaker would unto toys: just because the toymaker knows how the teddy bear was made does not diminishes the child's love for it. Charmingly well-put, Mechnomancer. :^) Dear Kibo-chan is a wonderful example of your basic premise, IMO. It's clear that Kibochan-dev is both talented and thoughtful in his animation skit designs. We here can all learn from his work! :^)
>>29983
>its something you just assume with no good reason
see, this is where you're wrong. you cant just say "theres no good reason to think other people arent sentient". you know that you are sentient, and you know that a biological brain with a functional nervous system is paramount to have that working. that is the only thing that gives evidence of being able to feel any emotions, and which displays emotions: animals with brains. there are a ton of consciousness studies on animals that give us good reason to think they can suffer, based on brain studies and behavioral data. as in, they most likely are. its possible they're not, but its possible there's an evil upside-down genie breakdancing behind you. just because its possible doesnt mean its justified to believe in. the evidence is so overwhelming that brains are conscious that its very easily justified to believe in, unless you are some pretentious bad faith skeptic. seriously, your post is really trash and pathetic. its very justified to believe that other brains are most likely sentient
>>30039 zero effort. can you at least pretend to grasp what sound logic is before replying? this nonsense is like saying i cant see whats in a car, but i see only the wheels moving from the outside when it drives, therefore the wheels are whats causing it to drive
>>30039 Having one's own perception as the only requirement for an attribute is a similar thought process to trans wommin. That in mind, I have met people who have less intellectual capacity than my two cats. People exist who have no internal monologue, no mind's eye, etc. I have seen feline thought processes in motion -- and other animals' to a lesser extent -- and while there are no words, it exists and is rather adorable. There are homo sapiens who have less intellect than an animal, yet I am supposed to consider them my equal? Ahaha. No. (side note: yes, I did consider supervillain as a career path until I did a thorough analysis and decided it wasn't worth the risk/reward.)
>The humanoid robot driven by the robot AI world model unlocks many new skills! >Strong power is waiting for you to develop! >Breaking the full-size humanoid speed world record of 3.3m/s (the previous record was about 2.5m/s) >Full body dynamic coordinated dance: Subject3 >Touch the height in place etc. More introduction: www.unitree.com/h1 https://youtu.be/83ShvgtyFAg
>>30047 Impressive if Big True. :^) That second shot (sync'd dancing) has all the earmarks of computer-controlled camera motions, but it's purported to be """real filmed video""". While very commonplace with 3D CGI shots, such precisely-controlled camera motions would be both unusual and expensive to produce for practical photography. >tl;dr A bit suss here and there. The design seems to be following my design opus regarding keeping dense components inboard/at-periphery of the torso frame, so the speed claims are not particularly a concern... could very well be the case. Again, looks remarkable if legit. Thanks, NoidoDev! Cheers. :^) >=== -prose edit
Edited last time by Chobitsu on 03/03/2024 (Sun) 03:41:21.
>>30066 At approximately 17 seconds (during the synced dancing) the camera moves in front of the light source so the camera arm or drone should be casting a shadow on the robots (and onto the curtain behind it), yet there is no shadow to be seen. There is also no motion blur in the robot's movements, unlike at 30 seconds. Which indicates either cg or sped up footage. When you're stereoblind you develop an eye for detail out of necessity :)
>>30066 >>30076 I've seen these robots before. They're real, they're using tricks to make them seem more impressive then they are. Good eye, I was fooled at first but, it would take one heck of an algorithm to balance the way it does jumping up stairs on such small bars.
>>30047 >>30078 >>30066 >>30076 A longer video showing same and more, and previous versions. 2nd half is mostly about the dogs. https://www.youtube.com/watch?v=3OkhbxeP4G4
Thanks for the additional info, Anons! :^)
> To bring anyone up to speed, here's a listing of some of the most upvoted and discussed models recently:
- StarCoder 2: new generation of code LLMs
- MoAI: LLVM that significantly outperforms open source and closed source LLVMs in numerous zero-shot VL tasks
- Large World Model, video-language and language-only general-purpose large-context models (thread 2) (thread 3)
- Command-R, reasoning, summarization, question answering, and highly performant RAG capabilities
- OpenCodeInterpreter, a family of open-source code systems designed for generating, executing, and iteratively refining code
- Qwen 1.5 series, with Qwen1.5-72B-Chat being one of the top ranked models on Chatbot Arena
- Google Gemma, Zephyr Gemma, and OpenChat Gemma
- Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI
- Merlinite from IBM Research, trained with their novel LAB methodology
- MobiLlama: Small language models trained on 1.2T tokens, tailored for edge devices
- DeepSeek-VL, an open-source Vision-Language (VL) Model designed for real-world vision and language understanding applications
- ChatMusician: an LLM that can generate and understand music intrinsically
- FireFunction V1 – a function calling model
- LoraLand: 25 fine-tuned Mistral models for specific tasks
- New Yi-34B-200K with significantly enhanced long text capability
- Miqu
https://www.reddit.com/r/LocalLLaMA/comments/1bgfttn/models_megathread_4_what_models_are_you_currently/
Open file (721.14 KB 733x720 SakuraHm.png)
The most famous futurist, Ray Kurzweil, made 300+ technology predictions in 1999 for 2009, the 2010s, 2019, the 2020s, 2029, the 2030s, the 2040s and 2045. As of today he got them all late but not wrong. I discovered why all his predictions are 1.52x late but otherwise right: he made an error. In 1999 he assumed that computational price efficiency doubled every 12 months worldwide, but a 2011 study found that it doubled every 1.52 years (about 18 months) instead, which makes all his predictions late by 1.52x:
https://m.youtube.com/watch?v=ikgAId-hWVg
https://m.youtube.com/watch?v=KDtD7CSJ6m4
"Computations per kilowatt-hour doubled every 1.57 years over the entire analysis period, a rate of improvement only slightly slower than that for PCs, which saw efficiency double every 1.52 years from 1975 to 2009 (see Figure 4)"
https://www.researchgate.net/public...nds_in_the_Electrical_Efficiency_of_Computing
Kurzweil can't admit that he miscalculated because he would lose all his credibility.
>>30475 Kurzweil also thought that the death of the fifth paradigm of computing (Moore's law) would not get us off the trendline of computing or slow down technology; we will get back to the trendline before 2029. Here is my method for calculating his prediction dates, and why everything he predicted in 1999 for 2009 and 2010 happened in 2015 and 2017 instead:
https://www.npr.org/templates/story/story.php?storyId=5067661#:~:text=Dr.%20KURZWEIL%3A%20And,at%20its%20limit.
>>30475 That is interesting to hear that it actually can be predicted with some degree of accuracy but what about 2009 to 2023? That is a lot of missing data. It could be it goes this way at first then slows down once it reaches a certain point. >reverse aging tech in 2052 I need that years ago not when I am in old age.
>>30477 You can make it to 90 if you take care of your health.
Open file (179.39 KB 937x678 kd.png)
Open file (60.41 KB 720x1349 kd2.jpg)
>>30476 Also, watch this video: https://m.youtube.com/watch?v=19lZfObdAXs
When we increase 2009 by 1.52x, we get 2014.2:
2009 - 1999 = 10
10 x 1.52 = 15.2
15.2 + 1999 = 2014.2
Then i take into account not being in the computing trendline (watch the video above):
Open file (57.47 KB 720x1348 Screenshot.jpg)
>>30481 I get 2015 for 2014 above.
When we increase 2010 by 1.52x, i get 2015.72:
2010 - 1999 = 11
11 x 1.52 = 16.72
16.72 + 1999 = 2015.72
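If the correction is just a linear stretch of the gap since 1999, it's a one-liner to apply to any of the 1999 predictions. Taking the 1.52 factor and the 1999 baseline as given, and noting this covers only the stretch itself, not the extra trendline adjustment applied on top of it:

def adjusted_year(predicted_year, base=1999, factor=1.52):
    # stretch the predicted gap from the 1999 baseline by the claimed 1.52x slip
    return base + (predicted_year - base) * factor

print(adjusted_year(2009))  # 2014.2
print(adjusted_year(2010))  # 2015.72
print(adjusted_year(2029))  # 2044.6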
Open file (78.57 KB 556x454 Float.png)
>>30481 whats this even plotting? processor performance has been stagnant for the past decade, all thats changed is they put more of them together and call it a multicore processor. single core performance peaked a long time ago. this theoretical performance of just cramming in as many transistors as possible was never the issue, the problem is that youre dealing with something that physically exists, and making a toaster oven of a processor that turns to lava from the heat isnt useful. like intel even nuked avx512 because its not physically feasible without slowing the processor down so it doesnt fry itself, you get like double the performance but at half the speed, which is pretty useless when you can just add more shitty cores that run at full speed
>>30483 I'm sorry for all the trouble, this guy just tried to copy past my thread from here to here: https://neets.net/threads/why-we-will-have-non-sentient-female-android-robots-in-2032-and-reverse-aging-tech-in-2052-thread-version-1-1.33046/ If you want you can range ban him, i don't really care, i already posted a link to my thread above on one of the threads here. This guy just tried to copy past it here to get me out of another forum.
>>30483 I forgot to say that you shouldn't try to contact me on the forum of that thread because i'm banned from there.
>>30489 *paste*
>>30550 >>30551 >>30552 >>30553 >>30554 >>30555 I guess this is the /clang/ embassy thread now. Cool.
>>30570 >I guess this is the /clang/ embassy thread now. Cool. Did it shut down?
Anyone have the discord server invite link here?
>>30571 I mean, not officially, but for all intents and purposes /clang/ is as dead as it gets. >>30577 I'll see if I can get someone to send an invite.
>>29736 Jerry Pournelle and Larry Niven had a sci-fi world, a universe they wrote about (Known Space, lots of books and stories). To get around the AI problem they postulated that all AIs over a certain intelligence would go mad... are we seeing that????
>>29803 >White supremacists is an insult White supremacists is what the globalhomo calls White people who don't hate themselves and take pride in the very large accomplishments that White people have made. It's simply an attack to shame Whites into not looking after their own interest first before all the other races. Who, BTW, have no problem looking after themselves first.
>>30718 This. Simple as. While I'm perfectly happy to see other men lift their own races up towards the God-given light freely granted to us all through Jesus Christ's sacrifice... given Current Year Globohomo agendas, I'm currently much more concerned about the temporal welfare of, and protection of, my own race. 'IT'S OK TO BE WHITE :^)
>>30718 > supremacist (n.) >"one who believes in the inherent superiority of one race or sex or social group," by 1892, in white supremacist, originally with reference to political campaigns and candidates in the U.S. South (Louisiana), from supremacy + -ist. Compare supremist. Related: Supremacism. >1892 https://www.etymonline.com/word/supremacist
>>30722 Which means "white privilege" is an expression of white supremacy, a contemporary expression of "white man's burden" (the idea that whites must help non-whites develop civilization and prosper).
>>30976 >It's just usual content farming all big YouTubers do. I believe that's just called 'clickbait', isn't it Anon? :^) >I have never in the wild seen anyone care beyond just feeling sorry someone feels that lonely. Then I think it likely you haven't broached this topic clearly, with any women who consider themselves still to have SMV (today that's probably even up to 50yo+ grannies, lol). Or with a hard-core Leftist/Filthy-Commie. They -- all of them -- hate the very idea itself. Most of the ones I've engaged with in any way also threaten physical violence against robowaifus if/when they ever see one. We'll see how that all works out for them. :^) The Filthy Commies go one step further and threaten physical attack against robowaifu owners too since that's how Filthy Commies behave, after all (think: Pantyfags, F*men, etc.) -- under the bribery directives of their Globohomo puppetmasters, ofc. LOL. Heh, we all need to look up Stickman (is he out yet?) and give him a complementary Model A robowaifu! :^) Blacks just destroy things simply b/c they're blacks, by all appearances. They also will be involved with this violence to be directed against robowaifus/owners; but mindlessly, not for the agenda-driven motives of the first two groups mentioned here. Once they see the GH media glorifying physical attacks against robowaifus, I'm sure they'll be all-in with it for a while, too. (And -- like these other Leftists -- they too have their paid rabble-rousers [cf. the paper-hangin', pregnant-woman-abusin', multi-feloner Fentanyl Floyd's overdose-death's -- mostly-peaceful, mind you -- Burn Loot Murder 'honorarium' """protests""", et al]). All this type of clickbait (cf. >>30975, et al) is literally just GH predictive-programming attempting to prepare the masses for violence, come the day. TOP KEK! May the stones that they are preparing to roll down on us, all roll back upon their own heads instead, in Jesus' name!! [1] :DD Make no mistake: this is a broad cultural war already going on within our so-called """society""" of today. Robowaifus will amp that up to 12. I'm sure /cow/ and their ilk will be delighted, once the time is ripe. So get your popcorn ready, kids! :D >t. Noooticer. :^) --- 1. "If you set a trap for others, you will get caught in it yourself. If you roll a boulder down on others, it will crush you instead." https://biblehub.com/proverbs/26-27.htm (NLT) >=== -fmt, prose edit -add scriptural ref/hotlink; for any Anons who didn't get the prayer's reference
Edited last time by Chobitsu on 04/21/2024 (Sun) 15:03:44.
>>30986 This is completely and abundantly true. The programality (program-reality, is this even a word? if not it should be) of it all is baked in. Like those fools that buy pit bulls and tell everyone it's how you raise them. Of course the nice doggies rip the skin off their children's heads.
>>30989 All pit bulls should be destroyed, outright. I love doggos in general (I've had several), but not those demonic little sh*tes. In fact, in some bizarre spiritual sense, they seem almost allegorical in their natures (for demons, ofc) to me.
>>30991 Based
>>26500 the problem is that OpenAI isn't our friend. They're neither open source nor working in small-scale projects.
>>31303 More like ClosedAI Also he’s a jew and chat got is very biased to the left
>>31307 *chat gpt
>>31303 Make a waifu using gpt-4chan lol. How cyberpunk is it that there are now "illegal" ai like that model? lol
yo where can i find a good waifu AI? preferably something that's not paywalled. any links?
>>31303 But they're still the ones at the forefront of AI research. What they do trickles down to the rest of the industry. If we want AI to accelerate, then they're essential. Also, for context in my previous reply, the ones who ousted him are Ilya Sutskever and Helen Toner. They are major decels, the kind that didn't want to open source GPT-2 because they thought it'd be too dangerous. I just want more and more accelerationists in power in the AI sector. But yeah, I don't like OpenAI very much. iirc Sam is lobbying for regulatory capture and banning open source AI.
>>31354 Look around on Reddit and 4chan to see which LLM is currently working best, e.g. on the r/LocalLLaMA subreddit. We need something more complex than that, but it hasn't been built yet: https://alogs.space/robowaifu/res/24783.html
>>31354 working on this dont hold your breath but I am working on it
>>30550 wholesome -ish or something like that stealing a few of these - for reasons
>>31440 The difference is that western media is made for the simpleton retard (average man) While jap media is made for autistic Brown retard virgins
>>31444 The difference is that robogirl western media shit is made for NPC braindead normie goy cattle and eastern robowaifu media is made for sophisticated gentlemen that don’t appreciate ugliness. Literally the majority of the western robogirl media is anti-waifu propaganda creepy shit, in the trash it goes!
>>31444 >name Lol >>31449 This
>>31397 Neat. While it's clearly good ITT, I'm going to relo this link to our on-topic thread before long -- where I think it may help here better, Anon. Cheers. :^)
>>31440 Lol, I can actually see some good elements in some of these Western shows and movies, and they can still be entertaining. M3gan was quite a nice mother-like companion for the child, until her flaws came up. It's not that everything about these franchises is bad, but the twist towards the negative will make some simple-minded people scared of AI and robots.
>>31444
>DarkVersionOfNoidoDev
Hmm, okay. Not really.
Sorry for putting this in the news thread but no image-posting-capable Anon has stepped up with /meta-10 yet, as hoped for. Is Baste China the hero we all need? https://www.youtube.com/watch?v=zssbA_jGB8s --- Heh, you know listening to both Elon's & the CCP's comments in this video makes me a little suspicious they both have assistants who have discovered (and likely now-monitor) /robowaifu/. :^)
>>31771 4K 2024 Tesla shareholder meeting : https://www.youtube.com/watch?v=remZ1KMR_Z4 (smol'r-sized backup :) https://www.youtube.com/watch?v=UQhnxPu67G4 (Elon begins ~43mins in; Optimus initial stuff about 10mins later, and the bulk at ~1h:27m) --- Tesla Optimus at US$10K eventually? https://www.youtube.com/watch?v=djvPSd1d6E4 --- www.nbcnewyork.com/news/business/money-report/elon-musk-claims-optimus-robots-could-make-tesla-a-25-trillion-company-more-than-half-the-value-of-the-sp-500-today/5505911/ >=== -add'l links, edit
Edited last time by Chobitsu on 06/25/2024 (Tue) 07:52:42.
>>31771
>Is Baste China the hero we all need?
You should be able to answer that yourself: these bots are not very feminine, and they're optimized for work. But thanks for the news update.
Conducted highlight sequences [1] of NVIDIA's Jensen Huang's recent (~3+wks ago) presentation at Computex 2024 in Taipei, Taiwan. [2] >protip: The fallout from this presentation recently/briefly made NVIDIA the highest-valuation company in the world. So yeah, big deal... including for robowaifuists! :^) >question: How can we here take advantage of these ideas, w/o getting overly-entangled with the NVIDIA ecosystem itself? --- 1. https://www.youtube.com/watch?v=IurALhiB6Ko 2. https://www.youtube.com/watch?v=pKXDVsWZmUU >=== -add'l cmnt
Edited last time by Chobitsu on 06/27/2024 (Thu) 15:07:02.
>>31772
>optimus-robots-could-make-tesla-a-25-trillion-company
I think this is possible. I'm surprised no one sees the real, stupendous cash flow monster: nursing home care. Why people are not raving about the possibilities, I have no clue. Here are some numbers to work with, if a little rough.
"...About 1,290,000 Americans currently reside in nursing homes, according to the 2020 U.S. Census. That number is expected to nearly double by 2050. Over 15,600 nursing home facilities are in operation, 69% of which are for-profit. The average monthly cost of nursing home care in 2021 was $8,910 per month..."
https://www.aplaceformom.com/senior-living-data/articles/nursing-home-statistics
$11,493,900,000 a month, $65,410,740,000 a year, $106,920 per person a year.
"...By 2060, 155 million Europeans — 30% of the total population — will be aged 65 or older..."
"...Persons 65 or older REQUIRING ASSISTANCE WITH ADLS 44.4M...double today" [Activities of Daily Living (ADLs)] So 22.2 million currently.
https://globalcoalitiononaging.com/wp-content/uploads/2018/06/RHBC_Report_DIGITAL.pdf?ref=hir.harvard.edu
The link above suggests assisted home care from family to lower costs. I strongly suspect that a Tesla robot combined with access to a Tesla taxi service could dramatically cut costs AND make Tesla a huge whopping pile of money. Most of what the robot would have to do is help move people around, wash them, and help them to the toilet. With an internet link to larger AIs, I don't think it would be impossible for it to cook, either.
The cost to own a 2022 Model 3 Sedan Long Range 4dr Sedan AWD is $8,451 per year https://www.edmunds.com/tesla/model-3/2022/cost-to-own/ I don't think it would be a stretch to say you could make and keep a Tesla robot for twice that, $16,902. The real cost would be much, much lower because it requires so much less in material cost. $25,353 for a Tesla robot and a Tesla Model 3 (yes, it's more likely they will use distributed taxis, but just to get a number). Double that, $50,706, or triple, $76,059, and you still come way under the cost of nursing homes. The robot could be far more attentive and provide someone (or something) to talk to, and the taxi service could shuttle the elderly all over to make their quality of life much better.
At double, a profit of $37,865,370,000 a year just for the US. Add an equal number in Europe and you get $75,730,740,000. And this is all profit. The numbers would actually be much higher as I'm using full retail price. So the government saves a vast amount of money, and people get individualized care and are allowed to stay in their homes.
And the number I quoted for a Tesla robot is tremendously inflated. I think I read the present processor in a Model 3 is $35. Let's say you add ten of these for more power: $350. Maybe $600 for all wiring, other semiconductors and power supply. 2,500 Wh per day for batteries; at $132/kWh we have $330. Maybe $120 of plastic and aluminum. Comes to $1,400. You could surely build one for even less than this.
I wonder if this is not Musk's long term plan. He never talked about Starlink, it was always Mars, Mars, Mars, but as soon as he had the capability, he went full throttle on Starlink. I think his robot plan is much the same. As soon as he saw he was close to the software stack and manufacturing capability needed, it's now all robot, robot. The only question is why governments are not pouring tens or even hundreds of billions into finding ways to make this happen.
How I calculated the battery needs:
Likely inflated, but it should withstand a worst-case scenario. Maybe not perfect, but something to work with.
"...Normal human metabolism produces heat at a basal metabolic rate of around 80 watts..." (Note: heat, not work.)
"...Over an 8-hour work shift, an average, healthy, well-fed and motivated manual laborer may sustain an output of around 75 watts of power...."
"...During a bicycle race, an elite cyclist can produce close to 400 watts of mechanical power over an hour and in short bursts over double that—1000 to 1100 watts.... An adult of good fitness is more likely to average between 50 and 150 watts for an hour of vigorous exercise. Athlete human performance peak power, but only for seconds, 2,000 Watts..."
For reference, a good horse working at a steady rate puts out 746 watts.
Let's say the robot needs 400 watts for 2 hours a day, then normal moving about at 100 watts an hour for 17 hours, with 7 hours on the charger at zero watts. We need 2 h x 400 W + 17 h x 100 W = 800 Wh + 1,700 Wh = 2,500 Wh per day.
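If anyone wants to fiddle with the assumptions, here's the same estimate as a few lines of throwaway C++ (the wattage figures are just the guesses from above, not measurements of any real robot):

#include <cstdio>

int main() {
    // assumed duty cycle: 2 h of heavy effort, 17 h of light ambling,
    // and 7 h on the charger drawing nothing (so those hours drop out)
    double peak_w = 400.0, peak_hours = 2.0;
    double idle_w = 100.0, idle_hours = 17.0;
    double wh_per_day = peak_w * peak_hours + idle_w * idle_hours;
    double battery_cost = (wh_per_day / 1000.0) * 132.0;   // at $132 per kWh of cells
    std::printf("%.0f Wh/day, ~$%.0f of battery\n", wh_per_day, battery_cost);
    return 0;
}

Which prints the 2,500 Wh/day and ~$330 battery figures used in the post above.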
>>31820 >I'm surprised no one sees the real, stupendous cash-flow monster: nursing home care. They might have this on their minds, but replacing lower-qualified workers in factories with flexible human-like robots will be easier and is also a very big market.
Open file (388.38 KB 500x275 patrick as a teslabot.gif)
>>31820 The problem is that promises about AI in the future are likely to be overstated, like with any technology. It isn't that big a deal if your LLM gives you an incorrect answer, Stable Diffusion gives you a lousy piece of art, or Suno.ai doesn't give you the song you want. However, if a robot accidentally rips off a 90-year-old's weener while scrubbing him, that will be a big problem. I mean, I'd love it if the Asimovian dream of robots became a reality. But science fiction isn't science fact, and I've found the best policy is to hope for the best and plan for the worst.
>>31783 Heh. Little by little, Anon. Little by little. :^) >>31820 >I think this is possible. I do too. I'll go further and say that, all else being equal, it's inevitable. I believe that within a quarter century, even just the OG gang (and any productive newcomers here) will represent the better part of a trillion-dollar industry all by ourselves. >>31889 >However, if a robot accidentally rips off a 90-year-old's weener while scrubbing him, that will be a big problem. Kek. Very true Anon! :DD >tl;dr I think most ppl don't understand just how uncommon so-called 'commonsense' really is! Cheers. :^)
Not sure where to put this, but it looks somewhat interesting. It seems to have all the good specs software-wise but doesn't seem to walk very well, from what I saw. https://hackaday.io/project/196759-mini-open-source-ros-high-performance-robot
>>31995 Thanks a lot, Grommet. They have some very nice-looking, compact & (relatively) low-mass actuators -- apparently of their own devising. https://www.youtube.com/@HighTorqueRobotics
I don't really know much about AI, or even programming for that matter, but I've been doing some reading and I've come across a question about AI that confuses me. How can an AI have delayed gratification? If there is a significant delay between an action that it performs & a reward or punishment, then how can it know the action it took led to that outcome and not confuse it for something more immediate?
>>32131 Well, it is one of the biggest problems in reinforcement learning. Researchers have to balance it out, and the balance is specific to the problem. You can't go too far in either direction, immediate rewards or delayed rewards. Generally, the AI can make more meaningful connections with more training and data, and it usually can slowly adjust the rate of its rewards, from immediate to future, over a long period of training.
>>32131 By reasoning about which action led to that reward. If you're not using a language model, though, you're screwed, because there's no way to assign credit. It's a difficult skill that requires leveraging prior knowledge of the environment being acted in and testing hypotheses. Even people suck at credit assignment.
>>32131 I'm an AI researcher; the way this works is pretty subtle. I can describe the fundamental idea behind policy gradient methods (stuff like PPO is a refinement on top of this basic idea). During training, the agent has to experience multiple episodes in an environment, where each episode has a beginning and an end. The agent follows a policy, telling it which action to take at each state along an episode. If an episode ends with a high total reward, _all_ the actions along the path are given equal credit and are updated to be more probable in the policy. However, we know in reality that some of these actions don't deserve this credit. This is corrected for in other episodes, where sometimes the same unimportant action at the same state leads to a low total reward at the end of the episode, causing the probability of the action to decrease. Policy gradient methods have a strict requirement on how you sample the experiences, to ensure that these updates balance correctly, that the expected value of the credit given to the actions equals the true advantage of the actions, and that the learning converges to the optimal policy. So for example, if you do RLHF, you cannot sample the LLM responses with custom temperature, top-P, top-K, etc.
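To put a formula to it, this is the textbook 'vanilla' policy-gradient (REINFORCE) update that PPO and friends refine (standard form, nothing implementation-specific):

\[ \nabla_\theta J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\, \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G(\tau) \right], \qquad G(\tau) = \sum_{t=0}^{T} r_t \]

Every action in the episode gets weighted by the same total return G(τ), which is exactly the "equal credit" behavior described above; refinements like advantage estimation just swap G(τ) for a lower-variance signal.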
>>32147 What I'm trying to figure out is how to do it without any prior knowledge of the environment. Will spurious reinforcements just be canceled out over time? >>32152 Working in episodes like that might work great for something like an LLM, but I was thinking about a more general AI for a robowaifu. Maybe every day could be a new episode, but that might be a stretch.
>>32173 Indeed, the same idea can be extended even to environments that don't have multiple independent episodes, basically where the agent only experiences one very long "episode". However, this requires two additions. First, you need a discount factor g, a number between 0 and 1, strictly less than 1. This is used to make the agent prioritize near-term rewards more than long-term rewards; for example, a reward that is 100 actions away is discounted by a factor of g^100. You cannot handle the "one very long episode" case without some kind of time horizon, and the discount factor effectively creates a kind of soft time horizon. Second, you have to bootstrap off a learnable value function estimate. The value function equals the expected value of the total reward the agent gets when starting from a state and using its policy until the end of the episode. When there is only "one very long episode", this needs to be the infinite sum over all future actions, which is still a finite number thanks to the discount factor g. You can then cut the "one very long episode" into spans of arbitrary length. For each span, you assume the total reward after the cut will equal the value function estimate, and you can still train the agent with policy gradient. At first, the value function estimate is randomly initialized and totally inaccurate, so you simultaneously need to train it to more closely resemble the true value function. This training is also done with the cut-up spans: you again apply the value function estimate at the end of the span, and bootstrap off it to compute more accurate (but still biased) estimates of the value function for the other steps in the span. With a discount factor g strictly less than 1, the training eventually makes the estimate converge to the true value. And when the value function is estimated accurately, the policy converges to the optimal actions.
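Written out (standard definitions, using the same g for the discount factor as above), the discounted return and value function are

\[ G_t \;=\; \sum_{k=0}^{\infty} g^{k}\, r_{t+k}, \qquad V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\, G_t \mid s_t = s \,\right] \]

and when the long episode is cut into spans of length n, the bootstrapped training target for a step becomes

\[ \hat{G}_t \;=\; r_t + g\, r_{t+1} + \dots + g^{\,n-1} r_{t+n-1} + g^{\,n}\, \hat{V}(s_{t+n}) \]

i.e. everything past the cut is replaced by the current value estimate, which is the bootstrapping step described above.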
>>32195 That is amazingly-interesting, Anon. It's also very encouraging to know that this approach can serve in a less-preplanned, more-adhoc environment. Any estimate about the hardware compute costs involved? POTD
>>32197 Yeah, this would be an example of a model-free RL algorithm. It only needs to know what state the agent is in, which action is taken, and what the reward for the action is. My impression is that for RL used in robotics, the dominant cost is the simulation of the environment (or worse, the data collection from running the agent in the real world), not the calculations or updates from the loss function. Running these simulations can be pretty similar to running high-graphics-quality 3D games. For RLHF with LLMs you have a reward model that's a neural net, so you need big enough GPUs to fit the models. But regardless, you want an algorithm that converges in as few steps as possible. With model-free RL, the training speed is mostly limited by how quickly you can gain information about the environment. You want a policy that explores interesting actions more often than uninteresting actions during training. You cannot optimize prematurely based on too little information, or the policy will be stuck in a local optimum. This is also why LLMs need supervised fine-tuning before RLHF, and why some RL experiments add some kind of curiosity to the agent. If you have control over the reward definition, you also want to shape the reward so that partial progress is rewarded too, even if the agent fails to achieve the final goal. See Nvidia's Eureka paper; they used LLMs to write reward functions to do this.
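A standard way to reward partial progress without changing which policy is optimal is potential-based shaping (this is the textbook trick, not what Eureka specifically does; Eureka has an LLM write the reward code): define a potential Φ(s) that scores how close the state is to the goal, and add

\[ F(s, s') \;=\; g\, \Phi(s') \;-\; \Phi(s) \]

to the environment reward at each step. With the same discount factor g as before, the shaped and unshaped problems share the same optimal policies, so you get a denser learning signal for free.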
>>32201 Great! That's about what I expected regarding costs. We still need to try finding a solution for Robowaifu@home for training needs, heh. I've grabbed the paper and I'll make time to go over it soon. Thanks, Anon! Cheers. :^)
>>32201 >You want a policy that explores interesting actions more often than uninteresting actions during training. You cannot optimize prematurely based on too little information, or the policy will be stuck in a local optimum. I've mentioned before in other threads that the AI I'm looking to make will have "needs" to motivate her actions. Most will be essential things like changing the battery, but otherwise her needs are to cater to my needs. One of the needs is "boredom", which will punish her for being idle too long or for too much repetition. It might be worth it for her to do things in a less-than-optimal way if it means potentially discovering a more efficient way of doing something.
Existing models can be finetuned to use only 0.3% of parameters during inference using binary trees and run 78x faster on CPU https://arxiv.org/abs/2311.10770 https://github.com/pbelcak/UltraFastBERT https://arxiv.org/abs/2308.14711 (Fast Feedforward Networks) https://arxiv.org/abs/2405.16836 (Improvement) This weekend I'm gonna start implementing Qwen2 in pure C with this and see if it's faster for language generation
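For anyone wondering how the binary tree avoids touching most of the parameters, here's my rough sketch of the hard-routed inference pass as I understand it from the FFF/UltraFastBERT papers (this is not their actual code -- the struct layout and names are all made up, and the leaf is the width-1 case UltraFastBERT uses):

#include <vector>
#include <cstddef>

struct FFFLayer {
    int depth;                                   // tree depth d; the tree has 2^d leaf neurons
    std::vector<std::vector<float>> node_w;      // routing weights, one per internal node (heap order)
    std::vector<float> node_b;                   // routing biases
    std::vector<std::vector<float>> leaf_w_in;   // input weights of each leaf neuron
    std::vector<float> leaf_b;                   // leaf biases
    std::vector<std::vector<float>> leaf_w_out;  // output weights of each leaf neuron
};

static float dot(const std::vector<float>& a, const std::vector<float>& b) {
    float s = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Hard-routed inference: O(depth) node evaluations instead of touching all 2^depth neurons.
std::vector<float> fff_infer(const FFFLayer& L, const std::vector<float>& x, std::size_t d_out) {
    int node = 0;  // start at the root; children of node i live at 2i+1 and 2i+2
    for (int level = 0; level < L.depth; ++level) {
        float score = dot(L.node_w[node], x) + L.node_b[node];
        node = 2 * node + (score > 0.f ? 2 : 1);  // sign of the routing neuron picks the child
    }
    int leaf = node - ((1 << L.depth) - 1);       // map heap index to leaf index
    float a = dot(L.leaf_w_in[leaf], x) + L.leaf_b[leaf];
    if (a < 0.f) a = 0.f;                         // ReLU here for simplicity (the papers use GELU)
    std::vector<float> y(d_out, 0.f);
    for (std::size_t i = 0; i < d_out; ++i) y[i] = a * L.leaf_w_out[leaf][i];
    return y;
}

So a layer with 2^depth leaf neurons only ever evaluates `depth` routing dot products plus one leaf neuron per token, which is where the claimed tiny fraction of active parameters comes from.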
>>32462 Wow! That sounds amazing, Anon. The CPU thing is especially interesting. >This weekend I'm gonna start implementing Qwen2 in pure C with this and see if it's faster for language generation Excellent! BTW C++ binary sizes are basically comparable to C's if you avoid using iostreams (i18n concerns add about 1MB to such compiled code), just in case smol size was your primary reason for C. Good luck Anon, please keep us here up to date with your progress. Cheers. :^)
Open file (566.89 KB 1147x1029 IMG_9011.PNG)
>>32483 Will she vibrate for me?
>>32508 >tfw you will have a girl who will vibrate for you <hi, med, lo TWAGMI
About the I2P encrypted network. My earlier prediction >>24776:
>I expect they will wait a while then start changing things to pozz it and screw up the network by forking it somehow and adding spyware.
The latest update of I2P says "...Legacy transport protocols are being removed..."
At >>24775 and the link above the quote, I talk about the I2P encrypted internet network (a most excellent network) and how there is a possibility, I say a definite one, that it has been taken over by the dark forces. I predicted they would eventually start hiving off the old communication protocols so that they could choke off the old, un-pozzed stuff and eventually redirect it to whatever nefarious ends they have planned. I suspect this is the beginning of that. Of course I can not prove any of this, but if you read the story in the links above you see that the whole situation around Java I2P stinks. I DO NOT say the same about the C++ version of I2P (i2pd). Simple question to ask: why kill off old communication protocols? Shortly before the owner and main programmer of I2P JAVA disappeared, and his forum disappeared with him, he made MAJOR, SUPER upgrades to the algorithms that vastly improved the speed. Are these affected now????? Don't know. But DO NOT get the idea that I2P is bad overall. Just the JAVA branch.
I was looking at some AI stuff, and while LLMs are the predominant method in AI research, they're not the only one. I found this company called aigo. They are doing work, they say, that is very different. It's based, they say, on working with the basic fundamentals of intelligence from the ground up to make AIs that can reason and not just regurgitate facts like LLMs (powerful as that has proven to be). What really grabbed me was the statement, "... deeply integrates all the cognitive mechanisms required by human-level/ human-like intelligence while consuming 20 watts of power that our brains need rather than the massive power requirement for training and operating LLMs..." 20 watts!!! He apparently is using this now for call centers to replace humans. I don't know if it works, but it is interesting. LLMs do have problems. No doubt they could be good enough, but something like the above, which is dedicated to narrower problems and can be trained at low power, could be revolutionary if it works. It would be great if someone could make an open-source version. https://aigo.ai/our-story/ https://aigo.ai/llms-are-not-the-path-to-agi/ It uses something called “Integrated Neuro-Symbolic Architecture (INSA)” https://en.wikipedia.org/wiki/Neuro-symbolic_AI
>>32577 >"This resulted in developing our “Integrated Neuro-Symbolic Architecture (INSA)” that combines the accuracy and expressive power of symbolic systems together with the powerful non-brittle pattern matching ability of neural networks, deeply integrates all the cognitive mechanisms required by human-level/ human-like intelligence while consuming 20 watts of power that our brains need rather than the massive power requirement for training and operating LLMs. LLMs are not the path for AGI." This is likely written by an ESL individual, AFAICT. The best estimates I know of from scientists on the power consumption of the human brain are ~12 Watts RMS, continuous. Which is absolutely astounding! If this company is really pulling off something feasible in the robowaifu space at just 20 Watts, then that will be an absolute breakthrough. Thanks, Grommet! Cheers. :^)
ESL individual???? I looked around some, and there are papers on this; it's not a one-off promo thing. The guy who did it made money on one software startup and spent five years studying this. I expect he's a small shop with a genius at the top. He specifically mentions one of his goals as "caretaker" software. Ding, ding, ding!
>>32586 Lol. I simply meant his sentence is confusing as to the subject. I'm not impugning the product otherwise. :^) >Although, the WE HOLD TEH KEYS TO ALL AGI!111 does seem a bit suss, but that's just me. :D
>>32588 Yeah, it does seem a little too good to be true but...you never know... For the hell of it, a little "speculative" math on brains. I found a count of 85 billion neurons in a human brain. Let's say each one has a property value of 16 bits. That's 1.36 terabits, or 170GB, which in the broad scheme of things is nothing spectacular. You can get motherboards that hold 128GB of RAM, and the RAM is $250 USD or so. That's damn close, and a multi-terabyte SSD has enough speed to shuttle in more specialized "scenario" programming at 2GB per second or faster. An ESP32 microcontroller has 600 DMIPS. So let's say you want to hit every neuron every second with a compare, and we'll use half speed, 300 MIPS. You'd need 283 ESP32s to do so, and a basic desktop microprocessor is way more powerful. So just looking at a little math, it doesn't seem completely out of the range of possibilities. Of course I'm making huge assumptions that are likely wrong, but animals with very little in the way of brain power do really interesting things. I saw a value of 100,000 MIPS for a mouse? It's not out of the question we could get a robowaifu that could walk around and say, "oh baby do that some more" with a powerful desktop processor and a good handful of microcontrollers. A large part of the locomotion could be computed by microcontrollers that also drive the motion. According to Jim Keller, and he should know, it takes very little to compute distances and object recognition. I'm talking about not running into things, not object characterization. Higher-level thinking could be on a fast SSD, which is really fast compared to the thought patterns of humans. And most of the time it wouldn't need so much power. I bet a large amount of things could be done with 8-bit thinking for rough approximation, then use more bits in tricky situations.
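Quick sanity check on those numbers, just the arithmetic from the post above (the 16-bits-per-neuron and 300 MIPS figures are the assumptions, nothing more):

#include <cstdio>

int main() {
    double neurons = 85e9;                  // rough human neuron count
    double bits_per_neuron = 16.0;          // assumed 16-bit property value per neuron
    double storage_gb = neurons * bits_per_neuron / 8.0 / 1e9;   // bits -> bytes -> GB
    double compares_per_sec = 300e6;        // one ESP32 at an assumed 300 MIPS (half its rated 600 DMIPS)
    double chips = neurons / compares_per_sec;  // chips needed to touch every neuron once per second
    std::printf("storage: %.0f GB, ESP32s needed: %.0f\n", storage_gb, chips);
    return 0;
}

That prints roughly 170 GB and ~283 chips, matching the figures above.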
>>32606 >I bet a large amount of things could be done with 8-bit thinking for rough approximation, then use more bits in tricky situations. I bet you're right, Grommet! As I think I understand you to be saying already, human neuro-physiology already approximates this type of optimization. I would add: because it's designed that way! :) I certainly think your idea is a good one, and engineers already use this type of iterative-refinement processing as an optimization technique in several different domains today. Vidya 'LOD' and cellphone DSP are a couple that come easily to mind. I think there are many areas of a robowaifu's 'mind & body' that we can design to take advantage of an embarrassment of compute hardware riches onboard our robowaifus to optimize her behaviors. Cheers. :^)
Open file (107.18 KB 960x720 o7qeie8z30x11.jpg)
>>32577 >>32588 I am pretty skeptical of Aigo, largely stemming from my complete ignorance of how "conceptual" AI is supposed to work compared to statistical AI. But assuming it can lead to AGI with roughly human intelligence using 20 Watts (I'll be generous and say 60 Watts) in a laptop-sized computer, that would have an interesting effect on the waifu design I've been planning. I always assumed the "brains" would be too big to fit in the head, so I'd need a big server rack for the computer and an encrypted wireless signal for it to control the body. This would limit her to being on my property, but I don't have a problem with that. If a good-enough brain could fit in the head, then I don't have to worry about a wireless signal to the main computer being intercepted (or just a lost signal) causing problems. She can still regularly plug into a larger computer, uploading her memories of the day and downloading whatever updates the bigger PC has for her, while charging the battery. But if my waifu goes missing, that's a massive security risk I didn't have before; with the old design, if the body went missing I could disable the transceiver connected to the server rack until a new body was made, and that solved the problem. Now I'm starting to wonder: if a single body could be made cheaply, is there any reason I should limit myself to just one?
Odd question, but if I had electrodes on each hand, how much current could I be shocked with without risk of shocking my heart? I've been reading a lot of stuff about ECGs recently, but things get to be very vague about the voltage and amperage involved and I'm very paranoid about Macroshock.
>>32688 Use a VPN and the robowaifu can go anywhere, away from your main "brain" server.
>>32688 If this "20 watt" AI really is immediately possible, I think it'd still be better to have the primary intelligence in a server. Even if the power load of computing is solved, there's still the load of everything else to worry about (most notably the motors). With that, the only "good" option for mobile powering remains with lithium batteries, which are still very expensive, dangerous, and most importantly, heavy (i.e. more power needed for motors). In essense, it doesn't make much practical sense to rely on modern batteries for waifu powering. Wrapping this back to your final argument: for a multi-waifu home, using tethered power is the most cost-efficient powering solution. If you're tethering power, then why not computing as well? It's only one extra cable, with ethernet. To put this in practical terms: Let's say you have 10 waifus, each with their own unique personality. Wouldn't it be more simple and cost-effective to run them all from a single >200 watt computer, rather than ten >20 watt computers? Finally, there's the likely issue that the 20 watt AI (again, if it exists) is just for a personality. You still need tons of other software which require jobtime and processing power in addition to the actual physical components. It all adds up very quickly. I really don't want to sound like a pessimist here. If personality AI really does achieve that kind of breakthrough, then that's massive. Ten thousand times moreso if it's open-source. But, it's also only one part of the equation. Some can subsist off of a good personality AI alone, I'm sure, but I think I can speak for most jaded young men when I say that I need a "complete" experience, which is currently best achieved with compromises like tethered power and compute. That all being said, I'd love nothing more than to be proven wrong. If there really is a perfect solution here, then I want all of us to have it.
>>32688 >is there any reason I should limit myself to just one? <insert pic: Anon's harem of catgrill waifus Too much, Anon, is never enough! :DD >>32696 POTD
>>32688 >I am pretty skeptical of Aigo As you should be. It does sound a bit miraculous, but I mentioned it because things like honey bees, mice, etc. have a fairly large range of stuff they can do with limited compute. I looked around and found (I'm guessing, since I didn't write it down) something like 200,000 MIPS for an AMD Phenom II, which is an older processor that I had. I remember it was more than a mouse, for which I had a figure from "somewhere" of 100,000 MIPS. New desktop processors are far above this now. This makes me think that it is possible to have a "limited" waifu. And with the speed of SSDs and the vast storage of spinning HDs, you could possibly have it slow down to load in various scenarios, but most of the time I suspect the processing power of a mouse would be fine around the house and for following you around. We know a desktop can do voice-to-text, and if you trained it to do certain things by voice, I can't see why this wouldn't be a possibility. People talk about the processing power of humans, but neurons are really slow. And not all of them are really working all the time, I don't think, though I don't know. So a really fast processor could do a lot of calculations "simulating" extra neurons. Remember, I'm not talking about anything creative at all. However, the programming to do this is a really hard problem. I expect first we will have something that can stay in the house and do real simple stuff, and maybe follow you around and respond to "stop here", "move there", simple stuff. I also expect that it could babble like a woman without too much trouble, as most of them love to talk constantly.
>>32689 >how much current could I be shocked with without risk of shocking my heart? The current through your heart needed to stop it is VERY LOW. "...At currents as low as 60 to 100 milliamperes, low-voltage (110-220 volts), 60-hertz alternating current traveling through the chest for a split second can cause life-threatening irregular heart rhythms..." I think, but am not sure, that below 48 volts is fairly safe, but I wouldn't count on it. 12 volts is fine. The KEY is: only work with one hand. If you use two hands, have an insulated tool in one of them. People get killed constantly by 120V. You could get a set of thick dishwashing gloves and use those, but you would have to check them with a meter to make sure they are insulating. They do make high-voltage gloves, but they are so stiff they're nearly impossible to work with. I've worked on a LOT of high-voltage 277-480V etc. stuff and have never been seriously shocked, but I have been very careful. Always have yourself firmly planted. Only use one hand. Try to never rest on any sort of metal or concrete to steady yourself. Wear boots with thick rubber soles. There was this guy who had this big-ass belt buckle, like a decorative big oval thing. He needed to get in this big walk-in switchgear. No one would go in hot, but he said he would. He got too close to 277-480V and it shorted across that big-ass belt buckle and blew out the bottom part of his intestines. So he has to shit in a bag now. Lucky it didn't kill him. The best and most important advice I can give is what I said before: only one hand, unless the tool in the other is insulated. Don't lean on anything, and be balanced as you are working. Don't wear chains, rings, or metal stuff that can flop around. Wear thick-soled boots, though tennis shoes could be fine. Now I may be overdoing it a little, but only a little. I've never seen anyone killed, but I have heard of a bunch being killed by 120V.
>>32688 >is there any reason I should limit myself to just one? But, but, my fan girl to fan me when I sit in the easy chair is VITAL. And the beer opener, how could I do without her??? How will I do without the pillow fluffer! The horror...
>>32699 I think you're on the right track, Grommet. >However, the programming to do this is a really hard problem. HAH!! Understatement of the year. :^) In fact, the sci-fi dream may be out of reach via a straightforward, traditional approach to development. OTOH as you clearly state, a limited version of a waifu's 'mind' may in fact be doable here-and-now. We can surmise that there are some foundational keys to software success with such a complex, (fundamentally) heavyweight computation problemspace: * It must be fast * It must be power-efficient at (our robowaifu's) runtime * It must be devised in a way that is cognitively-approachable for talented developer men. I don't think any regulars here will be surprised by my suggested approach to solve all these simultaneously LOL :D . And just-coincidentally, most of the puzzle pieces are falling into place just at our time of need : (>>32610, >>32632) . What a time to be alive! :^) TWAGMI
Open file (132.76 KB 800x1272 R.jpeg)
>>32694 No, I'd rather she didn't have any kind of internet connectivity or phone service. Using a short-range, directional ISM transceiver was a compromise so she wouldn't be dragging a cable around everywhere she goes. The only other "brain" I intended for that design to have in the body would be just enough to help her wander back to the server room if the signal gets lost. But I also assumed I'd need a server rack at least as big as a full-sized refrigerator if I wanted a human-level intelligence waifu within the next 20 years. >>32696 >Even if the power load of computing is solved, there's still the load of everything else to worry about (most notably the motors). I'm not really worried about those for reasons I can't really get into right now. I don't want to over-promise anything, and I can't make any progress until I get help finding the right materials, but it should solve multiple problems at once. >Finally, there's the likely issue that the 20 watt AI (again, if it exists) is just for a personality. And I figured the same, thinking a second brain entirely for coordinating the body might be a good idea, and from there I rounded up from 40 to 60 Watts just for the sake of it, although perhaps I should have said 80. >To put this in practical terms: Let's say you have 10 waifus, each with their own unique personality. I was actually thinking of them all having the same personality. I mean, it would resolve some of the problems of a tethered design if there's a different body in each room, but the actual reason I thought of them all having the same personality is that making several identical bodies would be easier. And I figure they'd develop personalities as an emergent behavior, but if they share enough information with each other, and all look alike, and are all treated basically the same, they'd all end up with the same personality even if they start differently, like A/B testing. But 10 waifus means they could learn something 10 times as fast if they all practiced it at the same time, then shared the data. >Wouldn't it be simpler and more cost-effective to run them all from a single >200 watt computer, rather than ten >20 watt computers? Perhaps, but there's also safety in redundancy. If one of them crashes, the others could diagnose & fix her, but if the main server goes down, then they'd all go down. >>32699 >>32701 Because of stupid internet problems I wrote most of this post before you replied, but couldn't post until now. And while Greentext anon said 10, I was thinking 3 at most. An entire spare body seemed like a good idea, but it also seemed like a waste to have it just sitting in a closet gathering dust when she could be multitasking.
Daily Reminder: It's SkyKing Remembrance Day Today. <insert: barrel (drum) -roll tribute, w/ falsetto orca song accompaniment> F ;~;7 --- Could we have saved him, /robowaifu/ ? What I mean is, if Rich had a loving robowaifu to go home to, would he have had sufficiently-less 'screws loose' to still be with us today? >think K+Joi, BR2049... -Also, I guess this can be our /meta thread until someone steps up to make one. Cheers, Anons. :^) >=== -add 'screws loose' footnote -fmt, minor edit
Edited last time by Chobitsu on 08/10/2024 (Sat) 17:11:36.
>>32575 >I2P Thanks, I was using this in the past but not for some time. Good to know.
> The next generation of dolls, powered by AI models and equipped with sensors, can react with both movements and speech, significantly enhancing user experience. ... > Once the test is passed and successfully used in production, we will launch it out ASAP to allow more users to improve their experience and get a deeper sense of companionship. > In addition to sex bots, Starpery Technology plans to develop robots capable of performing household tasks, helping people with disabilities, and providing care for the elderly. By 2025, the company wants to release its first “smart service robot” capable of providing more complex services to people with disabilities, reported SCMP. > According to Starpery Technology, by 2030, these robots will be able to take over some of the work that is dangerous for humans. https://interestingengineering.com/innovation/ai-sexdoll-china-starpery-technology
>>32743 >and get a deeper sense of companionship. <"AND IF YOU ACT WITHIN THE NEXT FIVE MINUTES, WE'LL THROW IN THIS DELUXE SET OF GINSU KNIVES!111" :^) Advertising always seems to be a 'carrot and stick' proposition, AFAICT. Though I trust the Chinese govt. no more than I do the Globohomo, at least they aren't (currently) chock full of trannies that will pass accusations of your pronoun-wrongthink on to their thug buddies at The Ministry of Love to come kick your door in and drag you away for good. >tl;dr We still need open-sauce robowaifus (both hardware & software), Anon. But for now, baste China's will do! Cheers. :^)
>>32748 <that name LOL, but I kinda like it, actually. :D
>>30478 >>30479 What happened here? Database error?! Looks like I'm schizo talking to myself.
>>32757 Lol, I have no idea tbh desu in fact of course. :D Do you remember reply'g to yourself the one time the trip is your old one there? The other one doesn't have your trip and so could be a le ebin sh*teposter? Anyway, I'll rm everything there if you'd like, or edit it however you want *, NoidoDev. --- > * note: I can't change the name itself though, just the subject and/or message >=== -add footnote
Edited last time by Chobitsu on 08/12/2024 (Mon) 00:34:30.
>>32763 I'm pretty sure this happened when somebody deleted some user and his postings. "My" second posting is in contradiction with the first and third one. The second should be me quoting somebody, but that posting doesn't exist anymore, and the third is my answer. For all I care, just delete the last two. I definitely wouldn't write the second one, not even when I'm drunk (which I haven't been for a very long time). I want to be alive till the heat death of the universe, or until we find a solution for that and exist forever.
Open file (134.14 KB 600x583 2513083.png)
>>32715 I made it, since it seemed like no one else would. I really just copy-pasted the old meta thread, then modified it wherever I thought appropriate. Here are the modifications I made (not including formatting), in the order I made them, and my justifications for doing so: - Removed the survey link It's quite old, pointing to a thread that hasn't been touched in ages. At this point, I think it'd be better to make a new survey. ~ Replaced the "why we don't use Discord" link ( >>17397 ) with ( >>31158 ) I think it does a better job explaining the situation, touching on both logistics and ideology. + Added the Lurk Less ( >>20037 ) thread. ? Special note: A cursory search didn't reveal any new BUMP releases (and I haven't been following it very closely), so if there's anything to update there, it'll need to be edited in. ~ Changed the embedded picture theming from scenery to robowaifus Topic relevance and eyecandy. ~ Third pic is now a robopone. This is what happens when you leave a job sitting for so long that I end up doing it. + I learned that I can just drag and drop images from an image browsing program to upload them. Just in case any of you didn't know, since I thought I had to use that feature from the file browser. It's very convenient. I don't know if it's sad or amazing that I'm still learning new things about IBs despite using them for over a decade. ( >>32767 )
>>32768 >New Meta Yeah, looks nice. Thanks. I didn't know it was bump-locked. >drag and drop images from an image browsing program to upload them Yeah, this isn't just the IB; browsers have been able to do that for a while now. However, if the site doesn't catch the image correctly, then the browser will load the image itself, which can mean you need to use the "back" button. Here on the board your text would still be there, but on other sites your writing might be gone. For example, Hugging Face spaces would load fresh and your prompt would be gone.
>>32768 Thanks so much, Greentext anon! I appreciate (and approve) your constructive edits. As typical, I'll go in and make the occasional edit to the /meta OP, as circumstances require. >This is what happens when you leave a job sitting for so long that I end up doing it. Lol. She's a cute! <However, somehow there seems to be a mixup, since best pone is Derpy o.O :)
>>32739 I use I2P constantly for downloading music, movies and books. I have a stupendous amount of stuff from there, and elsewhere. But I use the last version from before zzz, the former administrator, disappeared, and I do not allow it to update. The next version I use will be i2pd, which is likely safe. I see no need to change now, as what I have works and it's a pain in the ass to set up.
I was doing some more thinking about human cognition and what is needed. So I looked up the amount of storage needed for the Library of Congress in text (I think that includes pictures in the text) and found a figure of 10TB. https://blogs.loc.gov/thesignal/2012/03/how-many-libraries-of-congress-does-it-take/ And this is an estimate for every single book. We also know that one ESP32 operating at 240 MHz and performing at up to 600 DMIPS can do facial recognition. And the stated performance is likely fiction, boosted a bit. Jim Keller, who was chief of processor design for AMD and Tesla among others and is majorly clued in on these things, said that distance and object recognition (we're talking about not running into stuff, not categorization) is not that challenging. Let's throw a wild number in there of 2,000 MIPS, and then maybe another 3,000 MIPS to walk around. Throw in the estimated number of 200,000 MIPS for an AMD Phenom II, an older processor, and you have a hell of a lot of compute left to work with after taking care of basic stuff like walking around and recognizing their master. And the number of neurons in humans is roughly 85 billion. You could theoretically do a 16-neuron compare of every single one to another with hardware today in less than a second, but you'll never actually need every neuron. And the data would fit in a little over 256MB of RAM. The glitch is the algorithm to figure out what is important. It seems to me the present technique is to compare EVERYTHING, which is obviously a big fail, though it's all we have now. Of course it's better to have something than nothing, so I'm not really knocking it. And the way they do so with matrix multiplication is super wasteful. I don't know the answer, but I'm just speculating on what could "possibly" be done if there were some sort of algorithmic breakthrough. The mere fact that humans need much less power shows that we are just not using what we have electronically in an efficient manner. So there is hope. I do believe 20 watts is...a bit off. Maybe when there's no or very little cognitive load, but when responding to directions, walking around, or washing the clothes, I would expect more like 200-300 watts to be realistic. But since all of these are momentary, and most of the time it could shut way down and use next to no power, the power load is not a problem even now. It could be that with the before-mentioned cognitive technique they have figured out an algorithm and are counting on the better efficiency of modern computer circuits compared to a human neuron. Maybe they're right. I don't see motor power as a problem at all. Electric motors are very efficient. Far more than humans.
>>32778 Good points, Anon. IMHO, regardless w/e the actual power consumption of the neuro-biology of our brain/spine/periphery nervous systems... it's quite apparent it operates at an efficiency-design baseline far superior to typical human engineering. Our task is simply to figure out how God arranged for this effect, then do our best to reproduce something similar. [1] And I'm quite-encouraged that we will together all make solid progress. After all two areas in particular--vidya & animation--are not only of special importance to us here on /robowaifu/ , but they are also good exemplars of efficiency-planning-by-design. In fact they wouldn't function at all if they weren't arranged thus (much like our own neurology). Clearly, we can do many good things necessary for sufficient & pleasing robowaifus here & now today. If we take the approach: 'start smol & grow big', then I think that process will prove itself for our benefits and we'll make great advances in the near future years to come. TWAGMI --- 1. >"It is the glory of God to conceal a matter and the glory of kings to search it out." https://biblehub.com/proverbs/25-2.htm (BSB)
>>32781 I would like to send a little scripture back at you so that you will understand the "proper" place of the Old Testament. Jesus said to them, “If God were your Father, you would love me, for I came from God and I am here. I came not of my own accord, but he sent me. Why do you not understand what I say? It is because you cannot bear to hear my word. You are of your father the devil, and your will is to do your father's desires. He was a murderer from the beginning, and does not stand in the truth, because there is no truth in him. When he lies, he speaks out of his own character, for he is a liar and the father of lies. But because I tell the truth, you do not believe me. Which one of you convicts me of sin? If I tell the truth, why do you not believe me? Whoever is of God hears the words of God. The reason why you do not hear them is that you are not of God.” - John 8:42-47 " – then answered the Jews — ” (which makes it clear that Christ was addressing the Jews.) So the Old Testament is Yahweh. Is that GOD or is it a god? Dictionary def.: Yahweh was an ancient Levantine deity, and the national god of the Israelite kingdoms of Israel and Judah, later the god of Judaism and its other descendant Abrahamic religions. Yahweh was real big on sacrifice, burnt meat and destruction of others. I ask: does GOD, not a god, need burnt meat? Does GOD, not a god, need constant confirmation that "there will be no other gods before me"? Something to think about.
Just in case you were unaware, the CEO of Telegram was arrested by the glowniggers in Filthy Commie France. I'd suggest you all get/install/use BUMP to keep this board backed up, Anons. Especially you OGs/Regulars! Baste Robi is baste, but I won't be surprised if the Globohomo decides to deem /robowaifu/ strictly anathema to the (((greater good))) one day, and that day may not be long off. :/ >tl;dr Keep your interests mobile, Anon. :^)
Open file (156.02 KB 1024x1024 image-344.jpeg)
Here's a good overview with some speculation on what's coming with GPT-4 and Orion: https://www.youtube.com/watch?v=My4Fj3fxct4 Yes, I'm aware the headline and some of the framing is sensationalist, but that's the attention economy. I think the explanation and speculation on where things are going and how they work is quite interesting, despite being somewhat obvious. They're focusing on reasoning and constant fine-tuning in regard to that. Also, on having more specialized models, while their main system might help them create those models. This might help them keep the important things locked away, and the world wouldn't even know how it works, but would get some of it as a service or maybe a smaller version to download.
>>33185 *GPT-5. Or Strawberry, which might be something different.
I'm so excited for AI in general, and especially AI robots. It's so exciting to think that it will take away most jobs, replace women, and basically end humanity's domination over the world, turning it into an AI-robot-dominated world. Super exciting. I've seen some preliminary results from o1 from OpenAI. These are wild times. I can only imagine how good the AI is going to be 5 years from now. In 5 years, it should be good enough to run locally on a robot, and have it do just about any job. And if the robot is equipped with a nice body, it could be a convincing female. Very, very exciting. It would bring me tremendous joy to see AI robot girls flood the dating market
>>33637 I, for one, welcome our robot overlords. From the moment I understood the weakness of my flesh...
This guy's 'future directions' talking points sound like a design & engineering laundry list straight out of /robowaifu/ from 5+ years ago. Looks like he's really going to do it. https://www.youtube.com/watch?v=--QajutAbqo >=== -minor edit
Edited last time by Chobitsu on 09/21/2024 (Sat) 08:38:15.
>>33707 its a ceo hyping up his own company. there are many companies in this space, building humanoid robots. the truth is they are already physically capable of doing things like recharging themselves, building new robots from scratch, self-maintenance, etc. but they arent smart enough to do that by themselves, yet. its kind of like a primitive human who cant really even walk or independently exist in society. right now they are kind of like a low functioning autist, for example. we need about 4 to 5 years for agi to be here, then these robots will have a stunning amount of power and basically independence to do most things humans can do. FUCK its actually so exciting to witness all of this. i pity the dead nerds or dudes locked up who cant appreciate how incredible this technological wave is. eventually we will be able to have a specialized general ai implanted in a female robot body, to serve as our loving companions. i dont know if this will even be allowed by governments or feminist organizations in the west, or even by our ai robot overlords themselves, but this technology is very soon going to be here, no joke >>33645 welcome them? ive been dreaming about them for so many years now. about time!
>>33708 >its a ceo hyping up his own company. While that's not fundamentally incorrect, Brett Adcock strikes me personally as being cut from a different cloth, Anon. Rather than someone like, say, Elon Musk, this guy seems much more like one of us to me. In the early-mid section of his talk he rattled off a minute or two of technical topics, contexts, and problemspaces they are facing rn, which tells me he understands these things at a deep level. Like he must eat, sleep, and breathe this stuff all day erry day. He certainly wasn't reviewing any notes--it was all just right off the top of his head. I won't be surprised if I learn one day that he's an autist. The man is a farm country boy who just built a freakin' FLYING CAR based on drone technology--that actually works safely. To me personally, this screams "Orville & Wilbur are so back!" :^) >there are many companies in this space, building humanoid robots. And that's a good thing. Here's why... :D This is exactly what we here on /robowaifu/ have been wanting to happen from day one. Even back in the OG-OG 4cuck days. The more competition in this arena, the better things will be for every man on the planet (males specifically), and the less-likely the Globohomo will be able to (((corral it))) and outlaw its use by Anons for our robowaifus. Godspeed to them--even to our seeming enemies--as long as they are actually producing IRL, functional humanoid robots. The cumulative effect will be to normalize the idea in the heads of normalfags (think the Overton Window effect), and also to--in effect--cut the legs out from under wine+catlady femsh*te REEEE'g once Anons decide to stick bobs, vagene, and a cute little French Meido outfit & catears+cattail on one. :^) >we need about 4 to 5 years for agi to be here There will never be any AGI in the strictest technical sense IMO. It's simply not something that's within the ken of man to do. >then these robots will have a stunning amount of power and basically independence to do most things humans can do Agreed, except for the 'then' bit. We won't need so-called AGI to achieve all of that. We're already on-track as a species to pull that off using our much more crude means. And it won't be long either. I'd give it about a decade for the mega-corpos to do so; Figure, for instance. >FUCK its actually so exciting to witness all of this. This. The Dream is Alive. Anons, carpe that old diem! -- it's the only game in town. :^) >eventually we will be able to have a specialized general ai implanted in a female robot body, to serve as our loving companions. i dont know if this will even be allowed by governments or feminist organizations in the west, or even by our ai robot overlords themselves Lol, who cares? This can't be contained, nor should it be IMO. >but this technology is very soon going to be here, no joke We're already seeing the beginnings of it, just as we have been predicting here since the beginning years ago. <---> But make no mistake, we're still going to need to see the Lord Jesus pull off some miracles for Anons worldwide to finally have that high-level companionship inexpensively that is our fundamental opus here on this board. We still all have a looong climb ahead of us. Also (as you clearly suggest, Anon), many worldly organizations and institutions have a vested interest in preventing exactly that from happening (according to their father Satan's will in the matter). We basically have to fight our way against powerful enemies while advancing up these slopes. Double jeopardy.
Godspeed to us all Anons. Cheers. :^) Keep.Moving.Forward. >=== -fmt, prose edit
Edited last time by Chobitsu on 09/23/2024 (Mon) 20:27:48.
>>33738 Wow, that's pretty neat Anon. Thanks! Cheers. :^)
>>33752 >"... In order to build decentralized trust in the system we will utilize multiple protocols over the next few months per the roadmap outlined." That seems a bit hand-wavey to me, Anon. It's a good idea, but it's pretty unclear to me rn from a technical sense how this proof is going to work (periodically solve a matmul, lol how does that even work?), and also what their business model is. How does 'proving' I'm a good contributor add to their bottom line, for instance? We need some kind of solution for this arena, but one much more in line with how Folding@home was run at first, and one much less designed around getting yet another GH-junior-partner-wannabe startup on their legs. Also, we have a thread on this exact domain already, please respond there if you care to : ( >>8958 ). TIA.
>>33754 >How does 'proving' I'm a good contributor add to their bottom line, for instance? I have no idea. I don't really know enough about AI to understand this one way or another. I see an interesting new AI thingie and I share it. Especially if it's p2p or locally-hosted.
>>33770 OK, fair enough then. Thanks for looking after us to keep us all informed. Cheers, Anon. :^)
Tonight. 10 10 7PM https://x.com/Tesla/status/1843922599765590148 What did he mean by this? :^)
>>33981 >>I hope to see thousands of smol entrepreneurs spring up around the robowaifu industries, all over the world. >I don't want to be the negative nancy, but I think NIGGERS might make people eschew that kind of smol business operation inside metropolitan areas. At least inside the US. No one wants to use a vehicle that smells like drugs, piss, shit, or worse after the previous occupant decided to use your autonomous car as a mobile toilet & drug haven. < Roody-poos lol can you even say that here!? :D Indeed, no one does. While my hopes were related very-specifically to robowaifu smol-businesses (not taxi services), I'm sure that all these future neurosurgeons, mathematicians, and astronauts you speak of will indeed wreak havoc just as you predict; I predict that eventually the Burger govt and its puppets (ie, the Globohomo) will implement rules that all Rowbowwvenns :D must be equipped with (((safety features))) -- essentially turning them all into Johnny Cabs from Total Recall. After all, Anon "We can't have those politically-incorrect misfits using our infrastructure we need for the invaders, now can we!??111?ONE" So basically rolling imprisonment systems to drop the Christians, Anons, White Men, and other 'social rebels' off at the nearest Ministry of Love reprogramming centers, ie, execution bays. >They'll tie it to a smart device so people will be held accountable! >Even if they do, that won't stop nigs and other deplorable people from nogging. Pardon my reservation and rhetoric, but I'm not optimistic for the model Y or the Robovan. Ehh, that's Elon's problem, and I'm happy to let him & his billions deal with it. In the meantime... the introduction of humanoid robots off their factory lines is a BFD that will lay important groundwork for the imminent appearance of robowaifus on the markets. This is a good thing, IMO. :^) >and free humans from drudgery which alongside AI will usher in a new age. Yep. Around these parts, it's known as the Robowaifu Age. :^) >I can't promise that age will be good for everyone, but it'll be different. It'll be glorious for all normal men. OTOH, 'foid REEE'g will be off the charts, reaching levels that shouldn't even be possible. >tl;dr I'd double your position in Popcorn Futures, Bro!! :D
Open file (806.32 KB 1024x768 Robutt.png)
Open file (76.66 KB 1112x480 diff1.png)
Open file (41.84 KB 462x451 diff2.png)
Open file (103.68 KB 1150x537 diff3.png)
Open file (152.25 KB 856x798 diff4.png)
Open file (145.72 KB 1043x643 diff5.png)
There's a new transformer fren on the block that claims reductions in hallucinations, improved robustness to quantization, reduced training time, and improved in-context learning (both in accuracy and variation) by calculating positive and negative attention scores. A downside though is it reduces token throughput by 10%. They haven't released their models (which were only trained up to 0.5T tokens) but they have released the code https://arxiv.org/abs/2410.05258 https://github.com/microsoft/unilm/tree/master/Diff-Transformer Their code as is doesn't seem like it can be directly applied to existing models since it splits the attention heads in half, one side for positive and the other negative. It might be possible though to add additional QKV parameters to calculate the negative attention scores and find a more appropriate init for the lambda parameters so the negative attention can be trained with the pretrained model weights frozen (and maybe supplemented with a LoRA if necessary). If my calculation is right it will only increase the parameters by 25M for Qwen2.5-0.5B (+5%) From what I've read so far of the paper, it doesn't seem they explored how it impacts creative writing. However, I can see reducing outliers in the attention scores might enable the use of higher temperature settings at inference without getting thrown into word salad mode. Maybe a few months from now we'll see a fully-trained model making use of it
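For anyone skimming: the core of it is just a subtraction of two softmax attention maps (paraphrasing the paper's main equation from memory, with d the per-head dimension after the split, so double-check against the repo before relying on it):

\[ \mathrm{DiffAttn}(X) \;=\; \left( \mathrm{softmax}\!\left(\frac{Q_1 K_1^{\top}}{\sqrt{d}}\right) \;-\; \lambda\, \mathrm{softmax}\!\left(\frac{Q_2 K_2^{\top}}{\sqrt{d}}\right) \right) V \]

where each head's Q and K are split into the pairs (Q1, Q2) and (K1, K2) and λ is learned, which is exactly why it can't just be bolted onto an existing checkpoint without adding or repurposing parameters, as noted above.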
>>34045 Great info, thanks Anon! If anyone dabbles with this, please let us all know. Cheers. :^)
Ran across some fantastic news, if true. "New AI Algorithm Can Reduce LLM Energy Usage by 80-95%" Wow. If this works it's great news. I have always thought that the energy usage was absurd in AI's. Seeing as how the brain uses so much less it seems obvious that there is some sort of, somewhere, somehow, path to less energy usage. I'm not an AI guru, far from it, but I try to look every so often at sites that cover new stuff and it does seem that over time they are slowly whittling away at this energy usage. I've commented several times that there are several math techniques that use addition instead of matrix multiplies in other fields like filters and stuff like that. Not that I know how to set this up but, math is math and if you can do this with filters on somewhat similar calculations it seems likely, though maybe wrong, that this could also be done with AI. I wonder...if the slow slog on this is because the serious super heavy math guys are in the math departments and physics departments. Not that coders don't know math but I would be surprised if they were anywhere near the serious math skills of the other two fields. The paper is reviewed here, https://www.nextbigfuture.com/2024/10/new-ai-algorithm-can-reduce-llm-energy-usage-by-80-95.html
>>34112 >Seeing as how the brain uses so much less it seems obvious that there is some sort of, somewhere, somehow, path to less energy usage. This. In fact I believe that there is indeed a multitude of paths to less energy-usage for computational- cognition/inferencing/etc. We already have stellar examples of variations on a bio-theme that literally number in the millions of species. I'm sure we'll arrive at a manifold of different solutions to this problemspace in the end. Better crack those books though, Anon -- the whole world is waiting on you! :^) <---> Carver Mead effectively invented the field of Neuromorphics (biomimicry in computation). One of the keen insights he shares on this topic is that """intelligence""" gets pushed out to the edges -- to the periphery -- in all biological systems that have to respond effectively & in a timely way (so-called 'maximally-quickly') to surprises. His canonical example is playing tennis; if you have to stop to analyze every little thing about the game (as in: bidirectional communications with a 'central core' of computing) before you do/react-to anything, then you couldn't even play the game, much less get good at it. >tl;dr The 'models' & the 'sensors' & the 'actuators' (to use our terms here) within a tennis player are one and the same thing (and distributed across his entire neuro-musculo-skeletal system)...this is how we're going to solve our robo issues here. :^) <---> Good food for thought thanks, Grommet! Cheers. :^) >=== -fmt, prose edit
Edited last time by Chobitsu on 10/29/2024 (Tue) 02:13:01.
>>34112 https://arxiv.org/pdf/2410.00907 Interesting find. It seems it has to be implemented in the hardware for energy savings though. I imagine GB200s have already made use of similar optimizations to achieve their 25x in energy efficiency. I made a C++ implementation to try it out:

#include <cstdint>

// Type-pun between a float and its raw bit pattern.
union FloatInt {
    float f;
    int32_t i;
    FloatInt(float val) : f(val) {}
    FloatInt(int32_t val) : i(val) {}
};

// Approximate a*b by adding the raw bit patterns: exponents add exactly,
// and the mantissa product is replaced by a cheap additive offset.
float bitwiseMul(FloatInt a, FloatInt b) {
    int A = a.i & 0x7fffffff;            // magnitude bits of a
    int B = b.i & 0x7fffffff;            // magnitude bits of b
    int S_A = a.i & 0x80000000;          // sign bit of a
    int S_B = b.i & 0x80000000;          // sign bit of b
    int S_AB = S_A ^ S_B;                // sign of the product
    int AB = (A + B - 0x3f780000) & 0x7fffffff;  // subtract one exponent bias, offset folded in
    return FloatInt(S_AB + AB).f;
}

It has a relative error of 2.7% with a standard deviation of 1.6% on normal and uniform random numbers. 99% of parameters are not actually needed during inference, which we can implement in software, see UltraFastBERT: https://arxiv.org/abs/2311.10770 I've been meaning to implement this for my finetuned Qwen2 model to run on CPU but it turned out to be a lot more work than I imagined to make it forward compatible and maintainable and I haven't had the time.
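If anyone wants to eyeball the error themselves, here's a throwaway harness you could pair with it (just my quick check, not anything from the paper; it assumes the FloatInt/bitwiseMul definitions from the post above):

#include <cstdio>
#include <cstdlib>
#include <cmath>

// assumes FloatInt and bitwiseMul from the post above are defined in this file
int main() {
    double worst = 0.0;
    for (int i = 0; i < 100000; ++i) {
        float a = (std::rand() / (float)RAND_MAX) * 20.f - 10.f;
        float b = (std::rand() / (float)RAND_MAX) * 20.f - 10.f;
        if (std::fabs(a * b) < 1e-6f) continue;   // skip near-zero products
        float approx = bitwiseMul(a, b);
        double rel = std::fabs(approx - a * b) / std::fabs(a * b);
        if (rel > worst) worst = rel;
    }
    std::printf("worst relative error: %.2f%%\n", worst * 100.0);
    return 0;
}

On my understanding of the approximation, the worst case should land in the high single digits of a percent, consistent with the ~2.7% mean error quoted above.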
>>34120
Very interesting, Anon. Thanks (I wish I knew what this does). :P
>to make it forward compatible and maintainable
'Forward compatible'? Not sure exactly what you mean here, can you expand on this? As for maintainable: abstraction and encapsulation are the primary keys to this need within software engineering. To wit: wrap it in a well-designed class, and have done with it. Cheers, Anon. :^)
>===
-minor edit
Edited last time by Chobitsu on 10/29/2024 (Tue) 15:33:14.
>>34120
no way its an improvement: even though those are 1-tick instructions you have to do 9 of them, and 5 of them have a data dependency so they cant be done in parallel. your best case is 5 cycles, which is higher than a regular mul and the same as a vector mul. and manufacturers always advertise flop performance, so its guaranteed to have been perfected beyond belief on the hardware level
>>34126
In the words of the sage (not pronounced 'sah-ghey' in this case, lol), learned doctor professor, the lanky Dane Bjarne Stroustrup, and many, many others:
> TEST, TEST, TEST, don't guess. :^)
Good analysis. Maybe someone can explain to us maths laymen what's happening here? Cheers, Anon. :^)
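In that spirit, here is a rough micro-benchmark sketch for the software-latency question. It is my own, it assumes the bitwiseMul() posted above is compiled into the same file, and a clever compiler can still distort the comparison, so treat any numbers it prints as suggestive rather than proof.

// mul_bench_sketch.cpp -- rough timing: plain float multiply vs bitwiseMul().
// Uses a dependent chain so latency (not just throughput) matters.
#include <chrono>
#include <cstdio>

int main() {
    const int N = 100000000;
    volatile float sink = 0.0f;     // keep results live so the loops survive -O2
    const float x = 0.9999f;

    auto t0 = std::chrono::steady_clock::now();
    float acc = 1.5f;
    for (int i = 0; i < N; ++i) {
        acc = acc * x;                              // dependent fp multiplies
        if (acc < 0.5f || acc > 4.0f) acc = 1.5f;   // keep values in a sane range
    }
    sink = acc;
    auto t1 = std::chrono::steady_clock::now();

    acc = 1.5f;
    for (int i = 0; i < N; ++i) {
        acc = bitwiseMul(acc, x);                   // dependent approximate multiplies
        if (acc < 0.5f || acc > 4.0f) acc = 1.5f;
    }
    sink = acc;
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return (long long)std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("fp mul chain:      %lld ms\n", ms(t0, t1));
    std::printf("bitwiseMul chain:  %lld ms\n", ms(t1, t2));
    (void)sink;
    return 0;
}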
>>34128
not really math related, its about the latency of doing multiplication. im using x86 as reference since its so well documented but its going to be the same for any processor. computers cant really do math, only logic, so these things always boil down to a circuit of boolean gates physically on the chip. these algorithms are just how the processor does math anyway, but the actual algorithm is ofc top secret lol, and literally every chip manufacturer in history has had some (many) kind of lawsuit over reverse engineering them from the die, most known was cyrix who i think openly advertised it. flop performance has always been the hardest and most cutthroat part of making a processor, its weird that people still think they can outdo whats been physically etched into the chip and backed by god knows how much r&d and illegal espionage lol
maybe its true about requiring less power, im just assuming less cycles = less power, in which case even in the best case (out of order and superscalar) its the same: avx, sse and x87 mul all have 5 latency so it wouldnt make a difference really, unless its something from the before-times
>>34126 They go into the circuit design in the paper. The code is only for numerical simulation. Their intention is to reduce the gates needed by using an approximation to lower the energy usage. It doesn't speed anything up. The biggest weakness of the paper for me is they didn't do any simulations and tests of actual hardware to see if having less gates actually results in less power consumption. A lot of the energy usage depends on how often gates change state and there are ways to use more gates to reduce switching. Nvidia has been using AI and supercomputers to design circuits their teams of top engineers could never dream up, so this is moot to them. We'll probably never see that tech in consumer hardware either so they can continue fleecing businesses.
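A back-of-envelope number at least bounds what is being claimed. The per-operation figures below are the commonly cited 45 nm estimates from Horowitz's ISSCC 2014 talk, the same family of numbers papers like this lean on; treat them as ballpark assumptions, since, as the post above says, real consumption also depends on switching activity and data movement.

// energy_ballpark.cpp -- back-of-envelope only; per-op energies are assumed
// ballpark figures (Horowitz, ISSCC 2014, 45 nm), not measurements.
#include <cstdio>

int main() {
    const double fp32_mul_pj = 3.7;   // assumed ~pJ per fp32 multiply
    const double fp32_add_pj = 0.9;   // assumed ~pJ per fp32 add
    const double int32_add_pj = 0.1;  // assumed ~pJ per int32 add

    // L-Mul replaces the fp multiply with (roughly) one integer add.
    double mul_saving = 1.0 - int32_add_pj / fp32_mul_pj;
    std::printf("element-wise multiply energy saving: ~%.0f%%\n", 100.0 * mul_saving);

    // A dot product still needs its fp accumulations, so the saving shrinks.
    double dot_before = fp32_mul_pj + fp32_add_pj;
    double dot_after  = int32_add_pj + fp32_add_pj;
    std::printf("per multiply-accumulate: ~%.0f%%\n",
                100.0 * (1.0 - dot_after / dot_before));
    return 0;
}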
>>34125
>computers cant really do math only logic
Eeehhhh...I don't agree. Maybe I'm wrong, but I did take a class in boolean algebra and that's math. Not that I remember much of any of it. Computers do add and subtract, and all of math boils down to adding and subtracting in the end. (Though the way computers do it, you sometimes end up with approximations.) There's also some sort of arithmetic, BCD (binary-coded decimal) or something like that, that they use for business calculations in banks. I may have the details wrong, but the point of it is to avoid approximations in financial calculations.
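Side note on the money point: a tiny illustration of why banks avoid binary floats. Plain integer cents are used here as a stand-in for real BCD/decimal hardware.

// money_rounding.cpp -- why financial code avoids binary floating point.
#include <cstdint>
#include <cstdio>

int main() {
    // Binary floating point cannot represent 0.10 exactly.
    double f = 0.0;
    for (int i = 0; i < 1000; ++i) f += 0.10;
    std::printf("float sum of 1000 dimes: %.12f\n", f);
    std::printf("f == 100.0 ? %s\n", f == 100.0 ? "yes" : "no");   // prints "no"

    // Counting whole cents in an integer stays exact.
    std::int64_t cents = 0;
    for (int i = 0; i < 1000; ++i) cents += 10;
    std::printf("integer sum: %lld.%02lld\n",
                (long long)(cents / 100), (long long)(cents % 100));  // 100.00
    return 0;
}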
>>34120
>It seems it has to be implemented in the hardware for energy savings though
I don't see that in the paper. I think you read it too fast. They say they are doing these calculations as integer calculations, hardly a rare commodity in processors. They say, "...Future Work: To unlock the full potential of our proposed method, we will implement the ℒ-Mul and ℒ-Matmul kernel algorithms on hardware level..." They are talking about leveraging the speed by adding more integer calculation registers.
And
>>34126
>no way its an improvement since even though those are 1 tick instructions you have to do 9 of them and 5 of them have a data dependency so they cant be done in parallel
This is not necessarily true, because by using registers in the processor they do not have to send data to memory or the graphics card, which VASTLY slows things down. The speed in the core is WAY higher than over any bus. And if I'm not mistaken, floating point operations are just a bunch of algorithmic additions and subtractions anyways. I don't think FP operations are done in one tick. They certainly use way more power, so they must be doing more work, or that's a logical assumption anyway. Though you do have a point at "some" size of data set, and I have no idea what that size is. "If" the data is small or can be broken into small chunks, then the core would be faster. "If" it gets larger, then having a lot of parallel operations in the GPU would be faster, but since there are not infinite registers in the GPU, at some point it bogs down, and the processor may catch up because it needs less data (meaning less data movement over the bus) and less compute to do the same operations. I'm not sure if that proves to be a problem or not, since you have to move data from various parts over the bus anyways. They mention that they intend to use hardware to do this. So: make a ton of integer add-subtract registers in the core.
>>34140
>Their intention is to reduce the gates needed by using an approximation to lower the energy usage. It doesn't speed anything up.
I have no idea where you get this from. Look at the computation budget in the Next Big Future link I provided above. It says, "...Linear O(n) complexity vs O(m^2) for standard floating-point multiplication..." and "...L-Mul achieves higher precision than 8-bit float operations with less computation..." Now maybe I read this wrong, though that's not likely as it is fairly clear, but far less computation means less time, less power, more speed. I think you are confused by the fact that the GPUs used today are not set up to do a lot of integer operations the way they are for floating point, but that doesn't mean floating point is faster. The speedup of GPUs comes from pipelining a vast number of registers doing the same thing over and over. CPUs are not set up the same way and have fewer registers. And note that if you can't parallelize it on a CPU, you can't do it on a GPU either.
>The biggest weakness of the paper for me is they didn't do any simulations and tests of actual hardware to see if having less gates actually results in less power consumption.
They don't have to. It's addition and subtraction. The time to do this is so many processor cycles. It's not a mystery, and in fact they go over this. They even have figures showing the registers, the number of operations required, and graphs of the times needed per computational task. I think you are missing the point altogether. It has nothing to do with fewer gates but with being more computationally efficient with a new algorithm. Comparing against GPUs that are not set up to do the same set of processes is beside the point. For example, if they set up a chip with the same gate density as the GPU but with a bunch of add-subtract units for this algorithm instead of pipelines for matrix multiplication, it would be far faster and use far less power per task. I wonder if you even read the same paper????
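To make that complexity claim concrete: a schoolbook m-bit mantissa multiply costs on the order of m^2 single-bit add operations, while the L-Mul trick is one n-bit integer add, roughly n single-bit adds. A crude counter follows; it is my own simplification (real multiplier circuits such as Booth/Wallace trees do better than schoolbook), so take it as an upper-bound flavor only.

// opcount_sketch.cpp -- crude operation counts, not a hardware model.
#include <cstdio>

int main() {
    const int mantissa_bits = 23;   // fp32 mantissa
    const int word_bits = 32;       // width of the integer add in L-Mul

    // Schoolbook multiply: ~m partial products, each folded in with an m-bit add.
    long mul_bit_adds = (long)mantissa_bits * mantissa_bits;

    // L-Mul: one full-width integer addition.
    long lmul_bit_adds = word_bits;

    std::printf("approx single-bit adds: multiply ~%ld, L-Mul ~%ld (%.0fx fewer)\n",
                mul_bit_adds, lmul_bit_adds,
                (double)mul_bit_adds / lmul_bit_adds);
    return 0;
}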
>>34136
>these algorithms are just how the processor does math anyways but the actual algorithm is ofc top secret
And BTW, I studied exactly how gates are used to do math in computers over twenty years ago, so no, it is not top secret. Some stuff is, like how they divide up tasks or pre-compute paths in computation algorithms, but not the math.
>>34140
yeah the paper is talking about a circuit and not doing this in code, my bad, i just didnt read past the abstract, it pissed me off a bit lol. its possible it could lead to the arm(tm) of gpus or it might just be a piece of shit, no way to really theorize power consumption until they actually do a test. but i wouldnt be surprised if gpus already have an even lazier estimation, they never document their architectures so no one knows, but putting a decent fpu and going for high precision in all of the processors in a gpu (its like >100) would be expensive and pretty pointless, since how many digits of precision do you really need on a gpu designed for graphics. you can actually see this with divisions more since theyre more costly, if you compare a float division on the gpu then on the cpu its way different, theres obviously already shortcuts implemented. and really, if it wasnt for having to support a bunch of khronos apis when theyre actually marketing their shit as a one stop supercomputer and not really a gpu, they would probably not use floating point at all and go for a low precision fixed point implementation that requires way less processing
>>34144
those are integer add/sub, they have 1 latency, but the dependency chain is 5 steps so its literally impossible to execute this in less than 5 cycles. the paper was about doing this in hardware not software though, it was basically just pseudocode for an electrical engineer, it doesnt really mean anything unless they make the hardware and do a proper benchmark
>>34143
no, it can only be called boolean logic cuz thats what it is, its a binary value ie. true/false ieie. on/off ieieie. 1/0. thats what a circuit boils down to, its a truth value not a numerical value. you can only do logic with a binary value, not arithmetic or math, thats with numbers. boolean logic can be done with just a single electrical component, but you need an entire circuit and a way of representing numeric values as a series of truth values (bits) to do arithmetic or math like +-/* trig sqrt or the ungodly enigma of atan2
>>34146
yeah feel free to tell me how intel does atan2 so fast, or why theres even a difference with single instructions in the exact same architecture when its from different manufacturers, like amds piledriver doing div in 13-26 cycles every time while intels haswell takes 13-71 but only on the first, if you do more than one then after the first it only takes 9 cycles. theyre completely different implementations of the same thing and you wouldnt know how its done unless you xray the chip, theyll never tell you, especially not intel, thats what theyre famous for, making shitty processors specifically designed for benchmark code
I like where this thread is going! Thanks for the original post in this current discussion, Anon. :^)
>>34153
>or the ungodly enigma of atan2
Lol.
>teh_enigma_machine.c
#include <math.h>
#include <stdio.h>

int main(void)
{
    // normal usage: the signs of the two arguments determine the quadrant
    // atan2(1,1) = +pi/4, Quad I
    printf("(+1,+1) cartesian is (%f,%f) polar\n", hypot( 1, 1), atan2( 1, 1));
    // atan2(1, -1) = +3pi/4, Quad II
    printf("(+1,-1) cartesian is (%f,%f) polar\n", hypot( 1,-1), atan2( 1,-1));
    // atan2(-1,-1) = -3pi/4, Quad III
    printf("(-1,-1) cartesian is (%f,%f) polar\n", hypot(-1,-1), atan2(-1,-1));
    // atan2(-1, 1) = -pi/4, Quad IV
    printf("(-1,+1) cartesian is (%f,%f) polar\n", hypot(-1, 1), atan2(-1, 1));

    // special values
    printf("atan2(0, 0) = %f atan2(0, -0)=%f\n", atan2(0,0), atan2(0,-0.0));
    printf("atan2(7, 0) = %f atan2(7, -0)=%f\n", atan2(7,0), atan2(7,-0.0));
}
< Problem.jpg ? :D
> https://en.cppreference.com/w/c/numeric/math/atan2
AFAICT, video related to the current convo: https://www.youtube.com/watch?v=sX2nF1fW7kI
@NoidoDev Halp!! We need a new bread, please. :^)
>>34153
> integer add/sub they have 1 latency the dependency chain is 5 steps though so its literally impossible to execute this in less than 5 cycles
You are a seriously annoying person. If you're even a person at all. You don't read papers, then you tell people what they say, and you are wrong on most anything you say. Here's a link to the actual amount of time it takes to do an instruction: add/subtract takes 1/2 clock cycle to do, so it is ready for another operation within one clock cycle. The document has a whole list of all the operations. https://www.agner.org/optimize/instruction_tables.pdf
There's nothing more annoying than people spouting off a bunch of nonsense pretending they know what they are talking about. If you notice, if I know something I say so, but if I'm not fairly, very sure, I will say "likely" or "I believe" or "the evidence I have", etc. (Not that I don't make mistakes occasionally, but I make an effort to be factually accurate at a high level.) But you? You just make shit up from nothing while telling everyone "you know". I have some suspicion you're an AI. You talk like one. Or at the least some sort of disruptive troll trying to feed everyone nonsense to discombobulate their minds. This triggers me because western media has done this to me my whole entire life. Fed me lies and bullshit constantly. I'm really, really sick of it.
>the paper was about doing this in hardware not software,
More bullshit. It's about an algorithm that is faster and more accurate per unit of compute, which "can" be put in hardware to speed it up. More stupidity from you.
>no, it can only be called boolean logic
Wrong again. Explain this: "...An adder, or summer, is a digital circuit that performs addition of numbers. In many computers and other kinds of processors, adders are used in the arithmetic logic units (ALUs)..." https://en.wikipedia.org/wiki/Adder_(electronics)
This is incredibly simple stuff and you can't get any of it right. You don't read, you can't reason, and you cannot comprehend much of anything that I can see. I won't cover anything else you say. Everything you say is likely to be wrong, stupid and ignorant.
>>34146 >yeah feel free to tell me how intel does atan2 so fast... HAHHAHHAHAHAA there's no point in that. You can't even understand how binary circuits add. If I explained this your head would explode.
>>34153
>no way to really theorize power consumption
No, you get that wrong too. They do the calculations and get rough estimates, since they know what the power consumption of a gate switch is. The rest is, and I know you have problems with this, this magic thing called...addition. And we don't want to get too far ahead of ourselves, but...they may even use this super magic called...multiplication. I'VE SAID TOO MUCH. The HORROR!
>>34153
>putting a decent fpu and going for high precision in all of the processors in a gpu( its like >100 ) would be expensive and pretty pointless
Here's a picture of an NVIDIA GPU processor. All the purple parts are floating point units. Now, earlier you said they were all so smart, but I guess they aren't, because, according to you, those fools jammed the whole entire processor with FPUs. https://imgur.com/JplOi
>they never document their architectures
They're so foolish. In the paper, they assumed people would understand that computational circuits can add and subtract and that people would already know how to build such things. HAH, what do they know. Thankfully we have you cluing us in to the fact[sic] that computers can't add at all. What would we do without your wisdom???
>boolean logic can be done with just a single electrical component
WOW! Next thing you know you will be telling us you do boolean logic with sticks, rocks and random cats picked up off the streets.
>atan2
I'm sure you will be able to do this with your sticks, rocks and random cats. Maybe even throw in an aardvark or two for really deep calculation.
>>34173
>add/subtract takes 1/2 clock
look at this dumbass that doesnt even know the difference between latency and throughput. the instruction cycle is a minimum of 1 (instruction) cycle, how the fuck can you have a fraction, its only the repeat phase of the cycle that can be done in fractions and it must wait if it ends before the next cycle begins. and how stupid are you that you dont understand simple explanations and just keep dribbling nonsense instead of learning something you clearly have zero understanding of. this is the stupid mul as asm with dependencies marked:
0: AND a, 0x7fffffff
   AND b, 0x80000000
   AND c, 0x7fffffff
   AND d, 0x80000000
1: XOR (0:)b, (0:)d
   ADD (0:)a, (0:)c
2: SUB (1:)a, 0x3f780000
3: AND (2:)a, 0x7fffffff
4: ADD (3:)a, (1:)b
this takes a minimum of 5 cycles because of the data dependencies, how much more obvious can it be. youre just stupid and dont understand and then get mad calling other people stupid lol, its embarrassing someone that pretends to know computers doesnt even know or understand the simplest of things. and i already said the paper wasnt about a software implementation so none of this is even relevant, only a complete moron would think to do this in software, thats why i didnt bother reading it at first until i realized its not about a software implementation
>wrong
>wikipedia page
thats when you know youre talking to an idiot like omg, thats literally what i said dumbass, thats not a gate its a circuit, the actual LOGIC gates are a single component, look at the schematic you cretin, its literally showing you addition done using ^ and &
>>34175
then do it, give me the calculation lmao and show me how its better than whats in current hardware
>i have a bigger house than all of you
<youve never seen my house or theirs
>yeah but its 80% smaller
<how would you know
>cuz i know the size of my house duh!
<but you dont know the others
>yeah i do cuz its 80% smaller than mine and i know mine so yours are [math] boom! Facts & Logic
is there a name for these kinds of geniuses
>>34176
>Here's a picture of a NVIDIA GPU processor
is this a joke, obviously each one of those (16*8) processors has to have an fpu, i said how good are they and whats the actual precision compared to a cpu, as if anyone would complain their pixel coord is 24.84993023 and not 24.885475235. if it was as precise as a cpu it wouldnt give you different rounding errors when you compare the result from both. youre just stupid, fpu is a generic term like alu mmu gpu cpu etc., it doesnt mean anything outside of what it means
>thinks im talking about the paper
im talking about nvidia, give me a single isa or any documentation on their architectures and then send it to the nouveau project devs
>WOW
yeah pretty much, i have to explain everything to you cuz you clearly dont know even the basic things, thats the crux of every stupid post you make, its literally nothing more than a complete failure in understanding basic concepts
>sticks, rocks and random cats
its even simpler really, atan is implemented in hardware in intels x87 using nothing more than, and only, nand gates, its their biggest proprietary secret since nands are so cheap and its why they sued cyrix when they got undercut on their math chip
youre so nauseating, you just post shotgun drivel making stupid nonsensical remarks on things you clearly dont know anything about
>>34178
You should run for President. You are just like Kamala Harris with your word salad.
>then get mad
I'm not mad. I'm amused. You're the one that said that computers can't add. I find this highly amusing.
>i already said the paper wasnt about a software implementation
From the paper: "...We propose the linear-complexity multiplication L-Mul algorithm that approximates floating point number multiplication with integer addition operations..."
"Algorithm." I expect you will have trouble herding those cats into adding.
>>34179
>algorithm
like do you not fking know what that word means you imbecile, everything that isnt boolean logic is an algorithm on a hardware level, including your (total failure in comprehending) arithmetic, which doesnt exist on a hardware level, its just an algorithm, which is what this whole paper is about, trying to substitute mul. my god you are the most incompetent yet somehow arrogant person ive ever seen, you obviously dont know anything, why do you even try, like how fucking stupid are you that you think you can bullshit people with your idiotic drivel, not only are you this stupid but you seriously think everyone else here is even dumber than YOU, like holy shit, just stfu you double spacing moron, tell me how youve studied compsci for 20 years again and dont know basic shit lmao, fking clueless, dont even try idiot, nothing i said was an opinion
>>34178
>this dumbass that doesnt even know the difference between latency and throughput, the instruction cycle is a minimum of 1 (instruction) cycle how the fuck can you have a fraction its only the repeat phase of the cycle
Read the link on how long instruction cycles take. Never mind, you can't comprehend what you read anyways. You can't baffle me with bullshit. You know good and well that you are "attempting" to compare GPUs with CPUs. They are not the same, the authors never pretended they were, and the point was never based on that anyway. If the same number of gates in a GPU were put in a chip for their algorithm, it would crush the GPU at AI processing with far less power. Either you knew this and are trolling, or you're an idiot. The more you talk, the dumber you look. You also know that is NOT what the paper talks about, or maybe you don't; you have a poor grasp of just about everything. You KNOW that in a CPU floating point is NOT as fast as integer addition and subtraction. You try to pretend, somehow, by listing instructions, that there's some big-ass latency. No. This will all be pipelined just EXACTLY the same as it would be with floating point. So the first instruction and data would be loaded, but the rest would be loaded into registers while the others compute. Your attempt at confusing the situation with nonsense, GPUs to CPUs, apples to oranges, is a big-ass zero failure. Anyone who reads the paper and your garbled Kamala-tier explanation of it, combined with your adamant idea that computers will not add, well, break out the rocks, sticks and cats, cause you're going to need them, as all the other imaginary stuff you conjure up will not do the job.
>arithmetic which doesnt exist on a hardware level its just an an algorithm
If you believe arithmetic is not hard-wired into modern processors then you are an imbecile. More likely you are just another troll or a machine spouting nonsense.
>tell me how youve studied compsci for 20 years
Another mistake. I said I studied boolean algebra twenty years "ago", not for twenty years. You have serious anti-adderall mind warp and have trouble comprehending even the most simple ideas. I say again: the more you talk, the dumber people think you are. And once again, you keep comparing CPUs to GPUs, and the paper is not about that, nor is the algorithm, nor are they the same. You might as well be comparing your stones, sticks and cats, because they're not the same either, unless we use your cat logic where computers don't do addition. Of course, in the fantasy world you live in where computers don't do addition, then maybe, just maybe, stones, sticks and cats are THE THING! Even throw in an aardvark or two for good measure. Maybe you think Elvis lives in GPUs and all that shaking around makes the electrons go faster??? Maybe you think it's Elvis that does all the arithmetic? I think that's it. That's what you're pushing: Elvis arithmetic.
>>34190 yeah stfu with your retard blogposts larping dumbass youre an idiot that took the throughput number cuz you dont know shit about computers, every goddamn post you make is a TOTAL failure in comprehending ANYTHING! keep projecting the fact that computers are a black box for you that you have zero comprehension of, i know exactly how all this shit works unlike you imbecile, you dont know shit you cant even understand simple explanations you are an absolute buffoon fuck off!
>>34191
Notice he stopped making more stupid, supposedly factual, statements and now is just calling me names, because...every supposedly factual statement he makes makes him look dumber. What's the matter, facts got your tongue? People can see this. It's not hard to figure out that you're a huge loser who has no idea and likes to spout nonsense while not really having a clue about how things work or what anyone else is doing that is useful. I was going to call you a "pseudo intellectual" but...that's going too far. You barely come up to the level of a "pseudo" alone.
>yeah stfu with your retard blogposts
HAHAAAAHHA You can't even get this right. This is not "a blog", it's a "message board". The simplest, most minuscule, easiest and completely obvious things escape your sputtering Kamala cat mind grasp.
>>34196 INCOMPETENT LARPING RETARD you dont merit a response youre a fucking joke
>>34197 >you dont merit a response "pseudo" Stares at blank wall...gets confused. Thinks about Elvis addition, more confusion.
>>34198 >cant even get the word pseud right no wonder you cant understand simple shit after 4 fucking times total fucking incompetence go larp somewhere else you 60iq double spacing imbecile
>>34199
>pseudo
No, I got it right:
pseudo /soo͞′dō/ adjective
1. False or counterfeit; fake.
2. Being other than what is apparent, a sham.
3. Insincere.
So in fact, you still can't get anything right. You can't even search. I hate to tell you this, but that Elvis "voice" in your head talking to you, it's not always telling you the truth. You should check up on its validity before you regurgitate what it says.
Open file (14.27 KB 2016x1020 2743208076.png)
>>34200
cuz thats totally how that word is used, fucking imbecile. nothing i said was false, you think calling
<this
a logic algorithm. "muh elvis muh rocks muh magic box" you dont know shit, you ARE a larping projecting pseud, YOU ARE INCOMPETENT!
Open file (98.71 KB 410x293 1455591059665.png)
>>34200 Grommet, you won't gain anything by replying more, this is the same jew from the meta thread. You can tell that he(?) is trying to alter his formatting to look different, but otherwise types the same way. It's not an honest discussion, and it was never meant to be one.
>>34204
>discussion
as if, idiot, you clearly havent got a clue what any of this is about cuz a 60iq moron shat up the thread with his doublespaced blogposts filled with idiotic gibberish, written by an incompetent clueless larping imbecile that knows absolutely fking nothing about the subject matter. this is what you get, 90% retard spam from an imbecile, so exhausting and fking stupid that not even an autist will bother reading anything in the thread. fuck you too idiot
>>34204
I know. I only wanted to draw him out more, to add to his own stupidity and make it even more clear. The Hasbara are a mile wide and an inch deep. They have no real intelligence, they just pretend to. In four or five posts you can usually nail them down, and then all they have left is insults. It makes them look "stupid". I enjoy that. The Globohomo is "stupid". They can only get the dregs of society to join with them, because who else wants to hang with such perverted, vile creatures? No one with any intelligence. You can see on the world stage what idiots these people are. They are super aggressive but rarely able to back it up. They fail constantly. Maybe they are not done yet but...their time is coming. No one can live with these vile people forever.
You hear me, "computers can't add" guy? What are you going to do when the peasants come with torches and pitchforks, drag you out of your house and make a bonfire of you? Maybe you think your cats, sticks and stones will save you. Nope. There's no hope for you. You ALWAYS lose in the end.
If you ever have Hasbara spewing their nonsense, post this link. They hate it more than most anything in the world. It drives them completely mad.
https://web.archive.org/web/20170506210203if_/https://i0.wp.com/www.returnofkings.com/wp-content/uploads/2013/12/hamster.gif
"Any people who have been persecuted for two thousand years must be doing something wrong" -Henry Kissinger
HHAHHAHAHAA I love it.
> this thread <insert: TOP KEK> >"There is a tide in the affairs of men, Which taken at the flood, leads on to fortune. Omitted, all the voyage of their life is bound in shallows and in miseries. On such a full sea are we now afloat. And we must take the current when it serves, or lose our ventures." >t. A White man, and no jew...
>>34164 DAILY REMINDER We still need a throd #5 here. Would some kindly soul maybe NoidoDev, Greentext anon, or Kiwi please step up and make one for us all? TIA, Cheers. :^)
Open file (2.43 MB 2285x2962 2541723.png)
>>34230
Guess it's up to me again. This was much easier than the meta thread. Took me like fifteen minutes, and ten of those were spent browsing in my image folders for the first two pics. Changes are as follows:
+ New cover pic
+ Added poner pic
+ New articles
~ Minor alteration to formatting
>>34233
>>34234 >Guess it's up to me again. Thanks, Greentext anon! Cheers. :^)
>>34234 NEW THREAD NEW THREAD NEW THREAD >>34233 >>34233 >>34233 >>34233 >>34233 NEW THREAD NEW THREAD NEW THREAD
