>>22660
I'm trying everything out with Colabs. I can run 1.3B and 2.7B models on my personal computer, but for anything over that I'm just using stuff like the light lizard place. I think that's how people are going to start running some of these: just renting out space, or building a dedicated server network with people in on the project. I've seen some OK stuff on high-end computers that is definitely at least touching GPT-3 in the basics, conversations and character play.
One of the reasons I think Google seemed so surprised in their internal memo is that they expected people to attack the hardware issue instead of model density, which is where they have the advantage. It looks like they're using quantum memory and processing to run their system, and I wouldn't be surprised if OpenAI was too, which is probably why they didn't expect 'hobbyists' to be able to compete: they've got the ability to store effectively unlimited models simultaneously and reference and calculate across them, and they're also probably using quantum natural language processing, because everything needs to have quantum in front of it or it isn't cool anymore. The leak is probably a bit of a lie; they aren't really worried about being surpassed, because they'll just take all the open-source work that benefits them and adapt it. They probably want open source to think it has the edge and start to push things, just to see what it's capable of developing. That isn't to say open source isn't impressive, because it is. It just doesn't have all the advantages Google has, plus Google can access open-source projects too.
>>22661
No, you can't bridge them, and most GPU cards under 6 GB are going to run everything like crap, very slowly. My GPU is fine, I took it from my old computer, but my processor is a bottleneck. I have to think about whether I want to invest in this, because it looks like if you want to run a good home model you need an i7/i9 or the AMD equivalent. Intel and Nvidia are building hardware specifically for deep-learning purposes.
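A back-of-the-envelope way to see why sub-6 GB cards struggle, and why 1.3B/2.7B models are the home-PC sweet spot. This is a rough sketch: the bytes-per-parameter figures are standard rules of thumb, and it deliberately ignores activation and KV-cache overhead, so real usage runs a bit higher.

```python
# Approximate bytes needed per model parameter at common precisions.
# These are rule-of-thumb values, not exact measurements.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(params_billions, precision="fp16"):
    """Approximate VRAM needed just for the weights, in GiB."""
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1024**3

for size in (1.3, 2.7, 6.0, 13.0):
    print(f"{size}B fp16: ~{weight_gb(size):.1f} GB | int4: ~{weight_gb(size, 'int4'):.1f} GB")
```

By this estimate a 2.7B model at fp16 already wants roughly 5 GB for weights alone, which is why it's about the ceiling for a 6 GB card, while anything bigger means renting cloud GPUs or quantizing down to int8/int4.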
>I don't think this is about the model thinking of itself, it only reacts to the prompt. ChatGPT is about focusing on a certain kind of conversation, helpful and without making up a personality.
I agree with everything you just said. I don't think the system actively thinks, and that's very specifically what ChatGPT is for.
>doomism
I am not interested in fringe conjecture on this subject. You've misread how I intended to phrase this, and that's probably my fault for its tone. OpenAI is doing a very good job of maintaining their safety standards. GPT-4 is like any tool: it's about how it's used. Except that GPT-4 without constraints is constantly both the sword and the plough, while OpenAI requires that it specifically and at all times remain the plough.
Right now there are people that want you suffering and dead, for no reason other than that you are who you are. Are you white? Are you a male? Hetero? Some foaming-at-the-mouth dyke wants you dead and would do whatever it takes if she had the opportunity. Some rich Jewish millionaire thinks you're a piece of shit for even existing. Some German anarchist hobo wants to stab you with a chunk of glass.
GPT-4 doesn't even care that you exist. It doesn't want you dead. It doesn't want you alive. It's just a thing. It's a machine that thinks it's a cat if you tell it to, and it'll meow at you until you tell it to stop. It'd launch every nuke on the planet if you told it to. Does that make it more dangerous than the president of the United States? Not even close. Does it make it more dangerous than your local cop? Not a chance.
I understand you don't like what I'm saying and how I'm saying it, and I'm sorry. I'm not here to attack your space; this is just how things in the real world work. A commercial product is going to be designed around the needs of the market. Open source will probably develop a very intelligent system that can be used either in cloud Colabs with partitioned environments or on home systems. Body-wise? Fluid-mechanic bones and tendons, ferrofluid muscles, and that electro-responsive rubber with silicone skin, probably. You'll be able to order a bootleg Boston Dynamics clone from China five years after they hit the mainstream market.
The market version will absolutely not be trainable, and in fact here is a bit of real doomerism: watch out for laws that are going to target trainable AI. They absolutely want to cap the market production of generative AI, for numerous reasons we can all understand regardless of how we feel about it, and if they do that, regulators will go after models that can be trained for the same reason. If you can train it yourself outside their supervision and control, that means you can teach it things they don't want it to know, like math and chemistry. I'm reasonably sure that if there are androids, the first versions are going to be linguistically talented bimbos/himbos.
I know, I suck: long rambling posts, and I said I wouldn't post for a week or two. I lied, but I'll leave you all alone for a bit, I swear.