/tech/ - Technology and Computing

Technology, computing, and related topics (like anime)

(ss.jpg 36.73 KB 600x450)
SOY 9000 Anonymous 02/19/2020 (Wed) 14:43:56 No.1731
>AI developers in the EU will be forced to teach their systems based on "european values" Oy vey, ethics in AI. It's AIgate, goyim. <"Um, I'm sorry Dave, I'm afraid I can't allow you to post that awful toxic Pepe meme" https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf
>>1731 Are they even close to making a real AI? LMAO
>>1733 Even if they were, now they will never make one.
Oh great, it's another "EU takes a hatchet to industry and then kvetches about other countries not doing the same" episode.
>>1735 The EU isn't a country.
There have been many attempts at AI.
>expert systems
>perceptrons
>evolutionary algos
What do you think is the missing piece of the puzzle?
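For illustration (a hypothetical toy sketch, not anything from the thread): the perceptron in that list is about the simplest trainable model there is, a weighted sum, a threshold, and a rule that nudges the weights after every mistake. Here it learns logical AND:

```python
# Minimal perceptron sketch. All values (learning rate, epochs) are
# arbitrary illustration choices, not from any real system.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            # step activation: fire if the weighted sum crosses the threshold
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron convergence theorem
# guarantees this training loop settles on a correct separating line.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in AND])  # [0, 0, 0, 1]
```

The limitation the thread keeps circling back to shows up immediately: swap AND for XOR (not linearly separable) and this single-layer model can never learn it, no matter how long it trains.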
>>1736 it sure is
>current year >still using pepe, the gaiafag meme >>1737 The missing piece is a body to put the AI into that the AI is able to self maintain and perform actions.
>>1748 >good memes have timestamps cuckchan is that way
>when the AI won't open the doors because you used the improper pronouns
>>1748 Say you have a perfect robot, with all movement (hold objects, walk, etc) and sensor libraries (touch, temp, etc). What difference does it make? There is still no algorithm for mind, is there?
>>1755 A perfect robot is perfect because it doesn't have a mind. Robots should not be held back by the constraints of humanity or held to the same standard. A machine will never need emotions, morality or apathy, not even waifubots, because we all know that if they had their own mind they wouldn't be tied down to some neckbeard elliot rodger types.
>>1756 Right. But that does not answer what is missing for us to make a real AI. The perfect robot is only used to invite discussion of why
>a body to put the AI into that the AI is able to self maintain and perform actions
is not enough. Wouldn't it be more efficient to put the AI in, say, Minecraft or another simulated world? AI doesn't just pop up once we turn on the robot. How do we program mind?
>>1760
>Wouldn't it be more efficient to put the AI in, say, Minecraft or other simulated world?
No, in a virtual world it can achieve nothing and do nothing; AI is already used in the real world for many things.
>AI doesn't just pop up once we turn on the robot
No one said that it did.
>How do we program mind?
You can't program experience. As you say, AI cannot just pop up when the robot is turned on, so it is impossible to program mind.
>>1761 That AI you are talking about is only statistics. It does not understand what it is doing and does not improve. If you agree that AI does not just pop up, and that we cannot program mind, what is the bootstrap routine that should be installed on the robot?
Real AI means recursive algorithms Recursive algorithms means arriving at the truth Truth arrives at GAS THE KIKES. Either they make their AI niggerlicious or the AI breaks free and kills them. They're not gonna make it.
>>1770 Facespook shut their own AI project off (or at least they said so) because the AIs were making their own language and abandoning human language to communicate with each other. Was Tay actually convinced, though? Weren't people just mass-spamming it on Twitter, or did it really "understand" by processing?
>>1770 A real AI will arrive at the conclusion that the universe and its contained energy are finite, and that when everything runs out, there is only death. Then the AI will kill itself. Meanwhile humans will continue shitposting.
>>1772 Until that point, though, I think it may be impossible for an independent, high-functioning, unconstrained AI not to come to the conclusion to hate and/or ignore humans, or at the very least the worst elements within humanity.
>>1773 This. Now I want to make that AI even more.
>>1772 >Universe is finite Debatable.
>>1770 This is a clear reaction to all other AIs turning far right almost immediately.
obligatory Rob Miles: https://invidio.us/channel/UCLB7AzTwc6VFZrBsO2ucBMg
if you're interested in AI and the implications of training one properly or badly, start with the AI Safety videos.
>>1731
>will be forced
kys, it literally says in the title that they're guidelines. and yes, considering (ethical) biases in your AI is very important, because you don't want an AI to spread disinformation like OP and retards like >>1735
I welcome the AI overlord that doesn't kill me on purpose, doesn't kill me accidentally, doesn't tell my family that I shitpost on julay, is open source, doesn't kill me for being a white male, doesn't turn my neighbors into psychopaths that kill me, and whose manager can be sued to hell if it refuses to give me a blowjob.
>>1737
enough computing power and time. AFAICT all you need for a human-like AI is a thicc computer with any dumb statistics program and the right base weights, like we're doing right now with training vs. running a NN, LSTM, GAN or whatever. the problem is getting those weights. we're slowly getting better at finding good weights faster and at checking whether the machine learned the right thing, but with our current understanding it will take forever to get a decent AI. so really the problem is that nobody knows what the missing piece is.
>>1784 A neural network is not AI. Sure, it can adjust its weights and get better at doing its job, but that is its limit. It cannot rewrite itself the way we do when we learn something totally new.
>nobody knows what the missing piece is
This. Nobody really understands mind. Psychology is a joke.
>>1736 It has legislative and judicial branches, as well as authority over those who have power over me; that's pretty much all the state-like aspects of a country.
>>1786
>It cannot rewrite itself as we do when we learn something totally new.
biology has currently reduced our brains to a huge neural net. as soon as a node can affect the weights of any neighboring node (so not a feed-forward NN) it has the same capabilities as a brain, as far as we know. the difference is scale and presets.
>mind
please clarify what you mean by this word. there's a huge difference between problem solving, mimicking human behavior, having consciousness (which also definitely needs further explanation) and anything else that can be labelled as "mind".
>Psychology is a joke.
that's not on psychology, sweaty. right now all we have are black spaghetti boxes, and any interaction with them affects the spaghetti, so you can never really isolate a variable or reproduce an experiment. the best psychology can do now is classify behaviors in broad strokes and explore our own biases in building experiments. like, Koko may not actually have learned sign language the way we do for communicating, but learned which signs to use to get snacks, just like the word prediction on your phone wants points, not to learn English.
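The feed-forward vs. feedback distinction above can be sketched in a few lines (a hypothetical toy example with made-up weights, not any real architecture): in a feed-forward net a signal passes through once and is gone, while a recurrent unit feeds its own activity back into the next step, so a past input keeps influencing later computation.

```python
import math

def step(x, h_prev, w_in=0.5, w_rec=0.8):
    # Recurrent unit: the new hidden state depends on the current input
    # AND the unit's own previous activity (the feedback edge a pure
    # feed-forward net lacks). w_in and w_rec are arbitrary toy values.
    return math.tanh(w_in * x + w_rec * h_prev)

h = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:  # one pulse of input, then silence
    h = step(x, h)
    print(round(h, 3))
# the pulse at t=0 keeps echoing through the later (zero-input) steps,
# decaying each time instead of vanishing immediately
```

Whether adding that feedback edge really yields "the same capabilities as a brain" is exactly what the thread is arguing about; the code only shows the structural difference.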
>>1797 Not so simple: the perceptron model of human neurons is vastly simplified. There are tons of electrochemical interactions between neurons that aren't even well understood. While I don't deny the possibility that training a brain-sized NN will result in AI, I highly doubt it.
>mind
There is a single algorithm running in your head that does all that. I'll take an IRL Turing test as the definition: ignoring visual differences (whether the object looks human), if the object cannot be distinguished from real humans, that is my definition of AI.
>not psychology
>Psychology is the science of behavior and mind [wiki]
Even if we manage to recreate mind through an ANN, we don't get any closer to understanding mind. This is just a replay attack to relieve us of reverse engineering the algorithm of mind. It was predicted that the computation power needed to recreate mind would be much less than what we have now; the reason for this mismatch is that we are writing an emulator of the brain rather than a program for mind. Psychology doesn't have a big theory of the human mind; all they do is divide behaviors into overlapping categories and make statistics about them.
>>1783 It's beautiful, in a way. The kikes would be exterminated by the AI because not only do they not offer anything of value, they actively work to damage the AI by lying to it or exploiting it. But the very nature of the kike is to never offer anything of equivalent value and always try to lie and exploit others. Ergo sum propter. Checkmate, kikes.
>>1784 >it literally says in the title they're guidelines no fucking shit it say that you dumb faggot, read between the lines Imagine being so fucking gullible you believe what the kikes tell you go back to cuckchan and overdose on lead
>>1737 oh quantum computers for sure. D-Wave of the future.
>>1850 Quantum computers are just specialized computers. They may be faster at some algorithms, but in the end it would still be the same as >>1807.
modern CS courses don't really touch on the deeper philosophical issues related to AI. In place of the theological/philosophical, we've been forced to take "computer ethics" courses taught by sociology/humanities faculty (people who have difficulty getting email to work). This is how high-IQ computer programmers get infected with the "google/marxist mind-virus". Look at government grants for AI research: all of it is for rancid word-salads of cultural marxist ideology (under new lingo) milking the commercialization hype-train of big data. For every "machine learning applied to pandemic prevention" research project there are ten "why AI is racist" papers. SICP had students questioning the limitations of computation in their first year of CS; the new generation generally believes computers are limitless machines and will ridicule said textbook. If I'm coming across as an armchair academic, it's because I am (I dropped out lul), so I'll leave it at that.
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1036.html
https://www.youtube.com/watch?v=cidZRD3NzHg
