/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Open file (329.39 KB 850x1148 Karasuba.jpg)
Open file (423.49 KB 796x706 YT_AI_news_01.png)
Open file (376.75 KB 804x702 YT_AI_news_02.png)
General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 NoidoDev ##eCt7e4 07/19/2023 (Wed) 23:21:28 No.24081
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding it (especially of robowaifus).
-previous threads:
> #1 (>>404)
> #2 (>>16732)
> #3 (>>21140)
>>33738 Wow, that's pretty neat Anon. Thanks! Cheers. :^)
>>33752
>"... In order to build decentralized trust in the system we will utilize multiple protocols over the next few months per the roadmap outlined."
That seems a bit hand-wavey to me, Anon. It's a good idea, but it's pretty unclear to me rn in a technical sense how this proof is going to work (periodically solve a matmul? lol, how does that even work?), and also what their business model is. How does 'proving' I'm a good contributor add to their bottom line, for instance?
We need some kind of solution for this arena, but one much more in line with how Folding@home was run at first, and much less designed around getting yet another GH-junior-partner-wannabe startup on its legs.
Also, we have a thread on this exact domain already; please respond there if you care to: ( >>8958 ). TIA.
>>33754
>How does 'proving' I'm a good contributor add to their bottom line, for instance?
I have no idea. I don't really know enough about AI to understand this one way or another. I see an interesting new AI thingie and I share it. Especially if it's p2p or locally-hosted.
>>33770 OK, fair enough then. Thanks for looking after us to keep us all informed. Cheers, Anon. :^)
Tonight. 10/10. 7PM
https://x.com/Tesla/status/1843922599765590148
What did he mean by this? :^)
>>33981
>>I hope to see thousands of smol entrepreneurs spring up around the robowaifu industries, all over the world.
>I don't want to be the negative nancy, but I think NIGGERS might make people eschew that kind of smol business operation inside metropolitan areas. At least inside the US. No one wants to use a vehicle that smells like drugs, piss, shit, or worse after the previous occupant decided to use your autonomous car as a mobile toilet & drug haven.
< Roody-poos lol, can you even say that here!? :D
Indeed, no one does. While my hopes were related very specifically to robowaifu smol-businesses (not taxi services), I'm sure that all these future neurosurgeons, mathematicians, and astronauts you speak of will indeed wreak havoc just as you predict; I predict that eventually the Burger govt and its puppets (ie, the Globohomo) will implement rules that all Rowbowwvenns :D must be equipped with (((safety features))) -- essentially turning them all into Johnny Cabs from Total Recall. After all, Anon: "We can't have those politically-incorrect misfits using our infrastructure we need for the invaders, now can we!??111?ONE" So basically rolling imprisonment systems to drop the Christians, Anons, White Men, and other 'social rebels' off at the nearest Ministry of Love reprogramming centers, ie, execution bays.
>They'll tie it to a smart device so people will be held accountable!
>Even if they do, that won't stop nigs and other deplorable people from nogging.
Pardon my reservation and rhetoric, but I'm not optimistic for the Model Y or the Robovan.
Ehh, that's Elon's problem, and I'm happy to let him & his billions deal with it. In the meantime... the introduction of humanoid robots off their factory lines is a BFD that will lay important groundwork for the soon appearance of robowaifus on the markets. This is a good thing, IMO. :^)
>and free humans from drudgery which alongside AI will usher in a new age.
Yep. Around these parts, it's known as the Robowaifu Age. :^)
>I can't promise that age will be good for everyone, but it'll be different.
It'll be glorious for all normal men. OTOH, 'foid REEE'g will be off the charts, reaching levels that shouldn't even be possible.
>tl;dr
I'd double your position in Popcorn Futures, Bro!! :D
Open file (806.32 KB 1024x768 Robutt.png)
Open file (76.66 KB 1112x480 diff1.png)
Open file (41.84 KB 462x451 diff2.png)
Open file (103.68 KB 1150x537 diff3.png)
Open file (152.25 KB 856x798 diff4.png)
Open file (145.72 KB 1043x643 diff5.png)
There's a new transformer fren on the block that claims reductions in hallucinations, improved robustness to quantization, reduced training time, and improved in-context learning (both in accuracy and variation) by calculating positive and negative attention scores. A downside though is that it reduces token throughput by 10%. They haven't released their models (which were only trained up to 0.5T tokens), but they have released the code:
https://arxiv.org/abs/2410.05258
https://github.com/microsoft/unilm/tree/master/Diff-Transformer
Their code as-is doesn't seem like it can be directly applied to existing models, since it splits the attention heads in half, one side for positive and the other negative. It might be possible though to add additional QKV parameters to calculate the negative attention scores, and find a more appropriate init for the lambda parameters so the negative attention can be trained with the pretrained model weights frozen (and maybe supplemented with a LoRA if necessary). If my calculation is right, it will only increase the parameters by 25M for Qwen2.5-0.5B (+5%).
From what I've read so far of the paper, it doesn't seem they explored how it impacts creative writing. However, I can see how reducing outliers in the attention scores might enable the use of higher temperature settings at inference without getting thrown into word salad mode. Maybe a few months from now we'll see a fully-trained model making use of it.
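To make the subtraction idea concrete, here's a toy numerical sketch of differential attention (my own illustration, not the paper's code; the logits and the fixed lambda are made-up numbers -- in the actual paper lambda is learnable and reparameterized per layer):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Numerically-stable softmax over a vector of logits
std::vector<double> softmax(const std::vector<double>& s) {
    double mx = *std::max_element(s.begin(), s.end());
    std::vector<double> out(s.size());
    double sum = 0.0;
    for (size_t i = 0; i < s.size(); ++i) { out[i] = std::exp(s[i] - mx); sum += out[i]; }
    for (double& v : out) v /= sum;
    return out;
}

int main() {
    // One query scored against four keys by the two halves of a split head.
    // Key 0 carries the real signal; key 3 is a distractor both halves latch onto.
    std::vector<double> pos = {2.0, 0.2, 0.2, 1.5}; // "positive" attention logits
    std::vector<double> neg = {0.3, 0.2, 0.2, 1.5}; // "negative" attention logits
    double lambda = 0.5; // fixed here purely for illustration

    std::vector<double> a = softmax(pos), b = softmax(neg);
    printf("key  plain   diff\n");
    for (size_t i = 0; i < a.size(); ++i)
        printf("%zu    %.3f   %+.3f\n", i, a[i], a[i] - lambda * b[i]);
    // Attention mass that appears in both maps (the distractor) mostly cancels,
    // while the signal key keeps its score -- like common-mode noise rejection
    // in a differential amplifier.
}

Running it, the distractor key drops from ~0.31 to ~0.04 while the signal key only goes from ~0.52 to ~0.43 -- that's the outlier-suppression effect the paper is claiming, in miniature.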
>>34045 Great info, thanks Anon! If anyone dabbles with this, please let us all know. Cheers. :^)
Ran across some fantastic news, if true: "New AI Algorithm Can Reduce LLM Energy Usage by 80-95%". Wow. If this works it's great news.
I have always thought that the energy usage was absurd in AIs. Seeing as how the brain uses so much less, it seems obvious that there is some sort of, somewhere, somehow, path to less energy usage. I'm not an AI guru, far from it, but I try to look every so often at sites that cover new stuff, and it does seem that over time they are slowly whittling away at this energy usage.
I've commented several times that there are several math techniques that use addition instead of matrix multiplies in other fields, like filters and stuff like that. Not that I know how to set this up but, math is math, and if you can do this with filters on somewhat similar calculations, it seems likely, though maybe wrong, that this could also be done with AI.
I wonder... if the slow slog on this is because the serious super heavy math guys are in the math departments and physics departments. Not that coders don't know math, but I would be surprised if they were anywhere near the serious math skills of the other two fields.
The paper is reviewed here:
https://www.nextbigfuture.com/2024/10/new-ai-algorithm-can-reduce-llm-energy-usage-by-80-95.html
>>34112
>Seeing as how the brain uses so much less it seems obvious that there is some sort of, somewhere, somehow, path to less energy usage.
This. In fact I believe that there is indeed a multitude of paths to less energy usage for computational cognition/inferencing/etc. We already have stellar examples of variations on a bio-theme that literally number in the millions of species. I'm sure we'll arrive at a manifold of different solutions to this problemspace in the end. Better crack those books though, Anon -- the whole world is waiting on you! :^)
<--->
Carver Mead effectively invented the field of Neuromorphics (biomimicry in computation). One of the keen insights he shares on this topic is that """intelligence""" gets pushed out to the edges -- to the periphery -- in all biological systems that have to respond effectively & in a timely way (so-called 'maximally-quickly') to surprises. His canonical example is playing tennis; if you had to stop to analyze every little thing about the game (as in: bidirectional communications with a 'central core' of computing) before you do/react-to anything, then you couldn't even play the game, much less get good at it.
>tl;dr
The 'models' & the 'sensors' & the 'actuators' (to use our terms here) within a tennis player are one and the same thing (and distributed across his entire neuro-musculo-skeletal system)... this is how we're going to solve our robo issues here. :^)
<--->
Good food for thought, thanks, Grommet! Cheers. :^)
>>34112
https://arxiv.org/pdf/2410.00907
Interesting find. It seems it has to be implemented in the hardware for energy savings though. I imagine GB200s have already made use of similar optimizations to achieve their 25x in energy efficiency. I made a C++ implementation to try it out:

#include <cstdint>

// Type-punning helper to get at a float's bit pattern
union FloatInt {
    float f;
    int32_t i;
    FloatInt(float val) : f(val) {}
    FloatInt(int32_t val) : i(val) {}
};

// Approximate a*b using only integer add/sub on the bit patterns:
// adding the biased exponent fields multiplies the magnitudes, and the
// constant offset re-biases the exponent and approximates the mantissa product.
float bitwiseMul(FloatInt a, FloatInt b) {
    int A = a.i & 0x7fffffff;    // magnitude bits of a
    int B = b.i & 0x7fffffff;    // magnitude bits of b
    int S_A = a.i & 0x80000000;  // sign bit of a
    int S_B = b.i & 0x80000000;  // sign bit of b
    int S_AB = S_A ^ S_B;        // sign of the product
    int AB = (A + B - 0x3f780000) & 0x7fffffff;
    return FloatInt(S_AB + AB).f;
}

It has a relative error of 2.7% with a standard deviation of 1.6% on normal and uniform random numbers. 99% of parameters are not actually needed during inference, which we can implement in software; see UltraFastBERT: https://arxiv.org/abs/2311.10770
I've been meaning to implement this for my finetuned Qwen2 model to run on CPU, but it turned out to be a lot more work than I imagined to make it forward compatible and maintainable, and I haven't had the time.
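If anyone wants to reproduce that 2.7% figure, here's a minimal test harness for the function above (my own quick loop, not from the paper; the value ranges are arbitrary and deliberately avoid zero and denormals):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
    srand(42);
    const int N = 100000;
    double total = 0.0, worst = 0.0;
    for (int n = 0; n < N; ++n) {
        // uniform positive floats in roughly [0.1, 10.1)
        float a = (float)rand() / RAND_MAX * 10.0f + 0.1f;
        float b = (float)rand() / RAND_MAX * 10.0f + 0.1f;
        double exact  = (double)a * (double)b;
        double approx = bitwiseMul(a, b);
        double rel = std::fabs(approx - exact) / exact;
        total += rel;
        worst = std::max(worst, rel);
    }
    printf("mean relative error: %.3f%%  worst: %.3f%%\n",
           100.0 * total / N, 100.0 * worst);
}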
>>34120
Very interesting, Anon. Thanks (I wish I knew what this does). :P
>to make it forward compatible and maintainable
'Forward compatible'? Not sure exactly what you mean here, can you expand on this?
As for maintainable: abstraction and encapsulation are the primary keys to this need within software engineering. To wit: wrap it in a well-designed class, and have done with it.
Cheers, Anon. :^)
>>34120
no way its an improvement, since even though those are 1-tick instructions you have to do 9 of them, and 5 of them have a data dependency so they cant be done in parallel. your best case is 5 cycles, which is higher than a regular mul and the same as a vector mul, and manufacturers always advertise flop performance, its guaranteed to have been perfected beyond belief on the hardware level
>>34126
In the words of the sage (not pronounced 'sah-ghey' in this case, lol), learned doctor professor, the lanky Dane, Bjarne Stroustrup -- and many, many others:
> TEST, TEST, TEST; don't guess. :^)
Good analysis. Maybe someone can explain to us maths laymen what's happening here? Cheers, Anon. :^)
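In that spirit, here's one way to actually test the latency-vs-throughput question being argued here -- a crude sketch, not a rigorous benchmark (the empty asm statement is a GCC/Clang trick to stop the compiler from folding the loops away; absolute timings will vary by CPU and compiler):

#include <chrono>
#include <cstdio>

int main() {
    const long N = 200000000;

    // One chain of dependent adds: each add must wait for the previous
    // result, so this loop is paced by the instruction's *latency*.
    auto t0 = std::chrono::steady_clock::now();
    long x = 0;
    for (long i = 0; i < N; ++i) {
        x += 1;
        asm volatile("" : "+r"(x)); // opaque to the optimizer; keeps the chain real
    }
    auto t1 = std::chrono::steady_clock::now();

    // Four independent chains doing the same total number of adds: the CPU
    // can overlap them, so this is paced by add *throughput* instead.
    long a = 0, b = 0, c = 0, d = 0;
    for (long i = 0; i < N; i += 4) {
        a += 1; b += 1; c += 1; d += 1;
        asm volatile("" : "+r"(a), "+r"(b), "+r"(c), "+r"(d));
    }
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double> dep = t1 - t0, ind = t2 - t1;
    printf("dependent:   %.2fs (x=%ld)\n", dep.count(), x);
    printf("independent: %.2fs (sum=%ld)\n", ind.count(), a + b + c + d);
}

If the dependent version comes out meaningfully slower per add, that's the dependency-chain cost >>34126 is describing.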
>>34128
not really math related, its about the latency of doing multiplication. im using x86 as reference since its so well documented, but its going to be the same for any processor. computers cant really do math, only logic, so these things always boil down to a circuit of boolean gates physically on the chip. these algorithms are just how the processor does math anyway, but the actual algorithm is ofc top secret lol, and literally every chip manufacturer in history has had some (many) kind of lawsuit for reverse engineering them from the die, most known was cyrix who i think openly advertised it. flop performance has always been the hardest and most cutthroat part of making a processor, its weird that people still think they can outdo whats been physically etched into the chip and backed by god knows how much r&d and illegal espionage lol
maybe its true about requiring less power, im just assuming less cycles = less power, in which case even in the best case (out of order and superscalar) its the same: avx, sse and x87 mul all have 5 latency, so it wouldnt make a difference really unless its something from the before-times
>>34126 They go into the circuit design in the paper. The code is only for numerical simulation. Their intention is to reduce the gates needed by using an approximation to lower the energy usage. It doesn't speed anything up. The biggest weakness of the paper for me is they didn't do any simulations and tests of actual hardware to see if having less gates actually results in less power consumption. A lot of the energy usage depends on how often gates change state and there are ways to use more gates to reduce switching. Nvidia has been using AI and supercomputers to design circuits their teams of top engineers could never dream up, so this is moot to them. We'll probably never see that tech in consumer hardware either so they can continue fleecing businesses.
>>34125
>computers cant really do math only logic
Eeehhhh... I don't agree. Maybe I'm wrong, but I did take a class in boolean algebra and that's math. Not that I remember much of any of it. Computers do add and subtract, and all of math boils down to adding and subtracting in the end. (Though the way computers do it, sometimes you end up with approximations.) But there's some sort of arithmetic, BCD (binary-coded decimal) or something, that they use in business calculations in banks. I probably have that wrong, but it makes sure not to use approximations for financial calculations.
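On the financial point, the exactness problem is easy to demonstrate -- a minimal sketch (plain integer cents stand in here for BCD; both achieve the same goal of exact decimal results):

#include <cstdio>

int main() {
    float f = 0.0f;
    long cents = 0;
    for (int i = 0; i < 10; ++i) {
        f += 0.10f;  // 0.10 has no exact binary representation
        cents += 10; // ten cents, represented exactly
    }
    printf("float: %.9f (== 1.0f? %s)\n", (double)f, (f == 1.0f) ? "yes" : "no");
    printf("cents: %ld.%02ld (exact)\n", cents / 100, cents % 100);
}

On typical hardware the float sum lands a hair above 1.0, which is exactly the kind of drift banks can't tolerate.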
>>34120
>It seems it has to be implemented in the hardware for energy savings though
I don't see that in the paper. I think you read it too fast. They say they are doing these calculations as integer calculations -- hardly a rare commodity in processors. They say, "...Future Work: To unlock the full potential of our proposed method, we will implement the ℒ-Mul and ℒ-Matmul kernel algorithms on hardware level..." They are talking about leveraging the speed by adding more integer calculation registers.

and >>34126
>no way its an improvement since even though those are 1 tick instructions you have to do 9 of them and 5 of them have a data dependency so they cant be done in parallel
This is not necessarily true, as by using registers in the processor they do not have to send data to memory or the graphics card, which VASTLY slows things down. The speed in the core is WAY higher than over any bus. And if I'm not mistaken, floating point operations are just a bunch of algorithmic multiple additions and subtractions anyways. I don't think FPOs are done in one tick. They certainly show they use way more power, so they must be doing more work, or that's a logical assumption.

Though you do have a point with "some" size of data set, which I have no idea the size of. "If" the data size is small, or can be broken into small chunks, then the core would be faster. "If" it gets larger, then having a lot of parallel operations in the GPU would be faster, but due to there not being infinite registers in the GPU, at some point it bogs down, and it may be that the processor catches up as it needs less data (meaning less data movement over the bus) and less compute to do the same operations. I'm not sure if that proves to be a problem or not, as you have to move data from various parts over the bus anyways. They mention that they intend to use hardware to do this. So, make a ton of integer add-subtract registers in the core.
>>34140
>Their intention is to reduce the gates needed by using an approximation to lower the energy usage. It doesn't speed anything up.
I have no idea where you get this from. Look at the computation budget from the link I provided at Next Big Future above. It says, "...Linear O(n) complexity vs O(m^2) for standard floating-point multiplication..." and "...L-Mul achieves higher precision than 8-bit float operations with less computation..."
Now maybe I read this wrong, not likely as this is fairly clear, but far less computation means less time, less power, more speed. I think you are confusing the fact that the GPUs they use today are not set up to do a lot of integer operations like they are floating point, but that doesn't mean that floating point is faster. The speedup of GPUs is due to pipelining a vast amount of registers doing the same thing over and over. CPUs are not set up the same way and have fewer registers. And note that if you can't parallel it in a CPU then you can't do it in a GPU either.
>The biggest weakness of the paper for me is they didn't do any simulations and tests of actual hardware to see if having less gates actually results in less power consumption.
They don't have to. It's addition and subtraction. The time to do this is so many processor cycles. It's not a mystery, and in fact they go over this. They even have pictures showing the registers, the number of operations required, and graphs of the times needed for computational tasks.
I think you are missing the point altogether. It has nothing to do with fewer gates, but that they are more computationally efficient with a new algorithm. Comparing GPUs that are not set up to do the same set of processes is beside the point. For example, if they set up a chip with the same gate density as the GPU, but with a bunch of add-subtract units for this algorithm instead of pipelines for matrix multiplication, it would be far faster and use far less power per task. I wonder if you even read the same paper????
>>34136
>these algorithms are just how the processor does math anyways but the actual algorithm is ofc top secret
And BTW, I studied exactly how gates are used to do math in computers over twenty years ago, so, no, it is not top secret. Some stuff is, like how to divide up tasks or pre-compute paths in computation algorithms, but not math.
>>34140
yeah, the paper is talking about a circuit and not doing this in code, my bad. i just didnt read past the abstract, it pissed me off a bit lol. its possible it could lead to the arm(tm) of gpus or it might just be a piece of shit, no way to really theorize power consumption until they actually do a test. but i wouldnt be surprised if gpus already have an even lazier estimation, they never document their architectures so no one knows, but putting a decent fpu and going for high precision in all of the processors in a gpu (its like >100) would be expensive and pretty pointless, since how many points of precision do you really need on a gpu designed for graphics. you can actually see this with divisions more since its more costly: if you compare a float division on the gpu then on the cpu, its way different, theres obviously already shortcuts implemented. and really, if it wasnt for having to support a bunch of khronos apis when theyre actually marketing their shit as a one-stop supercomputer and not really a gpu, then they would probably not use floating point at all and go for a low precision fixed point implementation that requires way less processing

>>34144
those are integer add/sub, they have 1 latency, but the dependency chain is 5 steps so its literally impossible to execute this in less than 5 cycles. but the paper was about doing this in hardware not software, it was basically just pseudocode for an electrical engineer, it doesnt really mean anything unless they make the hardware and do a proper benchmark

>>34143
no, it can only be called boolean logic cuz thats what it is. its a binary value ie. true/false ieie. on/off ieieie. 1/0. thats what a circuit boils down to, its a truth value not a numerical value. you can only do logic with a binary value, not arithmetic or math, thats with numbers. boolean logic can be done with just a single electrical component, but you need an entire circuit and a way of representing numeric values as a series of truth values (bits) to do arithmetic or math like +-/*, trig, sqrt, or the ungodly enigma of atan2

>>34146
yeah, feel free to tell me how intel does atan2 so fast, or why theres even a difference with single instructions in the exact same architecture when its from different manufacturers, like amds piledriver doing div in 13-26 cycles every time while intels haswell takes 13-71 but only on the first; if you do more than one, then after the first it only takes 9 cycles. theyre completely different implementations of the same thing, and you wouldnt know how its done unless you xray the chip. theyll never tell you, especially not intel, thats what theyre famous for: making shitty processors specifically designed for benchmark code
I like where this thread is going! Thanks for the original post in this current discussion, Anon. :^)
>>34153
>or the ungodly enigma of atan2
Lol.
>teh_enigma_machine.c

#include <math.h>
#include <stdio.h>

int main(void)
{
    // normal usage: the signs of the two arguments determine the quadrant
    // atan2(1, 1) = +pi/4, Quad I
    printf("(+1,+1) cartesian is (%f,%f) polar\n", hypot( 1, 1), atan2( 1, 1));
    // atan2(1, -1) = +3pi/4, Quad II
    printf("(+1,-1) cartesian is (%f,%f) polar\n", hypot( 1,-1), atan2( 1,-1));
    // atan2(-1, -1) = -3pi/4, Quad III
    printf("(-1,-1) cartesian is (%f,%f) polar\n", hypot(-1,-1), atan2(-1,-1));
    // atan2(-1, 1) = -pi/4, Quad IV
    printf("(-1,+1) cartesian is (%f,%f) polar\n", hypot(-1, 1), atan2(-1, 1));

    // special values
    printf("atan2(0, 0) = %f atan2(0, -0) = %f\n", atan2(0, 0), atan2(0, -0.0));
    printf("atan2(7, 0) = %f atan2(7, -0) = %f\n", atan2(7, 0), atan2(7, -0.0));
}

< Problem.jpg ? :D
> https://en.cppreference.com/w/c/numeric/math/atan2
AFAICT, video current convo-related: https://www.youtube.com/watch?v=sX2nF1fW7kI
@NoidoDev Halp!! We need a new bread, please. :^)
>>34153
>integer add/sub they have 1 latency the dependency chain is 5 steps though so its literally impossible to execute this in less than 5 cycles
You are a seriously annoying person. If you're even a person at all. You don't read papers, then you tell people what they say, and you are wrong on most anything you say. Here's a link to the actual amount of time it takes to do an instruction: add/subtract takes 1/2 clock cycle, so it is ready for another operation in one clock cycle. The tables have a whole list of all the operations.
https://www.agner.org/optimize/instruction_tables.pdf
There's nothing more annoying than people spouting off a bunch of nonsense pretending they know what they are talking about. If you notice, if I know, I say so, but if I'm not fairly, very sure, I will say "likely" or "I believe" or "the evidence I have", etc. (Not that I don't make mistakes occasionally, but I make an effort to be factually accurate at a high level.) But you. You just make shit up from nothing while telling everyone "you know". I have some suspicion you're an AI. You talk like one. Or at the least some sort of disruptive troll trying to feed everyone nonsense to discombobulate their minds. This triggers me because western media has done this to me my whole entire life. Fed me lies and bullshit constantly. I'm really, really sick of it.
>the paper was about doing this in hardware not software,
More bullshit. It's about an algorithm that is faster and more accurate, per compute power, that "can" be put in hardware to speed it up. More stupidity from you.
>no, it can only be called boolean logic
Wrong again. Explain this: "...An adder, or summer,[1] is a digital circuit that performs addition of numbers. In many computers and other kinds of processors, adders are used in the arithmetic logic units (ALUs)..."
https://en.wikipedia.org/wiki/Adder_(electronics)
This is incredibly simple stuff and you can't get any of it right. You don't read, you can't reason, and you cannot comprehend much of anything that I can see. I won't cover anything else you say. Everything you say is likely to be wrong, stupid and ignorant.
>>34146 >yeah feel free to tell me how intel does atan2 so fast... HAHHAHHAHAHAA there's no point in that. You can't even understand how binary circuits add. If I explained this your head would explode.
>>34153
>no way to really theorize power consumption
No, you get that wrong too. They do the calculations and get rough estimates, as they know what the power consumption of a gate switch is. The rest is, and I know you have problems with this, this magic thing called... addition. And we don't want to get too far ahead of ourselves, but... they may even use this super magic called... multiplication. I'VE SAID TOO MUCH. The HORROR!
>>34153
>putting a decent fpu and going for high precision in all of the processors in a gpu( its like >100 ) would be expensive and pretty pointless
Here's a picture of an NVIDIA GPU processor. All the purple parts are floating point processors. Now, earlier you said they were all so smart, but I guess they aren't, because you said so; those fools jammed the whole entire processor with FPUs.
https://imgur.com/JplOi
>they never document their architectures
They're so foolish. In the paper, they assumed people would understand that computational circuits could add and subtract, and that people would already know how to build such things. HAH, what do they know. Thankfully we have you cluing us in to the "fact" [sic] that computers can't add at all. What would we do without your wisdom???
>boolean logic can be done with just a single electrical component
WOW! Next thing you know, you will be telling us you do boolean logic with sticks, rocks and random cats picked up off the streets.
>atan2
I'm sure you will be able to do this with your sticks, rocks and random cats. Maybe even throw in an aardvark or two for really deep calculation.
>>34173
>add/subtract takes 1/2 clock
look at this dumbass that doesnt even know the difference between latency and throughput. the instruction cycle is a minimum of 1 (instruction) cycle, how the fuck can you have a fraction; its only the repeat phase of the cycle that can be done in fractions, and it must wait if it ends before the next cycle begins. and how stupid are you that you dont understand simple explanations and just keep dribbling nonsense instead of learning something you clearly have zero understanding of. this is the stupid mul as asm with dependencies marked:

0: AND a, 0x7fffffff
   AND b, 0x80000000
   AND c, 0x7fffffff
   AND d, 0x80000000
1: XOR (0:)b, (0:)d
   ADD (0:)a, (0:)c
2: SUB (1:)a, 0x3f780000
3: AND (2:)a, 0x7fffffff
4: ADD (3:)a, (1:)d

this takes a minimum of 5 cycles because of the data dependencies, how much more obvious can it be. youre just stupid and dont understand, and then get mad calling other people stupid lol. its embarrassing that someone who pretends to know computers doesnt even know or understand the simplest of things. and i already said the paper wasnt about a software implementation, so none of this is even relevant; only a complete moron would think to do this in software, thats why i didnt bother reading it at first, until i realized its not about a software implementation
>wrong
>wikipedia page
thats when you know youre talking to an idiot. like omg, thats literally what i said, dumbass, thats not a gate its a circuit, the actual LOGIC gates are a single component. look at the schematic you cretin, its literally showing you addition done using ^ and &

>>34175
then do it, give me the calculation lmao and show me how its better than whats in current hardware
>i have a bigger house than all of you
<youve never seen my house or theirs
>yeah but its 80% smaller
<how would you know
>cuz i know the size of my house duh!
<but you dont know the others
>yeah i do cuz its 80% smaller than mine and i know mine so yours are [math] boom! Facts & Logic
is there a name for these kinds of geniuses

>>34176
>Here's a picture of a NVIDIA GPU processor
is this a joke, obviously each one of those (16*8) processors has to have an fpu. i said how good are they and whats the actual precision compared to a cpu, as if anyone would complain their pixel coord is 24.84993023 and not 24.885475235. if it was as precise as a cpu, it wouldnt give you different rounding errors when you compare the results from both. youre just stupid, fpu is a generic term like alu mmu gpu cpu etc., it doesnt mean anything outside of what it means
>thinks im talking about the paper
im talking about nvidia, give me a single isa or any documentation on their architectures and then send it to the nouveau project devs
>WOW
yeah pretty much, i have to explain everything to you cuz you clearly dont know even the basic things, thats the crux of every stupid post you make, its literally nothing more than a complete failure in understanding basic concepts
>sticks, rocks and random cats
its even simpler really, atan is implemented in hardware in intels x87 using nothing more than nand gates, its their biggest proprietary secret since nands are so cheap, and its why they sued cyrix when they got undercut on their math chip. youre so nauseating, you just post shotgun drivel making stupid nonsensical remarks on things you clearly dont know anything about
>>34178
You should run for President. You are just like Kamala Harris with your word salad.
>then get mad
I'm not mad. I'm amused. You're the one that said that computers can't add. I find this highly amusing.
>i already said the paper wasnt about a software implementation
From the paper: "...We propose the linear-complexity multiplication L-Mul algorithm that approximates floating point number multiplication with integer addition operations..."
"algorithm"
I expect you will have trouble herding those cats into adding.
>>34179
>algorithm
like do you not fking know what that word means, you imbecile. everything that isnt boolean logic is an algorithm on a hardware level, including your, total failure in comprehending, arithmetic, which doesnt exist on a hardware level, its just an algorithm, which is what this whole paper is about: trying to substitute mul. my god you are the most incompetent yet somehow arrogant person ive ever seen, you obviously dont know anything, why do you even try. like how fucking stupid are you that you think you can bullshit people with your idiotic drivel. not only are you this stupid, but you seriously think everyone else here is even dumber than YOU, like holy shit, just stfu you double spacing moron. tell me how youve studied compsci for 20 years again and dont know basic shit lmao, fking clueless, dont even try idiot, nothing i said was an opinion
>>34178
>this dumbass that doesnt even know the difference between latency and throughput, the instruction cycle is a minimum of 1 (instruction) cycle how the fuck can you have a fraction its only the repeat phase of the cycle
Read the link on how long instruction cycles take. Never mind, you can't comprehend what you read anyways. You can't baffle me with bullshit. You know good and well that you are "attempting" to compare GPUs with CPUs. They are not the same, and they never pretended they were, nor was the point even at all based on that. If the same amount of gates in a GPU were in a chip for their algorithm, it would crush the GPU AI processing with far less power. And either you knew this and are trolling, or you're an idiot. The more you talk, the dumber you look.
And you also know that that is NOT what the paper talks about, or maybe you don't; you have a poor grasp of just about everything. You KNOW that in a CPU, floating point is NOT as fast as integer addition and subtraction. You try to pretend that, somehow, by listing instructions, there's some big ass latency. No. This will all be pipelined just EXACTLY the same as it would be with floating point. So the first instruction and data would be loaded, but the rest would be loaded into registers as the others compute. Your attempt at confusing the situation with nonsense, GPUs to CPUs, apples to oranges, is a big ass zero failure. Anyone who reads the paper and your garbled Kamala-tier explanation of it, combined with your adamant idea that computers will not add, well, break out the rocks, sticks and cats, cause you're going to need them, as all the other imaginary stuff you conjure up will not do the job.
>arithmetic which doesnt exist on a hardware level its just an an algorithm
If you believe arithmetic is not hard wired into modern processors, then you are an imbecile. More likely you are just another troll, or a machine spouting nonsense.
>tell me how youve studied compsci for 20 years
Another mistake. I said I studied boolean algebra 20 years "ago", not for twenty years. You have serious anti-adderall mind warp and have trouble comprehending even the most simple ideas. I say again: the more you talk, the dumber people think you are.
And once again, you keep comparing CPUs to GPUs, and the paper is not about that, nor is the algorithm, nor are they the same. You might as well be comparing your stones, sticks and cats, because they're not the same either, unless we use your cat logic where computers don't do addition. Of course, in the fantasy world you live in where computers don't do addition, then maybe, just maybe, stones, sticks and cats are THE THING! Even throw in an aardvark or two for good measure. Maybe you think Elvis lives in GPUs and all that shaking around makes the electrons go faster??? Maybe you think it's Elvis that does all the arithmetic? I think that's it. That's what you're pushing: Elvis arithmetic.
>>34190 yeah stfu with your retard blogposts larping dumbass youre an idiot that took the throughput number cuz you dont know shit about computers, every goddamn post you make is a TOTAL failure in comprehending ANYTHING! keep projecting the fact that computers are a black box for you that you have zero comprehension of, i know exactly how all this shit works unlike you imbecile, you dont know shit you cant even understand simple explanations you are an absolute buffoon fuck off!
>>34191
Notice he stopped making more stupid, supposedly factual, statements and now is just calling me names, because... every supposedly factual statement he makes makes him look dumber. What's the matter, facts got your tongue? People can see this. It's not hard to figure out that you're a huge loser who has no idea, and likes to spout nonsense while not really having a clue about how things work or what anyone else is doing that is useful. I was going to call you a "pseudo intellectual" but... that's going too far. You barely come up to the level of a "pseudo" alone.
>yeah stfu with your retard blogposts
HAHAAAAHHA You can't even get this right. This is not "a blog", it's a "message board". The simplest, most minuscule, easiest and completely obvious things escape your sputtering Kamala cat mind grasp.
>>34196 INCOMPETENT LARPING RETARD you dont merit a response youre a fucking joke
>>34197 >you dont merit a response "pseudo" Stares at blank wall...gets confused. Thinks about Elvis addition, more confusion.
>>34198 >cant even get the word pseud right no wonder you cant understand simple shit after 4 fucking times total fucking incompetence go larp somewhere else you 60iq double spacing imbecile
>>34199
>pseudo
No, I got it right:
pseudo /soo͞′dō/ adjective
1. False or counterfeit; fake.
2. Being other than what is apparent, a sham.
3. Insincere.
So in fact, you still can't get anything right. You can't even search. I hate to tell you this, but listening to the Elvis "voice" in your head talking to you, it's not always telling you the truth. You should check up on its validity before you regurgitate what it says.
Open file (14.27 KB 2016x1020 2743208076.png)
>>34200
cuz thats totally how that word is used, fucking imbecile. nothing i said was false, you think calling
<this a logic algorithm
"muh elvis muh rocks muh magic box" you dont know shit, you ARE a larping projecting pseud, YOU ARE INCOMPETENT!
Open file (98.71 KB 410x293 1455591059665.png)
>>34200 Grommet, you won't gain anything by replying more, this is the same jew from the meta thread. You can tell that he(?) is trying to alter his formatting to look different, but otherwise types the same way. It's not an honest discussion, and it was never meant to be one.
>>34204 >discussion as if idiot you clearly havent got a clue what any of this is about cuz a 60iq moron shat up the thread with his doublespaced blogposts filled with idiotic gibberish written by an incompetent clueless larping imbecile that knows absolutely fking nothing about the subject matter, this is what you get 90% retard spam from an imbecile too exhausting and fking stupid that not even an autist will bother reading anything in the thread fuck you too idiot
>>34204
I know. I only wanted to draw him out more, to add to his own stupidity and make it even more clear. The Hasbara are a mile wide and an inch deep. They have no real intelligence. They just pretend to. In four or five posts you can usually nail them down, and then all they have left is insults. It makes them look "stupid". I enjoy that. The global homo is "stupid". They can only get the dregs of society to join with them, because who else wants to hang with such perverted, vile creatures? No one with any intelligence. You can see on the world stage what idiots these people are. They are super aggressive but rarely able to back it up. They fail constantly. Maybe they are not done yet but... their time is coming. No one can live with these vile people forever.
You hear me, "computers can't add" guy? What are you going to do when the peasants come with torches and pitchforks, drag you out of your house and make a bonfire of you? Maybe you think your cats, sticks and stones will save you. Nope. There's no hope for you. You ALWAYS lose in the end.
If you ever have Hasbara spewing their nonsense, post this link. They hate it more than most anything in the world. It drives them completely mad.
https://web.archive.org/web/20170506210203if_/https://i0.wp.com/www.returnofkings.com/wp-content/uploads/2013/12/hamster.gif
"Any people who have been persecuted for two thousand years must be doing something wrong" -Henry Kissinger
HHAHHAHAHAA I love it.
> this thread <insert: TOP KEK> >"There is a tide in the affairs of men, Which taken at the flood, leads on to fortune. Omitted, all the voyage of their life is bound in shallows and in miseries. On such a full sea are we now afloat. And we must take the current when it serves, or lose our ventures." >t. A White man, and no jew...
>>34164
DAILY REMINDER
We still need a throd #5 here. Would some kindly soul (maybe NoidoDev, Greentext anon, or Kiwi) please step up and make one for us all? TIA, Cheers. :^)
Open file (2.43 MB 2285x2962 2541723.png)
>>34230
Guess it's up to me again. This was much easier than the meta thread. Took me like fifteen minutes, and ten of those were spent browsing in my image folders for the first two pics. Changes are as follows:
+ New cover pic
+ Added poner pic
+ New articles
~ Minor alteration to formatting
>>34233
>>34234 >Guess it's up to me again. Thanks, Greentext anon! Cheers. :^)
>>34234 NEW THREAD NEW THREAD NEW THREAD >>34233 >>34233 >>34233 >>34233 >>34233 NEW THREAD NEW THREAD NEW THREAD
