/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!



“Look at a stone cutter hammering away at his rock, perhaps a hundred times without as much as a crack showing in it. Yet at the hundred-and-first blow it will split in two, and I know it was not the last blow that did it, but all that had gone before.” -t. Jacob A. Riis


F = ma Robowaifu Technician 12/13/2020 (Sun) 04:24:19 No.7777
Alright mathematicians/physicists report in. Us plebeians need your honest help to create robowaifus in beginner's terms. How do we make our robowaifus properly dance with us at the Royal Ball? >tl;dr Surely it will be the laws of physics, and not mere hyperbole, that bring us all real robowaifus in the end. Moar maths kthx.
>>20575 It's a great business idea Anon. I hope you can successfully pull this off as a plan. Maybe we should consider starting a non-robowaifu projects prototyping thread here? Certainly this tree surgeon robot would be much simpler than a robowaifu will be, yet touch on many of the similarly-needed design & engineering concepts. >The job is dangerous as can be... Yeah, please do not do this yourself Grommet. You're too valuable to our group here! >>20582 >Ideal would be to have code that can be used on a more common system and then transferred to such a system as soon as necessary. Agreed. >=== -minor edit
Edited last time by Chobitsu on 02/22/2023 (Wed) 11:14:42.
>>20582 >I think bit rot can be mitigated by something like btrfs or zfs Maybe I'm using the wrong terminology. You misunderstand. I'm not trying to be rude, just more precise. It's going to take a while to get this stuff to work. Each micro-controller will have its own glitches and exceptions. If you code for just one, then very soon the latest and greatest will come out and you will be covered up in weird bugs. "If" we can use these robot operating systems, the companies or the software providers will show you which micro-controllers work with the software, and you can port to them easily. You're calling functions instead of writing raw registers and assembly code. I have very little computer programming experience. Only FORTRAN and hexadecimal assembly. Hex programming is hard and takes forever. I could do it but the time is immense. The assumption I'm making is that these libraries can do a lot of the little stuff while we concentrate on the larger movements of the robot. All this stuff related to timing and a whole bunch of OS-type housekeeping, I don't think I can do that, or at least not in this lifetime. I've been looking a little at the Robot OS and it's hard enough. The link I gave above is micro-ROS; there's a larger Robot Operating System with way more features. It runs on Linux. So we may need a single-board computer with Linux, then micro-controllers to operate the limbs, sensors etc. Networked together like nerves in a human. Let's hope that micro-ROS, for micro-controllers, and the larger ROS are similar. I get the impression they are. I can already see this will be harder than originally thought. I think you will almost have to learn C to do this. I've been downloading books to do this. I have made a couple of C programs and I really hate this stuff. Programming is something that, to me, is satisfying "after" you finish but a real pain in the ass while you are doing it. As for climbing trees: when I was young I climbed plenty, but never with a chain saw running, nor am I young any more.
>>20629 Your writing was a bit verbose before and I got confused trying to follow the discussion here. I can't always parse the whole comment, or I lose the train of thought in the conversation. I think you actually made the case for using an OS. Which I support, without claiming to know much about it, if there's no strong reason against it. Otherwise, someone has to make the case for the alternative: writing an OS or drivers from scratch, down to addressing the SBC? Just to make it a bit more efficient? Maybe I'm wrong, but this looks like overkill, or at least premature optimization. There should be some abstraction layer running on many SBCs; then we'll only need to write the code on top of it. What again is wrong with some L4 OS? Who's arguing against it? Do we even need it? Is it even necessary to have this amount of fail-safety? The real world is messy; we should make our robots able to sense and adapt to it, not move super fast and ultra precisely. >I think you will almost have to learn C to do this. Maybe something like Nim that compiles to C is good enough? I looked briefly into L4 and it seems to have its own programming language, or maybe it's just a coding style?
>>20877 >nesper >I generally dislike programming C/C++ (despite C's elegance in the small). When you just want a hash table in C it's tedious and error prone. C++ is about 5 different languages and I have no idea how to use half of them anymore. Rust doesn't work on half of the boards I want to program. MicroPython? ... Nope - I need speed and efficiency. Thanks, this might come in handy. But why are we in the math thread?
>>20882 >But why are we in the math thread? My apologies to everyone here for contributing to this abuse big-time. NoidoDev, would you be willing to consider becoming involved directly in taking on the tedious tasks (>>20532) involved in cleaning our threads up with posts in their proper places? (Again, apologies for further compounding the problem with this post.)
>>20884 >would you be willing to consider becoming involved ... in cleaning our threads up with posts in their proper places? How? Trying to register as vol?
>>20901 Yes. You just have to get Robi to turn on account creation, then once you have one made, I'll assign that account as a volunteer here on /robowaifu/. Trust me, this isn't fun work, so you should give some thought to whether you really want to take it on first.
Just so I can say I was not "totally" off topic: a lot of control theory, I believe, is loosely, very loosely, based on the same sort of ideas as Fourier transforms. Sine waves. I'm not a mathematician, but it seems that most are based on waveforms, and the math is very slow because it uses sine waves, generally. There's a set of math functions based on stretching and raising the peaks of waves, "wavelets", that is far, far faster. DjVu uses wavelets, a lot of oil-prospecting seismic-processing software uses wavelets to tease out very fine-grained underground structures from the data, and movies use these functions to compress data. I've read the processing time for signal processing can be 10 times less using wavelets to analyze data, statistics, etc. It seems that sine-wave-based signal processing uses far more processing steps. More computing time. Wavelets use much more of a simple add and subtract, without a lot of matrix algebra. I can't help but think it may be analogous to the mass of matrix operations that AI uses now compared to the way Xnor.Ai processes AI far faster. I'm trying to grasp the big-idea pattern here. It seems that present AI (which I'm going to equate to a waveform) uses a lot of matrix multiplications to go over every single speck of the data. Analyzing each and every little data point. Xnor.Ai uses a far coarser examination, by asking: is this one clump of data I'm analyzing larger than that clump of data, and then passing the result on as yes or no. They only care about the larger coefficients when they are analyzing it. I see this as comparable to wavelet processing in a general sort of big-picture way. I'm probably screwing this up, but I hope that I've pointed things in a generally correct direction. https://en.wikipedia.org/wiki/Wavelet Another offshoot of this idea is a "chirplet".
There's a GREAT picture of the different waves that gives you a BIG-picture idea of what I'm trying, probably unsuccessfully, to convey at this link. I'll link the picture too. https://en.wikipedia.org/wiki/Chirplet_transform https://upload.wikimedia.org/wikipedia/commons/1/10/Wave-chirp-wavelet-chirplet.png Look at how the different waves could be used to represent or analyze information. Here's my understanding of why this is a good thing. Look first at the "wave". Think: if you had to add a lot of these up, like a Fourier transform does, it would take a lot of them to fit the signal we are approximating. I think the general idea is the same as successive approximation in calculus. So we add up all these waves to make them fit our actual data. Now look at the wavelet. It can stretch and raise its peaks to fit. I think this function uses fewer coefficients to fit the signal. Now look at the chirplet. Since it seems to already have a bit of stretch built into the function, it might take even less stretching and raising of the amplitude to approximate the information waveform. I think the basic idea is that you transform the signal of whatever we are trying to analyze into a convenient waveform, wavelet, chirplet, etc., and then we can use simple addition and subtraction to quickly analyze the data and tease out what is going on in this formerly complex wave of data. This vastly simplifies the processing power needed. Now, me saying this like it's some simple thing, well, it's not. Figuring out "what" transform to use and how to set it up is difficult. Maybe what needs to be done is to figure out what method, transform, or operation would be most advantageous for us to use. What I'm trying to do is state what the problem "is" and how to go about solving it, not that I necessarily know the answer. And there's always the case that I have just fomented a huge case of the Dunning-Kruger effect and have no idea what I'm talking about. If so, please inform me this is the case and try to explain in a way such that my little pea brain can understand what might be a better solution.
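The "simple addition and subtraction" idea above can be made concrete with the simplest wavelet of all, the Haar wavelet. This is a generic illustrative sketch, not code from any of the linked software: each pair of samples becomes a local average and a local difference, so one transform level needs nothing but adds, subtracts, and halving.

```python
# Minimal single-level Haar wavelet transform (illustrative sketch).
# Each pair of samples becomes an average (coarse) and a difference (detail),
# using only addition, subtraction, and halving -- no sine evaluations.

def haar_step(signal):
    """One level of the Haar transform on an even-length list."""
    coarse, detail = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        coarse.append((a + b) / 2)   # local average
        detail.append((a - b) / 2)   # local difference ("edge" energy)
    return coarse, detail

def haar_inverse(coarse, detail):
    """Perfectly reconstruct the original samples."""
    out = []
    for c, d in zip(coarse, detail):
        out.extend([c + d, c - d])
    return out

sig = [4, 4, 8, 8, 1, 5, 3, 3]
coarse, detail = haar_step(sig)
print(coarse)               # [4.0, 8.0, 3.0, 3.0]
print(detail)               # [0.0, 0.0, -2.0, 0.0] -- nonzero only where the signal jumps
print(haar_inverse(coarse, detail))  # recovers the original signal
```

The detail coefficients are mostly zero for smooth data, which is exactly why wavelet compression (as in DjVu) works: you keep the few large coefficients and drop the rest.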
>>20963 >And there's always the case that I have just fomented a huge case of the Dunning-Kruger effect and have no idea what I'm talking about. Lol. There's a somewhat-disreputable-but-probably-reasonably-effective adage that goes 'Fake it till you make it'. Maybe you're on the right track here, but are not yet fully experienced in using the language to spell it out clearly for us plebeians? Regardless, you certainly have the correct sentiments about efficient processing being a (very) high priority for the proper design of good robowaifus. Drive on! :^)
I found something I think could be really fruitful: geometric algebra. I was reading a totally unrelated blog and ran across this comment. "...Geometric Algebra. It's fantastic, it's true, it's representative of how the universe works, it's relatively (hah!) simple, it's predictive, and it's almost completely ignored in scientific education. The behavior of both complex numbers and quaternions emerges from GA. Quantum spinors emerge from GA. Maxwell's 4 equations become a single GA equation describing the relationship between electric charge and magnetism. And all this derives from a single, simple, unifying principle..." What this appears to be, to my limited understanding, is a fairly easy way to do complex calculations on vectors and many other problems, including those of many dimensions. It's been a long time since I studied math, but I remember taking a class on complex numbers and how you could change them to vectors, and in consequence multiplying them, adding them, and other manipulations became very easy. I think this is much the same. You place what you are computing into this vector format and then it becomes fast; little computing power is needed to manipulate them. The power of this impressed me, as you can take Maxwell's electromagnetic quaternion math, don't ask, and reduce it to a more easily manipulated vector form for calculations. Anyways, here's a book: Eduardo Bayro-Corrochano, "Geometric Computing: for Wavelet Transforms, Robot Vision, Learning, Control and Action". And notice it says "wavelets". I had an intuition that wavelets would be helpful to us. Maybe they are. https://en.wikipedia.org/wiki/Geometric_algebra You can go here http://libgen.rs/ and type in "Geometric Algebra" with the nonfiction/sci button selected to find many more books on this. I tried to upload the book I mentioned and it stops at 71%. It's maybe too big. So go to the address I entered above and enter the title I mentioned and you should be able to find the book. It's where I got it from. This address is a great way to find books and scientific articles.
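The complex-numbers-as-rotations idea mentioned above is easy to sketch. This is a generic illustration, not code from the Bayro-Corrochano book: multiplying by a unit complex number rotates a 2D vector in one multiply, and geometric algebra's rotors generalize the same trick to any dimension via the sandwich product R v ~R.

```python
import math

# Treat a 2D vector (x, y) as the complex number x + iy.
# Multiplying by the unit complex number cos(t) + i*sin(t) rotates it by t,
# turning the usual 2x2 rotation-matrix machinery into a single complex multiply.
# Geometric algebra generalizes this: rotors play the role unit complex
# numbers play in 2D (and unit quaternions play in 3D).

def rotate(v, theta):
    """Rotate 2D vector v by theta radians via complex multiplication."""
    z = complex(*v) * complex(math.cos(theta), math.sin(theta))
    return (z.real, z.imag)

x, y = rotate((1.0, 0.0), math.pi / 2)
print(round(x, 9), round(y, 9))  # 0.0 1.0 -- the x-axis rotated onto the y-axis
```

Composing two rotations is just multiplying the two unit complex numbers first, which is the "learn one thing and do it over and over" flavor the book blurb promises.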
>>22452 This is pretty fascinating Grommet, I think you might be onto something. The point about Maxwell's equations is spot-on. They are in fact a kludge-job. (Please don't misunderstand me, James Clerk Maxwell was a brilliant, impeccable man. A true genius. He simply started from the premise of the reality of 'The Ether', which doesn't exist.) Everything they attempt to describe can be done much more simply and elegantly today. Therefore, since it's correct on that major point, this stuff is probably right about the other things as well. Thanks Anon! Cheers. :^)
>'The Ether', which doesn't exist BLASPHEMY! BLASPHEMY! I must immediately remove myself to the cleansing force of the Inertia Field. :) Did you know that this experiment DID NOT find that the speed of light is the same in the direction of earth's orbit as compared to perpendicular to it? It did not. https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment The textbooks say it did, but it did not. I have read, in the univ. library, an original copy of the experiment from the men themselves. In the back it gives the differences. And many, many, many other experiments gave the same results. The most recent one gave a null result, BUT they did it in an underground mine. Cheaters. Maybe they know more about this than they let on. Rid yourself of this silly pseudoscience that there's no ether.
>>22613 From Newton and Math to Ether. What next? Flat Earth? https://youtu.be/JUjZwf9T-cs
>>22662 I don't want to get into too much detail. I can, and maybe I will in the off-topic thread (it would take some digging through a lot of sources I believe I still have), but you should not equate what I said to flat earth. There were HUNDREDS of experiments with increasingly accurate equipment retesting the Michelson-Morley experiment, and none of them gave a null result showing equal speed of light parallel and perpendicular to the earth's movement in space. So I can't prove there's an ether, but I can prove that the test they SAY proves there's no ether, and their interpretation of the results they SAY they got, is incorrect. The textbook explanation of this is wrong.
>>22613 Haha, thanks Grommet. You are henceforth teh /robowaifu/ official court Natural Philosopher. The great Michael Faraday is the chief of your clan. :^)
Found a (new to me) book that covers geometric algebra: Geometric Algebra Applications Vol. II_ Robot Modelling and Control - Eduardo Bayro-Corrochano (2020). The blurb on this thing sounds like a multicandied wonderland. I'll have to slog through it, not that I would understand it. Some high points: "...This book presents a unified mathematical treatment of diverse problems in the general domain of robotics and associated fields using Clifford or geometric algebra. By addressing a wide spectrum of problems in a common language, it offers both fresh insights and new solutions that are useful to scientists and engineers working in areas related with robotics. It introduces non-specialists to Clifford and geometric algebra..." Unified domain. YEAH. Learn one thing and do it over and over! "...Lie algebra, spinors and versors and the algebra of incidence using the universal geometric algebra generated by reciprocal null cones..." "Incidence", "null cones" - doesn't that sound a whole lot like that crazy thing I postulated? Using a set point on a bot body then specifying offsets to move limbs? >>22111 Sounds like it to me (maybe). So maybe here's a way to get the math to work. "...Featuring a detailed study of kinematics, differential kinematics and dynamics using geometric algebra, the book also develops Euler Lagrange and Hamiltonian equations for dynamics using conformal geometric algebra, and the recursive Newton-Euler using screw theory in the motor algebra framework. Further, it comprehensively explores robot modeling and nonlinear controllers, and discusses several applications in computer vision, graphics, neurocomputing, quantum computing, robotics and control engineering using the geometric algebra framework..." WOW. And he even has a section to make Chobitsu giddy with joy: "...and an entire section focusing on how to write the subroutines in C++... to carry out efficient geometric computations in the geometric algebra framework. Lastly, it shows how program code can be optimized for real-time computations..." I'll try to upload it, but here's a link if not. http://library.lol/main/7C2C1AEAA23194B1D55E218BE5EE87E7 Won't upload, so you'll need the link. It's 20.6MB
>>23088 >Geometric Algebra Applications Vol. II_ Robot Modelling and Control Neat! Nice title. >And he even has a section to make Chobitsu giddy with joy, LOL. Thanks Anon, I'm giddy. :^) (Actually, it's everyone here that will be 'giddy' in the end, IMHO. C++ is our only practical option that will actually work... but I needlessly repeat myself :^) >Lastly, it shows how program code can be optimized for real-time computations..." Sounds like an amazing book if he delivers. Thanks Grommet! Cheers. :^)
>>23088 If anyone here speaks Spaniard, maybe you can help us track down the software. The book's link given is http: //www.gdl.cinvestav.mx/edb/GAprogramming So, AFAICT it looks like the 'edb' account is no longer a part of this Mexican research institution (at least on the redirected domain for this link). A cursory search isn't turning up anything else for me. Anyone else here care to give it a try? TIA. >=== -prose, sp edit -break hotlink
Edited last time by Chobitsu on 06/12/2023 (Mon) 21:44:14.
>>23095 That link is dead for me
>>23097 >That link is dead for me Yup, thus my request. This link (the new, redirected domain by Mexico gov) is dead: https ://unidad.gdl.cinvestav.mx/edb Even though his CV https://www.ais.uni-bonn.de/BayroShortCVSept2021.pdf (interestingly, located at a German research group) still lists this as his page. Whatever his other (impressive) mathematical accomplishments, he sure makes it hard to find his book's software heh. :^) --- Also, AFAICT the official book publisher's (Springer - Verlag) page doesn't have any software links either. Am I just missing something anons? https: //link.springer.com/book/10.1007/978-3-030-34978-3 --- Here's an entry for his work at the MX institution. Appears to be a grant amount. Again, Spaniard may help out here. https: //www.gob.mx/cms/uploads/attachment/file/458453/CB2017-2018_2ListaComplementaria_Abril2019.pdf p5: A1‐S‐10412 Percepción Aprendizaje y Control de Robot Humanoides Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional EDUARDO JOSE BAYRO CORROCHANO INVESTIGADOR CONTINUACIÓN $1,974,315.95 >=== -add publisher's link -minor edit -add grants link -break hotlinks
Edited last time by Chobitsu on 06/12/2023 (Mon) 21:43:35.
I went to google, in desperation, a last resort, and used their translate. He has a site at the school with his publications listed, but... no links to the code. I tried searching for the book + code and all sorts of variations. I'm usually reasonably good at finding things, but... a big blank on this code. It's also not on the Internet Archive. There's a possibility that his code, though not exactly conforming to the book, is in his papers, as his book seems to be a summation of his papers. You can find his papers here: http://libgen.rs/scimag/?q=Eduardo+Bayro-Corrochano So for whatever code you are looking for, match the subject to the paper and maybe the code will be in the paper. Or at the least a mathematical representation of what the code is supposed to do.
More searching and I find a page full of software for Geometric Algebra, not his unfortunately but lots. Even in C++. https://ga-explorer.netlify.app/index.php/ga-software/
And look at the publications page for this. It's all about integrating GA with computing and how to go about it. Interesting blurbs, "...Geometric Algebra (GA) in diverse fields of science and engineering. Consequently, we need better software implementations...For large-scale complex applications having many integrating parts, such as Big Data and Geographical Information Systems, we should expect the need for integrating several GAs to solve a given problem. Even within the context of a single GA space, we often need several interdependent systems of coordinates to efficiently model and solve the problem at hand. Future GA software implementations must take such important issues into account in order to scale, extend, and integrate with existing software systems, in addition to developing new ones, based on the powerful language of GA. This work attempts to provide GA software developers with a self-contained description of an extended framework for performing linear operations on GA multivectors within systems of interdependent coordinate frames of arbitrary metric. The work explains the mathematics and algorithms behind this extended framework and discusses some of its implementation schemes and use cases..." another paper, "...Designing software systems for Geometric Computing applications can be a challenging task. Software engineers typically use software abstractions to hide and manage the high complexity of such systems. Without the presence of a unifying algebraic system to describe geometric models, the use of software abstractions alone can result in many design and maintenance problems. Geometric Algebra (GA) can be a universal abstract algebraic language for software engineering geometric computing applications. Few sources, however, provide enough information about GA-based software implementations targeting the software engineering community. 
In particular, successfully introducing GA to software engineers requires quite different approaches from introducing GA to mathematicians or physicists. This article provides a high-level introduction to the abstract concepts and algebraic representations behind the elegant GA mathematical structure. ..." https://ga-explorer.netlify.app/index.php/publications/ I'm getting the feeling that with this GA framework you can use one scheme over and over, saving computing resources and making all computing one big repeatable pattern that needs far fewer resources. Now this is VERY MUCH like that Rebol programming language that I blathered so much about. One of its BIG strengths is this unifying character of "series lists" and the manipulation of them. It's why Rebol can pack all those different functions into the software package and still be a megabyte. I see this sort of thing all over the place. I want to emphasize I'm not a math wiz, or even a fizzle, but I'm OK at recognizing patterns. I see a lot of computing doing this sort of thing. Like the Plan 9 operating system and the QNX operating system. They use to great effect the idea of making everything in the code pass messages instead of a mish-mash of pointers and other such drivel. A counter-example to show the difference: Linux is old-school mish-mash, so it's a huge hairball of mass and dreckage, while QNX and Plan 9 are light, tidy things. The L4 microkernel family does this also. In fact it was a dog at speed until they changed it to message passing; then it took off. I think they use a version of this in F-16s as the OS. Now I also know next to nothing about AI, but I do know it's a huge mass of matrix manipulations. And it's very likely, like Maxwell's quaternion calculations, that GA can whittle it down to size. It may be that the same sort of resource compaction can be done in the case of AI with GA also. Or maybe not.
One more link https://hackaday.com/2020/10/06/getting-started-with-geometric-algebra-for-robotics-computer-vision-and-more/
>>23145 There's a library for that called opencv. You can do it from scratch if you want though.
>>23143 >>23144 Thanks Grommet! We'll keep looking from time to time. :^) >>23147 Thanks for the info Anon. OpenCV is pretty amazing IMO.
So I found this library. It's for fast display of surfaces, including joining them. It's from a guy who is deep into CAD libraries. His explanation: "Fidget is experimental infrastructure for complex closed-form implicit surfaces." Now I don't have the math chops for this, but... it seems to me that a library that is super fast at mapping closed surfaces of all sorts might be a good tool for limb movement. So defining a path for some limb, leg, or arm, or avoiding hitting something, could use this library to move with less compute in a very accurate manner. Or so I theorize. I could be, and probably am, wrong. I linked a CAD library earlier that had little distortion. Apparently when you join objects, even in expensive CAD programs, it tends to sometimes goof up the joints. The other library and this one supposedly do not have this problem. https://www.mattkeeter.com/projects/fidget/ I wonder, says the fool: if this technique can be used to figure out the shape of things and display them quickly, could it, with a little tweaking, use stereo vision to see what a shape is, very fast, and not hit it? These damn ohs O vs. zeros 0 are driving me nuts on the captcha. Could these be excluded?! You can't tell the difference.
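For anons unfamiliar with the term, a "closed-form implicit surface" of the kind Fidget evaluates is just a function f(x, y, z) that is negative inside the shape, zero on its skin, and positive outside. A toy sketch (generic illustration, not Fidget's actual API):

```python
import math

# A unit sphere as a signed distance function: the classic closed-form
# implicit surface. Negative inside, zero on the surface, positive outside.
# For collision avoidance, the value doubles as "how far until I hit it".

def sphere_sdf(x, y, z, r=1.0):
    """Signed distance from point (x, y, z) to a sphere of radius r at the origin."""
    return math.sqrt(x * x + y * y + z * z) - r

print(sphere_sdf(0, 0, 0))   # -1.0 -- inside
print(sphere_sdf(1, 0, 0))   # 0.0  -- on the surface
print(sphere_sdf(2, 0, 0))   # 1.0  -- outside, one unit of clearance
```

Booleans are cheap in this representation too: min() of two SDFs is their union, max() their intersection, which is one reason CAD kernels like this join shapes without distorting the seams.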
>>37985 >I wonder, says the fool, if this technique can be used to figure the shape of things and display them quickly could it, with a little tweaking, use stereo vision to see what shape is, very fast and not hit it? As I understand things, the common approach used now (Mars rovers, FSD Tesla, and the like) is to: 1. Create a pointcloud frame of LiDAR readings of the environment 2. Create an implicit surface map (probably using NURBS b/c speed) that fits the surface patches onto the points in the cloud, give-or-take some epsilon 3. Perform route planning against those surfaces (goal-planning, goal-seeking, terrain-following, collision-avoidance, etc.) Since the first two steps of this process algorithm need to typically occur 30 times per second+, those subsections of the overall algo need to be fast. LiDAR units typically do this in h/w using ASICs, IIRC, then output that as a data stream to downstream processing units. >tl;dr You have a good idea, I think, Grommet. But I'm not too sure how to approach this problemspace effectively just using cheap stereography, instead of a good high-quality LiDAR. Hopefully, I'm wrong! :^) <---> >These damn Ohhhs O vs zeros 0 are driving me nuts on the captcha. Could these be excluded!! You can't tell the difference. Just hit the little circular 'refresh' icon to get a new captcha image? --- >links -related: https://fab.cba.mit.edu/classes/S62.12/docs/Duff_interval_CSG.pdf https://www.mattkeeter.com/research/mpr/ https://www.mattkeeter.com/research/mpr/keeter_mpr20.pdf
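The three-step pipeline above can be caricatured in a few lines if you swap the implicit-surface step for a crude occupancy grid. Everything here (the cell size, the 1.8 m overhead cutoff, the function names) is an invented illustration, not how any real rover does it:

```python
# Toy version of the pointcloud -> map -> planning pipeline: bin pointcloud
# hits into a 2D occupancy grid, then test a planned path against it.
# Real systems fit surfaces (e.g. NURBS patches) instead of voxels; the
# grid resolution and height cutoff here are arbitrary demo values.

CELL = 0.5  # metres per grid cell (assumed)

def to_cell(x, y):
    """Quantize a ground-plane position to a grid cell."""
    return (int(x // CELL), int(y // CELL))

def occupancy(points):
    """Mark every cell containing at least one sensed point below head height."""
    return {to_cell(x, y) for x, y, z in points if z < 1.8}

def path_clear(grid, waypoints):
    """A path is clear if no waypoint lands in an occupied cell."""
    return all(to_cell(x, y) not in grid for x, y in waypoints)

points = [(2.1, 0.2, 0.5), (2.2, 0.3, 0.4)]        # an obstacle near (2, 0)
grid = occupancy(points)
print(path_clear(grid, [(0.5, 0.0), (1.0, 0.0)]))  # True  -- route avoids it
print(path_clear(grid, [(1.0, 0.0), (2.2, 0.1)]))  # False -- route hits the obstacle
```

The 30 Hz requirement in the post is what kills naive versions of this in practice: the binning and lookup are trivial, but rebuilding a faithful surface map that fast is the hard part.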
>>37987 Hate to be "um actually", but Tesla FSD (at least the ones on consumer units) uses camera vision. I believe the Google self-driving cars are the ones that use LiDAR.
>>37989 Great, I was hoping I was wrong. Sounds like I probably am! :^) I understand simplistic geometric, parallax-based range finding using two perspectives (ie, two cameras, in this context), but I haven't a good understanding yet how to transform that information into serviceable implicit surfaces (useful for all the rest of the downstream processing (cf. >>37987 ). (Remembering that it all needs to run in a high-speed manner to be actually useful for realworld robowaifus.) Ideas?
Edited last time by Chobitsu on 04/29/2025 (Tue) 05:41:44.
>>37993 Basically, it works with a lot of math that our minds can do instinctually: combining the images and finding the differences between them. Luckily for us, I found this PowerPoint that is really useful https://www.slideserve.com/ursa/binocular-vision-and-the-perception-of-depth
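For the binocular-disparity cue in those slides, the geometry reduces to one similar-triangles formula: depth = focal length x baseline / disparity. The camera numbers below are made-up illustration values, not from any particular hardware:

```python
# Stereo depth from disparity via similar triangles:
#   Z = f * B / d
# where f is focal length in pixels, B the camera baseline in metres,
# and d the horizontal shift (in pixels) of the same feature between
# the left and right images. Larger shift = closer object.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance (m) to a point seen disparity_px apart in the two images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad correspondence")
    return focal_px * baseline_m / disparity_px

f = 700.0  # focal length in pixels (assumed demo value)
B = 0.06   # 6 cm between the two cameras (assumed demo value)
print(round(depth_from_disparity(f, B, 42.0), 6))  # 1.0 -- 42 px of shift ~= 1 m away
print(round(depth_from_disparity(f, B, 21.0), 6))  # 2.0 -- half the shift, twice the depth
```

The inverse relationship is why stereo gets coarse at range: going from 2 px to 1 px of disparity doubles the estimated depth, so tiny matching errors swamp the measurement for far objects.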
>>37994 Slideshow in images Part I
Open file (35.48 KB 720x540 monocular-cue-1-n.jpg)
Open file (52.43 KB 720x540 monocular-cue-2-n.jpg)
Open file (33.50 KB 720x540 monocular-cue-3-n.jpg)
Open file (52.67 KB 720x540 monocular-cue-4-n.jpg)
Open file (45.64 KB 720x540 monocular-cue-5-n.jpg)
>>37995 Slideshow in images Part II As you can see in this section, a lot of depth perception can be done with just one camera/eye
Open file (65.14 KB 720x540 monocular-cue-51-n.jpg)
Open file (44.50 KB 720x540 monocular-cue-6-n.jpg)
Open file (35.21 KB 720x540 monocular-cue-7-n.jpg)
Open file (40.79 KB 720x540 monocular-cue-8-n.jpg)
Slideshow in images Part III Even more monocular cues
Open file (40.72 KB 720x540 binocular-cue-1-n.jpg)
Open file (49.23 KB 720x540 binocular-cue-2-n.jpg)
>>37998 Slideshow in Images Part IV Binocular cues.
>>37994 >>37995 >>37996 Great! Any chance you could upload this here as a pdf? I can't get past the j*wgle captcha on my end. --- >update: Oops, apologies. Looks like you're already uploading the images themselves. Thanks! :^)
Edited last time by Chobitsu on 04/29/2025 (Tue) 05:54:06.
>>38003 Fast! >>38005 Gets in a get throd! :D >>37999 >>7777
>>37994 Distance measurement to not hit things. According to Jim Keller, and he would know for sure, 100%, this is trivial. I've talked about wavelets before and I believe they are used in this. Wavelets are excellent at "edge" recognition. So instead of points you use edges to determine shape, or so I think. Search for "wavelets 3d recognition distance". AI answer: "Wavelets can be used in 3D recognition by representing shapes in a compact form, allowing for efficient distance calculations between different 3D models. This approach enhances the accuracy and speed of recognizing and generating 3D shapes in various applications." So yes, and the search above gives papers, one... Neural Wavelet-domain Diffusion for 3D Shape Generation https://arxiv.org/abs/2209.08725 Wavelets are much faster than older signal-processing algorithms because (at least the simplest, like the Haar wavelet) they do not use multiplication, only addition and subtraction, and you can easily see that would speed things up a great deal. Off the top of my head, for some basic uses it can be roughly ten times faster. DjVu, the book format, uses wavelets to compress books far better without losing resolution. If I'm not mistaken, many video codecs with high compression but good quality are some form of wavelet-based system. LiDAR I think is too costly and not needed. I have often thought that what visual sensors need is more wavelengths. They are only using visible light and that's a fail. If they used infrared, ultraviolet and maybe thermal all together, then cars, and waifus, would never hit stuff. I read that living things give off ultraviolet light but materials, normally, don't.
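The edge-recognition claim is easy to demo in one dimension: a sliding Haar-style difference of adjacent pixels is large exactly at an intensity jump, using subtraction only. This is a generic sketch, not the method from the linked paper:

```python
# Haar-style edge detection on one image row: the absolute difference of
# adjacent pixels is the edge strength. Pure subtraction, no multiplies.

def edge_strength(row):
    """|row[i] - row[i+1]| for each adjacent pixel pair."""
    return [abs(row[i] - row[i + 1]) for i in range(len(row) - 1)]

row = [10, 10, 10, 10, 200, 200, 200, 200]  # a step edge mid-row
s = edge_strength(row)
print(s)                  # [0, 0, 0, 190, 0, 0, 0]
print(s.index(max(s)))    # 3 -- the edge sits between pixels 3 and 4
```

A real 2D wavelet transform repeats this kind of difference across rows, columns, and scales, which is how it localizes edges while smooth regions collapse to near-zero coefficients.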
>>38003 >Visual Depth Perception and the Cues Involved.pdf) Thanks
>>38012 >https://arxiv.org/abs/2209.08725 Very interesting! Thanks, Grommet. Maybe some kind soul will post the pdf here on the board. Cheers. :^)
>>7777 Dancing should be easy to produce. In a single pendulum, one linkage and one mass create a constant rhythm, shifting between potential energy and kinetic energy as it swings side to side, up and down. https://youtu.be/RR4aZH1swfU Adding one more mass and linkage creates chaotic motion, where the energy is constantly shifting between 4 pools based on the current linkage positions. Adding variable masses, lengths, spring constants, controller settings, and even more joints, you are just multiplying infinities with infinities on what is possible. Some people dance just to join up a set of still frames, but it is more fun to loosen up and take a wild ride with the music as the input.
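The potential/kinetic energy trade described here can be checked numerically. A sketch with arbitrary demo values for g, L, and the timestep, using semi-implicit Euler integration (chosen because it keeps the energy error bounded):

```python
import math

# Single pendulum: integrate theta'' = -(g/L) * sin(theta) and watch
# kinetic and potential energy swap while their sum stays nearly constant.
# g, L, dt, and the 60-degree release angle are arbitrary demo values.

g, L, dt = 9.81, 1.0, 0.001
theta, omega = math.radians(60), 0.0   # released from rest at 60 degrees

def energies(theta, omega):
    ke = 0.5 * (L * omega) ** 2        # kinetic energy (unit mass)
    pe = g * L * (1 - math.cos(theta)) # potential energy above the low point
    return ke, pe

e0 = sum(energies(theta, omega))       # all potential at release
for _ in range(2000):                  # simulate 2 seconds
    omega -= (g / L) * math.sin(theta) * dt  # update velocity first...
    theta += omega * dt                      # ...then position (semi-implicit)
ke, pe = energies(theta, omega)
print(abs(ke + pe - e0) < 0.01 * e0)   # True -- total energy conserved to within 1%
```

Swap in a second mass and linkage and the same integrator produces the chaotic double pendulum; the constant rhythm above is exactly what the extra degree of freedom destroys.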
>>38025 Thanks, GreerTech! Cheers. :^) >>38028 That's a good point, Anon. I'm pretty sure that Kiwi & others here would agree that the simpler things are, the less things that can break! Cheers, Anon. :^)
>>37995 >>37996 >>37998 >>37999 This is something I was going to mention in the face thread >>9 on why I'd been thinking of making fixed, protruding-lens "ayylmao-style" eyes instead of a screen face. My end goal would be a waifu with realistic human-like eyes, but working on a minimum-viable waifu means static eyes to reduce cost & complexity. (Not having a screen reduces complexity further, but I digress.) Years ago I watched some documentaries on the "starchild skull". Although I can no longer find the exact clips, the people who analyzed the skull (and didn't believe it was just caused by hydrocephalus) made several interesting comments about the eye sockets: there were no sinuses around them, the holes where the optic nerves pass through were much closer together, nearly at the tear ducts, and the sockets themselves were far too shallow and oddly shaped. Assuming the eyes weren't actually deformed in any way, they reverse-engineered an eye shape from the socket and ended up with an eye with no blind spot, barely enough room to move except to saccade, and a lens that bulges out in a somewhat conical shape, giving a very deep focus, so that almost everything in view is in focus. Eyes designed like this throw a few depth-perception cues completely out the window, but they also seem like they'd be simpler for the brain to process. Everything being in focus means you don't need to look directly at something to get fine details like text, there's no mental filling-in of the blind-spot gap, and the reduced eye-movement coordination may trade off against needing slightly better control over head stability. Biologically it seems feasible, but technologically it's just a fixed pair of cameras with fisheye lenses.
It seems like the human eye makes the brain do a lot of shortcuts and assumptions because of how shitty the eye is, but perhaps current image-processing AI is just so focused on details that the ~30-megapixel equivalent of the human eye is simply too much to process.
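A fixed camera pair still gives one strong depth cue: binocular disparity. Here's a minimal sketch of the pinhole-stereo relation Z = f·B/d behind it (the focal length, baseline, and disparity numbers below are made-up illustrations, not measurements from any real camera rig):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth: Z = f * B / d.

    disparity_px : horizontal pixel shift of a feature between cameras
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centers, in metres
    """
    if disparity_px <= 0:
        return float('inf')  # zero disparity = feature at infinity
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 6 cm baseline (roughly human eye spacing):
print(depth_from_disparity(42.0, 700.0, 0.06))  # -> 1.0 (metres)
```

The inverse relation is why stereo depth gets coarse quickly with range: at 42 px disparity the target is 1 m away, but a 1 px measurement error near 1 px of disparity swings the estimate by tens of metres, which is the sort of cue degradation the deep-focus eye design above would have to live with.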
This is a great conversation going on here, but can we please move it to the Vision thread ( >>97 )? We're all ** derailing the Mathematics/Physics thread at this stage now, I think. Open to other viewpoints on this however (since the geometrical aspects are partly-related to a degree). I just think this topic is primarily vision-related, and will be hard to find this conversation ITT in two years from now! :^) --- ** I'm not referring to posts like ( >>38028 ), which clearly are related ITT.
Edited last time by Chobitsu on 04/30/2025 (Wed) 06:08:08.
>>38230 Mixing in a little truth in hopes of leaving your niggerpills & well-poisoning laying around behind you isn't going to work here, friend. Your insulting, BS post will be baleeted in ~3 days' time, out of respect for Anons here. Either stop being a (likely drunken) a*rsehole, or find yourself banned (again). <---> >muh_maths Yes, we're aware here it's going to take maths. And specifically, maths that runs on a computer -- ie, code. There's at least one freely-available C-language implementation of a Mobile Inverted Pendulum (MIP) solution that I've linked elsewhere on the board (the eduMIP, tied directly to a robotics-oriented SBC hardware solution [the BeagleBone Blue]). Do you know of any others? That's what could be helpful to us all here. Specifically we need one that follows the bipedal humanoid form, and not just a balancing, wheeled-base unit. <---> As you rightly point out, a (necessary) "spinal column" (or similar) is part of the solution needed. And most of our designs for a waifu also include a head. This 'thrown weight' of the head at the end of that multi-nodal complex lever (the spine) is indeed an interesting kinematics problem even were it hard-mounted just to a tabletop. Throw in the fact that it's instead mounted to a hips structure; and that 'mounted' atop a bipedal, multi-nodal pair of complex levers (the legs/knees/ankles/feet/toes complexes); and you have quite a fun problemspace to work! Her having arms & hands might be nice, too. :DD And don't forget to manage path-planning; accounting for secondary-animation mass/inertia dynamics; multi-mode (walking, running, jumping, 'swooping' [as in dance], etc.) gaits; oh, and the body language too (don't forget that part, please)! And all running smoothly & properly -- moving in the realworld via actuators/springy tensegrity dynamics/&tc! >tl;dr Why not get started on it all today, peteblank!? I'm personally looking forward to enjoying the Waltz with my robowaifus thereafter.
I'm sure we'd all be quite grateful if you, yourself, solved this big problem for us here! Cheers, Anon. :^)
Edited last time by Chobitsu on 05/05/2025 (Mon) 17:56:54.
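As a tiny taste of the MIP-style balancing problem discussed above, here's a sketch of a single-joint inverted pendulum held upright by a PD feedback controller. The gains, masses, and the one-joint plant are all illustrative assumptions (a real MIP adds wheel dynamics and an IMU, and a bipedal spine/hips/legs chain is vastly harder still); a full PID would add an integral term to cancel steady-state offsets.

```python
import math

def simulate_balance(kp, kd, steps=5000, dt=0.001,
                     mass=1.0, length=0.5, g=9.81, theta0=0.2):
    """Balance a point mass on a massless rod with torque at the pivot.

    theta is the lean angle from vertical; gravity tips the rod over,
    and the PD controller torques it back upright. Toy model only.
    """
    theta, omega = theta0, 0.0        # start leaning 0.2 rad, at rest
    inertia = mass * length ** 2      # point mass at the rod tip
    for _ in range(steps):            # steps * dt simulated seconds
        # PD law: spring toward upright (P) plus damping (D).
        torque = kp * (-theta) + kd * (-omega)
        # Plant: gravity torque m*g*l*sin(theta) plus controller torque.
        alpha = (mass * g * length * math.sin(theta) + torque) / inertia
        omega += alpha * dt
        theta += omega * dt
    return theta

final = simulate_balance(kp=50.0, kd=10.0)
print(abs(final))  # essentially zero: she's held upright
```

Note the stability condition hiding in there: kp must exceed m·g·l (here ~4.9 N·m/rad) or the "spring" is weaker than gravity and she falls over no matter what. That's the kind of constraint that multiplies across every joint in a full biped.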
>>38234 Not to mention the perfectly tuned software feedback for balancing and foot sensation, as well as visual terrain analysis.
>>38238 Yes, I didn't make mention of all the visual and other datatype overlays & analysis, or of all the sensorimotor-esque sensor fusion feedback loops (PID, etc) needed. Not to mention all the concerns for her human Master's privacy, safety, & security needs at a more /meta level. This is a massive design & engineering undertaking. If the baste Chinese can actually release these at a commercial scale to the public for just US$16K, it will be a breakthrough. And we here need to go much-further & cheaper-still than that!! :^) FORWARD
Edited last time by Chobitsu on 05/06/2025 (Tue) 02:46:29.
>>7777 > (dancing -related : >>38456 )
