/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

F = ma Robowaifu Technician 12/13/2020 (Sun) 04:24:19 No.7777
Alright mathematicians/physicists, report in. We plebeians need your honest help to create robowaifus, explained in beginner's terms. How do we make our robowaifus properly dance with us at the Royal Ball? >tl;dr Surely it will be the laws of physics, and not mere hyperbole, that bring us all real robowaifus in the end. Moar maths kthx.
>>7777 Not a mathematician (electronics engineer), but I would certainly like a ballroom dancing bot. I used to do ballroom dancing at uni and must say, having a perfect follower would be the best. To dance the Viennese waltz with my robo-waifu.... <3 The best thing is probably to start with a simple self-balancing robot: look at some existing projects and analyse the inverted pendulum problem.
>>7777 >>7786 Same anon. Did a quick search and found this interesting article: https://ctms.engin.umich.edu/CTMS/index.php?example=InvertedPendulum&section=SystemModeling For control systems you need calculus, linear algebra, and complex numbers. By analysing your problem (in the article's case, a cart with an inverted pendulum), you can determine whether the system is stable. Unstable systems experience a large change in position, speed, etc. from a small input (imagine nudging an upright domino, for example).
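To make the stability analysis concrete, here's a minimal numerical sketch of the linearized cart-with-inverted-pendulum model from that article (frictionless, point-mass pole; the masses and length are assumed values, not taken from the article). One positive eigenvalue of the state matrix is exactly what "unstable" means here:
```python
import numpy as np

# Linearized cart-pole about the upright equilibrium (frictionless,
# point-mass pole). State: [cart pos, cart vel, pole angle, pole ang. vel].
M, m, l, g = 0.5, 0.2, 0.3, 9.81      # assumed masses (kg) and length (m)

A = np.array([[0, 1,               0, 0],
              [0, 0,        -m*g/M,  0],
              [0, 0,               0, 1],
              [0, 0, (M + m)*g/(M*l), 0]])

print(np.linalg.eigvals(A))           # one positive eigenvalue => unstable
```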
>>7786 >analyse the inverted pendulum problem. This. Specifically, we need both a forward- and inverse-kinematics solver that works on the multi-jointed bipedal complex of hips/knees/ankles/feet. At a minimum, this solver should be able to handle a steady-state mass fixed directly above the center of the pelvis (the so-called center of gravity in a typical humanoid) and allow for walking. This central mass is the 'inverted pendulum', and the solver must successfully manage the whole joint complex just mentioned.
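As a minimal sketch of what such a solver does, here's the standard closed-form FK/IK pair for a single 2-link planar leg (hip and knee only). A real robowaifu needs the full 3D hip/knee/ankle/foot chain plus balance constraints; the link lengths and target below are assumed values:
```python
import math

def fk(l1, l2, t1, t2):
    """Forward kinematics: joint angles -> foot position (hip at origin)."""
    x = l1*math.cos(t1) + l2*math.cos(t1 + t2)
    y = l1*math.sin(t1) + l2*math.sin(t1 + t2)
    return x, y

def ik(l1, l2, x, y):
    """Inverse kinematics: foot position -> one joint-angle solution."""
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2*l1*l2)   # law of cosines
    c2 = max(-1.0, min(1.0, c2))                   # clamp unreachable targets
    t2 = math.acos(c2)                             # knee angle (one branch)
    t1 = math.atan2(y, x) - math.atan2(l2*math.sin(t2), l1 + l2*math.cos(t2))
    return t1, t2

t1, t2 = ik(0.4, 0.4, 0.3, -0.5)       # thigh 40 cm, shin 40 cm (assumed)
print(fk(0.4, 0.4, t1, t2))            # round-trips to (0.3, -0.5)
```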
>>7786 >Ballroom dancing robot Does it have to have legs or can it just roll about on wheels? Because I reckon a robot skidding about the ballroom will be much easier. Just call it a new style! (P.S. can ya tell I know nothing about ballroom dancing? :D)
>>7786 >but would certainly like a ballroom dancing bot. Seconded. What if /robowaifu/ were to focus explicitly and specifically on solving that single problem alone, Anon? Wouldn't we, in many practical ways, be advancing the much bigger set of general motion solutions for robowaifus across the entire spectrum? Surely ballroom dancing represents a domain that really pushes the envelope of sophisticated, discriminating humanoid bodily motion, yet without going to real extremes like robowaifu gymnasts or qt3.14 robowaifu enforcers. Seems like a relatively balanced goal for us: a real achievement in itself, but not conceptually out of reach.
related xpost, I think. >>7825
>>7802 >...about on wheels? Because I reckon a robot skidding... That could be a good solution to start with. One of the requirements for dancing the Viennese waltz without getting too tired is to maintain close contact and move past each other instead of trying to rotate in place. The cool thing is that the motion looks like continuous spinning when really you're stepping forward through your partner and doing a 180-degree turn, then repeating the action moving backwards. Do this cleanly enough and fast enough (6/8-time music is FAST xD) and it will look quite beautiful, the couple moving around the floor. In practice, I rarely had a gal I clicked well enough with (and who cared as much about the dance as I did) to actually make the Viennese waltz work. The basic form of this dance has the simplest steps but one of the most difficult techniques. So mechanical legs are still probably necessary for such a feat (though maybe wheels could be used where the feet would be?) >can ya tell I know nothing about ballroom dancing? :D I'll be honest, quite a niche hobby indeed XD >>7811 I like your thinking. I'm in a similar boat in that I want a robo-waifu to entertain me (through conversation and dancing). Other features are not necessary to me. So as you point out, developments should happen in parallel and in different areas. Perhaps much further down the line the functionality could be combined (though I would not sacrifice ballroom ability for much else :P). Perhaps we could agree on a common "personality core" which could be attached to different robo-waifu models depending on the required task. You'd still get *your* waifu; just the outer hardware changes. >>7830 Thanks, I'll give it a read later.
I'm using the videos of StatQuest (Josh Starmer) to learn about statistics and more: https://youtube.com/c/joshstarmer Here are the basics: https://youtube.com/playlist?list=PLblh5JKOoLUK0FLuzwntyYI10UQFUhsY9
>>9065 I also want to recommend the "Nudge Algebra" app for drilling basic math skills, to keep from forgetting them, or for relearning them if that has already happened.
I'm going to look into this problem. I'll start by trying to understand how to model physical controls in a scalable way. If anyone wants to join me, I'm starting off by reading through the two books below. I plan to post regular updates and explanations as I make my way through the first book. If you have questions about anything I post, feel free to ask. Human-like Biomechanics https://link.springer.com/book/10.1007/978-1-4020-4116-4 This book provides an approach for modeling physical action and perception in a way that supposedly scales to human-level complexity. Referent control of action and perception https://link.springer.com/book/10.1007/978-1-4939-2736-4 This book is very grounded: it points out a lot of practical problems with common approaches and offers solutions that seem reasonable. In particular, >>7788, it argues against inverse dynamics models because they introduce a lot of complexity once feedback gets involved, and so they don't scale well. This book looks good for grounding and philosophy, but I believe the first book is necessary to turn the ideas here into something that can actually be implemented.
>>14784 If the formatting doesn't work here, paste this post into a markdown editor like https://stackedit.io/app#. If you have VS Code installed, that has a built-in markdown editor. If you want to understand this but have trouble following, let me know what the difficulty is. I assume some familiarity with calculus and linear algebra, at least enough to do basic neural network stuff. The introductory section is on modeling forces. There are some prerequisites: - Differential forms. These represent infinitesimal spaces. A 1-form is an infinitesimal length, a 2-form is an infinitesimal area, a 3-form is an infinitesimal volume. One of the first things the book does is draw a distinction between absolute derivatives, vectors, and differential 1-forms. - Einstein summation notation. As far as I can tell, this notation is NOT precise, meaning one equation in Einstein notation can mean multiple things. I think it's supposed to be shorthand, not mathematically rigorous. The terms involved usually do not commute (meaning A*B is not always the same as B*A), but the author seems to swap them freely for notation's sake. Maybe this is a problem with the author rather than the notation. I guess the first step is to model forces. This is done with the Hamiltonian equation of motion. $H(q,p) = \frac{1}{2m}||p||^2 + V(q)$ - H(q,p) is "total energy". q is position, p is momentum. From this form, the important part is that there are two sources of energy: one that is fully determined by the position, and one that is fully determined by the momentum. - Total energy should be constant for a given system, so energy can transfer between V(q) (potential energy) and p^2/2m (kinetic energy). Every change in one of these terms comes from a change in the arguments. That's the motivation for the next two equations: $\dot{q} = \frac{\partial H}{\partial p} = \partial_p H$ $\dot{p} = -\frac{\partial H}{\partial q} = -\partial_q H$ Where the dot over a symbol refers to the time derivative. These two equations show that changes in position and momentum fully determine one another, which is a hint that complex numbers are going to get involved. If changes in position and momentum *didn't* fully determine one another, we would be using normal vectors to represent these quantities, since they would need to be modeled independently. Since they do determine one another, we can treat them as the x and y variables of the Cauchy-Riemann equations (https://en.wikipedia.org/wiki/Cauchy%E2%80%93Riemann_equations). The constraints of those equations can be represented implicitly by saying that the position and momentum are the real and imaginary parts of complex numbers, and that the Hamiltonian is *complex differentiable* (as opposed to the normal kind of differentiable). The author decides to just use bigger matrices instead of complex numbers though. $\xi = (q,p)$ $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$ $\dot \xi = J \cdot grad_\xi(H)$ $\xi$ now represents the combined position and momentum. J is the stand-in for the square root of negative 1 (but in matrix form). Now the change in position-momentum is given by the gradient of H. If you're familiar with neural networks, you can now figure out how the system evolves by doing something like gradient descent on H. The only difference is that instead of following -1 times the gradient, you follow J times the gradient. The $\dot{q}$ equation is the velocity equation (since velocity is change in position with respect to time).
The $\dot{p}$ equation is the force equation (since force is change in momentum with respect to time). These equations together show how the two relate to one another, like in Hill's muscle model https://en.wikipedia.org/wiki/Hill%27s_muscle_model. As a general principle, any non-conservative forces (like the relationship between a neuron and a muscle) should be added to the force side of the equation to represent translational (meaning non-rotational) biomechanics. I think the same principle should apply to rotational biomechanics, but I'll find out as I read more.
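To see the $\dot \xi = J \cdot grad_\xi(H)$ machinery actually move something, here's a minimal sketch for a 1D mass on a spring, i.e. $H(q,p) = \frac{p^2}{2m} + \frac{k q^2}{2}$. The constants and the crude Euler integrator are illustration-only assumptions; a symplectic integrator would conserve H better:
```python
import numpy as np

# Evolve xi = (q, p) with xi_dot = J @ grad_xi(H), as described above.
m, k, dt = 1.0, 1.0, 0.001
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                  # matrix stand-in for sqrt(-1)

def grad_H(xi):
    q, p = xi
    return np.array([k * q,                  # dH/dq (potential part)
                     p / m])                 # dH/dp (kinetic part)

xi = np.array([1.0, 0.0])                    # q = 1, p = 0 at t = 0
for _ in range(10_000):                      # 10 seconds of motion
    xi = xi + dt * (J @ grad_H(xi))

print(xi)                                    # approx (cos(10), -sin(10))
```
Note that following the raw gradient would just minimize H; multiplying by J instead rotates the update so that total energy stays (approximately) constant while q and p trade off against each other.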
Open file (408.45 KB 1762x2048 CalulatorACute.jpeg)
>>7777 Holy get Protip: there is already a calculator or formula for almost any physics problem you can imagine. Here is a really good calculator to determine speed and power requirements for wheeled robots. Walking will always require more power. Self-balancing will also require more power, and will consume power constantly just to stand up. https://www.robotshop.com/community/blog/show/drive-motor-sizing-tool I personally like using Google Sheets as a calculator, as it is feature-rich and inherently accessible to almost any machine with a web browser. This link can help you get started with making custom formulas. https://edu.gcfglobal.org/en/googlespreadsheets/creating-simple-formulas/1/
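For anons who want the formula behind tools like that sizing calculator, here's a rough sketch of the usual wheeled-robot power estimate (rolling resistance plus grade plus acceleration). The rolling-resistance coefficient and drivetrain efficiency below are assumed values; measure your own:
```python
import math

def drive_power_watts(mass_kg, speed_mps, accel_mps2=0.5,
                      slope_deg=5.0, c_rr=0.015, efficiency=0.7):
    """Rough drive power: F = m*(g*(Crr*cos(theta) + sin(theta)) + a),
    P = F*v / efficiency. All coefficient values are assumptions."""
    g = 9.81
    theta = math.radians(slope_deg)
    force = mass_kg * (g * (c_rr * math.cos(theta) + math.sin(theta))
                       + accel_mps2)
    return force * speed_mps / efficiency

print(drive_power_watts(mass_kg=30, speed_mps=1.0))  # ~64 W for 30 kg, 1 m/s
```
The same formula drops straight into a Google Sheets cell if you prefer the spreadsheet route.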
>>14785 The next section introduces *metric tensors*. A metric tensor tells you how to measure vectors and relationships between vectors. In practice, that means lengths and angles. I think this is important because different components of a system can measure lengths and angles differently, and sometimes a single component might change how it measures these things based on some state. These situations can be modeled as changes to metric tensors. A metric tensor is a bilinear, symmetric function that takes two vectors as input and produces a scalar as output. This means you can think of it as a symmetric matrix that takes a vector on the left and a vector on the right, then returns the result of multiplying everything together. $g(v,w) = v^\dagger g w$ (I'm being a little sloppy here by treating g as both a function and a matrix. It should be unambiguous though.) I'm using $\dagger$ to mean "transpose". For any AI people here, I'll try to be consistent about treating a vector $k$ as a column vector and $k^\dagger$ as a row vector / linear operator. It's symmetric because metric tensors represent degree-2 polynomials (same as a normal dot product, which not-coincidentally is also used to measure lengths and angles), and in a polynomial, the coefficient for $ab$ is the same as the coefficient for $ba$. As I mentioned, dot products can be used to measure lengths and angles. When a metric tensor is involved, it is implicit in all dot products. That means $v \cdot w = g(v, w) = v^\dagger g w$. For lengths, $v \cdot v = g(v, v) = v^\dagger g v = |v|^2$. In a Euclidean space, g is an identity matrix. The next part is on configuration spaces, which are used to represent the state of a system. There are two main points here: - A configuration may be represented by more variables than it has degrees of freedom, meaning some variable settings impose constraints on other variable settings. - The exact same configuration subspace might be represented by two different sets of variables. This would happen when, for example, two components jointly affect a configuration variable but measure that variable in different ways. The implication is that configuration subspaces are related to one another, and that's something we need to model. These relationships can be represented by functions. For example, if $q'^k$ is determined by $q^j$: $q'^k = q'^k(q^j)$ (The book and I are both being sloppy here by treating $q'^k$ as both a point-or-vector-or-metric-tensor and as a function that maps things from the $q^j$ subspace into the $q'^k$ subspace. It's assumed here that there is only one correct way to do this.) One important thing here is that the function $q'^k$ "knows" how to convert points, vectors, and metric tensors. In other words, it can map *enough* stuff that anyone looking at $q^j$ from the lens of $q'^k$ can make sense of all the quantities necessary to understand how the $q'^k$ slice of the system is affected by $q^j$. After that is a section on inertia, which I'm having a very hard time following. See the pic. I'm putting down my best guess for what it means, but I'm not confident in my explanation. If someone here can understand it better, please do explain. There are two relevant quantities here: the (scalar) moment of inertia and the (non-scalar) inertia tensor.
The (scalar) moment of inertia gives the conversion from *angular velocity* to *angular momentum* through the following equation: $L = I(\lambda) \omega$ Where L is the angular momentum, I is the moment of inertia, $\lambda$ is the axis of rotation, and $\omega$ is the angular velocity. The *inertia tensor* (the second relevant quantity) captures the dependence on $\lambda$ as a quadratic form: $I_m(\lambda) = \lambda^\dagger I_t \lambda$ Where $I_m$ is the (scalar) moment of inertia, $\lambda^\dagger$ is the transpose of $\lambda$ (the axis of rotation), and $I_t$ is the inertia tensor. Keep in mind that all dot products on the right-hand side are done with respect to the metric tensor. With the dot products expanded out (and keeping in mind that $g = g^\dagger$ since g is symmetric), it would look like: $I_m(\lambda) = \lambda^\dagger g I_t g \lambda$ The actual value of the inertia tensor is given by: $\sum_p mass_p \cdot (g x_p) (x_p^\dagger g)$ With x representing the position of a particle relative to some origin and g representing the metric tensor as usual. This says that the inertia tensor depends linearly on mass and quadratically on each coordinate relative to some origin. Fully expanded, the relationship between the moment of inertia and the inertia tensor looks like this: $I_m(\lambda) = (\lambda^\dagger g x_p) (x_p^\dagger g \lambda)$ Skipping ahead a bit, the author points out that joints are by nature rotational. The whole point of modeling muscle forces is to figure out how they get transformed to create torque 1-forms, which do all the actual work in a body. For joints, this makes sense. Maybe for dancing, this would be sufficient. In the more general case, we'll need more than torque to model motions that aren't based on joints, like face movements.
Open file (333.93 KB 763x1230 Untitled (1).png)
>>14790 >See the pic It might be easier to see if I actually upload it.
>>14790 >>14791 >$I_m(\lambda) = \lambda^\dagger g I_t g \lambda$ >Fully expanded, the relationship between the moment of inertia and the inertia tensor looks like this: >$I_m(\lambda) = (\lambda^\dagger g x_p) (x_p^\dagger g \lambda)$ I got this wrong. Here's the actual relationship between the moment of inertia and the inertia tensor: $I_m(\lambda) = \sum_p m_p \, \lambda^\dagger \left( (x_p^\dagger g x_p)\, g - g\, x_p x_p^\dagger g \right) \lambda$ With implicit metric tensors: $I_m(\lambda) = \sum_p m_p \, \lambda^\dagger \left( (x_p^\dagger x_p)\, \mathbb{1} - x_p x_p^\dagger \right) \lambda$ I don't have a great intuition for that middle term. It's measuring a failure to commute (i.e., the extent to which ab is not equal to ba). Since: - All matrices can be written as a product of scaling, mirroring, and rotation matrices, - Scaling matrices commute, and - Mirroring matrices are disallowed by the fact that g is positive definite, It intuitively makes sense that the middle term would measure rotation, but the exact details aren't clear to me. I'm going to move on for now.
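Here's a small numerical check of that corrected formula in flat space (g = identity) against the textbook moment of inertia $\sum_p m_p r_{\perp,p}^2$; the particle masses and positions below are made up:
```python
import numpy as np

# I_t = sum_p m_p * ((x_p . x_p) * Id - outer(x_p, x_p))
# Then I_m(axis) = axis^T I_t axis should match the direct computation
# sum_p m_p * r_perp^2 about that axis.
masses = np.array([1.0, 2.0, 0.5])
xs = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 1.0],
               [1.0, 2.0, 3.0]])

Id = np.eye(3)
I_t = sum(m * (x @ x * Id - np.outer(x, x)) for m, x in zip(masses, xs))

axis = np.array([0.0, 0.0, 1.0])            # unit rotation axis (z)
I_m = axis @ I_t @ axis

r_perp2 = xs[:, 0]**2 + xs[:, 1]**2         # squared distance to the z-axis
print(I_m, np.sum(masses * r_perp2))        # both print 5.5
```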
>>14784 >>14785 >>14790 >>14804 I find it extremely gratifying to know you are researching this area for us, Anon. This will surely prove to be a vital area for all of us to solve. I wish I could follow along with you, but I simply don't have the maths experience yet. At some point perhaps there might be some collaboration on creating the actual software to perform these calculations for our robowaifus in a fast, efficient way on modest hardware? Hopefully so. But regardless, please press forward with your research efforts! We'll be watching your progress attentively here.
Open file (537.57 KB 1620x2450 intro-5.png)
>>14943 I want to make these posts more accessible to people with less math experience. The goal for a lot of this is to build intuition for how to construct more complex interactions. The new intuition isn't that useful if it's given to only a few people. I guess as a general note for anyone reading: if you want to understand this but have trouble doing so, let me know what your math & physics background is, and let me know the first point where you felt like you were in over your head. I've assumed familiarity with calculus and linear algebra. I won't be able to give an overview of those topics, but I can at least point you in the right direction if you haven't studied these things, and I can answer questions about these topics to help you understand them more intuitively. Note that calculus and linear algebra are the same prerequisites for creating deep neural networks, so if you want to work on algorithms for AI or robotics, it's good to study those two topics. >>14804 Sorry for the long lag between posts. I'll have a bit more time for these posts in a few weeks. See pic for my translation of the next chunk. If you're having trouble with the markdown in the previous posts, I can convert those to images like this one.
>>15002 Ah, physics. I am familiar with calculus and remember some physics formulas. Those classes put these equations into more readable forms, often used their derivatives, and explained why. Linear algebra is something I've never heard of.
>>15003 >Those classes put these equations into more readable forms, often used their derivatives, and explained why. You were doing classical mechanics, right? Velocity is the time-derivative of position, and acceleration is the time-derivative of velocity. Force is the time-derivative of momentum, and energy is the space-integral of force. These equations are simple as long as you're not doing anything too complicated with angular momentum. I think this is one of the main reasons why the equations that people study in, e.g., high school are so much simpler than the ones used later on. The problem with angular momentum is that it involves a lot of interaction between spatial dimensions. That's something linear algebra is great at modeling, but in linear algebra, a*b is not always the same thing as b*a. This same phenomenon shows up in physics. You might remember that with a cross product, which is one form of multiplication covered by linear algebra, a cross b is equal to the negative of b cross a. That's one of the "nice" cases. In more general cases, a*b can be related to b*a in more complicated ways. When the order of operations matters, it becomes much more pressing to model your dynamics using a single equation. If you don't, it becomes very difficult to keep track of how you're supposed to merge equations to get the actual dynamics, especially as your system gets larger. With simpler systems, you can usually intuit how to account for changes in perspective manually. With more complex systems, you need to model the relationships between perspectives explicitly, otherwise there's no hope of being able to put them into a single equation. So that's what the more abstract versions of these equations do. They let you model angular momentum better, and they let you merge equations more systematically. Lagrangian mechanics tells you how you can create one giant equation to model everything, and it does so in a way that makes it easy to figure out the parts of the dynamics you actually care about. That's basically what we're doing. Hamiltonian mechanics is just a slick way to simplify the equations further and to understand some of the variables involved better. Most of the complexity comes from the fact that the order of operations matters, which is why I need to say things like: Kinetic energy = 1/2 v'Gv Instead of: Kinetic energy = 1/2 mv^2 The second equation works when an object is moving in a straight line and when your sensors are calibrated so velocity is measured the same way in all directions. It doesn't work when you have angular velocity. The first equation always works. >Linear algebra is something I've never heard of. You might have heard of vectors and matrices. Linear algebra is all about vectors and transformations of vectors. (A matrix is how you represent a transformation of vectors.) Vectors are important because they're how we represent changes in coordinates, and more generally any direction with a magnitude. For example, a velocity has a direction and magnitude, so it can be represented by a vector. The same is true for acceleration, force, and momentum. A matrix is a way of transforming vectors through rotating, stretching, shrinking, flipping (like through a mirror), and any combination of these things. If you want an intro to linear algebra, I would highly recommend 3blue1brown's video series on the topic: https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab
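A tiny numerical illustration of that last point, with assumed numbers: when G is just mass times the identity matrix, 1/2 v'Gv collapses to the familiar 1/2 mv^2, and the general form keeps working when G couples directions:
```python
import numpy as np

m = 2.0
v = np.array([3.0, 4.0])

G_flat = m * np.eye(2)                 # isotropic "mass matrix"
print(0.5 * v @ G_flat @ v)            # 25.0 == 1/2 * m * |v|^2
print(0.5 * m * v @ v)                 # 25.0, the high-school formula

G_coupled = np.array([[2.0, 0.5],      # made-up example where directions
                      [0.5, 1.0]])     # interact (e.g. jointed bodies)
print(0.5 * v @ G_coupled @ v)         # 23.0; no single scalar m gives this
```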
>>15011 >When the order of operations matters, it becomes much more pressing to model your dynamics using a single equation. Though in some cases, mostly comparisons, it's important to be able to clearly model the relationships between different functions. >So that's what the more abstract versions of these equations do. They let you model angular momentum better, and they let you merge equations more systematically. >Most of the complexity comes from the fact that the order of operations matters, which is why I need to say things like: Thank you for reminding me of some of the formulas I had forgotten about. Derivatives are a vital concept, and I can't tell if you meant 1/2(v*Gv) or if the ' was meant to be another ^ sign. >You might have heard of vectors and matrices. Yes I have, though they were not really featured all that heavily. It's more helpful when programming a robowaifu's spatial recognition than anything.
>>15016 >Though in some cases, mostly comparisons, it's important to be able to clearly model the relationships between different functions. Exactly, but the specifics should be encapsulated in a way that the rest of the system can just pull the information it needs without worrying about how each module models its relationships. If something does need to be explicitly exposed to the rest of the system, like which volumes can be quickly grasped by a hand, it should be exposed as an optional value for other system modules to consume. Special functions and variables should never be required to predict common things, like the trajectory of a module. For this particular book, the Hamiltonian is the "one equation" that each module needs to expose. It takes a momentum, position, and potential field as input, and it returns total energy. Assuming something is keeping track of positions and momenta, this is enough to calculate how any module will change over time, and what candidate changes need to be applied to get a module to change in a particular way. One unintuitive aspect of this is that it's not quite appropriate to try to control a system using forces directly. The problem with using forces directly is that every time something doesn't go exactly as expected, you need to recalculate how much force to apply. The "right" way to control a system seems to be through a potential field. A potential field tells you how much force to apply for every position that an object can be in. This lets you control things more smoothly, and as long as you don't end up creating some really chaotic dynamics, it should get you to the end state you want without requiring you to recalculate your trajectory every time something goes slightly wrong. The amount of force to apply should be calculated from the Hamiltonian: it's the negative partial derivative of the Hamiltonian with respect to position. So if you want to move the hand to a particular location, you should create some function with a minimum at that location. From that function, you can calculate what force to apply no matter what position the hand is in, and that function can remain unchanged until the objective changes. There are plenty of frameworks that do automatic differentiation, some numerically and some symbolically, and implementing it should be relatively easy on the small chance that we need to do it ourselves. I still need to spend more time thinking about how to make this more concrete and how to support all of the sorts of flexibility we would want in a dancing robowife, but the big picture that this book paints seems compelling. >Derivatives are a vital concept, and I can't tell if you meant 1/2(v*Gv) or if the ' was meant to be another ^ sign. You got it right. Sorry, I forgot that ' (apostrophe) usually means "derivative" and * (asterisk) usually means "transpose". The meaning of the symbols changes across fields, and it's hard to keep track of sometimes. The dot-over-the-variable-for-derivative in >>15002 comes from the book, and it seems common in physics. The cross-(dagger)-for-transpose comes from a particular branch of math. I think the apostrophe-for-derivative and asterisk-for-transpose notation was much more common when I studied physics and calculus in school. If that makes my summaries easier to read, I can switch my notation to that. >You might have heard of vectors and matrices. >Yes I have, though they were not really featured all that heavily.
>It's more helpful when programming a robowaifu's spatial recognition than anything. I think the first time I used them in practice was for game development. Matrices and vectors show up there because it's much easier to model the parts of an object in their own relative positions and have the parent object keep track of their reference frames. To render a parent object, the parent "switches" to the reference frame of each child object in turn and has the child render itself. This is possible because the game engine keeps track of a matrix that's used to automatically transform every vector the child uses. When any object multiplies the game engine's matrix with its own matrix, it's effectively changing the perspective used for rendering. So for a parent to switch to the child's perspective, it only needs to change the matrix that the game engine uses while rendering the child. The same thing is going on here. The g and G matrices in >>15002 play a very similar role to the game engine's matrix.
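Here's a minimal sketch of that render-time frame-switching in 2D homogeneous coordinates (the translations and angle are made-up values):
```python
import numpy as np

def transform(translation, angle):
    """2D homogeneous transform: rotate by angle, then translate."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, translation[0]],
                     [s,  c, translation[1]],
                     [0,  0, 1.0]])

world_from_parent = transform((5.0, 0.0), np.pi / 2)
parent_from_child = transform((1.0, 0.0), 0.0)

# "Switching to the child's frame" is just multiplying the matrices:
world_from_child = world_from_parent @ parent_from_child

point_in_child = np.array([0.0, 0.0, 1.0])   # the child's own origin
print(world_from_child @ point_in_child)      # lands at roughly (5, 1)
```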
>>15002 Please pardon the methodical multi-responses. I personally find it conducive to disconnected, async communications (often separated by days in my case) with Anons when I want to be thorough. >I want to make these posts more accessible to people with less math experience. Thank you! That's the only real hope I might have for following along. >The goal for a lot of this is to build intuition for how to construct more complex interactions. I kind of already have some of that. Both from my basic intellectual abilities, and from my programming experiences. >The new intuition isn't that useful if it's given to only a few people. So true. One of our goals here on /robowaifu/ is to make it reasonable and feasible for every-man to construct their own robot wives, as they see fit. Simplification of necessary complexity is a given. >I guess as a general note for anyone reading: if you want to understand this but have trouble doing so, let me know what your math & physics background is, and let me know the first point where you felt like you were in over your head. <and let me know the first point where you felt like you were in over your head. Lol, but there are so many! Primarily it's to do with unfamiliarity with math notations. I dropped out of school very early, and never got much by way of maths notation. My programming experience is entirely self-taught. >I've assumed familiarity with calculus and linear algebra. I understand the basic concepts behind integration and derivatives, but that's it. Just the basics. As for LinAlg, I grasp the basic ideas of Matrix and Vector operations, but again, very basic. I did however manage to piece together a few test case programs that would properly do matrix multiplications, etc. >I won't be able to give an overview of those topics, but I can at least point you in the right direction if you haven't studied these things, and I can answer questions about these topics to help you understand them more intuitively. Again, thank you. >Note that calculus and linear algebra are the same prerequisites for creating deep neural networks, so if you want to work on algorithms for AI or robotics, it's good to study those two topics. Of that I'm quite sure. Further, I want us to actually implement the code algorithms for doing so! :^) BTW, I could follow the topics slightly better with the image you provided here for this post, so yes I personally would welcome you do so for others.
>>15011 >Velocity is the time-derivative of position, and acceleration is the time-derivative of velocity. Force is the time-derivative of momentum, and energy is the space-integral of force. Let me try to restate these as questions in my own words; please evaluate my understanding, Maths-Anon. Velocity is an expression of the instantaneous 'change' in position from one time point to another? Acceleration is an expression of the instantaneous 'change' in velocity from one time point to another? Force is an expression of the instantaneous 'change' in momentum (lol, w/e that is) from one time point to another? Energy is an expression of the total force(s) acting across a given volume?
>>15018 >so yes I personally would welcome you do so for others. BTW I'd like both the image + the written text, if reasonable.
>>15020 It's hard to find a medium that will support everything. Colab seems like the best option right now since it supports markdown with embedded mathjax, and it will show both the rendered text and, when double-clicking a cell, the source text. If it ends up being useful, I can also add code examples in the script to make things more concrete. I updated my previous posts to use the new notation: - https://colab.research.google.com/drive/1kxQDLDL--WyFsTHEyXCdY1xOOrv1vEG1?usp=sharing I plan to keep that "script" up-to-date with future posts here.
>>15167 Thanks for your response and for your efforts in this area Anon. It's much appreciated!
Open file (209.01 KB 1173x663 physics basic picture.png)
>>15019 That's correct. Note that the "volume" for energy should be 1-dimensional, like a path. I added an explanation to the script in >>15167, same as the attached image. Here's the text description. --- Objects move around and rotate in space. Every object has a position in that space, a mass, and a velocity. - The velocity describes the instantaneous change ($d/dt$) in position. - These quantities also define a momentum, which is $mass * velocity$. Momentum is a representation of how much an object resists slowing down to zero velocity. Momentum increases with mass and with velocity. - Instantaneous change ($d/dt$) in velocity is called acceleration. Changes to object positions are done indirectly. To change an object position, you need to apply a force, which provides an instantaneous change ($d/dt$) to momentum. - For the physics we're dealing with, forces don't change the mass of an object. Since changes in momentum can come from either changes in mass or changes in velocity and since forces don't change mass, all forces will result in some change in velocity (acceleration). The common approach is to represent forces through a potential field. A potential field is represented by some energy value at every point in the space. (If the space is flat, the potential field would be visualized by something like imaginary hills and valleys placed throughout the space.) You can calculate a force at any given point in the potential field by measuring the downward slope (negative $d/dx$). You can get a measure of "total force" as an object moves through a potential field by calculating $\int_{x_0}^{x_t} force(x) dx$. The result has units of type "force times distance", which is the same as energy. Since $force(x) = - \frac{d}{dx} potential(x)$, the integral is the same as $potential(x_0) - potential(x_t)$, which has units of the same type as the potential field. This means that the potential field assigns an energy value to each point $x$. Because of this, the potential field is also called potential energy. A lot of interesting equations result from the fact that force can be represented as either a negative slope ($-d/dx$) in an energy field or an instantaneous change ($d/dt$) in momentum.
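Here's a minimal 1D sketch of that last point: place a potential-energy well at a target, read the force off the downward slope of the potential, and integrate. The spring constant and the damping term are assumptions added for illustration:
```python
target, k, damping, m, dt, h = 2.0, 4.0, 1.5, 1.0, 0.01, 1e-6

def potential(x):
    return 0.5 * k * (x - target)**2          # minimum sits at the target

def force(x):
    # negative slope of the potential, measured numerically
    return -(potential(x + h) - potential(x - h)) / (2 * h)

x, p = 0.0, 0.0                               # start at rest, away from target
for _ in range(2000):                         # 20 simulated seconds
    p += dt * (force(x) - damping * p / m)    # dp/dt = force (plus damping)
    x += dt * p / m                           # dq/dt = p / m
print(x)                                      # settles near 2.0, no replanning
```
The point of the exercise: the potential function never changes while the objective stays the same, yet the object is pushed toward the goal from wherever it happens to be.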
>>15186 Excellent post Anon, thanks. Please give me some time to chew this over and I'll respond better. Just wanted to let you know it's been seen.
>>15186 AUUUUUUUUGH! :^) I still haven't made time yet Anon. I haven't forgotten.
Some stuff for dance generation: https://expressivemachinery.gatech.edu/projects/luminai/ It looks like this is based on Viewpoint theory, which is a theory of improvisation. There's a book about it called "The Viewpoints Book".
>>15459 Sounds interesting Anon. Unfortunately, they are blocking access across Tor so I'm unable to see it. But there has been some good things come out of GA Tech in our area so yep, I can believe it. Thanks!
This one showcases controlling dance movements with high-level controls, including music based controls: https://www.youtube.com/watch?v=YhH4PYEkVnY This alone might be enough to teach a robo how to dance.
>>17444 Very interesting research. Nice find, Anon.
>>7777 >got a D- in my introductory calculus course it's over for me bros.
>>18616 Lol, don't be that way, bro. The fact you even completed the course with a grade shows you can do it, M80. Just buckle down! Mechanics is very important to creating robowaifus, Anon. I'd suggest you re-take the course.
>>18616 Consider using the Brilliant App (around 80.- per year) or looking into some YouTube videos.
>>18616 Retake it and use khan academy. I ended up getting a D in Calc 2 but after I retook it I got an A with help from khan academy.
An idea. Being able to create equations and then refine them for robowaifu locomotion is going to be a huge problem. Maybe there's a way to cheat. "If" you could get the right setup for a neural net, you could then train the robowaifu to walk. Maybe train it by hanging it from overhead wires while it walks on a treadmill. Another idea may be to feed it videos of people walking, then have it walk while watching itself through video cameras. It could just stumble around until it learned. I'm not saying this is easy, but making a really good "set" of algorithms for walking that works in all cases might be damn near impossible, while a simple neural net that corrects itself would look super bad at first but end up being elegant. Maybe there could be some simple equations to get it started, then have it learn the rest itself by stumbling around.
>>18635 Hi Grommet, good to see you again! I think you have some valid points, Anon. In some sense we already have good equations for the single Mobile Inverted Pendulum problem (including at least one working open-sauce C example). The difficulty comes when you chain together several different levers that all have to cooperate simultaneously in realtime, such as a bipedal humanoid's neuro-musculo-skeletal system does. However, I think Boston Dynamics has clearly shown that it can be done well already. Therefore I believe it's at least reasonable to assume that with the growing number of smart men getting interested in the robowaifu idea, we'll likely have some kind of open-sauce solution to this problem before long. And yes, you're certainly correct IMO that neural networking can be used to devise a model that's likely to be workable -- even one highly-refined. And as far as legit training data is concerned, we have MoCap tech (primarily from the film industry). We could hire/obtain data from suitable 3DPD performances of favorite waifu characters as a good base to begin with; it can be expanded from there under the attention of professional 3D-character animators. Finally, Carver Mead's (et al) neuromorphics will likely prove an exceptionally important approach in the end for low-cost, effective robowaifus. The basic tenet is to push the computation out to the edges where the sensing happens, not requiring much by way of two-way comms back to a 'central core' of computation. This is how organic life primarily operates on a day-to-day basis. Combining these compute/sensor/actuator elements all together into one compact, lightweight & inexpensive unit is the ideal here. In the end, the best robowaifu systems are going to combine every one of these approaches (and more). TBH it's going to be fascinating watching all this unfold! :^) Cheers, Anon. >=== -minor sp, prose edit -add 'one unit' cmnt
Edited last time by Chobitsu on 01/12/2023 (Thu) 05:39:57.
>>7777 Making dedicated labeled calculators in spreadsheet software can be really beneficial for making quick calculations without needing to memorize formulas.
>>18620 >>18625 >>18632 I'm not sure about retaking it, since I want to be done with my undergrad as soon as possible. Also, the worst part about the D- is that literally every topic was pretty easy. I literally started studying a week before the exam, and since the faculty wasn't very good I studied up on YouTube. That's why I managed to pass. Had I started studying from the beginning of the semester, it'd have been a comfortable B+ at least.
>>18646 OK, well at least you've proved my assertion! :^) I'd simply encourage you to not skip learning and wind up in ignorance (like myself!) now when it counts the most. Buckle down! Godspeed Anon.
>>18636 >The difficulty comes when you chain together several different levers that all have to cooperate simultaneously in realtime This is really the essence of my point. All these equations are great, but they bog down. I've become super impressed by Elon Musk's idea of looking at the very lowest layer of a problem, and I try to do this as much as I can. Here's my reasoning for saying use a neural net and just flog the doll around until it learns to stand: an ant has almost nothing in brain power, and other insects are the same, but they can maneuver around and even fly with next to no actual computing power. This means ultimately that, with the right neural net algorithm, a robowaifu can do exactly the same. Of course I'm not saying this is easy, and picking the right path to pursue might be difficult, or it might not. Might get lucky. There are some guys I hope people will look at. They had a company, XNOR.ai, that could recognize all sorts of real-time video when trained. They could recognize people, bikes, animals, all sorts of stuff. The amazing thing was they used cell phones and Raspberry Pis to do this. They were getting rid of all the multiply and divide processing and had reduced the AI to yes-or-no binary, and it was super fast. They used to have a lot of videos, but Apple bought them and a lot of it disappeared.
A search term for the work they were doing is "binary Convolutional Neural Networks". Now, I want to be clear: I'm smart enough to see something is good and maybe understand a little of it, but my math is so rusty and deficient in the first place that I'm not sure I could make any of this work. Stuff like GPT-3, I have no idea what all this matrix stuff is doing. I saved a bunch of stuff related to XNOR.ai in a folder before they went dark. Here are some of the links: https://github.com/jojonki/BiDAF https://github.com/allenai/XNOR-Net https://people.utm.my/asmawisham/xnor-ai-frees-ai-from-the-prison-of-the-supercomputer/ I will say their stuff was MAJOR super impressive, and if it could be used by us, we're talking two orders of magnitude less processing power. So in fact that's likely to be in the range of a good fast processor right now. For simple "turn on the light" and "walk over here" commands it might be enough. Maybe even lite speech and comprehension. We are about 5 to 7 years away from a human-level desktop processor, so using this powerful tool may work now.
BTW I'm still thinking about actuators but haven't built anything.
>>18635 >neural net then train the robowaifu to walk No offense intended, but this is a very obvious idea which several of us probably already had, and it is also commonly done in robotics. I listen to the Robohub podcast from time to time, and at least there and on some websites this came up. I think James Bruton did it the same way with his walking robot.
>>18651 >Of course I'm not saying this is easy and picking the right path to pursue this might be difficult, or it might not. Might get lucky. This approach has already been tried by researchers, basically attempting exactly what you suggest. Most of the results are grotesqueries (worm or pseudopodia-style dragging itself along the ground, etc), and in fact I've never seen one that looked like natural human movement yet. However, this is likely just a systemic issue, and not a fundamental problem. >>18652 Thanks for saving these links Anon! >>18653 I'm currently thinking that very inexpensive linear-screw actuators may be our best friends in the near-term. Any chance you can focus on these, Grommet?
>>18652 >inexpensive linear-screw actuators may be our best friends in the near-term. Any chance you can focus on these, Grommet? I replied in the actuator thread here >>18771
I was looking for some more XNOR.ai videos. There's a great site where you can download YouTube videos quickly. (If you get an error downloading, I've found you'll likely have to change the resolution.) https://yt5s.com/ If you go to the link above and type in "XNOR.AI" you will get a bunch of videos that show how powerful this sort of AI is on low-power devices. But wait. While I was there I found a video from some new guys who say they are even faster. https://xailient.com/ It's mostly vision and object recognition. I'm not an AI guy, but if it does vision and it's a neural network, wouldn't it do other stuff too with the same algorithm trained to do something different? I don't know, but it's interesting. Their video says they have a software development kit, but I can't find the link for it.
A really killer video is this one: https://www.youtube.com/watch?v=rov7T256z4s It shows that they can train a low-power processor to recognize humans while running only off a solar cell. The key takeaway here is that they are doing complicated things with low-power, low-speed processors. Maybe I don't understand, but it seems to me this capability means we could take present low-cost processors and get a waifu that could at least be trained to walk, and maybe even do some limited hearing and understanding of commands. I had a link on the ESP32 microcontroller and found we might need 20 or so of these just to control the 300-odd muscles needed for full human-like motion. Well, these things have a lot of power, and if the waifu is not moving around it could focus its processors on understanding us or looking around. Much like a human, it could stop moving and concentrate, using its processor power to do other stuff while it's not moving. I actually did some math on the processing power needed to control the limbs, and it left a huge amount of power for other stuff. Most of it, in fact. Here, >>12480
>>18778 You encourage us all Anon. This type of approach is exactly what I've been encouraging our Big Data AI researchers to consider instead. Mobility-capable, autonomous (ie, disconnected) AI is vital to the future of our robowaifus. >tl;dr We'll never reach our goals if we rely on the Globohomo's """cloud""" for run-time services. They are not our friends. >ttl;dr Do you really want to risk leaving yourself and your robowaifu at their 'tender' mercies, anon? >=== -prose edit -add 'ttl;dr' cmnt
Edited last time by Chobitsu on 01/17/2023 (Tue) 05:56:24.
I was looking at some more of their videos and found another paper. I'm hoping someone with more of an AI background, and better math skills than me, can make sense of this. They are using it for image recognition, but I imagine that if trained and tuned for something else it could do that also. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks Part of the abstract: "... In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time..." Apple bought these guys, so they are dark now. I understand some math. I understand the basic ideas in calculus and I know about vectors, but AI with multiplying matrices (I think of vectors), I have no idea what that is actually doing. It's like the guy in Insane Clown Posse said, "Magnets, how the fuck does that work?" I'm totally lost and might not ever be able to understand it. I'll try to upload this paper.
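For anyone who wants to poke at the core trick the abstract describes, here's a minimal sketch: when weights and activations are constrained to {-1, +1} and packed into machine words, the dot product inside a convolution collapses to XNOR plus popcount, with no multiplies at all. This is just the arithmetic identity, not the paper's full pipeline (which also includes scaling factors):
```python
import numpy as np

n = 64
a = np.random.choice([-1, +1], n)
b = np.random.choice([-1, +1], n)

# pack: -1 -> 0 bit, +1 -> 1 bit
a_bits = int("".join('1' if v > 0 else '0' for v in a), 2)
b_bits = int("".join('1' if v > 0 else '0' for v in b), 2)

# XNOR marks positions where the signs agree; popcount tallies them.
agree = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count('1')

# dot(a, b) = (#agreements) - (#disagreements) = 2*agree - n
print(int(a @ b), 2 * agree - n)   # identical values
```
On hardware, the XNOR and popcount are single instructions over whole 32- or 64-bit words, which is where the claimed ~58x speedup over full-precision multiply-accumulate comes from.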
The cloud is a non-starter. I think if we can get one to walk and move about a little, then very soon the processing power will be far enough along to get it to talk and maybe do some other things, in just a few years, for a processor under $500. That ESP32 microcontroller I'm so enamoured of has software to do image recognition with just one of them. Think what twenty could do. https://microcontrollerslab.com/esp32-cam-video-streaming-face-recognition-tutorial/ https://www.neliti.com/publications/558804/a-performance-evaluation-of-esp32-camera-face-recognition-for-various-projects https://github.com/andriyadi/esp32-custom-vision One thing we might be able to do is a sort of fake memory. SSD drives are really fast and big. So maybe it wouldn't do super complicated stuff in real time, like cleaning the house, but if we were away it would have lots of time to do programmed tasks, just not as fast as a human. Use the SSD as a slow memory. When costs come down you could put in fast memory, and the programming would already be done.
>>18818 I found this from Xailient, claiming that they can do it better: https://www.youtube.com/watch?v=RyAXzHcIJFI >>18819 Yeah, the internal systems can't be as powerful as an external brain (home server), and the waifus for guys with smaller budgets need to do things very efficiently anyway. We should use cheap low-power devices wherever we can and be smart about it. >but if we were away it would have lots of time to do programmed tasks, just not as fast as a human.
>>18818 Thanks! Posting the papers directly here is good b/c they get backed up by us personally along with the rest of the board. They can't be memoryholed, at least for robowaifuists, that way. >tl;dr Cornell's policies can change tomorrow. Save your papers! :^) >>18819 >So maybe it wouldn't do super complicated stuff in real time, like cleaning the house, but if we were away it would have lots of time to do programmed tasks, just not as fast as a human. This is a good idea Anon, and one that already has a successful working example: Roomba.
>>18819 >That ESP32 microcontroller I'm so enamoured of For good reason. AFAICT they are awesome for the kinds of things needed in robowaifus. >>18830 >I found this from Xailient, claiming that they would do it better Neat! While that company in particular is one of the last ones Anon should have anywhere near his robowaifu, the idea that it can be done faster is in itself encouraging to us all in general.
>>18835 >that company in particular is one of the last ones Anon should have anywhere near his robowaifu I agree. The main thing is to try and understand their math and make our own open source design based on it. Now, I may be able to do some waifu actuators, but this AI stuff is over my head. I "maybe" could eventually understand it, but in reality I'm too lazy to bang my head against that wall of math to actually do it. I would have to go back and review so much stuff and learn so much new math; it would be tough. Maybe I could and will, but I doubt it. The thing is that these people show what "could" be done. When you know something can be done, that's a big part of the hurdle to doing things. Just like until Musk showed that electric cars could be done, no one was in the least bit interested. I want to add something important. I want to show that computing increases mean that robowaifus are definitely possible, and how this relates to a planned build and what can be done. Here's some computing data I saved:
- How many MIPS in a human brain? 100,000,000
- How many MIPS in an insect? 10
- How many MIPS in an ESP32? 600 DMIPS
- How many MIPS in a guppy? 1,000
- How many MIPS in a current desktop computer? 1,000
- How many MIPS in a lizard? 5,000
- How many MIPS in a mouse? 100,000
- How many MIPS in a monkey? 5,000,000
- Who used MIPS to estimate the computing power of the brain? Hans Moravec
- When did Moravec believe general-purpose desktop computers would hit 100 million MIPS? 2020
- The Tesla AI chip runs at 2GHz and performs 36 trillion operations per second: 36,000,000 MIPS
So an ESP32 at 600 MIPS, times twenty of them, is 12,000 MIPS. This is well above what we need just to walk around, and who knows what else it will do. Maybe some limited speech commands, like a dog. Easily facial identification, so it could unlock the door and do simple stuff. So my thinking is: use these to get it to walk and move around, and network them, internally, with a more powerful processor for vision, speech, logic, etc. You could start out with just the minimum ESP32s and then add processors as they become available. The ESP32s will be just to get it going. The gif below is one of the most important; it shows the whole picture in one small video. If you internalize this you will immediately understand what I'm talking about. Link, and I'll try to upload it also. http://i0.wp.com/armstrongeconomics.com/wp-content/uploads/2015/01/ComputerPower.gif
I was looking through videos I saved of XNOR.ai and found this one. It gives a bit of an overview of how they are doing really significant AI with super low-power devices. They're only using 1 bit for their convolutional networks. Not that I understand all of this, but I can easily understand the huge difference in power between matrix operations on 32, 16, or 8 bits for traditional AI compared to 1 bit. They're doing object recognition on Raspberry Pis. I can't help but think this could be significant. If this could be used for walking and speech recognition (and we know it can be used for person recognition), it would be extremely useful. They say they are working on speech recognition for devices like cell phones. So assuming we have a mass of ESP32s, the power is far above a regular cell phone. The training would be difficult and take a long time, but once trained it's just a matter of copying the weights over to new devices. https://www.youtube.com/watch?v=3cD9bpfX9FA Unfortunately XNOR.ai was bought by Apple and went dark.
>>19341 Thanks for the reminder Grommet. Yes, this is definitely the kind of efficiency-approach we need to attempt capitalizing upon. Robowaifus need to be able to operate independently from external compute or data resources in at least a 'powered-back' mode.
>>7777 What different branches of mathematics will I deal with as an AI researcher? I am going to start learning maths from scratch, and I know I can't cover every single thing, so I will cover those that deal with AI/ML and robotics, since that's where my interest lies.
>>20107 "Basic mathematics: Start by reviewing the basics of mathematics, such as arithmetic, fractions, decimals, and percentages. If you feel that you are comfortable with these concepts, move on to more advanced topics such as algebra and geometry. Algebra: Learn algebraic concepts such as equations, inequalities, polynomials, and factoring. You should be able to simplify expressions and solve equations, and be familiar with basic algebraic properties. Geometry: Learn geometry concepts such as points, lines, angles, and shapes. You should be able to calculate areas and volumes, and be familiar with trigonometry. Trigonometry: Learn the trigonometric functions such as sine, cosine, and tangent. You should also learn about their inverses, as well as how to use them to solve problems involving triangles and other geometric shapes. Calculus: Start with differential calculus, which includes topics such as limits, derivatives, and optimization. Move on to integral calculus, which includes topics such as integrals and area under curves. This will be necessary for more advanced topics in robotics, AI, and machine learning. Linear algebra: Learn the basics of linear algebra, including matrices, vectors, and systems of equations. You should be able to perform operations on matrices and vectors, and understand the concepts of determinants, eigenvalues, and eigenvectors. Probability and statistics: Learn about probability distributions, expected values, and standard deviations. Understand the basics of statistical inference, such as hypothesis testing and confidence intervals. Optimization: Learn about optimization techniques, such as gradient descent, and how to use them to optimize machine learning models. Differential equations: Learn about differential equations, which are important for modeling the dynamics of robotic systems. This includes topics such as first-order and second-order differential equations. Graph theory: Learn about graph theory and algorithms such as Dijkstra's algorithm and the minimum spanning tree algorithm. This is important for modeling the connectivity of networks of sensors, robots, and other components. Information theory: Learn about basic concepts of information theory, including entropy, mutual information, and the Kullback-Leibler divergence. Advanced topics: Once you have a solid foundation in the above areas, you can move on to more advanced topics such as control theory, functional analysis, and topology. The roadmap I provided is a general guide to the topics that are important in robotics, AI, and machine learning, and the order in which they are presented is a reasonable progression from the basics to more advanced topics. However, the order in which you tackle these topics may depend on your background and personal interests. For example, if you are already familiar with basic mathematics and have some programming experience, you may want to start with linear algebra and probability and statistics before moving on to calculus. On the other hand, if you have a strong background in calculus and physics, you may want to start with differential equations and optimization. It is important to note that all of these topics are interconnected, so you may find that you need to revisit earlier topics as you progress through more advanced material. Ultimately, the key is to find a learning path that works best for you and helps you build a strong foundation in the mathematics that are relevant to robotics, AI, and machine learning." 
This is the roadmap ChatGPT gave me. Would you say I should follow this? I'm a bit confused, since I was under the impression that I should deal with linear algebra and probability and statistics before calculus, but I'm not sure.
>>20108 >This is the roadmap ChatGPT gave me. Would you say I should follow this? I'm a bit confused, since I was under the impression that I should deal with linear algebra and probability and statistics before calculus, but I'm not sure. I expect Robowaifudev can give much better advice than this, Anon, and far more specific to your goals. Asking him would be my advice, when you see him here. >=== -minor edit
Edited last time by Chobitsu on 02/14/2023 (Tue) 17:14:27.
>>20107 I asked some researchers a while ago and read their responses to others. They mostly said that for the whole deep learning stuff you ideally only need a bit of linear algebra, just the basics. Otherwise look into the basics of statistics: https://www.youtube.com/playlist?list=PLblh5JKOoLUK0FLuzwntyYI10UQFUhsY9 Website: https://statquest.org/ I think it's also better to pick up stuff around the time you need it. Don't try to learn all kinds of things before getting started with some framework.
>>20110 I asked around and yeah, while most basic deep learning and neural networks need only linear algebra and statistics, if we want to develop new SOTA models for our waifus, calculus, optimization, graph theory, and information theory are also essential. But I'm getting overwhelmed by the amount of stuff to learn. Guess I better shut up and just start. To make myself start small: gradient descent, the optimization workhorse, fits in a dozen lines, as in the sketch below.
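A minimal sketch of gradient descent in C++, fitting one parameter w in y = w*x by least squares. The data points and learning rate here are made up purely for illustration:
```cpp
// Minimal gradient descent: fit w in y = w*x by minimizing mean squared
// error. The data and learning rate are made-up illustration values.
#include <cstdio>
#include <vector>

int main() {
  std::vector<double> xs = {1, 2, 3, 4};
  std::vector<double> ys = {2.1, 3.9, 6.2, 7.8};  // roughly y = 2x
  double w = 0.0, lr = 0.01;
  for (int step = 0; step < 1000; ++step) {
    double grad = 0.0;
    for (size_t i = 0; i < xs.size(); ++i)
      grad += 2.0 * (w * xs[i] - ys[i]) * xs[i];  // d/dw of squared error
    w -= lr * grad / xs.size();                   // step against the gradient
  }
  std::printf("w = %f\n", w);  // converges near 2.0
  return 0;
}
```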
>>20155 Somebody else already did bros. Just put gpt 3 inside the waifu BROS. You're going to need to make the body if you want to program the movements though. I know that for a fact.
>>20157 I'll leave the body part to the mechanical engineer anons; I'll specialize in AI. And no, there's no way you can fit the current GPT-3 inside a robot. It'll have to be seriously downscaled, hence the differential equations and optimization. And GPT-3 isn't very good anyway.
>>20163
>GPT-3 isn't very good anyway.
GPT-3 uses large values, and matrices of those values, to get answers. I think here's the answer: "A Review of Binarized Neural Networks" (MDPI electronics-08-00661).

I "think" I'm fairly good at recognizing large ideas and concepts and how they fit into the bigger picture. But, sigh, I'm not so good at actually setting up these equations properly to get the result I want. As Barbie says, "Math is hard". I have done differential equations, calculus, and minimal matrix stuff, but there's a HUGE GAP between doing some exercises in a textbook and actually setting up the equations for a real case. Like how fast to move a leg so it will not topple over and will meet the ground at the right time. This is hard. Setting up equations that mimic the constraints of actual physical things is difficult, and takes far more understanding of the actual operations than what I can manage.

I think a good analogy between Binarized Neural Networks and GPT-3, with its full-value multiplication of LOTS of large values, is signal processing in other fields: specifically, Fourier transforms versus wavelet transforms. The old Fourier is very much like the full-value approach. It's based on representing each little portion of a signal, like music or video, as a summation of tiny sine waves: some positive, which add, and some negative, which take away. You can see this gets up to a lot of computing on big numbers very fast. Wavelets, however, are based on stretching a waveform and changing its height. JPG pictures are based on the old Fourier transform, but DjVu pictures and documents are based on the wavelet transform; I think WebP files are based on wavelets also. One thing that stands out is that in both cases the difference is around x10, an order of magnitude, in the data and computing power needed. I suspect very strongly that some of the underlying math is somewhat similar.

I linked the above-noted paper. I think if this can be understood, then a relatively decent waifu can be constructed with present commodity computer chips. I think you could easily get independent movement, face recognition, and possibly limited command understanding with low-cost commodity chips. But the math and training are not going to be easy, especially since someone needs to set up the actual equations to get this to work. I suspect the data to do this is in the papers that are reviewed but, as Barbie says, "Math is hard". It won't be easy. Look at the accuracy with bit-wise operations in the above paper. It's very good: mostly 80% or above. Plenty good for our case, and with some feedback when it makes errors it will only increase.

Suggestion: the AI code for movement should be agnostic on the height/length of limbs. It should be designed and trained with different limb lengths, but with the values added as constants. That way different waifus can use the same AI movement code.
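To show how simple the core binarized operation actually is, here's a toy sketch (mine, not code from the paper): once weights and activations are constrained to +/-1 and packed into machine words, a whole dot product collapses to a single XOR plus a bit count (equivalently XNOR, as the paper calls it). Needs C++20 for std::popcount:
```cpp
// Toy XNOR-style dot product, the core of binarized neural nets:
// encode +1 as bit 1 and -1 as bit 0. Then dot(a, b) over n packed
// values is n - 2*popcount(a XOR b), since XOR counts disagreements.
// One 64-bit XOR replaces 64 multiply-accumulates.
#include <bit>      // std::popcount (C++20)
#include <cstdint>
#include <cstdio>

int binary_dot(uint64_t a, uint64_t b, int n_bits) {
  uint64_t mask = (n_bits >= 64) ? ~0ULL : ((1ULL << n_bits) - 1);
  int disagreements = std::popcount((a ^ b) & mask);
  return n_bits - 2 * disagreements;  // matches the sum of +/-1 products
}

int main() {
  // Two 4-bit vectors whose true +/-1 dot product is 0.
  std::printf("%d\n", binary_dot(0b1011, 0b1101, 4));  // prints 0
  return 0;
}
```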
Adding further to what is needed: you are going to need an Operating System. I'm willing to bet the easiest way to do this is to take a micro-kernel type system like L4. There are lots of these, but all of them tend to be real-time and have been used in billions of devices. I saw it mentioned that an L4 derivative was used to control F-16 fighters. Here's a link on this OS: https://en.wikipedia.org/wiki/Open_Kernel_Labs

It would also seem to me that it would need some sort of message-passing system to send commands to different parts or functions. This is like the Plan 9 OS or QNX, which have been proven to work very well. QNX has proved that this message passing makes a micro-kernel-based system not lose a great deal of speed. Taking something already used and open source will speed things up. Other links: https://web.archive.org/web/20191024093602/http://l4hq.org/projects/os/

One BIG problem is that they seem to be trying to bury all the code from these micro-kernel L4 projects. Typical for certain operatives to hide useful things. You can look at the links above, but you have to go way back to get them; the sites have been taken over. Here's one where I went far back to get to the site: https://web.archive.org/web/20010405192923/http://www.cse.unsw.edu.au/~disy/L4/index.html
>>20473 >>20481 Wow, thanks for that long explanation. Guess I've got my work cut out for me. And it looks like I'll need to do a masters at least, if not a PhD. I was hoping to finish up my undergrad and then devote most of my time to making robots, but it looks much harder than what I thought. I should probably download all those L4 project codes right now before they get erased.
>>20473 Nice paper, Anon. No doubt using single bits is much cheaper than using arrays of them. I'd suggest that this style of approach to the data will also lend itself rather handily to future implementations directly in 'hard-wired' hardware (other than just FPGAs). This ofc would be very inexpensive as well as blazingly-fast. >"...various works have explored how to improve their accuracy and how to implement them in low power and resource constrained platforms." I think it's neat that this is a large-scale need, and not one unique to our robowaifus. Thanks Grommet! That was a rather well-written paper that didn't unnecessarily descend into the industry jargon commonplace in many of these researchers' writings. These authors instead seemed to be actually trying to help the relative-uninitiate understand.
>>20481 >Here's one where I went far back to get to the site. Thanks for the studious efforts Anon.
>>20488 Haven't read the actual paper itself, only the links. It's not too hard for a noob, right? I can't understand most of the stuff written in these papers, so I could never read them.
Maybe I'm wrong about the bury part. This would seem to be a very good target to work on: https://sel4.systems/ Look at this page, which goes over the various L4 OS implementations: https://en.wikipedia.org//wiki/L4_microkernel_family This one has been fully verified as not having any security problems, locks, or glitches. So if this could be the base, then it would likely not have any surprises in it.
>>20491 I've been checking out the wiki and it says L4 is closed source. There's a variant, OKL4, which is open source. Is that inferior to the original L4? And it seems to be compatible with MIPS, ARM, and x86, but no word on RISC-V.
>>20484
>And it looks like I'll need to do a masters at least, if not a PhD
I'm really sorry about that. I know it's a big pain in the ass, but just looking at this stuff, it appears to me that it will be necessary to actually have something that will walk around, recognize you, and maybe understand commands. It's a real pain to look at all this, but I don't know any shortcuts. It's depressing. I'm fully capable of being entirely wrong; nothing would make me happier than to be. But it seems if you want to properly have something that can expand over time, you need an OS, or what will happen is you will have a big binary pile of stuff that will never, ever be ported to anything but one specific target, and then immediately bit rot will set in and it will become useless. If however an OS is used, and some of the L4 OSs are based on C and are portable to many processors, then once you get the base you are able to port it to other processors, speeding up the waifu and adding capabilities at the speed of processor advancement. I'm willing to bet that if you cobble something together that runs a waifu, by the time you get it all going you will have written an OS worth of stuff to make it work that you could have gotten off the net in the first place. Better to start with something they have already worked on. They are using L4 for missile systems, jet fighters, medical equipment. It's first-class stuff. The bad part is you have to learn to use this stuff and, of course, none of this is easy. Hence my constant hemming and hawing and pleading that I'm not sure I understand this or that, because it's hard.
>>20497 The code part isn't the hard part. I can learn it myself, but robotics is a practical field. Sooner or later I'll have to get my hands dirty, and it's best done under the supervision of someone who knows what they're doing.
seL4 has code online. OKL4 was bought out by General Dynamics; they are using it in missiles and such. If I was forced to pick one, I would say use seL4. They, I think DARPA, went through every line to make sure that the code would not give anyone any surprises and that it would work. Here's the link for the code: https://github.com/seL4/seL4 I repeat myself, but the original code was L3, then L4 made a big breakthrough: the guy who did it made small hypervisor code work MUCH faster, like 20 times faster, so work exploded. Here's a link on the evolution of the various kernels based on his original L4 work: https://en.wikipedia.org//wiki/L4_microkernel_family The work made message passing faster. The advantages of micro-kernels are VERY high in systems we do not want to fail. Because the basic system that runs everything is small and cannot be screwed with easily, it is always solid and works. Other programs may fail and not take the system down. Very important. Until that advancement in message passing, micro-kernels were very slow. Jochen Liedtke did the breakthrough to make speed not a problem, and others improved on it.
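To show the flavor of message passing in miniature, here's a toy C++ sketch of the idea only; it has no relation to seL4's actual API. The "servo driver" thread below only ever sees messages, never the client's memory, which is why a misbehaving client can't corrupt it:
```cpp
// Toy message-passing illustration, in the spirit of (but unrelated to)
// L4/QNX IPC: components exchange messages instead of sharing pointers.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

struct Msg { int joint_id; double velocity; };

class Mailbox {
  std::queue<Msg> q_;
  std::mutex m_;
  std::condition_variable cv_;
public:
  void send(Msg msg) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(msg); }
    cv_.notify_one();
  }
  Msg receive() {  // blocks until a message arrives
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty(); });
    Msg msg = q_.front(); q_.pop();
    return msg;
  }
};

int main() {
  Mailbox box;
  std::thread servo([&] {                 // the isolated "driver"
    for (int i = 0; i < 3; ++i) {
      Msg m = box.receive();
      std::printf("joint %d -> %.2f rad/s\n", m.joint_id, m.velocity);
    }
  });
  for (int i = 0; i < 3; ++i) box.send({i, 0.1 * i});  // the "client"
  servo.join();
  return 0;
}
```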
From this page https://docs.sel4.systems/projects/sel4/frequently-asked-questions "...To the best of our knowledge, seL4 is the world's fastest microkernel on the supported processors...64-bit RISC-V kernel is about 9,400 SLOC...Presently seL4 runs on Arm v6, v7 (32-bit) and v8 (64-bit) cores, on PC99 (x86) cores (32- and 64-bit mode), and RISC-V RV64 (64-bit) cores..." NICE! I wonder if it will run on ESP32 micro-controllers. If so it would be SO IDEAL. Even if it doesn't, you could use serial comms to talk to them.
This is great stuff Grommet. Thanks for all the research. If you feel strongly about this as an OS platform, then I'd think it's in our collective best interests here to investigate it thoroughly in a practical way?
>>20481
>One BIG problem is that they seem to be trying to bury all the code from these micro-kernel L4 projects
OK, I'm wrong about this. I looked at all this L4 stuff several years ago, and when I looked recently a bunch of it seemed to be gone, but seL4 is current. The guy who came up with the code that made L4 possible died. (Likely another one of those "died suddenly" cases that we see so much of.) It may very well be that a lot of his projects died with him and the sites went down. I've looked at the L4 stuff for a long time, but not recently. I want to add: if you are interested in robowaifus, you're going to need micro-controllers for input and output. There's this one called the ESP32 that is really the swiss army knife of micro-controllers. Here are some comments I made on them >>12474 >>18902 >>18778 >>12480 Here I do some math on the cost to build with these micro-controllers. >>13408 BTW here's a paid OS for ESP32 microprocessors: https://mongoose-os.com/mos.html
>>20507 I'll check them out. But I'd rather some other anon specializes in microcontrollers and OSes. I'm already deep into the AI part, and I'm not sure I have enough time to spare to learn something completely new. If we are to make proper robowaifus, we need different specializations working together, instead of everyone becoming a jack of all trades, master of none. BTW, we'll be solely using RISC-V in our robowaifu microcontrollers, right? I wouldn't trust closed-source ARM and x86, and IIRC the MIPS creators have moved on to support RISC-V.
>>20506
>investigate it thoroughly in a practical way?
Part of this is that I read a lot of stuff, because this sort of thing interests me. It may very well be that there are big problems that are not readily apparent on the surface. I'm trying to point people to stuff I've seen that "seems" to work, but there's no doubt I could miss a lot of others that could be better. This seL4 looks really good though. It has definitely been used for major systems like missiles and planes and such, and the source is available. That being said, none of this stuff is really easy. The Boston Dynamics people worked on this stuff for many years: fake dog and fake oxen. I suspect that these guys, BD, coded everything into a big wad of code with all this very specific motion code. I can't say I know for sure, but I "think" that if we were to make some basic movement-type code, say a rough outline of movement, and then run AI so that it learns to walk, I bet it would be faster and less computationally dense. A lot faster and cheaper. People here have said that has been done and it didn't work. Maybe if it could watch itself and then reference an actual human walking to correct itself as it learned???? Not easy. If you can build a waifu, you could also build an exoskeleton, and that could be used to program the waifu.
>>20510 Does BD ever plan to implement some kind of AI or other adaptability code in their robots? Otherwise, the Spot robots that they plan on selling to the police will not take off. Nor will their other robots (I forget the name). Scripted obstacle courses and dances can only take you so far. I'm so frustrated with BD; they're the only real competition to Tesla's Optimus. Unless Honda brings out ASIMO 2.
Here's using an ESP32 for face recognition. I don't know if the code is AI or not. https://randomnerdtutorials.com/esp32-cam-video-streaming-face-recognition-arduino-ide/ Enough, I'll stop filling up comments now.
>>20513 >BD I have no real knowledge of what they are doing, but look at their stuff. It looks like they programmed in all this motion stuff with physics and all of that. Or to me it does. I think that path is a dead end. But what do I know, I'm just some guy on the internet.
>>20513 >Enough, I'll stop filling up comments now. No, don't stop lol. Everything you've ever posted here has been useful information Grommet, some very-much so. :^)
>>20514 Exactly what I said. They've already hit a dead end IMO. How many different real-world scenarios can they hardcode into their robots? They'd really better start investing in AI. Hopefully they poach some talent from the Tesla guys working on Optimus. While the Optimus robot generally felt like a sore disappointment, I thought the AI and vision part was pretty good. I'd like to see it work in BD robots.
>>20512 >I'm so frustrated with BD I mean, I think I understand your position, Anon. But frankly, I see any fumbles by the big players as feeding into our own success here on /robowaifu/ and our related-cadres out there. More 'breathing room', as it were.
>>20525 I don't particularly care who gets to working humanoid robots first. I see all advances as a win. Besides, even their wins would eventually trickle down to DIY anons. I'd definitely buy a BD robot, take it apart to see how they made it, then make my own.
>>20524 >While the Optimus robot generally felt like a sore disappointment I predict Tesla will soon shoot far past any other player in this arena, anywhere in the midterm timeframe. We can be thankful they aren't targeting the robowaifu market! :^) >I thought the AI and vision part was pretty good. It is interesting. But not too surprising IMO; after all, it's literally the same AI board they use in their auto-driving cars. >=== -minor edit
Edited last time by Chobitsu on 02/21/2023 (Tue) 10:55:09.
>>20527 Ah, I see. Well, I understand the perspective at least. However, I personally feel morals impinging around this entire domain that make it vital we get robowaifus out there as a real cottage industry well-before the big players invade that niche and manipulate the systems to make anything but their own globohomo-approved mechanical c*nts illegal. History shows us time and time again their slimeball tactics in this manner. >=== -prose edit
Edited last time by Chobitsu on 02/21/2023 (Tue) 10:54:01.
>>20529 Honestly, as much as I hate Elon, I think if anyone's going to start selling robowaifus and disregard the mainstream outrage, it's Musk. >>20528 Thing is, it's much easier for the world's governments to start banning robowaifus when it's just a small DIY scene. But when you've got megacorps who would lobby billions, it's much harder to ban. The more robowaifus proliferate, both in the DIY scene and in large corporations, the harder it will be for them to ban. And I don't think any megacorp will actually advocate banning robowaifus. Their only ideology is their bottom line, and robowaifus would potentially be a trillion-dollar industry.
>>20529 >I think if anyone's going to start selling robowaifus and disregard the mainstream outrage, it's Musk. I too see him creating new companion lines once we here succeed at it first. He clearly is targeting his own factories first and foremost, and then specialized labor uses. Then he'll likely move into medical/patient care. During all that time, we'll be working on perfecting robowaifus, ofc! :^) >>20529 I disagree on both points. Since we are open-saucing everything here, it's a genie out of the bottle. Since the East will go banging with this quickly, the Globohomo will never be able to stop it. Secondly, I feel you err in your estimate that somehow the 'megacorps' and the government are two distinct entities. They haven't been for a long time. That's why their full, formal title is The Globohomo Big-Technology/Government. And it's also the odd state of affairs that the tech-tail is wagging the beltway-dog, very clearly. >=== -minor edit
Edited last time by Chobitsu on 02/21/2023 (Tue) 11:08:18.
>>20530 I do think they're working in parallel on adapting Optimus to waifu duties, and they'll release it a few years after the factory robots. And I do not believe the Globohomo to be one huge monolith with a single goal. There are different factions with competing interests within it, hence why you can find governments and companies often coming into conflict. Among them, I believe companies to be more shortsighted and focused on profits than on some world-government/global-control scheme. They're also the ones who fill the pockets of politicians. They can definitely see the potential profits in offering a robowaifu, especially in this day and age with billions of lonely men.
>>20531 >I do think they're working in parallel on adapting Optimus to waifu duties Nice. But they'll have to create something on an entirely-different frame geometry (which will require re-optimizing all the articulations & actuation codes). Fair enough about your estimates on the Globohomo. I might have time & energy to debate this topic later. Speaking of my energy, we're well off-topic ITT. Any further discussion, please move it somewhere else like /meta or news. It's a tedious, error-prone process copy-pasting each post one-by-one, with a new post each, over into its proper thread, by hand, and then deleting all the originals; but sadly that's exactly what I have to do each time I 'move' a conversation over to another thread. I'd like to 'cut the saying short' if you take my meaning! :^) >=== -prose edit
Edited last time by Chobitsu on 02/22/2023 (Wed) 05:04:28.
I should also mention that the micro-controllers will need an OS to lessen the burden of writing all this super-complicated timing stuff ourselves. Really hairy stuff. A good open-source one that works on a huge number of micro-controllers, including my favorite the ESP32, is FreeRTOS. Real-time, so it will be responsive. It has good expansion features for legal compliance; how long before they add in safety rules and regs? Not long. It's MIT licensed so you're free to do as you please, but it has, if you pay, verified guarantees. There seems to be a good deal of documentation for it, and other libraries to use. Code for RISC-V and ARM and others. This means we can write code and use whatever micro-controller we can get at the least cost and highest performance. https://www.freertos.org/index.html

I don't think this will work for microcomputer processors, but I'm not sure. It would be nice if it did; then we could use more powerful RISC-V chips for processing speech, vision, etc. while using the same OS everywhere. Less to learn. I'm not saying I know how to do this, but my thinking is we could use all these micro-controllers for input/output, mostly to walk and move around, but ALSO use the fairly large computing power they have built in to do processing as well. So say the robowaifu wants to clean something or do something complicated: it could mostly stop moving and use a little of its distributed micro-controller computing power to aid in whatever task it was concentrating on. Much like humans: when they concentrate, they slow down and think. I think, as I said before, something like seL4 will likely have to be used for the main processor for speech, understanding, and navigation.

There's something I talked about earlier that I think is really important. In order to keep cost low we will have to use some sort of off-the-shelf micro-controller. The ESP32 I like so much has a large number of inputs (they can read capacitive sensors for touch) and a large number of outputs. Ideal. Now instead of building some contrived output board, we use these built-in outputs (driving transistors, or more likely MOSFETs), AND the big win is these things have a lot of computing power. So if we can share this power between the various micro-controllers, it may very well be that for basic moving around and not bumping into things we could just use the built-in computing power of these things and not have any main processor at all. Later, for speech or higher functions, we could add a fast RISC-V microprocessor and link it to the other controllers.

What kind of power are we talking about? I wrote before, "...You can get them for less than $9USD... So at 300 muscles/16 PWM output channels per micro-controller, means we need 19 and at $9 each=$171 But with that comes 19 x 600 =11,400 DMIPS. DMIPS is basically that many integer million instructions per second. It's a lot. 11.4 billion total per second with 19 MC's. >>12474 Let's say we check every output and input every micro second so a 1000 times a second and it takes 10 instruction cycles to do so; that leaves us 599,560,000 instructions a second to do...something with. And that's just one processor. Most things we are going to do are compare and test: is the arm here or there, has the touch sensor touched anything, etc. Most of these are values you compare to some other value. I don't think the average computing load will be very large for that. Even if it's ten or a hundred times larger, we still have a hell of a lot of computing left over.
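For flavor, here's a minimal sketch of what a fixed-rate polling task looks like under FreeRTOS, ESP-IDF style. The work inside the loop is a placeholder, the stack size and priority are just plausible guesses, and the 1 ms period assumes configTICK_RATE_HZ is set to 1000 (the ESP-IDF default is lower):
```cpp
// Minimal FreeRTOS polling task: wake on a fixed period, scan inputs,
// compare to setpoints, update outputs.
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static void control_task(void *arg) {
  TickType_t last_wake = xTaskGetTickCount();
  for (;;) {
    // read_sensors(); compare_to_setpoints(); update_pwm();  // placeholders
    vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(1));  // fixed-rate wakeup
  }
}

extern "C" void app_main(void) {  // ESP-IDF entry point
  xTaskCreate(control_task, "control", 4096, NULL, 10, NULL);
}
```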
I think with the right algorithms, walking and moving about will take very little computing power. After all, insects move about just fine and they have next to no computing power. Of course figuring out how to do this might be tricky. I bet someone, somewhere, has a paper about this, but I'm not sure I've seen it. Looking up insect movement papers might be helpful. "...Insect brains start at about 1000 neurons..." It's a lot of power these MCs have, and communication is built into these things with CAN 2.0 bus comms, like they use in cars, industrial machinery, and medical equipment.

Now none of this is easy: learning some operating system, writing code for micro-controllers, using CAN bus commands. But think if you had to write this stuff yourself from scratch. Hopeless. Almost a lifetime job. But if you can crib through a manual and copy other people's snippets of code, cut and paste code from these specs that are used by lots and lots of people (meaning more code in the open), maybe you could get something working.

If anyone has any ideas on how to send messages over CAN 2.0 bus for movement of actuators, I would really like to hear them (one possible encoding is sketched below). You would have to send some sort of vector, meaning a direction, in or out, and a velocity which would be turned into a voltage to drive the actuator. Now you want to be able to change this in the middle of movement, so how to do that? Could you send a beginning vector and then maybe a second vector? How do you make these vectors coordinated so you have walking instead of jerky, stuttering movement? Could you have one micro-controller control all the muscles in one limb, so that you could tell the limb to move, say, forward ten inches and two inches to the side, and then have the computer figure out how to work them all together? Could you use some sort of AI software to do all this coordination? Coding it all by hand could take forever. What kind of AI code would you use to do this sort of thing? Lots of questions, no answers...yet.
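One simple scheme packs a joint ID into the CAN identifier and a direction-plus-velocity setpoint into the data bytes. A sketch using the ESP32's TWAI (CAN) driver; it assumes twai_driver_install() and twai_start() were already called, and the ID layout and fixed-point scaling are invented for illustration:
```cpp
// Sketch: send a (direction, velocity) setpoint for one actuator as a
// CAN frame via the ESP32 TWAI driver.
#include <cstdint>
#include "freertos/FreeRTOS.h"
#include "driver/twai.h"

// 11-bit ID: the 0x100 block means "joint setpoint", low bits pick the joint.
static esp_err_t send_setpoint(uint8_t joint_id, bool extend, float velocity_mm_s) {
  twai_message_t msg = {};
  msg.identifier = 0x100 | (joint_id & 0x3F);
  msg.data_length_code = 3;
  msg.data[0] = extend ? 1 : 0;                    // direction: out or in
  uint16_t v = (uint16_t)(velocity_mm_s * 10.0f);  // 0.1 mm/s resolution
  msg.data[1] = v >> 8;                            // velocity, big-endian
  msg.data[2] = v & 0xFF;
  return twai_transmit(&msg, pdMS_TO_TICKS(10));   // 10 ms send timeout
}
```
For smooth motion you would stream fresh setpoints at a fixed rate (say every 10 ms) rather than encoding a whole trajectory in one frame; each new frame simply supersedes the last, which also answers the change-it-mid-movement question.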
>>20558 This is absolutely great stuff, Grommet. Thanks! >Let's say we check every output and input every micro second so a 1000 times a second I'm guessing you mean milli-second? >If anyone has any ideas on how to send messages over CAN2 bus for movement of actuators I would really like to hear it. There's RobowaifuDev's IPCNet (>>2418). Also, you can read this (>>772). My apologies I can't give the response ATM that your post deserves, Anon. Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 02/22/2023 (Wed) 04:58:59.
I hate to keep changing things, but I haven't looked at real-time operating systems for micro-controllers in a long time and things have really changed. I found a really interesting OS called Zephyr: https://en.wikipedia.org/wiki/Zephyr_(operating_system) This looks good. It is run by the Linux Foundation and is maintained. It also runs on larger, faster computer processors, so that's a big plus: you could run your MCs and your main high-power processors on the same OS. Big win. Saves you from having to learn more than one system. Huge advantage.

Here's a comparison of some well-known OSs for micro-controllers: https://micro.ros.org/docs/concepts/rtos/comparison/ I'm still looking at these to see what might be best. It's a big deal to pick one because of all the effort you will put into learning how to use it. Once there, you're not likely to change, so having the right mix of ease of use and reusable code from others is important.

While I was looking at the OS comparison link, I went up to the main link and found there is such a thing as a "Robot Operating System" which uses the lower-level OSs listed as a base and rides on top of them. Holy smokes, there's a lot of books and data on this thing. It's also free. I've been asking all these questions about how to do all this coordination; maybe it's already done??? https://micro.ros.org/ Here's a link to a search for a LOT of books on this Robot Operating System: http://libgen.rs/search.php?req=Robot+Operating+System&open=0&res=25&view=simple&phrase=1&column=def

Have to look further and see what this is all about. My first impression is this is the shit. WOW! This is what's needed (if it works). There are tons of books and documents, and the books say you can simulate robots. How cool is that? Get the right actuator, then simulate all the movements and software before you build. You could make progress with zero money, just simulation. BUT does it really work? The marketing blurbs are very impressive, but you know how these things are.

I've got an idea for using robots for some dangerous work, stuff you could do yourself or with two guys and maybe make some money. It wouldn't be a waifu, but all the techniques to build the equipment and operate it would use all the same tools and provide, maybe, some cash. This really excites me. I may be buried in some books for the next few months.
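For a taste of what this looks like in practice, here's a minimal sketch of a ROS 2 node in C++ that publishes a joint velocity command at 50 Hz. The topic name, the choice of std_msgs/Float64, and the fixed setpoint are all illustrative, not from any particular robot:
```cpp
// Minimal ROS 2 node publishing a joint velocity command at 50 Hz.
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float64.hpp"

using namespace std::chrono_literals;

class JointCommander : public rclcpp::Node {
public:
  JointCommander() : Node("joint_commander") {
    pub_ = create_publisher<std_msgs::msg::Float64>("knee_velocity_cmd", 10);
    timer_ = create_wall_timer(20ms, [this] {
      std_msgs::msg::Float64 msg;
      msg.data = 0.25;  // rad/s, a fixed test setpoint
      pub_->publish(msg);
    });
  }
private:
  rclcpp::Publisher<std_msgs::msg::Float64>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<JointCommander>());
  rclcpp::shutdown();
  return 0;
}
```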
>1000 times a second I'm guessing you mean milli-second? Oops...yes
>Car Hacker's Handbook
Much thanks. I need that thread. I'm looking at robots AND drones right now. I've got an idea that I can use drones to carry ropes to the tops of trees, attach cables, and crawl or pull up robot tree cutters, then trim or cut down trees. There's some money in this, and my Mom has a tree that really has to have its limbs cut. I've been debating how to trim this down. I bought a tree harness and have some ropes, but I'm terrified of climbing this really tall tree. And let's not even talk about using a chain saw while up in the tree. I know how to use one, but...it's like 75 feet high. The costs are really high to get someone to trim these, and this seems the right time to try something different. It's going to fall on the house. So I'll get some tree-cutting bids and see just what it will take, and "try" to build some sort of tree-trimming robot. In the process I'll learn all the skills needed for waifus, save my Mom's house, and save her some bucks.

My thinking is to get away from chain saws. Build something like huge garden shears, like the beak of a parrot, with inward-cutting blades. I've talked a lot about transmissions; I think I can build one of those I talked about. Slowly close the jaws and snip trees right in two. If too big, nip at it a little at a time. A good selling point: with electric power there's no engine noise, or much less. The job is as dangerous as can be; I know someone who used to do it. A chainsaw will tear a huge chunk in you in seconds if you let it hit you. I saw a guy hit his leg one time. DAMN, the blood. It went right into his leg. There's also the homeowner's fear of the guy killing himself and suing, so knowing you will be on the ground and using robots is, I think, a big selling point.

That's my plan over the next many months. Likely, or the plan for now, is to use this kick-ass-sounding Robot Operating System and ESP32s and try to make it work. I can weld and have a decent knowledge of metal-casting aluminium (but haven't done it), so I expect I can build whatever I need, and what I can't, maybe bearings, is easy to get. I have lots of reluctance motor ideas. I'll just have to build some and see what works. A PLAN!
>>20497
>over time you need an OS or what will happen is you will have a big binary pile of stuff that will never, ever be ported to anything but one specific target and then immediately bit rot will set in and it will become useless
I think bit rot can be mitigated by something like btrfs or zfs. Also, maybe I don't understand, but if something runs on a known system you can always emulate that. Anyways, thanks for the reminder that maybe we should use some RTOS, for movements at least. Ideal would be to have code that can be used on a more common system and then transferred to such a system as soon as necessary. It's likely that people will implement things in Arduino C++ or MicroPython these days.
>but I'm terrified of climbing this really tall tree.
I did this as a child for fun. You have four limbs; it's unlikely that all of them fail or slip at the same time. Also, the branches below you would catch you while falling. Well, being slender, fit, and young would help. A belt on top of it should make it very safe.
>chainsaw ... I saw a guy hit his leg one time.
Yeah, please don't do that.
>>20575 It's a great business idea Anon. I hope you can successfully pull this off as a plan. Maybe we should consider starting a non-robowaifu projects prototyping thread here? Certainly this tree surgeon robot would be much simpler than a robowaifu will be, yet touch on many of the similarly-needed design & engineering concepts. >The job is dangerous as can be... Yeah, please do not do this yourself Grommet. You're too valuable to our group here! >>20582 >Ideal would be to have code that can be used on a more common system and then transferred to such a system as soon as necessary. Agreed. >=== -minor edit
Edited last time by Chobitsu on 02/22/2023 (Wed) 11:14:42.
>>20582
>I think bit rot can be mitigated by something like btrfs or zfs
Maybe I'm using the wrong terminology; you misunderstand. I'm not trying to be rude, just more precise. It's going to take a while to get this stuff to work. Each micro-controller will have its own glitches and exceptions. If you code for just one, then very soon the latest and greatest will come out and you will be covered up in weird bugs. "If" we can use these robot operating systems, the companies or software providers will show you which micro-controllers work with the software, and you can port to them easily. You're calling functions instead of poking raw registers and writing assembly. I have very little computer programming experience, only FORTRAN and hexadecimal assembly. Hex programming is hard and takes forever. I could do it, but the time required is immense. The assumption I'm making is that these libraries can do a lot of the little stuff while we concentrate on the larger movements of the robot. All this stuff related to timing and a whole bunch of OS-type housekeeping, I don't think I can do that, or at least not in this lifetime.

I've been looking a little at the Robot OS and it's hard enough. The link I gave above is micro-ROS; there's a larger Robot Operating System with way more features. It runs on Linux. So we may need a single-board computer with Linux, then micro-controllers to operate the limbs, sensors, etc., networked together like nerves in a human. Let's hope micro-ROS, for micro-controllers, and the larger ROS are similar; I get the impression they are. I can already see this will be harder than originally thought. I think you will almost have to learn C to do this. I've been downloading books for it. I have made a couple of C programs and I really hate this stuff. Programming is something that, to me, is satisfying "after" you finish, but a real pain in the ass while you are doing it.

As for climbing trees: when I was young I climbed plenty, but never with a chain saw running, nor am I young any more.
>>20629 Your writing was a bit verbose before and I got confused trying to follow the discussion here. I can't always parse the whole comment, or I lose the train of thought about the conversation. I think you actually made the case for using an OS, which I support, without claiming to know much about it, if there's no strong reason against it. Otherwise, someone has to make the case why not: an OS, or drivers from scratch, down to addressing the SBC? Just to make it a bit more efficient? Maybe I'm wrong, but this looks like overkill, or at least premature optimization. There should be some abstraction layer running on many SBCs; then we'll only need to write the code on top of it. What's wrong, again, with some L4 OS? Who's arguing against it? Do we even need it? Is it even necessary to have this amount of fail-safety? The real world is messy; we should make our robots able to sense and adapt to it, not move super fast and ultra precise.
>I think you will almost have to learn C to do this.
Maybe something like Nim that compiles to C is good enough? I looked briefly into L4 and it seems to have its own programming language, or maybe it's just a coding style?
>>20877
>nesper
>I generally dislike programming C/C++ (despite C's elegance in the small). When you just want a hash table in C it's tedious and error prone. C++ is about 5 different languages and I have no idea how to use half of them anymore. Rust doesn't work on half of the boards I want to program. MicroPython? ... Nope - I need speed and efficiency.
Thanks, this might come in handy. But why are we in the math thread?
>>20882 >But why are we in the math thread? My apologies to everyone here for contributing to this abuse big-time. NoidoDev, would you be willing to consider becoming involved directly in taking on the tedious tasks (>>20532) involved in cleaning our threads up with posts in their proper places? (Again, apologies for further compounding the problem with this post.)
>>20884 >would you be willing to consider becoming involved ... in cleaning our threads up with posts in their proper places? How? Trying to register as vol?
>>20901 Yes. You just have to get Robi to turn on account creation, then once you have one made, I'll assign that account as a volunteer here on /robowaifu/. Trust me, this isn't fun work, so you should give some consideration if you really want to take it on first.
Just so I can say I was not "totally" off topic: a lot of control theory, I believe, is loosely, very loosely, based on the same sort of ideas as Fourier transforms. Sine waves. I'm not a mathematician, but it seems that most of it is based on waveforms, and the math is very slow because it uses sine waves, generally. There's a set of math functions based on stretching and raising the peaks of waves, "wavelets", that is far, far faster. DjVu uses wavelets, a lot of oil-prospecting seismic software uses wavelets to tease out very fine-grained underground structures from the data, and movies use these functions to compress data. I've read that processing time for signal processing can be 10 times less using wavelets to analyze data, statistics, etc. It seems that sine-wave-based signal processing uses far more processing steps. More computing time. Wavelets use much more of a simple add-and-subtract, without a lot of matrix algebra.

I can't help but think it may be analogous to the mass of matrix multiplications that AI uses now, compared to the way Xnor.ai processes AI far faster. I'm trying to grasp the big-idea pattern here. It seems that present AI (which I'm going to equate to a waveform) uses a lot of matrix multiplications to go over every single speck of the data, analyzing each and every little data point. Xnor.ai uses far grosser examinations by asking, is this one clump of data I'm analyzing larger than that clump of data, and then passing the result on as yes or no. They only care about the larger coefficients when they are analyzing it. I see this as comparable to wavelet processing in a general, big-picture sort of way. I'm probably screwing this up, but I hope I've pointed things in a generally correct direction. https://en.wikipedia.org/wiki/Wavelet

Another offshoot of this idea is the "chirplet". There's a GREAT picture of the different waves that gives you a BIG-picture idea of what I'm trying, probably unsuccessfully, to convey at this link. I'll link the picture too. https://en.wikipedia.org/wiki/Chirplet_transform https://upload.wikimedia.org/wikipedia/commons/1/10/Wave-chirp-wavelet-chirplet.png

Look at how the different waves could be used to represent or analyze information. Here's my understanding of why this is a good thing. Look first at the "wave". Think: if you had to add a lot of these up, like a Fourier transform, it would take a lot of them to fit the signal we are approximating. I think the general idea is the same as successive approximation in calculus. So we add up all these waves to make them fit our actual data. Now look at the wavelet. It can be stretched, and its peaks raised, to fit; I think it uses fewer coefficients to fit the signal. Now look at the chirplet: since it already has a bit of stretch built into the function, it might take even less stretching and amplitude-raising to approximate the information waveform.

I think the basic idea is that you transform the signal of whatever we are trying to analyze into a convenient waveform, wavelet, chirplet, etc.; then we can use simple addition and subtraction to quickly analyze the data and tease out what is going on in this formerly complex wave of data. This vastly simplifies the processing power needed. Now, me saying this like it's some simple thing, well, it's not. Figuring out "what" transform to use and how to set it up is difficult.
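To make the add-and-subtract point concrete, here's the simplest possible wavelet, the Haar, in a few lines of C++. One pass turns pairs of samples into averages (coarse shape) and differences (detail); this is a textbook toy, not production signal processing:
```cpp
// One level of a Haar wavelet transform. Real wavelet libraries iterate
// this on the averages to build a multi-resolution pyramid.
#include <cstdio>
#include <vector>

void haar_step(const std::vector<double>& x,
               std::vector<double>& avg, std::vector<double>& det) {
  for (size_t i = 0; i + 1 < x.size(); i += 2) {
    avg.push_back((x[i] + x[i + 1]) / 2.0);  // low-pass: overall shape
    det.push_back((x[i] - x[i + 1]) / 2.0);  // high-pass: local detail
  }
}

int main() {
  std::vector<double> signal = {4, 6, 10, 12, 8, 6, 5, 5};
  std::vector<double> avg, det;
  haar_step(signal, avg, det);
  for (size_t i = 0; i < avg.size(); ++i)
    std::printf("avg=%5.2f det=%5.2f\n", avg[i], det[i]);
  // Small details can be thrown away (compression) or thresholded
  // (denoising) -- all with additions and subtractions, no sines.
  return 0;
}
```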
Maybe what needs to be done is to figure out what method, transform, or operation would be most advantageous for us to use. What I'm trying to do is state what the problem "is" and how to go about solving it, not that I necessarily know the answer. And there's always the chance that I have just fomented a huge case of the Dunning-Kruger effect and have no idea what I'm talking about. If so, please inform me this is the case and try to explain in a way such that my little pea brain can understand what might be a better solution.
>>20963 >And there's always the chance that I have just fomented a huge case of the Dunning-Kruger effect and have no idea what I'm talking about. Lol. There's a somewhat-disreputable-but-probably-reasonably-effective adage that goes 'Fake it till you make it'. Maybe you're on the right track here, but are yet-unexperienced-fully in using the language to spell it out clearly for us plebeians? Regardless, you certainly have the correct sentiments about efficient processing being a (very) high priority for the proper design of good robowaifus. Drive on! :^)
I found something I think could be really fruitful: geometric algebra. I was reading a totally unrelated blog and ran across this comment: "...Geometric Algebra. It's fantastic, it's true, it's representative of how the universe works, it's relatively (hah!) simple, it's predictive, and it's almost completely ignored in scientific education. The behavior of both complex numbers and quaternions emerges from GA. Quantum spinors emerge from GA. Maxwell's 4 equations become a single GA equation describing the relationship between electric charge and magnetism. And all this derives from a single, simple, unifying principle..."

To my limited understanding, this appears to be a fairly easy way to do complex calculations on vectors and many other problems, including those of many dimensions. It's been a long time since I studied math, but I remember taking a class on complex numbers and how you could change them to vectors, and in consequence multiplying, adding, and otherwise manipulating them became very easy. I think this is much the same: you place what you are computing into this vector format, and then it becomes fast, with low computing power needed to manipulate it. The power of this impressed me, as you can take Maxwell's electromagnetic quaternion math, don't ask, and reduce it to more easily manipulated vectors for calculation.

Anyways, here's a book on it: Eduardo Bayro-Corrochano, "Geometric Computing: for Wavelet Transforms, Robot Vision, Learning, Control and Action". And notice it says "wavelets". I had an intuition that wavelets would be helpful to us. Maybe they are. https://en.wikipedia.org/wiki/Geometric_algebra

You can go to http://libgen.rs/ and type in "Geometric Algebra" with the nonfiction/sci button selected to find many more books on this. I tried to upload the book I mentioned and it stops at 71%. It's maybe too big. So go to the address above and enter the title I mentioned, and you should be able to find the book. It's where I got it from. This address is a great way to find books and scientific articles.
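As a tiny taste of how complex-number behavior falls out of GA, here's a hand-rolled 2D rotor in C++. The sandwich product R v ~R is expanded by hand from the rules e12*e1 = -e2 and e12*e2 = e1; this is my own toy sketch, not code from the book:
```cpp
// A 2D geometric algebra rotor: the even multivector s + b*e12 acts
// just like a complex number, and R v ~R rotates the vector v.
#include <cmath>
#include <cstdio>

struct Rotor { double s, b; };  // R = s + b*e12
struct Vec   { double x, y; };  // v = x*e1 + y*e2

Rotor make_rotor(double theta) {  // R = exp(-e12*theta/2)
  return { std::cos(theta / 2.0), -std::sin(theta / 2.0) };
}

// Sandwich product v' = R v ~R, with ~R = s - b*e12.
Vec rotate(Rotor R, Vec v) {
  double a = R.s * v.x + R.b * v.y;   // e1 part of R*v
  double c = R.s * v.y - R.b * v.x;   // e2 part of R*v
  return { a * R.s + c * R.b,         // e1 part of (R*v)*~R
           c * R.s - a * R.b };       // e2 part
}

int main() {
  double pi = std::acos(-1.0);
  Vec w = rotate(make_rotor(pi / 2.0), {1.0, 0.0});  // rotate 90 degrees
  std::printf("(%.3f, %.3f)\n", w.x, w.y);           // expect (0, 1)
  return 0;
}
```
In 2D this reproduces complex multiplication exactly; the payoff of the GA formulation is that the same sandwich pattern carries over unchanged to 3D rotations (where rotors are quaternions) and higher dimensions.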
>>22452 This is pretty fascinating Grommet, I think you might be onto something. The point about Maxwell's equations is spot-on. They are in fact a kludge-job. (Please don't misunderstand me, James Clerk Maxwell was a brilliant, impeccable man. A true genius. He simply started from the premise of the reality of 'The Ether', which doesn't exist.) Everything they attempt to describe can be done much more simply and elegantly today. Therefore, since it's correct on that major point, this stuff is probably right about the other things as well. Thanks Anon! Cheers. :^)
>'The Ether', which doesn't exist
BLASPHEMY! BLASPHEMY! I must immediately remove myself to the cleansing force of the Inertia Field. :) Did you know that this experiment DID NOT find that the speed of light is the same going in the direction of Earth's orbit as compared to perpendicular to it? It did not. https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment The textbooks say it did, but it did not. I have read, in the university library, an original copy of the experiment from the men themselves. In the back it gives the differences. And many, many, many other experiments gave the same results. The most recent one gave a null result, BUT they did it in an underground mine. Cheaters. Maybe they know more about this than they let on. Rid yourself of this silly pseudoscience that there's no ether.
>>22613 From Newton and Math to Ether. What next? Flat Earth? https://youtu.be/JUjZwf9T-cs
>>22662 I don't want to get into too much detail, I can, and maybe I will in the off-topic thread (it would take some digging through a lot of sources I believe I still have), but you should not equate what I said to flat earth. There were HUNDREDS of experiments with increasingly accurate equipment testing the Michelson-Morley experiment, and none of them gave a null result of equal speed of light parallel and perpendicular to the earth's movement in space. So I can't prove there's an ether, but I can prove that the test they SAY proves there's no ether, and their interpretation of the results they SAY they got, is incorrect. The textbook explanation of this is wrong.
>>22613 Haha, thanks Grommet. You are henceforth teh /robowaifu/ official court Natural Philosopher. The great Michael Faraday is the chief of your clan. :^)
Found a (new to me) book that covers geometric algebra: Geometric Algebra Applications Vol. II: Robot Modelling and Control - Eduardo Bayro-Corrochano (2020). The blurb on this thing sounds like a candied wonderland. I'll have to slog through it, not that I would understand it all. Some high points:

"...This book presents a unified mathematical treatment of diverse problems in the general domain of robotics and associated fields using Clifford or geometric algebra. By addressing a wide spectrum of problems in a common language, it offers both fresh insights and new solutions that are useful to scientists and engineers working in areas related with robotics. It introduces non-specialists to Clifford and geometric algebra..."

Unified domain. YEAH. Learn one thing and do it over and over!

"...Lie algebra, spinors and versors and the algebra of incidence using the universal geometric algebra generated by reciprocal null cones..."

"Incidence", "null cones": doesn't that sound a whole lot like that crazy thing I postulated? Using a set point on a bot body, then specifying offsets to move limbs? >>22111 Sounds like it to me (maybe). So maybe here's a way to get the math to work.

"...Featuring a detailed study of kinematics, differential kinematics and dynamics using geometric algebra, the book also develops Euler Lagrange and Hamiltonians equations for dynamics using conformal geometric algebra, and the recursive Newton-Euler using screw theory in the motor algebra framework. Further, it comprehensively explores robot modeling and nonlinear controllers, and discusses several applications in computer vision, graphics, neurocomputing, quantum computing, robotics and control engineering using the geometric algebra framework..."

WOW. And he even has a section to make Chobitsu giddy with joy: "...an entire section focusing on how to write the subroutines in C++... to carry out efficient geometric computations in the geometric algebra framework. Lastly, it shows how program code can be optimized for real-time computations..."

I'll try to upload it, but here's a link if that fails: http://library.lol/main/7C2C1AEAA23194B1D55E218BE5EE87E7 It won't upload, so you'll need the link. It's 20.6MB.
>>23088 >Geometric Algebra Applications Vol. II_ Robot Modelling and Control Neat! Nice title. >And he even has a section to make Chobitsu giddy with joy, LOL. Thanks Anon, I'm giddy. :^) (Actually, it's everyone here that will be 'giddy' in the end, IMHO. C++ is our only practical option that will actually work... but I needlessly repeat myself :^) >Lastly, it shows how program code can be optimized for real-time computations..." Sounds like an amazing book if he delivers. Thanks Grommet! Cheers. :^)
>>23088 If anyone here speaks Spanish, maybe you can help us track down the software. The book's link given is http: //www.gdl.cinvestav.mx/edb/GAprogramming So, AFAICT it looks like the 'edb' account is no longer part of this Mexican research institution (at least on the redirected domain for this link). A cursory search isn't turning up anything else for me. Anyone else here care to give it a try? TIA. >=== -prose, sp edit -break hotlink
Edited last time by Chobitsu on 06/12/2023 (Mon) 21:44:14.
>>23095 That link is dead for me
>>23097 >That link is dead for me Yup, thus my request. This link (the new, redirected domain by the Mexican government) is dead: https ://unidad.gdl.cinvestav.mx/edb Even though his CV https://www.ais.uni-bonn.de/BayroShortCVSept2021.pdf (interestingly, located at a German research group) still lists this as his page. Whatever his other (impressive) mathematical accomplishments, he sure makes it hard to find his book's software, heh. :^) --- Also, AFAICT the official book publisher's (Springer-Verlag) page doesn't have any software links either. Am I just missing something, anons? https: //link.springer.com/book/10.1007/978-3-030-34978-3 --- Here's an entry for his work at the MX institution; it appears to be a grant amount. Again, Spanish may help out here. https: //www.gob.mx/cms/uploads/attachment/file/458453/CB2017-2018_2ListaComplementaria_Abril2019.pdf p5: A1‐S‐10412 Percepción Aprendizaje y Control de Robot Humanoides ("Perception, Learning and Control of Humanoid Robots") Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional EDUARDO JOSE BAYRO CORROCHANO INVESTIGADOR CONTINUACIÓN $1,974,315.95 >=== -add publisher's link -minor edit -add grants link -break hotlinks
Edited last time by Chobitsu on 06/12/2023 (Mon) 21:43:35.
I went to Google, in desperation, as a last resort, and used their translate. He has a site at the school with his publications listed, but...no links to the code. I tried searching for the book + code and all sorts of variations. I'm usually reasonably good at finding things, but...a big blank on this code. It's also not on the Internet Archive. There's a possibility that his code, though not exactly conforming to the book, is in his papers, as the book seems to be a summation of them. You can find his papers here: http://libgen.rs/scimag/?q=Eduardo+Bayro-Corrochano So whatever code you are looking for, match the subject with the paper and maybe the code will be in the paper. Or at the least, a mathematical representation of what the code is supposed to do.
More searching, and I found a page full of software for geometric algebra. Not his, unfortunately, but lots, even in C++. https://ga-explorer.netlify.app/index.php/ga-software/
And look at the publications page for this. It's all about integrating GA with computing and how to go about it. Interesting blurbs:

"...Geometric Algebra (GA) in diverse fields of science and engineering. Consequently, we need better software implementations...For large-scale complex applications having many integrating parts, such as Big Data and Geographical Information Systems, we should expect the need for integrating several GAs to solve a given problem. Even within the context of a single GA space, we often need several interdependent systems of coordinates to efficiently model and solve the problem at hand. Future GA software implementations must take such important issues into account in order to scale, extend, and integrate with existing software systems, in addition to developing new ones, based on the powerful language of GA. This work attempts to provide GA software developers with a self-contained description of an extended framework for performing linear operations on GA multivectors within systems of interdependent coordinate frames of arbitrary metric. The work explains the mathematics and algorithms behind this extended framework and discusses some of its implementation schemes and use cases..."

Another paper:

"...Designing software systems for Geometric Computing applications can be a challenging task. Software engineers typically use software abstractions to hide and manage the high complexity of such systems. Without the presence of a unifying algebraic system to describe geometric models, the use of software abstractions alone can result in many design and maintenance problems. Geometric Algebra (GA) can be a universal abstract algebraic language for software engineering geometric computing applications. Few sources, however, provide enough information about GA-based software implementations targeting the software engineering community. In particular, successfully introducing GA to software engineers requires quite different approaches from introducing GA to mathematicians or physicists. This article provides a high-level introduction to the abstract concepts and algebraic representations behind the elegant GA mathematical structure..."

https://ga-explorer.netlify.app/index.php/publications/

I'm getting the feeling that with this GA framework you can learn one scheme and repeat it over and over, saving computing resources and unifying all the computing in one big scheme that can be reused with far fewer resources. Now this is VERY MUCH like that Rebol programming language that I blathered on about so much. One of its BIG strengths is the unifying character of "series lists" and the manipulation of them. It's why Rebol can pack all those different functions into the software package and still be a megabyte. I see this sort of thing all over the place. I want to emphasize I'm not a math whiz, or even a fizzle, but I'm OK at recognizing patterns. Like the Plan 9 and QNX operating systems: they use to great effect the idea of making everything in the code pass messages instead of a mish-mash of pointers and other such drivel. A counter-example to show the difference: Linux is old-school mish-mash, so it's a huge hairball of mass and dreckage, while QNX and Plan 9 are light, tidy things. The L4 microkernel family does this also. In fact it was a dog speed-wise until they reworked its message passing; then it took off. I think they use a version of this in F-16s as the OS.
Now, I also know next to nothing about AI, but I do know it's a huge mass of matrix manipulations. And it's very likely, as with Maxwell's quaternion calculations, that GA can whittle them down to size. It may be that the same sort of resource compaction can be done for AI with GA also. Or maybe not. One more link: https://hackaday.com/2020/10/06/getting-started-with-geometric-algebra-for-robotics-computer-vision-and-more/
>>23145 There's a library for that called OpenCV. You can do it from scratch if you want, though.
>>23143 >>23144 Thanks Grommet! We'll keep looking from time to time. :^) >>23147 Thanks for the info Anon. OpenCV is pretty amazing IMO.
