/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



AI Software Robowaifu Technician 09/10/2019 (Tue) 07:04:21 No.85
A large amount of this board seems dedicated to hardware. What about the software end of the design spectrum? Are there any AIs good enough to use?

The only ones I know about offhand are TeaseAi and Personality Forge.
Open file (98.29 KB 1000x562 BERT.jpeg)
I wonder how I should go about trying to implement BERT using mlpack (if this is even possible)? I found some Chinese guys who apparently managed something along this line in straight C++. https://github.com/LieluoboAi/radish/tree/master/radish/bert Surely a library that's specifically designed to support AI & ML should make this effort relatively simpler? I wonder what kind of things I should study to be able to pull this off?
>>5966 There's a simple PyTorch implementation here: https://github.com/codertimo/BERT-pytorch
You'll need to understand attention, multi-head attention, positional embedding, layer normalization, Gaussian error linear units (GELU), transformers and BLEU scores.
Layer normalization: https://arxiv.org/pdf/1607.06450.pdf
GELU: https://arxiv.org/pdf/1606.08415.pdf
Multi-head attention, positional embedding and transformers: https://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf
BERT: https://arxiv.org/pdf/1810.04805.pdf
BLEU: https://www.aclweb.org/anthology/P02-1040.pdf
The functionality needed to implement BERT in mlpack was recently added: https://mrityunjay-tripathi.github.io/gsoc-with-mlpack/report/final_report.html
It seems the pull requests to add the model and transformer block are working but haven't been fully accepted yet:
Transformer PR: https://github.com/mlpack/models/pull/16
BERT PR: https://github.com/mlpack/models/pull/30
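As a taste of one of those components: the GELU paper's tanh approximation is small enough to check in plain Python. This is just the published approximation sketched out for intuition, not mlpack's or PyTorch's actual code:

```python
import math

def gelu(x):
    # tanh approximation of the Gaussian Error Linear Unit:
    # GELU(x) ~= 0.5*x*(1 + tanh(sqrt(2/pi)*(x + 0.044715*x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

# exact GELU(1) is 1*Phi(1) ~= 0.8413; the approximation lands very close
print(gelu(0.0), gelu(1.0), gelu(3.0))
```

Unlike ReLU it's smooth through zero, which is part of why transformers favor it.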
>>5964 It depends on what you would like your AI to do. Everything it learns depends on the data it is given. For example, if I wanted one that can read research papers and discuss them with me I'd need to convert my collection of papers into plain text, find a dataset for document-grounded conversations, a Wikipedia-like dataset of machine learning concepts, and optionally another conversational dataset for extra training data. If you wanted to create an AI that can come up with ideas for games, you'd have to create a dataset of articles and reviews on games. Likely you don't want just random game ideas though. To improve the quality of output you would need to sort the data into what you want and don't want. So you could collect ratings data from game review sites, and then use these two datasets to generate new game ideas with high ratings. If you wanted to create an imageboard bot, you'd need to collect imageboard posts and sort what you want and don't want. If you wanted it to generate posts that get lots of replies, then this data could be extracted from the post dataset by counting the replies to each post. In summary, ask yourself: >What do you want to create? >What data is available? >What qualities do you want it to have? >What data is needed to sort or transform it by that quality? >Optionally, what extra data could help enhance it?
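The reply-counting step mentioned above (extracting an engagement signal from an imageboard dump) is only a few lines. The post fields here are hypothetical, patterned on typical chan-archive JSON:

```python
import re
from collections import Counter

def count_replies(posts):
    """Count how many posts reply to each post number via >>No quotes."""
    reply_counts = Counter()
    for post in posts:
        # every >>12345 quote in a body counts as one reply to post 12345
        for ref in re.findall(r'>>(\d+)', post["body"]):
            reply_counts[int(ref)] += 1
    return reply_counts

posts = [
    {"no": 100, "body": "robowaifu when?"},
    {"no": 101, "body": ">>100 two more weeks"},
    {"no": 102, "body": ">>100 >>101 never, anon"},
]
print(count_replies(posts)[100])  # post 100 got 2 replies
```

The resulting counts can then label the same posts as high- or low-engagement training examples.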
>>5970 >If you wanted to create an imageboard bot Hmm. Having a waifu who could shitpost bantz alongside me every day sounds pretty interesting haha. It probably wouldn't be too hard to collect millions of posts from archive resources. I guess going through and sorting it all would take years though...
>>5969 One of the points of the whole exercise is to be freed from both Google and Python (well, at least the ML-libraries pitfall trap). Being freed from G*ogle needs no further explanation IMO. The Python thing is related to the terribly convoluted, non-backwards (or forwards!)-compatible dependency hell involved. Far worse than anything I've ever encountered in years working with C, C++, & C#. I actually had no idea dealing with Python could be this painful, since my experience was limited to simple projects in bunny classes. To be quite frank, I've been rather discouraged by it, as I haven't managed to get a single example Python project posted on /robowaifu/ working at all over the past year.
There's also the runtime performance issue. I realize that in large part these Python scripts are just front-ends for underlying C & C++ compiled libraries, but still it adds some overhead vs straight, optimized compiled C binaries. If we're really going down the path of providing inexpensive robowaifus to men the world over--many with very limited resources available to them--then we really do need to take a proactive, positive stance towards an embedded approach to our software projects. Relying on huge-ass & tricky libraries, doled out to us from locked-off walled gardens at the whims of the 'Protectors of Humanity', seems to me to be the exact antithesis of this approach.
Regardless, I very much appreciate what you're doing Anon. I've gained a whole lot of both understanding and encouragement from your inputs here on /robowaifu/. And also thanks for this wonderful response, too. I'll be going over all the links you provided here today to see if I can get a better understanding of how to approach this.
Now, any chance of you giving the same degree of response to the similar question 'I wonder how I should go about trying to implement LSTM using mlpack (if this is even possible)?' This seems an even more powerful-sounding approach. :^)
>>5973 >as I haven't managed to get a single example Python project posted on /robowaifu/ working Actually, check that. I did manage to get Anon's very cool Clipchan working (other than anything to do with the Spleeter parts of it).
>>5973 Yeah, the way people use Python, it's becoming a dumpster fire, and pip is a nightmare I don't wanna deal with, especially with major libraries dropping support for common systems and freezing their dependencies to specific versions. No one is maintaining a stable package distribution and it's just an unstable clusterfuck. Python's performance isn't too big a deal though, since slow code can be compiled to C easily with Cython. There's still some overhead but it's extremely fast, especially doing loops, which the Python interpreter really struggles with. For me Python is more like a sketchpad for prototyping ideas rapidly. If I was doing heavy training, I'd wanna use C++ to get the maximum performance.
I don't have much experience using mlpack but LSTMs are already implemented: https://mlpack.org/doc/mlpack-git/doxygen/namespacemlpack_1_1ann.html
You can implement a bidirectional LSTM by creating two LSTMs, feeding the input sequence in reverse to the second one, and summing the outputs of the two. If you wanna implement an LSTM yourself as an exercise, you'll have to learn how to implement an RNN first and then read the LSTM papers. There's plenty of great tutorials around, probably even specific ones for C++ on how to do it from scratch without any library. The ANN tutorial is also helpful to get started: https://mlpack.org/doc/mlpack-3.1.0/doxygen/anntutorial.html
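The two-LSTM trick above is really just wiring, and it can be sketched language-agnostically. The "cell" below is a toy stand-in (one scalar state), not a real LSTM and not mlpack's API; only the forward/reverse/sum structure is the point:

```python
def run_rnn(step, xs, h0=0.0):
    """Run a toy recurrent cell over a sequence, collecting one output per step."""
    h, outs = h0, []
    for x in xs:
        h = step(h, x)
        outs.append(h)
    return outs

def bidirectional(step, xs):
    """Bidirectional wrapper: forward pass + backward pass, summed per timestep."""
    fwd = run_rnn(step, xs)
    bwd = run_rnn(step, list(reversed(xs)))
    bwd.reverse()  # re-align so index t pairs the forward and backward states
    return [f + b for f, b in zip(fwd, bwd)]

step = lambda h, x: 0.5 * h + x  # stand-in for an LSTM cell update
print(bidirectional(step, [1.0, 2.0, 3.0]))
```

Each output position then carries context from both the past and the future of the sequence, which is exactly what BERT-style models want.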
>>5975 >Python
I'm glad you at least understand my pain heh. :^) One of the great things optimized, compiled code from a systems-level language like C (and, with care, C++) can bring to the table is efficiency in both time and space. Even Cython-compiled binaries probably bring along a storage-size issue that is likely to be a challenge for embedded, by and large. And small-footprint code very often runs faster as well, due to the laws of physics going on inside compute cores. >links Thanks very much. I will add those links to the list for today.
>>5976 >If I was doing heavy training, I'd wanna use C++ to get the maximum performance. BTW, it's not the training I care so much about. A) We can create our own hardware system for that B) We can create the delicately-organized dependencies just so for each training system. It's the runtime perf that matters, b/c that's where the rubber will meet the road for day-to-day robowaifu life for millions of men in the future!
>>5972 You definitely don't wanna sort every single piece of data by hand. It's possible to create filters to sort the data for you. In many cases you can train a separate AI to sort the data by showing it a few examples of what you want. Let's say you downloaded an archive with millions of posts and only wanted posts speaking positively about a certain topic. Since you're asking for two dimensions (posts on a specific topic, and positive sentiment), you would give it 2x2 classes of examples to work with (on-topic/positive, off-topic/positive, on-topic/negative, off-topic/negative), but you only need a few examples. You (or the AI software) would split the examples given into two datasets, a training set and a validation set. The AI trains on the training set, then validates that it's learning to sort correctly by evaluating itself on the validation set and reports back. If it's classifying the validation set correctly, then it's working correctly and its output can be used to make the AI that generates posts about the topic you want with positive sentiment. If it fails, it needs to be fed more examples until it succeeds.
Anything that's repetitive can be automated with scripts or AI. While collecting data you wanna subtract the repetitive work and focus on the meaningful, like mirroring archives so you have a copy in case they disappear (as datasets online often do), or compiling meaningful lists of URLs and downloading them automatically to be mined later. For example, to download all the articles off Niche Gamer, you just tell your preferred web crawler to download something like nichegamer.com/*/*/*/*/ and let it fly at it. If you ever feel like you're going one-by-one through a massive list of things, you're likely doing something that can be automated.
In the next 5-8 years almost the whole pipeline will be handled by AI. People won't need to know how AI works any more than they need to know machine code to use their computer.
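The train/validation split described above might look like this sketch; the fraction, seed, and toy 2x2 labels are all made up for illustration:

```python
import random

def split_examples(examples, val_fraction=0.2, seed=0):
    """Shuffle labeled examples and split them into (train, validation) sets."""
    rng = random.Random(seed)          # fixed seed: reproducible split
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

# toy 2x2 labels per post: (on_topic, positive)
examples = [("post %d" % i, (i % 2 == 0, i % 4 < 2)) for i in range(20)]
train, val = split_examples(examples)
print(len(train), len(val))  # 16 4
```

The classifier only ever trains on `train`; `val` stays unseen so its accuracy there honestly reports whether the filter generalizes.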
>>5977 There's a lot of techniques for reducing models by 2-3 orders of magnitude with little accuracy loss so they can run on mobile devices, such as network pruning, sparse networks and knowledge distillation. Doing it manually today is quite a bit of work but it will be automated in the near future and consumer hardware and algorithms will be much faster, so I haven't been too worried about the runtime performance. But now that I think about it I'm sure people will want the latest and greatest features and demand maximum performance. We definitely don't want people buying Alexa spydroids because our damn GNU/Waifus run like fucking GIMP.
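Of the compression techniques named above, magnitude pruning is the easiest one to picture. A toy one-shot version over a flat weight list (pure Python for illustration, not any framework's API):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (one-shot pruning)."""
    k = int(len(weights) * sparsity)
    # threshold = magnitude of the k-th smallest weight; everything below it goes
    threshold = sorted(abs(w) for w in weights)[k] if k < len(weights) else float("inf")
    return [w if abs(w) >= threshold else 0.0 for w in weights]

print(prune_by_magnitude([0.9, -0.05, 0.4, -0.01], sparsity=0.5))
```

In real networks the zeroed weights let you store and multiply sparse matrices, which is where the 2-3 orders of magnitude in size and speed come from (usually with some fine-tuning afterwards to recover accuracy).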
>>5923 You know, I got to thinking that this seemed like it might be something related to the compiler's optimization of these fast template generics, and that my old Intel hardware might be the cause. So, I set up OpenBLAS on a less powerful (CPU-wise) 1GHz armv7hf Raspberry Pi 2 and tried again. Sure enough, the entire thing comes in at ~370-380us -- much faster.
>
I feel better about the whole thing now, but it's also an obvious reminder to profile our robowaifu code for the specific hardware configuration being used. We'd do this anyway, but now it's obvious to do it early as well. :^)
>>5987 Reducing the code down to just the matrix creation and taking its determinant brings it down to ~250-300us.
>main.cpp
#include <armadillo>
#include <chrono>
#include <iostream>

using namespace std;
using namespace arma;
using chrono::duration_cast;
using chrono::microseconds;
using chrono::steady_clock;

int main(int argc, char** argv) {
  steady_clock clock{};
  auto begin = clock.now();

  mat A = {{0.165300, 0.454037, 0.995795, 0.124098, 0.047084},
           {0.688782, 0.036549, 0.552848, 0.937664, 0.866401},
           {0.348740, 0.479388, 0.506228, 0.145673, 0.491547},
           {0.148678, 0.682258, 0.571154, 0.874724, 0.444632},
           {0.245726, 0.595218, 0.409327, 0.367827, 0.385736}};

  // determinant
  auto det_a = det(A);

  auto end = clock.now();
  cout << duration_cast<microseconds>(end - begin).count() << "us\n";

  return 0;
}
>
>meson.build
project('arma_test', 'cpp')

add_project_arguments('-std=c++17', '-Wall', '-Wextra', language: 'cpp')

cxx = meson.get_compiler('cpp')
arma_dep = cxx.find_library('armadillo')
openblas_dep = cxx.find_library('openblas')

executable('arma_test', 'main.cpp', dependencies : [arma_dep, openblas_dep])
>>5987 >related to the optimization by the compiler for these fast template generics
I might add here that one of the (few) things I dislike about Mesonbuild is the somewhat wonky way you have to specify optimizations to its build system. From Juci this basically means that if you want release-mode (-O3) optimization, you have to run an external command to do so. So, from Project > Run Command (alt+enter) fill in:
cd build && meson configure --buildtype=release && cd ..
>or do the equivalent from the command line
This will regenerate the build files, and until (and unless) you edit the meson.build file thereafter, all your builds will execute with '-O3' in Juci.
Ehh, I realize now that I'm probably getting this all out of order for anons who are following along in the Modern C++ Group Learning thread, but if you want (as I have done here) to use Meson instead of CMake inside of Juci, then first close Juci, open config.json inside Mousepad, then edit the build management system line (#82 in my file) to use meson instead of cmake:
"default_build_management_system": "meson",
>then restart Juci. New projects will then use Meson as the build system and come with a default meson.build file. I'll probably move this over into the Haute Sepplesberry or C++ thread at some point.
Open file (92.40 KB 1116x754 juCi++_114.png)
>>5978 OK, I'm going to take a shot at something like this. I've already begun to do a complete re-write on the BUMP imageboard archive software I wrote as an emergency measure a year ago or so when we moved to Julay so we wouldn't lose the board. I've since been using it regularly to keep ~80 boards archived, including /robowaifu/ ofc. During the rewrite, I'm planning to rework serialization of posts out to disk files to sort of 'standardize' the half-dozen or so IB software types BUMP currently supports. It occurs to me that that approach could be extended to not only integrate all archive site content desired, but also serve as a stand-alone desktop app that could integrate all things IB. Naturally, this seems a logical facility to begin to integrate sentiment analysis, human-specified validation, sorting & prioritization to allow a robowaifu to both read and post to imageboards. I plan to make it an open community thing for all of /robowaifu/ to give input on if they want to. What do you think, is a robowaifu-oriented Bumpmaster application a good idea? I can probably roll the machine-learning routines directly into it as I learn them, and make an interface to the program that's standardized so that any Anon creating robowaifu AI can directly use the tool with their own waifus. It will work both headless for her use, and with an IB-like GUI for the user. Sort of all came together for me today when I realized I should use a namespace and 'robowaifu' came inexorably to mind. :^) >
>>6084 Keeping the machine learning separate would be a better idea, just an interface for other programs to access imageboards and work with them. If possible, it would be great if it could generalize to other websites too. I imagine a tool where you can specify element selectors to scrape for data and output it all to CSV files or something. Besides being able to download all kinds of data, it would make it easier to maintain when imageboards change their designs.
>>6086 Hmm. Probably good advice I'm sure. I'll think it over and see if I can figure out the way to direct the tool in the direction you suggest. BTW, any suggestions for the name of such a 'Waifu Internet Frontend' tool you envision? Bumpmaster seems a little narrowly-focused for such an expansive framework tbh.
I'm just going to be blunt. I am a retarded nigger and all this white people talk is starting to make my head hurt. I want to dip my toes into the pool to see if this sort of thing is worth my time before investing serious effort into it. Is there a waifu AI that I can set up that just werks?
>>6410 Haha. I don't think there's really anything you can easily set up by yourself yet, Anon. There are plenty of different chatbots out there, but you have no privacy that way ofc. Just look around here, there are a few mentioned. I think replika.ai is a popular spybot chat atm.
>>6410 >Is there a waifu AI that I can set up that just werks? Maybe in a year or two. Even decent chatbots available at the moment require a 16 GB GPU. In two years though they'll only need 6 GB since machine learning doubles in efficiency every 16 months.
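A back-of-the-envelope check of that two-year figure, taking the 16-month efficiency doubling as the given assumption (function name is just for illustration):

```python
def projected_vram_gb(current_gb, months, doubling_months=16):
    """Project memory needed if algorithmic efficiency doubles every doubling_months."""
    return current_gb / 2 ** (months / doubling_months)

# 16 GB today -> ~6 GB in two years (24 months = 1.5 doublings)
print(round(projected_vram_gb(16, 24)))
```

16 / 2^1.5 is about 5.7 GB, which rounds to the 6 GB quoted.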
>>6416 >Even decent chatbots available at the moment require a 16 GB GPU
Do you mean RAM? Because if so, that is achievable for me. Thanks for letting me know regardless.
>>5818 Thanks, that's both encouraging and inspiring.
>>5810 You still around Anon? How's the mlpack/LSTM/raylib project going? Any progress on it yet?
>>8933 Nah, haven't been around here much. Been focusing on making virtual waifus in Godot and using WebSockets to send data between Godot, PyTorch and the web browser. Right now my priority is to earn money with my projects and build a large GPU cluster with used Tesla K80s so I can advance my research. I still wanna make an AI toolkit with mlpack and raylib but now isn't the time. Also, when raylib gets glTF support in 3.6 it will be much more ready for doing interesting AI projects. The main issue though is that most people lack the computing power to actually do anything useful with an AI toolkit. In a year or two that'll change, when people start dumping their unsupported 12 and 16 GB GPUs on the market en masse that can do amazing stuff for $100. We can snatch these cards up dirt cheap and use them with mlpack, and there won't be such an enormous barrier anymore for people to get into AI.
>>8954 Neat. Hope you make plenty of money Anon, you have some great ideas. That will be a nice event, when good GPUs become cheaply available on the used market. Really glad to know you're still with us Anon.
Open file (1.18 MB main.pdf)
Found this when people were criticizing that ML needs so much compute power and the big corps won't care. >Abstract: Strong empirical evidence that one machine-learning algorithm A outperforms another one B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, augmentation, parameter initialization, and hyperparameters choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization and hyperparameter choice impact markedly the results. We analyze the predominant comparison methods used today in the light of this variance. We show a counter-intuitive result that adding more sources of variation to an imperfect estimator approaches better the ideal estimator at a 51× reduction in compute cost. Building on these results, we study the error rate of detecting improvements, on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons. https://hal.archives-ouvertes.fr/hal-03177159
>>9281 Correction: This conversation was about small and maybe dirty datasets instead of big data. https://project.inria.fr/dirtydata/
>>9281 >people were criticizing that ML needs so much computer power and the big corps won't care. Just looking over the abstract and admittedly not digging into the paper yet, there doesn't appear to be any contrary evidence to that complaint. I think both points are objectively, provably true. It's going to be our task here to find extremely efficient ways to run AI-like tasks, if we ever hope to have them operate (necessarily in realtime) onboard our robowaifus. Simple as. Never forget Big Tech/Gov has a demonstrably vested interest in making our task infeasible on modest, hobbyist-grade compute resources. >tl;dr It's up to us to make liars of them, Anon.
>>9285 >It's going to be our task here to find extremely efficient ways to run AI-like tasks, if we ever hope to have them operate (necessarily in realtime) onboard our robowaifus. I didn't think this was much of an issue but after I gave my chatbot an avatar the response delay became really noticeable with her sitting there blinking at me. Once I'm done with my current project I'm gonna seriously look into model compression for mobile systems and implementing these models in mlpack so we can run this stuff on embedded systems. Most of the pull requests for features that transformers require have been merged so it's ready to go now. Also it's such a pain in the ass waiting 2 minutes for PyTorch and Tensorflow to load. If this stuff is ever used in a robowaifu she's gonna have to take a nap for 20 minutes just to boot up. And the disk space usage grows exponentially each year for pointless features I will never have a use for. The mlpack code I've tried so far though compiles super tiny and starts up instantly so it gives some hope of having seamless realtime experiences even on low-end hardware.
>>9288 That is very comforting to hear Anon. I can write much more extensively in response to all your points in this post, but unless you'd like me to, then I'll simply leave it at 'GOODSPEED' :^)
>>3844 ROFL! Although, this is why I sometimes think it would be best if bots just communicate in math...or maybe ultra-rapid Morse code? That might be cool.
>>9288 Did you consider using AIML while the rest of the system starts? I think that's how it will be done eventually. There could be a list of comments to give while waking up, chosen randomly every time. Later maybe adding some new responses automatically, so she could pick up while booting and ask how you were doing with what you planned to do while she was sleeping.
That aside, why shut down at all? Or alternatively, why use only one computer? Because it's still development, okay. I think we're going to have systems which use several computers simultaneously. If one fails or needs to be rebooted, the system would still be live and have the same knowledge. So there are different ways to mitigate that, and it might only be a problem while working on a part of the system on one computer, not really an issue for a full build.
>>9377 I haven't used AIML before, but that might be a good idea for dealing with loading times in a finished build. The main issue is just development really. Often I wanna test something in a model for 5 minutes and shut it down, but waiting 2 minutes for it to start up wastes a lot of time. Even once PyTorch is loaded into the disk cache it still takes 15 seconds to load. One way I try to get around this is by using Python's interactive console and Jupyter notebooks so PyTorch remains loaded, but sometimes the code I'm testing can't be imported easily without refactoring. It also takes some time loading large models, but that could be fixed by using an SSD or possibly SD Express 8.0 cards in the future with 4 GB/s read speed.
>>9377 >I think we're going to have systems which use some computers simultaneously. If one fails or needs to be rebooted the system would still be live and have the same knowledge.
You are absolutely right, and the future was already here 4 decades ago, Anon. 'Fly-by-wire' in aviation commonly has multiple, redundant control computers running simultaneously, usually in groups of 3 on modern aircraft (although the Space Shuttle sported 4 different CnC systems). All the computers receive the same inputs, all of them calculate these, and (presumably) all output the same outputs. Or it is to be hoped so, at least. And that's the basic point; by having these redundant flight computers all running, they validate the common consensus by cross-checks and elections. If one of the three malfunctions, the other two kick it out until it 'comes to its senses'. This leaves the actually not too unlikely scenario question "What happens if the two don't agree while the third is out of commission?" Thus the Shuttle's four machines on board. Additionally, it's not uncommon for highly-critical systems to require different contractors and different software running on at least one of the systems. That way, if an unknown bug of some sort suddenly crops up, it's more likely the oddball system won't exhibit it. Safety-critical control systems are both a complicated and fascinating field, and one ultimately of high importance to /robowaifu/. >>98
>>9390 >or possibly SD Express 8.0 cards in the future with 4 GB/s read speed.
Neat, I didn't know about that yet.
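The cross-check-and-election idea is simple enough to sketch: compare the redundant outputs, take the majority, and kick out any dissenter. A minimal illustration (not avionics code, obviously):

```python
from collections import Counter

def vote(outputs):
    """Majority vote across redundant controllers; flag dissenters for exclusion."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        # the "two don't agree" scenario: no majority, so fail safe
        raise RuntimeError("no majority consensus; fail safe")
    dissenters = [i for i, out in enumerate(outputs) if out != winner]
    return winner, dissenters

# computer 2 disagrees -> its output is outvoted and it gets kicked out
print(vote([42, 42, 7]))
```

With three voters a single fault is outvoted; with only two left and a disagreement the function refuses to pick, which is exactly why the Shuttle carried a fourth machine.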
Open file (21.20 KB 958x597 Selection_011.jpg)
>"Charticulator: Microsoft Research open-sourced a game-changing Data Visualization platform" >Creating grand charts and graphs from your data analysis is supported by many powerful tools. However, how to make these visualizations meaningful can remain a mystery. To address this challenge, Microsoft Research has quietly open-sourced a game-changing visualization platform. Haven't tried this myself yet, but I found this graph humorous & honest enough to make this post to keep track of the tool. > https://charticulator.com/index.html https://github.com/Microsoft/charticulator https://www.kdnuggets.com/2021/05/charticulator-microsoft-research-data-visualization-platform.html
>>10625 Okay, cool.
How do we get someone important to donate the use of one of these to us? I believe we could create some great robowaifu AI with it!!! :-DDD https://en.wikipedia.org/wiki/Blue_Gene
Does anyone have any resources on how the software integration would work? I.e., say you solve the vision piece so that waifubot can identify you as "husbandu," and you have the chatbot software so that you can talk to your waifu about whether NGE is a 2deep4u anime--how do you connect the two? How do you make it so that waifu recognizes you and says, "Hi, how's it going?"
>>12067 Is this one more of the many theoretical questions here? When building something, solutions for such problems will present themselves. Why theorize about it? And to what extend? Or short answer: Conditionals. Like "if".
>>12069 >Is this one more of the many theoretical questions here?
No. Allow me to get more specific. I have OpenCV-based code that can identify stuff (actually, I just got that OakD thing ( https://www.kickstarter.com/projects/opencv/opencv-ai-kit ) and ran through the tutorials), and I have a really rudimentary chatbot software. When I've been trying to think through how to integrate the two, I get confused. For example, I could pipe the output of the OakD identification as chat into the chatbot subroutine, but then it will respond to _every_ stimulus, or respond to visual stimuli in ways that really don't make sense.
>>12067 In my experience the simplest way to think about it is like a database. You give the database a query and it gives a response. That query could be text, video, audio, pose data or anything really and the same for the response. You just need data to train it on for what responses to give given certain queries. There was a post recently on multimodal learning with an existing transformer language model: >>11731 >>12079 With this for example you could output data from your OpenCV code and create an encoder that projects that data into the embedding space of the transformer model.
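Before anything as fancy as a learned encoder, the "responds to every stimulus" problem can be handled with a dumb salience gate between vision and chat: only detections that clear a confidence threshold and an allowlist get turned into a query. Everything here (labels, threshold, message format) is made up for illustration:

```python
def vision_gate(events, min_confidence=0.8, interesting=("husbandu", "cat")):
    """Forward only salient, high-confidence detections to the dialogue model."""
    for label, confidence in events:
        if label in interesting and confidence >= min_confidence:
            # this string becomes the text query handed to the chatbot
            yield "I see %s." % label

events = [("chair", 0.95), ("husbandu", 0.91), ("husbandu", 0.40)]
print(list(vision_gate(events)))
```

A real version would also rate-limit (so she doesn't greet you thirty times a second) and de-duplicate repeated detections of the same object, but the filtering principle is the same.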
>>12086 Exactly what my brain needed. Thanks anon.
This looks really interesting to me. llamafile https://github.com/Mozilla-Ocho/llamafile A standalone open-source AI that can be run on many platforms, including Raspberry Pi. It can also use models other than the ones available for download. https://hacks.mozilla.org/2024/04/llamafiles-progress-four-months-in/ The AI software is getting better and better, smaller and smaller, and more useful for local PCs. I imagine there's some way to train this. It could be a great advance to have these small AIs and then intensively train them on the narrow, dedicated tasks that we need. And no, I don't know exactly how to do this yet. The tech is evolving rapidly in that direction.
Open-source, consumer-level, simulated-universe robot-training AI. WOW!... maybe... Genesis https://genesis-embodied-ai.github.io/ https://github.com/Genesis-Embodied-AI/Genesis I heard about this from a guy who is using AI, successfully, to write software for his company. I comment on the "Rebol" language every so often; he was very fond of it but has moved on. His forum: http://www.rebolforum.com/index.cgi?f=home
Open file (57.58 KB 1004x1074 1734774762182155.jpg)
o3 just came out and it is multiple times better than ChatGPT-4. The argument that the underlying tech for current AI is not good enough is very weak. Also, AI could mean anything; if you don't establish a goal for what you want the AI to do, you won't accomplish anything.
>>35044 tbh I have to rely on premade AI (such as NudeNet in my case, or Pygmalion for the chat aspect), not only because of lack of knowledge but because training AI is fucking expensive and time-consuming, and in the case of NudeNet requires a lot of naked pictures.
>>35044 IIRC o3 is hella expensive for the kind of model architecture it uses; for the entire ARC-AGI test it consumed $300,000 worth of compute.
>>35044 lol, what even is this? An AI good-boy-points graph? It's like shitcoin marketing.
Open file (34.03 KB 739x415 images (32).png)
>>35047 Here is a chart of AI research from 1950 until now. Nobody has come up with something better in 74 years. I treat the current AI like a law of physics: there is no alternative in my mind.
I get the impression some may want a blank-slate waifu that starts out with the mind of a toddler and learns as time goes on. I think with the current tech it might be possible, but that implies collaboration, since no single person would pull that off, which would imply the goal be set in stone and everyone's efforts be directed towards it. It'd also imply continuous real-time training. The training data would be what's captured by the camera and sensors. The waifu would have its mind in a datacenter with GPUs running at max 24/7, I think. I don't think it's practical.
What I have in mind is a sexbot that can position itself for sex acts and can chat. It can bend over, it can tilt its head back and forth, it can position itself on top of you. It's either/or, no transitioning between positions. The difference between that and the first is like comparing a hill to a mountain.
