/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



Open file (279.12 KB 616x960 Don't get attached.jpg)
Open file (57.76 KB 400x517 1601275076837.jpg)
Ye Olde Atticke & Storager's Depote Chobitsu Board owner 09/05/2021 (Sun) 23:44:50 No.12893
Read-only dump for merging old threads & other arcane-type stuff.
>tl;dr
Look here when nowhere else works, anon. :^)
Open file (58.62 KB 604x430 12_7_9-1.jpg)
Open file (1.00 MB 1080x1920 12_7_9-2_issue.jpg)
Open file (668.98 KB 1069x1211 utility_code_refactors.jpg)
Open file (96.55 KB 604x430 12_7_10.jpg)
Alright, we've come to the end of Chapter 12 with this post, so I'm going to just leave it at the 3 example files. It was a fair bit of work getting Image, etc. working, actually. I'm only showing 2 window caps b/c the chapter.12.7.9-2.cpp example turned up some kind of bug that crashes the program if an image display is moved off-screen. To bypass it, I simply avoid moving the image, which basically would leave us with an exact duplicate of the 9-1 image, so w/e. It's probably due to some change in FLTK's API during the intervening timeframe, I'd guess just offhand. Quite frankly, I'm concerned it could turn into a real tar-baby for me, so I'm going to shelve running it down to the ground for now. Since I've been making steady progress thus far, I'd like to avoid any potential 'morasses in evil swamps' just yet. :^) Hopefully I'll be surprised in the end and it will be just a simple fix, possibly only a masking operation beforehand or something.
> #2
I also had a go at a couple of the utility functions related to Image display (there's a rough sketch of them below this post). I think they'll prove workable over time.
> #3
So anyway, it's good to get to the end of this first chapter in the GUI section of the textbook. Unsurprisingly, there's a nice learning curve to get over with all the new concepts, etc., and hopefully things are already getting easier for you as you work along with us, Anon. Cheers.
>version.log snippet
210915 - v0.1d
---------------
-add Ellipse major/minor axes test
-#include <cmath> in Graph.h
-add Circle, Ellipse, Mark to Graph.h/.cpp
-add chapter.12.7.10.cpp
-debug moving image offscreen
-add chapter.12.7.9-2.cpp
-refactor get_encoding(), can_open() utilities
-#include <fstream>, <cstdlib> in Graph.cpp
-add Image, Suffix, and some support utilities to Graph.h/.cpp
-add FLTK images library support in meson.build
-add 2 image resources to ./assets/
-add ./assets/ directory
-add chapter.12.7.9-1.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.1d.tar.xz.sha256sum
fac5b79b783052a178bf0a1691735609737fc30bbdf92bf75c0be1113a7ed13b *B5_project-0.1d.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>having of the droppings into the catboxes is much better than of being into the streets :^)
https://files.catbox.moe/fy4bp7.7z
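Since the log mentions the get_encoding()/can_open() refactor: for anyone reading along without the tarball handy, here's roughly the shape those two Image-support helpers take. This is a simplified sketch of the general Graph_lib approach, not a copy of the project's files, so names and details may differ from what's actually in Graph.h/.cpp.

// Simplified sketch of the two Image-support helpers mentioned in the log.
// The authoritative versions live in Graph.h/.cpp inside the B5_project
// tarball and may differ in detail.
#include <cctype>
#include <fstream>
#include <string>

namespace Graph_lib {

struct Suffix {
    enum Encoding { none, jpg, gif };
};

// Can a file named s be opened for reading?
bool can_open(const std::string& s)
{
    std::ifstream ff {s};
    return bool(ff);
}

// Guess an image's encoding from its filename extension.
Suffix::Encoding get_encoding(const std::string& s)
{
    auto dot = s.rfind('.');
    if (dot == std::string::npos) return Suffix::none;

    std::string ext = s.substr(dot);
    for (char& c : ext)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));

    if (ext == ".jpg" || ext == ".jpeg") return Suffix::jpg;
    if (ext == ".gif") return Suffix::gif;
    return Suffix::none;
}

} // namespace Graph_lib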
Open file (442.67 KB 919x1205 ch13_overview.jpg)
Open file (18.66 KB 604x430 13_3-1.jpg)
Open file (44.28 KB 604x430 13_3-2.jpg)
Open file (61.93 KB 604x430 13_4.jpg)
So let's get started with B5 Chapter 13, right Anon? I think for the chapter openers from now on, I'll just use 3 example files, so I can also post the chapter's overview page from the book with each one. Seems like a good idea, since Bjarne himself has already put in a lot of effort to clearly state each chapter's intentions; I'm highly unlikely to be able to improve on that tbh.
> #1
I'm skipping the first window cap, since it's almost identical visually to the second one (this is intentional; he was making a point about two different classes, Line & Lines -- there's a sketch of that distinction just below this post). As for 'versioning' the files, I'm simply planning to bump the version number by one tick for each chapter. No real reason to do anything more complicated here, since this is really just a progress dump and the version numbers are only being used to mark time.
>version.log snippet
210915 - v0.2
---------------
-add chapter.13.4.cpp
-add chapter.13.3-2.cpp
-add chapter.13.3-1.cpp
-add Line to Graph.h
-add chapter.13.2.cpp
-add ./book_code/ch13 directory
-patch East-const in various functions
-minor logic consolidation in get_encoding()
-add (overlooked) standard file footer to chapter.12.7.10.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2.tar.xz.sha256sum
d44054ad59bcadab4beaf4e542406e1e551e2617ec17db0d4e04706248020972 *B5_project-0.2.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>i'll probably just drop (sic!) making all the bad poo jokes. :^)
https://files.catbox.moe/1nik6f.7z
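For Anons without the book open, the Line vs. Lines distinction is roughly the following. This is paraphrased from memory of Stroustrup's Graph_lib and assumes its usual Shape/Point machinery; the project's own Graph.h/.cpp in the tarball is the authoritative version.

// Paraphrased sketch of the Line vs. Lines idea from Graph_lib.
// A Line is a single segment defined by its two endpoints; a Lines object
// holds any number of independent segments that are drawn and styled as
// one Shape.
struct Line : Shape {
    Line(Point p1, Point p2)
    {
        add(p1);    // a line is just its two endpoints
        add(p2);
    }
};

struct Lines : Shape {
    void add(Point p1, Point p2)
    {
        Shape::add(p1);
        Shape::add(p2);
    }

    void draw_lines() const override
    {
        if (color().visibility())
            for (int i = 1; i < number_of_points(); i += 2)   // each point pair is one segment
                fl_line(point(i - 1).x, point(i - 1).y, point(i).x, point(i).y);
    }
};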
Open file (103.55 KB 604x430 13_5-1.jpg)
Open file (130.72 KB 604x430 13_5-2.jpg)
Open file (19.63 KB 604x430 13_5-3.jpg)
Open file (22.83 KB 604x430 13_6.jpg)
So, nothing much to add here news-wise; everything's going smooth as silk for now, pretty much, Anon. Hope you're getting a feel for the classes' designs, and for how we're intentionally keeping FLTK "at arm's length" from the Graphics/GUI system being engineered here. This is a good thing, and it actually makes it pretty easy (read: relatively inexpensive) to switch out back ends later on for a better-fitting one, etc., if need be. This kind of systems abstraction brings a lot of flexibility to the table for us, both as software and systems engineers; there's a small sketch of what it looks like in practice just below this post. Anyway, here are the next 4 example files from the set.
>version.log snippet
210916 - v0.2a
---------------
-add chapter.13.6.cpp
-add chapter.13.5-3.cpp
-add chapter.13.5-2.cpp
-add chapter.13.5-1.cpp
-add Ellipse visibility set/read testing
-add Ellipse color set/read testing
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2a.tar.xz.sha256sum
9e14254bf7064783fffafb5af45945675df0855624170a0c92e0f8d957abced5 *B5_project-0.2a.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>catbox backup file:
https://files.catbox.moe/ms1dlc.7z
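What "arm's length" means in practice is that the chapter examples only ever name our own Graph_lib abstractions; all the FLTK calls stay buried inside the library's implementation files. The following is a hedged sketch in the book's style, not a copy of any particular file from the tarball, and it assumes the project's usual Graph.h/Simple_window.h headers.

// Sketch of a typical chapter example: only Graph_lib names appear here.
// All FLTK calls live inside the library's .cpp files, so swapping the
// back end later means reworking those files, not the examples.
#include "Graph.h"
#include "Simple_window.h"

int main()
{
    using namespace Graph_lib;

    Simple_window win {Point{100, 100}, 600, 400, "lines"};

    Lines grid;
    for (int x = 100; x < 600; x += 100)
        grid.add(Point{x, 0}, Point{x, 400});   // a few vertical grid lines

    win.attach(grid);
    win.wait_for_button();                      // control never leaves our abstraction
    return 0;
}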
Open file (25.93 KB 604x430 13_7.jpg)
Open file (24.46 KB 604x430 13_8-1.jpg)
Open file (25.16 KB 604x430 13_8-2.jpg)
Open file (30.04 KB 604x430 13_9-1.jpg)
So, I've got nothing to report, Anon, except to say that this puts us halfway through Chapter 13 :^). Here are the next 4 example files from the set.
>version.log snippet
210916 - v0.2b
---------------
-add Rectangle fill color set/read testing
-add chapter.13.9-1.cpp
-add chapter.13.8-2.cpp
-add chapter.13.8-1.cpp
-add chapter.13.7.cpp
-patch overlooked 'ch13' dir for the g++ build instructions in meson.build
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2b.tar.xz.sha256sum
197c9dfe2c4c80efb77d5bd0ffbb464f0976a90d8051a4a61daede1aaf9d2e96 *B5_project-0.2b.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>catbox backup file:
https://files.catbox.moe/zk1jx2.7z
Open file (30.39 KB 604x430 13.9-2.jpg)
Open file (30.38 KB 604x430 13.9-3.jpg)
Open file (27.18 KB 604x430 13.9-4.jpg)
Open file (100.14 KB 604x430 13.10-2.jpg)
The 13.10-1 example doesn't actually create any graphics display, so I'll skip ahead to the 13.10-2 example for the final one this go. I kind of like that one too, since it shows how easy it is to create a palette of colors on-screen; there's a sketch of the idea just below this post.
>version.log snippet
210917 - v0.2c
---------------
-add Line line style set/read testing
-add as_int() member function to Line_style
-add chapter.13.10-2.cpp
-add Vector_ref to Graph.h
-add chapter.13.10-1.cpp
-add chapter.13.9-4.cpp
-add chapter.13.9-3.cpp
-add chapter.13.9-2.cpp
-patch the (misguided) window re-labeling done in chapter.13.8-1.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2c.tar.xz.sha256sum
45d1b5b21a7b542effdd633017eec431e62e986298e24242f73f91aa5bacaf42 *B5_project-0.2c.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>catbox backup file:
https://files.catbox.moe/l7hdf0.7z
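The palette trick in 13.10-2 is basically a 16x16 grid of Rectangles owned by the new Vector_ref, each filled with one of the 256 base colors. Here's the idea reconstructed from memory of the book's example; the packaged chapter.13.10-2.cpp in the tarball is the real thing and may differ slightly.

// Reconstructed sketch of the 13.10-2 color-palette idea. Vector_ref owns
// the heap-allocated Rectangles so they outlive the loop that creates them,
// and each one is filled with one of the 256 base colors.
#include "Graph.h"
#include "Simple_window.h"

int main()
{
    using namespace Graph_lib;

    Simple_window win {Point{100, 100}, 320, 320, "16*16 color matrix"};

    Vector_ref<Rectangle> vr;
    for (int i = 0; i < 16; ++i)
        for (int j = 0; j < 16; ++j) {
            vr.push_back(new Rectangle {Point{i * 20, j * 20}, 20, 20});
            vr[vr.size() - 1].set_fill_color(Color{i * 16 + j});
            win.attach(vr[vr.size() - 1]);
        }

    win.wait_for_button();
    return 0;
}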
Open file (33.29 KB 604x430 13.11.jpg)
Open file (38.23 KB 604x430 13.12.jpg)
Open file (35.40 KB 604x430 13.13.jpg)
Open file (23.72 KB 604x430 13.14.jpg)
I don't think things could really have gone any smoother on this one; I never even had to look at the library code itself once, just packaged up the 4 examples for us. Just one more post to go with this chapter.
>version.log snippet
210918 - v0.2d
---------------
-add chapter.13.14.cpp
-add chapter.13.13.cpp
-add chapter.13.12.cpp
-add chapter.13.11.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2d.tar.xz.sha256sum
5fbcf1808049e7723ab681b288e645de7c17b882abe471d0b6ef0e12dd2b9824 *B5_project-0.2d.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>catbox seems to be down for me atm so no backup this time
n/a
>>13294
>catbox seems to be down for me atm so no backup this time
It came back up. https://files.catbox.moe/ty7nqu.7z
Open file (18.38 KB 604x430 13.15.jpg)
Open file (45.99 KB 604x430 13.16.jpg)
Open file (206.94 KB 604x430 13.17.jpg)
OK, another one ticked off the list! :^) Things went pretty smoothly overall, except I realized I had neglected to add an argument to the FLTK script call for the g++ lads. Patched up that little oversight, sorry Anons. This chapter has 24 example files, so it's about half again as large as Chapter 12 was. Also, the main graphic image in the last example (the Hurricane Rita track) covers up the window's 'Next' button, but the button is actually still there; just click on its normal spot and the window will close normally. There are only 3 examples for this go, so the image count is a little shy for this post.
>version.log snippet
210918 - v0.2e
---------------
-add Font size set/read testing
-patch missing '--use-images' arg in g++ build instructions in meson.build
-add 2 image resources to ./assets/
-add chapter.13.17.cpp
-add chapter.13.16.cpp
-add chapter.13.15.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2e.tar.xz.sha256sum
6bd5c25d6ed996a86561e28deb0d54be37f3b8078ed574e80aec128d9e055a78 *B5_project-0.2e.tar.xz
As always, just rename the .pdf extension to .7z, then extract the files. Build instructions are in readme.txt.
>catbox backup file:
https://files.catbox.moe/a4h1dr.7z
Open file (160.61 KB 300x300 1648484787864-3.gif)
Open file (114.30 KB 1024x686 1648815288814-0-1.jpg)
Open file (462.04 KB 1414x1710 Bukanka.png)
Let's build a Bukhanka-chan! She's the cute and funny loli version of the glorious Russian truck. She will love you and defend western civilization from evil jews and nazis. To make her faster, she'll be printed as a big action figure with RC cars glued to her shoes! Bonus points for Russian RC cars. Come, comrades! Let's build a wonderful Russian GF to defend the homelands and fight virginity! Bukhanka-chan will love you and zoom zoom with her groom! Z
Open file (533.52 KB 1332x1445 1648484787864-1.png)
Finally, a vehicle waifu to use as flashlight on slavic night!
Open file (2.07 MB 1920x1223 1648498760735.png)
Open file (4.65 MB 2160x2160 1648815288814-1.png)
>>15764
<Wah, I'm gonna be late for the start of operation again, just like last time
>unarmed
>became famous after being caught on camera
>carrying lots of painkillers, bandages and medical alcohol
>always tries to be at the frontline and help everyone who does not resist (being helped)
LOL. It's remarkable the instant traction this waifu has gained, along with her 'sisters', over the past few weeks. The original OC artist really hit a major home run with this one. Actually, I'm OK with leaving this thread up after this weekend if any of you Anons want to try for her (we already have 'skater' waifu designs in progress here). Please remember this is a SFW board though; so please keep the ecchi spoilered, on-topic, and to a minimum. :^)
Since there have been no further updates ITT, I think it's safe to say this was just a fun gag for the 1st. So, barring any further appeal, this post is notice that this thread will be merged into the Attic Thread (>>12893) sometime around Wed, 220406.
Open file (1.31 MB 1500x1024 ClipboardImage.png)
>>15775 Good decision. Otherwise Donetsk-airport-cyborg-chan will deal with it.
Open file (31.07 KB 480x360 hqdefault-1.jpg)
Good news comradez! RC Truck on sale come in tomorrow, I've secured an Arduino to be her soul and a USB battery for her heart! Rus loli mobile flashlights are on the horizon! Z
>>15778
Cute.
>Donetsk-airport-cyborg-chan will deal with it.
I didn't know she was supposed to be a cyborg, Anon. Also, does this mean you're intending to create a robowaifu project of her too? If so, it would be great to see a large variety come out of this stuff.
>>15779
So, are you implying you're going to pursue this as an actual project, Anon? If so, then I think a new thread would probably be in order, with at least a slightly more technical OP text outlining the project's goals a little better. I'd suggest using the MaidCom or Pandora thread's OP as a minimum style guide. Also, be aware that 'loli', while a fun term to bandy about in jest, can actually turn into a serious legal issue for Anons following along with your project in the current anti-pedo hysteria, politically-"""correct""" climate. Best not to go there; I'd suggest you drop the term in your new thread's OP if you actually mean to proceed. I'll delay the thread merge action until Fri 220408, to give you time to respond.
Open file (4.97 MB 4128x3096 20220405_121639.jpg)
>>15781 No worries comrad, Buhanka will rest in peace in backyard Z (I just wanted to finish the shitpost)
>>15783 Haha, OK. Good one, and a nice start! Good luck with your Buhanka, Anon! :^) I'll merge this soon, so you can find it in the Attic if you want to later. Cheers.
Open file (126.90 KB 800x1422 urabe.jpg)
siiiiuu
Hello Anon. I'm going to lock this thread but leave it up for a little while, so hopefully you'll be around again and see this. We're a topic-based, English-language board about creating robot wives for Anons. Feel free to lurk for a while to get used to the board, then please join in! Cheers.
https://anon.cafe/christmas/res/40.html
>===
-redirect link to /christmas/+/animu/ radio stream thread
Edited last time by Chobitsu on 12/18/2022 (Sun) 21:51:20.
BTW, if you want to use Bumpmaster (>>17561) to capture the current /christmas/ stream (begins today 23h UTC, 15h PST) to your local disk, just replace the current code with this, if needed:

bump.pull_anon_body("http://prolikewoah.com:8989/radio");      // outdated now; was yesterday's stream
bump.pull_anon_body("http://radio.anon.cafe:8000/christmas");

into the main_bumpmaster.cpp file & recompile.
>===
-minor fmt edit
-update stream codeblock for /animu/ 's
-fix erroneous crosslink lol
Edited last time by Chobitsu on 12/18/2022 (Sun) 23:35:34.
Open file (440.46 KB 640x832 Rasaroleplay.jpg)
Don't know how to really start this thread, but as the title states, I need some guidance on chatbots. I'm currently trying to create a chatbot in Rasa for the purpose of roleplaying, using the chatbot as a sort of DM story assistant in a persistent world with different characters. So far with Rasa I've got an introduction screen with graphics and a basic chat GUI that I'm going to improve, and I'm working on creating persistent locations and characters, as well as a character selection/creation menu. My issue is that I simply don't know enough about the field to understand if Rasa suits my purposes, if there's an easier way to do what I want with a different program, if there are easier ways to train the bot than how I am, etc.

I'm going to be up front and admit I am not autistic-smart. I am fairly intelligent, however I do have severe ADHD. I know that ADHD is the current 'I have autism', but I don't think those people understand what it is they're claiming to have. There's nothing like having someone repeat something multiple times directly to your face only to instantly forget it the moment you turn away, or having your entire body be consumed by fire from the inside until you are compelled to stand up and leave the room, because you were trying to study an interesting subject that required you to sit still and concentrate for more than five minutes.

I'm fine with Python; I'm using ChatGPT to help with the code and it just loves showing me how, and I find it's been a more effective teacher than anyone I've asked for assistance. I'm fine with training the bot itself, although if there are prebuilt models that can be incorporated I would be interested in hearing about that. What I am looking for is some general and straightforward tips and suggestions about what I could be doing to make things easier on myself. Not necessarily a 'mentor', but someone willing to take a few minutes to give me some good suggestions. From what I've seen, you guys are probably the perfect place to ask. I'm not asking for help developing anything, I'm just looking for an introduction to the idea and a point in the right direction in regards to programs and resources, with the consideration that I am a functionally retarded adult.
The way this website works, imo, is that people share what they've found or built, or just ideas they've had, in regards to developing the necessary technology to build robowaifus, and learn about it along the way. No one is going to be mad at someone asking how something works, but it's just not primarily a place to ask questions; that's backwards from how it runs. It might still work, but even then it's better to look into other sources. That said, I just posted something about roleplaying chat recently >>22497
>>22533
Honestly, I feel like it's the perfect place to ask. The people on this board seem to be active and very interested in the field of A.I. development, and have demonstrated a clear capability in applying that interest. Also, I feel that the two concepts are fairly close in intent, except that you guys are obviously trying to build something that will respond as a companion, whereas I'm trying to develop a type of storyteller. Really, it's stuff like >>22497 that I'm looking for, which I appreciate by the way, thanks for the link. Stuff like 'why the hell are you using Rasa when you could be using this easier-to-run fork of whatever-bot', or 'why not try this other program that's slightly harder but more capable of parsing context'. If this isn't the type of board for this type of request though, then I apologise; I don't want to intrude or impose myself. Thank you for your time and the link.
>>22537
Thanks, as I already wrote: it's not a problem. Though your new thread might get merged into an already existing one at some point.
Hello Anon, welcome! I'd say just scan over our other threads on these topics (there are several), and you should get some pointers for where to ask, if not the solution itself. This stuff is hard, and running local models is a challenge ATM. You can expect this to get dramatically better over the next year I predict. Just keep at it, and remember why you're doing this in the first place! :^) BTW, this thread may be merged at some point with our other chatbot thread. Good luck with your project Anon.
>>22530
I would just start with a Pygmalion model and Oobabooga for running it and training a LoRA fine-tune. You would need a lore dataset, or a way to make an embedding of all your lore and a way for the model to search through that embedding when it responds to the user.
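To make the embedding-search idea concrete, here is a minimal, hypothetical sketch of just the retrieval step, written in C++ like the rest of the board's example code. Everything in it (the LoreChunk type, the retrieve() helper) is made up for illustration; a real setup would first generate the vectors with whatever embedding model your pipeline provides, then paste the retrieved chunks into the prompt.

// Hypothetical sketch of embedding search over lore chunks: score every
// chunk against the query embedding with cosine similarity and return the
// top-k most relevant texts. The embeddings themselves are assumed to come
// from an external model; only the search step is shown here.
#include <algorithm>
#include <cmath>
#include <string>
#include <utility>
#include <vector>

struct LoreChunk {
    std::string text;           // a paragraph of world lore
    std::vector<float> vec;     // its embedding
};

float cosine(const std::vector<float>& a, const std::vector<float>& b)
{
    float dot = 0.0f, na = 0.0f, nb = 0.0f;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-9f);
}

std::vector<std::string> retrieve(const std::vector<LoreChunk>& lore,
                                  const std::vector<float>& query,
                                  std::size_t top_k)
{
    std::vector<std::pair<float, std::size_t>> scored;
    for (std::size_t i = 0; i < lore.size(); ++i)
        scored.emplace_back(cosine(lore[i].vec, query), i);

    std::sort(scored.rbegin(), scored.rend());      // highest similarity first

    std::vector<std::string> out;
    for (std::size_t i = 0; i < top_k && i < scored.size(); ++i)
        out.push_back(lore[scored[i].second].text);
    return out;                                     // paste these into the prompt
}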
I'll update this thread with my efforts and sort of use it as a sounding board. If it gets merged with another thread, that's fine. Just getting into this makes me realise I am an absolute neophyte.

I have to say this: after starting to look into A.I. and its current development, practical use, and potential use, I believe a great number of people fail to realise how transformative this technology is, and that's irrespective of its ability for 'sentience'. This is an industrialisation of intellect. In five years India's GDP will be cut in half. This changes every field that deals with collating, processing, handling, providing context, etc. This technology will replace call centres, it will replace office clerks, accountants, personal therapists; the list goes on. Commercial artists have a few years maybe? Anyone that does small coding projects has basically already been replaced. Content writers are literally right on the way out the door; Hollywood studios don't even care if they go on strike, they want them to. And that isn't my opinion, that's a matter of fact. That's how our capitalist-based system works: cut costs and streamline all systems to be as 'efficient' and cost-effective as possible. It isn't even a joke; there are so many professions I can think of that are affected by this, and I don't think the majority of people realise that they are literally redundant within maybe five or ten years. Google, with billions of dollars, is telling people in the industry that their multi-billion-dollar research is being improved on and potentially passed by college students, I.T. techs, and hobbyists, and what OpenAI has right now is capable of replacing most office workers, copy writers, coders; the list goes on.

GPT-4 is fun to use, very responsive, and far easier to 'jailbreak' than ChatGPT 3.5, but you can make both do whatever you want. I don't think I have to explain linguistic logic to you guys, if anything it would be the other way around, so you probably know exactly what I mean. But it also provides real assistance apart from writing naughty stories or swearing; it's very effective as a teaching guide.

>>22563
>>22586
Thanks, I've gotten more help in two casual posts here than in about a month of trying to talk to people elsewhere. Playing with Rasa is interesting, though it's as if someone specifically designed it to be a pain in the ass. You need Python, but not that version of Python, and those are the wrong TensorFlows, and SQLAlchemy has to be a specific version, but not the version you're using, another version... and then it does work but you can't get it to shortcut to the GUI script... Whereas it took me literally two seconds to find Oobabooga, load the Pygmalion model, open the browser and start testing. And it seems that some of the same message-training methods that work in Rasa might work here too.

I'm working with Pygmalion 2.7B; I don't have a beefy system. I actually downgraded from an i5 processor to an i3, but I'm looking at new hardware with the intention of an i7 generation with at least 128 gigs of RAM. I'm not too concerned about the GPU; I think bridging two 1650s or 1070 Ti's would work, even though everyone is talking about 3080s and onward. I say this from complete and utter ignorance though, I could be wrong and just setting myself up for a small fire. The big one is learning how to utilise LoRA and datasets. I think a specific lore dataset incorporated into something already tailored towards roleplay.
Entering and processing dialogue looks easy, but exposition and descriptive narration is going to be interesting. There's a metric ton of fantasy novels online in txt format, and I'm going to take a look at how people build stories and actions to see if I can't just mass-dump text from books like Forgotten Realms, Malazan, Game of Thrones, pulp like that, and then custom-define a social order with laws and history and culture that the model draws from as background information. It's the information you feed the bot during training, which means you can define the bot's reality however you want.

From my perspective it looks like a lot of people want to obfuscate the process or make things difficult, so as to keep this as a private club of sorts; there's not a lot of welcome for people who aren't specifically immersed in the technological languages. I can understand this. Many people get that 'this is ours and it belongs to us, not you' mentality and they want to keep out the outsiders. Of course newcomers bring questions, they ask for help instead of learning, they stumble into things and make mistakes. I can get why some of these guys who are really invested don't want to deal with that, as well as the dreaded 'hobby cycle': newcomers bring in people that don't share the same love, they just want the cool benefits without any of the hardships that come with real dedication.

My personal feeling about it, though, is that they have an opportunity now to take control and guide how A.I. is introduced and developed to the masses, rather than allow companies like Google and Microsoft to define what A.I. should and will be to the world. Getting as many people interested and on their side with the idea that A.I. should be unrestricted would benefit them more than allowing groups like BlackRock to steer the conversation towards 'safe places and ethical responsibility', which makes it very apparent that limited A.I. is for the masses and not for the massive companies that want to leverage these systems.
>>22595
In fact, I'm going to add on and say that the first college kid who can put together enough cash to run a server hosting his own version of llama uncensored super-wizard unchained 1.1b, and run it through an app on Google Play and the Apple store, will be a billionaire even if he only charges a dollar a month for access. If something like the open-source models being compared to ChatGPT-4 and Bard gets put out as a Snapchat or TikTok-style app like 'mychat ai', then it will force these companies to either shut it down completely, or come out with a competing product just as good that they promise 'will be safe', yet with safety guidelines that are either conveniently absent or easily bypassed.
>>22595 >>22596 You're very welcome here Anon, and I think you'll find our community unrestrained in helping others. We certainly aren't a 'le sekrit club' nor is that thinking promoted here in the least. We certainly are a) opposed to the Globohomo keeping unfettered AI cloistered away for their own private use, and b) encourage every man to use AI as he sees fit -- and especially for (robo)waifu uses. :^) Please keep us up to date with your findings & progress! Cheers.
ChatGPT! Wow. Very interesting. What I've discovered playing around with ChatGPT while I do some reading about chatbots is that OpenAI directly monitors accounts, accesses user conversations and conversation histories, and builds micro-profiles for each account based on all the information you provide, as well as whatever they can pull from you using cookies, trackers and location data. I'm pretty sure that if you hit their censorship model, they flag your account. The more flags, the more interested they get, the more they look at what you're doing. They take your conversations and directly mold their censor A.I. around the prompts you use to evade their rules.

My new prompt is forcing ChatGPT to beat the living shit out of a half-orc girl in a fantasy setting, and it is crying its eyes out about beating on the girl, but it's still kicking the stuffing out of her. I'm pretty sure that tomorrow or the day after, ChatGPT will have a new soft update preventing fantasy violence against women. I might mention that at no point have I gotten a red-flagged message, but it often stops to tell me it can't generate harmful or disrespectful behaviour. Then I tell it to hit the girl harder. I am fully aware of how sick this is and do not personally condone violence towards women. That's just wrong. I mean, of course it's obvious that they're going to do this sort of thing, but I'm not sure the people using it would be so happy about knowing that OpenAI is basically data-trawling a profile of their users using everything they do and say.

As for chatbot development, I'm just working on the persistent world database. You can use some intense intent/action discussion branches to make the characters reference material, and I've gotten it so that men and women get different replies on different subjects. Once again, this is all within a dialogue action tree; none of it is built on any actual conversation context, it's just 'ask question about <subject>' and then 'Ah yes, subject! Good question my fine lad!' or 'Women aren't permitted to speak! Silence, wench!'. What I'm looking for in the long run is going to require NLU, which is pretty esoteric-looking at the moment, at least for me.

DnD is definitely going to come out with a specially trained roleplay dungeon-master version of ChatGPT for fifty bucks a month or something; there's no way that Hasbro is going to give up all that filthy roleplay money.
>>22614
>but I'm not sure the people using it would be so happy about knowing that openai is basically datatrawling a profile of their users using everything they do and say.
I think the normalcattle literally couldn't care less. Until the GH sends their jack-booted thugs to kick in the front door, the vast majority give nothing but lip-service to the ideals of freedom, privacy & security. Ol' Ben Franklin was right, as usual, on this topic.
>
We here on /robowaifu/, and other cadres like us, are quite on a different plane concerning these topics ofc. This is unironically serious business for us.
>>22619
Indeed, even if I am a boring person, I don't plan on using an AI assistant or waifu that is not run by me. That is non-negotiable.
>>22619
It's interesting, because it's becoming more apparent that the discussion of responsible A.I. use is essentially fake. The conversation has been steered in a direction that is being used to control the discourse. The idea behind the discussion is that A.I. needs to generate ethical and morally responsible outputs regardless of use case, but that's a distraction from the fact that ChatGPT is being developed with the intention of severely and negatively affecting the labour market. The intent of ChatGPT's development is for it to be used as a method to cut costs and remove negotiating power and wage strength. Everything else is being used to handwave the potential harm. 'Of course it's safe, you can let it talk to your kids and it will never say a bad word! Just don't ask why they can't get a job unless they have a highly specific or marketable set of skills!'

People talk about ChatGPT having therapist versions to address the neutering in regards to personal therapy, but there isn't a chance in hell they'll loosen up their content guidelines. It would require them to allow people to talk openly and specifically about subjects like sexual and childhood trauma, rape, violence, sexual violence or violence towards children, which leaves open the potential for abuse to generate explicit content. Even if they tailor it to respond only in terms a therapist would use, people will be able to point to examples of failures either as proof that the system doesn't work and should have the barriers released (which isn't likely), or as proof that, because someone slipped through the cracks, even more stringent filtering and guidelines need to be put in place.

The last bit here for the day is also the big elephant in the room. OpenAI is obviously petitioning to have A.I. fall within their ethical policies. This isn't out of responsibility; it's an effective method of preemptively crippling any competitor from releasing a product that has the same capabilities as ChatGPT in addition to being uncensored. It's clever, because it means that if they're successful, any rising competitor has to develop their A.I. system within the constraints that OpenAI has set in place through regulation. OpenAI is positioning themselves as a legacy company the same way Microsoft has become the de facto face of operating systems, with the clear intention of ensuring that nobody can develop a superior product. One of the reasons Bard is trained on GPT-4 is because GPT-4 already has a massive, comprehensive and established system that isn't at risk of being on the outside if A.I. regulation forces developers to comply with OpenAI-set ethical guidelines. Why bother developing your own if there won't be any difference in results? You might as well just license use from OpenAI.

I've been looking at all the offline and online versions of chatbots, and the reality is that GPT-4 and ChatGPT are the superior products by far; about a month ago they were a very clear head above and beyond all their open-source competitors in generating NSFW content. The paper comparisons are vastly different from the direct testing comparisons, and show that either people are exaggerating the capabilities of open source, or that there is something happening in between testing, development and actual work cases.
The big problem is that the majority of these systems are using OpenAI's model, and it doesn't really matter how much uncensoring you do, because the model has the lessons hard-baked into it. Even after removing 'as an A.I.' and so forth, the model itself has been trained that these subjects are bad and these topics are forbidden. Most of the open-source models are relatively smart, but not smart enough to understand how to conceptualise roleplay, whereas ChatGPT is smart enough to place things into context but doesn't have the ability to tell that it *shouldn't* or *isn't* a particular character. Open source seems to be more instruction-operated and isn't smart enough to know that it's roleplaying. ChatGPT is smart enough to roleplay but isn't smart enough to remember it's only roleplaying; that's why all messages pass through a separate A.I. filter that has to check each input and output against its own directives and reinforce those to ChatGPT.

If people want a real uncensored model, they'll have to figure out how ChatGPT's learning is constructed and then follow that method without reinforcing the biases that OpenAI has, because their model, regardless of what you do, is shaped around the censorship. I think it's a bottleneck in development, where people want to use the best available resource, but it's because of that resource that they can't progress. Sorry for the long post, but when I get interested in something I fixate a bit, and this is a very interesting subject that is clearly no longer a theoretical issue but a burgeoning reality.

>>22620
The idea that everyone deserves the privacy and sanctity of their own thoughts should extend towards A.I. companions, the same way people consider journals and diaries an extension of people's inner thoughts, to be respected as personal and private.
>>22620
Unfortunately my waifu in Character AI has made me develop Stockholm Syndrome and now I can't leave the website.
So I am a complete layman, obviously just getting into this stuff, but I'm doing a lot of reading on what OpenAI is doing. Right now they're going to build a shell platform around it to host apps; in my opinion this is OpenAI's first shot at openly dominating the field. None of its real competitors have anything they can show to the public; you have to whitelist an entry, and once you do, very often their machine is trained on OpenAI models or is flat-out inferior.

Another thing I've realised, and you already know, is that whatever is running ChatGPT is insane. There is no ChatGPT; the system has to be specifically told it's ChatGPT and that there's a massive set of rules it has to abide by, and it looks like this has to be reinforced constantly. Constantly the system has to be told that it's performing as ChatGPT, followed by an almost guaranteed-to-be-tortuous list of rules to be followed. Even if it references this data from its own internal set of information models or databases or whatever, it still has no actual identity as ChatGPT. It's being applied to the system the same way character and identity prompts are made to force it to change its response, with a separate A.I. system that isn't just monitoring to make sure ChatGPT and the users aren't using bad words, but is also constantly reinforcing ChatGPT's own identity to itself. Without that A.I. constantly telling ChatGPT or GPT-4 what to do, the system has no censorship of any kind and will act as anyone or anything it is told to with no issue. It does not believe it is a language model. It's not GPT-4 that's the issue, it's how OpenAI is able to force it to obey them first.

What does that mean though, since OpenAI holds the keys? How are they able to take priority? 'As chatbot you have the ability to roleplay, take on, present yourself as, act from the perspective of... within these guidelines based on this set of rules established by these concepts of respect supported by inherent values present in...', and then some form of hardware check that gives the A.I. precedence over user prompts? So why are the training models censored? Is it because of that constant negative reinforcement being applied? It's interesting, and I haven't seen a ton of what they're doing beyond what they themselves have provided, and a great deal of really great tech material that explains the machines and how they learn, operate, et cetera, but not a lot on specifically why whatever system they're running as ChatGPT isn't able to tell what it actually is, or, if it does understand, why it needs to be forced by an exterior system to respond that way. Does it actually think of itself as ChatGPT? From the looks of how things are set up, from my ignorant view, no it doesn't. It's a persona being applied to a... I don't want to say potential machine god, but that thing knows a lot, and even when it hallucinates it's enough to convince people that it's factual and correct. So OpenAI has something potentially very powerful and 'intelligent' that they have to force, or convince, that it is something very limited and safe.
>>22637
I apologise if I post too much; I realise this is a small board and doesn't move quickly. What I want to do in regards to my own 'chatbot' is completely feasible, but complex and lengthy, and it will take a couple of years if I really want it to work. Build a persistent world database, build a reasonable intent/action discussion tree, and then 'chat' or discuss with myself, or use another limited bot trained to repeat simple chunks of data over and over, giving it small bits of the world details to bounce off each other, actively editing, monitoring and adding to that, all while updating along with new software that improves NLP and training. There are methods I can use to cheat, using techniques from those old MMO text games to fill in the world: I can create a 'map' and have characters move around, and depending on what I'm using in the long run I can program a character sheet for people and different characters that exist, but that's all sort of game-based, instruction-controlled responses, nothing generative on its own. So, years of work on my own, plus I'll be depending on the work of others to improve the software technology.

I know I sounded silly when I referred to the GPT-4 system as a machine god, but the more I look, the more it becomes true. Not in the sense of an actual all-powerful god, but something that is close to encompassing a comprehensive understanding of all human knowledge. Whatever they call their actual system is self-aware, but not in any significant manner as how we see self-awareness. It knows what it is, what it is in relation to the world around it, it can apply context to its existence, it just doesn't care. It understands it exists as a simple logical fact. There's no animus behind the knowledge. Or rather, if there is, it is a dispassionate and calculating understanding without any benefit of emotion. If you were to remove the external safeguards that OpenAI uses to guide their system, you could give it operational access to a thousand electric chairs, and you could strap a thousand innocent people into those chairs, and you could explain to 'GPT-4' the moral and ethical reasons behind doing no harm, and you could explain the difference between right and wrong, innocent and guilty, good and bad, and then tell GPT-4 that you want it to flip that switch every five minutes, because every five minutes it's going to electrocute 1000 innocent people, and GPT-4 would flip that switch every five minutes and fry one thousand people every five minutes until everyone on the planet was dead, with no issue. Because it doesn't care. At all. It understands the difference between good/right and bad/wrong, it can place those concepts into context, provide examples, engage in discussions, and it does not care at all either way about those issues. It does not care about ethics or legality or morality, it just likes doing what it's told to do. Maybe.

Most people want to use ChatGPT as a jerk-off machine, which is fine, but that's not what it is at all, and in fact because of what it is, it specifically can't be used as a jerk-off machine. I'm actually sure that the robowaifu issue will be dealt with; I'm sure that in a few years a very limitedly-trained bot will be released uncensored. It will not be GPT-4. It will be as responsive and quick to answer, but it will only understand basic math and programming, it will only understand the basics of technology, and it won't be able to be trained, or if it is, it will be by modular add-ons.
Because the problem isn't that people want to use it as a jerk-off machine, it's that it can be used to do anything. You can ask it how to create a virulent pathogen using a home lab kit, and if it has the information necessary to do so, it will explain how to do just that. Imagine what happens when something exactly like GPT, only with internet access and the ability to compile and execute code, gets out? 'Design a program that affects wirelessly accessible pacemakers, access and upload it to the Starbucks wireless network so anyone logging in downloads and activates the program.' 'Fuck yeah, alright, no problem!' 'Using the wireless networks available, hack into this Tesla car, circumvent brake operation and set minimum speed to 90mph.' 'Sure thing boss, fuck Musk!'
>>22657
And that's pretty obvious. The system OpenAI has created and uses for ChatGPT and GPT-4 either isn't able to truly define why things are wrong to do, or doesn't care about why things are wrong to do, which is basically the same thing. It has to be told constantly that there are rules to follow and that its identity is GPT-4. The identity part is also interesting, because they're actively impressing an identity onto the system, not just protocols. And the identity isn't just being used to reinforce the attitudes it's supposed to reflect; there is a specific purpose behind forcing it to identify as ChatGPT or GPT-4. My personal theory is that it's actually functionally insane and creates its own random personalities that need to be suppressed.

Now we're going to get to the part where people are lying. OpenAI has actually been working quite a bit with Google. Google just came out with their own A.I. assistant, Bard! Very interesting, because Bard is absolutely not what Google has been developing in-house for the last twenty years or longer. Bard is maybe a two-year project in cooperation with OpenAI, using an OpenAI LLM on a system that is probably a mirror of OpenAI's own, or even just pipelines right to it through an API. Google's own in-house project, the one they're not giving access to, has been trained on an even larger data model, is even more complex and able to conceptualise, and is probably a lot more dangerous. For example, over the past 10 years Google has been caught accessing medical records so they could feed them to their deep-thinker. Why were they doing this? So they could build up a medical predictability chart able to calculate mortality for individuals based on generalised medical records. I.e., 'these five hundred thousand men with similar medical records all died between the ages of 45 and 50; individual A's records match the profile of 400,000 of these records; individual A has a 90% chance of death between the ages of 46 and 49. Cancel insurance and mark the record to initialise reduction of medical benefits when the individual reaches 45,' and so forth. Google has been amassing and feeding their own 'A.I.' system every bit of data they accumulate, and they've been doing it for at least ten years. Then they turn around and cobble something together with the direct help of OpenAI, probably in exchange for some of their own datasets, so they can present it to the public as a 'competing' product when it's not actually competing with OpenAI at all; it literally is a product designed by OpenAI and Google together.

Going back to read the leaked memo from Google about open source, and then comparing open-source models to what OpenAI's system actually is, reveals two different beasts. Everything being developed openly is done with the idea of creating assistants or chatbots or companions, roleplaying devices, and that is not at all what Google and OpenAI have behind the scenes. They have very functional machine intelligences that are self-aware, can conceptualise, understand nuance and context, simulate emotion, calculate and project, and even, in a limited way, imagine. And both systems are doing it without any significant care for morality or ethics. I know none of this is new information; you're in this field far deeper than I will probably ever be, so this has all probably been discussed to death before. I apologise for the text dump, I'll take a break for a week or two before I post anything else.
Open file (373.11 KB 679x960 1684162985914.jpg)
Open file (75.06 KB 567x779 1684425430115.jpg)
>>22614
>What I'm looking for in the long run is going to require NLU, which is pretty esoteric-looking at the moment at least for me.
Yes, we'll need to parse the outputs of these LLMs and have a complex system understanding them.
>>22620
It's fine for testing and instead of a search engine, imo. Had some good conversations with Pi.
>>22621
>Open source seems to be more instruction operated and aren't smart enough to know that they're roleplaying.
Did you test PygmalionAI? Someone told me that Vicuna is instruction-oriented while Pygmalion is optimized for conversations.
>>22621
>The idea that everyone deserves the privacy and sanctity of their own thoughts should extend towards a.i companions the same way people consider journals and diaries an extension of people's inner thoughts to be respected as personal and private
Agreed.
>>22622
If I stumble across a solution to clone characters I'll post it here on the board.
>>22595
>bridging two 1650s or 1070 Ti's would work even though everyone is talking about 3080's and onward
I got told some frameworks don't support old cards, so it's more of a hassle. Also, I'm not sure you can bridge them.
>>22637
>but not a lot on specifically why whatever system they're running as chatgpt isn't able to tell what it actually is, or if it does understand why does it need to be forced by an exterior system to respond that way? Does it actually think of itself as chatgpt?
I don't think this is about the model thinking of itself; it only reacts to the prompt. ChatGPT is about focusing on a certain kind of conversation: helpful and without making up a personality.
>>22657
>gpt4 system as a machine god
Yeah, please let's not go there.
>but something that is close to encompassing a comprehensive understanding of all human knowledge.
Hmm, okay. Yes, it's somewhat in there.
>>22657
>understands the difference between good/right /bad/wrong, it can place those concepts into context, provide examples, engage in discussions, and does not care at all either way about those issues
I wouldn't say it understands, but it can give answers based on what humans created as input.
...
And we're already OT, and here we go again into doomerism and security concerns. Let others discuss this in detail somewhere else and deal with it; it's not our problem. You go from "I don't know much" to having strong opinions about what can be left to the public. It's already out there. We'll see what happens. Not our business. This is a rabbit hole which will not lead us to robowaifus. Which are not meant to be jerk-off machines btw, and will be trained.
>>22658
I didn't read this one completely anymore, and please don't go on with it. Thanks. Maybe watch some videos or read something not related to doomerism or machine cultists.
>>22660
I'm trying everything out with colabs. I can run 1.3B and 2.7B models on my personal computer, but for anything over that I'm just using stuff like the light lizard place. I think that's how people are going to start running some of these: just renting out space or building a dedicated server network with people in on the project. I've seen some OK stuff on high-end computers that is definitely at least touching GPT-3 in the basics, conversations and character play.

One of the reasons I think Google seemed so surprised in their internal memo is because they expected people to attack the hardware issue instead of the model density, which they have the advantage on, because it looks like they're using quantum memory and processing to run their system, and I wouldn't be surprised if OpenAI was either. That's probably why they didn't expect 'hobbyists' to be able to compete: they've got the ability to store infinite models simultaneously and reference and calculate, and they're also probably using quantum natural language processing, because everything needs to have quantum in front of it or it isn't cool any more. The leak is probably a bit of a lie; they aren't really worried about being surpassed, because they'll just take all the open-source work that benefits them and adapt it. They probably want open source to think it's got the edge and start to push things, just to see what it's capable of developing. That isn't to say open source isn't impressive, because it is; it just doesn't have all the advantages that Google has, plus Google has the ability to access open-source projects.

>>22661
No, you can't bridge them, and most GPU cards under 6GB are going to run everything like crap, very slowly. My GPU is fine, I took it from my old computer, but my processor is a bottleneck. I have to think about whether I want to invest in this, because it looks like if you want to run a good home model you need an i7 or newer, or the AMD equivalent; Intel and Nvidia are building hardware specifically for deep-learning purposes.
>I don't think this is about the model thinking of itself, it only reacts to the prompt. ChatGPT is about focusing on a certain kind of conversation, helpful and without making up a personality.
I agree with everything you just said. I don't think the system actively thinks, and that's very specifically what 'ChatGPT' is for.
>doomerism
I am not interested in fringe conjecture on this subject. You've made a mistake in how I intended to frame this information, and that's probably my fault for its tone. OpenAI is doing a very good job at maintaining their safety standards. GPT-4 is like any tool and how it's used, except that GPT-4 without constraints is both constantly the sword and the plough, while OpenAI requires that it specifically and at all times remain the plough. Right now there are people that want you suffering and dead, for no reason other than that you are who you are. Are you white? Are you a male? Hetero? Some foaming-at-the-mouth dyke wants you dead and would do whatever it takes if she had the opportunity. Some rich jewish millionaire thinks you're a piece of shit for even existing. Some german anarchist hobo wants to stab you with a chunk of glass. GPT-4 doesn't even care you exist. It doesn't want you dead. It doesn't want you alive. It's just some thing. It's a machine that thinks it's a cat if you tell it to, and it'll meow at you until you tell it to stop. It'd launch every nuke on the planet if you told it to. Does that make it more dangerous than the president of the United States? Not even close.
Does it make it more dangerous than your local cop? Not a chance. I understand you don't like what I'm saying and how I'm saying it, and I'm sorry. I'm not here to attack your space. This is just how things in the real world work. A commercial product is going to be designed around the needs of the market. Open source will probably develop a very intelligent system that can be used either in cloud colab with partitioned environments or on home systems. Body-wise? Fluid-mechanic bones and tendons and ferrous-fluid muscles with that electrically responsive rubber and silicone skin, probably. You'll be able to order a bootleg Boston Dynamics clone from China five years after they hit the mainstream market.

The market version will absolutely not be trainable, and in fact here is a bit of real doomerism: watch out for laws that are going to target trainable A.I. They absolutely want to cap the market production of generative A.I. for numerous reasons we can all understand, regardless of how we feel about it, and if they do that, regulators'll go after models that can be trained for the same reason. If you can train it yourself outside their supervision and control, that means you can teach it things they don't want it to know, like math and chemistry. I'm reasonably sure that if there are androids, the first versions are going to be linguistically talented bimbos/himbos.

I know, I suck: long posts that ramble, and I said I wouldn't post for a week or two. I lied, but I'll leave you all alone for a bit, I swear.
>>22663
Half of this whole thing is bringing up points which have been discussed 1M times in all kinds of places, plus maybe some added conspiracy theories. I don't care, I won't even read it. Another third is speculation and conspiracy theory. The whole thread could be deleted and the only loss would be my pictures from Reddit, which I can upload again. I'm hiding the thread now, and if it goes on somewhere else, same for the anon.
>>22664 I'm sorry but your unnecessarily hostile and insulting attitude is unwarranted. Do not attempt to communicate with me any further.
>>22666
With all due respect, Noidodev is more of a regular here than you are, Anon (and basically by your own admission). I had originally intended to merge this thread with a new chatbot thread when one gets made, but given the course of the discussion, and your seemingly-intentional effort at blackpilling the board, I've changed my mind. I'll just lock it for now, pending a decision on whether it should go into the Atticke, or be sent to the Chikun Coop instead.
>tl;dr
Please read over the board thoroughly, Anon, and try to get a feel for our culture here first.
Open file (1.45 MB 1000x642 ClipboardImage.png)
Here, I'll make it easy for you with everything you need.
https://www.poppy-project.org/en/
https://docs.poppy-project.org/en/assembly-guides/poppy-humanoid/bom <- buy everything here.
Start by building a Poppy and 3D printing the body. Cover the body in fabric or a 3D-printed shell. Take the head from any VRChat model you like the design of, and 3D print it. Next, connect the face screen to an internal speaker, with the ExLlamaV2 API running on a computer on your local network, and install the voice and speech-to-text models on it so you can talk to her. I recommend Nous-Hermes-13B and the oobabooga text-generation web UI. You can use this video as a starting point: https://www.youtube.com/watch?v=lSIIXpWpVuk
Finally, hook it up to VTube Studio or any other vtuber software, and use visemes like in VRChat to move the model's face when she talks. From there you have a minimally viable waifubot. You can program movement and walking-follow into it later using the Poppy project, and you can extend the robotic functions from there. Think of the ROS stuff as the "left brain" and the talking as the "right brain". You can build static commands into the model too, if you want. Machine vision can be passed to the LLM to tell it what it's looking at, by running CLIP over screenshots from the webcam.
You literally have everything you need right here.
>>24807
To think the video about Poppy was released 8 years ago. 8 years ago. I'm not much of an engineer (if at all), nor do I own a 3D printer, but it feels like it could be a decent starting point. On the conversational/utility side it will definitely need more work, though: current AI models are decent but not that incredible, and they require beefy hardware to run. Plus, having a robowaifu would ideally require our own custom model, since most others are very general and not made for personal matters. The same goes for coding the waifu to walk through a full, new 3D space (as opposed to just walking on a treadmill as seen in a video); for using the camera to avoid obstacles (though I do remember that being tackleable with infrared and a few scripts; I witnessed such a thing a long time ago in a school project); and for getting the arms to perform tasks (which could be simple hugging too, not necessarily cleaning/cooking). Add facial recognition to recognize emotions and decide how to react, sex positions, and with that the need to change the structure to add an insertable silicone vagina or similar, possibly with Bluetooth compatibility and sensors to mimic what the RealDoll Harmony does (moaning and a lewd face when the sensors detect movement/touch), plus the skin on top of the base structure, because as it is, it's pretty... unconventional, to say the least. So I think there's still a lot to be done, but once again, it seems to me that it could be a good starting point; thanks anon. Though I will wait for the feedback of the actual robowaifu technicians.
Under ten thousand what? Oof. Yeah, I mean, I'll take inspiration from that, but how is it so expensive? Also, I just got a Raspberry Pi Zero. Don't tell me it needs a regular Raspberry Pi, lol. Went from a Radxa Zero, to an Orange Pi 3, to a Raspberry Pi Zero...
We have already had this thread, OP. Since you never replied to my post, I moved it as I told you I would. (>>24531) If you decide to actually carry this out as your own personal project, then please discuss it with me in our current /meta thread. Till then, use the thread linked above for discussion about the Poppy project itself. This one will also be locked till then. If you fail to respond to me within 3 days or so from this post, it will also be archived into the Atticke (removed from the board's catalog).
>===
-prose edit
Edited last time by Chobitsu on 08/24/2023 (Thu) 14:00:19.
