/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back again (again).

Our TOR hidden service has been restored.



“There is no telling how many miles you will have to run while chasing a dream.” -t. Anonymous


Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
>>43006
Thanks for the explanation! I presume either of these devices (or yours) must have some kind of software used to split the display into two 'screens'? What I really want is to be able to run an application I'm working on on the computer first, before attempting to run it natively on any phone. Any way these can do something like that (i.e., 'beam' my program into the phone, whether wirelessly or not)? Regardless, cheers and thanks again for all you do here, Anon! :^)
Open file (2.19 MB 1800x900 swrc.png)
OK guys, this is a pretty hefty post. I've been doing some more holowaifu theorycrafting and research, and also decided to try to flesh out the waifu program's architecture a bit more. There are three basic parts: the control system, the waifu's internal behavioral logic, and the rendering. I have some more details about how these would work, but there are still some aspects I'm not sure about. I'm sure the other board members can help with those. I know this wall of text looks formidable, but please read all of it, because the payoff for doing so is potentially colossal. I decided this covers too many things to fit in a single post, so it comes in 2 parts.
The Setup
>Control system
For this I had a couple of ideas. The first is extending the tracking-marker concept. Up until now this holowaifu project has relied on the principle of summoning the waifu when a suitable tracking marker such as a QR code is detected. But there's no reason to limit this only to the waifu. You could introduce multiple markers that represent different AR objects. For example, if you have a marker for a virtual keyboard on a glove you're wearing, you can just look at your hand to summon a virtual keyboard in the AR space (or another suitable control scheme; I'm a fan of a Mass Effect-style dialogue wheel) and issue commands through it. The second is using hand gestures. The reason I picked these ideas as opposed to something else like physical buttons or head-movement detection is that they operate on the same fundamental principle as the waifu herself does: AI image recognition.
This simplifies the project: we don't have to develop a separate method to control the visor, since we can do so through the functionality already developed for the holowaifu herself. That said, I might like to incorporate a voice command system at some point, simply because it doesn't step on the toes of the other control systems and it creates more of a sense of the waifu being your OS. This system parallels how most smartphone models dropped physical keyboards in favor of touch screens; only BlackBerry and a few other outliers with very small market share kept a physical keyboard, and I think the AR visor market would work the same way.
>Behavioral logic
I think this portion of the program should be written as a state machine, much like the control system likely will be. State machines represent basically the extent of the nontrivial programming I might be able to do, because they just intuitively make sense to me. We could have the waifu switch between states according to hand gestures and other forms of interaction that govern her behavior, including interactions with other AR objects you summon through their separate markers.
>Rendering
This is the part I have the hardest time understanding, which is kind of a problem, because under this system the AR visor is controlled through AR means rather than physical buttons. You obviously need to render the waifu (or an AR-based control system) before you can interact with them. But there's another element that might make it possible for the waifu to exist persistently. The current concept has her being forced to stay near a marker and vanishing whenever you look away from it. But we could make her stick around and move realistically through the environment, without a robot to denote her position, if the AR visor is capable of scanning the environment and creating a digitized version of it.
At that point, the holowaifu would exist within this mixed VR/AR space and wouldn't vanish when you lose sight of a marker; she only needs the marker to instantiate herself. Obviously, the more realistic the digital clone of your local environment looks, the more processing power will be required to render it, so if you want to go this route it's probably best to produce the cloned environment with cel-shaded graphics, since these are more forgiving in terms of system requirements (though more realistic graphics are pretty accessible these days).
Another possible method is to incorporate the aforementioned smartphone projector into the visor and then use a system similar to Star Wars: Republic Commando to issue movement commands to the waifu; this is only possible if the waifu has persistent existence. In RepCom, the player character Delta-38 can issue movement commands to the members of Delta Squad by using the D-Pad to project a holographic waypoint, much like the markers we're discussing here. The waypoints are also context-sensitive, so if a Delta Squad member is ordered to a position that has special properties (e.g. a sniping position, a bacta-tank healing station), the squad member will take the appropriate action for the context. The holowaifu should behave the same way: if you use the visor's built-in projector to project an action marker onto a chair, she'll go to the chair and sit in it, while if you tell her to go to a certain spot and dance, she'll do that. If the visor is capable of recognizing tracking markers for hand gestures and the waifu's position, it should also be capable of recognizing interactables in the environment, particularly if action markers are projected as an assist for the image recognition.
To be continued
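The behavioral-logic idea above — a waifu switching states on hand gestures and marker interactions — can be sketched as a small table-driven state machine in Python. All state and trigger names below are hypothetical placeholders, not part of any existing codebase:

```python
# Minimal table-driven finite state machine sketch for the waifu's
# behavioral logic. States and triggers are illustrative only.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    FOLLOWING = auto()
    SITTING = auto()
    DANCING = auto()

# (current_state, trigger) -> next_state
TRANSITIONS = {
    (State.IDLE, "wave"): State.FOLLOWING,
    (State.FOLLOWING, "chair_marker"): State.SITTING,
    (State.FOLLOWING, "dance_gesture"): State.DANCING,
    (State.SITTING, "wave"): State.FOLLOWING,
    (State.DANCING, "stop_gesture"): State.IDLE,
}

class Waifu:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, trigger):
        # Unknown triggers leave the state unchanged, so spurious
        # gesture detections can't crash the logic.
        self.state = TRANSITIONS.get((self.state, trigger), self.state)
        return self.state

w = Waifu()
w.handle("wave")                  # IDLE -> FOLLOWING
print(w.handle("chair_marker"))   # State.SITTING
```

Keeping all transitions in one table like this is also what keeps the design from sprawling: adding a new behavior is one new state plus a few table rows, not new branching code.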
>>42994
This picture would be better if it had a tracking marker and an actual holowaifu overlaid over it.
>>42996
>name my band
DivaCircuit
WaifuCore
DreamWave Zero
Starfall Synthetics
Miku & the Meerkats
The Galactic Waifu Empire
>>43009 >>43010
POTD
>>43011
>This picture would be better if it had a tracking marker and an actual holowaifu overlaid over it.
All in due time! :^)
<SOON.EXE
>The Galactic Waifu Empire
Kinda like this one! :D
>>43012 Thanks for the POTD, but I was hoping for a more detailed evaluation than that. But there's a lot to respond to here and I can wait a little while, because responding to something this big isn't going to be quick or easy.
>>43013 Yeah, I'll have to make some time to think this through. Till then. P.S. This basic idea has been frequently brought up by an Anon here in the past, though yours is much more fleshed out (probably in large part b/c the recent discussions about the HoloWaifu project ITT?) Cheers, Anon. :^)
>>43014 I didn't know somebody else had an idea like that. I'd be interested to hear what their version was like, but yeah, take your time with this.
>>43015 Heh. One of the issues of being on a board with this much technical information is locating it on-demand. We also have the odd situation of about 5'000 of our OG posts from back in the day not having been migrated here yet.
>>43009 >>43010
All that effort to edit these and I still made some mistakes here. Like this:
>but there still some aspects I'm not sure about
I rewrote this sentence several times, then left it hanging, moved on to edit other things, and forgot to go back and fix it.
>only BlackBerry and a few other outliers that have very small market share
This strikes me as slightly awkwardly worded and may not be a completely accurate representation of the smartphone market. But I guess this could be considered a first-draft version of the proposal. You rarely get something worthy of a final draft on an imageboard project.
>>43017
I'll be happy to go in and make any edits you'd like, HoloAnon. :^)
>>43020 Meh, I don't think it's needed at the moment. Just focus on giving the proposal a thorough evaluation and we can make edits to it after that.
>>43001 >>43006 >>43008
It works by holding the phone screen close to you. Mechnomancer was wrong: it doesn't use depth perception, because (for the reasons Chobitsu alluded to) stereoscopic content is rare. Sure, it's not 3D, but it's like a first-person game, only even closer. Your eyes can't focus on the entire screen, but that's part of it. Since it's not 3D, it can work with any POV content. Steam Link works for games. (Any POV content ;))
What's funny is that I started this before HoloAnon's (public) work, and I didn't make the connection until you said it. But I do agree it would help with the VR app. After all, one of the secondary functions is a pair of digital night-vision goggles.
I should add something to the above pair of posts; I don't think physical robowaifus are going to really start being a thing until they start incorporating many biological parts. Biological components inherently cost less, use less power and produce less excess heat than electronics or mechanical parts, but we're some years out from seeing this integration happen. Robowaifus are mostly going to be techno-organic hybrids like Mega Man Legends' Carbons or the YoRHa units from Nier Automata, with fully technological androids being either reserved for the wealthy or outright obsolete because there are a lot of things that biological components just plain do better. But until we reach that stage, I think raunchy marionette sex while wearing an AR visor is going to be as good as it gets.
tl;dr on >>43009 and >>43010 for your convenience:
Phase 1: AR Holowaifu Visor with VR/AR Hybrid Features (epic roastie trolling)
Phase 2: AR-Enhanced, Robot-Controlled Marionette Sex (total roastie ownage)
>>43006 Actually I have a question. If you can just get these enclosures and turn any smartphone into a visor, why aren't visors a lot more common by now? Is there some kind of limitation to implementing a visor like this?
>>43024 Contact Ribose, he's the /robowaifu/ expert on biotechnology Telegram: @Ribozyme007
>>43027
The answer is: lack of content. I was always into VR; I even got a Cardboard for my birthday. But most of the content was basically shovelware tech demos and a few experimental videos. It also wasn't that good; the lenses meant you saw the pixels.
>>43029 Maybe we can build a better enclosure, then. It has to be easier than building a full visor from scratch. Of course the content is going to be the real reason to get this, but it wouldn't hurt to have a better visor. Having the visor be a smartphone attachment is probably the only way to reach a large audience anyway.
>>43030 Maybe, but we already have good phone headsets. Just search up "phone VR", and you will have plenty to choose from. I got one in the clearance section of Ross: Dress for Less with plastic construction, padding, adjustable lenses, and elastic straps. I do support the idea of an open-source VR headset, but what we actually need is content, otherwise it would just fall into obscurity again.
>>43035
>I do support the idea of an open-source VR headset, but what we actually need is content, otherwise it would just fall into obscurity again.
We need both, obviously... a so-called "Chicken & Egg" problem. BTW, thanks for the phone VR tip! :^)
I intend to create some basic facial-animation work as a "proof of concept" *, but I'm going to need a headset of some fashion for prototyping the product effort.
---
* For starters, I just mean for it to be a stylised, "floating" robowaifu'y cartoon face (likely dear Chii-chan or Sumomo-chan to begin with), then build out from there.
Edited last time by Chobitsu on 11/25/2025 (Tue) 00:13:08.
>>43038 Well, if we have a stated goal, then we must make the egg first, a platform to make desired content on. >I intend to create some basic facial animation work as "proof of concept" *, but I'm going to need a headset of some fashion for prototyping the product effort. Exactly
>>43035 >what we actually need is content Undoubtedly. But does this modular approach to the headset where the smartphone is basically a Nintendo Switch that can operate either docked or undocked mean that what we're creating here is basically just a smartphone app? Or is that an overly reductionist way of looking at it?
>>43042 For now, yes
Please talk me out of buying these XReal Ones, bros ( >>42928 )! :D They're US$400 right now, and look remarkably-suited to our HoloWaifu project (as the 'display-only' portion of the problemspace).
* What's wrong with them that I'm not seeing yet?
* Why is this grossly-overpriced for Anons?
* Why would <<other_product>> be a much-better choice rn?
---
Here's a general user's review: https://www.youtube.com/watch?v=3duYMt020_M
A rather more-technical review with nice tight closeups of the device (also evaluates the more-expensive Pro version in comparison): https://www.youtube.com/watch?v=9TnBpCnX31c
<--->
PLS SEND HALP before I do it soon-ish! *
---
* I was already planning to make some kind of investment purchase towards this project sometime around this upcoming /robowaifu/ birthday weekend period ( >>1591 ) [so, a decision before next Monday, when the improved pricing ends].
Edited last time by Chobitsu on 11/26/2025 (Wed) 13:19:21.
>>43050 $400 is a very good price, but how secure are they? Glasses that can see everything you can are a huge security liability if they can't be completely locked down. That said, if they are secure, then they'd be great for all of us.
>>43050
- I've had some difficulty finding a gyroscopic sensor that can calculate the z axis (vertical). Theoretically the GY-521 can do it somewhat, but there is drift because it cannot find a reference point. ChatGPT gets grumpy when you try workarounds, so maybe I'll do some IRL experimenting to prove the grand oracle wrong lol.
- Supposedly small, high-resolution screens are expensive. And if you're not doing a Pepper's ghost, it has to be transparent as well. But I just got a small 720p projector for like $20 at a discount store, so I dunno about that. I might take it apart to see exactly how they do it.
I have certainly been tempted to make my own DIY VR headset but haven't done much beyond icon tracking. Maybe some GY-521 silliness might hold the key.
>>43051
>$400 is a very good price, but how secure are they?
>Glasses that can see everything you can are a huge security liability if they can't be completely locked down.
>That said, if they are secure, then they'd be great for all of us.
As-is, they are very secure, since they are output-only devices. You can simply think of them as 1080p monitors you can wear on your face! :D (And that you can also see through, like basic sunglasses, wherever there's no 'screen' being displayed.)
That said, the entire goal of this project overall is to provide two-way video, so we can track ArUco & QR markers [1] and overlay Anon's waifu somewhere near that marker (as 'Augmented Reality' [AR]). These can't do that quite yet, but still provide enough basics for me to begin working on the software side of the project. During this interim period, we can just 'lock' the display window into place wherever we want the waifu to be situated & displayed, say, floating on your computer desk.
So the
>tl;dr
here is that security isn't an issue to begin with, but as the solution gets more sophisticated with time it will need to be addressed (simply firewalling off any network access, for starters). Does that answer your questions, Greentext anon? AMA if you need further info.
>>43052
>Maybe some GY-521 silliness might hold the key.
Heh, I think I have one of those laying around somewhere. Good luck, and please keep us up to date with your progress on this, Mechnomancer! :^)
>Supposedly small, hi-resolution screens are expensive.
These use high-quality Sony micro-OLEDs as the source projectors, 1 per eye. *
>I've had some difficulty finding a gyroscopic sensor that can calculate the z axis (vertical).
The basic XR 1's have 3 DoF tracking via onboard electronics.
If you connect the optional US$100 'Eye' addon (a monocular camera situated right in the middle), then you can switch the unit to the 6 DoF mode [2] and truly lock the virtual screen into place within your IRL environment. This is the mode we would be aiming for with this project, whether via XR 1's or something else. That way the HoloWaifu properly stays put in place while you move around. Make sense, Anon?
---
1. https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html
2. The system augments the glasses' electronics with SLAM via the monocular camera to achieve this. https://www.uploadvr.com/xreal-one-glasses-become-6dof-with-xreal-eye-camera-xreal-beam-pro/ https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
* See spec details on the page linked ( >>42928 ).
Edited last time by Chobitsu on 11/26/2025 (Wed) 18:00:45.
>>43053
$100 could buy quite a bit of hardware. Supposedly adding a compass could make the z axis easier (and appeases GPT the oracle), but I could never get a proper reading with a compass sensor.
A VR setup I'm thinking of would probably be a Raspberry Pi to do the GY-521 silliness and send that data over LAN, plus a wireless HDMI link (also over LAN) for video, so it would simply be a matter of plugging it into your HDMI port and having a companion Python script to receive the sockets and turn them into game input.
I'm thinking about making my own version of Steel Battalion (mixed with Darling in the Franxx), and a DIY VR headset might be a good gimmick in addition to/instead of a funky controller.
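A minimal sketch of the Pi-side sender described above: fuse the GY-521's (MPU-6050) gyro and accelerometer readings with a complementary filter to tame pitch/roll drift, then ship the pose over the LAN as UDP datagrams. Note the accelerometer can't supply a yaw reference (gravity gives none about the vertical axis), which is exactly why a compass keeps coming up for the z axis. The I2C register reads are stubbed with fake values here, and the destination address, port, and packet layout are all made up:

```python
# Hypothetical Pi-side sender: complementary filter + UDP pose packets.
# Real MPU-6050 register reads (via smbus2) are stubbed out below.
import socket
import struct

POSE_FMT = "!3f"             # network byte order: pitch, roll, yaw (radians)
DEST = ("127.0.0.1", 5005)   # placeholder; use the desktop's LAN address

def complementary(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer's absolute
    angle. The accel term supplies the reference point the gyro alone
    lacks, which is what suppresses drift (for pitch/roll only)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

def pack_pose(pitch, roll, yaw):
    return struct.pack(POSE_FMT, pitch, roll, yaw)

def unpack_pose(data):
    """Companion-script side: datagram bytes -> (pitch, roll, yaw)."""
    return struct.unpack(POSE_FMT, data)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pitch = 0.0
for _ in range(3):
    gyro_rate, accel_pitch, dt = 0.1, 0.05, 0.01  # fake sensor readings
    pitch = complementary(pitch, gyro_rate, accel_pitch, dt)
    sock.sendto(pack_pose(pitch, 0.0, 0.0), DEST)  # fire-and-forget UDP
```

The receiving Python script on the desktop would just `recvfrom` the socket, call `unpack_pose`, and inject the angles as game input.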
>>43054
>$100 could buy quite a bit of hardware.
Fair point. I'd be trading gold for expediency (a common tradeoff, AFAICT! :D). Also for the reliability that supposedly comes with a proper commercial product. *
>I'm thinking about making my own version of steel battalion (mixed with darling in the franxx) and a diy vr headset might be a good gimmick in addition to/instead of a funky controller.
The Iron Man LARP'rs system seems to be very-sophisticated as a DIY VR headset ( >>42822 ). He claims it supports 9 DoF (incl. magnetic compass).
---
* This is the 3rd or 4th generation of these things by this company, so it's not just a 'johnny-come-lately' thing.
Edited last time by Chobitsu on 11/26/2025 (Wed) 18:12:27.
>>43055 >The Iron Man LARP'rs system Jesus! How can people ramble about such a simple thing for 30 minutes?
>>43053
>output only
Huh, I always assumed that cameras were necessary for AR (the image tracking and intelligent placement being the "augmented" part). That is good for security, of course, but it does seem a bit pricey as just a monitor. Still, it's not a bad starting point for getting a sense of how everything should look.
>>43009 >>43010
Okay, HoloAnon, thanks for your patience. I can finally take some time today to discuss your project ideas a little more. Just off the top of my head, two things stand out:
1. I think it's a feasible idea to overlay a waifu over many possible physical artifacts, including a lever-driven, actuated snuggly doll.
2. This is a primarily SFW board, given our engineering focus (cf. >>3 ). As an accommodation, we have an echii "containment thread" of sorts ( >>419 ). Let's keep any sexually-oriented discussions constrained to that thread please. TIA.
<--->
I really like the way you've broken the problem down into elements, Anon.
>Control system
>Behavioral logic
>Rendering
I would add the obvious
>Physical puppet & control
into the mix.
---
>Up until now this holowaifu project has relied on the principle of summoning the waifu when a suitable tracking marker such as a QR code is detected.
Actually, we're not constrained by the visibility of markers at all. We can generate as many waifus & other virtual artifacts as the system in question has capacity for. I just wanted to point that out.
>But there's no reason to limit this only to the waifu. You could introduce multiple markers that represent different AR objects.
Yes, absolutely! Basically, just like a game, you can create a practically infinite variety of virtual items.
>The second is using hand gestures. The reason I picked these ideas as opposed to something else like physical buttons or head movement detection is because they operate on the same fundamental principle as the waifu herself does, the principle of AI image recognition.
Yes. While I'll be starting out with a special 'pointer' for my initial development system, eventually we'll be able to perform pose analysis on the Anon operator, I predict. This will include hand gesturing for control.
>although I might like to incorporate a voice command system at some point simply because it doesn't step on the toes of the other control systems and it creates more of a sense of the waifu being your OS.
We're definitely going to need voice recognition at a relatively-early stage in the affair.
>I think this portion of the program should be written as a state machine, much like the control system likely will be. State machines represent basically any sort of nontrivial programming I might be able to do because they just intuitively make sense to me. But we could have the waifu switch between states according to hand gestures and other forms of interaction that govern her behavior, including interactions with other AR objects you summon through their separate markers.
Please proceed with prototyping that for us, HoloAnon. Even just pseudocode will be an advance atm.
>This is the part that I have the hardest time understanding, which is kind of a problem because under this system the AR visor is controlled through AR means rather than physical buttons. You obviously need to render the waifu (or an AR-based control system) before you can interact with them.
The basics of it are not much different than @Mechnomancer and @GreerTech have already prototyped for us... at least for starters. As the quality goes up, so will the complexity. My own goal is to have something that will put X.ai's Ani to shame in the end! :^)
>...and wouldn't vanish when you lose sight of a marker; she only needs the marker to instantiate herself.
Again, that's not an issue. You'll probably understand better once I can post some videos of my initial prototype work with the visors.
>...The waypoints are also context-sensitive...
We could devise any kind of game-mechanics-like systems that Anons can dream up (given enough actual developers working hard on the project). But for now, let's just keep it to the basics: a simple cartoon waifu that can stay in place as you move around.
Make sense, HoloAnon?
<--->
Let's discuss the echii stuff further in the thread mentioned. If you would kindly repost your 2nd part in that thread, that will get things started for that conversation. Deal?
Anyway, thanks again for this great effort-post, Anon. It's a good sign for things that you have so many ideas already in mind. Cheers & Happy Thanksgiving! :^)
>>43057
Heh, he's enthusiastic about his project! :D
>>43070
>Huh, I always assumed that cameras were necessary for AR (the image tracking and intelligent placement being the "augmented" part).
Yes, that's true. For now, these are about the best compromise of (cost/capability/comfort) that I've managed to find so far. There are full-on AR goggles out there, but they are neither comfortable nor lower-cost. All this is still in its infancy, I deem, so much-improved systems should be available to us over the next 2 or 3 years. But we should be able to get started with these, as-is, I think. I'm planning to 3D-print some kind of clip-on for the frames that will support at least one camera (possibly several) to provide the input video to the computer system. I'm sure this will develop over time.
>Still, it's not a bad starting point for getting a sense of how everything should look.
Yeah. And these are probably the single most-comfortable visor ATM to begin with, and seem a fairly-mature design as well. We'll see! :^)
>>43057 >>43073
>Jesus! How can people ramble about such a simple thing for 30 minutes?
He's a YouTuber. They're second only to politicians in their ability to talk as much as possible without actually saying anything.
>>43054
>I'm thinking about making my own version of steel battalion
Sounds like a really cool application of this technology. Vehicle games in general (racing, flight, etc.) could see a lot of use with this, and maybe mech games will become more popular with visors.
>>43072
>We can generate as many waifus & other virtual artifacts as the system in question has capacity for.
This is of course true, but I'm thinking with an eye towards immersion here. In Pokemon Go they generate Pokemon over your smartphone camera's field of view, but they don't really feel like part of the environment because they don't interact with it. The reason I want to use markers and possibly a digitized clone of your surroundings is so you can create AR entities that actually feel realistic. Like if you created an AR dog but had a real water bowl in your room, the dog would recognize the bowl and would sometimes take a sip from it unprompted.
>hand gestures and voice recognition
Glad to hear these are plausible. But why do you think voice recognition would need to be present early?
>Please proceed with prototyping that for us
Yeah, I want to do some work on this. I'm not completely sure what I'm doing here, but I'll talk to DeepSeek and see if I can at least come up with some pseudocode.
>As an accommodation, we have an echii "containment thread" of sorts
That's fine; the part about the marionette is outside the scope of purely visual waifus, so that does belong in another thread.
>>43080
>This is of course true, but I'm thinking with an eye towards immersion here. In Pokemon Go they generate Pokemon over your smartphone camera's field of view, but they don't really feel like part of the environment because they don't interact with it. The reason I want to use markers and possibly a digitized clone of your surroundings is so you can create AR entities that actually feel realistic. Like if you created an AR dog but had a real water bowl in your room, the dog would recognize the bowl and would sometimes take a sip from it unprompted.
Very interesting. This scenario would require at least 4 degrees of "ping-ponging" back and forth to make work correctly. Given the very-tight time budget constraints on computations for realtime interactivity, it's no wonder it hasn't been solved yet! But who knows, let's see what we can manage here.
>But why do you think voice recognition would need to be present early?
Simply b/c any other form of interaction will quite rapidly become incredibly clunky, in my estimate. We'll see.
>Yeah, I want to do some work on this. I'm not completely sure what I'm doing here, but I'll talk to DeepSeek and see if I can at least come up with some pseudocode.
DOOEET! :D
>>43080
>That's fine, the part about the marionette is outside the scope of purely visual waifus, so that does belong in another thread.
Great! I'll wait on your repost there. Cheers, Anon. :^)
>>42893 In accord with @Kiwi's sensible admonition -- and since I'm already quite-familiar with this product's platform -- I've ordered a little wheeled vehicle * for transporting the ArUco Markers & QR codes around on surfaces, just in case any'non here wants to follow along with my doings for this project. The plan is to print little tracking markers to affix to it, and directing it to drive around on a surface "carrying" the holowaifu along with it (in other words, as an expedient stand-in for the "BallBot" conceptualization ITT). --- * It has a camera + sensors, differential 4-wheel drive (can spin in-place), is compact (~ 6" cubed), and runs on the UNO R3. It's currently just US$55 (for Black Friday). https://www.amazon.com/dp/B07KPZ8RSZ
Edited last time by Chobitsu on 11/28/2025 (Fri) 15:28:22.
>>43082 >I've ordered a little wheeled vehicle Wait, I thought we agreed that we didn't need a robot for this, just graffiti.
>>43085
Eheh, yea we did. But I still want the little waifu to be able to traipse about in my flat before long. I simply decided to pull the trigger on this while it was still a Black Friday deal. No one else need get this yet. And besides, I still need to work out the camera+clip setup before that phase as well (cf. >>43073 ).
>>43085 So you want to try this because you think it would be faster to implement than digitizing your surroundings and rendering a waifu into that?
>>43088
Well, when you put it like that, yes (though that wasn't my reasoning). At the moment at least:
a) I'm rather confident I can do the camera+tracking of the ArUco/QR car with my current knowledge. I'll just need to make the time to assemble everything and write a bit of code to support that.
b) I'm not confident in the same way about digitizing my surroundings (though I'm sure I can get a handle on it when I make time to focus on it). Unless I literally went into Blender or Maya and modeled them.
In either case, the rendering of the waifu is pretty straightforward. Again, we'll start smol and grow big with her designs & details (as with everything else for this project).
I just wanted to give a quick update on this. I ended up getting sidetracked by other things, and the chatbot site I use started experiencing technical difficulties; I got multiple 502 errors across different websites, so I'm not sure what's going on here. But I did find another site that seems to be usable for now, which is good because my ability to code doesn't exist without that.
>>43202
Hello HoloAnon, welcome back! So I've now gotten the XR1 visor glasses + a few other related accessories to hopefully make it all work well for us for the HoloWaifu project. I should have something visual to show in about a week. Don't expect anything fancy at first, just something to demonstrate that the platform itself works OK. BTW, the glasses are quite comfy and the brightness of the internal screen is excellent (I was rather concerned they would be too dim).
Edited last time by Chobitsu on 12/09/2025 (Tue) 07:28:07.
>>43009 >>43025 > (part 2 of 2 -related : >>43203 )
>>43207 Based, you've done more on this project than I have. Which was pretty much what I expected, like I said before, but what exactly are we looking at here? Do you have a full waifu program or just a demo for the ability to project AR holograms onto a marker?
>>43212
Thanks! I'm really hopeful that prices on all this stuff will begin coming way down once more competitors enter this arena. My credit card is feeling the pinch from this right now! :^)
>but what exactly are we looking at here? Do you have a full waifu program or just a demo for the ability to project AR holograms onto a marker?
No, we won't have any of that stuff yet. I'm going to take baby steps at all this to keep the impacts on my time & energy low-ish.
For now, I'm simply going to demonstrate displaying a static image of dear Chii-chan in a window, floating above my work table. Should be about a week as I get everything assembled for it.
Next, I want to print out ArUco markers, making a smol printed cube of them. Then I need to 3D-print a PLA clip to go onto the nosebridge of the visors to hold a little mini-webcam right in the center of them. This will be the input video feed back to the computer, * which we'll use to create the most basic OpenCV tracker program, so that it keeps a target reticle fixed on that cube regardless of how I move my head.
After that, we'll explore more about registering the waifu image above the target cube within the visors, as we mentioned. It'll still just be a static image for now.
Then we'll start working on creating a "blocking" 3D waifu character (blocking is a film term; it just means lo-rez & clunky) to register above the cube (replacing the previous static image). We'll use the blocking character to begin programming the initial crude animated movements.
That's enough to go on with for the moment! :D I hope all this makes sense, Anon?
---
* Remember, these visors are merely the output from the computer part of the problemspace. (cf. >>43053 )
Edited last time by Chobitsu on 12/09/2025 (Tue) 08:28:06.
>>43213 >My credit card is feeling the pinch from this right now! Shit, sorry about that. I didn't mean to get you to spend money on this that you can't afford. I know how bad the economy is right now. Maybe I should shop for food deals for people or something to ease the pinch a little. If I had the programming skill I'd create an extreme couponing AI that finds all the best deals on essential goods. But then again I'd rather just have those things actually be affordable to begin with so this wouldn't be needed. But yeah, your demo concept makes sense. It might also help my understanding to actually see the proof of concept in action. I think what I need is a basic list of core actions the waifu hologram should be able to do. Ideally you'd be able to piece together more complex actions from combining this core set in different ways.
>>43215 >Shit, sorry about that. I didn't mean to get you to spend money on this that you can't afford. LOL. Don't worry Anon. I'm a grown man and know how to budget. I'll grudgingly give the kikes their pound of credit jewgold flesh to see our dear HoloWaifus come to life! :^) >Maybe I should shop for food deals for people or something to ease the pinch a little. If I had the programming skill I'd create an extreme couponing AI that finds all the best deals on essential goods. But then again I'd rather just have those things actually be affordable to begin with so this wouldn't be needed. No. I don't recommend anything like that! Please just stay focused on perfecting your basic colloquial C++ skills for now. That will be far more valuable to us here! >I think what I need is a basic list of core actions the waifu hologram should be able to do. Ideally you'd be able to piece together more complex actions from combining this core set in different ways. Sounds like a great concept! Please begin compiling the basic animation couplets we need a holowaifu to perform. Good thinking, HoloAnon. Cheers. :^)
>>43216 >Please just stay focused on perfecting your basic colloquial C++ skills for now. This is what I'm going to do. I've been talking with DeepSeek about it and got a list of about 20 different core actions, but this many core actions may result in spaghetti code. I was hoping it would be more like 5-10. That's why I wanted to ask people what they want their holowaifu to be able to do.
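One hedge against spaghetti with ~20 core actions: keep the primitives as a flat library and define complex behaviors purely as named sequences of primitive names, so adding a behavior never touches the primitives themselves. Every name below is made up for illustration:

```python
# Hypothetical sketch: complex holowaifu behaviors composed from a small
# flat set of core action primitives.
PRIMITIVES = {
    "turn_to_user": lambda log: log.append("turning to face user"),
    "wave":         lambda log: log.append("waving"),
    "smile":        lambda log: log.append("smiling"),
    "walk_to":      lambda log: log.append("walking to target"),
    "sit":          lambda log: log.append("sitting down"),
}

# Higher-level behaviors are just ordered lists of primitive names.
BEHAVIORS = {
    "greet":     ["turn_to_user", "wave", "smile"],
    "take_seat": ["walk_to", "sit", "smile"],
}

def perform(behavior):
    """Run every primitive in a behavior, returning the action log."""
    log = []
    for name in BEHAVIORS[behavior]:
        PRIMITIVES[name](log)
    return log

print(perform("greet"))  # ['turning to face user', 'waving', 'smiling']
```

With this shape, the 20-item list stays flat and testable, and the combinatorial growth lives entirely in the (cheap, data-only) behavior table.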
>>43247 >This is what I'm going to do. Excellent. > I was hoping it would be more like 5-10. That's why I wanted to ask people what they want their holowaifu to be able to do. Please don't get discouraged if it goes WAAAY beyond that. Slow & steady, Anon. Cheers. :^)
