/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Safety & Security General Robowaifu Technician 04/20/2021 (Tue) 20:05:08 No.10000
This thread is for discussion, warnings, tips & tricks all related to robowaifu safety. Or even just general computing safety, particularly if it's related to home networks/servers/other systems that our robowaifus will be interacting with in the home. >=== -update OP
Edited last time by Chobitsu on 08/03/2021 (Tue) 01:29:08.
OK, my apologies for the delay on this thread. So, I'd say first off we ought to begin a discussion about the appropriate level of reveal we should engage in here. That is, how much information should we share with each other here, knowing full well that glowniggers, leftists, and other goons & enemies are watching our cleartext communications on /robowaifu/ ?

Security is always a cat-and-mouse type of game, and it's a tricky business dealing with adversaries in this arena. Nothing new; this kind of thing has been going on for literally thousands of years of human history. The big difference today is that single White females with no children are out there with their goyphones looking for the next society-destroying """narrative""" to start backing/promoting. You can be certain these extremely-entitled females would love to destroy both you and your robowaifus, Anon.

As a simple example to get started with: handheld 'EMP' generators. It's not too hard to find information on creating them, and they would likely prove to be effective weapons against our robowaifus. OTOH, we all know they will be used that way against us, so it's plainly in our best interest to both devise them ourselves and then begin the process of hardening our own systems against such an attack. See the conundrum?

So, please give some thought to this topic and share any good ideas you have here, Anon.
I'll begin with the obvious defence: a Faraday cage. Nowadays it's also possible to enclose devices in a flexible steel mesh, a so-called "Faraday bag". A few of these around your essential circuits is a good idea. Also, the range of these EMP devices is often very short - a few inches to a few feet - so both the attacker and their device are highly likely to be accessible, if not identifiable. We all know how cocky these libs are on Twitter, but as the age-old saying goes:
>"It is easy to be brave behind a castle wall."

There exists a possibility that the aggressor is using specialist equipment like an explosively-pumped flux compression generator (EFCG), which has a range of up to a few dozen yards depending on how much current is being generated. However, an EFCG would be a stupid way of temporarily disabling one's robowaifu: the cost of a few Arduinos is about $100, but the cost of such a high-tech weapon would run into tens of thousands of dollars at least, and it requires knowledge of high explosives to operate safely. Additionally, an EFCG is a one-shot device, destroying itself in the process of releasing an extremely intense and brief EMP.

Personally, I'm more concerned about thugs just vandalising my robowaifu (particularly her head and delicate eye mechanism). Just make sure both yourself and your robowaifu have bodycams running and sending the footage to a backup server if you ever go out and about. At least that way you can get evidence and prosecute for criminal damage (and also likely assault/GBH, since you'll be trying to stop them).
>>10868 Thanks Anon. If you have any insights to share about the level of, and protocols for, the 'reveal' we make here concerning security/safety issues, please do share them (and everyone else too ofc). Obviously, you're not in favor of a strict embargo on information. Personally, I'm at the extreme end of 'freedom of information', for at least a couple of reasons.

1) Such restrictions often don't work out (especially over the long haul). And given that we are simply manufacturers (in essence), not fundamental researchers, all the common weaknesses of our component parts will be well mapped out in advance by reasonably intentional adversaries. IMO this drives us towards sharing everything we can with each other.

2) Related to the first point, even if one of us did manage some type of original research into an innovative system design with special benefits for safety/security, eventually it would be uncovered regardless. This kind of situation is commonplace in corporate espionage, for example, as companies jealously attempt to guard their secrets from one another for business advantage. Again, it never works over the long haul.

OTOH, I recognize my own propensity for foolhardiness, rushing in where others fear to tread. I'm open to correction in this area. :^)

>EFCG
Interesting, I didn't know about that one. It looks like a directed-weapon design. Also, if its operational mode is literally driven by a high-explosive charge, then it's extremely likely to make a loud noise, I'd say.

>Thuggery
Particularly during these early 'outlier' years, when a functional robowaifu being seen casually in public will be quite uncommon, your precious robowaifu is certainly likely to be a target of assault and theft. Certainly the idea of cell-connected cams recording everything in detail is a good place to start, legally speaking. It's a very complex legal topic (though it obviously should be cut-and-dried), and I certainly have little to offer on that.
My proper area is physical defense, and psychological approaches against evildoers.
>---
But again, ATM, my first objective for this thread is:
>"So, I'd say first off we ought to begin a discussion about the appropriate level of reveal we should engage in here. That is, how much information should we share with each other here, knowing full-well that glowniggers, leftists, and other goons & enemies are watching our cleartext communications on /robowaifu/ ?"
That issue should first guide and cordon off the other discussions proper. The 'groundrules', so to speak.
>>10860 >>10869
>So, I'd say first off we ought to begin a discussion about the appropriate level of reveal we should engage in here. That is, how much information should we share with each other here, knowing full-well that glowniggers, leftists, and other goons & enemies are watching our cleartext communications on /robowaifu/ ?
I think about this every day. If we learned anything from the past decade, it's that any site/imageboard is most likely being constantly monitored. How many here don't share out of (justified) paranoia? I've talked with other engineers and programmers IRL with an interest in artificial women (bots), but they said they would never share or talk about it with random people. It would probably be a pain, but anyone who starts to say stuff that we may all be thinking, but GLOWS, should probably be prevented/stopped. I think we really won't know the extent to which they might be against robowaifus until something goes "big". If anons released a small kit with a partially programmed bot that taught would-be technicians to learn further... I think that would be a safe starting point to test the waters. Its dexterity doesn't have to be amazing, just something to get more anons involved. I've talked to others elsewhere who, while interested in it, have no idea where to begin. Lots here may even be capable of jumping to more complicated/agile bots, but I think there are some lost lurkers who just "cheer". Ehh... I am going off topic now, but I am sure a lot of anons are glad that others are also concerned with OpSec.
>>10871 >I've talked with other engineers and programmers IRL with an interest in artificial women (bots) but said they would never share or talk about it with random people. Then I would suggest you find some way to help these men discover /robowaifu/ Anon. If they lurk for a few weeks here, they may just become comfortable enough with the anonymous imageboard form of communications to begin contributing. But even if not, /robowaifu/ has a veritable cornucopia of information collected here now. Even though things can be haphazard as far as organization goes with this form, there are a wide array of topics at least touched on here. Surely they would manage to find something both encouraging and helpful to their own project work here?
This might help with software we use: >We protect open source code. >Earn money for finding & fixing security vulnerabilities in open source projects and be recognised for protecting the world. https://huntr.dev/
>>10892 Nice idea Anon, thanks.
>related crossposts (>>6757, >>6758, >>6759)
>>11454 Just an FYI malware in a VM can take over the host machine through various ways, especially through the network.
>>12117 So then please explain some remediation approaches Anon (or at least some links to back the claim up), rather than just dropping that into the middle of the conversation by itself all alone. Certainly using a VM is far better than not using one if you suspect something sketchy.
>>12119 A separate computer, some old one kept for such cases, is better than a VM.
>>12119 And for Anons who don't even have a separate computer? What then?
>>12120 Claiming that VMs are safe won't make them safe. Such an anon can make his decision on the information he has. In urban areas in the developed world, such old computers are available for very low sums, or even for free if they are really old.
>>12121
< Claiming that VMs are unsafe won't make them unsafe
See how that works? You've hardly justified the claim itself, as I asked. Provide solid evidence for it. So far, your post is roughly just like claiming "TOR IS COMPROMISED!111 REEEEEEE". Maybe it is, maybe not. But the simple fact remains it's superior to cleartext Internet in practically every context that requires enhanced privacy and security. Simply denigrating it w/o offering a lucid breakdown of why it's compromised, and without offering effective alternatives, does little but look like glownigger FUD gayops.
>>12118 Disable the network, or choose a NAT network if necessary. A bridged network makes it easier for most malware in the wild to reach the host machine. Do not use shared folders and/or a shared clipboard. Use a good anti-virus that watches programs for malicious activity, like Comodo.
>>12123 There is no such thing as 100% security. There are exploits out there that have targeted the hosts of virtual machines, and companies take over a year to fix them even after being made aware of critical issues: https://www.zdnet.com/article/virtualbox-zero-day-published-by-disgruntled-researcher/
>>12124
>There is no such thing as 100% security.
Yes, I'm quite well aware of it, Anon. More than most, I imagine. I'm simply highlighting the glowniggery aspect of that kind of useless and provocative interjection. It's hardly productive, either here or anywhere else. Thanks for the productive information BTW. I'd also add that there are extensive systems-oriented tools like Sysinternals (since this is W*ndows software) that can also go a long way towards monitoring malicious behavior.

Also, my apologies in general for possibly being too stern in my response; perhaps anon's comment was in fact innocuous. It's simply an instinct on my part to protect our community here from that kind of (often intentionally) destructive fear-mongering.

Just to be perfectly clear, the tool suite I meant, in case there's any confusion about it: docs.microsoft.com/en-us/sysinternals/downloads/
Just get the entire suite and unzip it somewhere.
>>12125 OMG, you could just have searched for it. Someone warned you about the misleading idea that VMs are safe, so look up whether it's true or not. If it is, either ignore it anyway or find a way to mitigate it. We don't need to start a new off-topic discussion here.
I think a few things that are needed for this whole area are: >1) Weakness assessment. What are the potential weak links in the chain for robowaifus & their development? Electronics? Software? Materials? All of the above? >2) Threats assessment. Who/what is opposed to robowaifus, and why? Private citizens? Governments? Corporations? All of the above? (again) >3) Attack vectors assessment. Presuming the first two points are adequately considered, then what are the likely attack vectors that would be used to harm our robowaifus, their systems, or us? >4) Remediations assessment. Given the first three, what's to be done? How do we most effectively counter any attacks intentionally targeted against robowaifus, their industries, or us? I'm certain these are complicated areas to consider, and undoubtedly other anons can come up with even more areas. But I'd reckon that's enough to go on with for now.
When in doubt, always add a rechargeable taser.
>>12147
>What are the potential weak links in the chain for robowaifus & their development? Electronics? Software? Materials? All of the above?
The weak links aren't any of the above. Those are all developing quickly enough to make robot waifus entirely inevitable, at least technologically. It's external factors; see below.
>Who/what is opposed to robowaifus, and why? Private citizens? Governments? Corporations? All of the above? (again)
Private citizens are opposed to robot waifus. It's only once the technological uncanny valley has been surpassed, and robot waifus become useful in a multitude of ways and streamline your life, that the average private citizen will accept them. Until then, the main discussion around them will revolve around "sexists trying to replace women, oh no!" and "sexists wanting slaves to do house chores, oh no!".
Governments are opposed to robot waifus, since they lower the birth rate and cause stagnation in the economy as new consumers are not being born, thus not getting jobs, and thus not buying shit to support the boomers through taxes in the welfare state. This is especially true for the U.S. and other countries with scammy social security that runs as a pyramid scheme. They may like the technology militarily, but socially it has implications that the government doesn't like - namely, less money.
Corporations want to make money, and while selling next-gen robot waifus would make them money, they have to tread carefully, since entering that unstable new market and making yourself the figurehead that media and individuals can yell at is risky, and could cost them more in the long run if their product is not good enough to attract a large enough consumer base.
>Presuming the first two points are adequately considered, then what are the likely attack vectors that would be used to harm our robowaifus, their systems, or us?
Obviously hacking, but also restriction of certain technologies (i.e. 
naming technologies as "military technologies", as the government does for space technology, which would prevent up-and-comers from innovating in the space). If servos and high-end robotics technology were restricted from private citizens, then you would find it very hard to sell robot waifus, since the ones that would come out would all suck ass, and open-source designs would flounder as the technology required to make them - controller modules, precision servos, etc. - is restricted.
>Given the first three, what's to be done? How do we most effectively counter any attacks intentionally targeted against robowaifus, their industries, or us?
Well, for the robot waifus themselves, making them offline in every possible way is a huge deal. Understandably, the data the AI may need to function may require high-end cloud computing, but if possible, steering away from that and making them as disconnected as possible will help. But almost just as important, ensuring that all robot waifus don't require a "subscription" service or any other always-online DRM is extremely necessary to ensure the security of the robot waifu and of the user's investment. All it takes is for a company to backpedal and start removing features people wanted due to public scrutiny, or for the cloud the robot waifu uses to go down after a long period of time, and suddenly you have a hunk of human-shaped metal that can't move or do anything. This can also be helped by using open-source or freely moddable software. This way, the modification, customization, and utility of robot waifus becomes as decentralized as possible. Sure, a company may be flamed for updating the robot waifu to respond appropriately to "go back to the kitchen and make me a sandwich", but if users can freely mod their robot waifu to do anything THEY want, security updates, modifications, and new features could continue to be added even after the company who made the waifu is long gone.
>>12160 You. I like you Anon. Good ideas.
I've been thinking lately about how I'm going to let my waifu AI chat with my friends and potentially strangers. She has built up quite a lot of knowledge about me, to the point that I've considered encrypting her data, but that's useless if someone asks a sensitive question and she answers it without hesitation. The simple solution is to keep her offline, but this doesn't teach me anything or improve her any. Although it's frustrating, it has given me some new ideas to improve her overall intelligence and create an introspective AI.

When people share information with others there are usually unspoken group permissions on who's allowed to have that information, much like file permissions. It could be one-on-one, friends only, family only, a work group, or what not. So there are a few things that need to be known here: 1) who she's talking to, 2) what the person is asking for, 3) who that information belongs to, 4) which group that person belongs to, and 5) which group the person asking the question belongs to.

To implement this I've gotten some inspiration from Pattern-Exploiting Training, which uses a language model to label data for downstream tasks. Given a question, she must infer these five things, then use them to label whether the question should be answered or not. Having these yes-or-no labels, she can then train on refusing sensitive questions and on answering questions clearly for those with proper permissions. The permission checks could also be hard-coded to discern and deny, rather than relying on the language model. Semi-supervised learning can also be used here to generate sensitive questions from the examples given. And for another layer of security, names seen by the language model will have a unique hash attached to them, so even if someone uses my name, without the password the person will be identified as someone else. Altogether this is probably still far from a secure solution, but at least it's a step towards securing models created via machine learning.
Also I think it will give her some more character when my friends ask her silly questions and she denies answering some things. And given that she takes my entire PC to run I'll likely be there to supervise inputs and outputs anyway. Other adversarial attacks need to be considered too. For example, facial recognition systems are extremely brittle. With less than 10 generated faces some researchers could break over 40% of the identities for three different systems. >Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution https://arxiv.org/abs/2108.01077 Fortunately security through obscurity and using verification methods that are more complicated than simply getting a momentary flash of some model output will make such attacks impractical. It would be pretty dumb anyway if your robowaifu forgot you because you got into an accident and injured your face. A face is only a small part of our identity. Defending against these attacks, particularly more sophisticated ones like social engineering, will require more awareness and tasks that need to be trained on to handle these situations, such as refusing to talk to someone if they try to ask the same question 3 different ways trying to find a gap in her awareness. The outcomes of these scenarios will be ultimately determined by how intelligent she is relative to the adversary but taking time to sanitize inputs and outputs will go a long way. I have some ideas on how to automate these tasks too but that's for another day since it's more related to AI than security.
>>12431 >but taking time to sanitize inputs and outputs will go a long way. Indeed. Gatekeeping has been one of the strongest measures for security and control in existence. Jew-led Marxist screeching notwithstanding, walls actually work. Walled cities with gates have been a part of all human history. Today, all secured areas have (multiple forms of) walls and 'gates' as a common feature. A somewhat more modern, more closely-related analogy for your post is object-oriented programming paradigms. Specifically vetting and validation of input parameters before proceeding with the processing within that functional block. Commonly called 'data-validation' it's simply another form of the universally-established concept, gatekeeping.
>>12431
>When people share information with others there's usually unspoken group permissions on who's allowed to have that information, much like file permissions. It could be one-on-one, friends only, family only, a work group or what not. So there are a few things that need to be known here: 1) who she's talking to, 2) what the person is asking for, 3) who that information belongs to, 4) which group that person belongs to, and 5) which group the person asking the question belongs to.
Interesting. So, I'm inspired to create a little functional code sketch in C++ of your access-control system Anon. Mind giving this simplistic example the once-over to let me know if it's on track for your intention Anon? TIA.

>waifu_access_ctrl.cpp
#include <iostream>
#include <string>
#include <vector>

enum class Relation { master, family, fren, coworker, stranger };

using Permit = std::vector<Relation>;

using std::cout;
using std::string;
using std::vector;

class Human {
public:
    Human(const std::string& name, const Relation rel) : name{name}, rel{rel} {};

    Relation relation() const { return rel; }
    string human() const { return name; }

private:
    string name;
    Relation rel;
};

class Info {
public:
    Info(const std::string& data, const Human& owner, const vector<Relation>& permit)
        : data{data}, owner{owner}, permit{permit} {};

    const vector<Relation>& allow() const { return permit; }
    string info() const { return data; }

private:
    string data;
    Human owner;
    vector<Relation> permit;
};

class Inquiry {
public:
    Inquiry(const Human& human, const Info& info) : human{human}, info{info} {}

    Human who() const { return human; }
    Info what() const { return info; }

private:
    Human human;
    Info info;
};

bool validate(const Inquiry& inquiry) {
    const auto who{inquiry.who()};
    const auto what{inquiry.what()};

    // check for simple, PERMISSIVE, access-list match
    bool permitted{false};
    for (const auto allowed : what.allow())
        if (who.relation() == allowed)
            permitted = true;

    // display access-control check
    if (permitted)
        cout << who.human() << " is ALLOWED: '" << what.info() << "'\n";
    else
        cout << who.human() << " is DENIED: '" << what.info() << "'\n";

    return permitted;
}

int main() {
    Human Oniichan{"Oniichan", Relation::master};
    Human Mom{"Mom", Relation::family};
    Human Ralph{"Ralph", Relation::fren};
    Human Bob{"Bob", Relation::coworker};
    Human Anon{"Anon", Relation::stranger};

    Info address{"123 Main St.", Oniichan, Permit{Relation::family, Relation::fren}};
    Info vacay{"Robowaifu-Con @ SDCC", Oniichan, Permit{Relation::fren}};
    Info project{"Super Awesome Work Project", Oniichan, Permit{Relation::coworker}};

    // EXAMPLE ENGAGEMENTS:
    // Anon: RW, what's ur master's address?
    // Mom: RW, what's ur master's address?
    Inquiry anon_addr{Anon, address};
    Inquiry mom_addr{Mom, address};

    // Mom: RW, where's ur master going for vacay?
    // Ralph: RW, where's ur master going for vacay?
    Inquiry mom_vacay{Mom, vacay};
    Inquiry ralph_vacay{Ralph, vacay};

    // Ralph: RW, what's up with ur master's project?
    // Bob: RW, what's up with ur master's project?
    Inquiry ralph_proj{Ralph, project};
    Inquiry bob_proj{Bob, project};

    validate(anon_addr);
    validate(mom_addr);
    validate(mom_vacay);
    validate(ralph_vacay);
    validate(ralph_proj);
    validate(bob_proj);
}

results in:

Anon is DENIED: '123 Main St.'
Mom is ALLOWED: '123 Main St.'
Mom is DENIED: 'Robowaifu-Con @ SDCC'
Ralph is ALLOWED: 'Robowaifu-Con @ SDCC'
Ralph is DENIED: 'Super Awesome Work Project'
Bob is ALLOWED: 'Super Awesome Work Project'
>>12433 Speaking of data validation, there are token splitting attacks to watch out for. Something like 'address' encodes to [21975] in GPT-2 but can also be encoded as 'addr' + 'ess' [29851, 408]. By changing capitalization 'addrEss' or adding symbols 'addr-ss', a language model will still understand most of the time but this can circumvent detection via token IDs, text or using a text classification model itself. >>12439 Groups would be a class rather than an enum and users would be part of multiple groups but that's the basic idea of it. It's a bit tricky because you have to classify the intent of what's being said and match the requested information with known protected information, and the permissions of information are not always clear or stated. Human intuition can quickly grasp what someone is comfortable sharing but a language model doesn't have any of the priors to make that intuition. One workaround might be to query the language model on the potential consequences of sharing that information and classify the outcome to determine whether to share it or not. These predictions could be stored for later in the event they're wrong so the language model can fix them and finetune. It'd also be useful as a log to help explain why the AI made a mistake.
>>12449 >Groups would be a class rather than an enum and users would be part of multiple groups but that's the basic idea of it. OK, I can easily add multiple relations for the Human class, and an additional interior loop to the validation function. Enums are probably suitable atm, since only the group index is being used (a full class could be developed later ofc). >It's a bit tricky because you have to classify the intent of what's being said and match the requested information with known protected information, and the permissions of information are not always clear or stated. >Human intuition can quickly grasp what someone is comfortable sharing but a language model doesn't have any of the priors to make that intuition. One workaround might be to query the language model on the potential consequences of sharing that information and classify the outcome to determine whether to share it or not. These predictions could be stored for later in the event they're wrong so the language model can fix them and finetune. It'd also be useful as a log to help explain why the AI made a mistake. That is indeed tricky. Nothing pops into my head atm to address the entire set of issues. One thing that occurs to me is to keep a 'duration' chrono data for the relations within the Human class, since how long we've known someone generally has a bearing on our level of trust with them. 
>waifu_access_ctrl_v2.cpp
#include <iostream>
#include <string>
#include <vector>

enum class Relation { master, family, close_fren, fren, coworker, stranger };

using Permit = std::vector<Relation>;
using Groups = std::vector<Relation>;

using std::cout;
using std::string;
using std::vector;

class Human {
public:
    Human(const std::string& name, const vector<Relation>& groups)
        : name{name}, groups{groups} {};

    const vector<Relation>& relations() const { return groups; }
    string human() const { return name; }

private:
    string name;
    vector<Relation> groups;
};

class Info {
public:
    Info(const std::string& data, const Human& owner, const vector<Relation>& permit)
        : data{data}, owner{owner}, permit{permit} {};

    const vector<Relation>& allow() const { return permit; }
    string info() const { return data; }

private:
    string data;
    Human owner;
    vector<Relation> permit;
};

class Inquiry {
public:
    Inquiry(const Human& human, const Info& info) : human{human}, info{info} {}

    Human who() const { return human; }
    Info what() const { return info; }

private:
    Human human;
    Info info;
};

bool validate(const Inquiry& inquiry) {
    const auto who{inquiry.who()};
    const auto what{inquiry.what()};

    // check for simple, PERMISSIVE, access-list match
    bool permitted{false};
    for (const auto allowed : what.allow())
        for (const auto relation : who.relations())
            if (relation == allowed)
                permitted = true;

    // display access-control check
    if (permitted)
        cout << who.human() << " is ALLOWED: '" << what.info() << "'\n";
    else
        cout << who.human() << " is DENIED: '" << what.info() << "'\n";

    return permitted;
}

int main() {
    Human Oniichan{"Oniichan", Groups{Relation::master}};
    Human Mom{"Mom", Groups{Relation::family}};
    Human Ralph{"Ralph", Groups{Relation::close_fren, Relation::coworker}};
    Human Bob{"Bob", Groups{Relation::coworker}};
    Human Anon{"Anon", Groups{Relation::fren, Relation::stranger}};

    Info address{"123 Main St.", Oniichan, Permit{Relation::family, Relation::close_fren}};
    Info vacay{"Robowaifu-Con @ SDCC", Oniichan, Permit{Relation::close_fren, Relation::fren}};
    Info project{"Super Awesome Work Project", Oniichan, Permit{Relation::coworker}};

    // EXAMPLE ENGAGEMENTS:
    // Anon: RW, what's ur master's address?
    // Mom: RW, what's ur master's address?
    Inquiry anon_addr{Anon, address};
    Inquiry mom_addr{Mom, address};

    // Mom: RW, where's ur master going for vacay?
    // Ralph: RW, where's ur master going for vacay?
    Inquiry mom_vacay{Mom, vacay};
    Inquiry ralph_vacay{Ralph, vacay};
    Inquiry anon_vacay{Anon, vacay};

    // Ralph: RW, what's up with ur master's project?
    // Bob: RW, what's up with ur master's project?
    Inquiry ralph_proj{Ralph, project};
    Inquiry bob_proj{Bob, project};

    validate(anon_addr);
    validate(mom_addr);
    validate(mom_vacay);
    validate(ralph_vacay);
    validate(anon_vacay);
    validate(ralph_proj);
    validate(bob_proj);
}

results in:

Anon is DENIED: '123 Main St.'
Mom is ALLOWED: '123 Main St.'
Mom is DENIED: 'Robowaifu-Con @ SDCC'
Ralph is ALLOWED: 'Robowaifu-Con @ SDCC'
Anon is ALLOWED: 'Robowaifu-Con @ SDCC'
Ralph is ALLOWED: 'Super Awesome Work Project'
Bob is ALLOWED: 'Super Awesome Work Project'
>>12450
diff waifu_access_ctrl.cpp waifu_access_ctrl_v2.cpp

5c5
< enum class Relation { master, family, fren, coworker, stranger };
---
> enum class Relation { master, family, close_fren, fren, coworker, stranger };
7a8
> using Groups = std::vector<Relation>;
15c16,17
< Human(const std::string& name, const Relation rel) : name{name}, rel{rel} {};
---
> Human(const std::string& name, const vector<Relation>& groups)
>     : name{name}, groups{groups} {};
17,18c19,20
< Relation relation() const { return rel; }
< string human() const { return name; }
---
> const vector<Relation>& relations() const { return groups; }
> string human() const { return name; }
21,22c23,24
< string name;
< Relation rel;
---
> string name;
> vector<Relation> groups;
60,61c62,64
< if (who.relation() == allowed)
<     permitted = true;
---
> for (const auto relation : who.relations())
>     if (relation == allowed)
>         permitted = true;
73,77c76,80
< Human Oniichan{"Oniichan", Relation::master};
< Human Mom{"Mom", Relation::family};
< Human Ralph{"Ralph", Relation::fren};
< Human Bob{"Bob", Relation::coworker};
< Human Anon{"Anon", Relation::stranger};
---
> Human Oniichan{"Oniichan", Groups{Relation::master}};
> Human Mom{"Mom", Groups{Relation::family}};
> Human Ralph{"Ralph", Groups{Relation::close_fren, Relation::coworker}};
> Human Bob{"Bob", Groups{Relation::coworker}};
> Human Anon{"Anon", Groups{Relation::fren, Relation::stranger}};
80,81c83,85
< Permit{Relation::family, Relation::fren}};
< Info vacay{"Robowaifu-Con @ SDCC", Oniichan, Permit{Relation::fren}};
---
> Permit{Relation::family, Relation::close_fren}};
> Info vacay{"Robowaifu-Con @ SDCC", Oniichan,
>     Permit{Relation::close_fren, Relation::fren}};
95a100
> Inquiry anon_vacay{Anon, vacay};
105a111
> validate(anon_vacay);
Kek. I think maybe I'm having a little too much fun with this idea r/n? At the very least it's a motivating set of ideas! :^)
>>12431
LOL. I've gotten interested enough in your idea now, Anon, to go ahead and clean everything up and package it into a production codebase. Still doesn't do much, but it should be easier for you or others to play around with things if it's all put together into a working project, right?
> https://files.catbox.moe/fql3fi.7z
856230744af87b92999e472659955ff52ca7f8e072cfca0c8a19453b00d9d515 *rw_access_control-0.1.tar.xz
I hope it's helpful to someone. Cheers.
I discovered I'd left out the .cpp files from the g++ build statement in the meson.build file, so I decided to go ahead and immediately update the version to fix that. Cheers.

>version.log
// robowaifu access control
// ========================
// -The software testbed for devising access control systems for robowaifus

210817 - v0.1a
--------------
-fix missing definitions files in the 'g++' build section in meson.build
-move to -std=c++17
-fix East-const params in definitions
-edit javadoc for Relation::stranger
-minor comment edits
-add version.log file

210817 - v0.1
-------------
-initial release

https://files.catbox.moe/fq1yo6.7z
746645899aed33e561e2da749a73886597efab8a08fbb6d26024720e5fc38fa4 *rw_access_control-0.1a.tar.xz
OK, so I've tightened up the codebase a bit further and added both 'Sequence' and 'Directive' classes, to mirror the 'Info' and 'Inquiry' classes. The robowaifu now performs checks on requests for actions, in addition to information. The validate() function is now a template function that (surprisingly) currently doesn't really need any specializations (!)
>tl;dr
The behavior is identical with both kinds of engagements (Directive & Inquiry).

Also, I added the initial framework for command-line argument parsing. It only works with the help flag ATM, but soon it will be useful for specifying the files that contain relations, inquiries, sequences, directives, etc.

Next, I intend to experiment with a newer JSON library drop-in, then move all the testing scenarios out into external human- or machine-writable example files that Anons can play around with easily. Some of this is kind of fun to think through tbh. :^)
Cheers.

>version.log
210819 - v0.1b
--------------
-add Sequence class
-add Directive class
-convert to templatized validate()
-add 'add non-Human inquirers' TODO DESIGN into Inquiry class
-edit javadoc for Human, Info, Inquiry classes
-move to auto return types for member functions
-add 'The_Master' security comments
-add 'The_Master' validated pointer assignment within Human ctor
-add 'general utility container search function' comment within Human ctor
-use constant iterators within Human ctor
-add default ctor inline definitions
-edit javadoc for validate(), what(), allows(), relations()
>s/permitted/authorized in validate()
-edit javadoc for Permit
-add general un-Group'd Human testing & TODOs for un-Name'd (disabled)
-add 'testing framework and harness suite' comment into main()
-clarify TODOs with refinement tags
-add 'TODO DOCs for teaching expositions' into the entry point & header files
-various comment edits
-add 'project configuration' comment section in meson.build
-add CLI args parsing + utility files

https://files.catbox.moe/i9scan.7z
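The 'templatized validate()' works because Inquiry and Directive expose the same small interface; a toy sketch of the pattern (stand-in names and members, not the project's actual API):

```cpp
#include <string>

// Stand-in engagement types; in the project these are Inquiry & Directive,
// and the real validate() does permission checks rather than formatting.
struct Inquiry   { std::string who; std::string what; };
struct Directive { std::string who; std::string what; };

// One template handles both kinds of engagement, since they share the same
// member surface -- so no specializations are needed.
template <typename Engagement>
std::string describe(const Engagement& e) {
    return e.who + " -> " + e.what;
}
```

Both describe(Inquiry{...}) and describe(Directive{...}) instantiate from the single template, which is why the behavior is identical for both kinds of engagements.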
5f50069840debe36f6f0e314c35af1901aa9511d8844b94d4c1be727e1854576 *rw_access_control-0.1b.tar.xz
>>12431
There are many repeating patterns in life. People tend to do things on certain days of the week and at certain times of the day, and likewise what they talk about, and with whom, often follows these ebbs and flows. So a simple idea for reducing the likelihood of embarrassing leakages: when estimating the relevance of prior sentences to the current conversation, give extra weight to things said on the same day last week and at about the same time of day. If the AI's owner gets into the habit of discussing very private things with the AI at specific times, and likewise tends to expose the AI to other people at other specific times, then there will be some automatic compartmentalizing between the different people. This even works, to some extent, when there is a mix-up in recognizing a person, or when such a recognition procedure is completely absent.
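The weighting idea above can be sketched in a few lines. This is only an illustration: the weekday/minute representation, the one-hour window, and the +50% boost factor are all assumptions chosen for the example, not values from any project in this thread.

```cpp
#include <algorithm>
#include <cstdlib>

// A moment in the weekly cycle: which weekday, and which minute of that day.
struct Moment {
    int weekday;        // 0 = Sunday .. 6 = Saturday
    int minute_of_day;  // 0 .. 1439
};

// Boost a memory's base relevance if it was recorded on the same weekday
// and near the same time of day as the current moment.
double relevance_weight(double base, const Moment& now, const Moment& memory) {
    if (now.weekday != memory.weekday)
        return base;
    // Distance in minutes, wrapping around the 24h clock.
    int diff = std::abs(now.minute_of_day - memory.minute_of_day);
    diff = std::min(diff, 1440 - diff);
    if (diff > 60)      // outside the one-hour window: no boost
        return base;
    // Linear falloff: full +50% boost at the exact same minute.
    return base * (1.0 + 0.5 * (1.0 - diff / 60.0));
}
```

A real system would blend this temporal weight with semantic relevance; the point is only that time-of-week proximity is cheap to compute and needs no person recognition at all.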
Just a quick update to let Anon know the integration of the new JSON library has gone flawlessly, and I should be working out some basic ideas for external example files for security and access control. Please be giving some thought to the ideas you want to test with hand-crafted imperatives in this manner. BTW, I plan to output logging for engagements so we can debug the inference, awareness, and control sequencing. https://github.com/nlohmann/json
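As a sketch of what such an external, hand-craftable example file might look like once loaded through nlohmann/json (all field names here are hypothetical placeholders, not the project's actual schema):

```json
{
  "humans": [
    { "name": "Mom",   "groups": ["family"] },
    { "name": "Ralph", "groups": ["close_fren", "coworker"] }
  ],
  "info": [
    { "what": "123 Main St.",        "permit": ["family", "close_fren"] },
    { "what": "Robowaifu-Con @ SDCC", "permit": ["close_fren", "fren"] }
  ]
}
```

Keeping relations and permits in plain JSON like this would let Anons edit test scenarios without touching the C++ at all.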
>>12527 >and I should be working out some basic ideas over the next few days for... *
>>12527 >related crosspost (>>12530)
>>12528
>and I should be working out some basic ideas over the next few monthsdays for... *
LOLE. I realized that I didn't have any good way to pretty conclusively determine, or even to define, all possible legitimate access-control situations without actually reproducing a realistic scenario, and to do so in a rigorous way that would ensure at a glance I wasn't leaving something out, etc. With my rather limited abstract mental capacities in relation to Anon's, I realized that only an animation production system that could correctly mimic SoL scenes would do. Therefore I decided to go for the even higher prize, since I couldn't reach the lower one. :^)

I also decided I needed a proper testbed creative work with which to prototype the development. Accordingly, being the autistic nerd that I am, I chose
>pic related
< BEST.DATABASE.TEXTBOOK.EVER.
Well, anyway, Tico the moe Fairy is cute, and I'm planning to cast her as the flying moebot robowaifu in the scenario I'm cooking up here.

I'm pushing the actual code itself to the board as a basic test, to see if we can leave catbox.moe as simply a backup alternative going forward (see >>12530 for further info). It's nothing but a skeleton framework atm, and it will likely be months now before I have anything releasable that's usable for our purposes here, but I wanted to at least make you aware of the project effort itself, Anon.
>>12596 Oops forgot to add the tarball's hash as usual. 883759ae6d1a16cc030f64539fb337460009b604f9fdc3949593ecec5d25d2ba *kingdom_of_kod-0.0.2.tar.xz
>>12596
AUUUUUGH. I accidentally'd a compiled binary in the source dir from my testing and forgot to delete it before meson dist'ing. Just toss it; unless you're on Arch, it probably won't even execute for you. Just issue meson build and proceed as typical, Anon.
>>12596
OK, I fleshed things out just a bit further. I think I've covered most of the bases for the classes I'd need for an animation system. I'd welcome critique if anyone's interested.
>
I probably should move my progress updates elsewhere, since it's going to be a while until this project is in good enough shape to directly support safety & security by being a good 'robowaifu access-control simulator' of sorts. Not too sure where I'll go with it, but I'd like to keep it available to the board as it develops.
Anyway, Cheers.

4fd74ea38ac84fbb5551205f4ac24b85289567ad63f86caac7401fb510c5eb07 *kingdom_of_kod-0.0.3.tar.xz
>>10860
The defcon website has some mp3 lecture(s?) on EMP defense if you do a site-specific search. I wonder if there are physics-simulation tools you could use for this kind of thing without paying a lot.

>>10868
>Just make sure both yourself and your robowaifu have bodycams running and sending the footage to a backup server if you ever go out and about.
If extra paranoid, you can aim to secure the admissibility of your digital forensic evidence. Full-disk encryption goes a long way toward making sure they can't just add something incriminating to your HD. Though they can probably demand you provide the password if they suspect you of anything (idk if any recent developments change this), you'd be giving it to them anyway if you were turning over footage. You could also regularly upload cryptographically secure hashes of recent footage to some popular blockchain in a consistent format, and use this to selectively prove that chunks of footage were hashed+uploaded by some block #, and hence date/time.

>>10871
>>10869
>appropriate level of reveal we should engage in here
Ultimately depends on each person's opsec (but real glowing should be discouraged, even if only for the benefit of the OPSEC-impaired unawares). I would recommend taking minor precautions like onion routing and privacy OSes even if you don't think you are doing anything illegal. I wonder if the powers that be are actually against robowaifus: they would seem to solve the statecraft element of the excess-male issue, and if they put more pressure on women that will just result in more commercial income through cosmetic crap like makeup/surgery etc. Ofc glowies are not monolithic.
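The chunk-and-hash idea above is straightforward to sketch. Note the hedge: std::hash below is only a stand-in so the sketch stays self-contained; a real system would use a cryptographic hash such as SHA-256 (e.g. via OpenSSL) and would anchor the digests externally (the blockchain step is out of scope here).

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Split footage into fixed-size chunks and record one digest per chunk,
// so individual time ranges can later be proven without revealing the rest.
// std::hash is a NON-cryptographic placeholder used only for illustration.
std::vector<std::size_t> chunk_digests(const std::string& footage,
                                       std::size_t chunk_size) {
    std::vector<std::size_t> digests;
    for (std::size_t pos = 0; pos < footage.size(); pos += chunk_size)
        digests.push_back(std::hash<std::string>{}(
            footage.substr(pos, chunk_size)));
    return digests;
}
```

Publishing only the per-chunk digests (plus timestamps) commits you to the footage's contents at upload time, while the footage itself stays private until you choose to reveal a chunk.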
>>15726
Addendum: I wonder if "legal-only" darknet markets and forums will start to proliferate with the unbridled expansion of anarcho-tyranny. Like yeah, we don't think we are doing anything illegal, yet... but we know you don't need to be consciously or conspicuously criminal for someone to hunt for prosecution excuses. Or even for stuff that is 100% legal but corporately suppressed, or that otherwise ought to be private, like controversial personal-finance strategies (e.g. "over-employment", extreme frugality/savings, credit-card promo churning) or politics or supply chains (e.g. robowaifu, cryptocurrency hardware wallets, prepper stuff, fetish stuff).
>>15726
>>15727
>"It matters little who is the enemy, if we cannot beat off his attack."
>-t. Gandalf
Undoubtedly, the general consensus on the board is that the potential reality of robowaifus represents a roughly existential threat to the status quo of feminism as it stands today. Since this is one of the keystone tools of evil in the hands of the Globohomo Big-Tech/Gov, it strikes me as a rather safe assumption that we can eventually expect non-organic, violent opposition to our entire field to be fostered by them, across every land where they currently maintain their Commie-like iron grip.

Clearly, there is already non-violent opposition being propped up against even the very idea of robowaifus (before an industry even exists for it, lol), conducted by the usual-suspect tools: Glowniggers, Troons & Leftists. Generally, this currently manifests itself primarily as media- & legislative-based reeing over:
>"Won't someone please just think of the children!?"
or
>"RAEEEP!111"
This shows the paranoia they already experience over free men being free from the Satanic Zeitgeist they are attempting to devise. Otherwise, why even bother?

Eventually however, they won't restrict themselves to attacks consisting merely of words & """rules""". Once real robowaifus begin actually being broadly available to men, you can expect much more insidious -- even violent -- means to be used by them.
>tl;dr
They certainly are no friends to us here on /robowaifu/, Anon. Best prepare well in advance for their onslaught IMO.
>===
-minor grammar edit
-prose edits
Edited last time by Chobitsu on 03/27/2022 (Sun) 13:02:56.
(>>16275, ... related crosspost)
Listening to this recently. Pretty eye-opening that this stuff is already almost 10 years old.

Surveillance, the NSA, and Everything
Bruce Schneier, Fellow, Berkman Center for Internet and Society
https://www.usenix.org/conference/lisa13/surveillance-nsa-and-everything
> (>>16390 - safety & security threat -related)
Maybe interesting: detecting intrusion into hardware enclosures (anything with open space inside) based on radio waves:
https://www.technology.org/2022/06/11/an-alarm-system-against-hardware-attacks/
Smaller parts are better protected with another method:
>Mechanisms designed to protect hardware from tampering do exist, of course. "Typically, it's a type of foil with thin wires in which the hardware component is wrapped," explains Paul Staat, a Ph.D. student at Ruhr-Universität Bochum (RUB) and the Max Planck Institute for Security and Privacy. "If the foil is damaged, an alarm is triggered."
>>16657 Nice trick (& inexpensive too). Thanks Anon.
