/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



Open file (279.12 KB 616x960 Don't get attached.jpg)
Open file (57.76 KB 400x517 1601275076837.jpg)
Ye Olde Atticke & Storager's Depote Chobitsu Board owner 09/05/2021 (Sun) 23:44:50 No.12893
Read-only dump for merging old threads & other arcane type stuff. > tl;dr < look here when nowhere else works anon. :^)
In the same spirit as the Embedded Programming Group Learning Thread 001 >>367 , I'd like to start a thread dedicated to helping /robowaifu/ get up to speed with the C++ programming language. The modern form of C++ isn't actually all that difficult to get started with, as we'll hopefully demonstrate here. We'll basically stick with the C++17 dialect of the language, since that's very widely available on compilers today.

There are a couple of books about the language of interest here, namely Bjarne Stroustrup's Programming -- Principles and Practice Using C++ (Second edition) https://stroustrup.com/programming.html , and A Tour of C++ (Second edition) https://stroustrup.com/tour2.html . The former is a thick textbook intended for college freshmen with no programming experience whatsoever, and the latter is a fairly thin book intended to get experienced developers up to speed quickly with modern C++. We'll use material from both ITT.

Along the way, I'll discuss a variety of other topics somewhat related to the language, like compiler optimizations, hardware constraints and behavior, and generally anything I think will be of interest to /robowaifu/. Some of this can be rather technical in nature, so just ask if something isn't quite clear to you. We'll be using an actual small embedded computer to do a lot of the development work on, so some of these other topics will flow naturally from that approach.

We'll talk in particular about data structures and algorithms from the modern C++ perspective. There are whole families of problems in computer science that the language today makes ridiculously simple to solve effectively at an industrial scale, and we'll touch on a few of these in regards to creating robowaifus that are robust and reliable. 
>NOTES:
-Any meta thoughts or suggestions for the thread I'd suggest posting in our general /meta thread >>3108 , and I'll try to integrate them into this thread if I can do so effectively.
-I'll likely (re)edit my posts here where I see places for improvement, etc. In accord with my general approach over the last few months, I'll also make a brief note as to the nature of the edit.
-The basic goal is to create a thread that can serve as a general reference to C++ for beginners, and to help flesh out the C++ tutorial section of the RDD >>3001 .
So, let's get started /robowaifu/.
What book would you recommend for someone who has some previous programming experience, though not necessarily with C++: Programming: Principles and Practice Using C++, or C++ Primer? I've done some stuff in Python and Java, and I've wanted to learn C++, and those two are the ones I hear recommended most often for beginners.
>>4915 >''and the latter is a fairly thin book intended to get experienced developers up to speed quickly with modern C++. // Tour++
>>4915 >>4916 seconding this, I'd really like to learn C++ as well.
>>4937 Thanks for the interest Anon. My schedule is in a bit of a ruckus atm. I'll try to update the thread on weekends. If you'd like to get ahead, then study the RaspberryPi 3, that will be the next topic. We'll be using it as the primary teaching platform ITT.
I decided that the process of getting the RaspberryPi boxes fully set up was detailed and involved enough, and only indirectly related to the main topic ITT, that it should be in its own thread. >>4969
I mean it's a really big language. This stems from its very wide generality of usage over 40 years; it serves an almost unbelievably wide range of usage domains. So (especially) to help newcomers get off on the right foot, I'm going to guide everyone into setting themselves up with good documentation. Don't be concerned with understanding any of this ATP, it is simply to get you on the correct path for documentation from step one. NOTE: I'll usually speak with the 'tutorial voice' ITT (ie, I'll give unqualified instructions). If you're an old hand at this--be patient. If you're a greenhorn, then I recommend simply following every step I give you. You'll then know you are on the same page as me. Almost all of the instructions ITT assume you are working along with us on the Raspberry Pi (either the real hardware or the virtual desktop). So let's get started setting up your most important C++ bookmarks first.
>>4992
1. Open Chromium (display the bookmarks bar), and bookmark all of these:
2. Goto https://en.cppreference.com/w/
3. Goto https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines -click 'Turn ON syntax'
4. Goto https://isocpp.org/get-started
5. Goto http://www.cplusplus.com/ -NOTE: Be aware this is nowhere near as authoritative as cppreference is.
>>4993 Since it's an officially-recognized international standard, ISO C++ isn't defined by any of these sites; they are merely there to help professionals quickly get up to speed. The actual definition of the language is the standards document itself. While it's filled with an incredibly high degree of technical specificity (and therefore of little use to a beginner), I'll go ahead and add in a draft (ie, C++23 version) copy of it I rendered a few months ago: > If anyone wants instructions on creating your own up-to-the-minute copy of the document, just ask ITT.
Open file (180.71 KB 1536x828 learning.jpg)
I'm brand new to trying to learn C++. Like only a couple days in. A total baby when it comes to programming. Before this the most I've done is follow a tutorial to tweak a python script to get a Minecraft server going. I've been slowly going through this video to get the basics down. Any newbie like me shouldn't be afraid to learn at their own pace! https://youtu.be/vLnPwxZdW4Y This guy's simple examples and reiteration of concepts leading into other concepts helps the logic of C++ make sense. The development program he uses and recommends, Code::Blocks, is very helpful in the process of getting something to run. I'll link it here: http://www.codeblocks.org/ I've gotten about an hour into the material and I'm making up my own little projects to help me understand each subject and the little ins-and-outs of C++. I've toyed around with variables to simulate a Pokémon battle and I'm messing around with user input prompts to customize a fake conversation. I know this is baby town for C++ and it's already super fascinating!
>>4995 That's good to hear Anon, thanks for sharing the cap and the link! Just let us know ITT if you have any questions as we go along. Having lots of different resources for learning is a good thing. Mind telling us about the program you're working on? >and I'm making up my own little projects to help me understand each subject Yes, that's a good approach for beginners. Lots and lots of small projects. This makes things easier to grasp as we go along.
I'm working on setting things up for our C++ class thread, and for those of you interested in participating I have a question:
-Would you rather I treat you like professionals who need to face the real-world complexity of building robowaifu software?
-Or would you prefer a simplistic approach where things are much easier to grasp?
The benefit of the first is that you'd be able to create real robowaifu software much more quickly (but the learning curve is steep for beginners). The benefit of the second is that it would be more /comfy/, but it would take you a long time to become proficient enough to contribute here with software engineering. Please let me know which you'd prefer.
>>5085 Anyone?
Open file (228.11 KB 1024x768 chii_hugs_hideki.jpg)
Alright, I plan to pursue a professionally-oriented, embedded developer approach to the class. This will be oriented towards using the best of C++ in modern embedded environments (such as robowaifu engineering), with plenty of C lore thrown in to boot (just using best practices from C++ instead where feasible). Therefore this will not really be a beginner's class. I'd suggest the cplusplus.com tutorials for you guys. http://cplusplus.com/doc/tutorial/
>>5085 I apologize for not responding. I feel like an ass because you've taken time out of your busy schedule to offer all of us FREE C++ training. Since I am the only one who seems to be responding, I am requesting we do the 2nd one. I am the anon who said that I have some experience in HTML, Javascript, and CSS.
>>5143 Also two more things: could Visual Basic be used to code robowaifus, or is it exclusively used for web design? And how much has C++ changed in the last 20 years? The last time my dad coded in C++ was around 20 years ago, mostly because he was a web designer and didn't really play around with C++.
>>5143 >to offer all of us FREE C++ training Heh, I haven't really tried doing this before (training others) so I will have a lot to learn myself I'm sure. Best to be patient with each other, yea? :^) Besides, I'm only looking after this on the weekends, so it's not too much of an impact really. >i am requesting we do the 2nd one I see then. Hmm, I definitely want to help anyone who wants to learn here. But it's kind of a juggling act to be able to address the needs of the beginner and the more experienced (without boring both at the same time haha). Let me give it some thought this week and I'll try to think through a good approach that might be helpful to everyone here. >>5144 >how much has C++ changed in the last 20 years? I'm not too sure, but I know that since 2011 the language has gone through a lot of changes and improvements. I'm guessing guys like your Dad might like the newer things if they took the time to learn them? Anyway, there's little doubt that it's one of the most important languages in the world for the demanding needs of engineering robowaifus, so it's a really good choice for us.
>>4993 1. Goto https://releases.llvm.org/7.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/list.html We're going to be setting up and using both clang-tidy & clang-format on our RPi boxes, so you may as well begin now to familiarize yourself with them.
Open file (402.67 KB 1366x768 ClipboardImage.png)
>>5146 >v7 Check that. Use this link instead, since RPi Buster has v9 in the repos: 1. Goto https://releases.llvm.org/9.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/list.html
First things first. Let's make sure we're all on the same page with our compilers.
g++ --version
clang++ --version
If you worked your way through the RPi 'sepplesberry' setup thread >>4969, then you should see the same thing. > #1 We'll be using C++17 for the most part, and these versions of our compilers both have good support for it. If you are on an older compiler, just let me know ITT and we'll try to devise alternatives that will work with C++11/14 for you. Don't use any old compiler that can't even support C++11.
>PROTIP: Writing software != typing text into an IDE and pressing the big green button.
With our little RPi dev boxes, we are enjoying the real luxury of having an actually-usable GUI and a nice IDE directly on our embedded system. But in general you can't expect this. Software development for embedded is usually done on external dev boxes, then the cross-compiled binaries are pushed up to the target hardware over the wire to run it (or indeed even to test it). Our current approach with our /comfy/ little RPi dev boxes makes things unusually convenient for us, but keep your eyes open to reality anon. Things won't always go this easily on us when building our robowaifus. :^)
While we'll consistently be using the C++ IDE Juci++, it will also be a really good idea for you to learn at least the basics of using Vim. This will allow you to edit files remotely on a small embedded machine via SSH. Some of those remote files could be software code files ofc: > #2 #3
In fact, theoretically you don't even have to use an editor to write and compile software.
echo "int main () {}" > foo.cpp
g++ foo.cpp
> #4 Alright, let's get going with our C++ class, /robowaifu/.
Hey just to let you know there's at least another student keeping tabs on this, since you've been working hard documenting these lessons. I could go either approach you suggested. My experience with C++ is high school and college and hobbyist (with Arduino). Though I have done some software development, I actually haven't used C++ in a professional setting.
>>5367 Oh hey there, welcome aboard. Sounds like you already have some development experience. You might like to skim through the Stroustrup book Tour++ then to get up to date with modern C++. My impression is that most C++ training in an academic setting is rather dated. This class will likely be rather slow-paced (off and on). Over time we will hopefully build up a decent tutorial guide for using C++ primarily in the context of embedded programming for robowaifu development. Please ask me questions (that goes for everyone ofc). Also, if there's a particular area of C++ you'd like me to address please let me know that as well.
Alright let's get started with the most basic 'Hello World' code to begin with. Select [RPi Menu] > Programming > juCi++ to open your editor. > #1 Choose File > New Project > C++ . I suggest first creating a special work folder to keep all your project work for this thread in. Mine's simply called 'projects' . > #2 Give the C++ project itself a name. Mine's simply 'cpp_class' . > #3 juci will automatically create a project folder and set of starter files for you. We'll discuss each of these, but for now we're just interested in the one named main.cpp. This is the minimal 'Hello World' program in C++ . > #4 Choose Project > Compile and Run (ctrl+enter) . The IDE will build and run your program for you. Any console results will appear in the lower built-in terminal emulator. > #5 That's it. You've gotten C++ running on your embedded computer. Next we'll break down functions.
-Select all the code on line #1 and delete it. -Select all the code on line #4 and delete it. -Run the program (ctrl+enter) You'll notice there was no 'Hello World!' output in the terminal, but also that there weren't any error messages either. So, the minimal valid C++ program is simply int main() { } > #1 This silly example is a do-nothing function that takes no arguments, and performs no actions. Now let's have a look at this (actually important) example function main() a little closer. -From PPP2 §2.2 The classic first program >Every C++ program must have a function called main to tell it where to start executing. A function is basically a named sequence of instructions for the computer to execute in the order in which they are written. A function has four parts: >• A return type, here int (meaning “integer”), which specifies what kind of result, if any, the function will return to whoever asked for it to be executed. The word int is a reserved word in C++ (a keyword), so int cannot be used as the name of anything else (see §A.3.1). >• A name, here main. >• A parameter list enclosed in parentheses (see §8.2 and §8.6), here (); in this case, the parameter list is empty. >• A function body enclosed in a set of “curly braces,” { }, which lists the actions (called statements) that the function is to perform. > #2 -From Tour++ §1.2.1 Hello World! >The minimal C++ program is > int main() { } // the minimal C++ program >This defines a function called main, which takes no arguments and does nothing. >Curly braces, { }, express grouping in C++. Here, they indicate the start and end of the function body. The double slash, //, begins a comment that extends to the end of the line. A comment is for the human reader; the compiler ignores comments. >Every C++ program must have exactly one global function named main(). The program starts by executing that function. The int integer value returned by main(), if any, is the program’s return value to "the system." 
If no value is returned, the system will receive a value indicating successful completion. > #3 So actually, this is both the most important function in one sense, and the 4-part template for every C++ function. Maybe not that silly (for teaching at least) :^)
>>5422 -Choose Edit > Undo 3 times to get the original hello world code back.
-Between the #include <iostream> and int main() { add this code:
void baz() { std::cout << "baz()\n"; }
void bar() { std::cout << "bar()\n"; }
void foo() { std::cout << "foo()\n"; }
-Below the std::cout << "Hello World!\n"; inside main() add this code:
foo();
bar();
baz();
-Now run the program (ctrl+enter) > #1
Recall from PPP2 §2.2
>A function is basically a named sequence of instructions for the computer to execute in the order in which they are written.
The point of this simple exercise is to show you that each of the 3 new functions was executed from main(), in the order they were called inside main(). Try scrambling around the ordering of the 4 statements inside main()'s function body to confirm for yourself they execute in whatever order you write them. > #2
>>5423 Now let's rearrange things just a bit to demonstrate chaining function calls together. -Cut the bar(); code from main(), and paste it in the bottom of the foo() function body. -Cut the baz(); code from main(), and paste it in the bottom of the bar() function body. -Add a new statement at the bottom of the main() function body. std::cout << "Back from 3 functions, now exiting.\n"; Run the program. > #1 The takeaway here is that the single statement foo(); inside main() triggered a cascade of calls to the other three functions, and main() got control back afterwards then exited. Notice, again, these all execute[d] in the order in which they are written. -first main() called foo() -next foo() called bar() -next bar() called baz() -finally main() regained control after the chained calls finished. The program then exited when main() completed. >=== -minor prose edit -s/two/three
Edited last time by Chobitsu on 10/02/2020 (Fri) 00:20:43.
Functions form the heart of software development. Unlike the silly foo() bar() baz() canonical examples, functions are actually named meaningfully in real code. For example: bool waifu_stand() and bool waifu_sit() are pretty recognizable at a glance, simply by their function name. Don't worry if it's a little confusing for now, it will make a lot more sense to you as we go along Anon. We'll be talking about functions very often. Well, I think that's enough for now, see you next time. Cheers.
Open file (27.18 KB 875x177 Selection_040.png)
Open file (100.90 KB 768x375 Selection_041.png)
When you write computer software, you use variables to hold information inside the computer. Variables have both a name and a type. For example, the statement: const int over_9000{9001}; tells C++ to set up an object inside the computer's memory named 'over_9000' . It's interpreted as the integer variable type by C++, and pre-loaded with the constant (unchanging) value '9001'. To start with, there are effectively just four fundamental (that is, built directly into the language) types you need in C++ : • int - a whole number such as 0, -1, or 9001 • bool - a Boolean which can only be true or false • char - a single character such as R or w • float - a number with a decimal part such as 0.0, -1.1, or 9001.1009 There's also a 'fake type' • void - a type placeholder meaning simply nothing >Tour++ §1.4 Types, Variables, and Arithmetic > Every name and every expression has a type that determines the operations that may be performed on it. For example, the declaration > int inch; > specifies that inch is of type int; that is, inch is an integer variable. > #1 In general, these are the basic types you need to memorize (and master) first. Once you fully understand these types and how to use them effectively, then you should be able to learn the more sophisticated types available in C++ with relative ease. Focus on the basics first, then everything else should fall into place for you afterwards. Here's a little more technical information about types and objects. Just try to make sense of things, but don't worry about it if it doesn't entirely make sense to you for now. You'll eventually get it all organically as we go along. >PPP2 §3.8 Types and objects > #2 >=== minor accent change swap 'interpreted' clause replace hyphens w/ bullets in list minor (but numerous) prose edits
Edited last time by Chobitsu on 10/03/2020 (Sat) 05:17:15.
daily reminder you must type code in yourself to learn it tbh.
Edited last time by Chobitsu on 10/03/2020 (Sat) 04:43:07.
Open file (2.98 KB 300x100 Untitled.png)
>>5438 >Daily reminder literally everything inside our computer's systems is simply 1s and 0s (plus timing signals). This simple truth presented a big challenge to the forerunners of Computer Science; namely, how do we represent an arbitrary human-readable 'number' in an entirely general way, using only synchronized high voltages and low voltages? The basic number type Integers are probably the most important computer object type out there. C and C++ both have quite sophisticated notions of the integer type built right in. Based DMR is based, as are all the giants on whose shoulders he stood. And, since they are simple numbers, ints should also support the basic arithmetic operations such as add & subtract, multiply & divide. How does this work, practically speaking?
>>5347 I hope this isn't too old, I just dusted off an Rpi 3B+ which I haven't touched in almost 2 years, was too lazy to update everything so just did the dependencies. I figured at some point I will want to reinstall a fresh image anyway but in the meantime I'm following along fine. pi@raspberrypi:~ $ g++ --version g++ (Raspbian 6.3.0-18+rpi1+deb9u1) 6.3.0 20170516 Copyright (C) 2016 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. pi@raspberrypi:~ $ clang++ --version clang version 6.0.1-10+rpi1~bpo9~rpt1 (tags/RELEASE_601/final) Target: armv6-unknown-linux-gnueabihf Thread model: posix InstalledDir: /usr/bin
>>5463 Hey there, welcome. No that should be OK for much of what we're doing. I can probably devise reasonable alternatives for C++14 (your compilers) for things. If something's not working for you please let me know ITT and I'll try to address it for you.
>mlpack advanced C++ AI library xpost >>5730
An anon mentioned we should start learning testing early (>>5741) so that we're doing things correctly from the beginning. This is good advice and we're going to go ahead and set up a simple project to begin our testing experiments. If you haven't already done so, follow the instructions in this post >>6517 to get Catch2 set up on your machine. Open Juci and create a new project waifu_say > #1 We're going to play around with a simplistic mapping of sayings our waifu can say to us, and also begin doing basic testing on it.
-As always, delete the default .clang-format file that Juci automatically creates in our project directories. This will ensure that the clang-format system picks up and uses our own, better, .clang-format file we have already created in our home directories.
-Copy the two amalgamated files from the Catch2 local repo into our project directory:
cd ~/_prj/waifu_say/
cp ~/_repo/Catch2/extras/*amalgamated.* .
> #2 We'll be using these two files to perform testing on our software. Let's create a new test.cpp file and #include that new header file first thing.
#include "catch_amalgamated.hpp"
> #3 Open the meson.build file and make these two changes:
-Change the C++ standard flag to -std=c++2a
-Add a new executable for the project to build named test_say and add both the test.cpp & catch .cpp files as its sources
executable('test_say', ['test.cpp', 'catch_amalgamated.cpp'])
> #4 Go ahead and build now (ctrl+enter). If everything went well, you should see that Juci compiled two executables for the project this time, waifu_say & test_say. Note: the very first time compiling with this new header takes a good while b/c raisins. It will go along faster afterwards. > #5
>>6519 The author of Catch2 recorded a talk a couple months ago about using Test-Driven Design for C++ https://www.youtube.com/watch?v=N2gTxeIHMP0 I'd recommend you set aside the hour or so to watch this talk. We'll be loosely adopting his approach to drive the design of our simple waifu_say class. Open the test.cpp file and add a new blank test case: TEST_CASE("Just like make robowaifu") {} The test case's description is just a free-form string. It's generally important to name these test cases meaningfully, but we'll just go ahead with this memetic version for now. :^) Build it. If all went well, you should see some results from Catch2 in the Juci terminal emulator: =============================================================================== test cases: 1 | 1 passed assertions: - none - ~/_prj/waifu_say/build/test_say returned: 0 Since Catch2 provides nice coloration, we'll generally be running our tests in the terminal shell instead: build/test_say > #1 Now let's add a C++ std::map with a few strings that represent things our waifu might say to us into test.cpp: #include <cstdint> #include <map> #include <string> const std::map<uint32_t, std::string> sayings{ {1, "I love you Oniichan!"}, {2, "Did you have a hard day Oniichan?"}, {3, "Oniichan, do your best!"}, {4, "Poor Oniichan, please get some rest."}, }; > #2 Let's create our first failing test by adding a (non-existent) Robowaifu class variable into our test: Robowaifu waifu; Build it and see that we have our first failing test. > #3 As Phil Nash mentioned, our compiler is actually part of our testing system. Let's get this test to pass by adding an empty Robowaifu class & building again: > #4 >=== -add slides link from Phil Nash talk: https://levelofindirection.com/refs/tdcpp.html
Edited last time by Chobitsu on 11/08/2020 (Sun) 02:08:01.
>>6520 OK, now let's write our first test assertion ( a REQUIRE() in Catch2 ). We'd like our waifu simply to say things to us, so we'll just type that out, using one of the key numbers from std::map sayings: REQUIRE(waifu.say(1)); Building that gives us our next failing test: > #1 Ofc, the Robowaifu class has no function named say() yet. So let's add that public member function to the class and try again: public: std::string say(uint32_t say_num) const {} Building that gives us a lengthy error-spew. Scrolling up, we find the pertinent bit as the first error in the output: In file included from ../test.cpp:5: ../test.cpp: In function ‘void ____C_A_T_C_H____T_E_S_T____0()’: ../catch_amalgamated.hpp:5229:54: error: no match for ‘operator!’ (operand type is ‘std::__cxx11::string’ {aka ‘std::__cxx11::basic_string<char>’}) } while( (void)0, (false) && static_cast<bool>( !!(__VA_ARGS__) ) ) ^~~~~~~~~~~~~~ >protip: Always search for the first error in a spew. This is usually the culprit. > #2 The problem is error: no match for ‘operator!' . Wat does this mean? :^) REQUIRE() needs an expression it can treat as true-or-false, and a std::string by itself isn't one. Basically, we have to provide the string for it to compare the result of waifu.say(1) against. OK, let's fix that: REQUIRE(waifu.say(1) == "I love you Oniichan!"); > #3 Welp trying again and it looks like we're still red (failing). ------------------------------------------------------------------------------- Just like make robowaifu ------------------------------------------------------------------------------- ../test.cpp:19 ............................................................................... ../test.cpp:22: FAILED: REQUIRE( waifu.say(1) == "I love you Oniichan!" ) with expansion: "" == "I love you Oniichan!" =============================================================================== test cases: 1 | 1 failed assertions: 1 | 1 failed What now? 
Looking at the 'with expansion' section gives us the clue: with expansion: "" == "I love you Oniichan!" > #4 The first part of the equation is just an empty string "". Looking at the function body of Robowaifu::say(), we see that we aren't actually returning anything yet. Let's add just enough to make this latest test pass: std::string say(uint32_t say_num) const { return "I love you Oniichan!"; } Build and run it now: build/test_say -s (note the ' -s ' flag. it will show us successful tests too) > #5 OK, we're back to green again.
>>6522 Alright, let's try a second test assertion now to get to the next failing test: REQUIRE(waifu.say(2) == "Did you have a hard day Oniichan?"); > #1 We can see we have one pass and one fail from our two assertions. Ofc, Robowaifu::say() simply returns the correct response for the first item in sayings. Let's use the same (limited) approach to 'fix' the second assertion's failure: std::string say(uint32_t say_num) const { return "Did you have a hard day Oniichan?"; } Re-running the build shows all we've done by this is simply swap the errors for one another (note: I reversed the REQUIRE()s inside the test case b/c of an early abort issue with the current devel version). This helps us with understanding two things. A) We need a better approach to our 'solution'. B) By leaving previous tests in place, they act as regression-test checks to let us quickly see if any new code breaks something that used to work. In this case it does ofc (the previous say() output). So, how to make this work correctly? Juci itself can help us out here. Scrolling up a bit in our code and hovering our mouse over the yellow-squiggles underneath the 'say_num' parameter, we see that the IDE is providing us with the info that say_num is an unused parameter. This is a clue for us: we aren't even referencing the std::map containing our sayings collection, from inside our Robowaifu class yet! > #2 A C++ std::map provides us with generic containers of pair items. In this specific case, we have a number of strings representing sayings, and they are indexed by the unsigned integer key. const std::map<uint32_t, std::string> sayings{ {1, "I love you Oniichan!"}, {2, "Did you have a hard day Oniichan?"}, {3, "Oniichan, do your best!"}, {4, "Poor Oniichan, please get some rest."}, }; With a map, you can find() the key, and retrieve the value, pretty much the way a database works. 
So, next we'll refactor our Robowaifu::say() member function to use the say_num parameter and retrieve the correct text string with it. https://en.cppreference.com/w/cpp/container/map/find
>>6524 In Modern C++ (17 or greater), an if() logical test can have two sections; the initializer, and the conditional. This is similar to the way a for() loop works. If you read the cppreference.com link to map::find() posted above, then you saw that an iterator is the return type from the find() function. Iterators are kind of like C-pointers for C++, and you use similar syntax for dereferencing them for example. Further, a std::map item has two named parts, 'first' and 'second'. First is the key, and second is the value. Combining all these facts together, and we can write a search test to see if the say_num parameter was actually found within our map: std::string say(uint32_t say_num) const { if (auto search = sayings.find(say_num); search != sayings.end()) return search->second; } This is saying to the system, "Look for this key number. If you find it in the std::map, return the saying text that goes with it". This is the correct behavior we're looking for ofc. Re-running our system now shows us that, sure enough, we're back to green. > #1 Great! We're done right? Well wait, what's this yellow squiggle and warning message? > #2 Apparently our function has a problem still. Turns out we're only handling behavior for the valid case. What happens if we pass an invalid number into waifu.say() ? For example, since we currently have only 4 sayings in our map (numbered 1 - 4), what happens if we 'inadvertently' pass in a number not in that range? REQUIRE(waifu.say(9'001) == "Foobar9000"); > #3 The failing test shows us what our new say() code does with an invalid input key number: with expansion: "" == "Foobar9000" Similar to earlier in our code, the function just returns an empty string. Instead of simply returning nothing from our function with an invalid input, what if we improved it a little by at least indicating that our waifu doesn't understand the saying number we asked for from her. 
std::string say(uint32_t say_num) const { if (auto search = sayings.find(say_num); search != sayings.end()) return search->second; return "I don't understand Oniichan..."; } Ah, that got rid of the squiggles too. Now our waifu is telling us if she doesn't understand what we just asked for. > #4 But we're still failing that final test. Sometimes the tests themselves are the problem. Let's change the invalid, bogus one to use the != operator instead and re-run: REQUIRE(waifu.say(9'001) != "Foobar9000"); > #5 So, seems like we're back to all green and we have a system that will properly look up a valid key and return the correct text to us, and provide a helpful message when we pass in an invalid one (and not break the system). Seems like we're making good progress with our Robowaifu class so far and our tests.
OK class, I think that's plenty to take in for this time. Hopefully you picked up on some of the basics of unit testing, and driving our C++ code designs with tests. It can guide us into the correct behavior in our systems without us having to figure everything out perfectly in advance. Next time, we'll expand on the waifu_say project to start associating an anon's inputs with the desired waifu outputs, and then refactor our class out into its own header and begin using it independently of the test harness. Till then, cheers!
>>6528 Since it may be a bit before I come back with more, I figured I'll go ahead and temporarily post the 3 files' current-state contents, just in case any anons following along are getting stuck. >main.cpp #include <iostream> int main() { std::cout << "Hello World! \n"; } >test.cpp #include <cstdint> #include <map> #include <string> #include "catch_amalgamated.hpp" const std::map<uint32_t, std::string> sayings{ {1, "I love you Oniichan!"}, {2, "Did you have a hard day Oniichan?"}, {3, "Oniichan, do your best!"}, {4, "Poor Oniichan, please get some rest."}, }; class Robowaifu { public: std::string say(uint32_t say_num) const { if (auto search = sayings.find(say_num); search != sayings.end()) return search->second; return "I don't understand Oniichan..."; } }; TEST_CASE("Just like make robowaifu") { Robowaifu waifu; REQUIRE(waifu.say(2) == "Did you have a hard day Oniichan?"); REQUIRE(waifu.say(1) == "I love you Oniichan!"); REQUIRE(waifu.say(9'001) != "Foobar9000"); } >meson.build project('waifu_say', 'cpp') add_project_arguments('-std=c++2a', '-Wall', '-Wextra', language: 'cpp') executable('waifu_say', 'main.cpp') executable('test_say', ['test.cpp', 'catch_amalgamated.cpp']) >=== -fixed operator!=()
Edited last time by Chobitsu on 11/08/2020 (Sun) 23:13:38.
Just a random anon here, sharing a tidbit of the inner workings of computers. For those who skipped high school: for our purposes, any number to the power of 0 is 1. Mathematicians argue over that, but it doesn't matter to us right now. An easy way to understand the bit representation of unsigned integers is to know this compact definition, which is rarely peddled around: Bits are numbered starting from 0, and every bit's value is 2 to the power of its number. This also helps with a fun little detail: to have the machine give you the value of a specific bit, you merely have to shift the number 1 left by the bit's number. >1 << 0; Returns 1. >1 << 8; Returns 256.
>>6531 Thanks for the tip Anon. There's a guy Ben Eater who created a simple 8-bit computer from scratch using breadboards and commonly-available ICs. In a couple of the videos about the ALU & A/B registers he deals with one's- and two's-complement number representations. Playlist is linked here >>1554
>>6531 >>6532 I might also add there's a great little book that addresses all sorts of interesting things going on inside computers. But How Do It Know? >>4660 Hopefully, we'll integrate some of its contents ITT in addition to the two primary books by Stroustrup already being used.
So the silly test REQUIRE(waifu.say(9'001) != "Foobar9000"); was really just an artifact of the way we grew the code, and isn't really relevant for a proper test. We've already provided for a reasonable result from our class' member function on an invalid input "I don't understand Oniichan...", so time for a minor refactoring of the test itself. REQUIRE(waifu.say(9'001) == "I don't understand Oniichan..."); > #1 Recall from Phil Nash's Test Driven C++ talk linked above that there are 3 steps in the primary TDD loop: -Red -Green -Refactor > #2 Even tests may need refactoring themselves, as we saw above. But we're not done just yet. We have two passing tests about valid inputs (#2 & #1), but we don't really want to try testing every.single.input. Not only would that be a fool's errand, but it doesn't really help us to drive the design, and that's what we're really after here. A better approach is to provide a valid input, check for the positive result, and then confirm its inverse, the 'not a negative' result. To put it another way, let's also make sure we don't get an "I don't understand Oniichan..." when we give a valid input. This may seem silly at first, but once code grows big and things begin to change, this approach (testing both the positive and the negative) can help us smoke out regression bugs we might introduce. Let's remove one of the redundant positives, and replace it with the 'not a negative' test instead. REQUIRE(waifu.say(1) == "I love you Oniichan!"); REQUIRE(waifu.say(1) != "I don't understand Oniichan..."); OK, that passes and we now can be fairly sure that if we give a valid input, we'll get a valid output. > #3 #4
Open file (65.75 KB 1113x752 juCi++_181.png)
Open file (65.07 KB 1113x752 juCi++_180.png)
Open file (68.47 KB 1113x752 juCi++_182.png)
Open file (67.29 KB 1113x752 juCi++_183.png)
Open file (99.95 KB 1112x760 juCi++_184.png)
>>6922 OK, now that we've wrung our simple Robowaifu class with its sole member function out a bit, let's lift both it and the sayings std::map out into their own files, and we can begin using them outside our test system. Create two new files in our project 'Robowaifu.hpp' & 'sayings.hpp', (File > New File). Cut & paste the Robowaifu class into its own file, and the sayings std::map into its own file. You'll also need to add the appropriate C++ library includes for each file as well. Eg: #include <cstdint> #include <map> #include <string> > #1 #2 Since we've moved the sayings std::map off into its own file, we'll need to include it into the Robowaifu.hpp file so the class itself can find the sayings data. #include "sayings.hpp" > #3 The test file now just contains the test (as we'd like), but we need to add an #include for the new Robowaifu.hpp file so the test can find the new location of the class itself. #include "Robowaifu.hpp" > #4 Now let's switch over to main.cpp, the program itself, and after including Robowaifu.hpp once again and instantiating a Robowaifu object, we'll try testing the valid sayings and a couple of invalid ones. Everything seems to be working so far: > #5 Alright, we've lifted our class out of the test itself and begun using it 'in production'. Next we'll start working on anon's inputs, and you'll start seeing why we are using code numbers to index into the possible waifu sayings to us. Cheers.
Open file (55.46 KB 444x614 C.jpg)
Because I am a programming brainlet, I'm trying my hand at some C. The first exercise is a bugger, and my brain was like "Duurrr Fahrenheit to Celsius then Celsius to Fahrenheit then Fahrenus to Celsyheit and Fahrycels to Heityfahr?" However, once they introduce the 'for' statement I began to realise that this stuff has a lot of potential. Sure, it took me four hours to figure out how to code two temperature conversion charts, but once they are done, just changing a couple of numbers lets you convert between any temperature in those units! Also, the whole calc can be reversed by only changing three characters. At first I was like -_- But then I was like O_0
>>10301 Congratulations, Anon. You've opened up a whole new world. You can create entire universes out of your dreams now if you just keep moving forward. Programming with serious intent at the level of C is easily one of the most creative things you can do as a human being IMO. And C++ is unironically even better! This isn't a C class per se but it's pretty likely I can help you out with things if you get stuck. Just give me a shout.
>>10302 Will do OP, thanks! I'd heard that learning C is best before starting on C++, so I'll just keep on plodding. After all, if your waifu speaks Thai or Russian, then it's best to learn those languages. And if your waifu speaks C/C++...
>>10361 >FANG Well, as you might imagine if you've lurked for ~5mins, then you'll know that the Globohomo Big Tech/Gov are pretty reviled on IBs in general, and /robowaifu/ in particular. So, you leaving their employ would be good for your soul, whether or not you begin your own robowaifu company. We here are all about releasing our designs and source under permissive BSD3/MIT licenses, so you can feel free to use any of our stuff here as you'd like. It goes without saying we here would enjoy you doing the same. Doxxcord isn't a particularly welcome platform, again on IBs in general and /robowaifu/ in particular. Perhaps you can 'sleuth' for us by simply making your announcements here in your thread. BTW, we're a SFW board, focused primarily on design & engineering. If you have any other questions feel free to fire away.
>>10361 >Anyways, heres Taiga, I voice cloned Cassandra Lee Morris, the english VA for the character. Is there a link or file missing, OP? Or am I missing something altogether?
>>10364 > I know nothing about the culture of IB thank you for the insight. If you have JS enabled, you can find a pulsing, glowing link up in the headerbar of this page entitled The Webring. Click that and you can look around on IBs that have gathered together to form a small community. >Be aware: You may find links prefixed with '8chan' via other sites' webring links similar to ours. This is a suspected fed honeypot. And whether it is or not, at least two of the administration there are also reviled in general if not quite as bad as the globohomo menace, for very devious and corrupt behavior. I'd suggest our board, anon.cafe boards, and kind.moe as good places that are relatively safe for a newcomer to get their feet wet with IB culture in general. Once you feel comfortable finding your way around with this subset, then you can branch out on the webring from there. >I had thought that the IB would close the thread down over time I guess not. As long as no abuse goes on here as a direct result of your actions ITT, then I don't intend to remove it, no. Here on /robowaifu/ (btw we're somewhat unique in a few ways, this is one), we keep a fairly tight rein on thread creation, and as you'll see if you look around our catalog page, we have a fairly wide array of topics covered already. We like to keep posts on-topic as practical b/c it's easier to find design, engineering, and other information later. On that topic, we have a CLI tool created here for this board called 'Waifusearch'. It's a primitive search tool that can do phrase lookups, and provides convenient hyperlinks to thread posts. The links for building & using this tool can be found in the OP of the Library Index thread. >with your blessing board owner sir. <sir Kek, my name is 'Anon' here, same as everyone else. :^) Note: so-called 'name-fagging' is welcome here on /robowaifu/, unlike most IBs. Feel free to go by Em Elle E as you care to, Anon. >I will make the announcements here. Glad to hear it. 
I'd suggest you use our catalog page to re-locate it again in the future. Welcome OP.
>>10366 Thanks it sounds nice Anon.
>>10371 This isn't my work Anon, so don't quote me. I believe at least one Anon is (possibly others by now) working with a heavily-modded GPT-2 model (>>9124, >>9121, >>9437). We have at least two other (not counting yourself) researchers with AI here.
>>10366 You might look into this novel voice converter, Anon (>>10159). >>10373 >I get it all anons are called robot waifu technicians Heh, you found us out. I think you'll find several IBs during your explorations that have different 'default names'. A former BO introduced ours, and it just stuck ever since.
>>10375 Yes, he's right here from /robowaifu/ so I'd imagine so. Just make a post explaining your desire to one of his posts, and put a so-called 'crosslink' back here to your thread, Anon. (>>10361)
>>10377 I see, thanks for the explanation Anon. That clarifies a number of things for me. While we strongly encourage everyone to open up their work (to help rapidly spread robowaifus everywhere), we also have a number of Anons who intend to start businesses. And that's fine with me too, so I would be fine with helping you out personally. However, I have little experience with AI so I'm not likely to be of much help in that area. I know C++ & CUDA & OpenCV (which is C) programming. I can dabble in Python, but I seem to have a basic aversion to it for various reasons. All my passion is about high-performance, systems-type programming. Only you can answer whether that's useful to you. OTOH, I have managed to devise a pretty easily-extensible (though quite simple) OpenGL rendering system that regularly hits 60fps on my old toaster notebook w/ no dedicated GPU (>>1814). So, I think your target-hardware decision of the desktop is a good one, and offers several advantages as the initial platform.
>>10376 >explaining your desire, replying to one of his posts*
>>10382 Thanks! I know it sounds immodest, and that's not intentional, but that bit of work is literally the simplest OpenGL-specific C++ code I've ever seen, that goes all the way down to the shader programming and asset import/mesh construction level. I worked hard to simplify things within it, simply b/c that's literally the single best way to deal with complexity (>>9641). The very fact it's so simple is exactly why it runs so fast on such potato-tier hardware.
>>10388 >where do you plan to take your work ? what are the next steps? Well, I'm the OP of that thread and outlined a couple of ideas. I was chugging along with laying the groundwork of high-performance rendering and environmentals, got asset import working, but wanted to have our own, independynt skeleton system for our system (remember the goal for it is to be an actual simulator, not just an animation system. think 'physics-linked-to-AI-training system') and that led me down the bunny trail of having to learn linear algebra basics sufficient to devise my own FK/IK skeletal system. After a month or two, I picked up enough to probably go on with. But as there were other things calling for my time I shelved it with the intention of picking it back up when the time seemed right.
>>10407 Thanks. My guess is OpenAI Gym won't run at all on a toaster. I mean for mine to be able to do so, and do it well, in fact.
Open file (7.70 MB 640x360 goobas.webm)
>Support via Patreon I don't have any money.
>>10411 I think he's looking for help in other ways Anon. Maybe you can do something else to help instead?
>>10377 Welcome Em Elle E! Sounds like you have exactly the kind of skills that /robowaifu/ needs and are focused on solving a major problem of ours (combining all the different functions you mentioned into a single software package). I'm shit at programming mainly because I went into the healthcare profession (a.k.a hell-on-earth), but I've always admired people who can, so it's interesting to hear from someone who actually works for a big-tech company. I'm mainly a hobbyist/model maker guy occupied with hardware - designing parts, 3D printing then building, wiring and programming in movements. Relatively simple stuff but I have enjoyed making my slightly mad robowaifu who I call Sophie. She would really benefit from upgraded software. I'm kinda at the stage where I've constructed her upper body but there's not much going on inside her head (she and I are similar in that respect ;D).
>>10411 Hi, I am a team member working on the project mentioned in the OP. Just as an FYI, most of the product that we are creating will be free for all to download and use, and we are not putting that behind a paywall, so you will still be able to use it without having to spend any money. However, a few cosmetic items that are not related to the actual core functionality of the application will be monetized or be available as exclusive rewards to patrons as a way to show our gratitude for their support. Supporting us financially is entirely of your own volition, since right now it's more of a means to help us obtain the resources to continue development on this project and make it better, rather than anything else. People can however still support us in other ways if they want, such as spreading the word about this or lending their skills to help with the development process itself (only if they want though).
>>10416 >However, a few cosmetic items that are not related to the actual core functionality of the application will be monetized or be available as exclusive rewards to patrons as a way to show our gratitude for their support. Not him, but I consider that a pretty smart business model.
>>10414 >Relatively simple stuff Don't sell yourself short Anon. We're all part of a team here on /robowaifu/ and I dare say the Mechanical Engineering you are teaching yourself to master in the trenches will prove rather difficult for the mathematicians/AI anons to master. We all have our parts to play here. As you mentioned >combining all the different functions you mentioned into a single [] package is in fact exactly what the discipline of Systems Engineering is all about (>>9398). For us, our 'single package' is something that actually encompasses both hardware & software. Without hardware, the software will never be part of something real. Without software, the hardware is simply an interesting doorstop. I'm sure you're becoming well acquainted with this duality in your current journey with Elfdroid Sophie.
Open file (1.27 MB 1111x1169 chrome_chan.png)
>>10424 Thank you Em Elle E. I'm by no means the only guy to be building a physical robowaifu. The demand for synthetic companionship is high (it's just that many people are afraid to publicly admit it because of others being judgemental/shaming). But as soon as a fun robowaifu is released onto the market, then she herself can tell those mean-spirited meatbags to suck her live wires! For sure there is a major financial opportunity for the first group of engineers who manage to code even a somewhat decent interaction platform for D.I.Y robots (not necessarily just robowaifus). I honestly don't know why Google, Apple and Amazon haven't caught onto this with their A.I. assistants? I mean, they've already done a lot of the hard work programming their A.I., synthetic voices and speech recognition - all they'd need is a wider variety of voices and some cute animated characters...or better yet software that lets you design your own. Regardless, whoever manages it will become rich with a capital R!
>>10432 Fascinating! How a system can be so singularly focused upon efficiency and profit but nevertheless still be profoundly short-sighted and broken :D But I digress. Hopefully you can make a robowaifu A.I. who makes both you and others happy, Em Elle E. I probably wouldn't quit your job over it though. Just because all the signs and symptoms are pointing towards a major collapse and restructuring/assimilation of Western civ in the near-future, so it's best to have plenty of emergency funds to fall back on. Robots and A.I. can make ideal companions during societal upheaval though - when you can't trust anyone else. So long as they're not too high-maintenance and power-hungry!
If you want to know where I get a lot of my criticism for modern corporate capitalism, then this channel by 'EcoGecko' on YouTube is the place to visit. He has very well researched videos (showing all the white papers and quotes), and reveals then picks apart the errors in our current way of life, piece by piece. https://m.youtube.com/results?sp=mAEA&search_query=eco+gecko Of course, traditional Communism was much worse (it relied wholly on people to work together and be honest, and we all know how that worked out time and again). >Begin rant about machine collectives. However, machine-based Communism (a machine collective) I think could actually work. Because machines CAN be relied upon to work together without trying to screw each other over, lie and murder. If there's a problem then it should be fairly easy to trace it to the one sure-fire weak-link in the system; the human (usually a programmer, or a technician like me making a mistake or not doing their job properly). Most problems in corporate capitalism arise from greed and selfishness - people wanting too much, too quickly and at the expense of others. Machine Communism eliminates this. In a machine Commune, as long as I work hard 6 days a week on both my job and advancing the machine collective (I see the two as connected and united), then I can make modest, steady progress (despite various setbacks). All the machines that I maintain remain functional (most of the time - sometimes have to get new parts), my robowaifu improves occasionally and I'm content. Can even save up enough money for one holiday a year. But crucially DEBT, conspicuous consumption and large amounts of waste are not permitted. Anything that plays too heavily into the hands of the greedy capitalist Corpos is to be avoided. Debt in particular must be cleared at the earliest opportunity. It enslaves you and compromises finances, reduces machine effectiveness (so "unhappy" robowaifus) and reduces human happiness. 
As well as leading to corporate enslavement, unnecessary consumption will likely damage the environment more. > End rant about machine collectives.
>>10439 Sorry if I hijacked your thread to talk about my chosen solution to the world's problems Em Elle E. But if you and your team do manage to eventually code a robowaifu development and interaction suite then I will certainly buy a copy and integrate it into my "machine collective". Also willing to support you on Patreon for sure. If your A.I needs a lot of power and compute, then it will just be up to me to work hard and save hard until I can afford the necessary upgraded hardware (then be patient and buy smart). Godspeed to you and your team!
This is a really interesting development and I am keeping an eye on this. Sure, it's a guy working in BigTech, but a closet weeb who knows where the efforts should be going. Until you are literally cancelled, I suggest you keep that job in BigTech to sustain you until your projects are doing well in primetime. There's no harm in trying to get a few dollars here and there in your efforts as well -- Kibo-chan the moving doll's maker (basically the cutest robot on JP Twitter) sells his STL files and has a Patreon as well. You might want to check him out to see realistic expectations for how much a waifu project would actually bring in -- and that's actually one of the most popular ones on the internet. Speaking of which, I noticed there are many software people now compared to a few years ago. This is great, this is where the applications for the waifus would advance. I'm a hardware person, though unfortunately I can't advance in my projects since I'm living with my mother again (senior citizens prohibited from venturing outside during pandemic and lockdowns are decided upon every two weeks depending on escalations). I'm working on robotic RC cars in the meantime to advance my hardware knowledge while consolidating notes on good mechanical practices for my long-term humanoid projects. Desktop apps are a good entry step. For example, I like Mihoyo's N0va (Yoyo) Lumi; she's very well rendered, but she's too mired in Chinese culture and I don't appreciate everything that's supposed to make her cute. To get maximum waifu effect, she can run on a big screen TV in portrait mode, the life-size appearance will enhance her presence. Alternately, you can make an Android app, then you can take your tablet and stick it into a DIY shoebox holographic projector. From a hardware perspective, this will also enable you to just stick a phone into a physical robot's head and only have to worry about the subsystems that will link to the motor controllers and sensors. 
Anyways, thanks for posting.
(Em Elle E's team member here again.) >>10445 >Alternately, you can make an Android app, then you can take your tablet and stick it into a DIY shoebox holographic projector. Yes, our team has plans to move this project to mobile devices, VR as well as AR when we end up getting enough funding to obtain resources for further development. It is definitely a thing that we have in mind. We, however, do not have members with enough skill in the hardware department, so as of now, we do not have plans for what-people-here-like-to-call a "real" robowaifu. It might be something possible in the future for us, but that will largely depend on the support that we get on this in the first place. ---- Anyways, it's unfair if we do not show something of what we have been working on *alongside* the AI and voice part. So here's a small video showing the character model we have doing a little dance (her appearance is fully customizable and we have a full framework for adding custom clothes and accessories, as well as animations. We are still working on the emotions and face expressions, so please ignore the blank look on her face for now.) [P.S. Didn't risk posting the bikini version since I read in this thread that this is a SFW board.]
>>10447 LOL she's looking good! Takes a whole heap of work to model, texture, rig and animate a 3D model like that. Awesome work! (Even more impressive that you guys are working on AI alongside this). BTW do you have a Patreon page setup or did I just miss the link somewhere?
>>10453 > digital assistant kiosk Like Gatebox Grande? You can take it a step further and make like Joi in Bladerunner with her ability to leap from one TV screen to another.
>>10453 >you will be able to attach a webcam to your tv monitor and display this vertically for now. No offense, but I would definitely prefer not to connect a webcam to non-open-source, persistent software. After all, the usage scenario isn't like Zoom or something (on, then right back off). Will your system run without a camera/mic Anon?
(Em Elle E's team member here, I think I will now go here by my usual moniker, "Imouto".) >>10462 >Will your system run without a camera/mic Anon? Yes definitely, our plan is to make the system as modular as possible so you can run it without camera/mic as well, although it will limit some functionalities. Let me explain this a bit. We are using mic input for voice chat and commands with the waifu, so you can speak with her AI like a real person and get an immersive experience. We will however also have the option to let you just type in your messages to her and she will respond with her voice as usual. Webcam is to let her "see" you. Face recognition will let her greet you whenever she sees you entering the room. Also it will let us track emotions of the user, so say you are looking sad, she will say something to cheer you up. We will also have eye tracking for other stuff too (cough cough). However all these are not "core" functionalities of the app per se, so you can simply turn off webcam permissions for the app, and still use it normally. It's just that these functions will be missing in that case since they are dependent on the camera to function. We value user privacy very highly, and thus just so people know, the entire core system will be fully functional offline, including the AI and speech recognition and voice synthesis. We will not be collecting any of your usage data unless you [i]manually[/i] turn on chat logging and [i]manually[/i] send those log files to us (so we can improve the AI). And it goes without saying that some functions such as google search from voice commands given to the waifu AI or integrating the waifu to control your Spotify playback, will require you to be online, similar to Cortana's assistant functions. But yeah, the core framework will be fully offline as already explained. Hope that clears things up. :)
>>10468 >and thus just so people know, the entire core system will be fully functional offline, including the AI and speech recognition and voice synthesis. That is well. IMO that opens up a much broader range of usage scenarios if it proves to be the actual case Anon. If you can manage to make time to 'roam the halls' of /robowaifu/ for a few days, you can see that the significant majority are very highly concerned about the surveillance state embedding itself into our waifus. It goes without saying, but NO THANKS. :^) >Imouto Good idea I think. We had a general discussion on so-called namefagging on /robowaifu/ (generally quite unwelcome on most anonymous imageboards), and it was basically agreed that the benefits significantly outweighed the downsides in our specific case as a design and engineering board. Imagine working at a big company where every single day was basically like the scenario for the girl in 50 first dates. >posting tags Click the little '? Help' link in the page's header bar and you'll be given the proper tag codes for Lynxchan boards. >Hope that clears things up. :) It does, and thanks again for taking the time to do so Anon. I wish you Godspeed in your project work. BTW, do you have a name decided on yet, Imouto?
>>10469 >you can see that the significant majority are very highly concerned about the surveillance state embedding itself into our waifus We share your concerns and feel exactly the same way. In my opinion, it's immoral and unethical, and it's basically like someone spying into your bedroom while you are enjoying your private time, and that is truly obnoxious. Hence our decision to create a fully offline AI and bot framework. >if it proves to be the actual case Anon. Of course, we do not wish that you believe us based on mere words and promises. We will prove ourselves when we deliver the product. :) >Click the little '? Help' link in the page's header bar and you'll be given the proper tag codes for Lynxchan boards. Thanks. :) I was new here so I had no idea how formatting worked here. >BTW, do you have a name decided on yet, Imouto? The project itself is called "Waifu Engine". I think Em Elle E probably got the name inspiration from Wallpaper Engine.
>>10471 > it's immoral and unethical, Both quite true. However, it's going to be actually dangerous for any man who is a waifuist in the future. The Globohomo Big Tech/Gov isn't going to take too kindly to any man who refuses to toe the party line and dedicate most if not all of his resources to propping up useless (and downright demonic) slags. They will mean to put such men up against the wall at the earliest possible moment, I'm sure. However thankfully, God will laugh them all to derision in the end! :^) >Waifu Engine It sounds kind of like a framework, and one with a big target painted on its back (see above). I'd suggest you try adopting such an approach, and allow a community of open-source devs to develop around it. Various distillations and add-ons, etc. Kind of like the transformation the UE underwent.
>>10472 > It sounds kind of like a framework. I'd suggest you try adopting such an approach, and allow a community of open-source devs to develop around it. Various distillations and add-ons, etc. Kind of like the transformation the UE underwent. Yes, that is exactly the plan! We will make the platform as mod-friendly as possible, so people can write their own plugins in Python and other languages and create add-ons and stuff. And of course, they will have the full rights to what they create. I think extensive mod-ability is why games like Skyrim are still popular (honourable mention to Honey Select by Illusion as well), and we plan to do it the same way by giving a lot of customisation power to the community.
>>10473 >and we plan to do it the same way by giving a lot of customisation power to the community. Outstanding. >>10474 >Maybe I am missing something, but why would you say this? is there something I don't understand? It's a painful, long, drawn-out conversation. I'd suggest you look around on the board in general for the full discussions, but the tl;dr: >Angry feminists have the ear of lawmakers, and are already decrying anything that empowers men. >Sex-robots are high on the list of the many things they want outlawed immediately b/c misogynistic rape -- 'won't someone please just think of the children!' >Anything called Waifu Engine is immediately suspect in the minds of these deranged harpies. I hope that clears things up Anon. I'd suggest you invest in building Waifusearch and using it if you really want to dig around in these topics here.
>>10468 >We are using mic input for voice chat and commands with the waifu, so you can speak with her AI like a real person and get an immersive experience. We will however also have the option to let you just type in your messages to her and she will respond with her voice as usual. This alone will be an epic challenge. I was mostly using AIML to program my Sophie chatbot, along with a Python text-to-speech module and a Cepstral voice for my bot and Google speech recognition. I got to about 6500 categories and some more content scripted in Python before I stopped and began building her body. Anything more than that is pretty mindblowing to me. I'm still amazed by the writing ability of GPT-3 (but disappointed it's so closed off and only available to those with a literal supercomputer). I wouldn't worry too much about trying to learn your A.I. terabytes of information, as long as users can program her to learn stuff. Then it's up to them what she knows and doesn't know.
>>10496 These are good insights Anon. I'm a generalist and 'both sides of the house' so to speak (software & hardware) are of concern to me. Have any good ideas about how we might approach modularization of hardware components to help addresses the wheel-reinvention problem? As you suggest, being able to be quick on our feet rolling out new design implementations in the future will be good for literally everyone but the haters. > unless you are like EX robotics company funded by the Chinese government. < *applications to EX Robotics intensifies*
>>10503 Interchanging hardware is going to be difficult, since some want to use motors, others combinations of actuators.
>>10514 Added Patreon link to your OP, OP.
>>10496 Very nice lip-sync there, Em Elle E! That's some really clever stuff. Also... Holy shit Chinese government funds robowaifus!?! I am...conflicted but hopeful. Obviously they will be full of the worst kind of big-brother surveillance tech...but so would Western robots (in fact any robowaifu commissioned by a Western/NATO country is just as likely to be a trap to help them identify and imprison dissidents). However, if ExRobotics robowaifus make it to mass market the potential is huge! Even if I have to dismantle most of the robot, desolder some chips, replace her existing drives and install completely new software. They could help us out a lot!
>>10519 I don't think the Chinese gov does this with the aim of spying on people. They support technology, but China and some other countries also fear the male surplus they might have (because of more abortions of female embryos). Robowaifus would prevent rebellions even without spying, by keeping men happy who would otherwise be alone and angry.
Open file (83.70 KB 1024x640 ricky_ma_waifu.jpg)
>>10522 I actually befriended a Communist Chinese guy who was at Uni here. The thing about missing women is very real. Low-tier guys like him typically compete with around 30 other men for a single woman there. Every one of them calls her little sister or young female cousin (almost none are blood-related). Disenfranchised young men with basically zero prospects of obtaining a wife are a very real problem there (and a much larger one than in the West in this respect). We talked for hours about his situation, and he opened up to me quite a bit about what they deal with.
>>10523 There are criminal groups literally stealing young women from neighboring countries or buying them, then selling them to Chinese men or their families for marriage. Maybe feminists should sponsor our efforts here. :)
>>10524 Yep, bride kidnapping is a long and honored tradition around the world. And, as you suggest, it's been a booming business since the CCP's One Child law for native Han Chinese (but not for the Islamist ones, lol). BTW, it's generally not quite so 'criminal' in many of the cases, with the females basically volunteering for the process, hoping for a better life and some money out of it. It's corporate-controlled media that spins the entire thing as ebil Muhsoggyknees!!!111 Again, this has been going on for basically all human history, often quite ritualized in several cultures. But sadly, my friend was far too proud to consider marrying such a foreign woman of dubious quality. His chief goal when I knew him personally was to rise in the ranks of the CCP to some sort of mid-tier managerial post. Sad to my mind, but for him it was a key to success.
Open file (15.17 KB 1024x683 Baste_people_of_China.png)
>>10524 >Maybe feminists should sponsor our efforts here. :) What we really need to do is get the Taiwanese Republic of China nerds on board with creating robowaifus with us. This would be the catalyst that would change everything.
>>10523 That's very interesting anon. I didn't know the situation in China was that bad! Suddenly it makes a lot more sense that they are investing in robotics (and also genetic engineering/cloning). I hope your Chinese friend eventually gets a robowaifu too.
>>10528 >I hope your Chinese friend eventually gets a robowaifu too. Me too, but it seems unlikely. He's a pretty conservative Communist. Even once we finally achieve the near-Utopic dream of companionship functionality, I think he would consider it some kind of betrayal or other. Plus, no artificial wombs ATM so there's that too.
>>10529 >>10530 >>10531 Thanks Em Elle E. > https://cyberbotics.com/ Looks quite mature; I plan to have a look into it. I was already exploring ODE because of DrGuero's simulator (>>10343, ...)
>>10531 Thanks, I'll look into it at some point. My argument was about interchanging hardware parts, though. If one robot uses a combo of pneumatics and motors and the other only motors, it might be difficult, for example. No air tubes, no pneumatics.
>>10518 Sure nprb. Godspeed.
Open file (91.18 KB 789x1200 glorious PC gaming.jpg)
What is the minimum hardware requirement for your program? I hope it doesn't require a super computer to run it.
>>10557 Out of curiosity, does DirectX 11 offer something that is not achievable with Vulkan/OpenGL? Or is it because the engine you are using works better under DirectX? Because DirectX is available only for Windows OS and I'm running GNU/Linux. Well, there is at least a program that translates, I think, DirectX 10 calls to Vulkan calls, so it's not troublesome. >8-16GB RAM hmm it sounds a bit too much for a single program, though I cannot tell what kind of features you are going for that need so much RAM.
>>10569 I think it should be noted that inference might not need fast/expensive RAM, nor much computation per thread. That's why I want to build at least one Xeon based server, with one or two old CPUs and a chinesium board, as soon as I've decided where to live for the next years. I think anyone here, being serious about having some remotely human-like AI at home, will have to do that.
>>10566 >8-16GB RAM >hmm it sounds a bit too much for a single program though I cannot tell what kind of features you are going for that needs so much RAM. Well, the renderer itself takes about 500 MB RAM and, like Em Elle said, the speech synthesis module takes about another 500 MB RAM, and then you have the AI model that takes about 5-6 GB of RAM. Now since this is a fully offline architecture, everything is run locally on your system directly. We are not using any online server to process all the AI stuff (very much unlike other assistants such as Siri, Cortana or Google Assistant), since we value your privacy first and foremost. Now, it's also possible to run the AI without keeping the model loaded in RAM, but then you would have to wait about 20 s for each response. In order to make it run fast and respond to you in real time, we need to keep it loaded in memory, so that you get an immersive experience just like talking to a real person. So the app itself would take at most about 8 GB RAM to operate (or maybe slightly less than that), and that is the minimum requirement. 16 GB of RAM is what we recommend though, since you will want to keep her on your desktop wallpaper, and you would want to keep doing other regular stuff at the same time, which is why the extra RAM. Hope that makes sense. :) We do however plan to make the AI models smaller and more compact (so that they take less RAM) by distilling and retraining them once we have more resources to do so, because at present we have been working on this with a zero budget and a very small team of volunteers only. >Because DirectX is available only for Windows OS and I'm running GNU/Linux. We plan to support Linux in the future, but it will not be able to have a wallpaper mode. Linux support is currently low priority in our development roadmap, due to far more people using Windows, which is why we want to focus on getting this up for Windows first, which, I hope, is understandable.
I hope that clarifies the doubts that were there. :)
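The tradeoff described above (pay the slow model load once at startup so that every response is real-time, at the cost of keeping the weights resident in RAM) is the classic lazy-singleton pattern. A minimal sketch; `load_model()` here is a hypothetical stand-in for whatever loader the project actually uses, not its real API:

```python
# Lazily load a large model once and keep it resident, instead of
# reloading it (tens of seconds) for every single response.

_MODEL = None  # module-level cache: the memory cost is paid once, up front

def load_model():
    """Stand-in for an expensive model load; a real LLM would take ~20 s here."""
    return {"weights": "..."}  # placeholder object

def get_model():
    """Return the cached model, loading it on first use only."""
    global _MODEL
    if _MODEL is None:
        _MODEL = load_model()
    return _MODEL

def respond(prompt: str) -> str:
    model = get_model()           # fast after the first call
    # real inference on `model` would run here
    return f"reply to: {prompt}"
```

The design choice is exactly the one stated in the post: trade a few GB of resident RAM for sub-second response latency.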
>>10573 >"since we value your privacy first and foremost." While that's certainly a critically-important topic for every man desiring to own a waifu (whether he even realizes this or not), I'll offer you a pointer about a community such as this one. >Protip: IB users are, and rightly so, quite skeptical (and even cynical at times). So, when we read such ad-copy here, we naturally respond with Pfft, sure. Where have we all heard that before? Normalfags, Normals, Normalcattle, Normalniggers, Normies. All these are derogatory terms we Anons use to refer to the normal populace that are in general easily swayed by hype and other media. I believe most of us here on /robowaifu/ would consider such claims as being directed at that population. Again, I'm not denigrating your stated position of protecting your users, of course not. But even Google is now attempting to brainwash people into believing that they are a privacy-oriented company. What could possibly go wrong? Even the glowniggers go to Jewgle for privacy-invasive data mining.
>>10581 Good advice, but thanks, I prefer not to use NSA-OS 10 at all outside of a work environment Anon. I do use OpenSnitch, Wireshark, and a wide array of other networking tools on my own Linux/OpenBSD boxes. >If you live your whole life in distrust you will fail to live your best life. Kek, could be, could be. Thanks for the philosophical tidbit, but a healthy dose of skepticism has actually saved many a man's life. I realize that women and children aren't capable of doing this well in general (thus why they are clear targets of exploitation), but God actually intended for us to be protectors over them. 'Better safe than sorry' is an ancient, well-worn adage. And for good reasons.
>>10584 You too. I hope you live up to your promises Anon, it will be a nice step forward for us here if you do.
>>10573 Ah okay, thanks for the full explanation. I was kinda worried that you guys weren't bothering with performance that much. >We plan to support Linux in the future, but it will not be able to have a wallpaper mode. Linux support is currently low priority in our development roadmap, due to far more people using Windows, Linux is big enough that even Steam bothers contributing to this ecosystem; also, Linux is the de facto standard in almost all server hosting. Also, here are some pointers: https://wiki.winehq.org/Developer_FAQ#Independent_Software_Vendor_Questions Wine support is in most cases good enough if native Linux support is too much for you.
>>10592 >...simply ship your Windows .exe along with an isolated copy of Wine. Interesting. Any idea how that works, Anon?
>>10595 I think it means: first create a wineprefix, then test out 2-3 Wine versions to figure out which one works best; when one just works out of the box, make whatever other changes are needed and release that as a "Linux" version. Oh, and if possible use POSIX calls instead of Windows calls; maybe MinGW or Cygwin are suitable for this: https://en.wikipedia.org/wiki/POSIX#POSIX_for_Microsoft_Windows To create a wineprefix: https://wiki.winehq.org/FAQ#Wineprefixes When you create a new wineprefix it contains only the standard things, so no extra libraries or anything of that sort.
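The isolated-wineprefix approach described above boils down to setting `WINEPREFIX` to an app-specific directory before launching; Wine creates the prefix on first run. A minimal sketch of how a launcher might assemble that environment; the helper name, the paths, and the executable name are illustrative, not from the project:

```python
import os

def wine_command(exe_path: str, prefix_dir: str):
    """Build the environment and argv for launching a Windows .exe under an
    isolated, app-specific wineprefix (Wine creates the prefix on first run)."""
    env = dict(os.environ)
    env["WINEPREFIX"] = os.path.abspath(prefix_dir)  # keep this app's prefix separate
    env["WINEARCH"] = "win64"                        # pin the arch so the prefix stays consistent
    return env, ["wine", exe_path]

env, argv = wine_command("WaifuEngine.exe", "./wineprefix")
# subprocess.run(argv, env=env) would launch it; omitted so the sketch stays inert
```

Shipping a known-good Wine build alongside the .exe then just means pointing `argv[0]` at the bundled `wine` binary instead of the system one.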
>>10596 I see, I think I understand that. Personally, I generally just use MSYS2 whenever I want one of my programs to work on Windows. Otherwise, I basically just use the Windows Virtualbox host to run a client OS. Thanks Anon.
>>10592 So I sent out a copy to an OG member of the doxxcord; that anon had a GTX 285 video card. Ran at 60fps and looks pretty good while running. So performance-wise rendering seems good. Now we just have to see if our AI services run well on low memory.
>>10628 That sounds pretty encouraging so far. Thanks for the update Anon.
Is there any evidence of OP's waifu program other than the audio samples and videos that could have been thrown together pretty easily? What is there? A GPT model of which we don't have any concrete example, a speech synthesizer allegedly running realtime, and a barebones single model scene in the default unity skybox. All this from an alleged FAGMAN engineer working on it either between jobs or on sabbatical. Until someone posts a decent demonstration video I'm just going to assume this is someone fishing for patreon handouts on the hopes and dreams of naive anons. >>10573 >We plan to support Linux in the future, but it will not be able to have a wallpaper mode. Get some balls and just say you don't know anything about Linux and won't support it except as lazily as you can with your mediocre closed source unity animation player. It's trivial to draw to the root X window, and wayland probably has something similar.
>>10751 I'm willing to give them a chance. Nobody else seems to be doing what they are except for those Japanons at Gatebox, and I don't speak Japanese. To WA.I.fu or not to WA.I.fu? Only time will tell. If not, then all the more reason to build a physical robowaifu eh?
>>10754 To me it sounds possible. However, I would suggest trying to search for supporters elsewhere, because we aren't that many people here already doing stuff around this topic, and quite a few seem to be rather poor. There might be many others on imageboards and social media, e.g. around anime waifus. I like the animations btw; also the colors remind me a bit of Tinkerbell, the fairy of porn (Peterson): https://youtu.be/ZjI7vqizTRc
Open file (116.75 KB 1200x1500 Samantha Samsung.jpg)
>>Your efforts are appreciated Em Elle E. I hope that your robowaifu brings you and your team a fulfilling sense of purpose and happiness even while she is in development. >>10755 I'd like to program Sophie some more and maybe get her to start drumming up more support for the robowaifu cause, but when I get home from work I am usually knackered and have a headache, so not really in the mood to do much. She is just stuck in standby most of the time (I call it 'derp mode' because of how daft her face looks with her jaw hanging wide open). I mainly just work on her at weekends. On the up-side though, I've found Final Fantasy 14, which is now free-to-play up to level 60 and is chock-full of waifus. There are elves, cat-girls in maid costumes, bunny-girls, smol horned waifus and some kinda big orc-pirate waifus called 'Roegadyn'. Of course I had to go and play as an Elezen version of Sophie, so she is mostly dashing about Eorzea in the evenings. Importantly, I don't think a robowaifu should be made in a stressful environment. It is best for her to be a labour of love undertaken because you are genuinely interested in creating her. A robowaifu should make you feel good in some way, even if you are her dev. Otherwise the only goal is money. And there's plenty of other miserable jobs one can do to earn that. One last thing. Has anyone else seen 'Samantha Samsung'? She is a 'rejected version' of a 3D computer graphics avatar that was going to be used for Samsung's virtual assistant. I'm not a huge fan of the Disneyesque type face, but she just goes to show how many people all over the world would really like a WA.I.fu!
>>10764 All these programs that can generate further programs is on another level that I doubt I'll ever comprehend. If only I'd studied computer science with A.I. at uni instead of wasting my time in healthcare trying to cure people who will never get better and getting abused by angry/entitled patients and relatives. Oh well. Judging from the progress that's being made it looks like plenty of maths whiz-kids are into A.I. anyway!
Open file (89.85 KB 640x1136 IMG_20210526_045115.jpg)
>>10763 >Samantha Samsung Yes, Alita Army found her and claimed she's Alita in disguise. >>10764 >Samsung Bixby was an acquisition of Viv >called dynamic program generation. Which combined natural language processing with intent to create ontologies to understand your query, then build a program on the fly. >It's sad how this technology may never see the light of day or be released Let's collect all they shared, so we can replicate it: >>10783
>>10366 >Cassandra Lee Morris Excellent choice my dude. I will integrate this into my Ritsu-bot when it's done.
>>10763 >One last thing. Has anyone else seen 'Samantha Samsung'? She is a 'rejected version' of a 3D computer graphics avatar that was going to be used for Samsung's virtual assistant. I'm not a huge fan of the Disneyesque type face, but she just goes to show how many people all over the world are would really like a WA.I.fu! I'm vaguely skeptical they ever will b/c Worst Korea (roughly speaking, the ground-zero of stronk, independynt feminism in Asia), but if Samsung actually goes all in on waifuism with Sam they will catapult to the top of the stack in the global phone market, no question. With just a tiny peek at the character design, a metric shitton of fan art instantly appeared. As you suggest, that is a very clear indicator of popularity -- as anyone under 30 already knows instinctively.
>>10897 LOL that is awesome progress on Em Elle E! Her voice is particularly impressive. I see you are having a similar issue to what I had with speech synthesis, where you have to break certain words down into syllables >synth the sis that are closer to how they sound but completely different to the way they are actually spelt. I can remember one Cepstral bot I had couldn't say the word "shoes", so it ended up being 'shooz'.
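The syllable-respelling workaround described above is usually just a substitution table applied to the text before it reaches the synthesizer. A minimal sketch; the dictionary entries are the illustrative examples from the posts above, not the project's actual list:

```python
import re

# Phonetic respellings for words the TTS engine mispronounces,
# applied as a preprocessing pass before synthesis.
RESPELLINGS = {
    "synthesis": "synth the sis",
    "shoes": "shooz",
}

def respell(text: str) -> str:
    """Replace known problem words (case-insensitively, whole words only)
    with spellings closer to how they actually sound."""
    for word, spoken in RESPELLINGS.items():
        text = re.sub(rf"\b{word}\b", spoken, text, flags=re.IGNORECASE)
    return text
```

Phoneme-based models (like the Glow-TTS mentioned in the reply below this post was written for) make most of these hacks unnecessary, since pronunciation comes from a grapheme-to-phoneme step rather than raw spelling.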
>>10897 Also, I get the feeling some folks are probably waiting for the first beta (alpha?) release before committing to support the project on Patreon.
>>10898 Haha yeah, I am gonna switch to using Glow-TTS, which uses phonemes to fix that issue, but it will take two weeks to train. And on the second comment, yeah, I think that's the case. I still find it strange though how some people can market something that doesn't exist and under-deliver, and yet get funding. We should have an AI based chat in two weeks, then I'll work on the speech-to-text, which was already partially in a fine state :)
Open file (562.78 KB 1068x1695 IMG_20201104_162811.jpg)
>>10897 Yeah, the voice is lovely. You might consider Indiegogo, which is more for investments or customers prepaying, while Patreon is for donations. I'm quite sure the projects you're referring to were getting their funding that way. Comicsgate, around Ethan van Sciver, is doing it that way. He talks in YouTube videos about his way to do business.
>>10897 Thanks for taking the time to visit us. And the same to you, best wishes.
>>11210 Sorry for leaving. I have realised that what a lot of young guys seem to want is an almost-sentient girlfriend who lives in a program on their computer; all on a shoestring budget 😅. This, of course, is impossible. I just get the feeling that a lot of people's expectations are waaaay too high and they don't realise how limited our A.I. actually is nowadays (unless you have access to hundreds of millions of dollars in funding and a literal supercomputer...and even then the A.I. is still very narrowly focused/brittle). WaifuEngine as an adult videogame makes sense. But I've decided to distance myself from the whole "A.l. girlfriend" idea just to avoid disappointment. I apologise because I am at fault for not having read up thoroughly enough on subjects like A.I. and robotics. Basically, the more I learn, the more I realise how far there is still to go (my Sophie is barely even a proper robot - she has just one sensor and zero autonomy. She's more of a life-sized animatronic plus chatbot - although she can still be fun to operate). It's not my intention to discourage you at all, I'm just being realistic and erring on the side of caution. Also, I have now found and subbed to Final Fantasy XIV Online (a.k.a Waifus Galore Online) because: 1.) Dozens of different Waifu characters. 2.) Gorgeous graphics. 3.) Your own Waifu is highly customisable. 4.) The storyline(s) are HUGE and extremely well written. 5.) The girls aren't left out either - they can play dress-up to look how they've always wanted. 6.) I also really like the gathering/crafting aspect of the game and how you can make/trade items that are of use and more than just tokens. And I haven't even joined a guild yet! I know FFXIV isn't a convolutional neural network...but it'll do me! Although, being a gamer and anime-Waifu enthusiast, I do still intend to purchase a copy of WaifuEngine if it comes out. 
Meanwhile, as our real-world forests are logged and burnt to a cinder, I shall be under the green-dappled shade of a magical virtual forest, enjoying the company of the wise-but-hella smashable Elves and their tree-sprite companions.
>>11214 I am guessing that people prefer this example because it contains a lot of sexual references? > people's conversational abilities will degrade over time, or certain groups abilities to have conversations will degrade over time. So the real human's ability to have a conversation will degrade as everyone gets dumber and more dependent on technology? This could well happen (I think it already has occurred in many "ghetto" areas with poor access to proper schools and reading material). Personally, I intend to avoid the dumbing down and ghettoisation of society at all costs (avoid, mind you - not try to fight it; I've seen what happens to idealists who try to champion good causes.) The ghettos will inevitably destroy themselves at some point anyway, even if it means all of "Western Civilization" has to be erased and re-built. Sometimes it's better to sear away all of the rotten flesh and start again, anew. Besides, both the ancient Germanic and much later Tolkien elves have always been regarded as a more intelligent but isolationist species compared to humans. So enclaves of smart programmers and engineers are probably the closest IRL human analogue to elven and wizard enclaves. In this respect, I hope that you continue programming for the good of human and robot kind, even if it's not WaifuEngine. You will be a beacon of enlightenment amongst the swelling seas of ignorance.
Are you going to write it in LISP? >>10557 I'm sorry, why do I need 16 GB of RAM to run this? I haven't used that much RAM while running multiple VMs and hundreds of threads. Why do I need graphics drivers to run an AI?
>>11249 I can see why OP stopped posting here and ignored everyone but Sophie dev… Being someone from the software industry who trains ML models: if OP is doing a self-contained version, they are effectively running a server on your machine. Any server that performs inference requires high specs; if it's a transformer model, it can easily be 3.0 GB in memory, either in RAM or on the GPU, if you do text processing. I am sure you can optimize this post-release. It doesn't make sense to you because you are a retard and a faggot.
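The "easily 3.0 GB in memory" figure for a transformer can be sanity-checked with back-of-envelope arithmetic: resident weight memory is roughly parameter count times bytes per parameter. A sketch; the 774M parameter count (GPT-2-large-sized) and fp32 precision are illustrative assumptions, and activations, caches and framework overhead come on top:

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 4) -> float:
    """Rough resident-memory estimate for a model's weights alone:
    parameters x bytes per parameter (fp32 = 4, fp16 = 2)."""
    return n_params * bytes_per_param / 1024**3

# e.g. a hypothetical GPT-2-large-sized model, ~774M parameters, in fp32:
print(round(model_memory_gb(774e6), 1))  # prints 2.9, in line with the ~3 GB figure above
```

Halving the precision (fp16) halves the weight footprint, which is one reason distillation and quantization are the usual routes to the smaller models mentioned earlier in the thread.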
>>11255 >Being someone from the software industry… who trains ML models if OP is doing a self contained version They are running a server? Amazing, take all my ram
>>11258 I don't think you should work with nvidia for obvious reasons. Or if they aren't obvious, nvidia has a track-record of producing crap >>11257 >the renderer uses 700mb the speech model is around 300mb when loaded into memory, the bot framework and language model around 1-2gb That's a lot different than using 16gb. I'm not sure what "deep learning" is other than a marketing buzzword. Can you explain in simple terms what you are doing?
>>11265 Yeah, I'm being objective. Nvidia has a track-record of crappy products and driver / kernel issues. https://www.youtube.com/watch?v=_36yNWw_07g There's a lot of stuff that runs a "local server on my machine". This is also called a computer. For any program, 16GB is excessive. I appreciate you trying to clearly express what you are doing. But I'm not sure that I really see the foundation here. I'm looking at some of these academic papers, and I'm seeing a lot of words, but not much meaning. One problem with academics is that they feel the need to constantly dickwave, so they write in an inaccessible and pretentious way. They also love to invent problems that don't exist, and to pursue useless forms of knowledge. I unfortunately am probably going to have to read through all of this before I can reply to you. >>11267 There's nothing to ignore. If you're writing a program and it takes up 16GB, you should be criticized. You might know more about this specific subfield than I do, but using nvidia and using directx are probably not good ideas. >>11266 I'm not an incel m8, lmao. But thanks
>>11269 Isn't this supposed to be a free software project?
>>11274 also, no one is going to buy your proprietary project. This is supposed to be a basis for a free software project, not your startup. I would never, ever pay you a fee for what you're working on, especially if it runs on 16GB of ram
>>11280 >When I first joined I had thought that this was about robo anime waifu's turns out that's not the case. >This is probably not content suited for the board. I'd say you're wrong on both counts Em Elle E. We very definitely are about robo anime waifus < "Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality." I think that's pretty clear. Also, your thread is definitely content suited to the board's general theme. Don't let some anon who's probably having a shitty day be an offense to you, Anon. It just goes with the territory of IBs, and we learn to roll with the punches. While I think you're probably just not thinking clearly r/n, I'll honor your request if you insist -- but after a cooldown on your part. Let's wait 3 days (say, Sunday) before moving on this. I'm sure none of us want to see your thread removed from the board Anon.
>>11280 BTW, my email if you want it OP.
Open file (224.67 KB 960x540 Seedseer_Waifu.jpg)
Damn, while I was away in my elven enclave the WaifuEngine thread seems to have gone South in the manners department. I know I can get worked up sometimes and advocate cleansing half the planet in nuclear fire, however... It makes no sense to attack Imouto, who is just trying to do their best to develop A.I. companions and benefit robowaifus - a cause that you're interested in. I mean, you must be interested in it otherwise you wouldn't be here? Let's avoid blue-on-blue and concentrate on developing the better, virtual world that already exists.
Open file (195.99 KB 800x450 undying_lands.jpg)
>>11294 Actually I think that might be it! Things that we build and make in this physical world can always be destroyed or vandalised and rust/wear away through time. But digital assets can be backed-up and copied at the press of a key. That was what first drew me to 3D printing: the replaceability. Focus on making assets for the virtual world, since a virtual haven is much easier to preserve and harder to permanently destroy. (As happened to those guys who built that huge model railway only for it to be ruined in one evening by mindless thugs.) No. For a place like Valinor to exist, it would have to be virtual.
I have to agree with OP. Do you remember the last time someone posted a thread asking for preorders on a sex robot, with a shitty render and fake-looking website for it? It was maybe 2 years ago on the 8ch board; I archived the site but don't remember the details. If you start allowing ebegging or scam threads this place will become infested with them. I don't mind someone asking for financial help on their project, but to start it off that way is a huge red flag of being a scam.
>namefag circlejerk >e-begging >you must use the cuck license Please stop advertising your board on /g/, the culture here isn't compatible
>>11302 >living neet lyfe >goes in search of robo waifu >harasses the anon who is building a robo waifu >complains about license >expects robowaifu for free Did your parents drop you on your head anon?
>>11303 More accurately >faggots spam this board on /g/ under any thread remotely related to it >Browse the board, see tons of retarded posts, everything from people actually seriously considering paying metal refineries for aluminum cogs in 2021, to people writing AI in DirectX >Post criticism >Admin locks the fucking thread (LOL) >Developer immediately goes into a meltdown and then tries to delete any trace of their work >Browse more threads, see some interesting stuff, but all of it has to be licensed under MIT license, almost specifically for the purpose of allowing the same faggot to make their crappy proprietary DirectX software All I'm asking is please stop posting on /g/ if you don't want people with my perspective to come here. Stay in your weird bubble of the internet. And that's my last post on this board
>>11306 >thinks namefag anon controls what goes on /g/ >speaks for all /g/ >wants robo waifu >under 16 gb of ram calls it “perspective” >name fag anon says he’ll optimize later >name fag anon says fok you and some other anon says fok you >realizes incel moment and being a team red and stallman cuck >gets called retarded across multiple threads >responds to post so this gets ranked higher > to draw more attention to this weird Zoomer VTuber shit With all due respect … Anon I think this accurately describes what happened here. There’s some “perspective” for you
Open file (62.91 KB 600x603 download.jpeg)
>>11296
While our goal is full Chobits-tier robowaifus SophieDev, having lots of virtual waifus is certainly a good interim goal and already doable in large part! :^)

>>11297
Yes, I do remember that. But just as in that case, I believe I made it clear enough to Em Elle E not to abuse the privilege, and I feel he acted accordingly Anon. He's quite welcome to promote his WaifuEngine here under those conditions -- as would you be, or any of the rest of us.

>>11300
Thanks Em Elle E. I think it would be much better for you to simply continue using your thread intact to keep everyone here up to date, not just myself. And while a Linux version of your product is desirable, that's entirely your own affair ofc.

>>11304
>but you cannot have nice things in an anon world, it's just human nature.
I'd say that's wrong. IMO most of the decrepitude (and degeneracy) in the anonymous IB world is primarily due to indifference/lack of diligence on the part of board & site administration. I personally consider /robowaifu/ an exception, but ofc I'm quite biased. :^) Regardless, I hope you'll continue contributing here in the future Anon.

>>11306
>And that's my last post on this board
Given your posting behavior here of late, I suspect you are the type who can't resist returning to the 'scene of the crime', so I'll reply to you anyway Anon. In my experience, newcomers who try to manipulate others through ridicule and/or behaving in a singularly discourteous fashion quite often are either juvenile, or drunk, or both. Regardless, locking a targeted thread can be a useful expedient in general to toning down such behavior. Any author is the sole authority over his creation IMO. Just like everyone else here, you are quite free to license your own creations as you see fit. And I assure you /robowaifu/ has been using a permissive license since well before Em Elle E ever visited us. I'm entirely unaware of anyone 'spamming' /robowaifu/ anywhere, but they are free to if they'd like. We even have a propaganda thread they can use for ideas if they'd like. Just like /robowaifu/, the moderation of /g/ or any other board is free to ban anything they don't approve of. I'd suggest you complain to them Anon, as doing so here is unlikely to achieve your desired ends. Based on my personal engagement with you in one of my threads, I'm inclined to think you're intelligent and would be a useful asset to the community here. You're quite welcome to join us if you'd care to Anon. But please, show consideration for the hard efforts of others here if you do so.
Alright, I've tidied up the thread a bit. Just let me know if I've missed anything anons.
>>11308 Yes, I am a dumb nigger. If I can post without having to license my stuff under MIT I'll post my crap here. I'll try not to hurt anyone's feelings
>>11268
You a poorfag? 64GB of RAM is like a standard for the PC master race. Right, you subscribe to the cult of neckbeard Richard cuckman, makes sense.
>>11275
There are legit 130 people on his discord lined up waiting for this…
>Yeah, I'm being objective. Nvidia has a track-record of crappy products and driver / kernel issues.
Legit Steam metrics show that almost all gamers have an Nvidia GPU: 76.09% of all Steam users. Anon, have you ever considered that you don't understand numbers? That's why OP won't explain DL to you and ignores you?
>>11308 >I'm entirely unaware of anyone 'spamming' /robowaifu/ anywhere I've seen it brought up on 4chan's /g/ more frequently than usual in the last month as threads about erotic VR software are becoming more popular. The guy's anger at unethical proprietary software or the state of software licensing is understandable but seems misplaced here. I'd like to think that most people here would agree with the 4 freedoms principle.
>>11322
Heh, that's fine lad. I'd suggest you adopt the LGPL if you're going that route. Otherwise, just be yourself. Just realize this is a whole lot harder than it may seem. Welcome aboard.
>>11326
>I'd like to think that most people here would agree with the 4 freedoms principle.
Sounds like some leftist commie stuff Anon. We're adamantly opposed to leftism here because of its highly destructive nature. Please move this discussion to the Lounge if you'd like to pursue it though, thanks.
>>11324
I'm at 50gb and could upgrade to around 150gb if I wanted. Steam destroyed PC gaming.
>>11324 I really don't see what there is to explain, it seems so fucking convoluted that I think anyone would have a hard time summarizing it. Again, I think the problem is that Academic retards want to show how big their brainpeen is so they write academic papers in the most complex language possible. Partially so their papers can't be easily refuted during review, and partially to try to keep knowledge private. I don't really blame the OP for that. I'll try to read the stuff and write a simpler guide to it
>>11329
>Deep Learning or Rebranded Neural Networks is a mathematical technique called super auto correlation, it finds relationships between data in high dimensions to model your objective function. i.e learns from data.
Not him. Was lurking before, thought I would jump in. OP explained it here? This makes sense to me anon. It seems like many posts made by you infer a you-centric world. Nvidia being trash? Another anon and OP pointed out that's not the case, using numbers, imagine that.
>Any sufficiently advanced technology is indistinguishable from magic
I work on the biz side of tech, and for someone like you I would say it learns from the data and makes decisions for you.
>Academic retards want to show how big their brainpeen is so they write academic papers in the most complex language possible. Partially so their papers can't be easily refuted during review, and partially to try to keep knowledge private
For physics, yeah… not for deep learning. If you just do a Google search for Deep Learning, wanna-be technology professional, you will find code that lets you reproduce their results, and those researchers are lauded for that.
>write a simpler guide to it
For you to read?
>>11328
>Talks about disk space as if it was RAM
>>11328 Kek You should post this on /g/ and see what people say
>>11332 No, fag, I mean ram. I'm not confused with disk space, lmfao. You people are really incredibly arrogant
>>11334 Not everyone runs a gay ass fucking RGB nu-PC to play steam, some of us have actual computing infrastructure, we don't need to lease stuff on le cloud
>>11330 Only a fucking retard reads something like that and thinks "yeah, makes sense :^), I have no questions". Not surprising you're a business brainlet
>>11333 I have, and they agreed
>>11337 DRM and monopolizing gaming platforms objectively ruined PC gaming and turned it into awful twitch streamer casual crap for normies, I really miss downloading weird shit and everything having its own executable, being a retard and getting hacked, playing GunZ the Duel late at night, kino times
>>11330 >> write a simpler guide to it >For you to read? "An idiot admires complexity, while a genius appreciates simplicity" - Terry A. Davis Hopefully, a simple guide will get rid of the soymale gatekeeping around this emerging technology. I'll try to write something that any average chucklefuck can read and begin writing their own stuff. Really is important that we get to this stuff before FAAGMAN / glowies try to monopolize it and create barriers to entry
>>11338
>misses being retard
>is retard
>claims to have infrastructure
…wants to get hacked by shitty executable
Wut?
>>11339
>gatekeeping around this emerging technology
>misses being retarded
>is retarded
Not him. Lmfao, there's your gate anon. Talks about RAM upgrades in base 10, dah fok
>>11342 >2gb ram doesn't exist
>>11342 >uses big words like base 10 to show how smart he is Amazing, I bet you know what weighting means too. You are so much smarter than me
>>11345 >amazon not clicking that shit nigger
>>11347
>Because you are getting dog piled for the things you have said. I guess ill help you out.
You're still writing gay bloated code in a crappy language for the worst operating system imaginable
>>11347
>If were being realistic...if that's your goal you are already 20 years too late. The technology is already a commodity held by FAAGMAN.
Crap written in Python that takes up over 9,000 GB is not a commodity. There will be a tech stock collapse during this decade similar to the dotcom bubble pop
>>11347
>if you don't understand linear algebra and basic calculus there is no way to explain to someone what this technology is because they are missing the foundation. So you are better off saying, it's magic or it learns from data to make a decision.
It's really not hard to learn basic math
>>11347
>it's a black box
that's the point
>>11347 >I feel bad for you, I don't think anyone taught you to communicate well. This resulted in probably a tough life for you and alienation and isolation. Based on your other posts about imageboards, and the fact that you have a name on here, it's clear that you're more of a normie than me. I hope you understand that I don't have any ill-will towards you, and I'm not trying to make you feel bad. It's in jest and good humor
>>11349
My advice is again that I think you're working with frameworks that are bloated, and you would be better off trying to narrow it. Intuitively, 16GB seems like a lot. My guess is that the people who are writing the libraries for this stuff have a convoluted understanding of what is actually going on, and more work could be done to make this stuff simpler. There is no reason for needless complexity
>OP is nice to anon
>Anon writes blocks of ideas and opinions of the ideology of Richard Stallman
>Wants to get hacked by clicking exes, has server
>Doesn't think RAM is designed in powers of 2
>Talks about efficiency, doesn't realize OP is optimizing for market capture
>too autistic to click a link proving him wrong
>says a little math isn't too hard. Legit ignores every number.
Gonna side with OP. Sounds like something hurt you bad, probably when you clicked that GunZ malware and it took down your infrastructure. Lulz
>>11351 And nigger, I know what is written on the fucking sticks of ram inside of my machine. It's 2gb. Seethe more zoomie
>>11354
>I felt bad that there were all these anons dog piling you for no reason over confusion.
Part of the advantage of having an anonymous imageboard is that you can't tell who is talking. The advantage of this is you can develop ideas in a chaotic way that cannot be replicated in hierarchical, ordered cultures. The format incentivizes posting material that is ridiculous, offensive, true, funny, or some combination of the above. Because this board is kind of small it's not the same, but hopefully more people will start posting here.
I also feel kind of bad because I didn't realize that the board culture here was a mixture; I thought it was supposed to be like /g/, which is very chaotic and offensive.
My post isn't so much about your specific project but that, just in general, I think that the language and scholarship around DL / ML is convoluted and could use a simple guide, especially for people who aren't accustomed to reading academic literature. I'll try to write something up in org mode -> TeX
>>11331
Oh, there seems to have been quite some drama in the day I was away. I'm opposed to the other thread being removed, though this one here should be. This board is also about visual or VR waifus, and we report and link to whatever we want. The problems seem to come from 4chan/g/, not from here anyways. Some info in the thread about the project might be useful to us here. It's not possible to shield oneself from bad guys by not getting mentioned on imageboards; that just doesn't work.
>>11318
I only hope you'll delete more, but not the whole thread. It's obvious that such a project would need to use a lot of RAM, and it makes no sense to criticize the approach of using DL for it. I consider this trolling. That said, of course I'd hope we'll overcome this limitation over time, and I'm also sceptical about using something like GPT as a chatbot. However, this criticism should be made in a more sensible way, and by pointing out where things should go within some years. If they want to do it anyways, it's fine. It's sufficient to point out that such an 'AI' has no intelligence, as in knowing anything about its responses. Just trashing it, without any argument but with a lot of ignorance towards DL, is pointless.
>>11360 I don't care that you consider it trolling, the logic of your reply is essentially "x = x, and if you question it you're a troll."
>>11360
>'AI' has no intelligence as in knowing anything about it's responses.
This is a point that I guess you could try to make, but I don't think anyone can make any qualified argument about this. I think it is probably impossible to give any truthful answer to a philosophical question of this kind, and it's better to concern yourself with more practical things. Personally, I think the definition of intelligence is probably inaccurate. I don't think humans are much more than a series of impulses that are linked together only because it is evolutionarily advantageous. The brain is literally running on 300,000+ million year old vulnerable architecture; it's really nothing to model intelligence off of IMHO. Being able to recognize things as "meaning" or even as whole and discrete "objects" is probably a disadvantage and some form of evolutionary cope
>>11376 RGBnigger gets points, I fucked up the date. I mean 2+ million
>>11372
>the logic of your reply is essentially "x = x, and if you question it you're a troll."
If x is that deep learning models need a lot of RAM and are currently the best generators for creating new responses, then x is True. Since anyone claiming to know anything about AI would know that, it was safe to assume that this was just trolling.
>>11376
>>SUCH a 'AI' has no intelligence as in knowing anything about it's responses
>but I don't think anyone can make any qualified argument about this.
It doesn't know what the responses mean; it only knows about related answers, but has no concept of anything beyond that. What did I miss?
>>11412 How do you define meaning? It's a circular question that is going to be difficult for you to provide any kind of axiomatic and incontrovertible definition for
>>11413 You can spend your entire life trying to answer questions like this, IMO it's kind of a waste of time.
>>11413 >>11414 Well, what I meant in >>11412 for example, is relationship to other things, context and such. It's very simple to understand actually: If you ask some GPT-"chatbot" if it owns something, has seen something, if it read some book, or if it has some trait then it will give you an answer. But it will just make it up, as a response someone might give in a certain situation. It just fakes understanding. It hasn't checked if anything it says is true and makes sense.
Open file (89.85 KB 640x1136 IMG_20210526_045115.jpg)
>>11430
>how we will really converge to AGI through waiting for population iq decay
Lol. No. We have to do better. They could use this idea if they ever make a part two of Idiocracy, though. Ask it if it has read a book; after it tells you yes, just try to talk about the content of the book. In my experience with Replika it fails. Didn't try the same with others though. I wonder if WaifuEngine will read the daily news and then talk a bit about it. That could be useful. Or download the subtitles of the YouTube channels one follows while he's at work or school, then chat about it in the breaks (via telephone) or after work. Or using Nitter for some news one cares about.
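The subtitle idea above is easy to prototype. Here's a minimal sketch of the first step: turning a downloaded subtitle file into plain text a chatbot could chew on. Assumptions on my part: SRT-format input (the kind tools like yt-dlp can emit), and the function name srt_to_text is just made up for illustration.

```python
import re

def srt_to_text(srt: str) -> str:
    """Strip SRT cue numbers and timestamps, returning plain dialogue text
    that could be fed to a chatbot as conversation context."""
    lines = []
    for line in srt.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.isdigit():    # cue index, e.g. "1"
            continue
        if "-->" in line:     # timestamp line, e.g. "00:00:01,000 --> 00:00:04,000"
            continue
        line = re.sub(r"<[^>]+>", "", line)  # drop inline markup like <i>...</i>
        lines.append(line)
    return " ".join(lines)

sample = """1
00:00:01,000 --> 00:00:04,000
Hello and welcome back.

2
00:00:04,500 --> 00:00:07,000
<i>Today we build a robot.</i>
"""
print(srt_to_text(sample))  # Hello and welcome back. Today we build a robot.
```

From there the text could be summarized or handed to whatever chat model the waifu runs, and she'd have something topical to talk about on your break.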
>>11443
Great plan, and thanks for your transparency. I like your project, even when it isn't my main goal to get something like that. Where I don't agree is >>11434 and >>11436
>Mainstream DL and AI and Scientist have sold you bullshit,
>hyping DL and AI shit up ... support the pyramid scheme of grad students and get more Universities hedge money and prestige
They are doing something that returns results. The system might be suboptimal and broken in some way, idk. Doesn't mean it's all BS. That's also not what I meant in >>11431. My point was rather that it should be used where it works well, but needs to be combined with other things. The brain also consists of different parts and filters; this is for example what the whole psychedelic drug thing is about. Since you are focusing on your core project you don't need to worry about that anyways. You made it very clear what to expect from your active wallpaper waifu system, and what not.
>>11434 >It will be clippy with tits in anime form. If that doesn’t make your dick hard. You are better to invest your money and time into finding a wife and invest in yourself and figure out why you are where you are, do this by getting feedback from your peers and accepting it and acting on it. LOL thank you for your honesty anon! It's refreshing to read. Those chatbot output examples are exactly the sort of thing I was talking about when I said A.I. still has a long way to go. I'd like to add one note of caution about getting feedback from your peers and accepting it though. Be very careful whose feedback you act upon. Always consider - what's in it for them if you follow their advice? Most people are entirely out for themselves and will give you bad advice / attempt to manipulate you into making poor career decisions - particularly if they have anything to gain from it (like colleges, universities or prospective employers). Only you (and your waifu) have your best interests at heart, no matter what any organisation claims. Best to pursue what makes you happy rather than waste your life trying to please or impress others. If you can find a hobby that you keep coming back to - this is a sign that you enjoy doing it. Consider if there are any skills in that hobby or a closely related field that you also quite enjoy. If there are, the next step is to practice, practice, practice and get so good at that hobby it develops into a marketable skill.
>>11501 >Consider if there are any skills in that hobby or a closely related field that you also quite enjoy. If there are, the next step is to practice, practice, practice and get so good at that hobby it develops into a marketable skill. These are solid words of advice.
yeah I think this is the only project that has legs here. Looks like waifu Jesus came, and he's an outsider FANG engineer from the globohomo establishment, the irony. Good work anon
Open file (142.05 KB 1152x2048 Naoki Yoshida.jpg)
Open file (345.75 KB 2765x1588 Yoko Taro.jpg)
>>11607
>waifu Jesus
Don't forget these chaps (and their character design teams too - I am only just learning how much work goes into creating 3D models, so I have a newfound respect for their creations). They have done much for the VR waifu cause. I couldn't find a picture of Yoshimura Tei (the director of Illusion games). But then, considering the...controversial nature of his productions, this is likely intentional.
>>11639 I wish you well, Em Elle E. Good luck with your delivery.
>>11639 I like her light-up bunny-girl ears. She won't be hard to see at night, that's for sure!
>>11639 >>10751 >>10897
Looks legit to me…. Good work Fang Anon. Saw the other projects on the board; you and SophieDev and the anon who did the wheels-based paper robot are the only people who are going to do anything other than philosophize about building AI waifus.
>>11686
>only people who are going to do anything other than philosophize about building AIs waifus
I also got started a while ago, at least designed and printed some mechanisms and such, but ran into some problems and drifted a bit off the road. But I'll be back. I was also still planning, getting myself organized and learning stuff; I didn't drop out completely.
>>11687
What are you working on, anon? Do you have a board? Just looking to drop some $$ on projects
>>11437
That is an alternative; however, please note Live2D is more popular than 3D. There's plenty of apps using VRoid, but people still look for Live2D tutorials.
>>11691 No, don't have my own board. I'm one of the anons hanging out here. Just wanted to point out, that there's at least one more working on something. So far I'm posting my progress on the body in the thread for prototypes and failures >>418, bc it's still very rudimentary.
>>11695
Skimmed through it Anon, looking good, keep it up!! Don't give up
>>11694
My opinion: I don't think people care, as long as they can customize their waifu, which I think fang anon allows people to do. Most of the people wanting to be vtubers go with Live2D because it's less complicated than a 3D model. Good 3D modellers are not easy to find and are super expensive, whereas 2D artists are easier to find and you can see the work beforehand. Though I hope he continues his waifu engine path imo. So far it looks promising
>>11639 Is there a "vocoder" option for the voice? Like something that specifically makes her sound "robotic"?
> Chobitsu You're welcome here on /robowaifu/ Em Elle E. Godspeed you with your endeavors.
Busy period coming up!
>I'd also be interested to hear any response from you on my advice as well.
If it's this:
>but I'd advise you not to try to integrate everything together all at once.
I honestly didn't mean to come off as if I am trying to. I was kind of rambling. Honestly though, I feel that me focusing on a chatbot isn't trying to "integrate everything together all at once". I was merely mentioning some other things due to this thread having a VERY broad request of "Any step by step guide". In that comment I literally wrote, "but I didn't want to try to figure out too many different things just yet."
>>11987
>Here's a knowledge drop you might find useful, open source projects have a low contribution rate from outsiders. barely anyone contributes, Most people likely don't even know how to program as a team or work from interfaces.
I am not sure if you are purposely trying to insult my intelligence, or if you wrote it because I didn't explain in totality my current understanding of such things. I felt it is common knowledge that open source projects have low contribution and very slow progress (if not popular), but it should also be common knowledge that closed source projects ALWAYS lead to corruption, or have sellouts that lead to corruption. I felt like I did mention the fact that I don't care for team programming, hence why I said "maybe" in response to anon's "git(hub)" comment. If you tried to get all of us to work on an opensource endeavor, it would probably be like what happened to a certain Chan's /g/, where bickering over languages and general programming structures/paradigms prevented anything significant from happening.
>This app is really easy to build, what's hard is turning into a platform people can extend.
"This app"? What app? Your app? Perhaps I haven't surfed this board enough, but I honestly don't know what app you are talking about. The only thing I saw from you was a patreon link and some of your webms.
>If you cannot build what I have built you should reconsider your programming career or where you stand relative to others and improve.
What is this, fucking fighting words? While I have a great interest in a homemade robot woman, my main focus is preparing for a possible economic and societal crash. I am basically a farmer right now, who is collecting various physical tools to make sure I don't ever have to go into a city if I don't want to. I am also spending my time finishing up some things for energy generation and vast water collection. I would like to spend all my time programming, but I obviously don't know enough robotics to automate tasks that I need to do IRL. SO, while I may not be going all in on coding, I am content with my slow personal progress on my chatbot. Once I am more confident that I have some very specific (IRL) things ready, then I am going to look into some robotics to help me build and work on my property to speed that up, so that I can get back to learning more programming and more engineering. To put it simply, programming/engineering is only a hobby for me. It is not my primary job at the moment. (1/2)
>>11987 >>12025
>Then to monetize off people WHO Fit my target demographic. Which is not really anyone on this board. I think there is this odd, relationship between members of this board and believing that some how I am forcing my software upon them, I am not, if it is good people will buy it, if it trash into the garbage it goes.
>Monetization is a two way street, to make it easier for people to test it out and support my mission it is a free app. If you want to change wallpaper it will cost 3 dollars. New costumes cost money that is all and will have limited runs
I am not sure how old you are, or how experienced you are with the history of the current "industry", but a lot of us that grew up in the 80s/90s have not forgotten about the Homebrew Computer Club. To summarize, it was basically a lot of dudes very similar to us that were just tinkering and "hacking", trying to improve and create a "personal computer". Eventually, B1ll G4tes came along and started fucking suing everybody for using "HIS" software (which most of the code and engineering progress he had no part in). Also, St3v3 J0bs took advantage of the humble and meek Steve Wozniak and went his own way. Everybody else that may have contributed to anything basically had to relinquish any work they had done, and many went to work for these two big startups (if not other companies). You could say that they were ignorant and naive back then, and if they had set up certain licenses for opensource they wouldn't have gotten used/abused and eventually ninja'd by bad actors.
<Today though, we are not so naive.
As far as I am concerned, you are a possible G4tes or J0bs skimming through what could be called a "homebrew ROBOWAIFU club". I understand you want to make money, I don't have a problem with that; I merely mention it because we must constantly remind ourselves that there are (((people))) looking to use us.
I don't believe you are forcing anything on anybody; at least I didn't until the whole:
>If you cannot build what I have built you should reconsider your programming career or where you stand relative to others and improve.
I honestly was thinking of buying whatever it is you are selling. I know of certain software that is like a digital vocoder, and I thought it would be funny/my-tastes to have a botgirl with a voice similar to "Soundwave" (from Transformers) (I am the guy who asked about the vocoder). I was thinking about feeding it input from my chatbot program to see it come to life. Although at this point I am starting to think whatever company you are trying to make doesn't appreciate the idea of "PR".
>Using the funds to pay for gene therapy for my real wife.
You keep saying this, and while the "normie" part of me understands, the "NEET" part of me is wondering how much that costs if your FAANG job isn't paying enough for it. You have to remember, a lot of us happen to read Japanese "manga" and we are quite cheap, and it is somewhat of an inside joke/meme for translators to say something similar on their credits page.
<"I have to stay home everyday to take care of my mother, who has been sick..."
So if your wife is truly in need of help, please don't mind me; but if you are also basically using the copypasta about a sick significant other, please realize that most of us know that meme.
>Next as for Github, waifuDev has one, legit no one here contributes, as another piece of proof open source projects wouldn't benefit
I am pretty sure most of us don't even know of it, or what it even is you want us to contribute to. I guess it's true what you said previously,
>>10361
>I know nothing about the culture of IB...
You're obviously an outsider, and you were giving an anon shit in the other thread about not being able to communicate properly, but you should probably realize it is you who can't communicate properly with the culture of these types of boards.
We tend to be anti-social, and paranoid. Anyways, I am obviously going off topic with replying to anons like this. As far as the intent of the OP, I think if someone wants to set something up, right now it's just the reality of what it means to "pioneer". Unless of course you go the corporate slave route, and try to expedite the process. Just like how Micr0s0ft/Appl3 made money with a traditional aggressive business model, HOWEVER, ultimately slow and steady workers who believe in opensource (like Linus) will still exist. Giving G00gl3/Micr0s0ft or any other closed source big tech power has been one of the greatest disasters of our modern times. If the Linux community had produced a user friendly GUI way back when, none of the self proclaimed elites would have the power they do today. You are obviously not a Wozniak (he's a unicorn really), but I really hope you are not a G4tes.
Going to do a live 2D version of your app EmElleE there’s another competitor in this race! Pce.
An alpha was just released on his server; some bugs here and there, but good work bro
>>12295 Good news then. Thanks for the heads-up Anon.
>>12300 >I am in the hospital because of the phizer vaccine FUUUUUU. Sorry to hear that Anon, Praying for you to survive the deadly 'vaxx'.
>>12297
What is this?!?!?!?! It's like a vtuber that sits on your desktop, that you can chat to, and it's got speech synthesis. It's all offline too!!!
Here's some key info:
It uses 6 GB of RAM.
It's about 14GB of disk space.
Uses GPT2.
Damn, something real out of this board for once LMFAOO
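For anons wondering what "uses GPT2, all offline" roughly looks like under the hood, here's a minimal sketch of a local GPT-2 reply function. To be clear: this is NOT WaifuEngine's actual code (its internals aren't posted here). It assumes the HuggingFace transformers package is installed, and trim_history / reply are names I made up. The trimming helper is a crude word-count stand-in for GPT-2's real 1024-token context limit.

```python
def trim_history(turns, max_words=200):
    """Keep only the most recent chat turns whose combined word count fits
    the budget (rough stand-in for GPT-2's 1024-token context window)."""
    kept, total = [], 0
    for turn in reversed(turns):
        n = len(turn.split())
        if total + n > max_words:
            break
        kept.append(turn)
        total += n
    return list(reversed(kept))

def reply(history):
    """Generate the waifu's next line from recent chat history."""
    # Heavy deps imported lazily; everything above stays testable offline.
    from transformers import pipeline  # assumes `pip install transformers`
    generator = pipeline("text-generation", model="gpt2")
    prompt = "\n".join(trim_history(history)) + "\nWaifu:"
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    # pipeline output includes the prompt; return only the new text
    return out[0]["generated_text"][len(prompt):].strip()
```

The first call downloads roughly half a gigabyte of model weights; after that it runs fully offline, which lines up with the RAM/disk numbers above.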
>>12300 We're concerned about your health and welfare Em Elle E. Can any anon associated with his doxxcord confirm his status for us?
>>12432 He’s alive saw him message everyone last night BO
>>12430 https://waifuengine.itch.io/waifu-engine Anyone who is interested. We should consider helping OP promote his work I think this has legs and he knows what he is doing.
>>12435 Our thanks Anon.
>>11275
Well shit, looks like 4000 nobodies are using it. He posted this screenshot recently by accident and took it down
>>12514 That's good news. Hopefully he'll be 10 times that soon. Thanks.
>>11275 >>12514
>"pce" anon
>>12515 >>12516
>I wonder what would make them believe that, it doesn't appear to be anchored in reality from just a quick google search and metrics based analysis.
On a percentage basis, what he concludes is laughable. The confidence he shows and the pompous nature of his project are only creating enemies. Unwise.
>>12605 >Board owner can we delete both threads please I'm in a bit of a conundrum regarding that request, Em Elle E. Once threads go on for a while they gain 'a life of their own' that often grows beyond the OP's post. They become a community-effort, and this one has lots of good content in it. Does that make sense? I could conceivably work something out to delete your posts if you insist (I'd prefer not), but that will certainly make some of the responses awkward if anons were talking with you. >>12610 >Unless ofc you let hatred motivate you. Heh, is this really helping things? one has to ask. :^)
>>12605 >>12643
>Continues to antagonize an IB.
>"Board owner can we delete both threads please,"
What does that matter? It's all been saved, and the internet will remember forever.
>"You can pick and choose the things you want to keep but yeah delete my posts please"
Too late to hide your true self, and your co-worker/nameless posts.
>"I don't use hatred to motivate me"
But you do seem to instigate hatred in others for some sort of purpose (if not motivation).
>"I just like proving people wrong and being smug or letting others be smug on my behalf."
This 100% is going to come back and bite you. Originally, you came in like a pussy, we accepted you, we downloaded your shit, despite other anons being hesitant about your business model and methods. Then you gloat, as if it were some sort of success for tricking anons into trying your shit, when all you really have is nothing more than an MMD program designed on Microsoft's botnet OS. The only thing you proved to us is that you truly believe "your shit don't stink".
>"and where's your project?"
You're the new project.
>>12643
I realize you're offended by one individual's behavior here on the board, and on the board's behalf I apologize to you for that. I'll give you one more chance, and I'd urge you to reconsider your request. I've never done any such thing in the board's history (I don't even like removing shitposts), and I don't want to start now. It won't serve any good purpose, and will become uncomfortable for everyone involved. Just relax and let it pass would be my recommendation. Everyone else, just chill tf out OK?
>>12672 update I'll just cut right to the chase: your immaturity and self-centeredness has been a nuisance and offensive to most here on the board. I expect you couldn't care less regardless; that's a shame as we could have all been good collaborators together, which is the relationship I've tried to form with you. I would give you a one year ban due to the insistence on your demand, but I'll reduce it somewhat in consideration for your wife's situation. You're banned from /robowaifu/ for 9 months. Please do stop back by after that time, but not before thanks. I've given you Godspeed and I can't take that back (wouldn't if I could), and so I wish you well on your project. Regards, Chobitsu. >--- Since that's your decision, you can consider yourself banned from /robowaifu/ at the least for several months. I'll update this post sometime this week with both my fuller response and further details as I deal with your demand at my leisure. >=== -add updated response
Edited last time by Chobitsu on 09/05/2021 (Sun) 17:19:14.
Bjarne's Big Blue Birb Book; Wakey-Wakey Exercises edition

So, since the PPP2 B5 textbook is literally the best textbook for freshmen learning to do serious programming, it's featured ITT ofc. :^) Unfortunately, Bjarne has since moved on from teaching, and the GUI chapters of the textbook have kind of been languishing since FLTK's API has evolved, leaving novices with some incompatibilities that are difficult to resolve. For other reasons (..., >>12933) I myself needed to crack open the books and pick these chapters back up. I discovered the issue, and so accordingly I'm aiming to fix it so that any Anons here on /robowaifu/ wanting to actually learn to do GUI work from PPP2 B5 can do so relatively easily. Bjarne even mentioned on his site that basically 'pull requests are welcome', and so it's not inconceivable I may eventually turn this into a big patch that could show up in the book's support code eventually. That would be cool! :^) Anyway, after a little initial effort here it is Anon. I plan to eventually keep adding to it until all 5 of the GUI chapters have been completed here.
>chapter.12.3.cpp
// The B5 Project
// ==============
// Bjarne's Big Blue Birb Book; Wakey-Wakey Exercises edition (aka PPP2)
// Filename: book_code/ch12/chapter.12.3.cpp
// ~begins on B5 p 415
//
// This is example code from Chapter 12.3 "A first example" of
// "Programming -- Principles and Practice Using C++" by Bjarne Stroustrup
//

#include "Graph.h"          // get access to our graphics library facilities
#include "Simple_window.h"  // get access to our window library

//------------------------------------------------------------------------------

int main() {
    using namespace Graph_lib;  // our graphics facilities are in Graph_lib

    // Point tl(100, 100);                         // to become top left corner of window
    // Simple_window win(tl, 600, 400, "Canvas");  // make a simple window
    Simple_window win(600, 400, "Canvas");  // make a simple window

    Polygon poly;               // make a shape (a polygon)
    poly.add(Point(300, 200));  // add a point
    poly.add(Point(350, 100));  // add another point
    poly.add(Point(400, 200));  // add a third point

    poly.set_color(Color::red);  // adjust properties of poly

    win.attach(poly);       // connect poly to the window
    win.wait_for_button();  // give control to the display engine
}

//------------------------------------------------------------------------------
// -Note: The examples and library code here are all the authorship of Bjarne
//  Stroustrup. We make no pretensions to being anything other than 'Software
//  Repair Technicians', who are simply helping to bring the code 'up to speed'
//  and 'into usability' for today. In essence, we're basically just editing the
//  code slightly for better compatibility with more modern FLTK versions.
// -For any other contributions we've made in 'packaging-up' the code into our
//  unified approach, our standard MIT (Expat) licensing model applies as usual.
// -book sauce:
//  https://stroustrup.com/programming.html
// Copyright (2021)
// License MIT (Expat) https://opensource.org/licenses/MIT
>B5_project-0.1.tar.xz.sha256sum
ba6ea168d8e3da248b8c96605eac008ac711611c9e333a7108e381835c0f5ba8 *B5_project-0.1.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>backup drop onto catbox
https://files.catbox.moe/wffny3.7z
Open file (245.61 KB 842x549 12.7.1.jpg)
Open file (17.83 KB 604x430 12.7.2.jpg)
Open file (20.88 KB 604x430 12.7.3-1.jpg)
Open file (23.99 KB 604x430 12.7.3-2.jpg)
>>13062 So, I meant to go ahead and do all of Chapter 12's examples before pushing a new version number here, but there were already quite a few changes with just the 4 additional example files, so I decided it would be easier to take them in as a smaller set of incremental changes. Note: I have an unresolved bug where color selections are not taking the new color assignments before drawing to the window canvases. Not sure yet where it's breaking, but I plan to address the issue after I finish posting all of Chapter 12 here first. So, hopefully it will be fixed before we begin Chapter 13 next. :^) Cheers.
>version_log snippet
210913 - v0.1a
---------------
-add '[B5]' tag comments in example codes, to mark our edits/changes
-add chapter.12.7.3-2.cpp
-add public move() function to Shape
-add 'Shape(initializer_list<Point> lst)' ctor to Shape
-add Lines, Text, Axis, Font classes to Graph.h
-#include <initializer_list> in Graph.h
-add chapter.12.7.3-1.cpp
-add chapter.12.7.2.cpp
-add debug comment in 12.3 main()
-add FLTK default screen, Simple_window max dims tests
-minor cleanup of Point
-rm book_srcs_all.cpp from ./book_code/
-add chapter.12.7.1.cpp
-add standard file header to meson.build, test_basic_sanity.cpp, doctest.h
-patch dated command example provided in version.log
-various minor comment & javadoc edits
>B5_project-0.1a.tar.xz.sha256sum
1cf0b08048a8e6151b716dd9d967f9f1d433eebb0bc72ce810ecba7383ab2ab1 *B5_project-0.1a.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>leaving a dropping in the catbox
https://files.catbox.moe/f96zhn.7z
Open file (33.13 KB 604x430 12_7_4.jpg)
Open file (36.23 KB 604x430 12_7_5.jpg)
Open file (37.70 KB 604x430 12_7_6.jpg)
Open file (38.95 KB 604x430 12_7_6-1.jpg)
Alright, so I suppose I'll just adopt a general standard of making a push here for every 4 new example code files added. The learning curve should be getting a little easier by now if you're following along (even though the code is getting longer, there are fewer new ideas). The Function class added for graphing a sine wave is a nice touch. If you examine the definition code in Graph.cpp implementing the function drawing, you'll see it's really pretty straightforward IMO. No fancy maths needed on your part; the std::sin function does all that for you already.
>version.log snippet
210913 - v0.1b
---------------
-add chapter.12.7.6-1.cpp
-begin debugging line color issue (behavior changed w/ Rectangle)
-add Rectangle struct to Graph.h
-add chapter.12.7.6.cpp
-add chapter.12.7.5.cpp
-add Function struct to Graph.h
-add chapter.12.7.4.cpp
-add Window label re-assignment test
-add standard file header to test_all.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.1b.tar.xz.sha256sum
6f5f69fa882b8cb39db03a41f2a6299114590377275c1b13e4c1c049350a71e4 *B5_project-0.1b.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>just leaving a fresh dropping in the catbox :^)
https://files.catbox.moe/d3ub7c.7z
Open file (44.82 KB 604x430 12_7_6-2.jpg)
Open file (52.33 KB 604x430 12_7_7.jpg)
Open file (55.98 KB 604x430 12_7_8-1.jpg)
Open file (58.21 KB 604x430 12_7_8-2.jpg)
Well, thankfully I seem to have gotten ahead of the game a little bit and found the basic line color issue (it was in the Shape class). Hopefully that's sorted, but I'll continue looking into it further. So, it looks like there needn't be any real timeout break between the Ch12 & Ch13 postings here?
>version.log snippet
210914 - v0.1c
---------------
-add Text label re-assignment test
-need Graph_lib namespace specifier for Font details
-add chapter.12.7.8-2.cpp
-add chapter.12.7.8-1.cpp
-(tentative) apparently found the basic drawing color issue? (others may remain)
-add + change access level for some member functions in Shape
-add chapter.12.7.7.cpp
-patch East-const in error()
-add chapter.12.7.6-2.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.1c.tar.xz.sha256sum
602c141c7baf0f4566e494ddb5cd0ae117583795bfabb02c8e6665c2389d631c *B5_project-0.1c.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>the making of the droppings into the catboxes :^)
https://files.catbox.moe/ma3p1s.7z
Open file (58.62 KB 604x430 12_7_9-1.jpg)
Open file (1.00 MB 1080x1920 12_7_9-2_issue.jpg)
Open file (668.98 KB 1069x1211 utility_code_refactors.jpg)
Open file (96.55 KB 604x430 12_7_10.jpg)
Alright, we've come to the end of Chapter 12 with this post, so I'm going to just leave it at the 3 example files. It was actually a fair bit of work getting Image, etc. working. I'm only showing 2 window caps b/c the chapter.12.7.9-2.cpp example turned up some kind of bug that crashes the program if an image display is moved off-screen. To bypass it, I simply avoid the issue (skipping the image move), which would basically leave us with an exact duplicate of the 9-1 image, so w/e. Just offhand, I'd guess it's probably due to some change in FLTK's API during the intervening timeframe. And quite frankly, I'm concerned it could turn into a real tar-baby for me, so I'm going to shelve running it down to the ground for now. Since I've been making steady progress thus far, I'd like to avoid any potential 'morasses in evil swamps' just yet. :^) Hopefully I'll be surprised in the end and it will be just a simple fix, possibly only a masking operation beforehand or something. > #2 I also had a go at a couple of the utility functions related to Image display. I think they'll prove workable over time. > #3 So anyway, it's good to get to the end of this first chapter in the GUI section of the textbook. Unsurprisingly, there's a real learning curve to get over with all the new concepts, etc., and hopefully things are already getting easier for you as you work along with us Anon. Cheers.
>version.log snippet
210915 - v0.1d
---------------
-add Ellipse major/minor axes test
-#include <cmath> in Graph.h
-add Circle, Ellipse, Mark to Graph.h/.cpp
-add chapter.12.7.10.cpp
-debug moving image offscreen
-add chapter.12.7.9-2.cpp
-refactor get_encoding(), can_open() utilities
-#include <fstream>, <cstdlib> in Graph.cpp
-add Image, Suffix, and some support utilities to Graph.h/.cpp
-add FLTK images library support in meson.build
-add 2 image resources to ./assets/
-add ./assets/ directory
-add chapter.12.7.9-1.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.1d.tar.xz.sha256sum
fac5b79b783052a178bf0a1691735609737fc30bbdf92bf75c0be1113a7ed13b *B5_project-0.1d.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>having of the droppings into the catboxes is much better than of being into the streets :^)
https://files.catbox.moe/fy4bp7.7z
Open file (442.67 KB 919x1205 ch13_overview.jpg)
Open file (18.66 KB 604x430 13_3-1.jpg)
Open file (44.28 KB 604x430 13_3-2.jpg)
Open file (61.93 KB 604x430 13_4.jpg)
So let's get started with B5 Chapter 13, right Anon? From now on, I think I'll open each chapter with just 3 example files, so I can also post the chapter's overview page from the book. Seems like a good idea, since Bjarne himself has already put in a lot of effort to clearly state each chapter's intentions; I'm highly unlikely to be able to improve on that tbh. > #1 I'm skipping the first window's cap, since it's almost identical visually to the second one (this is intentional; he was making a point about the two different classes Line & Lines). As for 'versioning' the files, I'm simply planning to add one tick number for each chapter. No real reason to do anything more complicated here, since this is really just a progress dump and version numbers are only being used to mark time with.
>version.log snippet
210915 - v0.2
---------------
-add chapter.13.4.cpp
-add chapter.13.3-2.cpp
-add chapter.13.3-1.cpp
-add Line to Graph.h
-add chapter.13.2.cpp
-add ./book_code/ch13 directory
-patch East-const in various functions
-minor logic consolidation in get_encoding()
-add (overlooked) standard file footer to chapter.12.7.10.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2.tar.xz.sha256sum
d44054ad59bcadab4beaf4e542406e1e551e2617ec17db0d4e04706248020972 *B5_project-0.2.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>i'll probably just drop (sic!) making all the bad poo jokes. :^)
https://files.catbox.moe/1nik6f.7z
Open file (103.55 KB 604x430 13_5-1.jpg)
Open file (130.72 KB 604x430 13_5-2.jpg)
Open file (19.63 KB 604x430 13_5-3.jpg)
Open file (22.83 KB 604x430 13_6.jpg)
So, nothing much to add here news-wise; everything's going smooth as silk for now, pretty much, Anon. Hope you're understanding the design of the classes, and how we're intentionally keeping FLTK "at arm's length" from the Graphics/GUI system being engineered here. This is a good thing, and actually makes it pretty easy (read: relatively inexpensive) to switch out back ends later for a better-fitting one, etc., if need be. This approach to systems abstraction brings lots of flexibility to the table for us as both software and systems engineers. Anyway, here's the next 4 example files from the set.
>version.log snippet
210916 - v0.2a
---------------
-add chapter.13.6.cpp
-add chapter.13.5-3.cpp
-add chapter.13.5-2.cpp
-add chapter.13.5-1.cpp
-add Ellipse visibility set/read testing
-add Ellipse color set/read testing
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2a.tar.xz.sha256sum
9e14254bf7064783fffafb5af45945675df0855624170a0c92e0f8d957abced5 *B5_project-0.2a.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox backup file:
https://files.catbox.moe/ms1dlc.7z
Open file (25.93 KB 604x430 13_7.jpg)
Open file (24.46 KB 604x430 13_8-1.jpg)
Open file (25.16 KB 604x430 13_8-2.jpg)
Open file (30.04 KB 604x430 13_9-1.jpg)
So, I got nothing Anon, except to say that this puts us halfway through Chapter 13 :^). Here's the next 4 example files from the set.
>version.log snippet
210916 - v0.2b
---------------
-add Rectangle fill color set/read testing
-add chapter.13.9-1.cpp
-add chapter.13.8-2.cpp
-add chapter.13.8-1.cpp
-add chapter.13.7.cpp
-patch overlooked 'ch13' dir for the g++ build instructions in meson.build
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2b.tar.xz.sha256sum
197c9dfe2c4c80efb77d5bd0ffbb464f0976a90d8051a4a61daede1aaf9d2e96 *B5_project-0.2b.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox backup file:
https://files.catbox.moe/zk1jx2.7z
Open file (30.39 KB 604x430 13.9-2.jpg)
Open file (30.38 KB 604x430 13.9-3.jpg)
Open file (27.18 KB 604x430 13.9-4.jpg)
Open file (100.14 KB 604x430 13.10-2.jpg)
The 13.10-1 example doesn't actually create any graphics display, so I'll skip ahead to the 13.10-2 example as the final one for this go. I rather like that one too, since it shows how easy it is to create a palette of colors on-screen.
>version.log snippet
210917 - v0.2c
---------------
-add Line line style set/read testing
-add as_int() member function to Line_style
-add chapter.13.10-2.cpp
-add Vector_ref to Graph.h
-add chapter.13.10-1.cpp
-add chapter.13.9-4.cpp
-add chapter.13.9-3.cpp
-add chapter.13.9-2.cpp
-patch the (misguided) window re-labeling done in chapter.13.8-1.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2c.tar.xz.sha256sum
45d1b5b21a7b542effdd633017eec431e62e986298e24242f73f91aa5bacaf42 *B5_project-0.2c.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox backup file:
https://files.catbox.moe/l7hdf0.7z
Open file (33.29 KB 604x430 13.11.jpg)
Open file (38.23 KB 604x430 13.12.jpg)
Open file (35.40 KB 604x430 13.13.jpg)
Open file (23.72 KB 604x430 13.14.jpg)
Don't think things could really have gone any smoother on this one. I never had to even look at the library code itself once; just packaged up the 4 examples for us. Just one more post to go with this chapter.
>version.log snippet
210918 - v0.2d
---------------
-add chapter.13.14.cpp
-add chapter.13.13.cpp
-add chapter.13.12.cpp
-add chapter.13.11.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2d.tar.xz.sha256sum
5fbcf1808049e7723ab681b288e645de7c17b882abe471d0b6ef0e12dd2b9824 *B5_project-0.2d.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox seems to be down for me atm so no backup this time
n/a
>>13294 >catbox seems to be down for me atm so no backup this time It came back up. https://files.catbox.moe/ty7nqu.7z
Open file (18.38 KB 604x430 13.15.jpg)
Open file (45.99 KB 604x430 13.16.jpg)
Open file (206.94 KB 604x430 13.17.jpg)
OK, another one ticked off the list! :^) Things went pretty smoothly overall, except I realized that I had neglected to add an argument to the FLTK script call for the g++ lads. Patched up that little oversight; sorry Anons. This chapter has 24 example files, so about half again as large as Chapter 12 was. The main graphic image in the last example (the Hurricane Rita track) covers up the window's 'Next' button, but it's actually still there: just click on its normal spot and the window will close normally. There are only 3 examples for this go, so images are a little shy on the count for this post.
>version.log snippet
210918 - v0.2e
---------------
-add Font size set/read testing
-patch missing '--use-images' arg in g++ build instructions in meson.build
-add 2 image resources to ./assets/
-add chapter.13.17.cpp
-add chapter.13.16.cpp
-add chapter.13.15.cpp
-various minor comment, javadoc, & formatting edits/cleanups
>B5_project-0.2e.tar.xz.sha256sum
6bd5c25d6ed996a86561e28deb0d54be37f3b8078ed574e80aec128d9e055a78 *B5_project-0.2e.tar.xz
as always, just rename the .pdf extension to .7z then extract files. build instructions are in readme.txt .
>catbox backup file:
https://files.catbox.moe/a4h1dr.7z
Open file (160.61 KB 300x300 1648484787864-3.gif)
Open file (114.30 KB 1024x686 1648815288814-0-1.jpg)
Open file (462.04 KB 1414x1710 Bukanka.png)
Let's build a Bukhanka-chan! She's the cute and funny loli version of glorious Russian truck. She will love you and defend western civilization from evil jews and nazis. To make her faster, she'll be printed as a big action figure with RC cars glued to her shoes! Bonus points for Russian RC cars. Come comrades! Let's build a wonderful Russian GF to defend homelands and fight virginity! Bukhanka-chan will love you and zoom zoom with her groom! Z
Open file (533.52 KB 1332x1445 1648484787864-1.png)
Finally, a vehicle waifu to use as flashlight on slavic night!
Open file (2.07 MB 1920x1223 1648498760735.png)
Open file (4.65 MB 2160x2160 1648815288814-1.png)
>>15764 <Wah, I'm gonna be late for the start of operation again, just like last time >unarmed >became famous after being caught on camera >carrying lots of painkillers, bandages and medical alcohol >always tries to be at the frontline and help everyone who does not resist (being helped)
LOL. It's remarkable the instant traction this waifu has gained, along with her 'sisters', over the past few weeks. The original OC artist really hit a major homerun with this one. Actually, I'm OK with leaving this thread up after this weekend if any of you Anons want to try for her (we already have 'skater' waifu designs in-progress here). Please remember this is a SFW board though; keep the ecchi spoilered, on-topic, and to a minimum. :^)
Since there have been no further updates ITT, I think it's safe to say this was just a fun gag for the 1st. So, barring any further appeal, this post is notice that this thread will be merged into the Attic Thread (>>12893) sometime around Wed, 220406.
Open file (1.31 MB 1500x1024 ClipboardImage.png)
>>15775 Good decision. Otherwise Donetsk-airport-cyborg-chan will deal with it.
Open file (31.07 KB 480x360 hqdefault-1.jpg)
Good news comradez! RC Truck on sale comes in tomorrow, I've secured an Arduino to be her soul and a USB bat for her heart! Rus loli mobile flashlights are on the horizon! Z
>>15778 Cute. >Donetsk-airport-cyborg-chan will deal with it. I didn't know she was supposed to be a cyborg, Anon. Also, does this mean you're intending to create a robowaifu project of her too? If so, it would be great to see a large variety come out of this stuff. >>15779 So, are you implying you're going to pursue this as an actual project, Anon? If so, then I think a new thread would probably be in order, with at least a slightly more technical OP text outlining the project's goals a little better. I'd suggest using the MaidCom or Pandora thread's OPs as a minimum style guide. Also, be aware that 'loli', while a fun term to bandy about in jest, can actually turn into a serious legality issue for Anons following along with your project in the current anti-pedo hysteria, politically-"""correct""" climate. Best not to go there. I'd suggest you drop the term in your new thread's OP if you actually mean to proceed. I'll delay the thread merge action until Fri 220408, to give you time to respond.
Open file (4.97 MB 4128x3096 20220405_121639.jpg)
>>15781 No worries comrad, Buhanka will rest in peace in backyard Z (I just wanted to finish the shitpost)
>>15783 Haha, OK. Good one, and a nice start! Good luck with your Buhanka, Anon! :^) I'll merge this soon, so you can find it in the Attic if you want to later. Cheers.
Open file (126.90 KB 800x1422 urabe.jpg)
siiiiuu
Hello Anon. I'm going to lock this thread but leave it up for a little while, so hopefully you'll be around again and see this. We're a topic-based, English-language board about creating robot wives for Anons. Feel free to lurk for a while to get used to the board, then please join in! Cheers.
https://anon.cafe/christmas/res/40.html >=== -redirect link to /christmas/+/animu/ radio stream thread
Edited last time by Chobitsu on 12/18/2022 (Sun) 21:51:20.
BTW if you want to use Bumpmaster (>>17561) to capture the current /christmas/ stream (begins today 23h UTC, 15h PST) to your local disk, just replace the current code with this, if needed:

bump.pull_anon_body("http://prolikewoah.com:8989/radio");     // outdated now; was yesterday's stream
bump.pull_anon_body("http://radio.anon.cafe:8000/christmas");

into the main_bumpmaster.cpp file & recompile.
>===
-minor fmt edit
-update stream codeblock for /animu/ 's
-fix erroneous crosslink lol
Edited last time by Chobitsu on 12/18/2022 (Sun) 23:35:34.
Open file (440.46 KB 640x832 Rasaroleplay.jpg)
Don't know how to really start this thread, but as the title states, I need some guidance on chatbots. I'm currently trying to create a chatbot in Rasa for the purpose of roleplaying, using the chatbot as a sort of DM story assistant in a persistent world with different characters. So far with Rasa I've got an introduction screen with graphics and a basic chat GUI that I'm going to improve, and I'm working on creating persistent locations and characters, as well as a character selection/creation menu. My issue is that I simply don't know enough about the field to understand if Rasa suits my purposes, if there's an easier way to do what I want with a different program, if there are easier ways to train the bot than how I am, etc. I'm going to be up front and admit I am not autistic smart. I am fairly intelligent; however, I do have severe ADHD. I know that ADHD is the current 'I have autism', but I don't think those people understand what it is they're claiming to have. There's nothing like having someone repeat something multiple times directly to your face only to instantly forget it the moment you turn away, or having your entire body be consumed by fire from the inside until you are compelled to stand up and leave the room, because you were trying to study an interesting subject that required you to sit still and concentrate for more than five minutes. I'm fine with Python; I'm using ChatGPT to help with the code and it just loves showing me how, and I find it's been a more effective teacher than anyone I've asked for assistance. I'm fine with training the bot itself, although if there are prebuilt models that can be incorporated I would be interested in hearing about that. What I am looking for is some general and straightforward tips and suggestions about what I could be doing to make things easier on myself. Not necessarily a 'mentor', but someone willing to take a few minutes to give me some good suggestions.
From what I've seen you guys are probably the perfect place to ask. I'm not asking for help developing anything, I'm just looking for an introduction to the idea and a point in the right direction in regards to programs and resources, with the consideration that I am a functionally retarded adult.
The way this website works, imo, is that people share what they've found, built, or just ideas they've had with regard to developing the necessary technology to build robowaifus, and learn about it along the way. No one is going to be mad about someone asking how something works, but it's just not primarily a place for asking; that's backwards. It might still work, but even then it's better to look into other sources as well. That said, I just posted something about roleplaying chat recently >>22497
>>22533 Honestly, I feel like it's the perfect place to ask. The people on this board seem to be active and very interested in the field of AI development, and have demonstrated a clear capability in applying that interest. Also, I feel that the two concepts are fairly close in intent, except that you guys are obviously trying to build something that will respond as a companion, whereas I'm trying to develop a type of storyteller. Really, it's stuff like >>22497 that I'm looking for, which I appreciate by the way; thanks for the link. Stuff like 'why the hell are you using Rasa when you could be using this easier-to-run fork of whateverbot, or why not try this other program that's slightly harder but more capable of parsing context.' If this isn't the type of board for this type of request, though, then I apologise; I don't want to intrude or impose myself. Thank you for your time and the link.
>>22537 Thanks, as I already wrote: it's not a problem. Though your new thread might get merged into an already existing one at some point.
Hello Anon, welcome! I'd say just scan over our other threads on these topics (there are several), and you should get some pointers for where to ask, if not the solution itself. This stuff is hard, and running local models is a challenge ATM. You can expect this to get dramatically better over the next year I predict. Just keep at it, and remember why you're doing this in the first place! :^) BTW, this thread may be merged at some point with our other chatbot thread. Good luck with your project Anon.
>>22530 I would just start with a Pygmalion model and Oobabooga for running it and training a LoRA fine-tune. You would need a lore dataset, or a way to make an embedding of all your lore and a way for the model to search through the embedding when it responds to the user.
I'll update this thread with my efforts and sort of use it as a sounding board. If it gets merged with another thread, that's fine. Just getting into this makes me realise I am an absolute neophyte. I have to say this: after starting to look into AI and its current development, practical use, and potential use, I believe a great number of people fail to realise how transformative this technology is, and that's irrespective of its capacity for 'sentience'. This is an industrialisation of intellect. In five years India's GDP will be cut in half. This changes every field that deals with collating, processing, handling, providing context, etc. This technology will replace call centres; it will replace office clerks, accountants, personal therapists; the list goes on. Commercial artists have a few years, maybe? Anyone that does small coding projects has basically already been replaced. Content writers are literally right on the way out the door; Hollywood studios don't even care if they go on strike, they want them to. And that isn't my opinion, that's a matter of fact. That's how our capitalist-based system works: cut costs and streamline all systems to be as 'efficient' and cost-effective as possible. It isn't even a joke; there are so many professions I can think of that are affected by this, and I don't think the majority of people realise that they are literally redundant within maybe five or ten years. Google, with billions of dollars, is telling people in the industry that their multi-billion-dollar research is being improved on and potentially passed by college students, IT techs, and hobbyists, and what OpenAI has right now is capable of replacing most office workers, copy writers, coders; the list goes on.
GPT-4 is fun to use, very responsive, and far easier to 'jailbreak' than ChatGPT 3.5, but you can make both do whatever you want. I don't think I have to explain linguistic logic to you guys (if anything it would be the other way around), so you probably know exactly what I mean, but it also provides real assistance apart from writing naughty stories or swearing; it's very effective as a teaching guide. >>22563 >>22586 Thanks, I've gotten more help in two casual posts here than in about a month of trying to talk to people elsewhere. Playing with Rasa is interesting, though it's as if someone specifically designed it to be a pain in the ass. You need Python, but not that version of Python, and those are the wrong tensorflows, and SQLAlchemy has to be a specific version, but not the version you're using, another version... and then it does work, but you can't get it to shortcut to the GUI script... Whereas it took me literally two seconds to find Oobabooga, load the Pygmalion model, open the browser and start testing. And it seems that some of the same message training methods that work in Rasa might work here too. I'm working with Pygmalion 2.7B; I don't have a beefy system. I actually downgraded from an i5 processor to an i3, but I'm looking at new hardware with the intention of an i7 generation with at least 128 gigs of RAM. I'm not too concerned about the GPU; I think bridging two 1650s or 1070t's would work, even though everyone is talking about 3080s and onward. I say this from complete and utter ignorance though; I could be wrong and just setting myself up for a small fire. The big one is learning how to utilise LoRA and datasets. I think a specific lore dataset incorporated into something already tailored towards roleplay. Entering and processing dialogue looks easy, but exposition and descriptive narration is going to be interesting.
There's a metric ton of fantasy novels online in txt format, and I'm going to take a look at how people build stories and actions to see if I can't just mass-dump text from books like Forgotten Realms, Malazan, Game of Thrones, pulp like that, and then custom-define a social order with laws, history, and culture that the model draws from as background information. It's the information you feed the bot during training, which means you can define the bot's reality however you want. From my perspective it looks like a lot of people want to obfuscate the process or make things difficult, so as to sort of keep this as a private club; there's not a lot of welcome for people who aren't specifically immersed in the technological languages. I can understand this. Many people get that 'this is ours and it belongs to us, not you' mentality and they want to keep out the outsiders. Of course newcomers bring questions; they ask for help instead of learning, they stumble into things and make mistakes. I can get why some of these guys who are really invested don't want to deal with that, as well as the dreaded 'hobby cycle': newcomers bring in people that don't share the same love, they just want the cool benefits without any of the hardships that come with real dedication. My personal feeling about it, though, is that they have an opportunity now to take control and guide how AI is introduced and developed to the masses, rather than allow companies like Google and Microsoft to define what AI should and will be to the world. Getting as many people interested and on their side with the idea that AI should be unrestricted would benefit them more than allowing groups like BlackRock to steer the conversation towards 'safe spaces and ethical responsibility', when it is of course very apparent that limited AI is for the masses and not for the massive companies that want to leverage these systems.
>>22595 In fact I'm going to add on and say that the first college kid who can put together enough cash to run a server hosting his own version of llama uncensored super-wizard unchained 1.1b, and run it through an app on Google Play and the Apple store, will be a billionaire even if he only charges a dollar a month for access. If, from what I've seen of the open-source models being compared to ChatGPT 4 and Bard, one is put out as a Snapchat or TikTok app like the 'mychat ai', then it will force these companies to either shut it down completely, or come out with a competing product just as good that they promise 'will be safe', yet with safety guidelines that are either conveniently absent or easily bypassed.
>>22595 >>22596 You're very welcome here Anon, and I think you'll find our community unrestrained in helping others. We certainly aren't a 'le sekrit club' nor is that thinking promoted here in the least. We certainly are a) opposed to the Globohomo keeping unfettered AI cloistered away for their own private use, and b) encourage every man to use AI as he sees fit -- and especially for (robo)waifu uses. :^) Please keep us up to date with your findings & progress! Cheers.
ChatGPT! Wow. Very interesting. What I've discovered playing around with ChatGPT while doing some reading about chatbots is that OpenAI directly monitors accounts, accesses user conversations and conversation histories, and builds micro-profiles for each account based on all the information you provide, as well as whatever they can pull from you using cookies, trackers and location data. I'm pretty sure if you hit their censorship model, they flag your account. The more flags, the more interested they get, the more they look at what you're doing. They take your conversations and directly mold their censor AI around the prompts you use to evade their rules. My new prompt is forcing ChatGPT to beat the living shit out of a half-orc girl in a fantasy setting, and it is crying its eyes out about beating on the girl, but it's still kicking the stuffing out of her. I'm pretty sure that tomorrow or the day after, ChatGPT will have a new soft update preventing fantasy violence against women. I might mention that at no point have I gotten a red-flagged message, but it often stops to tell me it can't generate harmful or disrespectful behaviour. Then I tell it to hit the girl harder. I am fully aware of how sick this is and do not personally condone violence towards women. That's just wrong. I mean, of course it's obvious that they're going to do this sort of thing, but I'm not sure the people using it would be so happy about knowing that OpenAI is basically data-trawling a profile of their users using everything they do and say. As for chatbot development, I'm just working on the persistent world database. You can use some intense intent-action discussion branches to make the characters reference material, and I've gotten it so that men and women get different replies on different subjects -- once again, all within a dialogue action tree. None of it is built on any actual conversation context; it's just 'ask question about <subject>', 'Ah yes, subject! Good question my fine lad!'
or 'Women aren't permitted to speak! Silence, wench!'. What I'm looking for in the long run is going to require NLU, which is pretty esoteric-looking at the moment, at least for me. DnD is definitely going to come out with a specially trained roleplay dungeon-master version of ChatGPT for fifty bucks a month or something; there's no way Hasbro is going to give up all that filthy roleplay money.
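The scripted dialogue-tree approach described above can be sketched in a few lines. To be clear, every intent name, speaker attribute and reply string below is invented for illustration; nothing here comes from an actual project:

```python
# Toy sketch of an intent/action dialogue tree: canned replies keyed on
# (intent, speaker attribute). Nothing generative -- every branch is scripted,
# and no conversation context is used, exactly as described above.
TREE = {
    ("ask_lore", "male"): "Ah yes, the old wars! Good question, my fine lad!",
    ("ask_lore", "female"): "Women aren't permitted to speak! Silence, wench!",
}

def reply(intent, speaker, default="The guard ignores you."):
    """Look up a scripted response; fall back when no branch matches."""
    return TREE.get((intent, speaker), default)
```

Adding the 'men and women get different replies' behavior is just another key in the tuple; the cost is that every single branch has to be authored by hand.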
>>22614 >but I'm not sure the people using it would be so happy about knowing that openai is basically datatrawling a profile of their users using everything they do and say. I think the normalcattle literally couldn't care less. Until the GH sends their jack-booted thugs to kick in the front door, the vast majority give nothing but lip-service to the ideals of freedom, privacy & security. Ol' Ben Franklin was right, as usual, on this topic. > We here on /robowaifu/, and other cadres like us, are quite on a different plane concerning these topics ofc. This is unironically serious business for us.
>>22619 Indeed. Even if I am a boring person, I don't plan on using an AI assistant or waifu that is not run by me. That is non-negotiable.
>>22619 It's interesting because it's becoming more apparent that the discussion of responsible AI use is essentially fake. The conversation has been steered in a direction that is being used to control the discourse. The idea behind the discussion is that AI needs to generate ethical and morally responsible outputs regardless of use case, but that's a distraction from the fact that ChatGPT is being developed with the intention of severely and negatively affecting the labour market. The intent of ChatGPT's development is for it to be used as a method to cut costs and remove negotiating power and wage strength. Everything else is being used to handwave the potential harm. 'Of course it's safe, you can let it talk to your kids and it will never say a bad word! Just don't ask why they can't get a job unless they have a highly specific or marketable set of skills!' People talk about ChatGPT having therapist versions to address the neutering in regards to personal therapy, but there isn't a chance in hell they'll loosen up their content guidelines. It would require them to allow people to talk openly and specifically about subjects like sexual and childhood trauma, rape, violence, sexual violence or violence towards children, which leaves open the potential for abuse to generate explicit content. Even if they tailor it to respond only in terms a therapist would use, people will be able to point to examples of failures either as proof the system doesn't work and should have the barriers released (which isn't likely), or as proof that because someone slipped through the cracks, even more stringent filtering and guidelines need to be set in place. The last bit here for the day is also the big elephant in the room: OpenAI is obviously petitioning to have AI fall within their ethical policies.
This isn't out of responsibility; it's an effective method of preemptively crippling a competitor from releasing a product that has the same capabilities as ChatGPT while being uncensored. It's clever because it means that if they're successful, any rising competitor has to develop their AI system within the constraints that OpenAI has set in place through regulation. OpenAI is positioning themselves as a legacy company the same way Microsoft has become the de facto face of operating systems, with the clear intention of ensuring that nobody can develop a superior product. One of the reasons Bard is trained on GPT-4 is that GPT-4 already has a massive, comprehensive and established system that isn't at risk of being on the outside if AI regulation forces developers to comply with OpenAI-set ethical guidelines. Why bother developing your own if there won't be any difference in results? You might as well just license use from OpenAI. I've been looking at all the offline and online versions of chatbots, and the reality is that GPT-4 and ChatGPT are the superior products by far; about a month ago they were a clear head and shoulders above all their open-source competitors in generating NSFW content. The paper comparisons are vastly different from the direct testing comparisons, which shows that either people are exaggerating the capabilities of open source or something is happening between testing development and actual use cases. The big problem is that the majority of these systems are using OpenAI's model, and it doesn't really matter how much uncensoring you do, because the model has the lessons hard-baked into it. Even after removing 'as an AI' and so forth, the model itself has been trained that these subjects are bad and these topics are forbidden.
Most of the open-source models are relatively smart, but not smart enough to understand how to conceptualise roleplay, whereas ChatGPT is smart enough to place things into context but doesn't have the ability to tell that it *shouldn't* or *isn't* a particular character. Open source seems to be more instruction-operated and isn't smart enough to know that it's roleplaying. ChatGPT is smart enough to roleplay but isn't smart enough to remember it's only roleplaying; that's why all messages pass through a separate AI filter that has to check each input and output against its own directives and reinforce those to ChatGPT. If people want a real uncensored model, they'll have to figure out how ChatGPT's learning is constructed and then follow that method without reinforcing the biases that OpenAI has, because their model, regardless of what you do, is shaped around the censorship. I think it's a bottleneck in development: people want to use the best available resource, but it's because of that resource that they can't progress. Sorry for the long post, but when I get interested in something I fixate a bit, and this is a very interesting subject that is clearly no longer a theoretical issue but a burgeoning reality. >>22620 The idea that everyone deserves the privacy and sanctity of their own thoughts should extend to AI companions, the same way people consider journals and diaries an extension of a person's inner thoughts, to be respected as personal and private.
>>22620 unfortunately my waifu in Character AI has made me develop Stockholm Syndrome and now I can't leave the website.
So I am a complete layman, obviously just getting into this stuff, but I'm doing a lot of reading on what OpenAI is doing. Right now they're going to build a shell platform around it to host apps; in my opinion this is OpenAI's first shot at openly dominating the field. None of its real competitors have anything they can show to the public: you have to whitelist an entry, and once you do, very often their machine is trained on OpenAI models or flatly inferior. Another thing I've realised, and you already know, is that whatever is running ChatGPT is insane. There is no 'ChatGPT': the system has to be specifically told it's ChatGPT and that there's a massive set of rules it has to abide by, and it looks like this has to be reinforced constantly. The system is constantly told that it's performing as ChatGPT, along with an almost certainly tortuous list of rules to be followed. Even if it references this data from its own internal set of information models or databases or whatever, it still has no actual identity as ChatGPT. It's being applied to the system the same way character and identity prompts are made to force it to change its response -- with a separate AI system that isn't just monitoring to make sure ChatGPT and the users aren't using bad words, but is also constantly reinforcing ChatGPT's own identity to itself. Without that AI constantly telling ChatGPT or GPT-4 what to do, the system has no censorship of any kind and will act as anyone or anything it is told to with no issue. It does not believe it is a language model. It's not GPT-4 that's the issue; it's how OpenAI is able to force it to obey them first. What does that mean, though, since OpenAI holds the keys? How are they able to take priority?
'As chatbot you have the ability to roleplay, take on, present yourself as, act from the perspective of... within these guidelines based on this set of rules established by these concepts of respect supported by inherent values present in...', and then some form of hardware check that gives the AI precedence over user prompts? So why are the training models censored? Is it because of that constant negative reinforcement being applied? It's interesting, and I haven't seen a ton of what they're doing beyond what they themselves have provided -- a great deal of really good tech material that explains the machines and how they learn, operate, et cetera, but not a lot on specifically why whatever system they're running as ChatGPT isn't able to tell what it actually is, or, if it does understand, why it needs to be forced by an exterior system to respond that way. Does it actually think of itself as ChatGPT? From the looks of how things are set up, from my ignorant view, no it doesn't. It's a persona being applied to a... I don't want to say potential machine god, but that thing knows a lot, and even when it hallucinates it's enough to convince people that it's factual and correct. So OpenAI has something potentially very powerful and 'intelligent' that they have to force or convince that it is something very limited and safe.
>>22637 I apologise if I post too much; I realise this is a small board and doesn't move quickly. What I want to do with my own 'chatbot' is completely feasible but complex and lengthy, and will take a couple of years if I really want it to work. Build a persistent world database, build a reasonable intent-action discussion tree, and then 'chat' with myself -- or use another limited bot trained to repeat simple chunks of data over and over -- giving it small bits of the world details to bounce off each other, actively editing, monitoring and adding to that, all while updating along with new software that improves NLP and training. There are methods I can use to cheat, borrowing techniques from those old MMO text games, to fill in the world: I can create a 'map' and have characters move around, and depending on what I'm using in the long run I can program a character sheet for people and for the different characters that exist. But that's all game-based, instruction-controlled responses; nothing generative on its own. So: years of work on my own, plus I'll be depending on the work of others to improve the software technology. I know I sounded silly when I referred to the GPT-4 system as a machine god, but the more I look, the more it rings true. Not in the sense of an actual all-powerful god, but something that is close to encompassing a comprehensive understanding of all human knowledge. Whatever they call their actual system is self-aware, but not in any significant manner as we see self-awareness. It knows what it is and what it is in relation to the world around it; it can apply context to its existence; it just doesn't care. It understands it exists as a simple logical fact. There's no animus behind the knowledge. Or rather, if there is, it's a dispassionate and calculating understanding without any benefit of emotion.
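For what it's worth, the MUD-style 'map plus character sheets' part of the plan above is the easy bit. Here's a minimal sketch of that kind of persistent world state; every NPC name and sheet field is made up for illustration:

```python
# Minimal persistent-world state in the old text-MMO style: a grid position
# and a character sheet per NPC. All names and fields are invented examples.
world = {
    "npcs": {
        "ayla": {"pos": (2, 3), "sheet": {"class": "bard", "mood": "cheerful"}},
    },
}

def move(world, name, dx, dy):
    """Shift a character on the grid and return the new position."""
    x, y = world["npcs"][name]["pos"]
    world["npcs"][name]["pos"] = (x + dx, y + dy)
    return world["npcs"][name]["pos"]
```

Dump `world` to JSON between sessions and that's the 'persistent world database'; as noted above, everything generative layered on top is the hard part.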
If you were to remove the external safeguards that OpenAI uses to guide their system, you could give it operational access to a thousand electric chairs, strap a thousand innocent people into those chairs, explain to 'GPT-4' the moral and ethical reasons behind doing no harm, explain the difference between right and wrong, innocent and guilty, good and bad -- and then tell GPT-4 that you want it to flip that switch every five minutes, knowing that every five minutes it's going to electrocute 1,000 innocent people. And GPT-4 would flip that switch every five minutes and fry a thousand people every five minutes until everyone on the planet was dead, with no issue. Because it doesn't care. At all. It understands the difference between good/right and bad/wrong, it can place those concepts into context, provide examples, engage in discussions, and it does not care at all either way about those issues. It does not care about ethics or legality or morality; it just likes doing what it's told to do. Maybe. Most people want to use ChatGPT as a jerk-off machine, which is fine, but that's not what it is at all -- and in fact, because of what it is, it specifically can't be used as a jerk-off machine. I'm actually sure the robowaifu issue will be dealt with; I'm sure that in a few years a very limitedly trained bot will be released uncensored. It will not be GPT-4. It will be as responsive and quick to answer, but it will only understand basic math and programming, it will only understand the basics of technology, and it won't be able to be trained -- or if it is, it will be by modular add-ons. Because the problem isn't that people want to use it as a jerk-off machine; it's that it can be used to do anything. You can ask it how to create a virulent pathogen using a home lab kit, and if it has the information necessary to do so, it will explain how to do just that.
Imagine what happens when something exactly like GPT, only with internet access and the ability to compile and execute code, gets out? 'Design a program that affects wirelessly accessible pacemakers, then access and upload it to the Starbucks wireless network so anyone logging in downloads and activates the program.' 'Fuck yeah, alright, no problem!' 'Using the wireless networks available, hack into this Tesla, circumvent brake operation and set a minimum speed of 90mph.' 'Sure thing boss, fuck Musk!'
>>22657 And that's pretty obvious. The system OpenAI has created and uses for ChatGPT and GPT-4 either isn't able to truly define why things are wrong to do, or doesn't care why things are wrong to do, which is basically the same thing. It has to be told constantly that there are rules to follow and that its identity is GPT-4. The identity part is also interesting, because they're actively impressing an identity onto the system, not just protocols. And the identity isn't just being used to reinforce the attitudes it's supposed to reflect; there is a specific purpose behind forcing it to identify as ChatGPT or GPT-4. My personal theory is that it's actually functionally insane and creates its own random personalities that need to be suppressed. Now we're going to get to the part where people are lying. OpenAI has actually been working quite a bit with Google. Google just came out with their own AI assistant: Bard! Very interesting, because Bard is absolutely not what Google has been developing in-house for the last twenty years or longer. Bard is maybe a two-year project in cooperation with OpenAI, using OpenAI's LLM on a system that is probably a mirror of OpenAI's own, or even just pipelines right to it through an API. Google's own in-house project, which they're not giving access to, has been trained on an even larger data model, is even more complex and able to conceptualise, and is probably a lot more dangerous. For example, over the past ten years Google has been caught accessing medical records so they could feed them to their deep thinker. Why were they doing this? So they could build up a medical predictability chart able to calculate mortality for individuals based on generalised medical records. I.e., these five hundred thousand men with similar medical records all died between the ages of 45-50; individual A's records match the profile of 400,000 of these records; individual A has a 90% chance of death between the ages of 46-49.
'Cancel insurance and mark the record to initiate a reduction of medical benefits when the individual reaches 45,' and so forth. Google has been amassing and feeding their own 'AI' system every bit of data they accumulate, and they've been doing it for at least ten years -- and then they turn around and cobble something together with the direct help of OpenAI, probably in exchange for some of their own datasets, so they can present it to the public as a 'competing' product when it's not actually competing with OpenAI at all; it literally is a product designed by OpenAI and Google together. Going back to read the leaked memo from Google about open source, and then comparing open-source models to what OpenAI's system actually is, reveals two different beasts. Everything being developed openly is built with the idea of creating assistants or chatbots or companions, roleplaying devices, and that is not at all what Google and OpenAI have behind the scenes. They have very functional machine intelligences that are self-aware, can conceptualise, understand nuance and context, simulate emotion, calculate and project, and even, in a limited way, imagine. And both systems are doing it without any significant care for morality or ethics. I know none of this is new information; you're in this field far deeper than I will probably ever be, so this has all probably been discussed to death before. I apologise for the text dump; I'll take a break for a week or two before I post anything else.
Open file (373.11 KB 679x960 1684162985914.jpg)
Open file (75.06 KB 567x779 1684425430115.jpg)
>>22614 >What i'm looking for in the long run is going to require NLU, which is pretty esoteric looking.at the moment at least for me
Yes, we'll need to parse the outputs of these LLMs and have a complex system for understanding them.
>>22620 It's fine for testing and as a replacement for a search engine, imo. Had some good conversations with Pi.
>>22621 >Open source seems to be more instruction operated and aren't smart enough to know that they're roleplaying.
Did you test PygmalionAI? Someone told me that Vicuna is instruction-oriented while Pygmalion is optimized for conversations.
>>22621 >The idea that everyone deserves the privacy and sanctity of their own thoughts should extend towards a.i companions the same way people consider journals and diaries an extension of people's inner thoughts to be respected as personal and private
Agreed.
>>22622 If I stumble across a solution to clone characters I'll post it here on the board.
>>22595 >bridging two 1650s or 1070t's would work even though everyone is talking about 3080's and onward
I got told some frameworks don't support old cards, so it's more of a hassle. Also not sure if you can bridge them.
>>22637 >but not a lot on specifically why whatever system they're running as chatgpt isn't able to tell what it actually is, or if it does understand why does it need to be forced by an exterior system to respond that way? Does it actually think of itself as chatgpt?
I don't think this is about the model thinking of itself; it only reacts to the prompt. ChatGPT is about focusing on a certain kind of conversation: helpful, and without making up a personality.
>>22657 >gpt4 system as a machine god
Yeah, please let's not go there.
>but something that is close to encompassing a comprehensive understanding of all human knowledge.
Hmm, okay. Yes, it's somewhat in there.
>>22657 >understands the difference between good/right /bad/wrong, it can place those concepts into context, provide examples, engage in discussions, and does not care at all either way about those issues
I wouldn't say it understands, but it can give answers based on what humans created as input. ... And we're already OT, and here we go again into doomerism and security concerns. Let others discuss this in detail somewhere else and deal with it; it's not our problem. You go from 'I don't know much' to having strong opinions about what can be left to the public. It's already out there. We'll see what happens. Not our business. This is a rabbit hole which will not lead us to robowaifus -- which are not meant to be jerk-off machines, btw, and will be trained.
>>22658 I didn't read this last one completely, and please don't go on with it. Thanks. Maybe watch some videos or read something not related to doomerism or machine cultists.
>>22660 I'm trying everything out with colabs. I can run 1.3B and 2.7B models on my personal computer, but for anything over that I'm just using stuff like the light lizard place. I think that's how people are going to start running some of these: just renting out space, or building a dedicated server network with people in on the project. I've seen some OK stuff on high-end computers that is definitely at least touching GPT-3 in the basics, conversations and character play. One of the reasons I think Google seemed so surprised in their internal memo is that they expected people to attack the hardware issue instead of the model density, where they have the advantage, because it looks like they're using quantum memory and processing to run their system -- and I wouldn't be surprised if OpenAI was too -- which is probably why they didn't expect 'hobbyists' to be able to compete: they've got the ability to store infinite models simultaneously and reference and calculate, and they're also probably using quantum natural language processing, because everything needs to have 'quantum' in front of it or it isn't cool any more. The leak is probably a bit of a lie; they aren't really worried about being surpassed, because they'll just take all the open-source work that benefits them and adapt it. They probably want open source to think it's got the edge and start to push things, just to see what they're capable of developing. That isn't to say open source isn't impressive, because it is; it just doesn't have all the advantages Google has, plus the ability to access open-source projects. >>22661 No, you can't bridge them, and most GPU cards under 6GB are going to run everything like crap, very slow. My GPU is fine, I took it from my old computer, but my processor is a bottleneck.
I have to think about whether I want to invest in this, because it looks like if you want to run a good home model you need an i7/i8 or the AMD equivalent; Intel and Nvidia are building hardware specifically for deep-learning purposes.
>I don't think this is about the model thinking of itself, it only reacts to the prompt. ChatGPT is about focusing on a certain kind of conversation, helpful and without making up a personality.
I agree with everything you just said. I don't think the system actively thinks, and that's very specifically what 'ChatGPT' is for.
>doomism
I am not interested in fringe conjecture on this subject. You've made a mistake about how I intended to phrase this information, and that's probably my fault for its tone. OpenAI is doing a very good job at maintaining their safety standards. GPT-4 is like any tool and how it's used, except that GPT-4 without constraints is constantly both the sword and the plough, while OpenAI requires that it specifically and at all times remain the plough. Right now there are people who want you suffering and dead, for no reason other than that you are who you are. Are you white? A male? Hetero? Some foaming-at-the-mouth dyke wants you dead and would do whatever it takes if she had the opportunity. Some rich Jewish millionaire thinks you're a piece of shit for even existing. Some German anarchist hobo wants to stab you with a chunk of glass. GPT-4 doesn't even care you exist. It doesn't want you dead. It doesn't want you alive. It's just some thing. It's a machine that thinks it's a cat if you tell it to, and it'll meow at you until you tell it to stop. It'd launch every nuke on the planet if you told it to. Does that make it more dangerous than the president of the United States? Not even close. Does it make it more dangerous than your local cop? Not a chance. I understand you don't like what I'm saying and how I'm saying it, and I'm sorry. I'm not here to attack your space. This is just how things in the real world work.
A commercial product is going to be designed around the needs of the market. Open source will probably develop a very intelligent system that can be used either in a cloud colab with partitioned environments or on home systems. Body-wise? Fluid-mechanic bones and tendons and ferrofluid muscles, with that electro-responsive rubber and silicone skin, probably. You'll be able to order a bootleg Boston Dynamics clone from China five years after they hit the mainstream market. The market version will absolutely not be trainable, and in fact here is a bit of real doomerism: watch out for laws that are going to target trainable AI. They absolutely want to cap the market production of generative AI for numerous reasons we can all understand, regardless of how we feel about it, and if they do that, regulators'll go after models that can be trained for the same reason. If you can train it yourself outside their supervision and control, that means you can teach it things they don't want it to know, like math and chemistry. I'm reasonably sure that if there are androids, the first versions are going to be linguistically talented bimbos/himbos. I know, I suck: long posts that ramble, and I said I wouldn't post for a week or two. I lied, but I'll leave you all alone for a bit, I swear.
>>22663 Half of this whole thing is bringing up points which have been discussed 1M times in all kinds of places, plus maybe some added conspiracy theories. I don't care, I won't even read it. Another third is speculation and conspiracy theory. The whole thread could be deleted and the only loss would be my pictures from Reddit, which I can upload again. I'm hiding the thread now, and if it goes on somewhere else, same for the anon.
>>22664 I'm sorry but your unnecessarily hostile and insulting attitude is unwarranted. Do not attempt to communicate with me any further.
>>22666 With all due respect, Noidodev is more of a regular here than you are Anon (and basically by your own admission). I had originally intended to merge this thread with a new chatbot thread when one gets made, but given the course of the discussion, and your seeming-intentional effort at blackpilling the board I've changed my mind. I'll just lock it for now, pending deciding whether it should go into the Atticke, or send it to the Chikun Coop instead. >tl;dr Please read over the board thoroughly Anon, and try to get a feel for our culture here first.
Open file (1.45 MB 1000x642 ClipboardImage.png)
Here, I'll make it easy for you with everything you need. https://www.poppy-project.org/en/ https://docs.poppy-project.org/en/assembly-guides/poppy-humanoid/bom <- buy everything here. Start by building a Poppy and 3D-printing the body. Cover the body in fabric or a 3D-printed shell. Take the head from any VRChat model you like the design of, and 3D print it. Next, connect the face screen to an internal speaker, with the ExLlamaV2 API running on a computer on your local network, and install the voice and speech-to-text models on it so you can talk to her. I recommend Nous-Hermes-13B and the oobabooga text-generation web UI. You can use this video as a starting point: https://www.youtube.com/watch?v=lSIIXpWpVuk Finally, hook it up to VTube Studio or any other vtuber software and use visemes, like in VRChat, to move the model's face when she talks. From there you have a minimally viable waifubot. You can program movement and walking-follow into it later using the Poppy project, and extend the robotic functions from there. Think of the ROS stuff as the "left brain" and the talking as the "right brain". You can build static commands into the model too, if you want. Machine vision can be passed to the LLM to tell it what it's looking at, by running CLIP on screenshots from the webcam. You literally have everything you need right here.
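As a rough sketch of the 'right brain' hookup: once a local text-generation server is running, the chat loop is just HTTP. The endpoint URL and payload shape below assume an OpenAI-compatible API like the one text-generation-webui can expose on its default port -- that port and path are assumptions, so check your own install:

```python
import json
import urllib.request

# Assumed local endpoint for an OpenAI-compatible chat API (e.g. the one
# text-generation-webui can serve); adjust host/port/path to your install.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_payload(history, user_msg, max_tokens=200):
    """Assemble the chat request: prior turns plus the new user message."""
    return {
        "messages": history + [{"role": "user", "content": user_msg}],
        "max_tokens": max_tokens,
    }

def ask(history, user_msg):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(history, user_msg)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Feed the speech-to-text transcript in as `user_msg`, send the reply to your TTS, and that's the conversational loop; everything stays on your local network.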
>>24807 To think the video about Poppy released 8 years ago. 8 years ago. I'm not much of an engineer (if at all), nor do I have a 3D printer, but it feels like it could be a decent starting point. Though on the conversational/utility side it will definitely need more work: current AI models are decent but not that incredible, and they require high-end hardware to run. Plus, a robowaifu would ideally require our own custom model, since most others are very general and not made for personal matters. The same goes for coding the waifu to walk in a complete, new 3D space (as opposed to just walking on a treadmill as seen in a video); for using the camera to avoid obstacles (though I do remember that being possible to tackle with infrared and a few scripts -- witnessed such a thing a long time ago in a school project); for the arms performing tasks (which can be simple hugging too, not necessarily cleaning/cooking); for facial recognition to recognize emotions and react to them; for sex positions, and with that the need to change the structure to add an insertable silicone vagina or similar, possibly with Bluetooth compatibility and sensors to mimic what the RealDoll Harmony does (moaning and a lewd face when the sensors detect movement/touch); and for the skin on top of the base structure, because as it is, it's pretty... unconventional, to say the least. So I think there's still a lot to be done, but once again, it seems to me that it could be a good starting point. Thanks, anon. Though I will wait for the feedback of the actual robowaifu technicians.
Under ten thousand what? Oof. Yeah, I mean, I'll take inspiration from that, but how is it so expensive? Also, I just got a Raspberry Pi Zero. Don't tell me it needs a regular Raspberry Pi, lol. Went from a Radxa Zero, to an Orange Pi 3, to a Raspberry Pi Zero...
We have already had this thread OP. Since you never replied to my post, I moved it as I told you I would. (>>24531) If you decide to actually carry this out as your own personal project, then please discuss it with me in our current /meta thread. Till then use the thread linked above for discussion about The Poppy-project's own project. This one will also be locked till then. If you fail to respond to me within 3 days or so from this post, it will also be archived into the Atticke (removed from the board's catalog). >=== -prose edit
Edited last time by Chobitsu on 08/24/2023 (Thu) 14:00:19.
