/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH, we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)?

In the interest of us achieving the former I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress.
>---

First. UNDERSTANDING THE PROBLEM
You have to understand the problem.
>What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?
>Draw a figure. Introduce suitable notation.
>Separate the various parts of the condition. Can you write them down?

Second. DEVISING A PLAN
Find the connection between the data and the unknown. You may be obliged to consider auxiliary problems if an immediate connection cannot be found. You should obtain eventually a plan of the solution.
>Have you seen it before? Or have you seen the same problem in a slightly different form?
>Do you know a related problem? Do you know a theorem that could be useful?
>Look at the unknown! And try to think of a familiar problem having the same or a similar unknown.
>Here is a problem related to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you introduce some auxiliary element in order to make its use possible?
>Could you restate the problem? Could you restate it still differently? Go back to definitions.
>If you cannot solve the proposed problem try to solve first some related problem. Could you imagine a more accessible related problem? A more general problem? A more special problem? An analogous problem? Could you solve a part of the problem? Keep only a part of the condition, drop the other part; how far is the unknown then determined, how can it vary? Could you derive something useful from the data? Could you think of other data appropriate to determine the unknown? Could you change the unknown or the data, or both if necessary, so that the new unknown and the new data are nearer to each other?
>Did you use all the data? Did you use the whole condition? Have you taken into account all essential notions involved in the problem?

Third. CARRYING OUT THE PLAN
Carry out your plan.
>Carrying out your plan of the solution, check each step. Can you see clearly that the step is correct? Can you prove that it is correct?

Fourth. LOOKING BACK
Examine the solution obtained.
>Can you check the result? Can you check the argument?
>Can you derive the result differently? Can you see it at a glance?
>Can you use the result, or the method, for some other problem?
---
>thread-related : (>>24047)
>===
-corrected author's name
-add crosslink
Edited last time by Chobitsu on 07/19/2023 (Wed) 14:10:30.
>>19056
>consider contributing to our own local, ongoing, 'database' right here Anon?
Yes, I should do this more often. We simply don't have enough active users here. As you can see in >>19057 it would be something complementary to the database. I also like doing things in some high-level code because it could later be used by some script or DL model.
>I simply don't have the time to manage that solely by myself, so its potential has languished tbh.
I found it useful, though I haven't installed it on my new computer yet. Well, another thing I should be doing. I thought the program was kinda finished, aside from all the great stuff you can always build on top of something.
>>19058
>I thought the program was kinda finished, aside from all the great stuff you can always build on top of something.
Its Boolean algebra leaves much to be desired (ie, it only supports OR rn), and it would be nice to somehow directly support posting to that thread. Sadly, I've never been able to figure out how to grab & display captcha images so POST'ing can be directly supported from Waifusearch. Pull-requests welcome! :^)
Regardless, sounds like you're on a highly-productive tack already Anon. Don't let me distract you. I'll try to support you any way you see fit that helps /robowaifu/ and its anons!
>===
-minor sp edit
Edited last time by Chobitsu on 01/26/2023 (Thu) 17:38:27.
>>19059
Actually, it's the 'grab' bit of a captcha image that's the problem; I simply have no idea how to do that using cURL alone. Displaying them in a local FLTK GUI would be trivial, as would be the solve input/send. Send halp plox
>===
-add 'cURL' cmnt
-add 'halp' funpost
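In case it helps, here is a minimal sketch of the 'grab' step in Python with requests. The base URL and captcha path below are placeholders, not LynxChan's confirmed routes; inspect the board's network traffic for the real ones. The same fetch-with-cookies flow should port straight to libcurl.

# Minimal sketch: fetch a captcha image over HTTP so it can be displayed
# and solved locally. BASE_URL and CAPTCHA_PATH are placeholders
# (assumptions) -- check the board's actual routes before using.
import requests

BASE_URL = "https://example-imageboard.net"  # placeholder board URL
CAPTCHA_PATH = "/captcha.js"                 # placeholder captcha route

def grab_captcha(session: requests.Session) -> bytes:
    """Download the captcha image bytes; the session keeps the cookies
    that the later solve/POST step will need."""
    resp = session.get(BASE_URL + CAPTCHA_PATH, timeout=30)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    with requests.Session() as s:
        with open("captcha.png", "wb") as f:
            f.write(grab_captcha(s))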
Edited last time by Chobitsu on 01/26/2023 (Thu) 17:36:52.
>>19058
>so POST'ing can be directly supported
I know that you wanted to do that, and at some point with AI waifus this might be very useful. But it's not something I hugely missed. To me the search is fine. I don't remember if I missed AND or other booleans.
I think I already worked on a script a while ago to nearly automate the library. The idea I have, and I think I shared it already, is filtering out the new phrases from every posting by removing all the more common words and phrases. Then give the user doing this a list with the new phrases and terms; this could be fed to Waifusearch on the same computer, and the results could be posted into the library. A rough sketch of that filtering idea follows this post.
>>19062
>Send halp plox
I wanted to suggest for similar reasons that you contact the creator of the LynxChan imageboard software. I wondered if you had already done that, so he would integrate Waifusearch into his software, or create an API to install it as a module. Imagine a field on top, to search the whole board with it.
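A rough sketch of that filtering idea, assuming the board text is available locally as one string per post. A real version would also drop a proper stopword list ('about', 'and', ...) rather than relying on frequency alone.

# Sketch of the filtering idea above: drop the common words from a new
# post and surface the rare terms as candidate phrases for the library.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def new_terms(post: str, corpus: list[str], common_cutoff: int = 2) -> list[str]:
    """Return words from `post` seen fewer than `common_cutoff` times
    across the existing corpus (one string per posting)."""
    counts = Counter(tok for doc in corpus for tok in tokenize(doc))
    return sorted({tok for tok in tokenize(post) if counts[tok] < common_cutoff})

corpus = ["the servo thread talks about servos", "servos and motors again"]
print(new_terms("new post about piezoelectric actuators and servos", corpus))
# -> ['about', 'actuators', 'and', 'new', 'piezoelectric', 'post']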
>>19069
Good ideas Anon. I'll give some thought to your filtering concept when we're ready to update the tool.
>contact Lynx
The only real advantage this tool has is that it's pretty blazing fast (typically < 100ns to search the entire board's text). While if I were a server dev I'd be extremely interested in that fact, I don't fancy he's a C++ or systems guy in general. An API is probably a good thing, but I'm just not interested in learning much about JS, since we have a mountain of stuff waiting to be worked through already for robowaifu systems code. JS will obviously not be anywhere near this code, ofc! :^)
Again, pull-requests welcome. If someone wants to help him integrate an API that could be tied to a Waifusearch module for LynxChan, it would be a big plus for anyone on any IB using his software.
>>19073
I didn't mean that you should rewrite it in JS, nor that you need to learn it (only The Good Parts, anyways). Waifusearch would run on the server, and anons could still download their own version. But any visitor could search the board(s) just using a search field and getting a result. Lynx might of course use JS to implement that connection into the board software, but who knows. It should work with the board calling Waifusearch on the server or as an external service. It would improve LynxChan in general, so maybe he would add some API.
>Captcha
There might be some module for getting around captchas. Maybe not in C++, but maybe. You'll survive using some Python glue code. I would look around KurobaEX. Looking it up... Oh, it's more likely to be Kotlin: https://github.com/K1rakishou/Kuroba-Experimental
>>19190
Thanks for the clarification Anon. I agree that it would be a nice improvement to the IB experience in general (as we ourselves have proven on our own board), but I just don't see Stephen Lynx as being an innovator, and I've personally never seen any IB software do this. Thanks for the tips about dealing with captchas. It's a sticky business in general, but we'll need to address it well if we're going to have our robowaifus be able to funpost alongside us. Obviously the pertinent HTTP calls are there to be broken down and reversed, I just CBA to tackle it myself ATM. Hopefully someone else who can program in libcurl will provide a solution for it soon. :^)
Open file (44.87 KB 1301x398 robowaifu-15972.png)
>>15972
classDiagram
  internal_frame --o rods
  internal_frame --o 3DpParts
  clothes --o size
  shell --o clothes
  shell --o plasticShell
  plasticShell --o 3DpParts
  shell --o siliconeRubber
  siliconeRubber --o 3DpParts
  constrain -- storage
  constrain -- transport
  storage -- optionFolding
  storage -- optionTransform
  storage -- optionSeparation
  transport -- optionFolding
  transport -- optionTransform
  transport -- optionSeparation
  optionFolding -- optionBox
  optionTransform -- optionBox
  optionSeparation -- optionBox
  capabilities -- basicLocomotion
  basicLocomotion --o swerveDrive
  basicLocomotion --o balancingWheels
  swerveDrive -- balancingWheels
>>19329 Nice, you're bringing things together!
Open file (104.55 KB 1088x596 Path2RW.png)
classDiagram
  Robowaifu <|-- LoveDoll_Improvements
  Robowaifu <|-- DesktopWaifu
  Robowaifu <|-- AI_Waifu
  Robowaifu <|-- RoboMaid
  Robowaifu: +bool isRobot
  Robowaifu: +bool hasAI
  Robowaifu: +bool physicalBody
  Robowaifu: +bool isSexEnabled
  Robowaifu: +mobility(walking)
  Robowaifu: +improvements()
  Robowaifu: +additionalSkills()
  class LoveDoll_Improvements{
    +jointed_hands()
    +jointRepairs()
    +DIY_framework(body)
    -bool isRobot
    +bool isSexEnabled
  }
  class DesktopWaifu{
    +physicalBody()
    +bool isRobot
    -bool isSexEnabled
    +AI(physical_interactions)
    +AI(conversation)
    +AI(recognition)
    +DIY_framework(body)
    +DIY_framework(personality)
  }
  class AI_Waifu{
    -bool isRobot
    -bool isSexEnabled
    +AI(conversation)
    +AI(recognition)
    +DIY_framework(avatar)
    +DIY_framework(personality)
  }
  class RoboMaid{
    +physicalBody()
    +bool isRobot
    -bool isSexEnabled
    +bool isSmall
    +AI(physical_interactions)
    +AI(simple_conversation)
    +AI(recognition)
    +DIY_framework(body)
    +DIY_framework(simple_personality)
    +DIY_framework(skills)
    +mobility(wheels)
  }
>>19866
This is nice work Anon. I can see this eventually growing into something dramatically larger in the end. I'd recommend you think now about ways to do abstractions. For example, in Blender and other big software with a node system, it's commonplace that you can 'group' nodes & interconnection systems together, then iconically collapse that all down to a single idea. This is quite similar (at a conceptual level) to the way object-oriented software development also works. Keep going Anon, you're doing great! Cheers.
>>19882
Thanks, but I think I will go on doing this from time to time: making overviews of some area, and by doing so hopefully encouraging others to do something similar. Over time it might grow into an interconnected system. I'm also not sure about how to use the system the best way; maybe I should learn more about class diagrams first, but I think it's already a useful tool for creating better access to certain areas of knowledge or viewpoints we have here.
Why has this >>16378 never been crosslinked here? https://github.com/SoaringMoon/WaifuNetwork
>WaifuNetwork is an Obsidian-powered searchable repository of information for the construction of gynoid robots. Using Obsidian, wikilinks, and tagging, a vast network of connected topics, tags and pages can be visualized as a nodal map of topics. Acting as a second brain for public use.
>WaifuNetwork is NOT an archive of an imageboard. It is an attempt to distill the information within multiple sources into a searchable format.
>>20069 Good point Anon, thanks.
All papers mentioned on /robowaifu/ are listed in a related wiki: >>22353
I once posted a little snippet to retrieve such and more data, e.g. comment or summary, based on the paper id: >>10317
Open file (55.95 KB 768x1024 Imagepipe_4.jpg)
Open file (54.02 KB 768x1024 Imagepipe_3.jpg)
Open file (93.35 KB 768x1024 Imagepipe_2.jpg)
I started making a diagram about building a human-like AI, to get me into thinking about that more often, keeping track of my ideas, and not forgetting about areas, details, and how they connect. Also to show how much is still to do. My camera quality is bad, but I'll upload some pics anyways.
Open file (215.49 KB 909x1000 FvzJsBbaQAAaJKk.jpg)
We too are taking a "bite" from this problem, working on an AI system. It's an order of magnitude more complex than an LLM, but it incorporates LLMs. I don't want to give too much away this early, but know that we are working on something. Have faith; we are doing this for the entire /robowaifu/ movement and aren't going to commodify it to the point where it's inaccessible. The end product will be completely User Owned, that is the entire point. It's yours, your waifu. While it will be a paid product (with an option for cloud service, but not required), the capital will be reinvested in robowaifus b/c I care about robowaifus, not status among normies. My end desire is space exploration; I see robowaifus as a necessary step (as a merger of man and machine, and the freedom to embark from earth without the troubles XX's add to the equation).
>>22493
You can't completely own software without having the source code. I hope you're wise enough to make only a module with an API, but we'll see.
>>22494 details
>>22498
You want an explanation of how I meant it? I thought this was obvious. Or did you mean you'll post details later? Are you planning a full system that controls the robowaifu and converses with the world? As a subscription-based service? Then it would need to be on an external server or closed-source software.
>>22480 Nice Noidodev, thanks! Good luck with your concept designs Anon! :^)
>>22493 Good luck Meta Ronin, I wish for your team's good success! Cool pic BTW. :^)
AI and tools as your Second Brain, helping organization, searching, overview, understanding papers (Matt Wolfe): https://youtu.be/3u80Zuwf2Zo StatQuest Made a map about learning everything around machine learning: https://www.reddit.com/r/statistics/comments/wfwl9c/e_statquest_released_a_free_map_of_his_videos/ Mindmaps about relationships by ChatGPT: https://www.reddit.com/r/ChatGPT/comments/13cld5l/created_using_chatgpt/
>>9555
More like 3-5 sex positions. Waifu v1 should be a mechanical sex doll. The Wright brothers didn't make a jumbo jet, you know.
Open file (114.47 KB 800x800 Mimi.jpg)
>>22827
Different guys might have different priorities. Think about different areas, and then sort how hard it might be to get there. Then these are options. Depth or height is the (assumed) difficulty to develop and implement each option, but width is all the options in different areas (skills) to choose from. I somewhat agree with you: an early gynoid would only need to be able to move into different positions. That said, we try not to call them sexbots, since this exaggerates that point too much. Even owners of such dolls often rather call them lovedolls. In the case of a gynoid, she would at least somewhat hear and speak some sentences, so there would be some emotional bonding involved for some guys. The term sexdoll also implies a sex toy which would be hidden away, while a synthetic girlfriend would of course sit on the couch and be shown to at least some visitors. Also being used for some AI conversations.
>Give me a list of all degrees of freedom in movement a human has, sorted by body parts.
ChatGPT: Here is a list of the degrees of freedom in movement for various body parts of a human, sorted by body region:

Head and Neck:
1. Neck flexion and extension (forward and backward movement)
2. Neck lateral flexion (side-to-side movement)
3. Neck rotation (turning left and right)

Upper Body:
1. Shoulder flexion and extension (raising and lowering the arm)
2. Shoulder abduction and adduction (moving the arm away from or towards the body)
3. Shoulder internal and external rotation (rotating the arm inward or outward)
4. Elbow flexion and extension (bending and straightening the arm)
5. Forearm pronation and supination (rotating the forearm to face down or up)
6. Wrist flexion and extension (bending the wrist up or down)
7. Wrist radial and ulnar deviation (bending the wrist towards the thumb or little finger)
8. Hand and finger movements, including individual finger flexion and extension, abduction and adduction, and thumb opposition

Spine and Torso:
1. Spinal flexion and extension (forward and backward bending)
2. Spinal lateral flexion (side bending)
3. Spinal rotation (twisting)
4. Pelvic tilt (tilting the pelvis forward or backward)

Lower Body:
1. Hip flexion and extension (lifting the leg forward or backward)
2. Hip abduction and adduction (moving the leg away from or towards the midline)
3. Hip internal and external rotation (rotating the leg inward or outward)
4. Knee flexion and extension (bending and straightening the leg)
5. Ankle dorsiflexion and plantarflexion (pointing the foot up or down)
6. Ankle inversion and eversion (turning the foot inward or outward)
7. Toe movements, including flexion and extension, abduction and adduction

It's important to note that the number of degrees of freedom can vary depending on the joint and individual variation. Additionally, this list provides a general overview and may not encompass every subtle movement possible.
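If it helps, here's one way to carry that list into code: a minimal joint table in Python. This is a sketch; the grouping into named joints is a simplification of the list above, and finger/toe articulation is left out.

# One way to carry the list above into code: a joint table mapping each
# region to named degrees of freedom, handy when budgeting actuators.
# The rough grouping follows the ChatGPT list, not a biomechanics text.
HUMAN_DOF = {
    "neck":     ["flexion/extension", "lateral_flexion", "rotation"],
    "shoulder": ["flexion/extension", "abduction/adduction", "rotation"],
    "elbow":    ["flexion/extension"],
    "forearm":  ["pronation/supination"],
    "wrist":    ["flexion/extension", "radial/ulnar_deviation"],
    "spine":    ["flexion/extension", "lateral_flexion", "rotation"],
    "pelvis":   ["tilt"],
    "hip":      ["flexion/extension", "abduction/adduction", "rotation"],
    "knee":     ["flexion/extension"],
    "ankle":    ["dorsi/plantarflexion", "inversion/eversion"],
}

total = sum(len(dofs) for dofs in HUMAN_DOF.values())
print(f"{total} degrees of freedom (excluding fingers and toes)")  # 20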
>>23494 I think you're on to something here Anon. Please keep going! :^)
>>23497 Well, I hoped for years some fresh people would help me to organize things better. Make overviews and to-do lists, and go from there. Now I found a tool to do the job.
>>23498
Here's mine:
August 2023: speech recognition and exoskeleton
December 2023: mouth and head
February 2024: hands
May 2024: arms
August 2024: body movement and balancing
December 2024: skin
2025 and optional: legs and walking
It doesn't need to be super elaborate really; what ChatGPT suggested includes ankle and toe movements.
>>23529 Nice list Anon. I like that you've attached time goals to it all. Work hard and we all look forward to seeing your progress over the next couple years. Good luck! :^)
>>23494
That's really helpful. Thanks. I need to save this. I've got a file where I've been saving stuff like this.
Open file (186.13 KB 850x429 Screenshot_66.png)
Using LLMs for work: https://youtu.be/h8Jth_ijZyY?t=1004
- Code
- Jargon explainers
- Name generation
- API design
- Thesaurus
- Brainstorming buddy
>>24529 Nice stuff NoidoDev, thanks!
>>24546
>If you want a husk for a robowaifu then sure, stick llama2 in and call it a day. Large language models are not magic or a full mind.
>Your waifu will not have robust memory or opinions that are consistent. She will have no model of the world. There will be no internal emotional state, nor will she be aware of the passage of time and be able to passively observe things.
Since "we" won't be able to successfully simulate a human brain, I think the next best thing is to provide an LLM with as much context as possible. This could be done with an automatic prompt generator that pulls in "sensory data" (Camera -> OpenCV -> "You see a basketball. You see Anon." | Microphone -> Speech to Text -> "Hello Waifu, how are you doing?") and "memories"/context (Clock -> "The time is 5:00 pm." | Memory Bank -> "You are in Anon's Office. Anon is a robot technician. You are his companion.") ...
I wonder if you could subdivide this between different AIs? There could be a text file that stores every event the robowaifu recognizes; it gets fed to an LLM to summarize it, and this summary gets stored to the "long-term memories" file...
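A minimal sketch of that prompt-generator idea in Python. The sensor and memory functions here are placeholders (assumptions) standing in for OpenCV, speech-to-text, and the long-term-memories file, not real integrations.

# Sketch of the automatic prompt generator described above.
from datetime import datetime

def vision_events() -> list[str]:
    return ["You see a basketball.", "You see Anon."]  # stand-in for OpenCV

def heard_speech() -> str:
    return "Hello Waifu, how are you doing?"  # stand-in for speech-to-text

def load_memories(path: str = "memories.txt") -> list[str]:
    try:
        with open(path, encoding="utf-8") as f:
            return [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return ["You are in Anon's office.", "Anon is a robot technician."]

def build_prompt() -> str:
    parts = [
        f"The time is {datetime.now():%I:%M %p}.",
        *load_memories(),
        *vision_events(),
        f'Anon says: "{heard_speech()}"',
        "Respond as Waifu:",
    ]
    return "\n".join(parts)

print(build_prompt())  # hand this string to whatever LLM backend is in use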
>>24559
Continuing this thought: the first step is to see if prompt engineering works well enough in the first place, before creating a machine that can auto-generate prompts... Once you have a good structure for prompts, engineering the prompt generator is easier.
After some reflection: an LLM is never going to have human cognition, spatial awareness, instincts, etc. Creating a holistic brain like the human one would require an entirely different approach. Humans aren't born with a dictionary; they use words to express subconscious, innate desires and "feelings". They don't reason their way into climbing over a rock, they just do it as a matter of organic learning and instinct. In order to have a lizard brain, you need to evolve a perfect lizard. For now, the most we can have is a good-looking level 2 robowaifu (one that can perform a limited set of high-level pre-programmed functions IRL). A robot that can learn, ask questions and make observations like a human is not currently feasible. The most I can see being done is attempting to emulate memories and feelings, and automatically feeding it limited info like "You are in the kitchen" and "It is hot".
>>24562
Thoughts on Mary's Room?
>>24566
First time I've heard of it.
>Does she learn anything new?
Yes, because now she has empirical experience of what "blue" or "red" is. You cannot speak red or blue into the consciousness of someone who only knows black and white. If Mary's only job was to study the physical properties, it wouldn't matter if she knew what "red" was. She's not interested in the subjective experience.
So... a robowaifu LLM could have a high-level model of what anger is "supposed" to feel like, or the correct conditions for jealousy, but it's all abstract and hypothetical. Mary knows what red is "supposed" to do, but she doesn't know how it feels. In order to have a robot be angry like a human, it would have to have a near-perfect model of the human brain and body. Without a perfect physical model of the creature it's trying to replicate, a robot's "emotions" will always be different from an organic creature's. But if it's all good enough for companion work, it doesn't practically matter.
>>24567
>>24566
Or, expanding this to the idea of the "holistic brain": we don't know the language the subconscious uses to communicate. How does the visual cortex cause the monkey to be scared when it sees a predator? If you knew, how would you translate that to an LLM input, when the original consists of many different modules working concurrently? The LLM would have to be one "center" of the brain, with a bunch of unconscious supporting centers connected to it. Otherwise, you're spiraling into explaining literally every function of the human body with language and shoving that into an LLM (which seems bloated, if not impossible). Human cognition isn't a single train of thought; it's a bunch of things happening all at once. Seeing a person is not the same as thinking "I see this person".
>>24568
You could have the LLM as an executive function, but how does it understand "what am I doing?" Let's say your unconscious functions have caused you to pick up a box. How do you know that you have picked up a box? In the current moment, you need to see yourself holding a box, feel the box in your hands, and optionally remember yourself not having the box and then picking it up. A robot has to poll every single piece of information and conclude what is going on, as well as decide what it should do next. Is a language model capable of discernment? I guess the only way to find out is to throw a bunch of 'em together and see what happens.
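For what it's worth, the 'executive plus unconscious modules' arrangement in these posts can be prototyped as a tiny publish/subscribe bus. This is a toy sketch; the topic names and event payloads are placeholders, and the real modules would be vision, touch, memory, and so on.

# Toy publish/subscribe bus for the 'many centers' idea: unconscious
# modules publish what they notice, and the executive (an LLM in the
# full design) only reacts to those events instead of polling everything.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
# The executive learns 'what am I doing?' from events, not from polling.
bus.subscribe("proprioception", lambda e: print(f"executive notices: {e['summary']}"))
bus.publish("proprioception", {"summary": "You have picked up a box."})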
>>24567
Good points Anon, but
>"what anger is "supposed" to feel like"
>"or the correct conditions for jealousy"
seems to me a rather odd choice of examples for
>"But if it's all good enough for companion work"
TBH, seems a little like a feminist-oriented attempt at gaslighting here. But I'll give you the benefit of the doubt as a newcomer here. :^) I'd also suggest at least trying to use more male-oriented, upbeat robowaifu concepts like
>"what love is "supposed" to feel like"
>"or the correct conditions for enthusiastic care"
Cheers Anon. :^)
>>24568
>Human cognition isn't a single train of thought, it's a bunch of things happening all at once.
I would suggest that what is going on within the human soul is vastly deeper and more complex than we can ever imagine in this life. But I'm also rather confident we can pull off satisfyingly-realistic simulacrums for much of its observable effects before all is said and done here! :^)
>>24569
>I guess the only way to find out is to throw a bunch of 'em together and see what happens.
Ahh, the old 'just keep throwing stuff against the wall, and see what sticks' approach to design & engineering, ehh? :^) Heh, it's actually worked out a few times in history before, so who knows?
>===
-add add'l reply
-minor edit
Edited last time by Chobitsu on 08/11/2023 (Fri) 19:25:14.
>>24559 This thread here is about organisation. Your ideas and the responses relate to "cognitive architecture" and might have come up and been addressed in some ways >>77 >>250 >>18 >>11102 >>17125 >>16217 >>15047 >>2731 >>22 >>111 >>107 >>152 ...
>>24605 I'll probably relo this whole convo at some point Anon. Thanks for the great crosslinks list BTW! Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 08/15/2023 (Tue) 14:25:28.
>>24559
I feel people are taking the wrong message from my post. I never said that I want to simulate a human mind, nor do I think that is a useful goal for robowaifu. My message is that machine learning is a useful tool in your toolbox, but it should not be the only thing you use.
>I wonder if you could subdivide this between different AIs?
Yes, absolutely, and that will be very important.
>There could be a text file that stores every event the robowaifu recognizes, it gets fed to an LLM to summarize it, this summary gets stored to the "long-term memories" file...
A better approach would be to identify important things to remember (you could ask the LLM), create an embedding of the information, and then store the embedding with the text inside a database. To retrieve the memory, take the input before it's fed into the LLM and query the DB for related memories to be inserted into the pre-prompt. (This is not a new idea.) Skim this readme, I think you will find it useful: https://github.com/wawawario2/long_term_memory — a minimal sketch of this embed-store-retrieve loop follows this post.
>>24568
I don't think we need to copy the monkey meat to succeed. AI NNs are vaguely inspired by real neurons but are not that similar in reality, so there is no reason the waifu brain needs to be accurate. I don't think there is a "subconscious language", but the brain for sure is a collection of many independent systems that share information, not a single system. The reason I speculate this is the existence of conditions like "callosal syndrome" (where the connection between the two hemispheres is damaged) and the types of behaviors that are associated with it. So one way this idea could be applied is that a waifu brain would have some sort of event bus with different "modules" that listen for and publish events.
I would love some input right now. Here are the things on my "TODO" list:
- I need to do a deep dive into psychology; this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice)
- I need to get more hands-on experience with actually working on ML models; I have a background in programming but nothing to do with ML. (requesting opinions and advice)
I don't know where to start when it comes to psychology; the entire field looks like it's filled with traps and is very pozzed. I have at random run into interesting ideas; I learned about "misattribution of arousal" via a book on game design. I don't know where to start looking. I would love input!
For ML, I would like to take a smaller-is-better approach. I don't think it's a nicety; instead it's essential for success. GloboHomo is begging for regulation, and we can't continue to rely on the cloud and absurd amounts of compute. Even if things won't get that bad, it's still good to keep costs down to make it accessible to a single anon's budget. Even if we could collectivize the efforts, centralized groups are more easily subverted and attacked; this also robs the individual's ability to make their waifu in their own singular image.
What makes LLMs so powerful is that they are trained on absurd amounts of unlabeled data. Given enough data, compute, and network size, the LLM learns patterns contained within language, including facts, ideas, and bits of general reasoning patterns (to better predict text). It's important to point out that LLMs are slow, expensive, and hard to control. Creating smaller and simpler networks is hard because finding labeled data is hard. But we can take advantage of something really nice.
We already have LLMs; you can use large LLMs to create the labeled data and basically extract what they learned. We are already seeing exciting developments where LLMs are being used to train other models.
I ask anons to look at this problem from a high level; what works for them may not work for individual anons or even small groups of "us" (I'm a newfag so I won't speak for everyone here). Big tech is not going to push this; it's the reason why they don't allow it in their API/usage agreements, they do not want this. I think it's important to understand that we are smaller than big tech and that we play by different sets of rules. We may not have as many resources, but we also have advantages that globohomo corporations do not. Remember, there is no moat :^)
A current task that could be of a lot of use to "us" is figuring out what kind of structured and labeled data would be really useful for our mission. It would be a good move to generate it now, while access to very big LLMs is still quite open. I would love input on this!
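A minimal sketch of the embed-store-retrieve loop described above. Assumptions: embed() here is a stand-in that returns deterministic random unit vectors, so its similarity scores are meaningless; swap in a real sentence-embedding model (as the long_term_memory extension linked above does) to make recall actually semantic.

# Embed important lines, keep them with their vectors, and pull the
# nearest ones back for the pre-prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding -- replace with a real model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

class MemoryStore:
    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def remember(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scores = [float(q @ v) for v in self.vectors]  # cosine (unit vectors)
        best = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self.texts[i] for i in best]

store = MemoryStore()
store.remember("Anon's birthday is in June.")
store.remember("Anon dislikes loud noises.")
# Retrieved memories would be inserted into the pre-prompt before the LLM call.
print(store.recall("When is Anon's birthday?", k=1))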
>>24612
My apologies for not responding sooner, Anon.
>nor do I think that is a useful goal for robowaifu.
While I think that's a matter of degree, I'll agree with the sentiment in general. BTW, we now have a new thread on-topic with this : (>>24783).
>I don't think we need to copy the monkey meat to succeed, AI NNs are vaguely inspired by real neurons but are not that similar in reality.
Interestingly, the so-called 'monkey meat', as you put it, is now being used in conjunction with in-silico devices, just as predicted by /robowaifu/ ahead of time. (>>24827)
Only time will tell how useful this approach will be for robowaifus, but I think there is little doubt we'll see these systems being used in guided missiles and drones within the decade.
>my ML TODO
I'd suggest starting with the recommended post from our /meta threads, Anon.
>-How to get started with AI/ML for beginners (>>18306)
>But we can take advantage of something really nice. We already have LLMs; you can use large LLMs to create the labeled data and basically extract what they learned. We are already seeing exciting developments where LLMs are being used to train other models.
Yes! This is a really nice innovation going on. I, for one, am eager to hear about this from one of our resident AI researchers, RobowaifuDev.
>A current task that could be of a lot of use to "us" is figuring out what kind of structured and labeled data would be really useful for our mission. It would be a good move to generate it now while access to very big LLMs is still quite open.
Indeed. And time may be running short for everyone in this regard as well : (>>24704).
Thanks for making such an effort-post here Anon. Those are always welcomed on /robowaifu/ ! Cheers. :^)
>>24612
>- I need to do a deep dive into psychology, this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice)
I answered here >>24861
I upgraded this: >>10317. Though nobody seems to care, since it was broken and no one complained or fixed it. It's not for downloading the files, just for getting the metadata for those which you already have. It doesn't work for renamed files, for example where the title of the paper was put into the name. I want to use this to extract the metadata and be able to use it in something like Obsidian, so I can have the description of the paper there and the title with the link. At some point, making keywords into tags automatically would also be interesting.

# pip install arxiv first
import os
import arxiv

# Getting metadata for your ArXiv.org documents
AI_PAPERS_DIR = os.path.expanduser("~/Games/Not-Games/AI_Papers/")
if not os.path.isdir(AI_PAPERS_DIR):
    AI_PAPERS_DIR = os.path.expanduser(input("The dir with papers: "))

filenames = os.listdir(AI_PAPERS_DIR)

# Filenames like '2106.01345.pdf' split into three dot-separated parts;
# dropping the 'pdf' part and rejoining recovers the arXiv id.
id_list = []
for filename in filenames:
    if len(filename.split('.')) == 3:
        id_list.append('.'.join([part for part in filename.split('.') if part != "pdf"]))
id_list = list(set(id_list))

client = arxiv.Client()
# Note: id_list must be passed by keyword, otherwise the first positional
# argument is treated as a free-text query instead of an id lookup.
result = next(client.results(arxiv.Search(id_list=[id_list[0]])))

print(vars(result).keys())
# dict_keys(['entry_id', 'updated', 'published', 'title', 'authors', 'summary',
#  'comment', 'journal_ref', 'doi', 'primary_category', 'categories', 'links',
#  'pdf_url', '_raw'])
print(f"{result.title}")
print(f"{result.summary}")
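A possible follow-on sketch for the Obsidian idea. Assumptions: the vault path is hypothetical, and id_list is seeded with one example arXiv id rather than taken from the directory scan above.

# One markdown note per paper: the title linked to the abstract page,
# with the summary below it. VAULT_DIR is a hypothetical path.
import os
import arxiv

VAULT_DIR = os.path.expanduser("~/ObsidianVault/papers")
os.makedirs(VAULT_DIR, exist_ok=True)

id_list = ["1706.03762"]  # e.g. the ids collected by the script above

client = arxiv.Client()
for result in client.results(arxiv.Search(id_list=id_list)):
    # Keep only filesystem-safe characters for the note filename.
    safe_title = "".join(c for c in result.title if c.isalnum() or c in " -_")
    note = f"# [{result.title}]({result.entry_id})\n\n{result.summary}\n"
    with open(os.path.join(VAULT_DIR, safe_title + ".md"), "w", encoding="utf-8") as f:
        f.write(note)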
>>26312 Thanks kindly, NoidoDev. :^)
>related : John Gall : Systemantics : ( >>33550, >>33553 )
>===
-add crosslink
Edited last time by Chobitsu on 09/14/2024 (Sat) 15:24:36.
