/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!

We are back. TOR has been restored.

Canary update coming soon.


“I think and think for months and years. Ninety-nine times, the conclusion is false. The hundredth time I am right.” -t. Albert Einstein


Open file (485.35 KB 1053x1400 0705060114258_01_Joybot.jpg)
Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155 [Reply] [Last]
What would be a good RW simulator? I guess I'd like to start with some type of PCG (procedural content generation) solution that just builds environments, and build from there up to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors, but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with little modification. Sort of like the OpenAI Gym concept, but for waifubots.
https://gym.openai.com/
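For a concrete picture of that Gym-style loop, here's a minimal sketch using gymnasium (the maintained successor to OpenAI Gym), assuming the MuJoCo extras are installed; the stock Humanoid-v4 environment is just a stand-in for a hypothetical future waifubot env:
```python
# Minimal Gym-style control loop; swap the random action for real
# robotics control code once it exists.
import gymnasium as gym

env = gym.make("Humanoid-v4")
obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()  # placeholder for a controller
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```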
157 posts and 68 images omitted.
>>35117 Right now, I'm not even sure how to do a full simulation of what I would want to build. I hope there will just be a general humanoid model that I can download and approximate mine to. I don't believe humanoid robots will do their movements through precise planning; it will be a guesstimate, then watching the sensors while moving. The current Teslabot also seems to use a form of very fast neural network: https://youtu.be/xxoLCQTN0KA
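As a toy sketch of that "guesstimate, then correct from the sensors" idea, here's a bare PID loop nudging a joint toward level from an IMU pitch reading; read_imu_pitch and set_hip_torque are hypothetical placeholders, not any real board's API:
```python
# Toy PID correction step; gains and function names are made up.
def pid_step(error, state, kp=4.0, ki=0.5, kd=1.0, dt=0.02):
    state["i"] += error * dt              # integral term accumulates
    d = (error - state["e"]) / dt         # derivative from last error
    state["e"] = error
    return kp * error + ki * state["i"] + kd * d

state = {"i": 0.0, "e": 0.0}
# inside a 50 Hz control loop:
#   pitch = read_imu_pitch()                  # hypothetical sensor read
#   set_hip_torque(pid_step(-pitch, state))   # hypothetical actuator call
```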
>>35141 This and all the following until my post is OT in this thread.
Open file (523.86 KB 1920x1080 Screenshot (73).png)
>>35179 I'm working on the CAD for the body. It's important to me that it has an adjustable spine, because that will play a big part in making the robot flexible. I will share the CAD, but I don't want it to be public; I want to share it with people that will be working on this. Please invite me on github https://github.com/peteblank The choice of her body being thicc is not only because of preference, but because it needs to house the components. The weight distribution will also be selected based on the center of gravity. The center of gravity should be in the middle (I think). I have a theory that it ought to have a decent chest weight to act as a counterweight for the spine pulling (think of the stability of pulling a bag of cement vs. pulling a wrench). I'm planning on making the test robot walk near the end of January. It'll have empty boxes on the chest to add and take away weights to test the balance. Since it's been decided the simulation should be on Isaac (I'd prefer Gazebo because it's the industry standard and you can get a job with it), can't find any Gazebo/NVIDIA Isaac tutorials ATM.
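A quick sanity-check sketch for the counterweight idea: sum the component masses into a combined center of gravity and see whether it sits over the foot span. Every number below is a made-up placeholder, not a measurement from the actual build:
```python
# Combined center-of-gravity check with placeholder masses/positions.
masses = {"frame": 8.0, "chest_box": 2.5, "battery": 3.0}   # kg
positions = {                 # (x, z) meters; x forward of the ankle
    "frame": (0.00, 0.60),
    "chest_box": (-0.05, 1.10),   # the adjustable counterweight box
    "battery": (0.02, 0.40),
}
total = sum(masses.values())
cog_x = sum(masses[k] * positions[k][0] for k in masses) / total
print(f"CoG x-offset: {cog_x:+.3f} m (want this inside the foot span)")
```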
>>35202 No, it seems Isaac is the industry standard now. OK then.
> (simulation system -related : >>36688 )

Open file (659.28 KB 862x859 lime_mit_mug.png)
Open-Source Licenses Comparison Robowaifu Technician 07/24/2020 (Fri) 06:24:05 No.4451 [Reply] [Last]
Hi anons! After looking at the introductory comment in >>2701, which mentions the use of the MIT licence for robowaifu projects, I read the terms: https://opensource.org/licenses/MIT Seems fine to me; however, I've also been considering the 3-clause BSD licence: https://opensource.org/licenses/BSD-3-Clause >>4432 The reason I liked this BSD licence is that endorsement using the creator's name (3rd clause) must be done by asking permission first. I like that term, as it allows me to decide if I should endorse a derivative or not. Do you think that's a valid concern? Initially I also thought that BSD has the advantage of forcing retention of the copyright notice, however MIT seems to do that too. It has been mentioned that MIT is already used and planned to be used. How would these two licences interplay with each other? Can I get a term similar to BSD's third clause, but with MIT?


Edited last time by Chobitsu on 07/24/2020 (Fri) 14:07:59.
104 posts and 14 images omitted.
>>34551 The team behind BigCode released an AI training dataset called The Stack. https://huggingface.co/datasets/bigcode/the-stack-v2 >3.28B unique files belonging to 104.2M github repositories were collected by traversing the Software Heritage 2023-09-06 graph dataset. Additional repository-level metadata was collected from GitHub Archive data up to 2023-09-14. As part of their data cleaning process, BigCode deleted all the repos they think are under a copyleft license. >The licenses we consider permissive are listed here. This list was compiled from the licenses approved by the Blue Oak Council, as well as licenses categorized as "Permissive" or "Public Domain" by ScanCode. The ScanCode tool works by keyword detection: if you say your project uses GnuTLS, which is under the "GNU Lesser General Public License version 2.1", it will think the entire repo is copyleft. TL;DR: If you don't want your repo in datasets like this, choose LGPL or use an LGPL dependency.
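If anyone wants to check what license metadata got attached to their repo, the dataset can be streamed with the Hugging Face datasets library. A sketch, assuming you've accepted the dataset's terms and authenticated; the column names (repo_name, detected_licenses) are my recollection of the dataset card, so treat them as assumptions:
```python
# Stream a few rows of The Stack v2 and print license metadata.
from datasets import load_dataset

ds = load_dataset("bigcode/the-stack-v2", split="train", streaming=True)
for i, row in enumerate(ds):
    print(row.get("repo_name"), row.get("detected_licenses"))
    if i >= 4:
        break
```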
>>34920 Heh. OK, thanks for the explanation, Anon. :^)
An all-purpose robot has the same revolutionary magnitude as the invention of the steam engine. Let's hope some money will come out of this, but there are also people that measure their success not only by their monetary compensation, but by the impact they had on the course of history.
>>34937 POTD >Let's hope some money will come out of this, but there are also people that measure their success not only by their monetary compensation, but by the impact they had on the course of history. I think I can state categorically that a significant portion of regulars on /robowaifu/ are dreamers, and we all think about the amazing transformation to civilization (indeed, redeeming it from the literal brink) that robowaifus represent, peteblank. Cheers. :^)
> (MIT licensing-argument -related : >>36315 )

Open file (522.71 KB 1920x1080 gen.png)
Nandroid Generator SoaringMoon 02/29/2024 (Thu) 13:54:14 No.30003 [Reply]
I made a generator to generate nandroid images. You can use it in browser, but a desktop version (that should be easier to use) will be available. https://soaringmoon.itch.io/nandroid-generator Not very mobile friendly unfortunately, but it does run. I made a post about this already in another thread, but I wanted to make improvements and add features to the software. >If you have any suggestions or ideas other than custom color selection, which I am working on right now, let me know.
22 posts and 11 images omitted.
Nice work SoaringMoon, please keep it up! Cheers. :^)
Open file (79.29 KB 448x736 nandroid (2).png)
>>31460 v0.9 - Sclera color is now customizable.
>>34154 Sweet
>>34154 Glad to see you keeping your project advancing, SoaringMoon. Keep it up! Cheers. :^)
Open file (55.85 KB 790x971 Screenshot Galatea.jpg)
I made my Galatea design in the Generator.

Papercraft waifu Robowaifu Technician 09/16/2019 (Mon) 06:21:35 No.271 [Reply]
Thoughts on making a paper waifu then adding robotics? I want animu grills, but most robots have uncanny 3DPD faces that aren't nearly as cute as a real waifu. With paper/screens, at least the face can keep the purity and beauty of 2D.
16 posts and 9 images omitted.
>>33827 >papercraft waifu shell Just in case you forgot, or someone else wants a good source: I found the mother of all paper mache recipe sites, by this grandmother. It's a great resource for making things from paper. She has all sorts of recipes. Some have various additives so you can get better dimensional stability. For prototyping I don't think paper can be beat. Fast, cheap, and once you have the shell or a paper positive, you can create molds from more solid materials while keeping prototype cost to a minimum. It's at this link: >>33318
>>34113 Thanks kindly, Grommet! My plan ATM is simply to unfold my 3D models from Blender into 2D flats, print & cut them out, then assemble them all together /po/ -style. After a coating or two, I can see using some type of papier-mâché coating to fashion a mold, perhaps (as you seem to suggest)?
>>34131 That seems like a super fast way to get results. And yes, I do mean using P.M. for molds. I was thinking about doing this for fiberglass and boats. I got the idea from the "concrete fabric formworks" guys. Type that into search and look at some of the images. It's wild what they are doing. Low cost, high quality (most of the water drains out, leaving far stronger concrete due to compaction), and they can do structures where the loads are designed into the form to be exactly where they are needed. Due to the curvature of the form, it automatically forms the correct reinforcement where needed.
>>34147 >And yes I do mean using P.M. for molds. Yeah, thanks I thought so. >and they can do structures that the loads are designed in the form to be exactly where they are needed. We certainly need to take advantage of similar approaches, for similar needs within our robowaifus. For example, mounting these shell pieces on internal endoskellington struts, etc., could use some beefing up on the attach points. Thanks for all the good ideas, Grommet! Please keep them coming, Anon. Cheers. :^)
>>34131 I have a great link for those interested in using flat structures to make 3D structures. The first link I messed up a little: I started reading farther down in the thread, where people were talking about cutting flat stuff and materials, and I commented before I realized the thread was machine tools. Sigh... oh well, the post belongs more in the structures thread, which I linked to later. Anyway, here's a link to the post on making these flat "isogrids", and then a link to some further ideas in the proper structures thread. Isogrids >>34491 Ideas about using them in structures >>34493

Waifus in society Robowaifu Technician 09/11/2019 (Wed) 02:02:53 No.106 [Reply] [Last]
Would you walk around with your waifu? Would you hold her in public? Would you shamelessly bring her along with you to conventions? Would you take her on dates? This thread is for discussing how you'd interact with your waifu outside of the home.
136 posts and 56 images omitted.
>>34310 While it is a possibility, the clownhaired rabid feminist types are more rare than the internet would lead you to believe (part of their strength is the illusion of it). If your robowaifu is helpful beyond companionship (e.g. can help you carry stuff), that would make it more socially acceptable for normies... but even normies won't mind if it is a robot with some neat, fun features. When I've seen public robots, folks tend to leave them alone when they're accompanied by humans, and respect the bot. Just try to avoid mentioning whether or not it is fully functional and anatomically correct ;) e.g. normie: "does your robowaifu have genitals?" robosexual: "do you?"
>>34310 It's not the women I'm concerned about, but the feral simp hordes they'll get to do their dirty bidding.
>>34330 Well in that case, just defend your property. There's no stigma against hitting another man.
>>34322 >That is not the first post. Actually, it was at the time that Anon made that post. Between that time and when you read it, I merged that whole thread into this one (which then became the last few posts ITT). This is a mundane example of so-called temporal sliding. >tl;dr You're both right! :^)
( AI/robowaifu vs society -related: >>36544 >>36573 >>36581 >>36590 >>36612 >>36615 >>36616 >>36617 )

SPUD (Specially Programmed UwU Droid) Mechnomancer 11/10/2023 (Fri) 23:18:11 No.26306 [Reply] [Last]
Henlo anons, Stumbled here via a youtube rabbit hole & thought I'd share my little side project. Started out as just an elaborate way to do some mech R&D (making a system to generate animation files in Blender on windows and export/transfer them to a raspberry pi system) and found tinkering with the various python libraries a kinda neat way to pass the time when weather doesn't permit my outside mech work. Plus I'd end up with a booth babe that I don't have to pay or worry about running off with a convention attendee. Currently running voice commands via google speech and chatgpt integration, but I'm looking into offline/local stuff like openchat. WEF and such are so desperate to make a totalitarian cyberpunk dystopia I might as well make the fun bits to go along with it. And yes. Chicks do dig giant robots.
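For anyone curious about the voice-command route, the usual Python path to Google's recognizer is the speech_recognition library; this is a generic sketch of that library's documented usage, not SPUD's actual code:
```python
# Listen once and print what Google's free recognizer heard.
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    print("listening...")
    try:
        audio = r.listen(source, timeout=5)   # give up if no speech in 5 s
        print("heard:", r.recognize_google(audio))
    except sr.WaitTimeoutError:
        print("no speech within 5 seconds")
    except sr.UnknownValueError:
        print("could not understand audio")
```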
497 posts and 277 images omitted.
>>34226 Hmmm James might be onto something here. I'll be back in a while lol. https://www.youtube.com/watch?v=AEXz8xyTC54
>>34226 Those speaker boobs are comically disturbing due to how they recess. Do they have some sort of cover that goes over them? It looks like you made some sort of attachment ring for something of mesh or foam i presume. Would help prevent speaker damage.
>>34254 This is a good example of why I keep pushing the use of LARPfoam for our robowaifu's 'undershells'. LARPagans have a big set of communities around this stuff today, and it's a good idea for us here to benefit from all this information. BTW, another Anon also posted this video here, but I can't locate that post ATM. >I'll be back in a while lol. Lol, don't stay away too long, Mechnomancer. ABTW, this is a daily reminder you'll need a new bread when you get back. Alway rember to link the previous throd in your OP. Cheers, Anon. :^) >>34259 Great find Kiwi, thanks very kindly. I wonder what Bruton has in store for his 'next design that walks better'? Cheers. :^)
New bread: >>34445

The Sumomo Project Chobitsu Board owner 11/24/2021 (Wed) 17:27:18 No.14409 [Reply] [Last]
So I've been working for a while at devising an integrated approach to help manage some of the software complexity we are surely going to encounter when creating working robowaifus. I went down many different bunny trails and (often) fruitless paths of exploration. In the end I've finally hit on a relatively simplistic approach that AFAICT will actually allow us to both have the great flexibility we'll be needing, and without adding undue overhead and complexity. I call this the RW Foundations library, and I believe it's going to help us all out a lot with creating workable & efficient software that (very hopefully) will allow us to do many things for our robowaifus using only low-end, commodity hardware like the various single-board computers (SBCs) and microcontrollers. Devices like the Beaglebone Blue and Arduino Nano for example. Of course, we likely will also need more powerful machines for some tasks as well. But again, hopefully, the RW Foundations approach will integrate smoothly with that need as well and allow our robowaifus to smoothly interoperate with external computing and other resources. I suppose time will tell. So, to commemorate /robowaifu/'s 5th birthday this weekend, I've prepared a little demonstration project called Sumomo. The near-term goal for the project is simply to create a cute little animated avatar system that allows the characters Sumomo and Kotoko (from the Chobits anime series) to run around having fun and interacting with Anon. But this is also a serious effort, and the intent is to begin fleshing out the real-world robotics needs during the development of this project. Think of it kind of like a kickstarter for real-world robowaifus in the end, but one that's a very gradual effort toward that goal and a little fun along the way. I'll use this thread as a devblog and perhaps also a bit of a debate and training forum for the many issues we all encounter, and how a cute little fairybot/moebot pair can help us all solve a few of them. Anyway, happy birthday /robowaifu/ I love you guys! Here is my little birthday present to you. === >rw_sumomo-v211124.tar.xz.sha256sum 8fceec2958ee75d3c7a33742af134670d0a7349e5da4d83487eb34a2c9f1d4ac *rw_sumomo-v211124.tar.xz >backup drop
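For anons grabbing the drop, a quick way to verify the tarball against the checksum above (this just recomputes SHA-256 in Python; running `sha256sum -c` on the .sha256sum file does the same from a shell):
```python
# Verify the Sumomo tarball against the posted SHA-256 digest.
import hashlib

expected = "8fceec2958ee75d3c7a33742af134670d0a7349e5da4d83487eb34a2c9f1d4ac"
h = hashlib.sha256()
with open("rw_sumomo-v211124.tar.xz", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)
print("OK" if h.hexdigest() == expected else "MISMATCH")
```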


Edited last time by Chobitsu on 10/22/2022 (Sat) 06:24:09.
159 posts and 97 images omitted.
>>33843 >Blender does a lot of relevant things to support high performance, hard and soft realtime requirements, and heterogeneous development. Not sure what you mean about realtime in Blender's case, but otherwise fair enough. It's a remarkable system today! :^) >Blender's design docs I've seen these in the past, but since I stopped actively building Blender 2-3 years ago, I kind of let it slip my mind. So thanks for the reminder. I personally like Blender's documentation efforts, though I've heard some disagree. Not-uncommonly, this is one of those tasks that get pushed to the 'back burner', and is often left to volunteer work to accomplish. Given the breadth & scope of the platform, I'd say the Blender Foundation has done a yeoman's job at the doco work, overall. Very passable. <---> Also, reading that link reminded me of USD. NVIDIA is currently offering developers their version of free training on this topic, and I've been pondering if I can make the time to attend. A huge amount of the DCC industry has come together to cooperate on Pixar's little baby, and today it's a big, sprawling system. Why it's of interest to us here is that most of what a robowaifu will need to do to analyze and construct models of her 'world' is already accounted for inside this system. While there are plenty of other (often higher-speed) ways to accomplish the same (or nearly the same) tasks, the fact that USD has become such a juggernaut, with a highly-regimented approach to scene descriptions, and with such broad approval, improves the likelihood IMO that other Anons from the film & related industries may in fact be able to help us here once they discover robowaifus in the future -- if we're already using USD to describe her world and the things within it. I hope all that made sense, Anon. https://openusd.org/release/glossary.html# >===
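To give anons a taste of how regimented USD's scene descriptions are, here's a tiny sketch with Pixar's Python bindings, assuming `pip install usd-core`; the stage and prim names are of course made up:
```python
# Author a minimal USD stage: a transform with a sphere under it.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("waifu_world.usda")
world = UsdGeom.Xform.Define(stage, "/World")
head = UsdGeom.Sphere.Define(stage, "/World/Head")
head.GetRadiusAttr().Set(0.12)   # radius in stage units (meters here)
stage.Save()                     # writes human-readable .usda text
```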


Edited last time by Chobitsu on 10/02/2024 (Wed) 17:12:35.
>>33845 >Not sure what you mean about realtime in Blender's case This page looks relevant: https://developer.blender.org/docs/features/cycles/render_scheduling/ Blender does progressive rendering, which starts by rendering low-resolution frames. If there's extra time left over before a frame needs to be rendered, it generates more samples to produce a higher-resolution frame. The equivalent for video generation at a fixed framerate would be running a small number of denoising steps for the next frame, and running additional denoising steps if the next frame doesn't need to be rendered yet. For text generation at a fixed token rate, it would be equivalent to doing speculative decoding for the initial response, then using (maybe progressively) larger models if the next token doesn't need to be output yet. For a cognitive architecture with a fixed response rate, I think the equivalent would be generating an initial response, then continually refining the response based on self-evaluations & feedback from other modules until the response needs to be output. >USD Very nice. I hadn't heard of this. It looks like a goldmine of information. Your explanation does make sense, and it's a great example of the sort of design patterns that I expect would be useful, in this case for modeling the environment & context.
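That "refine until the deadline" pattern is easy to express as an anytime loop; a minimal sketch, with draft_fn/refine_fn standing in for the small model and the progressively larger ones:
```python
# Anytime response: cheap draft first, refine while time remains.
import time

def anytime_respond(draft_fn, refine_fn, deadline_s):
    deadline = time.monotonic() + deadline_s
    response = draft_fn()                 # fast first pass
    while time.monotonic() < deadline:
        response = refine_fn(response)    # spend leftover budget improving
    return response

# toy usage: each "refinement" just increments a counter
print(anytime_respond(lambda: 0, lambda r: r + 1, deadline_s=0.05))
```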
>>33850 OK good point, CyberPonk. Such UX optimizations can fairly be said to be in the domain of soft-realtime. And certainly, integrating GPU processing code into the system to speed the rendering processes of Cycles & EEVEE has had major positive impacts. I personally think the fact that Ton chose to create the entire GUI for Blender in OpenGL all those years ago has had many far-reaching effects, not the least of which is general responsiveness of the system overall (especially as it has rapidly grown in complexity over the last few years). <---> >It looks like a goldmine of information Glad you like it! It's fairly easy to overlook that describing a scene is in fact a very-complex, nuanced, and -- I'm going to say it -- human undertaking. And when you consider that task from the deeply-technical aspect that USD (and we here) need to accommodate, then you wind up with quite a myriad of seeming-odd-juxtapositions. Until D*sney got their claws into it, Pixar was a one-of-a-kind studio, and well up to such a complicated engineering effort. I doubt they could do it as well today. If at all. DEI DIE to the rescue!111!!ONE! :D Cheers, Anon. :^) >=== -fmt, minor, funpost edit
Edited last time by Chobitsu on 10/03/2024 (Thu) 03:49:23.
>>33857 I looked up that USD. "USD stands for “Universal Scene Description”". I hadn't heard of it. Wow, that's a super comprehensive library and format. Hats off to Pixar for open-sourcing this.
>>34201 >Hats off to Pixar for open-sourcing this. Well, it's a vested interest, but yeah; you're absolutely correct, Grommet. Sadly, I'm sure they couldn't even pull it off today; they've become quite afflicted with the incompetency crisis. >protip: competency doesn't cause a crisis, only incompetency does. :^)

Open file (329.39 KB 850x1148 Karasuba.jpg)
Open file (423.49 KB 796x706 YT_AI_news_01.png)
Open file (376.75 KB 804x702 YT_AI_news_02.png)
General Robotics/A.I./Software News, Commentary, + /pol/ Funposting Zone #4 NoidoDev ##eCt7e4 07/19/2023 (Wed) 23:21:28 No.24081 [Reply] [Last]
Anything in general related to the Robotics or A.I. industries, and any social or economic issues surrounding it (especially of robowaifus). -previous threads: > #1 (>>404) > #2 (>>16732) > #3 (>>21140)
524 posts and 191 images omitted.
> this thread <insert: TOP KEK> >"There is a tide in the affairs of men, Which taken at the flood, leads on to fortune. Omitted, all the voyage of their life is bound in shallows and in miseries. On such a full sea are we now afloat. And we must take the current when it serves, or lose our ventures." >t. A White man, and no jew...
>>34164 DAILY REMINDER We still need a throd #5 here. Would some kindly soul maybe NoidoDev, Greentext anon, or Kiwi please step up and make one for us all? TIA, Cheers. :^)
Open file (2.43 MB 2285x2962 2541723.png)
>>34230 Guess it's up to me again. This was much easier than the meta thread. Took me like fifteen minutes, and ten of those were spent browsing in my image folders for the first two pics. Changes are as follows: + New cover pic + Added poner pic + New articles ~ Minor alteration to formatting >>34233
>>34234 >Guess it's up to me again. Thanks, Greentext anon! Cheers. :^)
>>34234 NEW THREAD NEW THREAD NEW THREAD >>34233 >>34233 >>34233 >>34233 >>34233 NEW THREAD NEW THREAD NEW THREAD

Open file (14.96 KB 280x280 wfu.jpg)
Beginners guide to AI, ML, DL. Beginner Anon 11/10/2020 (Tue) 07:12:47 No.6560 [Reply] [Last]
I already know we have a thread dedicated to books, videos, tutorials etc. But there are a lot of resources there, and as a beginner it is pretty confusing to find the correct route to learn ML/DL advanced enough to be able to contribute to the robowaifu project. That is why I thought we would need a thread like this. Assuming that I only have basic programming in Python, dedication, love for robowaifus, but no maths, no statistics, no physics, no college education, how can I get advanced enough to create AI waifus? I need a complete pathway directing me to my aim. I've seen that some of you guys recommended books about reinforcement learning and some general books, but can I really learn enough by just reading them? AI is a huge field, so it's pretty easy to get lost. What I did so far was to buy a great (non-English) book about AI: philosophical discussions of it, general algorithms, problem-solving techniques, its history, limitations, gaming theories... But it's not a technical book. Because of that I also bought a few courses on this website called Udemy. They are about either Machine Learning or Deep Learning. I am hoping to learn basic algorithms through those books, but because I don't have the maths it is sometimes hard to understand the concepts. For example, even when learning linear regression, it is easy to use a Python library, but I can't understand how it exactly works because of the lack of calculus. Because of that issue I have a hard time understanding algorithms. >>5818 >>6550 Can those anons please help me? Which resources should I use in order to be able to produce robowaifus? If possible, you can even create a list of books/courses I need to follow one by one to be able to achieve that aim of mine. If not, I can send you the resources I got and you can help me put those in order. I also need some guidance about maths, as you can tell. Yesterday, after deciding and promising myself that I will give whatever it takes to build robowaifus, I bought 3 courses about linear algebra, calculus, and stats, but I'm not really good at them. I am waiting for your answers anons, thanks a lot!
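Since OP mentioned linear regression specifically: the calculus hiding behind the library call is just the gradient of the mean-squared error, derived by hand and followed downhill. A minimal from-scratch sketch (the numbers are arbitrary toy data):
```python
# Linear regression by gradient descent, no ML library needed.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # ground truth: w=3, b=2

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)   # d/dw of mean((y_hat - y)^2)
    grad_b = 2 * np.mean(y_hat - y)         # d/db of the same loss
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")       # should land near 3 and 2
```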
74 posts and 106 images omitted.
>>33765 >But I could see training one to recognize voice, one to deal with movement, one to deal with vision, specifically not running into things, and maybe one for instruction on a low level. Like move here, pick up this, etc If I remember right, Mark Tilden referred to this as a "horse and rider" setup, where you have a high-level program giving direction to a lower-level program. The lower level worries about not stepping in a hole, etc., while the high level worries about where the pair are going. I too have experienced the boons of separating different functions into different programs/AIs. To give a real-life example of what you're talking about: my voice recognition AI doesn't like to run in the same program as the image recognition AI. I've experienced some programs running at different speeds, e.g. on a raspi it takes half a second for the image recognition to run, while the servo program can run like 2 dozen times a second, and the voice detection pauses the program until words are heard (or a 5 second timeout). These different speeds/natures of the code require separation, which in turn requires developing a way to communicate with each program. >>33746 >Starting Best way to start is looking for code/a library that does what you want (like image recognition), and trying to tweak it to fit your needs, like making it interact with other programs, e.g. if an object is recognized in an image, move a servo.
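One simple way to get that separation plus communication in Python is one process per function with message queues between them; a generic sketch of the pattern, not anyone's actual robowaifu code:
```python
# Slow "vision" process feeds events to a fast "servo" loop via a queue.
from multiprocessing import Process, Queue
import time

def vision_worker(out_q):
    while True:                       # stand-in for ~2 Hz image recognition
        time.sleep(0.5)
        out_q.put(("vision", "object_detected"))

def servo_loop(in_q, n_events=3):
    for _ in range(n_events):         # blocks until a message arrives
        topic, msg = in_q.get()
        print(f"servo reacting to {topic}: {msg}")   # move a servo here

if __name__ == "__main__":
    q = Queue()
    Process(target=vision_worker, args=(q,), daemon=True).start()
    servo_loop(q)
```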
>>33767 >I've experienced some programs running at different speeds Asynchrony is a deep topic in systems engineering for complex 'systems-of-systems' -- which full-blown robowaifus will certainly be in the end. The buffering and test, test, test combo has been the most successful engineering approach to this issue thus far, AFAICT. Just imagine the timing difficulties that had to be surmounted by the men who created and operated the Apollo spacecraft out to the Moon & back! Lol, our problems here are actually much more complicated (by at least a couple orders of magnitude)!! :DD Kiwi discussed the desire that >A thread dedicated to man machine relationships may be needed : ( >>33634 ), and I agree. While my guess is that he meant for that thread to primarily be focused on the psychological/social aspects of those relationships, I would argue that the engineering of complex parts of our robowaifu's systems that in any way involve responsiveness or interaction-timing with her Master (or others) is definitely fair game for such a thread. The reason is simple: timing of interactions -- particularly verbal ones -- clearly affects the social perceptions of those (for all parties involved). >tl;dr < If it takes our robowaifus more than half a second to begin to respond to her Master's engagements, then we've waited too long... Cheers. :^) >=== -fmt, prose edit
Edited last time by Chobitsu on 09/28/2024 (Sat) 03:08:07.
>>33905 Thanks, Anon. Nice degree programs. And at least one version of many of these lectures are available online! Cheers. :^)
>>33767 Thanks for the advice. It's welcome.

Artificial Wombs general Robowaifu Technician 09/12/2019 (Thu) 03:11:54 No.157 [Reply] [Last]
I think we need to figure out how to fit a womb onto a waifubot. Where's the fun in having sex if you can't procreate?

Repost from a thread on /b/;
>"If you're like me and want to fuck a qt battlebot and get her pregnant, the best place to put an artificial womb is where a normal womb would be on a normal girl. The metal exterior could simply be a bunch of metal plates that unfold to allow space for the womb pod inside. The baby is probably safest inside the battlebot, and if she has good calibration then there shouldn't be problems with her falling and hurting the baby. After giving birth the metal plates could automatically fold back up again, shrinking the womb pod inside so she is combat effective again."

Well /robowaifu/? Feasible?
202 posts and 28 images omitted.
https://kr-asia.com/are-synthetic-wombs-the-future-of-childbirth-new-chinese-experiment-sparks-debate >Recently, the First Affiliated Hospital of Zhengzhou University (ZDYFY) announced a groundbreaking synthetic womb experiment without the use of extracorporeal membrane oxygenation (ECMO). In this experiment, a four-month-old fetal lamb survived for 90 minutes while hooked up to a unique apparatus, maintaining vital signs through a connection with its mother.
https://hakaimagazine.com/news/scientists-built-an-artificial-shark-uterus/ >In a new paper published in Frontiers in Fish Science, the researchers show that this manufactured uterus can sustain midterm Moller’s lanternshark embryos for up to a year, about two-thirds of their normal 18-month gestation period. The scientists hope this system will help sharks in their aquarium and eventually be used to bolster wild populations of other endangered shark species.
>>33991 >>34003 Very good news, and I'm also glad they found a reason to justify this research to the general public. Helping with species close to extinction, or bringing them back, opens up a lot of opportunities to get funding.
