/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102
Why is it that no philosophers are interested in building an AGI? We need to change this, or at least collect the relevant philosophers. Discussion about the philosophy of making AGI (including metaphysics, transcendental psychology, general philosophy of mind topics, etc.!) is also highly encouraged! I'll start ^^! The philosophers I know of who take this stuff seriously:

Peter Wolfendale - the first Neo-Rationalist on the list. His main contribution here is computational Kantianism. Just by the name you can tell he believes Kant's transcendental psychology has some important applications to designing an artificial mind. An interesting view of his is that Kant actually employed a logic far ahead of its time (you basically need a sophisticated type theory with sheaves to properly formalize it). Other than that, he also thinks Kant has interesting solutions to the frame problem, the origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and has also posted some lectures on YouTube, like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics

Reza Negarestani - another Neo-Rationalist. He has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence: sentient agents, sapient agents, and Geist. This guy draws from Kant as well, but he also builds on Hegel's ideas. His central thesis is that Hegel's Geist is basically a distributed intelligence. He also has an interesting metaphilosophy, in which he claims that the goal of philosophy is to construct an AGI. Like other Neo-Rationalists, he relies heavily on the works of Sellars and Robert Brandom.

Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular on rule-following, is very insightful!

Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common-sense knowledge. I personally think Bergson might have explicated what he wanted to achieve, but better, though that guy is like way before AI was even a serious topic, lol.

Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff!

Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key points he stresses, together with a colleague, is that our current AI is Markovian, while fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He is also knowledgeable about analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius, however, is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context. CONTACTS: He has a YouTube channel here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

Uhh, that's the introduction of pretty much every philosopher I know who works on this stuff.
I made a thread on /lit/ and got no responses :( (which isn't surprising since I am the only person I know who is really into this stuff)
>>17659 he furthermore talks about the Good, which is the form of forms and makes the division and the integration of the line possible. it is the continuity of the divided line itself. within a view from nowhere and nowhen that deals with time-general thoughts, the Good can be crafted. the Good gives us a transcendental excess that motivates continual revision and expansion of what is intelligible. im thinking that the Good is either related to or identical to the Eidos that negarestani discussed earlier

he notes the importance of history as a discipline that integrates and possibly reorients a variety of other disciplines. the view from nowhere and nowhen involves the suspension of history as some totality by means of interventions. currently we are in a situation of a “hobbesian jungle” where we just squabble amongst ourselves and differences seem absolute. in reality, individual differences are constructed out of judgements and are thus subsumed by an impersonal reason. in order to reconcile individual differences, we must have a general program of education, amongst other interventions which are not simply those of political action. to get out of the hobbesian jungle, we need to be able to imagine an “otherworldly experience” that is completely different from the current one we operate in, even though it is fashioned from the particular experiences of this one. this possible world would have a broader scope and extend towards the limits placed by our current historical totality.

absolute knowing: recognition by intelligence of itself as the expression of the Good, capable of cancelling any apparently complete totality of history. it is only by disenthralling ourselves from the enchanting power of the givens of history that the pursuit of the Good is possible. the death of god (think here of nietzsche… hegel also talks about it, though i believe for him the unhappy consciousness was a problematic shape of consciousness that was a consequence of a one-sided conception of ourselves) is the necessary condition of true intelligence. this is not achievable by simply rejecting these givens, but by exploring the consequences of the death of god. ultimately we must become philosophical gods, beings that move beyond the intelligibilities of the current world order and eventually bring about their own death in the name of the better. ultimately negarestani sees this entire quest as one of emancipation

i think negarestani takes a much more left-wing approach to hegel's system. while i do not completely disagree with his interpretation of absolute knowing, it does seem as though he places much more of an emphasis on conceptual intervention rather than contemplation. i am guessing this more interventionist stance is largely influenced by marx... overall, not a bad work. i think it might have been a little bit overhyped, and that last chapter was rather boring to read due to the number of times he repeats himself. i am not really a computational functionalist, but i still found some interesting insights regarding the constitution of sapience that i might apply to my own ideas. furthermore, he mentions a lot of interesting logical tools for systems engineering that i would like to return to. now that i am done with negarestani, i can't really think of any other major tome to read on constructing artificial general intelligence specifically.
goertzel's patternist philosophy strikes me as rather shallow (at least the part that tries to actually think about what intelligence itself is). joscha bach's stuff meanwhile is largely just the philosophy of cognitive science. not terrible, but it feels more like reference material than paradigm-shifting philosophical analysis. maybe there are dreyfus and john haugeland, who both like heidegger, but they are much more concerned with criticizing artificial intelligence than with talking about how to build it. i would still consider reading up on them sometime to see if they have anything remarkable to say (as i already subscribe heavily to ecological psychology, i feel as though they would really be preaching to the choir if i read them). lastly there are barry smith and landgrebe, who have just released their new book. it is another criticism of ai. might check it out

really there are 2 things in front of my sights right now. the first would be texts on ecological psychology by gibson and turvey, and the other would be adrian johnston's adventures in transcendental materialism. i believe the latter may really complement negarestani. i will just quote some thoughts on this that i have written:
>curious to see how well they fit. reading negarestani has given me more hope that they will. bcs he talks about two (in his own opinion, complementary) approaches to mind. one that is like rationalist/idealist and the other that is empiricist/materialist. the first is like trying to determine the absolutely necessary transcendental cognitions of having a mind, which ig gives a very rudimentary functionalist picture of things. the second is like trying to trace the more contingent biological and sociocultural conditions which realized the minds we see currently. and i feel like johnston is really going to focus on this latter point while negarestani focuses on the former

anyways, neither of these directions is really explicitly related to ai, so i would likely not write about them here. all of this is me predicting an incoming (possibly indefinite) hiatus from this thread. if anyone has more interesting philosophers they have found, by all means post them here and i will try to check up on them from time to time... i believe it is getting to be time i engage in a bunch of serious grinding that i have been sort of putting off while reading hegel and negarestani. so yeah
>>17520
>finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated.
Now, the two kinds of seeing seem to come from two different ways to abstract observations. Seeing 1 corresponds to coarse-graining, while seeing 2 corresponds to a change in representation. Practically, it's related to the difference between sets and whole numbers. There's only one whole number 2, but there are many sets of size 2. Similarly, there's only one way to coarse-grain an observation such that the original can be recovered (the trivial coarse-graining operation that leaves the observation unchanged), but there are many ways to represent observations such that the original observation can be recovered. Also practically, if you want to maintain composability of approximations (i.e., approximating B from A then C from B is the same as approximating C from A), then it's usually (always?) valid to approximate the outcome of coarse-graining through sampling, while the outcome of a change in representation usually cannot be approximated through sampling. If that's the right distinction, then I agree that the use of this distinction in differentiating sapience from sentience is unclear at best. It seems pretty obvious that both sentience and sapience must involve both kinds of seeing. I intend to read the rest of your posts, but it may take me a while.
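To make the coarse-graining vs. change-of-representation contrast concrete, here is a small numpy toy (my own framing, not the poster's): pair-averaging is many-to-one and lossy, while an invertible re-encoding such as the discrete Fourier transform can always be undone.

import numpy as np

x = np.array([1.0, 3.0, 2.0, 6.0])

# Coarse-graining: average adjacent pairs. Many different inputs map to
# the same output, so the original cannot be recovered (only the trivial
# identity coarse-graining is invertible).
coarse = x.reshape(-1, 2).mean(axis=1)            # -> [2.0, 4.0]

# Change of representation: an invertible re-encoding (here, the FFT).
# There are many such encodings, and each one lets us recover x exactly.
rep = np.fft.fft(x)
x_back = np.fft.ifft(rep).real                    # -> [1.0, 3.0, 2.0, 6.0]

assert not np.allclose(np.repeat(coarse, 2), x)   # information was lost
assert np.allclose(x_back, x)                     # information preserved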
> (AI philosophy crosslink-related >>21351)
I postulate that AI research claims are not scientific claims... they are advertisement claims. :^) https://www.baldurbjarnason.com/2023/beware-of-ai-snake-oil/
What do you guys think of my thread about that here: https://neets.net/threads/why-we-will-have-non-sentient-female-android-robots-in-2032-and-reverse-aging-tech-in-2052-thread-version-1-1.33046/ I can't post it here because it's too long and it has too many images.
Interesting forum, but "sentience" and "conscience" aren't very well defined terms. That aside, I hope Chobitsu moves that into meta or one of the AI threads. No problem that you didn't know, but this board isn't like other image boards; you can use existing and old threads.
>>24624 Hello Anon, welcome! These are some interesting topics. I've had a brief survey of the two threads, and they definitely cover things we're interested in here on /robowaifu/. I'm planning to merge your thread into one of our other threads already discussing these things. Please have a good look around the board. If you'd care to, you can introduce yourself/your community in our Embassy thread too (>>2823). Cheers :^)
>>24624 Do you yourself have any plans to build a robowaifu, Anon, or is your interest primarily in discussing the AI side of things?
>>24631 I'll probably build my own robowaifu, but I don't really know, because their construction might be automated by the time they are ready. I'm interested in working in the robowaifu industry. I'm also here to give some lifefuel, and let you guys know that we won't have to wait 50 years for the robowaifus. This other thread might interest you; it's about how the behavioral sink will make all men need robowaifus within some years: https://neets.net/threads/the-behavioral-sink-in-humans.31728/
It's entirely conceivable this type of approach could help our researchers 'use AI to help build better AI'. In programmer's parlance, this is vaguely similar to 'eating your own dogfood', but with the potential for automated, evolutionary, self-correcting design trajectories. At least that's my theory on the matter! :^)

Fifth Paradigm in Science: A Case Study of an Intelligence-Driven Material Design [1]
>abstract
>"Science is entering a new era—the fifth paradigm—that is being heralded as the main character of knowledge integrating into different fields to intelligence-driven work in the computational community based on the omnipresence of machine learning systems. Here, we vividly illuminate the nature of the fifth paradigm by a typical platform case specifically designed for catalytic materials constructed on the Tianhe-1 supercomputer system, aiming to promote the cultivation of the fifth paradigm in other fields. This fifth paradigm platform mainly encompasses automatic model construction (raw data extraction), automatic fingerprint construction (neural network feature selection), and repeated iterations concatenated by the interdisciplinary knowledge (“volcano plot”). Along with the dissection is the performance evaluation of the architecture implemented in iterations. Through the discussion, the intelligence-driven platform of the fifth paradigm can greatly simplify and improve the extremely cumbersome and challenging work in the research, and realize the mutual feedback between numerical calculations and machine learning by compensating for the lack of samples in machine learning and replacing some numerical calculations caused by insufficient computing resources to accelerate the exploration process. It remains a challenging of the synergy of interdisciplinary experts and the dramatic rise in demand for on-the-fly data in data-driven disciplines. We believe that a glimpse of the fifth paradigm platform can pave the way for its application in other fields."
1. https://www.sciencedirect.com/science/article/pii/S2095809923001479

>>24646
Thanks Anon. Yes, we've discussed the so-called 'Mouse Utopia' and other sociological phenomena involved with the rise of feminism here before now. I doubt not there is some validity to the comparisons. However, as a believing Christian, I find far more substantial answers in the Christian Bible's discussions of the fallen nature of man in general. There are also arrayed a vast number of enemies -- both angelic and human -- against men (males specifically), that seek nothing but our downfall. /robowaifu/, at least in my view of things, is one response to this evil situation in the general sense; with tangible goals to assist men to thrive in this highly-antagonistic environment we all find ourselves within. And almost ironically... robowaifus can potentially even help these so-called 'Foids' become more as they were intended by God to be -- in the end -- as well, I think (by forcing them back to a more realistic view of their individual situations, less hindered by the Globohomo brainwashing they're seemingly all so mesmerized by currently. To wit: self-entitlement and feminism). :^)
Our friends over at /britfeel/ are having an interesting conversation about AI r/n. https://anon.cafe/britfeel/res/5185.html#5714
So I'm thinking about this more from a robot/AGI ethical perspective, with a view to sorting "intelligence mimicking in a shallow form" (externalities only) vs "intelligence mimicking in a deep form" (mimicking the cause of the externalities along with the normal external aspects) vs actual AGI.

My understanding, at least, is that reason and our intellectual capabilities are the highest parts of us; however, they can't really function apart from the rest of us. The reason why is the moral aspect. I'm coming from a very Greek perspective, so I should probably clarify how they relate. The basics would be: you have a completeness to your being, a harmony to your parts, such that when your parts are functioning as they ought, they better show forth your being/your human nature. So the different parts of a person (eyes, ability to speak, etc.) have to be used in such a way as to bring about the full expression of that nature. That ties in the intellect, in that it involves using it to understand those parts, how they relate to the whole, and the whole itself. That, along with you choosing to understand that relationship, and then, seeing the good of the whole functioning harmoniously, acting to bring it about: that's the essence of moral action.

The proper usage of those powers ends up getting complicated, as it involves your personal history, culture, basically every externality, so there aren't really hard and fast rules. It's more about understanding the possibilities of things, recognizing them as valuable, and then seeking to bring them about (out of gratitude for the good they can provide). Now the full expression of that human nature, or any particular nature, or at least getting closer to it -- the full spectrum of potential goods it brings about -- is only known over a whole lifetime. That's what I was referring to by saying personal history: as you move along in your lifetime, the ideal rational person would be understanding everything in front of them in relation to how it could make for a good life. It's the entire reason why you have the ability to conceptualize at all: in everything you see and every possible action, you see your history with it, and how that history can be lifted up and made beautiful in a complete life, in a sort of narrative sense. The memory thing isn't that hard technically; it's more the seeing of the good of the thing itself.

The robot or AGI would need to sort of take on an independent form of existence and have a good that is not merely instrumental to another person. Basically, be in some form a genuine synthetic life form. (Most people who hold these views just think any actual AGI is impossible.) One of the key things is that this sort of good of the thing itself, human nature or whatever, is not something anyone has a fully explicit understanding of; there is no definition that can be provided. It's an embodied thing with a particular existence and a particular history. Each individual's expression of their being will differ based on the actual physical things they relate to, and things like the culture they participate in. The nature is an actual real thing constituting a part of all things (living and otherwise), and all intellectual activity is a byproduct of those things' natures interacting with the natures of other things, and those natures aren't something that can ever be made explicit.
(This ties in with all the recent extended-mind cog-sci and philosophy of mind.) (One author I want to read more of is John Haugeland, who talks about the Heideggerian AI stuff; he just calls this the ability to give a damn. A machine cannot give a damn, and apart from giving a damn you have no capacity for intellectual activity, for the reasons I stated above.) That's sort of the initial groundwork:
>>30634
>For an Actual AGI
It does leave it in a really hard place. I think it's still possible, but it would probably involve something far in the future. It would need to be someone intentionally creating independent synthetic life, for no instrumental purpose. That, or you could have something created for an instrumental purpose with the ability to adapt, which eventually attains a synthetic form of life. A cool sci-fi story about this is The Invincible by Stanislaw Lem (I don't want to spoil it, but if you are in this thread you will probably love it).

The main issue, to get more technical, comes down to artifacts vs. substances, using the Aristotelian language. There are those things that have an intrinsic good-for-themselves, things that act-to-help-themselves-stay-themselves -- things with specific irreducible qualities. Life is the clearest example: it's something greater than the sum of its parts, it acts to maintain itself, and its parts are arranged in such a way as to serve the whole rather than the parts themselves. It's only fully intelligible in reference to the whole. Other examples would be anything that cannot be reduced to parts without losing some kind of essential aspect, perhaps chemicals like aluminum or certain minerals. Those are less clear than life, and I'm not a biologist/geologist or w/e, so it's hard to say. Styrofoam would be a synthetic one.

Artifacts, on the other hand, are totally reducible to their parts; there is no mysterious causality, nothing is hidden. Another way of looking at them is that they are made up of substances (things with natures, which I talked about above) "leaning against each other" in a certain way. Things like a chair or a computer. A chair doesn't do anything to remain a chair; the wood will naturally degrade if it's not maintained, and there is nothing about the wood that makes it inclined to be a chair. Everything the chair does is also totally reducible down to what the wood can do; there is nothing more added. A chair is only just the sum of its parts. The same goes for computers, at a much more complicated level with different materials/compounds/circuitry: it's still just switches being flicked using electricity, and all the cool stuff we get from computers can be totally understood down to the most basic level. Only the sum of its parts again.

The point here would be that to have an AGI you'd need something that is more than the sum of its parts, which might be impossible, and if it did happen it would probably be pretty weird. On the plus side, people like Aristotle wouldn't consider most people to be using their full rationality anyway... normies/sheeple and all that. But even a bad usage of a real intellect is still very hard to mimic. Now that the metaphysics lessons are out of the way, it will hopefully be shorter.
>>30635
Forms of intelligence mimicking:
>Shallow/external
This is the easiest one, and is what basically all the actual research focuses on. I don't really think this will ever suffice for robowaifus or anything getting close to AGI. To define it better: it basically ignores everything I said above and makes no attempt to simulate it. There is no intrinsic conception of the thing's-own-good, no focus on that kind of narrative/moral behavior and memory. As far as I can tell, in practice this means a vague hodgepodge of whatever data they can get, so it's entirely heuristic. Any actual understanding of what anything or who anyone is, is not a possibility. Keep in mind, from what I said above, that kind of moral understanding is required for any real intellectual activity. To give a more relevant example for this board (and this has particular bearing for relationships as well): when a waifu encounters something, in order for it to be even at all satisfying as a simulation, she must see it in relation to the good of her own being, as well as the good of her partner. Involved in that is understanding the other person, their good, and the things they are involved with. It's very memory/narrative based: seeing anything that pops up and questioning it for how it integrates with the robot's life narrative, as well as the integrated robot-you life narrative. The foundation of that more moral aspect of intelligence is essential, and is something that needs to be explicitly factored in. That particularity and memory is necessary for intelligence, as well as even just for identity.
>Deep mimicking
This is, I think, more of a technical/programming question, and one I plan on learning more about. There doesn't seem to be any philosophical difficulty or technical impossibility; as far as I know it's mostly a matter of memory and maybe additional training. I imagine there would be quite a bit of "training/enculturating" involved with any specific robot, since, as I said above, intelligence by its nature is highly particular. I'm not sure where the philosophical issues would come up. Technically, it might just be overwhelming to store the breadth of organized, particular information. The key thing would be making sure things are looked at functionally, i.e., what is the full set of possible actions that can be done with X (but for everything the robot could possibly do). Obviously that's a ridiculous set of data, so some heuristics/training would be involved, but that happens with real people anyway. (There's also the issue of the robot only being mindful of the anon's good, which would work "enough"; however, without somehow having its own synthetic/simulated "good/nature/desires", it would probably feel very fake/synthetic. That's a very hard part; no clue what to do for that. I'm just going to assume it's parasitic off the anon's good for the sake of simplicity. Funnily enough, that may make this kind of AGI better suited for waifus than anything else.)

As a simple example, let's say:
>bring anon a soda
as a possible action (all possible actions at all times would need to be stored somehow, chosen between, ranked, and abandoned if a more pressing one shows up). But for the soda, what is involved is: just recognition of the soda, visual stuff, standard vision stuff that shouldn't be an issue. That, or you could even tie in Amazon purchases with the AI's database, so it knows it has it and associates the visuals with the specific thing, or something.
What are the possible actions you can do with the soda?
>Throw it out if it's harmful (creates a mess, bad for anon because it's expired, bad for anon because it's unhealthy)
>Order more if you are running low
>Ask anon if he wants a soda
>Bring anon a soda without asking
Not that many things. I guess the real hard part comes when you relate it to the good of anon, and in ranking the possible options. At all times it would need to keep just the list of possible actions, which would be constantly changing and pretty massive, and be able to take in new ones constantly. (Say you get a new type of soda, but you only get it for a friend who visits; it needs to register that as a distinct kind and have different behaviors based on that.) The real philosophically interesting question is what kind of heuristic you can get for "the good of anon", the means of rank-ordering these tasks. Because intelligence is individual, for the sake of this it needs to be based on an individual; it's kind of hitching onto an actual nature to give a foundation for its intelligence. So long as it had that list of actions, it could have something like a basic template that maybe you customize for ranking (a minimal sketch of such a loop follows this post). It would also need to be tracking all the factors that impact ranking (you are asleep, busy, sick, not home, etc.). For the states, I don't think there would be that many, so they could be added in as they come up. (It just requires there to be strong, definite memory.) But I mean, it might not actually be that hard unless I'm missing something; certainly technically quite complicated, but it seems fairly doable at some point...

The key difference between this and most stuff I see talked about is that it basically only has any understanding/conceptuality related to another actually real being, and basically functions as an extension of them, which works for a waifu at least. The self-activity seems hard to map on, though; there would likely be something very interesting in working out how you associate basic motor functionality like moving/maintenance with the anon. Determining what to store, and how to integrate things into a narrative, would also be interesting. I imagine there would just be human templates + conversations with anon about future goals that would be used to handle the rank-ordering. Something funny coming from that would be that the intelligence of the robot would be dependent on how intelligent the user is. If you were a very stupid/immoral person, your robot would probably not ever really get convincing or satisfying. (Was not expecting it to go this long, heh.)
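Since the post above already spells out the pieces (a big mutable list of candidate actions, a "good of anon" heuristic, state flags that reshuffle the ranking), here is a minimal Python sketch of that loop. Every name in it (Action, anon_good, the context flags and the scores) is a hypothetical illustration, not an existing API.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Action:
    priority: float                      # lower value pops first (min-heap)
    name: str = field(compare=False)

def anon_good(name: str, context: dict) -> float:
    """Toy heuristic scoring how much an action serves anon right now."""
    base = {"discard_expired_soda": 0.9, "reorder_soda": 0.4,
            "offer_soda": 0.6, "bring_soda": 0.5}[name]
    if context.get("anon_asleep"):       # state flags reshuffle the ranking
        base *= 1.0 if name == "reorder_soda" else 0.1
    return base

def rank(actions, context):
    # Negate the score so the min-heap yields the best action first.
    heap = [Action(-anon_good(a, context), a) for a in actions]
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap).name

print(list(rank(["discard_expired_soda", "reorder_soda",
                 "offer_soda", "bring_soda"], {"anon_asleep": False})))
# -> ['discard_expired_soda', 'offer_soda', 'bring_soda', 'reorder_soda']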
First, robots can never be conscious. They have no spirit or soul, and never can and never will. You can NEVER transfer a human soul into a robot as some so foolishly claim. They are just computer programs with 1's and 0's at the end of the day. They are machines. They are not living and can never be living. Emulation of humans is all that is happening there. People are recording the logic of people into machines, and then the machine seems to have that intelligence, but it is only a recording of our intelligence being reflected back to us, like looking in a mirror. If you look in a mirror and see a human in there, the mirror is not alive; you just see your reflection. The same principle is there with robot coding reflecting our human intelligence. It is only a reflection and is not living.

So if someone does the AI and robot build unto God with pure motives, it is wholesome and pure and praiseworthy. If someone builds it with evil motives, it is an evil pursuit. Intentions are the key. If someone builds it to worship it, that is idolatry. So bad. If someone believes it really is living (because they are a fool), or that it really has genuine intelligence and a soul (they're a fool), then in the eyes of such a deceived person they may feel like they played God, but in fact they did nothing even close, because it is not anything close to that. No amount of code will ever create "real" intelligence. That is why the field is called artificial intelligence. It is artificial, not real. It will never be real.

Blasphemous fools speak all the time about the intelligence of machines passing our own, becoming conscious, becoming sentient, deserving citizenship and rights, etc. They are completely blind and total fools. They think people are just machines too, btw. They think we are just meat computers. These same people do not believe in God or an afterlife. They are in total error.
>>30667
>You can NEVER transfer a human soul into a robot as some so foolishly claim.
Soul is a pretty ill-defined thing, even in a religious context.
>They are just computer programs with 1's and 0's at the end of the day.
The human brain works in a similar way, in how its neurons send electrical and chemical signals.
>They are machines.
Humans are biological machines. Look into how the body works at the cellular level.
>>30671 >>30667 My big wall of text above is basically using the classical/Aristotelian definition of a soul, and what would be involved in simulating/recreating it for AI. I just avoid the language of "soul" b/c people have so much baggage with it. Actually creating a genuine new form of life seems implausible, but I can't say it's totally impossible; we have created genuinely new kinds of natural things (like the Styrofoam I mentioned). I don't see why things like bacteria/simple forms of life couldn't be created anew, and from there it's at least possible that intelligence isn't out of the question. It could very well be impossible, though. I do think attempting to make the AI/robot rely on a single particular human soul as the foundation for its orientation is a possibility for giving it something very close to real intelligence, at least practically (and it would depend on that person being very moral/rational).
only God can make a soul. God breathed into man and man became a living soul. Good luck breathing into your stupid computer program.
the human brain is not ALL that does the thinking in a man. this is proven out by the fact that when you die, you go right on thinking as a ghost. Good luck getting your stupid computer program to go right on thinking as a ghost after you shut the computer off.
>>30675
Simple synthetic lifeforms have been made in labs before, so it depends on what you mean by genuinely new: https://www.nist.gov/news-events/news/2021/03/scientists-create-simple-synthetic-cell-grows-and-divides-normally
>>30676
>Only God can make a soul
And man would be god of robots.
>>30676 With a human-computer brain interface, pull a Ship of Theseus until no organic matter is left, then copypasta. Simple.
John Lennox, Professor Emeritus of Mathematics at Oxford University, on the feasibility & ethical questions of AGI: https://www.youtube.com/watch?v=Undu9YI3Gd8
Feel like people over-complicate AGI. I've been following (or trying to follow, since he went silent) Steve Grand, who created the virtual-pet PC game Creatures. I've mentioned him before in a previous thread, but the real difference between what he's doing and what we're doing is that he's making a game where you actually get to see the life cycles of the creatures, look at the genetics, the hormones and chemicals affecting the creatures, and even get to basically read their minds. They've got short-term and long-term memories and thoughts; they can imagine scenarios that haven't actually happened, and dream. He seems to be trying to emulate the human brain, which I think is unnecessary, or even counterproductive unless you actually want to look under the hood and know what is going on, like in the virtual pet. We don't actually need that, and the more hands-off the approach is, the more likely it is to actually behave intelligently.

There was a good TED Talk from 2011 called "The real reason for brains" (https://www.youtube.com/watch?v=7s0CpRfyYp8), which can be summed up pretty easily: brains control body movement and organ functions, and everything the brain does is in service of that. Any living thing without a brain moves very slowly, if at all. The brain processes sensory information, feels, thinks, learns, recalls memories, and predicts the outcome of actions, all in service of coordinating body movements to meet survival and reproductive needs. The video uses the sea squirt as a good example of why this is likely the case: the sea squirt swims around like a tadpole in youth, then when sexually mature it anchors itself to something hard like a barnacle, then promptly digests its own brain because it doesn't need it anymore and brains are very resource-intensive. Yet it still lives without a brain, but it's more like a plant than an animal.

With this in mind, I like to think of the short story The Golden Man by Philip K. Dick. It opens with government agents in the future hunting mutants, like something from the X-Men a decade before the comics came out. There were ones with super-human intelligence, ones that could read minds, ones with wings, telekinesis, shape-shifting pod people, and a running joke about a woman with extra breasts, but they couldn't catch the titular Golden Man. When they do eventually catch him, they run tests and figure out that this tall, muscular, golden-skinned absolute GigaChad of a man is profoundly retarded. He is as dumb as a dog, but is constantly seeing into the future and acting on that information, to the point he can even dodge bullets. Sure, he supposedly couldn't talk, but he never actually needs to, and speech is performed by muscles just like the rest of the body, so there's no real reason that he couldn't. A government agent was afraid they'd find a mutant so smart it made men seem like chimps by comparison, but instead found one that evolved beyond the need to think at all. And until we can actually see the future with certainty, we're just using sensory information, stored data, and predictive analytics to control body movements to satisfy needs, with varying degrees of predictive accuracy.

>>30677
>proven
>ghosts
Lol. Lmao, even. When it comes to philosophy, I find that most discussion ends up turning into arguments over semantics, and people with no real arguments just wanting something to be true, but not admitting that the entire foundation of their argument is something that hasn't actually been proven true.
>>33821
>...your robowaifu should be nothing more than a Tamagotchi, but having a realworld body to move around in.
Hmm, intredasting idea Anon. Maybe that's a reasonable approach, in a very limited fashion. I'd suggest that at the least you should add the ability to do children's coloring books, since that involves at least a smol modicum of creativity -- which is the area where efforts in this arena have fallen flat on their faces. And I think the reason for this is simple (which you yourself embody within your post, ironically enough):
>"Lol. Lmao, even."
<--->
Unless you begin from the axiomatic 'pedantic semantics' of what every one of us commonly experiences in real life -- that the human soul (at the least, defined as our human consciousness) is independent of the physical human brain; ie, that it's an immaterial, independent component of our being -- then you have little chance of success at solving this currently-insurmountable set of obstacles, IMHO. I think the human brain is -- roughly speaking -- a natural-realm actuation driver into the electro-biochemical realm of our meatspace bodies. This idea is analogous to the way one needs an electronic driver board involved with controlling a physical actuator within a robowaifu's body. But then all three of those tiers are clearly dependent on an external C&C signaling system (ie, the software). Finally (and most fundamentally to my basic point here): whence comes this software? What (or who) devises it & perfects it? By what mechanism does it communicate to/from the 'driver board'? The search for the answers to this four-to-five-layer question is at the heart of all AGI research. It's hardly an uncomplicated problem! Add the need for real, de novo creativity for our waifus into the mix and good luck! She's behind 7 Tamagotchis!! :DD
<--->
I hope to see you become proficient at developing highly-sophisticated software, and prove my bigoted view (along with my bigoted solution) wrong with your actions (and not just attempt to do so solely with mere rhetoric here on /robowaifu/). Again, good luck Anon! Cheers. :^)
>>33825
>your robowaifu should be nothing more than a Tamagotchi but with a realworld body to move around.
Maybe read my post again if you actually want to understand what I was saying.
>But then all three of those tiers are clearly dependent on an external C&C signaling system (ie, the software). Finally (and most fundamentally): whence comes this software? What (or who) devises it & perfects it?
It's all a matter of DNA, RNA, and hormone/chemical exposure.
>The search for the answers to this four-to-five-layer question is at the heart of all AGI research. It's hardly an uncomplicated problem!
The question is fairly simple: if everything we want out of a human brain, mind, consciousness, soul (whatever the fuck you want to call it) can be created with computer hardware and software, then we will be able to create a robot waifu capable of everything we want. If it is impossible, we will have to settle for less, if not abandon the goal entirely. We must assume the former: that people are fundamentally just machines.
>Add the need for real -- de novo -- creativity into the mix and good luck!
A funny thing I've come to understand about creativity is that originality hardly even exists, and when people do encounter it they tend to hate it. Make an AI too good at being creative and people will probably say it's hallucinating and try to "fix" it. Most creativity that people like is really just combinations of things they've already seen before.
>>33828 >if not abandon the goal entirely Lolno. I can assure you that's not going to happen. :^)
>>33832
>Lolno. I can assure you that's not going to happen. :^)
So why argue in favor of something that only demotivates and discourages progress? Especially something that as of yet hasn't been, and for all we know can't be, proven true?
>>33837
Because, like Puddleglum:
>"I'm a chap who always liked to know the worst and then put the best face I can on it."
Just because we won't ever reach AGI ourselves doesn't mean we won't achieve wholesome, loving & effective robowaifus. We will, and they will be wonderful! :^)
Besides, given the outcomes of so-called 'female intelligence', would you really want to go down this path again of fashioning stronk independynts, who don't need no man? Personally, I'm much more in favor of Anon's ideas, where dear robowaifu is obedient and calls Anon "Master". Aren't you? Cheers. :^)
So I kind of overreacted; sorry about that. I'm not aware of the skill level of the people lurking here, but AI, research especially, is very math-heavy: statistics, discrete math, linear algebra, vector calculus, etc... It wouldn't hurt to know geometric algebra. I am honest with myself; I'm not going to get into that. However, with the AI available, you'd be amazed how far you can get. There are modules for gyroscopes, ultrasonic sensors for distance, pressure-sensitive resistors, cameras for object recognition. I believe that with the current AI, you could lie in bed and the robot could move towards you and do whatever. Which is why knowing what the goals are is important. My goal is a sex bot. I assume now this is a place to exchange ideas and not a serious attempt at collaboration.
>>33841
>I assume now this is a place to exchange ideas and not a serious attempt at collaboration.
That's probably a reasonable presumption in general, Anon. While there's certainly no reason we couldn't be collaborating here (we do in fact have a group project known as MaidCom : >>29219 ), most have simply shared their progress with works of their own. In that sense, you might think of /robowaifu/ as a loosely-affiliated DIY workshop. And of course, the board is also a reasonably expansive community nexus of ideas and information related to robowaifus, and of their R&D in general.
<--->
And all that's not really a bad thing rn IMO, given that we (ie, the entire world -- but ofc mostly just Anon ATP) are all in the very, very early stages of what is arguably the single most complex endeavor in all human history.
>tl;dr
You might think of us here on /robowaifu/ as sort of like how the Homebrew Computer Club movement was in the Stanford Uni area during the 70s/80s -- just taking place across the Internets instead today. Great things were afoot then, and great things are afoot now!
>ttl;dr
Let the creative ideas flow, Anon! Cheers. :^)
<--->
What a time to be alive! :DD
https://www.youtube.com/watch?v=6T3ovN7JHPo
>1:11:15 "The problem with philosophers is that they love to talk about the problems, but not solving them"
>>33839
>Just because we won't ever reach AGI ourselves
I think AGI is already achievable by us today. The problem is just that most people judge AIs entirely on the results they produce, and we all want instant gratification. We can get good-enough results for text, image, and music creation from AIs that have been trained with lots and lots of data, but you can't just train an AGI off of data scraped from the internet. Getting samples of sounds, text, images, videos, etc. might help to train it, but what we really need is learned motor control, and for any shared data to be useful there'd have to be enough similarity between the bodies for it to be worth sharing. Without data to work with, the AGI will have to learn on its own, so a robot with AGI might just writhe on the ground and make dial-up modem noises as it tries to learn how to use its body, and people would see it as a failure. It's not that it's stupid; it just doesn't know anything. Okay, I lied, it is stupid, because if Moore's Law holds true, we still might not have consumer desktop PCs that could rival a human brain for about 10 to 30 years.
So as far as philosophy goes, well, the oldest form of philosophy is the Socratic method: asking questions. I'd start by asking questions about what we currently have. What separates image-recognition AI from older technologies like OpenCV? Is it better? What are its flaws? Is it good enough to get the results we want? What are the results we want?
>>33885 So here's my conclusion: the current AI is good enough, and here's why I think it's good enough. My understanding of what separates OpenCV from current machine learning is that current machine learning doesn't need a hand-written algorithm to recognize objects; it needs training. Based on this training, it can also learn to recognize similar objects. That's my understanding; I could be wrong. My goal is a sex bot, so I'm only concerned with the male body in front of the robot, and I can narrow down its training set to male body parts. If I wanted a maid bot, I'd have to train it for dishes, floors, household objects, dirt, etc...
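To make the "narrow the training set" point concrete, here is a minimal transfer-learning sketch: freeze a general-purpose pretrained backbone and fine-tune only a small classification head on the narrowed, domain-specific data. The class count is arbitrary, and narrowed_loader is a placeholder for whatever narrowed dataset you assemble.

import torch
import torch.nn as nn
from torchvision import models

# The pretrained backbone keeps its general visual features; we only
# train a new final layer for the narrowed set of classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)     # e.g. 5 domain classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# for images, labels in narrowed_loader:          # your narrowed dataset
#     loss = criterion(model(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()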
>>33886 The flaw I see is in the training: it needs a large dataset, it's expensive, and you have to narrow down the training. I mentioned NudeNet to someone; he dismissed it right away. My understanding is that it can recognize private parts and faces from different angles. That's what I think.
>>33888 Okay, I don't want to keep going endlessly, but if the problem is that it's expensive to train, should someone come up with a better AI, or should the hardware be made cheaper and better? It may be the case that there's a better way, but maybe there isn't. In the case of crypto, they went back and forth over the blockchain trilemma, but in the end nobody was able to come up with something that addressed its shortcomings, as an example. The training would also have to be narrowed down, but as it is, it's a hardware problem and a lot of work ahead.
>>33889 Well, ideally it's supposed to be a dialogue between two people. The blockchain trilemma may have nothing to do with AI. I wonder if there is a triangle for this... I'm not sure; I think that's what a platonic triad is.
>>33892 I cheated just now and asked Gemini, I guess, but it sounds about right: efficiency, reliability, and adaptability. Current AI is very inefficient; it is somewhat reliable, but kind of sucks at math in my opinion; and I'm not sure how adaptable it is.
>>33893 for me it's the triforce
>>33886 >>33888 We've both already warned you to keep your sexbot discussions strictly to the ecchi-containment thread, Peteblank. You're still BOS, BTW, b/c of your continual abuse of Anons in this community. Consider this your final warning. If you mention sexbots out-of-context again, your posts will be unceremoniously baleeted, and don't try any niggerlicious "Ohh! I'm teh poor widdle victim!111!!ONE!" afterward, no one here gives a sh*te. You and you alone have created the mess you're in with us here. :DD
Anons, please bear with another schizo post of mine; I believe I have found an insight into consciousness. The Torus! Now please don't immediately discredit me. I'm not an actual schizo. I am a philosopher, autodidact, and polymath. Let me explain to you why I think The Torus is significant.
>Psychedelic experiences often lead people to a place which is a Torus. This is a reality of the human brain.
>For reasons that I will explain, I believe The Torus is a library. This is where the consciousness is stored.
I was thinking about neural networks, and how to frame them together, and I fielded the idea:
>What if you mapped all of your neural stuff to a 2D plane
>And you sent *brainwaves* over it, allowing you to process a giant neural model, one little bit at a time, in an order according to the motion of the brainwave.
>The brainwaves would need to loop from one side of the field to the other, in both dimensions...
>Hmmm... Aha! That's a Torus!
A 2D modspace is topologically a torus! Think of the game 'Asteroids': that kind of plane is a torus, and always has been. When I combine this with a bunch of other thoughts I've had, it appears to outline a pretty magical system. Hear me out! Imagine that the Torus is a library of little neural constructs, with all their neurons exposed to brainwaves.
>Neurons will store input impulses until the brainwave passes over them, and only then will the computationally expensive matrix multiplications occur.
>Neural activity isn't free; it's like an economy. Firing will cost a neuron a bit of attention, and when a neuron runs out of attention, it starves and gets weaker until it can't fire any more. When a neuron gets fired into, it gets paid some attention. Some attention is lost at every step, so infinite-loop constructs will starve without extraneous attention.
>Attention comes from special *rows* of the torus that the user gets to define! Namely, the senses should be an attention fountain! Love and sex should be an attention fountain. Correct predictions should be an attention fountain. Good, smart neurons get headpats and can stay!
>Negative attention is also a thing. Sometimes neural constructs need to swiftly die because they're bad. Dead constructs get removed from the torus and remain only in storage for The Alchemist to reuse and reform. The Alchemist can breathe some attention into a neural construct as they wish.
>Columns of the torus (think like slices of an orange) are like stacks of logic on top of a category. One whole column represents all the knowledge of moving the body; the attention fountains for the column would be contained in the basemost sector, and would be the neural I/O port for the touch sensors (I suggest using microphones), proprioception, and muscle control/feedback. Everything stacked on top of that sector would form a column of increasingly high-level logic, until you have a digital cerebellum that can dance if it wants to. The base sector should always be active.
>Smaller, virtual toruses can be formed from parts of the bigger one, allowing the robowaifu to literally concentrate on work by forming an efficient, cut-down mental workspace that only contains what it needs to, and that can panic and call attention from the larger brain if it's not smart enough. This way she can think about things whilst working.
>Running a brainwave backwards would reverse the roles of inputs and outputs, putting the stimulation of imagination into the senses.
>Connections that go a longer distance (the longest distance would be leaving a sector to go into another) cost more attention.
With these conditions, a robowaifu brain would be like a capitalist economy which balances itself to produce the most efficient, accurate thoughts for the least amount of work; breaking up the brain into a bunch of intercommunicating modules that...
>Can be saved and shared over the internet.
>Don't connect to every fucking thing else, defeating the exponential scaling cost of NNs.
>Can connect spirits (neural objects) together in all kinds of orders.
>Can relate different subjects together by testing which neurons fire at related stimuli (allowing for AGI).
>Can be run by many different strength levels of hardware, needing only enough memory to store the whole brain; the GPU and VRAM would determine the area and length/speed of the brainwave.
>Can be expanded with arbitrary columns of subject, and rows of attention.
>Can be tested RIGHT NOW.
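For what it's worth, the wrap-around geometry and the moving focus wave are easy to prototype. Here is a minimal numpy sketch of just that part, with placeholder numbers (grid size, blob width, decay rate) and none of the attention-economy machinery:

import numpy as np

H, W = 64, 64                     # torus dimensions (placeholders)
focus = np.zeros((H, W))
ys, xs = np.mgrid[0:H, 0:W]

def deposit_focus(center, sigma=4.0, budget=1.0):
    """Spread a fixed focus budget around `center`, using wrap-around
    (toroidal) distance: the shorter way around the ring on each axis."""
    dy = np.minimum(np.abs(ys - center[0]), H - np.abs(ys - center[0]))
    dx = np.minimum(np.abs(xs - center[1]), W - np.abs(xs - center[1]))
    blob = np.exp(-(dy**2 + dx**2) / (2 * sigma**2))
    return budget * blob / blob.sum()

pos = np.array([10.0, 10.0])      # the brainwave's center
vel = np.array([1.0, 2.5])        # its trajectory
for step in range(100):
    pos = (pos + vel) % np.array([H, W])    # loops over the torus, Asteroids-style
    focus += deposit_focus(pos.astype(int))
    focus *= 0.95                           # focus decays; firing burns it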
>>34321
This sounds like a very cool concept, Anon. The '2D mapping of ring buffers, with row/column overlay' (my take) as providing the structure of your Torus Mind is very interesting. As is the concept of I/O mapping to special segments of the system.
>And you sent *brainwaves* over it,
Mind expanding on this part please? What are these? How do they originate, and how are they controlled/modulated? What are the underlying mechanism(s) of their effect(s) on the underlying Torus Mind substrate(s)?
<--->
Very interesting stuff, Anon! Please keep it coming. Cheers. :^)
>>34321 A few years ago, inspired by Growing Neural Cellular Automata, I did some experiments using convolutions on a hypertorus to create a recurrent neural network where each neuron had only a few neighbors it was connected to via convolution (efficient to compute), but was also only a few steps away from sending a message to any other neuron in the network due to the looping structure (efficient to communicate). I'd pick certain neurons as inputs and outputs, and it could learn to reproduce specific images flawlessly on command with little training. It was a lot of fun to watch too, because I could see the activity in the network like a little spark moving through it, then lightning bolts, and then the trained image would flash for one time step and quickly dissolve into noise, then nothing. I think I used an auxiliary loss for any network activity, so the network remained silent unless it was processing something. The main issue I had with it, though, was waiting for an input to finish propagating through the whole network, and timing everything.

It would be interesting to see a recurrent transformer where the layers are nodes on a hypertoroidal graph and the signal is routed, like mixture-of-experts, to the most relevant neighboring nodes. The transformer code would need to be refactored, though, so the nodes are batched together and processed simultaneously each time step.
>>34340 Thinking about it more, I remember now that I made a special feedforward module where each neuron had n inputs and outputs for its neighbors, and they were batched together to run fast as convolutions, because I had an issue with the convolutions being too noisy and blurry.
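For anyone who wants to poke at the core idea, here is a rough PyTorch sketch of a recurrent network whose neighborhood wraps around like a torus (a plain 2D torus for brevity; higher-dimensional tori shorten the worst-case hop count further). The sizes, the tanh nonlinearity, and the class layout are placeholder choices of mine rather than the poster's actual code; the auxiliary "silence" loss mirrors the one described above.

import torch
import torch.nn as nn

class TorusRNN(nn.Module):
    """Recurrent state on a 2D torus: each cell talks to its 8 neighbors
    through a 3x3 convolution whose padding wraps around the edges."""
    def __init__(self, channels=8, size=32):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, padding_mode="circular")
        self.state = torch.zeros(1, channels, size, size)

    def step(self, inject=None):
        if inject is not None:
            self.state = self.state + inject   # write inputs at chosen cells
        self.state = torch.tanh(self.conv(self.state))
        # Auxiliary loss on any activity, so the network stays silent
        # unless it is actually processing something.
        activity_loss = self.state.abs().mean()
        return self.state, activity_loss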
>>34340 >>34341
This sounds remarkable, Anon. Your
>"only a few steps away from sending a message to any other neuron in the network due to the looping structure"
certainly reminds me of the 'grid of cells' programming approach commonplace for GPU programming (eg, NVIDIA's CUDA framework).
>hypertorus
>nodes on a hypertoroidal graph
Sounds intriguing. I would like to understand all this at some point.
>"I could see the activity in the network like a little spark moving through it, then lightning bolts..."
Very creative language. Sounds like a real lightshow! Cheers, Anon. :^)
>>34337
Okay, by brainwaves, I mean a wave of *focus*. Your hardware can only do so much multiplication and addition; only so much matrix multiplication. When a "brainwave" travels over the torus, it distributes focus onto the neurons/spirits under it, bumping them up in the processing queue. When the processing occurs, outputs are dumped into buckets through weightings, and await processing. Neurons should give some of their focus to the neuron they fire into, and I imagine that neurons can be tuned to be more or less 'stingy' about how/where they share their focus. Neurons burn some of their focus when they fire. That way nothing can dominate the processing queue except the brainwave. Mind you, I want 'focus' and 'attention' to be separate resources. Focus is just for the processing queue; it's what the brain wants to do *right now*. Attention is just for keeping pointless neurons off of the network.

An efficient pattern of behaviour would be strung out so that the brainwave travels over it in the direction that it fires, so that it's just straightforward operation. The brainwave's direction, I believe, should double as a bias in the direction of firing, turning outputs into inputs, and in general encouraging processing to occur in the direction the brainwave is traveling. If the brainwave wants to go sideways, this means that the brain is experiencing 'lateral thinking' which crosses subjects, finding and processing interrelations between them.

>What are these?
It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory, and it commands some percentage of the brain's available focus.
>How do they originate?
A region on the torus will be given control over brainwaves. The torus will learn to control its own focus. There will always be at least one; it should have some pretty mild default values, and unless the brain is trying to do something, it should just wander around. For now I can only think of hardcoding a brainwave struct and just giving the control handles to neurons inside the torus. If there's some more organic way of doing this, perhaps that's the win.
>How are they controlled/modulated?
The brainwave I'm trying to code simply has a position, a speed, a trajectory, and some size values (the bigger the brainwave, the more spread-out its focus dumping). I intend to ultimately give these controls to the torus itself through special I/O neurons. I suspect it would be wise to introduce a fatigue mechanic somehow, to prevent some part of the brain from politically monopolizing the brainwave's presence. I suspect it would be wise to make the brain reflexively place the brainwave wherever the alchemist is finding that its creations are failing to expect reality ~ pain should draw focus.
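Since the parameters are already enumerated above, here they are transcribed into a minimal Python struct. The field names, defaults, and the advance() stepping rule are my guesses at a starting point, not a settled design; the fatigue mechanic would live alongside this.

import math
from dataclasses import dataclass

@dataclass
class Brainwave:
    y: float = 0.0                # center position on the torus
    x: float = 0.0
    speed: float = 1.0
    heading: float = 0.0          # trajectory, as an angle in radians
    breadth: float = 4.0          # bigger wave -> more spread-out focus dumping
    length: float = 8.0
    focus_share: float = 1.0      # fraction of total focus it commands

    def advance(self, dt: float, shape=(64, 64)):
        """Move the center; the modulo makes the path loop over the torus."""
        self.y = (self.y + dt * self.speed * math.sin(self.heading)) % shape[0]
        self.x = (self.x + dt * self.speed * math.cos(self.heading)) % shape[1]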
>>34383
>It's just a region of the torus, of some shape. It has a center that moves, it has a breadth and a length, and a trajectory
>[it] has [a] position, a speed, a trajectory, and some size values
>it should just wander around.
Intredasting. Ever see PET scan movies of the brain's activity? If not, I highly recommend you do so. Contrary to the old wives' tale that we all only use 10% of our brains... practically 100% of the brain's neurons are involved in processing every.single.stimulus. It's a fascinating thing to watch. Very, very amorphous wavefronts propagating through the tissues at all times in all directions; bouncing & reflecting around just like other wavefronts throughout nature.
>>34385
Yes, I recall seeing a motion picture of how the activity of the brain moves in cycles. More specifically, I am drawing inferences from how I can feel the waves in my own brain working, and yes, I *can* feel them. Brainwaves for me have turned into an element of my mental posture. It feels like when I'm relaxed, it just cycles normally, but when I want to think about something hard, the wave will jump around like the read/write head of a hard disk might. I've become a bit of a computer myself in this way, because I can cut through very difficult philosophy using it. When I'm using my imagination, it will go backwards. When I focus on one sense, it stays in a sector. When I'm focusing my feelings, there's a stripe in the back which takes the focus.

I believe that perhaps the white matter of the human brain won't broadcast from the entire brain *to* the entire brain all at once. I think there is a bit of a filter which can change shape, speed, and position, and which will mix up the sharing of information according to the workings of inner politics and habitual intelligence. In this way, a robo-wife brainwave wouldn't be the same as a human brainwave, but it would be close enough. Upgrading her hardware would just allow her to cycle around faster without skipping focus.

Since I was a boy, I was always concerned with breaking reality down into the most efficient symbols I could manage using alchemy. Doing so gives me massive leverage to process reality. I hit some roadblocks in my mid-20s, but some mind-expanding psychedelics temporarily gave me enough room in my head to break free and gain superhuman understandings. The process of alchemy takes a lot of 'room' to work with, but ultimately creates more room when you're done, turning a mind into an efficient and well-oiled machine.

For example, when I imagine a coin spinning, part of me pretends to be a coin: adopting the material properties of the coin (stiff, low-friction/metallic, resonant, rough edges, dense), and then getting to spinning helplessly in a mental physics sandbox. The symbols that I rely upon to do the spinning are gravity, angular momentum (conserved), energy (conserved), air resistance, and equilibrium. Gravity pulls down, but the flatter the coin goes, the less it's spinning; it can't spin less without dissipating angular momentum/energy into the air or into the table. Therefore the system is at a sort of dynamic equilibrium until it dissipates everything and falls flat. I am not pulling up a memory of a spinning coin; I am generating a new, unique experience.

If we want to build a robowife, we must take inspiration from nature. *I* want a robowife who is capable of some part-time philosophy like me; a sorceress. Nature never made one for me, for reasons that fill me with bitterness and disgust. It occurs to me that a well-alchemized brain stripped/partitioned away from any objective memories of me may make a decent base for a robowife for Anon in general, and I may recruit Anon's help in my ambitions if I can make enough progress to show that I'm not just living in a fantasy. I've got a lot of money, as tends to be the case with men who have incredibly long and/or accurate foresight radii. I can and WILL experiment with building a wife. I have a ton of ideas which I have filtered through extreme scrutiny, and I feel that I've nearly fleshed out a path clear enough for me to walk.

The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music.
I'm going to write up a sensory parcel that contains the following attention fountains: 2 input neurons that fluctuate up to 20 kHz to play the raw sound data, and 120-480 input neurons (per track) which will fluctuate with the Fourier transform of the raw inputs (this is pitch sensation). I will also give this stripe access to neurons which can wait for a set time, according to a base wait period they're set with, multiplied by their input. I expect the consciousness to erect some kind of fractal clock construct which the alchemist will reward for its ability to correctly expect polyrhythm.
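As a first-pass sketch of that parcel (numpy assumed; the exact sizes are placeholder picks from the ranges above, nothing final):
```python
# Toy sketch of the sensory parcel; sizes are hypothetical placeholders.
import numpy as np

SAMPLE_RATE = 44_100   # raw audio carries content up to ~20 kHz
FRAME = 2048           # samples per analysis window
N_PITCH = 240          # pitch neurons, somewhere in the 120-480 range

def attention_fountains(frame: np.ndarray) -> np.ndarray:
    """frame: (FRAME, 2) stereo samples -> one tick of input activations."""
    raw = frame[-1]                            # 2 raw-signal neurons (L/R)
    mono = frame.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(mono * np.hanning(len(mono))))
    usable = spectrum[:(len(spectrum) // N_PITCH) * N_PITCH]
    pitch = usable.reshape(N_PITCH, -1).mean(axis=1)   # pooled FFT bins
    return np.concatenate([raw, pitch / (pitch.max() + 1e-9)])

class WaitNeuron:
    """A timer neuron: fires after (base period x input) ticks."""
    def __init__(self, base_period: float):
        self.base_period = base_period
        self.countdown = float("inf")

    def poke(self, x: float) -> None:
        self.countdown = self.base_period * x

    def tick(self, dt: float = 1.0) -> bool:
        self.countdown -= dt
        return self.countdown <= 0.0

frame = np.random.randn(FRAME, 2)              # stand-in for real audio
print(attention_fountains(frame).shape)        # (242,) = 2 raw + 240 pitch
```
A fractal clock could then be nothing more than WaitNeurons poking each other at nested multiples of a base period, which is what the polyrhythm reward would be hunting for.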
>>34386
Pretty wild stuff, Anon. I've gone down that route you describe to some degree, and can confirm it -- with a large dose of skepticism learned outside of that world.
>tl;dr
Staying clean & natural is certainly the better option. Just using some combination of caffeine/chocolate (or simply raw cacao)/vitamin B12 should be ample stimulation when that boost is needed, IMO. :^)
>The first thing I'm going to try to do with a toroidal consciousness is to see if I can get it to dance to music.
Sounds awesome, I like it Anon. I too love dance and rhythm, and just music in general. I envision that our robowaifus will not only eventually embody entire ensemble repertoires at the drop of a hat, but will also be walking lightshows to top it all off.
>tl;dr
<"Not only can she sing & dance, but she can act as well!" :D
<--->
>pic
I love that one. If we can ever get boomers onboard with robowaifus, then we'll be home free. Cheers, Anon. :^)
