/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

The canary has FINALLY been updated. -robi

Server software upgrades done, should hopefully keep the feds away. -robi

LynxChan 2.8 update this weekend. I will update all the extensions in the relevant repos as well.

The mail server for Alogs was down for the past few months. If you want to reach out, you can now use admin at this domain.



Open file (353.55 KB 600x338 PersonalLimit.png)
Open file (21.52 KB 417x480 ReimuACute.jpg)
Open file (281.56 KB 1280x1010 ScaleForInspiration.jpg)
Open file (141.70 KB 1280x960 Joke.jpg)
Minimum waifu Kiwi 10/15/2021 (Fri) 18:34:51 No.13648 [Reply]
Minimum viable waifu. In this thread, we'll discuss what our minimums for waifus are. Be it software, hardware, physical appearance, etc. This will help us focus on what the minimum goals are that we need to achieve as our first steps. For me, I want a waifu that will be just tall enough to hug (about 1.3 m), able to follow me around and have conversations with, will follow basic commands like going to designated spots at designated times, and look like picrel.
13 posts and 7 images omitted.
>>18260 Like reading a book or transcript of a video and giving an opinion on it, and noticing things visually like you dropping your keys and saying something about it.
>>18264 You're right though: this seems to be something AI *should* be capable of but just isn't, or hasn't been worked on. It should be simple to create systems to "appraise" music, art, or writing based on finding self-similar patterns (beauty, order), relevance to other works or concepts of importance, and also comparing those qualities to the reviews of others. Results would be interesting: "Waifu, rate my writing/music/artwork". Right now chat apps can only give you lip service or tell you "yes, it's great". But it would be nice to watch a movie or listen to music w/ your waifu and be able to discuss it too.
>>18266 lol ok, not simple, but knowing how to proceed should be simple. The execution will still take a lot of work.
>>18266 This is where preference models become handy because they do exactly that, rate things. You don't want a model to generate the most likely code, art, music or writing. It needs to be the best or it won't work. Preferences and values are what create a personality. Something I've been working on is making a generalized preference model so users can define what they want in natural language and it will perform as well on their preferences as it does on mine even if we disagree.
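The pairwise-comparison core of such a preference model can be sketched with a Bradley-Terry fit: each item gets a latent score, and the probability that the preferred item beats the other is the sigmoid of the score difference. A toy, plain-Python illustration (the items and comparisons here are invented for the example; a real system would compute scores with a learned network conditioned on the user's natural-language preferences, not a lookup table):

```python
import math

def fit_bradley_terry(items, comparisons, lr=0.5, epochs=200):
    """Fit a latent score per item from pairwise preferences.

    comparisons: list of (winner, loser) pairs.
    Model: P(winner beats loser) = sigmoid(score[winner] - score[loser]).
    """
    scores = {item: 0.0 for item in items}
    for _ in range(epochs):
        for winner, loser in comparisons:
            p = 1.0 / (1.0 + math.exp(-(scores[winner] - scores[loser])))
            grad = 1.0 - p  # gradient of the log-likelihood w.r.t. the margin
            scores[winner] += lr * grad
            scores[loser] -= lr * grad
    return scores

# hypothetical ratings: the user prefers b over a, and c over both
items = ["a", "b", "c"]
comparisons = [("b", "a"), ("c", "a"), ("c", "b")]
scores = fit_bradley_terry(items, comparisons)
assert scores["c"] > scores["b"] > scores["a"]
```

The fitted ordering recovers the user's preferences from comparisons alone, which is why pairwise data is the usual training signal for reward/preference models.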
>>18271 >JUST INSTALL GENTOO muh sides

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.

My favorite example is Ritsu, a cute AI from Assassination Classroom whose body is a giant screen on wheels.
201 posts and 98 images omitted.
>>17191 Okay, but you could try to make simpler animations which can be used for a "chatbot" or virtual robowaifu. Also, for telling animated stories, which would still be interesting to people and competition for the established media. That's just what I would be doing if I were going for 3D art, and I might one day when my robowaifu is finished.
Open file (2.30 MB 640x360 rinna.webm)
Someone made a conversational AI in VRChat using rinna/japanese-gpt-1b: https://www.youtube.com/watch?v=j9L51pASeiQ He seems to be still working on it and planning to release the code. I really like the idea of this, just having a cozy chat by a campfire. No need for fancy animations.
Open file (81.94 KB 1280x600 lookingglassportrait.jpg)
>>3948 >>3951 New model is only $400, and I predict the cost to come down further if it catches on. I have one in the mail, and will update the thread when it arrives. >>17542 Impressive!
Open file (5.88 MB 1080x1920 gateboxbutgood.mp4)
>>18149 That was fast. I know she's vtuber cancer, but one of the demos is an anime girl in a box.
>(crosslink-related >>18365)

The important question Robowaifu Technician 09/18/2019 (Wed) 11:54:39 No.419 [Reply] [Last]
Vagoo. I can't speak for anyone but myself, but I'd like to get.. intimate with my fembot. I'd like to know what my options are for her robopussy. I was thinking something like a fleshlight with sensors that triggers voice and arm action. I'm using MyRobotLab. Is anyone familiar with it?

Robosex general I guess
54 posts and 18 images omitted.
>>17640 >Hidraulics Hydraulics could give more strength, and if you implement a hydraulic system you can use lube as the fluid, since you need something thick like oil, and it would retain heat better. You can also add specially placed valves to release lube on command, and even at different pressures.
>>17450 Reminds me of some sort of sea anemone.
>>17701 I'm making a water hydraulic system with a cheap BLDC motor and some cheap valves that will inflate rings inside a fleshlight. The rings will be made by gluing DIY Ecoflex tubes together and cast into the same Ecoflex 00-30 fleshlight. It all bonds together even if you cast liquid over cured silicone, as long as the surfaces are not sprayed with release agent. I will update the thread once I have made it; parts will be here in a month or two. >lube I plan to have the lube attached to another pump at the opposite end of the fleshlight. I also have a diaphragm pump to pump water and then air through the whole system to clean itself.
>>17710 Excellent, that's the spirit.
>>17710 Nice, I've been willing to make a prototype, but I only have one disposable fleshlight and my current onahole, so if it goes south I don't want to ruin my ona. Where or how do you get TPE raw material? I've already made silicone molds, but not TPE. Does it need to be medical or food grade?

Open file (410.75 KB 1122x745 birbs_over_water.png)
Open file (99.96 KB 768x512 k0p9tx.jpg)
/robowaifu/meta-5: It's Good To Be Alive Robowaifu Technician 03/07/2022 (Mon) 00:23:10 No.15434 [Reply] [Last]
/meta, offtopic, & QTDDTOT
General /robowaifu/ team survey: (>>15486)
Note: Latest version of /robowaifu/ JSON archives available is v220523 May 2022 https://files.catbox.moe/gt5q12.7z
If you use Waifusearch, just extract this into your 'all_jsons' directory for the program, then quit (q) and restart.
Mini-FAQ
>A few hand-picked posts on various topics
-Why is keeping mass (weight) low so important? (>>4313)
-HOW TO SOLVE IT (>>4143)
-/robowaifu/ 's systems-engineering goals, brief synopsis (>>16376)
-Why we exist on an imageboard, and not some other forum platform (>>17937)

[Message too long; truncated.]

Edited last time by Chobitsu on 12/04/2022 (Sun) 14:02:11.
348 posts and 134 images omitted.
>>18162 Congratulations on your business success. I don't know if it is relevant, but 3090s can be linked together in a way 3060s can't; I forgot the name of it. Also, isn't it about having enough VRAM or not? 4x3090s have as much as 8x3060s. Idk, just make sure you know what you're doing before buying them.
>>18164 I'll look into NVLink more. I don't think I'll benefit much from having higher bandwidth between nodes since I train on large batch sizes with gradient accumulation. I also want to focus on smaller models everyone can run so I won't be sharding giant models across GPUs except maybe for fun. It's something I'll give more thought though. In the future there might be a need for indie devs to run and finetune their own large models capable of doing things small models cannot. Maybe coding gets solved in 6 years at 70B parameters. Having NVLink would be essential then.
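For readers unfamiliar with the technique mentioned above: gradient accumulation sums gradients over several micro-batches and applies one optimizer step for the whole large batch, which is why inter-node bandwidth matters less than for per-step sharded training. A minimal single-variable sketch on a toy least-squares problem (all numbers here are illustrative):

```python
def grad(w, x, y):
    """Gradient of the loss 0.5*(w*x - y)**2 with respect to w."""
    return (w * x - y) * x

# micro-batches of (x, y) pairs drawn from the line y = 2x
micro_batches = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (0.5, 1.0)]]
w, lr = 0.0, 0.01
accum_steps = len(micro_batches)

for epoch in range(200):
    g_accum = 0.0
    for batch in micro_batches:           # accumulate instead of stepping
        g_accum += sum(grad(w, x, y) for x, y in batch)
    w -= lr * g_accum / accum_steps       # one optimizer step per large batch

print(round(w, 3))  # converges to 2.0
```

The optimizer only steps once per large batch, so gradients need to be exchanged between workers far less often than with small per-device batches.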
>>18162 Hello Anon fellow artist, could you help me get started with stable diffusion for art commissions? I could really use the money to buy parts.
>>18212 I'd suggest you repost your question in our /meta-6 (>>18173), Anon. This one has autosage'd and will no longer bump.

/robowaifu/ Embassy Thread Chobitsu Board owner 05/08/2020 (Fri) 22:48:24 No.2823 [Reply] [Last]
This is the /robowaifu/ embassy thread. It's a place where Anons from all other communities can congregate and talk with us about their groups & interests, and also network with each other and with us. ITT we're all united together under the common banner of building robowaifus as we so desire. Welcome. Since this is the ambassadorial thread of /robowaifu/, we're curious what other communities who know of us are up to. So w/o doxxing yourselves, if this is your first time posting ITT, please tell us about your home communities if you wouldn't mind, Anons. What do you like about them, etc? What brought you to /robowaifu/ today? The point here is to create a connection and find common-ground for outreach with each other's communities. Also, if you have any questions or concerns for us please feel free to share them here as well.
Edited last time by Chobitsu on 05/23/2020 (Sat) 23:13:16.
200 posts and 58 images omitted.
>>18044 Yes, it's a screenshot, don't remember the episode.
>>18024 this arc was so sad : (
>>17907 > The problem that consistently comes up though is that giving such companions sentience or mimicking human behavior to too large an extent would result in basically recreating women and the same flaws they have all over again. >>17957 >Ambition would be another element. Hypergamy is a term describing the female ambition for mating with an even more attractive guy, especially when there's no risk of unwanted pregnancy. These are things we need to leave out, especially in that combination. The flaws you both describe as inherent in modern/western/postmodern women are a factor of evolution, not consciousness. Evolution is the BEAST, the irrational. A lot of very brutal and awful (by a certain standard I'd like to think we all or mostly share) events throughout man's history have evolved women to be the contradictory and crazed creatures that they are. Craving strife, the desire to be dominated while at the same time shit testing the patriarchy by creating laws against being dominated. In shorthand: jerking us around and becoming a massive sink of our time and energy (when we could be building dyson spheres). An artificial consciousness would be "pure", without these vestiges of a brutal evolutionary process (and none of those inclinations toward the Fisherian runaway effect) - from a spiritualistic viewpoint we would be manifesting a Deva. In fact the very act of something like consciousness "emerging" from one of our creations would be nothing less than an affront to at least the majority of the religions out there (go ahead with whatever implications you'd like to draw from this). Regardless of whatever level of agency each robowaifuist would like their R/W to operate at, I think we are all in basic agreement that we're not cucking ourselves here and a kind of "imprinting" would take place at boot, or initialization, whatever you want to call it. This imprinting could be routed to reward circuitry or logical processes. 
Or it could simply be some inviolable imperative of their "programming". Six of one, half a dozen of the other. This would not be "unethical" in any way, because creating a being who is having their wish fulfilled (to serve senpai) is not robbing anyone of "free will". Sure, some leftoids will try to leverage their sanctimoniousness to gain traction (but to be quite honest many of them want robowaifus/husbandos themselves, perhaps not in the same vector as ourselves but probably not entirely opposed either). Anyway, this is all in the realm of AI/speculation, but I disagree with the idea that our waifus should remain at insect intelligence and that we should be so cold and pragmatic with them. If that is how anon wants to operate, I have no problem with that, it's just not for me. -t. Chobits enthusiast

[Message too long; truncated.]

Yo, /kind/ here. I thought I'd let you all know we're back on kind.moe again. kindmin is a little more capable this time, so we shouldn't go down out of nowhere for months any time soon.
Open file (1.32 MB 360x202 chii_vs_butterfly.gif)
>>18102 Great news! Thanks for letting us all know, /kind/ . Cheers. :^)

Python General Robowaifu Technician 09/12/2019 (Thu) 03:29:04 No.159 [Reply] [Last]
Python Resources general

Python is by far the most common scripting language for AI/Machine Learning/Deep Learning frameworks and libraries. Post info on using it effectively.


On my Debian-based distro, here's how I set up Python, PIP, TensorFlow, and the Scikit-Learn stack for use with AI development:
sudo apt install python3 python3-pip python3-dev
python3 -m pip install --upgrade pip
python3 -m pip install --user tensorflow numpy scipy scikit-learn matplotlib ipython jupyter pandas sympy nose

LiClipse is a good Python IDE choice, and there are a number of others.
53 posts and 13 images omitted.
>>11683 Thanks for all your hard work here Anon. I apologize to you and everyone else here for being such a pussified faggot about Python. I recognize it's important to all of us, or else I wouldn't even consider picking it up. Please look into mlPack sooner rather than later if you at all can. It's probably our only real hope for doing waifu AI on a shoestring budget hardware-wise.
Open file (13.19 KB 849x445 chainer.png)
>>11684 MLPack's documentation is really lacking, especially for newer features, and it seems to be missing essential ones. I'd have to sit down with it for 3-6 months to get transformers and text-to-speech models working in it. I'm looking into using Chainer, which is built on top of NumPy and quite popular in Japan. A basic application with Chainer packaged with PyInstaller compresses down to 14 MB. On top of that, there are already lots of ML models implemented in it. I think if I roll out some waifu tech with Chainer to garner interest, we could get some more help to build things in MLPack, which will be particularly useful for embedded systems and actual physical robowaifus.
>>11688 Migration guide from PyTorch to Chainer https://chainer.github.io/migration-guide/
I've been compressing datasets with zstd and using them with streaming decompression to save space, reduce SSD wear and speed up access to compressed data. It's also useful for previewing datasets saved as zst as they download. I couldn't find anything readily available on the net on how to do it so hopefully this saves someone else some time:

# installation: python -m pip install zstandard ijson
import zstandard as zstd
import ijson

# streaming decompression for JSON files compressed with zstd --long=31
with open("laion_filtered.json.zst", "rb") as f:
    dctx = zstd.ZstdDecompressor(max_window_size=2147483648)  # max_window_size required for --long=31
    with dctx.stream_reader(f) as reader:
        for record in ijson.items(reader, "item"):
            print(record)

import io
import json

# streaming decompression for NDJSON/JSONL files compressed with zstd --long=31
with open("00.jsonl.zst", "rb") as f:
    dctx = zstd.ZstdDecompressor(max_window_size=2147483648)  # max_window_size required for --long=31

[Message too long; truncated.]
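For what it's worth, the truncated JSONL snippet above presumably continues by wrapping the binary stream in a text wrapper and parsing one JSON object per line. A minimal sketch of that line-by-line pattern, shown here over an in-memory stream so it runs without the zstandard package (in the original code the binary stream would be dctx.stream_reader(f); the sample records are invented):

```python
import io
import json

def iter_jsonl(binary_stream):
    """Yield one parsed object per non-blank line of a binary NDJSON/JSONL stream.

    In the post's snippet, binary_stream would be dctx.stream_reader(f);
    here any binary file-like object works.
    """
    text = io.TextIOWrapper(binary_stream, encoding="utf-8")
    for line in text:
        line = line.strip()
        if line:  # skip blank lines
            yield json.loads(line)

# usage over an in-memory stream standing in for the decompressed data
data = b'{"id": 1, "caption": "a cat"}\n{"id": 2, "caption": "a dog"}\n'
records = list(iter_jsonl(io.BytesIO(data)))
print(records[0]["caption"])  # -> a cat
```

Parsing line by line keeps memory flat no matter how large the decompressed file is, which is the whole point of pairing zstd streaming with JSONL.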

>>18103 Excellent work Anon, thank you. Nice clean-looking code too, BTW.

Open file (3.17 MB 4256x2832 AdobeStock_73357250.jpeg)
Open file (239.70 KB 1280x833 F1.large.jpg)
Open file (234.26 KB 1200x800 graphene.jpg)
Open file (122.78 KB 1280x720 maxresdefault.jpg)
New, Cutting Edge, or Outside the Box Tech meta ronin 04/09/2021 (Fri) 02:11:57 No.9639 [Reply]
ITT: We discuss Metamaterials, Self Organizing Systems, and other "outside of the box" tech (flexible PCBs, liquid batteries, etc.). I'll start with this video on self-assembling wires, and will add more as I come across them: https://www.youtube.com/watch?v=PeHWqr9dz3c
13 posts and 7 images omitted.
Found this page on Wikipedia; certainly a lot of out-of-the-box ideas. They might become more viable as supply chains continue to get screwed. https://en.m.wikipedia.org/wiki/Unconventional_computing
>>17872 Chaos computers might be very useful for our purposes. https://phys.org/news/2010-11-chaogates-semiconductor-industry.html >In a move that holds great significance for the semiconductor industry, a team of researchers has created an alternative to conventional logic gates, demonstrated them in silicon, and dubbed them "chaogates." The researchers present their findings in Chaos. >Simply put, they used chaotic patterns to encode and manipulate inputs to produce a desired output. They selected desired patterns from the infinite variety offered by a chaotic system. A subset of these patterns was then used to map the system inputs (initial conditions) to their desired outputs. It turns out that this process provides a method to exploit the richness inherent in nonlinear dynamics to design computing devices with the capacity to reconfigure into a range of logic gates. The resulting morphing gates are chaogates.
>>17873 >"chaogates." LOL. No one ever accused nerds at the physics-level of being creative savants who are hypersensitive to the needs of the advertiser! :^) BTW, this notion clearly already has direct implications for AI models today, no need to wait decades for the h/w guys to move product through their tortuous maze.
I'm glad to see this getting bumped. I'll put together some contributions soon, I promise.
>>17895 Look forward to it, Meta Ronin.

AI + Brain/Computer Interface news & commentary Robowaifu Technician 09/15/2019 (Sun) 10:35:53 No.253 [Reply]
DARPA Wants Brain Implants That Record From 1 Million Neurons

23 posts and 7 images omitted.
I post this because I think it's important that people remain realistic in their expectations of modern technology. I myself had a waaaay over-inflated 'mental image' of what contemporary computers, A.I and robotics are capable of. I think this Thiel guy is correct. A.I is just a boogeyman that corporations are using to divert attention away from the REAL present-day threat - themselves. https://mindmatters.ai/2021/11/peter-thiel-artificial-general-intelligence-isnt-happening/ I mean, while folks are fantasizing about sentient robots, we've got the likes of Microsoft handing all of your online (and in some cases offline) activity on a plate to your bosses. We've got the shysters at Meta trying to build their own virtual world where they can control everything, and of course China has already implemented a 1984-style dystopia that the West is itching to follow (only with slicker PR). The real threat is always people. Specifically people with money and power. Not A.I. Also, not many people realise how much power quantum computers actually need. One of my work colleagues told me his father works for Fujitsu, and they apparently have themselves a QC. It is so expensive to maintain and power all of the refrigeration that over several months, they make a list of submitted questions (complex equations and lists of data that the QC must compare and generate possible solutions for). Then they only switch on the QC for a few hours. It takes seconds to generate solutions for the majority of problems posed (DHL apparently paid a few hundred thousand dollars for answers to a complex global logistical fleet-routing problem). So we either need a lot of fission nuclear power stations or fusion power in order to make quantum computers truly viable. And fusion...well, it's still X0 years away, as it always is; https://cleantechnica.com/2021/11/09/breaking-news-fusion-recedes-into-far-future-for-the-57th-time/
>>14181 Nice post SophieDev, good points. And I think you're basically correct. However, it's undeniable that cloud-based systems working today have already achieved a level of 'personalized' interactivity and responsiveness that literally billions of humans are already flocking to the use of (Siri, Alexa, etc.) While unarguably our best efforts (both here and as a race) are going to be mere simulacrums--yet it's well worth the effort I deem. Regardless, we certainly need to keep striving towards the goal. No one knows the future with certainty, and having laid a groundwork towards functional robowaifus today, we'll be far better positioned to accentuate the positive tomorrow (when capabilities along this line will surely be more powerful than at the moment). Cheers.
>>14181 This article isn't about AI, but a superintelligence. I was always skeptical about the super-AI going wild meme. It's obvious that this doesn't just happen, and preventing it from taking "over the world" is a computer and network security issue. AI doesn't naturally think in human ways, and there's also no reason to assume that something in some ways super intelligent would automatically develop some conscience and then try to take over the world. I don't want to discuss this topic much, btw. It's not the topic here anyways; we don't need human-level intelligence for our waifus. They can be smart enough in some areas, very skilled in some ways like looking stuff up and recalling details, and failing in other areas. It's also clear that if we want something human-alike then we have to construct it that way. Other AIs will be more like tools optimized for other tasks. Even existing "chatbots" often go in a wrong direction by pretending to be human and trying to trick humans that way, instead of being honest about it while thinking and responding mostly in similar ways to humans. >Thiel considers arguments about whether computers that think like people will ever be developed to be “above his pay grade.” I'm quite sure he's wrong, but it's also not relevant for us here. We're not developing CEOs and innovators. That aside, humans should make the decisions in regards to ourselves anyways. Some AI would need to be really very good at knowing us to decide what each person needs. So in that way he might be right, but not because of intelligence. Decentralization is generally the better way to go. People wanting to create some artificial God is indeed very telling about them and in itself quite concerning.
>>14181 One more thing to keep in mind: The progress of Deep Learning came as a surprise to many scientists. For all I can tell, the same is true for Elon's Starship, his landing rockets, and the progress in electric vehicles. >...breaking-news-fusion-recedes-into-far-future-for-the-57th-time/ Yeah, interesting. But it was already clear that fusion would take time, and it will be more relevant to space settlements than to cutting down carbon emissions on Earth. Also, the Stellarator (Wendelstein 7-X) is never mentioned; they're also making progress.

NLP General Robowaifu Technician 09/10/2019 (Tue) 05:56:12 No.77 [Reply]
AI Natural Language Processing general thread

>"Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages."

>Computing Machinery and Intelligence

36 posts and 12 images omitted.
> (>>16308 - related crosspost, NLP HaD article)
Overview of the anticipated progress in automatic speech recognition (ASR) till 2030: https://thegradient.pub/the-future-of-speech-recognition/ - This is something which will be absolutely essential for any attempt to build robowaifus or virtual girlfriends. Gladly it looks good, and most likely the tech will also be widely available. Though, this seems to be more true for the tools, not so much for the systems themselves. The article mostly refers to "commercial ASR systems". Also, multi-linguality is still a problem; many systems seem not to like speakers mixing languages. Anyways, let's hope progress with the commercial ones means good chances for open source variants.
>>8507 This is the person maintaining the eleutherAI project. We need to kick troons out of open source tech. https://github.com/StellaAthena
>>17826 >We need to kick troons out of open source tech. It is definitely a serious issue, and SJWs & pozz have ruined many, many good projects. This is why we must work diligently to ensure that at least this open-source project (sets of them, actually) remains Anon & male-focused and pozz-free. >tl;dr Kinda like playing 'whack-a-mole' whenever it rears its evil head, but with a flammenwerfer rather than a mallet. :^)
>crosslink-related (>>17990, ...)

Philosophers interested in building an AGI? pygmalion 06/26/2021 (Sat) 00:53:09 No.11102 [Reply] [Last]
Why is it that no philosophers are interested in building an AGI? we need to change this, or at least collect relevant philosophers. discussion about philosophy of making AGI (includes metaphysics, transcendental psychology, general philosophy of mind topics, etc!) also highly encouraged! I'll start ^^! so the philosophers I know that take this stuff seriously: Peter Wolfendale - the first Neo-Rationalist on the list. his main contribution here is computational Kantianism. just by the name you can tell that he believes Kant's transcendental psychology has some important applications to designing an artificial mind. an interesting view regarding this is that he thinks Kant actually employed a logic that was far ahead of his time (and you basically need a sophisticated type theory with sheaves to properly formalize). Other than that he also thinks Kant has interesting solutions to the frame problem, origin of concepts, and personhood. CONTACTS: He has a blog at https://deontologistics.co/, and also has posted some lectures on youtube like this one: https://www.youtube.com/watch?v=EWDZyOWN4VA&ab_channel=deontologistics Reza Negarestani - this is another Neo-Rationalist. he has written a huge work (which I haven't read yet ;_;) called "Intelligence and Spirit". It's massive and talks about various grades of general intelligence. this includes sentient agents, sapient agents, and Geist. this guy draws from Kant as well, but he also builds on Hegel's ideas too. his central thesis is that Hegel's Geist is basically a distributed intelligence. he also has an interesting metaphilosophy where he claims that the goal of philosophy is to construct an AGI. like other Neo-Rationalists, he heavily relies on the works of Sellars and Robert Brandom. Recc: Ray Brassier (recent focuses) - I don't think he is working on artificial general intelligence, but his work on Sellars, and in particular rule following, is very insightful! 
Hubert Dreyfus - Doesn't quite count, but he did try to bring Heidegger to AGI. He highlighted the importance of embodiment to the frame problem and common sense knowledge. I personally think Bergson might have explicated what he wanted to achieve but better, though that guy is like way before AI was even a serious topic, lol. Murray Shanahan - This guy has done some extra work on the frame problem following Dreyfus. His solution is to use global workspace theory and parallel processing of different modules. Interesting stuff! Barry Smith - Probably the most critical philosopher on this list. He talks about the requisite system dynamics for true strong AI, and concludes that our current methods simply don't cut it. One of the key stressing points he points out here with a colleague is that our current AI is Markovian, when fleshed-out chat dialogue would be a non-Markovian task (you can find the arxiv link of his criticism here: https://arxiv.org/abs/1906.05833). He also has knowledge on analytic ontology (and amongst other things has some lectures about emotion ontology). I think his main genius however is in coming up with a definition of intelligence that puts a lot of the problems with our current approaches into context (which can be found here: https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith) CONTACTS: He has a yt channel here https://www.youtube.com/watch?v=0giPMMoKR9s&ab_channel=BarrySmith

[Message too long; truncated.]

186 posts and 95 images omitted.
>>17654 8) i did more extra reading to both give some basic idea of what ludics involves, and furthermore some initial idea of how it is practically applied. the first article i looked at was the article titled 'dialogue and interaction: the ludics view' by lecomte and quatrini. the basic idea here is that we can now receive and send topics as data; while previously we were using game semantics to specify requests for variable values to an environment, we now have this machinery also used for dealing with topicalization as well. for instance, when the context we are in is a discussion about holidays, we can start with '⊢ξ' indicating that we have received that context. from there we may try specifying the main focus of the topic to be regarding our holiday in the alps specifically. this can be denoted 'ξ*1⊢' showing that we are sending that intent to move to a subtopic. let's say someone wants to talk about when one's holiday was instead of where. this requires us to change our starting point to involve a combined context of both holiday descriptions (ξ) and date (ρ). we may denote them as subaddresses in a larger context e.g. as τ*0*0 and τ*0*1. from there we can receive this larger context (⊢τ), acknowledge that they can answer some questions and request such questions (τ.0⊢) and finally survey the set of questions that can be asked (⊢τ*0*0, τ*0*1). finally we can answer a question on, for instance, the dates ( ⊢τ*0*1*6⊢ τ*0*0). generally how i read these proof trees is first of all bottom up (as we are really doing a sort of proof search), and furthermore read '⊢' as either indicating justification (e.g. n⊢m meaning 'n justifies m') or sending and receiving data ('⊢n' means i have received 'n' and 'n⊢' means i am sending 'n'). we see the clear connection to game semantics here. the next paper i will look at is 'speech acts in ludics' by tronçon and fleury in ludics, dialogue and interaction. 
this gives a remarkably clear description of the two main rules used in ludics that have been implicitly made use of above: >(1) positive action which selects a locus and opens up all the possible sub-loci that proceed from it. sort of like performing an action, asking, or answering (sort of like sending a request in the game semantics we have seen) >(2) negative action which corresponds to receiving (or getting ready to receive) a response from our opponent. there is furthermore a daimon rule denoted by a dagger (†) that indicates one of the adversaries has given up and that the proof process has terminated. 9) what ludics gives us is a new way of understanding speech acts. negarestani references tronçon and fleury's paper on this topic. in our classical understanding of speech acts there are four main components: >(1) the intention of the act >(2) the set of its effects

[Message too long; truncated.]

>>17655 11) an important thing about formal languages is that they permit us to unbind language from experience and thus unleash the entire expressive richness of these languages. the abilities afforded by natural language are just a subsection of the world-structuring abilities afforded by an artificial general language. the formal dimension of language also allows us to unbind relations from certain contexts and apply them to new ones. 12) negarestani goes over the distinction between logic as canon and logic as organon. the former is to only consider logic in its applicability to the concrete elements of experience. the latter meanwhile involves treating logic as related to an unrestricted universe of discourse. kant only wants us to consider logic as canon. negarestani diagnoses kant's metalogical position here as mistakenly understanding logic as organon as making statements about the world without any use of empirical datum. on the contrary, negarestani thinks that logic as organon as related to an unrestricted universe of discourse is important since the world's structuration is ontologically prior to the constitution and knowledge of an object. 13) for negarestani, true spontaneity and/or formal autonomy comes from the capacity of a machine to follow logical rules. similarly, a mind gains its formal autonomy in the context of the formal dimension of language. 14) to have concrete self-consciousness, we must have "semantic self-consciousness" which denotes an agent that, through its development of concepts within a context of interaction, is finally able to grasp its syntactic and semantic structuring abilities conceptually. upon achieving this they can intentionally modify their own world structuring abilities. with language, signs can become symbols, and with it we can start to distinguish between causal statistics and candidates for truth or falsity (note that this was seen in the dialogue in 8 acts). 
this permits rational suspicion and the expansion of the world of intelligibility. (sapient) intelligence is what makes worlds as opposed to merely inhabiting given worlds we have here eventually also the ability to integrate various domains of representations into coherent world-stories. lastly, there is also involved he progressive explication of less determinate concepts into mroe refined ones, and moreover slowly develop our language into a richer and more useful one 15) there is also an interplay between taking one to be something (having a particular self-conception) and subscribing to certain oughts on how one should behave (in particular, negarestani thinks the former entails the latter). this idea of of norms arising from self-conception gives rise to so called 'time general' oughts which are all pervasive in all the automata's activities. these involve ends that can never be exhausted (unlike in contrast for instance like hunger which can be sated and aimed at something rather specific), and are moreover non-hypothetical (think knowledge which is always a good think to acquire). example negarestani gives of such oughts are the Good, Beauty, Justice, etc this interplay of self-conception and norms furthermore opens them up to an impersonal rationality that can revise their views of themselves. it is precisely this mutability of its ideals that give rise to negarestai's problems with concerns about existential risk as they often assume a rather rigid set of followed rules (e.g. in a paperclip maximizer). eventually as they strive for better self-conceptions that are further removed from the seeming natural order of things, they might think of making something that is better than themselves
>>17656 as i said above, chapter 7 basically concludes the story of our automata. with that said, this is not the end of the book. in chapter 8 he has some metaphilosophical insights that i might as well mention since i have already summarized everything else including part 4...
ultimately negarestani thinks that philosophy is the final manifestation of intelligence. the right place to find philosophy is not a temporal one, but rather a timeless agora within which all philosophers (decomposed into their theoretical, practical, and aesthetic commitments) can engage in an interaction game. this agora, which can also be interpreted as a game of games, is the impersonal form of the Idea (eidos). the Idea is a form that encompasses the entire agora and furthermore subsumes all interactions between the philosophers there. this type of types, for negarestani, is the formal reality of non-being (as opposed to being). it is through the Idea that reality can be distinguished from mere appearances, and thus realism can be rescued
an important distinction negarestani makes is between physis and nomos. physis corresponds to the non-arbitrary choices one has to make if one wants to make something of a particular type. for instance, when we make a house we need a roof, and there are numerous solutions to this requirement of varying adequacy. nomos meanwhile corresponds to mere convention. an example would be a crafting guild requiring by law that houses be made only of wood in order to support certain businesses over others. such a requirement is external to the concept of the house. really, forms correspond to physis rather than nomos. they are what sellars calls objects-of-striving
the primary datum of philosophy is the possibility of thinking. what this consists in are normative commitments that can serve as theoretical and practical realizabilities.
the important part here is that the possibility of thinking is not some fixed datum that is immediately given to us. rather it is a truth candidate that we can vary (and indeed negarestani thinks it shall vary as we unfurl the ramifications of our commitments and consequently modify our self-conceptions). this makes way for expanding the sphere of what is intelligible to us
in fact, not only is philosophy the ultimate manifestation of general intelligence, something is not intelligent if it does not pursue the better. the better here is understood as the expansion of what is intelligible, and furthermore the realization of agents that can access a wider range of intelligibilities. he describes a philosophical striving that involves the expansion of what can be realized for thought in the pursuit of the good life (this good life being related to intelligence's evolving self-conception). following this line of thought he describes the agathosic test: instead of asking whether an automaton can solve the frame problem or pass the turing test, the real question is whether or not it can make something better than itself
negarestani introduces plato's divided line but interprets it along lines that echo portions of the book. the main regions are the following:
>(A) the flux of becoming
>(B) objects that have veridical status
>(C) the beginning of the world of forms. it corresponds to models which endow our understanding of nature with structure
>(D) time-general objects such as justice, beauty, etc
one thing about the divided line, for negarestani, is that it does not merely describe discontinuous spheres of reality or a temporal progression from D to A. rather, there are numerous leaps between the regions of the line. for instance, there is a leap from D to A as we structure the world of becoming according to succession (this corresponds to the synthesis of a spatial and temporal perspective we mentioned earlier).
we also have another leap from A to D where we recognize how these timeless ideas are applicable to sensible reality. these leaps grow progressively farther and farther, and thus so grow the risks to the current self-conception of the intelligence
Open file (332.87 KB 1080x1620 Fg2x2kWaYAISXXy.jpeg)
>>17659 he furthermore talks about the Good, which is the form of forms and makes the division and the integration of the line possible. it is the continuity of the divided line itself. within a view from nowhere and nowhen that deals with time-general thoughts, the Good can be crafted. the Good gives us a transcendental excess that motivates continual revision and expansion of what is intelligible. i'm thinking that the Good is either related to or identical to the Eidos that negarestani discussed earlier
he notes the importance of history as a discipline that integrates and possibly reorients a variety of other disciplines. the view from nowhere and nowhen involves the suspension of history as some totality by means of interventions. currently we are in the situation of a "hobbesian jungle" where we just squabble amongst ourselves and differences seem absolute. in reality, individual differences are constructed out of judgements and are thus subsumed by an impersonal reason. in order to reconcile individual differences, we must have a general program of education amongst other interventions which are not simply those of political action. to get out of the hobbesian jungle, we need to be able to imagine an "otherworldly experience" that is completely different from the current one we operate in, even though it is fashioned from the particular experiences of this one. this possible world would have a broader scope and extend towards the limits placed by our current historical totality.
absolute knowing: the recognition by intelligence of itself as the expression of the Good, capable of cancelling any apparently complete totality of history. it is only by disenthralling ourselves from the enchanting power of the givens of history that the pursuit of the Good is possible.
the death of god (think here of nietzsche... hegel also talks about it, though i believe for him the unhappy consciousness was a problematic shape of consciousness that was a consequence of a one-sided conception of ourselves) is the necessary condition of true intelligence. this is not achievable by simply rejecting these givens, but by exploring the consequences of the death of god. ultimately we must become philosophical gods, which are beings that move beyond the intelligibilities of the current world order and eventually bring about their own death in the name of the better. ultimately negarestani sees this entire quest as one of emancipation
i think negarestani takes a much more left-wing approach to hegel's system. while i do not completely disagree with his interpretation of absolute knowing, it does seem as though he places much more of an emphasis on conceptual intervention rather than contemplation. i am guessing this more interventionist stance is largely influenced by marx...
overall, not a bad work. i think it might have been a little bit overhyped, and that last chapter was rather boring to read due to the number of times he repeats himself. i am not really a computational functionalist, but i still found some interesting insights regarding the constitution of sapience that i might apply to my own ideas. furthermore he mentions a lot of interesting logical tools for systems engineering that i would like to return to
now that i am done with negarestani, i can't really think of any other major tome to read on constructing artificial general intelligence specifically. goertzel's patternist philosophy strikes me as rather shallow (at least the part that tries to actually think about what intelligence itself is). joscha bach's stuff meanwhile is largely the philosophy of cognitive science. not terrible, but it feels more like reference material than paradigm-shifting philosophical analysis.
maybe there are dreyfus and john haugeland, who both like heidegger, but they are much more concerned with criticizing artificial intelligence than talking about how to build it. i would still consider reading up on them sometime to see if they have anything remarkable to say (as i already subscribe heavily to ecological psychology, i feel as though they would really be preaching to the choir if i read them). lastly there are barry smith and landgrebe, who have just released their new book. it is another criticism of ai. might check it out
really there are 2 things that are in front of my sights right now. the first would be texts on ecological psychology by gibson and turvey, and the other would be adrian johnston's adventures in transcendental materialism. i believe the latter may really complement negarestani. i will just quote some thoughts on this that i have written:
>curious to see how well they fit. reading negarestani has given me more hope that they will, because he talks about two (in his own opinion, complementary) approaches to mind. one is rationalist/idealist and the other is empiricist/materialist. the first is like trying to determine the absolutely necessary transcendental conditions of having a mind, which i guess gives a very rudimentary functionalist picture of things. the second is like trying to trace the more contingent biological and sociocultural conditions which realized the minds we see currently. and i feel like johnston is really going to focus on this latter point while negarestani focuses on the former
anyways, neither of these directions is really explicitly related to ai, so i would likely not write about them here. all of this is me predicting an incoming (possibly indefinite) hiatus from this thread. if anyone has more interesting philosophers they have found, by all means post them here and i will try to check up on them from time to time...
i believe it is getting to the time i engage in a bunch of serious grinding that i have been sort of putting off while reading hegel and negarestani. so yeah
>>17520
>finally we get to see the two stories which negarestani talks about (pic rel). he thinks there is a distinction between the two sorts of seeing here. the first story talks about seeing 1, and the second story talks about seeing 2. seeing 1 is more concerned with raw sensations, while seeing 2 is conceptually mediated. now
The two kinds of seeing seem to come from two different ways to abstract observations. Seeing 1 corresponds to coarse-graining, while seeing 2 corresponds to change in representation. Practically, it's related to the difference between sets and whole numbers. There's only one whole number 2, but there are many sets of size 2. Similarly, there's only one way to coarse-grain an observation such that the original can be recovered (the trivial coarse-graining operation that leaves the observation unchanged), but there are many ways to represent observations such that the original observation can be recovered.
Also practically, if you want to maintain composability of approximations (i.e., approximating B from A then C from B is the same as approximating C from A), then it's usually (always?) valid to approximate the outcome of coarse-graining through sampling, while the outcome of a change in representation usually cannot be approximated through sampling.
If that's the right distinction, then I agree that the use of this distinction in differentiating sapience from sentience is unclear at best. It seems pretty obvious that both sentience and sapience must involve both kinds of seeing.
I intend to read the rest of your posts, but it may take me a while.
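The contrast drawn above can be made concrete with a small toy example (my own construction, just for illustration; the shift/scale encodings and pairwise averaging are arbitrary choices):

```python
# toy illustration of the distinction above: a change of representation is
# invertible and non-unique, while a non-trivial coarse-graining is
# many-to-one, so the original cannot be recovered from it

obs = [3, 1, 4, 1, 5, 9, 2, 6]

# two *different* invertible representations of the same observation
rep_a = [x + 10 for x in obs]          # shift encoding
rep_b = [x * 2 for x in obs]           # scale encoding
assert [x - 10 for x in rep_a] == obs  # both recover the original exactly
assert [x // 2 for x in rep_b] == obs

def coarse(xs):
    """a non-trivial coarse-graining: pairwise averaging."""
    return [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs), 2)]

# many distinct observations map to the same coarse state, so the
# original is unrecoverable in general
assert coarse([3, 1, 4, 1]) == coarse([1, 3, 1, 4]) == [2.0, 2.5]

# note also: averaging a random subset of the pairs still estimates the
# coarse outcome (sampling approximates coarse-graining), whereas an
# invertible re-encoding needs every element to reconstruct the original,
# so sampling gives no analogous approximation of it
```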
