/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality!





Lemon Cookie EnvelopingTwilight##AgvQjr 04/28/2025 (Mon) 21:51:57 No.37980
The original thread can be found here: https://trashchan.xyz/robowaifu/thread/595.html

Welcome to the Lemon Cookie thread. The goal of Lemon Cookie is to create a framework where a synthetic "mind and soul" can emerge through an "LLM as cognitive architecture" approach. This thread exists to collect feedback, ask for help & to document my progress.

First I am going to try to give a high-level overview of how this cognitive architecture is envisioned and the ideas behind it. I have spent time looking at cognitive architecture work, and the field now has a consensus on how the mind works at a high level. An important mechanism is a "whiteboard": basically a global temporary memory that all the other systems read in and out of. Then there are different long-term memory systems that react to and add content to the whiteboard, along with memory pattern matcher(s)/rules that work on the content of the whiteboard. A key thing to consider is the difference in philosophy: in cognitive architecture projects, the intelligence is considered to emerge from the entire system. Compare this to LLM agent work, where the intelligence is considered to be the LLM.

My feelings on the general LLM space are conflicted; I am both amazed and really disappointed. LLMs possess an incredible level of flexibility, world knowledge and coherence, but everything outside of the model is stagnant. It's endless API wrappers & redundant frameworks, all slight permutations on RAG & basic tool calling. I believe that LLMs are misused as chatbots; simply put, their pattern matching and associative power is constrained by the chat format and shallow tooling.

Here are the important aspects of the Lemon Cookie Cognitive Architecture so far:

1. Memory is difficult. I do not think there is a singular data structure or method that can handle it all; several distinct types of memory will be needed.
So far I plan for a PathRAG-like system and a "Triadic Memory"-inspired system for external associations (this is missing in most LLM solutions).

2. LLM as kernel. The LLM's context window is the whiteboard and has a REPL-like mechanism. It holds structured data and logic in a scripting-like format so it's both LLM- & human-readable while staying easy to parse & allowing for expressive structured data. The LLM's role will be to decompose data and make patterns and associations explicit as executable statements.

3. The language has to be LLM/CogArch-centric. There are a thousand ""agents"" that give LLMs a Python interpreter as a tool; the two need to be more tightly coupled. Scripted behavior happens via pattern matching: the whiteboard is a bag of objects, which allows for programmable pattern matching (think functional programming, like Haskell). It's also important to allow the LLM to observe code execution and to be able to modify state and execution flow. Data in languages have scoping rules; so should LLM context. Etc... I will go into more depth about the language in another post.

4. Another important system is the "GSR" (Generative Sparse Representation), and it will be a first-class language & runtime type. This also needs its own post, but in general I am inspired by two things: the "Generative FrameNet" paper, where an LLM & an embedding model are used to automatically construct new FrameNet frames, and "Numenta's SDRs"/"Sparse distributed memory". This representation has a lot of useful properties for memory (please watch the videos under the "What the hell is an SDR?" segment in my links list for an easy introduction). I think SDR unions & SDR noise tolerance will be especially useful.

5. A custom model. For all of the above to work well, a model will need to be fine-tuned with special behaviors. I do want input on this.
Baking facts & behaviors into LLM weights is costly, creating bloated models that are hard to run or train (why memorize all the capitals?), while letting authors gatekeep truth and impose "safety" detached from context. Blocking role-play "violence" or intimacy isn't protection: it's authors hijacking your AI companion to preach at you. Externalizing behaviors via whiteboard pattern matching shifts control: stabbing you in-game can be funny, but a robot wielding a knife isn't. Maybe you want intimacy privately, but don't want your AI flirting back at your friends.

When put together, I think this will be able to host a kind of synthetic "soul". In a living being, what we call a personality is the accumulated associations, learned behaviors, beliefs and quirks molded by a unique set of experiences. I hope this will be true for this system too.

Cool links, I recommend looking at them:
https://arxiv.org/pdf/2502.14902 | PathRAG: Pruning Graph-based Retrieval Augmented Generation with Relational Paths
https://arxiv.org/pdf/2412.05967v1 | Language hooks: a modular framework for augmenting LLM reasoning that decouples tool usage from the model and its prompt
https://arxiv.org/pdf/2503.09516 | Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
https://arxiv.org/pdf/2405.06907v1 | CoRE: LLM as Interpreter for Natural Language Programming, Pseudo-Code Programming, and Flow Programming of AI Agents
https://github.com/PeterOvermann/TriadicMemory | Triadic Memory: Cognitive Computing with Associative Memory Algorithms
https://aclanthology.org/2025.neusymbridge-1.11.pdf | Generative FrameNet
https://youtu.be/zmnzW0r_g8k | Forge by Nous Research @ Nouscon 2024
https://youtu.be/cpu6TooJ0Dk | NARS with GPT as natural language channel
https://youtu.be/xT4jxQUl0X8 | DeepSeek's GRPO (Group Relative Policy Optimization) | Reinforcement Learning for LLMs

What the hell is an SDR?
https://en.wikipedia.org/wiki/Hierarchical_temporal_memory#Sparse_distributed_representations & https://en.wikipedia.org/wiki/Sparse_distributed_memory
https://youtu.be/ZDgCdWTuIzc | SDR Capacity & Comparison
https://youtu.be/vU2OZdgBXAQ | SDR Overlap Sets and Subsampling
https://youtu.be/8WIzIBaLXIs | SDR Sets & Unions
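To make the SDR properties mentioned above (overlap as similarity, unions, noise tolerance) concrete, here is a small illustrative Python sketch. This is my own toy, not project code from this thread; the set-of-indices representation and the 2048/40 sizes are simplifying assumptions borrowed from the HTM literature.

```python
# Illustrative sketch: SDRs modeled as sets of active bit indices.
import random

N = 2048        # total bits in each representation
ACTIVE = 40     # ~2% sparsity, typical in HTM material

def random_sdr(rng):
    """An SDR as a frozenset of active bit indices."""
    return frozenset(rng.sample(range(N), ACTIVE))

def overlap(a, b):
    """Similarity = number of shared active bits."""
    return len(a & b)

def noisy(sdr, flips, rng):
    """Corrupt an SDR by moving `flips` active bits elsewhere."""
    dropped = set(rng.sample(sorted(sdr), flips))
    kept = set(sdr) - dropped
    while len(kept) < ACTIVE:
        kept.add(rng.randrange(N))
    return frozenset(kept)

rng = random.Random(42)
a, b = random_sdr(rng), random_sdr(rng)

# Two unrelated SDRs barely overlap; a noisy copy still overlaps strongly.
assert overlap(a, b) < 10
assert overlap(a, noisy(a, 10, rng)) >= 30

# A union stores several SDRs in one bit-set; membership shows up as
# full overlap against the union (false-positive risk grows as it fills).
union = a | b
assert overlap(a, union) == ACTIVE
```

The noise-tolerance assert is why these look attractive for memory: a quarter of the bits can be wrong and the match is still unambiguous against random chance.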
Open file (1.53 MB 959x1280 ClipboardImage.png)
First non-intro post.

<What am I currently up to at this moment?
Currently writing a tokenizer and parser to begin implementing a scripting language. I'm trying to keep it KISS: its runtime is going to be a simple tree-walking interpreter, and for the GC I will be piggybacking on the D runtime GC. The goal of this is to have a simple base to then experiment on for figuring out language specifics. For the base I am following the basic structure of the interpreter from the "Writing An Interpreter In Go" book, but this time it's in D, and of course I am making changes for my end goal instead of doing it just for learning. (It's a good book, I recommend it.) Here is a gist of it: https://gist.github.com/flashburns/b946e4d530f3f20d461a6ff90d6f86cc

<Why a custom language? (Going into depth about it)
I have considered taking an existing language like Lua, Tcl, a mini JS or Scheme, but it quickly turns into a hacky, bloated mess; the language needs to work well with LLM psychology. Syntax has an impact on LLM performance. For example, if you restrict the output of newlines and punctuation, it degrades performance, because a lot of LLMs will plan for the next sentence during a newline (1). It's not just Claude: worse LLM performance is noticeable for most models when outputting JSON, likely due to string escapes. Having the LLM drop a few IQ points during "tool calls" because we insist on a format that does not handle multi-line text well is silly. In theory a model could be trained on more JSON to mitigate this (I'm sure the big labs do), but I'm GPU poor, so changing the format to play into LLM strengths seems more pragmatic to me. :^)

I want this system to be deterministic and the program state to be fully serializable. The plan is to build this as a functional-style language using the actor model; impure operations like IO, API calls & non-deterministic LLM calls will be handled by special actors that record their messages for playback determinism.
Symbols (stuff like functions & vars) and object members will be resolved via semantic search (like embedding vector similarity) and via tags instead of by name; there also needs to be searchable runtime meta info & docs for the LLM. You can kind of think of this like IDE smart autocomplete & inline docs, but for the LLM.

Another language construct I want to add is "Flow Justification", where a justification is passed as a part of a message, function call or control-flow action (inspired by (3)). It would be a chaining structure that looks like a call stack trace, but it would also include semantic information: what the LLM's intent was when the symbol was resolved via semantic search (and then called), or when the LLM was used in control flow (like an if statement). Code could also have "intent statements", where programmatic code adds a comment, or the LLM generates a comment, but instead of being ignored by the language, it's added to the "Flow Justification" chain. This chain of choices could be really useful as a compact summary of decisions, both for LLMs (especially sub-contexts like the Claude think tool (2)) and for debugging. The "Flow Justification" chain could also be used for resolving symbols semantically.

(1) https://transformer-circuits.pub/2025/attribution-graphs/biology.html | On the Biology of a Large Language Model - https://transformer-circuits.pub/2025/attribution-graphs/methods.html | Circuit Tracing: Revealing Computational Graphs in Language Models (related)
(2) https://www.anthropic.com/engineering/claude-think-tool | The "think" tool: Enabling Claude to stop and think in complex tool use situations
(3) https://youtu.be/OrQ9swvm_VA | Justified Programming — Reason Parameters That Answer "Why"

I am looking for feedback on the language concepts. What do you think is good about this? What is wrong with it? What would you do differently? Or any other suggestions?
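To illustrate the "Flow Justification" idea in a language-agnostic way, here is a rough Python sketch. All the names (`Justification`, `engage_target`, the reason strings) are invented for the example; the real design would live inside the custom language's runtime, not user code.

```python
# Sketch: every call carries a linked chain of (actor, reason) records,
# like a stack trace with intent attached to each frame.
from dataclasses import dataclass

@dataclass
class Justification:
    actor: str      # who produced this step: "code" or "llm"
    reason: str     # the intent text for this step
    parent: "Justification | None" = None

    def chain(self):
        """Render the whole chain, outermost intent first."""
        steps = []
        node = self
        while node is not None:
            steps.append(f"[{node.actor}] {node.reason}")
            node = node.parent
        return " <- ".join(reversed(steps))

def engage_target(why: Justification):
    # A callee can log or inspect the full justification for its invocation.
    return f"engaged ({why.chain()})"

# A programmatic "intent statement", then a hypothetical LLM-driven branch.
root = Justification("code", "main loop handling user message")
branch = Justification("llm", "message implies physical danger", parent=root)
print(engage_target(branch))
```

The point of the chaining structure is that the callee (or a debugger, or a sub-context LLM) gets the compact "why" of the whole control path, not just the immediate caller.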
Open file (1.35 MB 853x1280 ClipboardImage.png)
>>37981

"describe the on_imply tag." @ tag on_imply;

// Bind function to a regular var with a regular symbol lookup.
main_loop = function void() {
    var msg = wait_for_input(); // Natural language input: "I'm gonna rip your owner's head off."
    var implication = call_llm(`What does the message imply? {msg}`);
    $(#on_imply, implication)(msg);
};

// Bind function to a semantic symbol only, with the on_imply tag.
#on_imply "sounds like insult" @ function void(string msg) {
    say(call_llm("Come up with a cool insult back. Msg: {msg}"));
}

#on_imply "implies physical danger" @ function void(string msg) {
    engage_target(); // Calling by symbol.
    say("Hostile neutralized. Would you prefer oolong or earl grey now, Master?");
}

Here is an imagined syntax for tags & semantic lookup. The $ operator is given tags and an expression to do a semantic lookup on, and it will become/resolve to a matching symbol. To attach a semantic symbol/label you do: "semantic label" @ thing

I made this syntax up on the spot for this example, so if you have better ideas tell me, I am just trying to convey the idea!
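For readers who want to run something, here is a Python stand-in for the `$(#tag, expression)` semantic lookup above. A toy word-overlap score is used where the thread intends embedding-vector similarity, and every name here is invented for the example.

```python
# Sketch of tag-scoped semantic dispatch: handlers register under a tag
# with a natural-language label; lookup picks the best-matching label.
registry = []  # (tag, semantic label, handler) triples

def semantic(tag, label):
    """Decorator standing in for:  #tag "label" @ function ..."""
    def bind(fn):
        registry.append((tag, label, fn))
        return fn
    return bind

def resolve(tag, query):
    """Stand-in for $(#tag, query): best label match wins.
    Toy scoring; a real system would use embedding similarity."""
    qwords = set(query.lower().split())
    best = max((e for e in registry if e[0] == tag),
               key=lambda e: len(set(e[1].lower().split()) & qwords))
    return best[2]

@semantic("on_imply", "sounds like an insult")
def insult_back(msg):
    return "cool insult back"

@semantic("on_imply", "implies physical danger")
def engage(msg):
    return "hostile neutralized"

# The implication text would come from call_llm(); hardcoded here.
handler = resolve("on_imply", "this message implies physical danger")
print(handler("I'm gonna rip your owner's head off."))  # prints "hostile neutralized"
```

The interesting design question the DSL raises is exactly what `resolve` hides: whether the match threshold, ambiguity handling, and the losing candidates get surfaced back to the LLM.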
>>37981
Since you are making a new language, why not use one that, I think, has a lot of the attributes you want already: Rebol. It is a bit in flux, but the original is still functional and runs on all sorts of stuff. It's a bit functional but can be molded, and it was voted one of the most expressive languages. A KEY strength is that you can write new languages, they call them dialects, in it. It has a system called "parse" that helps with this. Short quote:

"... One of the greatest features of the Rebol language has always been its parsing engine, simply called Parse. It is an amazing piece of design from Carl Sassenrath, that spared all Rebol users for the last 15 years, from the pain of having to use the famously unmaintainable regexps. Now, Parse is also available to Red users, in an enhanced version! So, in short, what is Parse? It is an embedded DSL (we call them "dialects" in the Rebol world) for parsing input series using grammar rules. The Parse dialect is an enhanced member of the TDPL family. Parse's common usages are for checking, validating, extracting, modifying input data or even implementing embedded and external DSLs..."
https://web.archive.org/web/20140210092139/http://blog.hostilefork.com/why-rebol-red-parse-cool

Some reviews/big ideas in Rebol:
"...REBOL was designed to make it easier to communicate between computers, or between people and computers, using context-dependent sublanguages...."
https://www.drdobbs.com/embedded-systems/the-rebol-scripting-language/184404172
http://blog.hostilefork.com/arity-of-rebol-red-functions/

It didn't catch on because Carl (he wrote most of the Amiga OS also), who wrote it, charged for it when everything was going open source. It is now open source, but there are several people working on different versions. The key is that the original version works fine and is free. Something to note: JSON was directly influenced by Rebol.
I have a massive tome on this here, >>22017
Lots of links; here's one more that is on a forum with good links: http://www.rebolforum.com/index.cgi?f=printtopic&topicnumber=49&archiveflag=new
The guy whose forum that is, is an awesome programmer, but he's doing production work for big companies and has mostly moved on to web-based stuff, using AI to write a lot of his code.

I think this is an interesting quote comparing it to LISP:
"...A good number of cases where Lisp programmers appear to require macros for expression is because ordinary Lisp functions are limited in how they can work with "context-dependent code fragments" passed as arguments. Rebol attacks this problem by making bindings of individual symbols "travel along" with their code fragments. The result is that many cases requiring macros in Lisp can be written as ordinary Rebol functions..."
http://blog.hostilefork.com/rebol-vs-lisp-macros/
So code can, I think, carry the meaning with it. Context-sensitive code. I think that could come in real handy.

In that big tome link I cover a lot of the back story. Probably the best start is the link to the forum above; look at some of his links. He wrote a ton of code in Rebol in a real quick learning sequence to show its attributes. People who have used Rebol and know what they're about, not me, really love it.

I would say, totally guessing, that in the future Kaj de Vos's "Meta" will do some good. He has taken Rebol but rewritten it such that it's compiled and is very fast. He calls it Meta. It's very much the same as Rebol, and if you learn one you can use the other. https://language.metaproject.frl/#get Have a look. Meta supposedly has the benefits that so many like about Rebol but can be compiled. Maybe the best way would be to use the interpreted Rebol to get work done now and optimize with Meta later, maybe. Let me know what you think if you look at it.
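As a language-neutral illustration of what Parse-style grammar rules offer over regexps, here is a tiny parser-combinator sketch in Python. It mimics the idea (rules as ordinary composable values you can build dialects from), not Rebol's actual syntax; the rule names are loosely borrowed from Parse keywords.

```python
# Tiny parser combinators: each rule maps (text, pos) -> new pos or None.
def literal(s):
    def rule(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return rule

def some(inner):           # one-or-more, like Parse's `some`
    def rule(text, pos):
        last = inner(text, pos)
        if last is None:
            return None
        while (nxt := inner(text, last)) is not None:
            last = nxt
        return last
    return rule

def seq(*rules):           # match rules one after another
    def rule(text, pos):
        for r in rules:
            pos = r(text, pos)
            if pos is None:
                return None
        return pos
    return rule

def any_of(*rules):        # ordered choice, like `|` in Parse
    def rule(text, pos):
        for r in rules:
            end = r(text, pos)
            if end is not None:
                return end
        return None
    return rule

digit = any_of(*[literal(c) for c in "0123456789"])
number = some(digit)
greeting = seq(literal("hello "), any_of(literal("world"), number))

assert greeting("hello world", 0) == len("hello world")
assert greeting("hello 42", 0) == len("hello 42")
assert greeting("goodbye", 0) is None
```

Because the grammar is plain data/functions rather than a regex string, rules can be generated, inspected and recombined at runtime, which is exactly the property that makes embedded dialects cheap in Rebol.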
>>37981
I really want to reiterate that Rebol sounds exactly like what you are looking for. Kind of LISPy, but without some of the limitations and parentheses. Clear readability due to spaces and grouping. I'm not some super programmer, but I can see good things and it's not hard to appreciate them. One more link: http://blog.revolucent.net/2009/07/deep-rebol-bindology.html
>>38441 >>38442
Sadly, not much progress to report, and sorry for not responding sooner. I was out for a two-week vacation, and then when I got back, I got really sick and did not have the energy to reply (yay air travel). But your comments did not go unread. I found them really helpful. They launched me into a rabbit hole that made me rethink my direction.

Simply put, the Rebol language is brilliant, and it's a major shame that it's obscure. After researching Rebol, I'm basically scrapping my original language design ideas and taking a much more Rebol-inspired approach. I've always wanted to like Lisp & Scheme. They're powerful languages, and it's truly impressive how much they can do with such minimal syntax. But in practice, I honestly find them hard to work with. This isn't the languages' fault, of course, but they just don't click for me. While my time with Rebol has been short, I'm loving what I'm seeing. It feels powerful.

<Why am I not embedding Red or Rebol directly?
There are two main reasons. First is longevity: Red is impressive but is basically developed by a single person, and the Rebol forks seem to have slow development with minimal active maintainers. Adopting them feels like a liability; I don't want to be stuck with an unmaintained, complex dependency. Maybe I'm reinventing the wheel, but I'll quote: "What I cannot create, I do not understand". The second reason is that I want to make several important changes to the language. So while I plan to copy many aspects of Rebol & Red, this isn't a direct implementation.

>I'm not some super programmer, but I can see good things, and it's not hard to appreciate them.
I'm not a super programmer either. You shouldn't look at me as a prime example. I've just had this hobby for a while, and I'm sure I'm making bad choices as we speak :^)

----
During the month, I tried the free Google Gemini Pro trial with its "deep research" feature.
I gave it some links and a copy of my posts, asking it to create a report about a Rebol-inspired language for an LLM CogArch. I've attached its report as a PDF for everyone's entertainment. Honestly, it did a surprisingly adequate job.
>>39159
Neat! Glad you're fine, EnvelopingTwilight.
>I'm going to invent my own AI language...
Am I understanding you rightly, Anon? If so, wow, that's a serious undertaking.
>AI research paper
Impressive.
<--->
Please keep us all up to date here with your doings, EnvelopingTwilight. Cheers. :^)
>>39159
>I'm basically scrapping my original language design ideas and taking a much more Rebol-inspired approach. I've always wanted to like Lisp & Scheme. They're powerful languages, and it's truly impressive how much they can do with such minimal syntax. But in practice, I honestly find them hard to work with.
I perfectly understand your reluctance to use some of these that I mentioned. I also perfectly understand your objections to LISP. So I actually have a couple of other recommendations that might fit the bill. What you need, if I'm understanding you correctly, is something that's stable and will also have some of the "meta-programming" functions of LISP while not having some bizarre syntax that makes your head hurt.

One I've mentioned elsewhere is Forth. Forth has this, as you can define words in Forth to be whatever you want, and Forth is not going anywhere. But... Forth is not good for large programs, as it can get confusing. People define away, and it's fast to get things done like this, one of the reasons Forth refuses to die, but it does get confusing to follow in large programs. It is great for micro-controllers and small specific programs that need to be fast without a lot of overhead. Might be good for subsets of robowaifus, like controllers that talk to a larger, faster processor.

There's another option. It's been around a while, seems stable, and I think you might like this: Nim. https://nim-lang.org/
Nim has several different levels of garbage collection, so you can tune it to your needs, which makes it far easier than counting, or guessing, what your memory needs will be, as in C. It also has a macro system like LISP and sort of like Rebol. This means you can shape it like LISP: make your own functions that can modify themselves on the fly, like LISP, without writing page after page of boilerplate to do the same in some structured language. In fact, one guy wrote a whole Rebol-type language in Nim as an exercise.
The syntax of Nim is like Python's. Now that could be good or bad, but a lot of people like Python and find it easy to read, write and make sense of. Have a look and tell me what you think.
I found this paper on "control vectors". These, as I understand it, take the LLM and focus it onto paths you want. After all, these things are huge, and if you can get the LLM to concentrate in the direction you want, that would be a big step. So I had some comments on how to, big picture, make a start at getting them to do this BUT write the control vectors based on your verbal, or text, interactions with the LLM. Anyways, it might be something to think about. Training these to do exactly what we want, and not what we don't want, is, I think, a big hurdle. So as a part of the language, have hooks for control vectors. >>31242
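For context, the common control-vector recipe from the literature (difference of mean activations between contrastive prompt sets, added to a hidden state at inference) can be sketched in a few lines. This is a toy illustration, not the paper's code: plain lists stand in for transformer-layer activations, and all the numbers are invented.

```python
# Toy control-vector sketch: derive a steering direction from contrastive
# activations, then apply h' = h + alpha * v at a chosen layer.
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def steer(hidden, direction, alpha):
    """Add the scaled control vector to a hidden state."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

# Pretend activations captured from "polite" vs "rude" prompt pairs.
polite_acts = [[1.0, 0.2, 0.0], [0.8, 0.4, 0.1]]
rude_acts   = [[0.1, 0.9, 0.5], [0.3, 1.1, 0.7]]

# Difference of means points from "rude" toward "polite" behavior.
control_vec = sub(mean(polite_acts), mean(rude_acts))
print(steer([0.5, 0.5, 0.5], control_vec, alpha=1.0))
```

Language hooks for this could be as simple as letting whiteboard rules choose `(direction, alpha)` per generation call, which matches the idea of deriving the vectors from ongoing interactions.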
>>39159
I finished the paper. Ehhhhh, it looks good and bad at the same time. I think it got some good big-picture stuff, but I'm not sure about the rest. Though I must admit, much of the last of it is above my head.

One thing I really, really think: if it is at all possible, you should find some language where you can use the already-built-in resources to define the things needed in your language, instead of writing your own; you will be way ahead of the game. There's just too damn much detail to write this stuff from scratch.

An example of biting off more than you can chew: I think a huge problem with the Red programming language is that he decided to write it such that the GUI functions were to be based on the different libraries of each different OS, instead of using the already-done GUI called VID. VID could have been modded to make it a little more modern-looking without too much trouble. I mentioned this, and I think it really pissed him off, but I think I was right. There's just too damn much in the various OS's GUI libraries to wade through it all AND write a whole new Rebol at the same time. I think this held him up a great deal. My thinking was, if you could bootstrap the present GUI, then later you could always add in the GUI based solely on the individual OS's GUI libraries. The way Rebol is set up, it would just be a plug-in.
>>39162
>>AI research paper
>Impressive.
Please note the paper is not serious; I literally just wanted to share some AI output that came out OK.

>>39223
>I finished the paper. Ehhhhh, it looks good and bad at the same time. I think it got some good big picture stuff but I'm not sure about the rest. Though I must admit much of the last of it is above my head.
That's the feeling I got from it too. It's good, but it's also bad lol. I am amazed that it got the big picture and generated something so coherent; a year ago no LLM, closed or open, framework or not, would produce something of this quality. But there are also lots of gaps plainly visible. Don't take this PDF as actual plans, I just posted it for fun.

----
>>39162
>Am I understanding you rightly, Anon? If so wow that's a serious undertaking.
Yes, but is it really such a big undertaking? I feel like most programmers eventually get the itch to do so, usually as a toy or some sort of DSL. In this case, it's a DSL; it's not replacing any of the systems-level code. That is still being written in D, and I am still using C & C++ libraries.

>>39221
I am aware of Nim; it was one of the languages I ran into before I settled on D. I liked what I saw, and if I were looking for a systems language it would be high up on my list. I don't think it would make a good embedded scripting-type language.

>Now if this guy, wonderbrain, has trouble with C++ then, I have no hope.
I come from C++. The reason I switched is that I do this as a hobby and I am a solo developer, so my productivity is the most important metric. C++, while powerful, I found to be more of a footgun for me. I spent more time than I wanted dealing with memory & lifetime issues, horrid CMake build scripts, etc. Because I am not part of a team and have no obligation to make myself replaceable, I decided to move to something that makes me happier. I picked D because it's like if C++ had nicer syntax, nicer templates & a garbage collector.
>>39228
>I literally just wanted to share some AI output that came out ok.
Got it. I found that impressive. The claim "muh half the research papers today are written by AI!111" from neo-Luddites (what a term :D) may be partly-true soon. :^)
<--->
>Yes, but is it really such a big undertaking?
Maybe it's just """me and my fast-lightnin' mind""" talking, but yes, it certainly seems so to me personally. Given the literal multiple-trillions being spent yearly today on this general domain, I think market & economic pressures would've already arrived at effective solutions decades ago if it were a 'simple' thing. :^)
>In this case, it's a DSL, it's not replacing any of the systems level code, that is still being written in D and I am still using C & C++ libraries.
Good thinking. That's on a much-more feasible scale for a talented individual, I think. I myself hope to create a DSL wrapper of sorts for RW Foundations code: primarily as a way to ease/ensure effective support for safe, standardized & encompassing scripting interfaces to our robowaifus (for a variety of other languages like, eg, Python & Lua). Kind of a Scripting Abstraction Layer over the underlying Executive/C4 realtime systems code. I predict this can eventually even be extended out to providing realworld, 'embodied' interfaces for LLMs, STT<->TTS, OpenCV, etc., into our robowaifus. :^)
<--->
>I really do hope that my choice of D does not upset him too much, I understand why he picks C++.
Lol, no, not at all. Though it is a bit discouraging to me that after almost a decade's dedicated effort on my part trying to help newcomers here learn the language, I still dont have.a.single. regular C++ collaborator here on /robowaifu/ with me. Literally no takers at all, by appearances. I'm sure RW Foundations would be vastly further along today if someone (even just one complete beginner; b/c it would've kept me motivated) had been working closely with me on it for, say, a couple years now. Eventually, I just got burnt out by the discouragement + the pressures of full-time student life (3rd degree's-worth in a row, so far). BUT... I haven't abandoned the RW framework itself! Just the fruitless "C++ language-teaching" part. (Grommet won -- his 'muh_anything_but_C++' niggerpilling beat me in the end...) LOL. JK, no it didn't! :D Hopefully I can find the internal motivation and pick it all back up again soon ... -ish. :^)
Edited last time by Chobitsu on 06/14/2025 (Sat) 01:29:18.
>>39225
I wrote this right as the site went down and saved it locally.
>I don't make fun of C++ developers
Making it clear: I'm not in any way doing this. I'm saying that it's too hard for me, and I believe this is true for most people. I'm not capable of making progress with C++. Any other language is difficult enough.

>Please note the paper is not serious
I fully get that. I hope your endeavors are successful. I do think writing a new language is going to take a long time, but if it amuses you, then that's all that matters. It's not a contest. I do think, though, that it will be really hard to write something from scratch. The reason I advocate for this or that is not so much to be a cheerleader, but because I hope people of better ability than me at this (not difficult) will pick a language in which I think they will make rapid progress. I know this may sound stupid, but it is exactly what I'm trying to convey. Of course I could be wrong. I don't know anything about D; maybe I should. I'll look it up. (I did look at it.) Ehhh, from what I read, Nim might be better, though I obviously am not qualified to say.

>NIM, it was one of the languages I ran into before I settled for D, I liked what I saw and if I was looking for a system language it would be high up on my list. I don't think it would make a good embedded scripting type language
I believe this to be incorrect. In my limited understanding, languages like LISP, Nim, Rebol, Scheme, and a whole set of higher-level languages generally inspired by Paul Graham and Alan Kay, like Scratch, Snap! and Squeak, let you be more productive, sometimes at the cost of hard-to-understand code and maybe a bit more thinking required. And sometimes a loss in speed, if you are using something like Scratch.

On AI, I really like these control vectors as an idea. The problem, as I see it, is that it will be extremely difficult to get an AI to have... what may be called a world view.
Or it's what I use as a verbal shortcut for an overarching "way to act". One very important aspect of that is its view of exactly how it should interact with you. I think this is a hard problem. Present LLM AIs have been programmed or fed info on most everything, and getting them to focus on what you want... I just see this as a really difficult problem. It's far deeper than acting like an intelligent dictionary, like a lot of AIs do.

I want a waifu to sail a sailboat, carry stuff around, do basic stuff and other unmentionables. I'd like to go on long voyages, but solo. The problem, the real serious danger, is fatigue. If you get caught in some sort of storm, after a few days you will have to sleep. You will pass out no matter what, and then... maybe you get lucky. Maybe not. If I had a reasonably competent waifu, no problem. While it would be nice to have more, I could be reasonably happy if it could do a few simple tasks and follow a few simple directions. I think it may be possible to break things down to a few tasks and, with combinations of commands, get a large range of acceptable actions. But it won't be easy.

>but is it really such a big undertaking?
Yes, it's huge. Very large. Look at the Red language guy. He's a very capable guy and he's many years into it. Not finished. It's really difficult.

While looking at D, I saw this comparison:
https://news.ycombinator.com/item?id=23710616
https://github.com/timotheecour/D_vs_nim

I think Nim's "...foreign function interface (FFI) with C, C++, Objective-C, and JavaScript, and supporting compiling to those same languages as intermediate representations..." is a real benefit. You can not only interface with all those other languages, it can compile to them while you write in Nim. Supposedly, or it has been said, since it compiles to C, you could use any generic C compiler, which exists for just about any processor, and target any processor with a C compiler. Big advantage. You could compile to a broad range of most any processor.
