/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

HOW TO SOLVE IT Robowaifu Technician 07/08/2020 (Wed) 06:50:51 No.4143
How do we eat this elephant, /robowaifu/? This is a yuge task obviously, but OTOH, we all know it's inevitable there will be robowaifus. It's simply a matter of time. For us (and for every other Anon) the only question is will we create them ourselves, or will we have to take what we're handed out by the GlobohomoBotnet(TM)(R)(C)?

In the interest of us achieving the former I'll present this checklist from George Pólya. Hopefully it can help us begin to break down the problem into bite-sized chunks and make forward progress.
>---
First. UNDERSTANDING THE PROBLEM
You have to understand the problem.
>What is the unknown? What are the data? What is the condition? Is it possible to satisfy the condition? Is the condition sufficient to determine the unknown? Or is it insufficient? Or redundant? Or contradictory?
>Draw a figure. Introduce suitable notation.
>Separate the various parts of the condition. Can you write them down?

Second. DEVISING A PLAN
Find the connection between the data and the unknown. You may be obliged to consider auxiliary problems if an immediate connection cannot be found. You should obtain eventually a plan of the solution.
>Have you seen it before? Or have you seen the same problem in a slightly different form?
>Do you know a related problem? Do you know a theorem that could be useful?
>Look at the unknown! And try to think of a familiar problem having the same or a similar unknown.
>Here is a problem related to yours and solved before. Could you use it? Could you use its result? Could you use its method? Should you introduce some auxiliary element in order to make its use possible?
>Could you restate the problem? Could you restate it still differently? Go back to definitions.
>If you cannot solve the proposed problem try to solve first some related problem. Could you imagine a more accessible related problem? A more general problem? A more special problem? An analogous problem? Could you solve a part of the problem? Keep only a part of the condition, drop the other part; how far is the unknown then determined, how can it vary? Could you derive something useful from the data? Could you think of other data appropriate to determine the unknown? Could you change the unknown or the data, or both if necessary, so that the new unknown and the new data are nearer to each other?
>Did you use all the data? Did you use the whole condition? Have you taken into account all essential notions involved in the problem?

Third. CARRYING OUT THE PLAN
Carry out your plan.
>Carrying out your plan of the solution, check each step. Can you see clearly that the step is correct? Can you prove that it is correct?

Fourth. LOOKING BACK
Examine the solution obtained.
>Can you check the result? Can you check the argument?
>Can you derive the result differently? Can you see it at a glance?
>Can you use the result, or the method, for some other problem?
---
>thread-related : (>>24047)
>===
-corrected author's name
-add crosslink
Edited last time by Chobitsu on 07/19/2023 (Wed) 14:10:30.
I've recently become aware of the field of systemized knowledge and category theory. The 'UNDERSTANDING THE PROBLEM' part is obviously related here. Since that's the first step in the OP, then that's probably where to begin. We have a library index, so maybe we can start working from there?
It occurs to me that systemized knowledge may possibly be not only a means, but an end as well. It stands to reason that if we can successfully use this approach to untangle the confused web of intricacies and dependencies necessary to be able to devise wonderful and appealing robowaifus -- then the robowaifus themselves can use a similar approach to sort out the reality they need to deal with to become good robowaifus.
At the risk of turning this thread into less of a prescriptive reductionism and more into a subjective blog, I'll forge ahead. (Actually I think the issues are related). My chief difficulty here -- the main thing holding me back from having produced a working robowaifu of some sort already here -- is I have the dumb. And frankly, I think all of humanity does too. We all have the dumb. AFAICT, systemized knowledge is a kind of approach to allow us to take in an overly-large, overly-complex topic (creating great robowaifus, say) and break it down into more manageable and 'bite-sized' chunks. I think these smaller bits are easier to digest, mentally speaking. Plus, once a body of knowledge has actually been systemized well enough, then it also accommodates men of varying mental capacities well. Namely, you can 'zoom-in and zoom-out' as it were on the specific sub-topic under consideration. In other words, it's an approach that gives us good abstractions, while still allowing details to be unpacked as needed. And back to the point of 'systemized well enough', once this has been done sufficiently well, it kind of lifts a topic up out of the realm of mere hearsay, and more into the realm of a legitimate hypothesis. In other words, it's becoming more like a scientific theory at that stage. While this isn't any particular panacea in and of itself (I think I maintain a healthy skepticism of science in general at least for the politically-charged topics) it does begin to lend some established rigor which lays a foundation from which to build further progress upon. >tl;dr This way can help us smol the dumb a little. :^)
>>8751 > it's becoming more like a scientific model at that stage.*
Potentially related.
>On the Relevance of Design Knowledge for Design-Oriented Business and Information Systems Engineering
>Conceptual Foundations, Application Example, and Implications
>The engineering-based development of techniques in business and information systems engineering (BISE) requires knowledge on the part of the system designer. The paper points out the importance of this design knowledge in the course of scientific design processes and provides a framework for systemizing design knowledge. The framework is used to explain scientific design knowledge about the modeling technique of event-driven process chains. Implications of design knowledge in the context of BISE conclude the contribution.
>The evolution, challenges, and future of knowledge representation in product design systems
>abstract
>Product design is a highly involved, often ill-defined, complex and iterative process, and the needs and specifications of the required artifact get more refined only as the design process moves toward its goal. An effective computer support tool that helps the designer make better-informed decisions requires efficient knowledge representation schemes. In today’s world, there is a virtual explosion in the amount of raw data available to the designer, and knowledge representation is critical in order to sift through this data and make sense of it. In addition, the need to stay competitive has shrunk product development time through the use of simultaneous and collaborative design processes, which depend on effective transfer of knowledge between teams. Finally, the awareness that decisions made early in the design process have a higher impact in terms of energy, cost, and sustainability, has resulted in the need to project knowledge typically required in the later stages of design to the earlier stages. Research in design rationale systems, product families, systems engineering, and ontology engineering has sought to capture knowledge from earlier product design decisions, from the breakdown of product functions and associated physical features, and from customer requirements and feedback reports. VR (Virtual reality) systems and multidisciplinary modeling have enabled the simulation of scenarios in the manufacture, assembly, and use of the product. This has helped capture vital knowledge from these stages of the product life and use it in design validation and testing. While there have been considerable and significant developments in knowledge capture and representation in product design, it is useful to sometimes review our position in the area, study the evolution of research in product design, and from past and current trends, try and foresee future developments. The goal of this paper is thus to review both our understanding of the field and the support tools that exist for the purpose, and identify the trends and possible directions research can evolve in the future.
>>8753
Are you sure you can use that somehow, or is it some rabbit hole you want to jump into out of curiosity? On a quick glance, it strikes me as very theoretical.
>>8754
Okay, the picture helps.
Just stumbled over this, which might fit in here. It's about Steve Jobs on being smart: zooming out to find connections, unique life experiences, openness to experience, being rather extroverted, avid reading to change perspectives (especially if one isn't an extrovert), or gathering experiences in other ways: https://youtu.be/e46qMomIT8Y
>>8755
>Are you sure you can use that somehow, or is it some rabbit hole you want to jump into out of curiosity? On a quick glance, it strikes me as very theoretical.
No, I'm not 'sure' of anything at this point. I'm simply trying to explore prior art towards systemized knowledge. I feel pretty sure we need help in this area and I'm trying to explore a new breakthrough for us all. I think most of us have been overwhelmed in the past at the sheer volume of topics involved here (I know I have), and I think that effect has slowed our progress as a group. If we can spell out more clearly a methodical approach for everyone, then it would help a number of us. The RDD >>3001 was an overview of it before, but we need to begin fleshing out some scientific/engineering rigor. The Robowaifu Systems Engineering thread is probably at least indirectly related as well >>4639 . After all, we're not the first group to tackle a large project, and thankfully there's a lot of information out there from past design/engineering/production/manufacturing groups. I'm simply trying to figure out a way we can capitalize on that information here.
>>8756
Neat, I'll give it a watch Anon. BTW, while I admire 'smart' people in general, and hope to be smarter some day, my real goal here is to discover a good methodology that us not-so-smart Anons can follow and still succeed at crafting robowaifus. I hope we can manage to find such an approach here.
>>8756 That was good. Motivates me to increase my general reading levels up to my past standards at the least. Coincidentally, I had already been personally trending towards that during this new year, so yea.
>>4143 Did you look into Unified Modeling Language (UML)? https://en.m.wikipedia.org/wiki/Unified_Modeling_Language or an alternative https://en.m.wikipedia.org/wiki/Modeling_language e.g. SysML? I didn't look much into it. UML is supported very well by standard Debian installations, including graphical editors and libraries for programming languages.
>>8771 Yes, I'm sure that UML et al. would be pretty useful for this arena of endeavor. Personally, I'm much more fond of Terry Halpin's ORM (Object Role Modeling). >>2303 >>2307 >>2308 I think its graphical syntax is far more intuitive and more flexible than UML's. Regardless, either would be helpful. Actually, I hope to create a robowaifu AI development system based around Halpin's ORM as a GUI-based system to 'wire' together knowledge representations in a way that will hopefully be both easy-ish to read and to reason about. Thanks Anon, good suggestion!
>>8773 Okay, but there's nothing to work with in the standard repositories for Debian (Raspbian). Would have made things easier.
>>8785 Ahh, true. Actually I haven't written such a tool yet. There's kind of a low-energy community around this 'language', but afaict it's just business boomers trying to foolproof their SQL systems. There's also a tool written by an old guy called NORMA that basically acts as a plugin for Visual Studio. AFAICT, I'm literally the only person who recognizes how valuable this could be for AI usage to allow non-experts to assemble knowledge representations. Unless we here do it, it probably will never happen. https://github.com/ormsolutions/NORMA https://www.ormfoundation.org/
>>8787 >>8789 Thanks. I might look into it at some time, though if there are no tools then I think for organizing stuff, UML will probably work better. Also, the link didn't work. https://youtube.com/playlist?list=PLzr5fRV1AGV9EBDnqI73HiI39KggzWX3y https://youtube.com/playlist?list=PLxumuDj9hbvrLM_GMPFC8TZdTtcJQyFtB
>>8844 >Also, the link didn't work. Ahh, my apologies. Thanks for catching that Anon!
>>4143 Hardware advances first, before software. So to take the first bite we need to advance our hardware to quantum computing levels. Then make AI personality software to become as human as possible. Then body software to control the body. That is the short and overly simplified version.
>>9129 Hmm, maybe you have a good point Anon. But OTOH, hardware is pretty hard for us individually to advance very far (yet). But it seems like software is something that we ourselves each can do something with pretty soon. What do you think?
>>9131 You are right, one person can't design a better polymer for plastic, a better design for servos, or better boards. So yes, right now each one of us can make better software like AI chats, muscle and movement simulations, even programs to get each of us to add a training cycle to a bigger AI training program. I am just saying that in the grand view of things the hardware needs to advance to handle our robowaifus. What we as individuals can do is make present waifus like Elf Sophie and set that as a base parameter. When we know where we are at, we know what routes we can go down. Besides that we can work at AI and robot thinktanks, or invest in promising companies in the hope they make a way through the fields we can't go down. We can even just advertise the idea of robowaifus.
>>9134
>When we know where we are at, we know what routes we can go down.
True enough. Always good to start where we are with what we have on hand Anon.
>We can even just advertise the idea of robowaifus.
We have had a few ideas about that in that one thread, and we did make a few contacts on other IBs. But my belief is once we create a basic robowaifu kit that costs ~US$2K to build from scratch, and it can run decent chatbot software and move around in a basic way and do 'judgement tasks' (like washing dishes) successfully -- in short order we'll have so much traffic here by simple word of mouth it will be like a zerg rush. 100's of anons from all over already are vaguely aware of this board.
>>9138 I like to think of this: if we want people to come into the topic and contribute (important part) then we need something to grab them and hook them in. You are right with a kit. I say that kit should be a simple companion bot for old people, or something like a Roomba. If we go fully humanoid we lose them because they think it is a sex doll. So to hook the masses we need a cute, slightly-more-robot-than-humanoid bot that has good companion features. People love Roombas, so if we give them something like that with basic emotions? They will eat it out of the palm of our hands. Besides that we can get artists involved and scream propaganda from the roofs. Convincing people that robots are more human, or at least more likeable, is the goal. Go to conventions, webinars, shit, even streaming helps. Break the illusion and they will spread us by themselves.
>>9147
>They will eat it out of the palm of our hands.
Kek. Hardly my agenda, personally. My goal here is basically altruistic. However, you might be interested in the /biz/ threads anon started. Have a look: >>3119 >>1642
>Break the illusion and they will spread us by themselves
Very solid point Anon, I like that idea.
>>9151 What I mean when I say break the illusion is a two-fold illusion. The first and simplest one is women holding sex and relationships over our heads. We break the hold women have over men and we even the field, bringing in an age where sex alone is meaningless and having a personality and good values matter more than triple D's. The second is the illusion of human relationships. You, being on this board and into these subjects, will have heard of humans developing emotions for inanimate objects. Case in point is Roombas: there are people that consider them one of their own family and get depressed if one gets broken. I even read an account from the old /clang/ board on 8kun that a soldier got emotional when a bomb defusing robot got broken and was asking the army mechanic not to get a new one, but rather to fix his partner. Humans make connections, and people will realise you don't need to be dependent on people. One example of humans needing a new outlet for emotions and relationships is escorts. If you read some escort accounts, there are men just wanting emotional bonding because humans are judgemental. When we as a species have a better way to build emotional bonds, like we do with pets and loved ones, then we will see less mental health decline.
>>9241
>I even read an account from the old /clang/ board on 8kun that a soldier got emotional when a bomb defusing robot got broken and was asking the army mechanic not to get a new one, but rather to fix his partner.
I hope we can find sauce on that Anon. It's a pretty notable example of the sentimental attachment we can have for non-human things. And yep, I pretty much assumed that's what you meant by your phrase. No doubt about it; desirable, appealing robowaifus will overturn a boatload of (((systems and plots))) that are intended to abuse and capitalize on men's ability to create. The men are the only ones suffering under the current schemes. Great robowaifus will change all that. So, we're way off-topic ITT. If you'd like to continue this, I'd suggest the basement as the right venue for it Anon.
>>9242 Alright. Though I put more thought into the question and that led to more questions. The overall question of how we will get robowaifus is one that spawns questions like "when will the AI be good enough to be self sufficient, when will the materials to make the robowaifu be cheap enough to make them." So I have to say that we need to focus on two groups: hardware and software. Hardware will work on the circuits and body while the software group will work on the emotions and personality.
>>9505
>Alright. Though I put more thought into the question and that led to more questions.
No, that's perfectly alright Anon. That's just part of what comes along with the territory of using IBs to work together creating robowaifus. And as I mentioned in my 'vision statement' posts (>>2701, >>2741), the benefits of doing so far outweigh the issues involved. OTOH, that also means we have to keep a tight rein here for tracking and staying on-topic within threads. /robowaifu/ is primarily an engineering board, and all engineers need good documentation to do our jobs well. This is a very complex topic, and the board itself is currently our primary 'document' of all our efforts here. And the catalog page is its main access point for finding information here -- kind of like a table of contents in a book. By keeping things on topic, it's like we're forming good 'chapters' for our 'book' here. Make sense?
>...So I have to say that we need to focus on two groups: hardware and software.
OK. Good, on-topic point. :^) Still, we'll need to sub-divide & further sub-divide both of those domains to get the problems down to 'bite-sized' chunks we're likely to make good progress on relatively quickly, and whose work can be distributed among us for greater progress rates as well. But that's a really good starting place Anon. Can you share more ideas here about the circuits and body, and the emotions and personality? What smaller parts of these can we be thinking about r/n to help solve the problems we all face in creating robowaifus?
>>9516 So I will break it down: for hardware we have electrical and mechanical, and from there in electrical we have motion, vision, and battery. We need to work on electrical responses to move the body smoothly and efficiently. We need to make the waifu see and understand what she sees. The battery part needs to be worked on to actually run the waifu for a long time without being bulky. For mechanical we have material, skin, and skeleton. Taking the simplest approach, we have the skeleton of the waifu, which needs to be lightweight, durable, and able to hold the wires for a long time. Next is skin, which honestly can be whatever people want, but in general we want cheap, light, and sturdy skin. Now materials brings it all together because we need to work on turning the materials we have with us now (aluminum, 3D print plastic, copper, resins) and find better materials that are stronger and cheaper. On the software side we have body related software and personality related software. Body would be broken down into movement, vision, and expression. Movement software to stabilize her, let her walk and grab things. Vision to actually make sense of camera data and help update plans with new information. Expression software for the face and body language so she can be sad or confident. On personality software we have voice, personality, and emotions. We need to have her have a good voice; the best idea would be a voice AI that evolves to make voices as human and natural as possible. Personality to let her have a spark of life; you can be rudimentary with it and put in gestures or tone that she will default to. Emotions to not have a sarcastic or dead-eyed look. That is what I think helps. But break it down more.
>>9524
This is a great list Anon! Almost like you've been on /robowaifu/ before or something... :^)
>But break it down more.
Alright. First I'll take your post and kind of itemize it. Then I'll plan to make another post that takes that items listing and references related xposts. Sound good? I'll also go ahead and break down the software side into one more category, Planning and Awareness software, and also add other items like Hearing.

Electrical
Motion
-We need to work on electrical responses to move the body smoothly and efficiently.
Vision
-We need to make the waifu see
Hearing
-We need to make the waifu hear. Various types of microphones.
Battery
-run the waifu for a long time without being bulky.
Sensors & Encoders
-Allow the waifu to have touch/heat/smoke/etc. senses
-Allow the waifu to 'instinctively' know her joints' angles, posture, etc.

Mechanical
Material
-we need to work on turning the materials we have with us now (aluminum, 3D print plastic, copper, resins) and find better materials that are stronger and cheaper.
Skin
-can be whatever people want, but in general we want cheap, light, and sturdy skin.
Skeleton
-needs to be lightweight, durable, and able to hold the wires for a long time.

Software
Body related software
-Movement software to stabilize her, let her walk and grab things
-Expression software for the face and body language so she can be sad or confident. Put in gestures or tone that she will default to.
Personality related software
-Voice software. We need to have her have a good voice, voices as human and natural as possible.
-Personality to let her have a spark of life
-Emotions to not have a sarcastic or dead-eyed look
Planning and Awareness software
-Vision to actually make sense of camera data
-understand what she sees.
-and help update plans with new information
-Sensor-fusion of various types to integrate body & environment information
>===
-various reorganization edits
-add Sensors & Encoders category
-add Hearing category
-various prose edits
Edited last time by Chobitsu on 04/07/2021 (Wed) 03:59:34.
I knew I had some old document with notes somewhere, from some years ago. Wanted to make a thread of its own one day, but it looks a bit like what you are doing here. I had a concept of requirement levels, to plan some path for different abilities, also allowing for picking options of course, since not everyone needs all of it. I made a list of what they should be able to do or traits they should have and then defined levels with numbering. Each level can have more than one trait, even in the same area; it's just something like a priority list. Even back then, the idea was that every developer could make their own based on it. I think I even thought about making a web page to help with that. I made a strange separation back then, though. Like there were more humanoid fembots and anime-like robowaifus, but the latter would be defined by having something distinctive from humans, e.g. LED eyes or rollerskate feet. I wouldn't make this distinction in the same way anymore, but posting the old list anyways.
>>9554
Requirement Levels

Humanoids, Human-like, Fembots:

- Body movement
RL00 - moving head, legs and arms, but allowing them to be moved
RL01 - walking on all fours
RL02 - dancing on one spot with guidance from a wall or pole
RL03 - more complex dancing while using a pole, with one foot on the ground
RL?? - standing up with help
RL?? - standing up alone
RL04 - using electric rollerskate boots, which communicate with her for balance
RL05 - advanced pole dancing with one leg on the ground
RL05 - walking on legs with help
RL06 - walking on legs on her own but using guidance e.g. from walls
RL07 - walking on legs without help or walls
RL08 - poledancing with no foot on the ground
RL09 - more and more complex dancing moves without pole
RL11 - climbing stairs
RL11 - walking in high heels
RL11 - dancing in high heels and samba
RL11 - jumping walk
RL20 - ballet and gymnastics

- Facial expressions and abilities
RL00 - nice looking smile
RL01 - general cute facing
RL02 - moveable jaw
RL03 - lips moving while talking
RL04 - talking in a realistic looking way
RL05 - (french) kissing, including non-toxic saliva
RL06 - blowjob and similar, self-cleaning afterwards
RG20 - like Cameron (TSCC, Summer Glau) or Buffybot (SMG)

- Endurance
RL00 - movement in bed for one hour, some time talking and min. movement
RL01 - ...
RL09 - 16 hours without walking much (like outside)
RL10 - 16 hours including walking or dancing for some time

- Skin and tissue
RL00 - less sticky than thermoplastic
RL00 - not looking glossy
RL01 - random (individual) skin pattern like spots, freckles, (pseudo-)veins
RL01 - no relevant quality loss within 10 years
RL02 - heating of skin and particular tissue by veins or skin layer
RL03 - sensing of touch
RL?? - sensing of pain
RL?? - sensing of pressure
RL?? - pressure marks if pressed
RL?? - sensing of needles going into tissue in every body-part, self-healing
RL?? - enhanced resistance e.g. if being spanked every day
RL?? - no relevant quality loss in x years under heavy usage e.g. sex, dancing, spanking
RL?? - visible muscle movement under the skin, similar to human, especially upper legs
RL??
- (partial) copies of the bot have to look like the original, Reproducibility

- Comfort
- internal self cleaning by drinking water and cleaning fluid
- internal storage of cleaning fluid and also lubricant for sexual usage
- release of internal fluids in bathtub, shower or on toilet
- showering or bathing on her own if demanded by her owner
- internal self cleaning without immediate need of visiting the bathroom

- Hands and tactile sensing
- realistic looks and movement
- tactile sensing
- different forms of massages

- Mind
- internal computer, controllers, external computer(s) at home or cloud
- personal mind with personality and mods with backup function
- hive mind to share non private data with other bots
- machine learning on external server at home, like dreaming
- free software as much as possible
- small programs with good APIs, Unix style

- Eyes and visual recognition
RL00 - moveable pupils
- nearfield face recognition
- looking at and following something
- good enough for reading
- recognising things more than some meters away
- maybe separated computer that processes input for security reasons
- advanced recognition system: face, voice, bodysize, other traits

- Interfaces
- wifi for external servers as part of the brain
- segregation of different systems for security reasons
- maybe plug for batteries in a backpack for outdoor activities
- external sensors in wearable objects like hairclips (via bluetooth?)

- Sexual usability
RG00 - 1 usable orifice
RG01 - 2 usable orifices
RG02 - 3 usable orifices
>>9555
- lots of different internal "muscles", massage rings, etc
- automatic release of lubricant
RG10 - being on top while having sex, will need more strength and coordination

- Hearing and natural language processing
- understanding as much as possible by only using the internal computer(s)
- storing longer conversations, transfer to external computer for learning
- additional noise sensors might be useful, for faster reactions?
- maybe separated computer that processes input for security reasons

- Other essential traits for realism
- natural behaviour considering body movement
- automatic human-like positioning
- knowledge about natural and erotic posing
- non-toxic saliva, in self-cleaning mode replaced by cleaning fluid

- Additional traits for enhanced realism
- sense of taste
- simulated breathing
- crying including (salty) tears
- talking by using air to form the voice
- advanced usage of hands from gaming to piano
- simulated eating and going to toilet on their own
- specific sweating incl. optional saltiness
- female pheromones
- weight and balance management by internal fluid
- dampening of internal sounds by using noise cancellation or other methods

- General traits or abilities
- recycling of expensive parts of the body, when upgraded
- easily removable batteries, controllers, and main computer

- Less important, unimportant or special traits and abilities
- sense of smell, maybe for security reasons or chores
- heat storage, maybe from sun, more likely while recharging, for later use
- superior abilities which are easy to achieve, e.g. superhearing
- extremely long lasting batteries, maybe based on chemical liquid
- taking part in (outdoor) sports or playing with children
- watersports, swimming, surfing, maybe even diving
- alternative energy usage e.g. solar, food, salt
- rechargeable by induction (without cable)
- giving milk to a real baby while simulating a heartbeat
- producing female milk and/or pheromones by inhabiting GMOs
- makeup-like changes in color of some body parts
- ability to put on makeup on her own
- solar protected skin for outdoor activities
- friendly microbes, GMO, maybe with additional traits
- self-defence and runaway reflexes, lockdown of orifices and mind
- emergency call, reanimation procedure, opening of the door

- Unrealistic or most likely pointless requirements
- real human skin and other cyborg parts

Robowaifus, Animenoids, Anime-like:

- Body movement
- like the Human-likes but
- maybe having embedded rollerskating wheels

- Facial Expressions
- like the Human-likes but
- screen-like eyes for showing emotions
- screen-like mouth for showing emotions
- cheek-LEDs under the skin for showing emotions
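If someone ever builds that web page, a rough sketch of keeping the levels as data rather than prose could look like the following. The layout is just one possibility and only the first entries are transcribed as examples:

requirements = {
    "Body movement": [
        {"level": "RL00", "trait": "moving head, legs and arms, but allowing them to be moved"},
        {"level": "RL01", "trait": "walking on all fours"},
        # ... remaining entries transcribed the same way
    ],
    "Facial expressions and abilities": [
        {"level": "RL00", "trait": "nice looking smile"},
        # ...
    ],
}

def traits_up_to(category, max_level):
    # lexical comparison works because the RLxx codes are fixed-width
    return [r["trait"] for r in requirements[category] if r["level"] <= max_level]

print(traits_up_to("Body movement", "RL01"))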
>>9554 >>9555 >>9556 Great stuff Anon. Thanks for taking the time to dig this up and post it here. I wonder if we can borrow some general categorization rigor here? Roget's Thesaurus, or perhaps the Dewey Decimal System? I'll spend time thinking how we can integrate all these lists into the RDD >>3001 .
>>9568 >borrow some general categorization rigor here? I put it here so others can integrate it in their system of sorting things out.
>>9586 I see. Well, thank you for that Anon it's a most helpful listing. I've been toying around for a while now about how we might go about rigorously categorizing the literally hundreds of different topics that /robowaifu/ overall at least touches upon. For example, our Library thread is a bit of a mess ATM IMO, and one I'd like to see cleaned up effectively by our 5th birthday here. My guess is that one of these classic works might help us all out in that sense. Thanks again.
We also need to break down auxiliary related items like plastic, circuit board, and metal production.
>>9894 Very good point Anon. I would also add that various manufacturing techniques should be broken down as well. For example, small-scale factory production runs inside a garage lab using kits, vs. semi-automated manufacturing for a small-scale business. Manufacturing itself can be an art & science.
How do you organize your PDFs and other books or papers? Not sure if this is the right thread, but it seems to be about organizing stuff. I just thought repeatedly about wanting a method of being able to extract the titles from my PDFs for machine learning and other RW related topics, which I download all the time. I tried pdfx, which didn't work because it only extracts metadata which might not have been put into the file by the author. Basically, I didn't even get the title from the file I tested it with. Then I looked into the arxiv.py library for Python3, which I got from pip3 install arxiv. It's badly documented in regards to its internal help, but it has a helpful github page: https://github.com/lukasschwab/arxiv.py This is just some kind of feedparser. So I put it into a function, using it:

import arxiv

objects = ['entry_id', 'updated', 'published', 'title', 'authors', 'summary', 'comment',
           'journal_ref', 'doi', 'primary_category', 'categories', 'links', '_raw']

def getarxiv(id, object):
    # accept a file path like ./2006.04768.pdf as well as a bare arXiv id
    if id.endswith('pdf'):
        id = '.'.join(id.split('/')[-1].split('.')[:-1])
    if id.find('.') == 4:  # crude sanity check that this looks like an arXiv id
        search = arxiv.Search(id_list=[id])
        paper = next(search.get())
        if object in objects:
            print(getattr(paper, object))
        else:
            print('Try: paper. + ', objects)

Which can be used like this:

getarxiv('2006.04768', "title")
> Linformer: Self-Attention with Linear Complexity

Also works with the file path instead of the id-based name. It can also get the summary and other stuff. I'm not exactly sure how I'm going to use that, but one use case would be having a text or html file with all the summaries for the papers I have, maybe with tags based on which folder they are in or what keywords are in the title. We'll see. It's certainly going to help to find or sort the papers, or to put up a posting or even a website with the summaries and links to the download of each. The other program, pdfx, also extracts links to papers which are referenced in the input document. So if one wants to batch download them just in case, it could be easier that way.
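For the summaries-file idea, a rough sketch of how it might be batched -- assuming the PDFs are named by their arXiv id (e.g. 2006.04768.pdf) and that the installed arxiv version exposes Search.results() (other releases name the call differently, so check yours):

import os
import arxiv

def collect_summaries(folder, outfile='summaries.txt'):
    # gather everything in the folder that looks like an arXiv-id-named PDF
    ids = []
    for name in os.listdir(folder):
        if name.endswith('.pdf'):
            stem = '.'.join(name.split('.')[:-1])
            if stem.find('.') == 4:
                ids.append(stem)
    if not ids:
        return
    # one query for all ids, then dump title / link / abstract per paper
    search = arxiv.Search(id_list=ids)
    with open(outfile, 'w') as out:
        for paper in search.results():
            out.write(paper.title + '\n')
            out.write(paper.entry_id + '\n')
            out.write(paper.summary.replace('\n', ' ') + '\n\n')

collect_summaries('/home/anon/papers/ml')   # hypothetical folder path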
>>10317
Very good question Anon, and a nicely fleshed-out one as well. This is definitely a good topic for this thread, I'd say. But you might look over this one too for ideas and examples, even though it's not precisely the same alignment: (>>2300). There's also another thread that's a bit more aligned, but hasn't made it across from the migration, so not much content yet (>>269). Good luck Anon. Trying to 'catalog' the whole board itself has some similar difficulties, and our Library thread (>>7143) shows that it can be a rather messy process. I'm the OP of that one, so I can say that pretty unabashedly. :^) But you've touched on a very important topic for us all in your attempt, Anon. So Godspeed to your efforts, please figure it out! I know our catalog could use some help while we're at it. :^)
> The thing that did impress me was the organization behind it. I asked Joe about it. He sang to his microphone and we went on a galloping tour of their “Congressional Library.”
> Dad claims that library science is the foundation of all sciences just as math is the key-and that we will survive or founder, depending on how well the librarians do their jobs. Librarians didn’t look glamorous to me but maybe Dad had hit on a not very obvious truth.
> This “library” had hundreds, maybe thousands, of Vegans viewing pictures and listening to sound tracks, each with a silvery sphere in front of him. Joe said they were “telling the memory.” This was equivalent to typing a card for a library’s catalog, except that the result was more like a memory path in brain cells-nine-tenths of that building was an electronic brain.
https://metallicman.com/laoban4site/have-spacesuit-will-travel-full-text-by-robert-heinlein/
The key line here is
>and that we will survive or founder, depending on how well the librarians do their jobs.
This point has stuck with me ever since reading this wonderful book, and in many ways really inspired me to tackle our library thread, even though I didn't feel up to it. We are all attempting something here that has never been done before in history. The complexity involved I consider pretty staggering, personally. I wouldn't have it any other way heh -- otherwise, why even bother? :^) I feel like our ability to organize our learning-curve achievements will be vital to our success in the end.
>>10317 I used to just dump abstracts and keywords into a text file but recently I started using Zim, which is a wiki-like graphical text editor. You can insert images, graph diagrams, code and equations and also link to PDF files and code. It makes it super easy to organize ideas and research and you can search through everything as well. Pages can also have sub-pages for further organization. https://zim-wiki.org/ The source code of pages is simple and you could easily generate pages for Arxiv PDFs you already have saved with the API, then fill them in with relevant links and notes later. If you want to insert code and equations, enable the Source View and Equation plugins. There are also Task and Journal plugins which are good for keeping notes on experiments and directing your progress.
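A minimal sketch of generating one Zim page per paper, in case anyone wants to script it. The Content-Type/Wiki-Format header and the ====== heading ====== syntax are what Zim pages normally start with, but compare against a page Zim itself created in your notebook before trusting them; the notebook path and the Search.results() call are assumptions too:

import os
import arxiv

NOTEBOOK = '/home/anon/Notebooks/Research/Papers'   # hypothetical notebook folder

def make_zim_page(paper):
    # Zim stores each page as a plain-text file; spaces in page names become underscores
    fname = paper.title.replace(' ', '_').replace('/', '-') + '.txt'
    body = ('Content-Type: text/x-zim-wiki\n'
            'Wiki-Format: zim 0.4\n\n'
            '====== ' + paper.title + ' ======\n\n'
            + paper.entry_id + '\n\n'
            + paper.summary + '\n')
    with open(os.path.join(NOTEBOOK, fname), 'w') as f:
        f.write(body)

for paper in arxiv.Search(id_list=['2006.04768']).results():
    make_zim_page(paper)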
>>10331
>zim
I already know that program. Thanks for the reminder. Now I'll look into it again, since it's still around. (Wasn't using it because I switched computers and my old disc is encrypted and I don't remember the exact PW. That's why I forgot about the program. One thing I want in the future is a script or the OS making a textfile with all programs installed, so one can easily recreate the same OS.)
>>10335
>One thing I want in the future is a script or the OS making a textfile with all programs installed, so one can easily recreate the same OS.
My apologies that I can't remember it Anon, but a few years back when I was still on Linux Mint, there was an explicit tool that would run through your program setups and system config, and then record that out to a re-installation script. The explicit intent was to quickly and simply allow an Anon to nuke (or lose) a box, but be able to reinstall everything fresh from scratch with practically no muss or fuss. Again, sorry I don't remember its name, but it was available in the Linux Mint repos. (Therefore, possibly in the upstream Ubuntu / Debian ones.)
>>10331 Wow that sounds amazing Anon, thanks.
Open file (68.08 KB 1182x763 wall of information.PNG)
N00b with 0 practical experience with AI here, with a bit of an idea. I was gonna put this in the AI design thread, but seeing as it's more a structural question than a nitty-gritty AI question, thought it'd do here. Say you have a chatbot-style AI developed. It can take in external information in text, and return information back to the user in text. Before the output text reaches the user, it's run through a script that checks for commands, and when it detects one, triggers an action that the robowaifu body carries out. These actions aren't manually completed by the AI, and instead are pre-scripted or carried out by a dedicated movement AI. Is it possible to train the chatbot AI to consistently understand how to send out commands accurately? How do you incorporate that sort of thing into training data? And, in another way, is it possible to take a robowaifu's senses and pipe them into a chatbot's interface via text in the same manner? Pic related is a better way of explaining it. Is this model feasible, or would an in/out system like this hamper training speed to the point of no longer being viable? I know that there are obviously more steps in the chain to this (for one, an always-open microphone will confuse the AI into thinking you're always talking to it, so there has to be an "are you talking to me?" filter in the path), but given this rough draft, is such a model possible with the technology that the average anon has (barring RW@home that other anons have suggested)?
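To make the command-filter step concrete, here's a minimal sketch of what I mean. The [wave] / [fetch:cup] tag syntax, the command names, and the handler are all made-up placeholders; only the scan-strip-dispatch structure is the point:

import re

# a command is anything the chatbot emits as [name] or [name:argument]
COMMAND_RE = re.compile(r'\[([a-z_]+)(?::([^\]]+))?\]')

def handle_action(command, argument=None):
    # placeholder: a real build would queue this for the movement subsystem
    print('ACTION ->', command, argument or '')

def filter_output(raw_reply):
    # dispatch every embedded command, then strip the tags so the user never sees them
    for command, argument in COMMAND_RE.findall(raw_reply):
        handle_action(command, argument or None)
    return COMMAND_RE.sub('', raw_reply).strip()

print(filter_output("Sure thing! [wave] I'll grab it for you. [fetch:cup]"))
# prints two ACTION lines, then the reply text with the tags removed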
>>10357 I'm not knowledgeable enough ATP to answer your AI-specific questions, but the >And, in another way, is it possible to take a robowaifu's senses and pipe them into a chatbot's interface via text in the same manner? question I can pretty confidently answer with a 'yes', since it really involves little more than sending properly-written signaling data to the display. >diagram I really like your diagram Anon, one minor suggestion: I'd say you could combine the two blocks containing 'Typo Correction' into a single 'Typo Correction/Error Checking' block, that sits before the 'Text Analyzer' block. >Is this model feasible, or would an in/out system like this hamper training speed to a no longer viable amount? Yes, I think that's likely to be a reasonable approximation at this point lad. It will take many, many more additions (and revisions) to flesh it out fully in the end. But you're certainly on the right track I'd say. >is such a model possible with the technology that the average anon has Since a general definition of 'average anon' is pretty much an impossibility, I'd suggest a rough, reasonably adequate, target user definition as being: An Anon who has one or two SBCs and some rechargeable batteries, dedicated specifically to his robowaifu's exclusive use. If it takes anything more than this hardware-wise to work out the AI/chat part of a robowaifu's systems, then that would basically exclude the (much-higher numbers of) impoverished/low-income men around the world (>>10315, >>10319). I'd suggest that it be a fundamental goal here on /robowaifu/ to attempt the AI/Chat system be targeted specifically for the Raspberry Pi SBC. Not only would that be a good end-product goal to target, but it also has advantages for us as designers and developers as well. (>>4969) >Once we're finished each of you will have your own little development exploration box you can literally carry around in your pocket. It will be self-contained, independent, and won't interfere with your other computing/vidya platforms. It will offer you a convenient way to begin controlling embedded hardware directly on the same machine that you write software for it on.
>>10357
>consistently understand how to send out commands accurately?
If you have the command and its parameters stored in some text, then you should be able to send it to e.g. the servo controllers. However, if it's about moving around it would certainly have different parameters depending on the situation. Even more so: if it's a high-level command which has many sub-commands and requires recognizing objects and planning motion, it's way more difficult. What happens in your text analyzer, and from there to the action, will be very complex. You can have a command like lift-right-arm, but then, how much? Which angle for each joint? What if something is in the way? We have a thread for chatbots >>22 which became more and more one for general AI, also one for AI concepts, and one for GPT-2/3.
>take a robowaifu's senses and pipe them
This would be some kind of context.
>into a chatbot's interface via text
What does chatbot-style-AI mean? Some already existing system? You can do kind of everything with code. If the other parts of the system know what it means, then they can use it.
>hamper training speed
Your system doesn't look like some ML model. I also don't see how we could build any AI as one model. It needs to be various pieces of software communicating with each other. Also, forget about the distinction between basic and complex actions. Your basic actions aren't basic.
>>10400 Not him Anon, but this is an insightful post. Many anons neglect the complexity & judgement involved in even a 'basic' movement for a robowaifu (or for us). Picking up a dish and putting it into a dish sink, for instance, is actually quite a complex, interconnected set of tasks that all have to be planned out and sequenced in proper order, and then carried out with precision and finesse. Our own visual/reasoning/neuro/musculo/skeletal systems have been designed to do these kinds of movements, and from our births have been fine-tuned and perfected over years of time. But now we ourselves -- us designers & engineers -- will have to figure out exactly how to work each of these steps out in detail ahead of time.
>us designers & engineers -- will have to figure out exactly how to work each of these steps out in detail ahead of time. Oh, I hope not. We need to get to a point where a robowaifu can learn doing things on her own. I only want to get close with programming, not creating all of it. There's something that is called pose estimation, we will hopefully be able to use something like this to make her learn from videos. https://youtu.be/F84jaIR5Uxc
>>10422 Well, I suppose it could be narrowed down from we to someone, Anon. But the simple fact is machines have no 'instincts'. As to 'learn doing things on her own', then again, someone will have to devise that ability. For now, animatronics-like approaches (specifying every little detail) are our surest approach to functionality. This will get progressively easier for us all, as we have lots of 'baseline' robowaifus out there of this type, and lots of smart men begin thinking hard about it all. While I'm no expert in the field, I seriously doubt that any actual, working (non solely-academic) AI/ML engineers out there would claim there's anything even remotely like an AGI (or w/e they're calling it these days) in existence. And without careful, dedicated & meticulous attention to details, nothing happens in this world filled with entropy. It will be up to us Anons or other men to solve this systems problem -- it certainly won't solve itself! :^) This is the task ahead of us, plain and simple.
>pose estimation
Yep, that's a good feature to pursue for us all. It's a great way to simplify the complexity of the kinematics, situational awareness, and motion planning problems. There are likely some other benefits as well.
>>10425 What I meant was that we won't need to design every move to every detail, only some estimation. The basic idea is that the system would learn by trying and observing the result. For example grabbing something: closing fingers would be programmed; how many and how much (for each) are parameters. These parameters can be changed, and therefore have some effect. Maybe we would first try it in a simulation, then with the real arm and hand. Object detection exists, so she could try until she lifts it off and holds it. The object getting closer means confirmation. Or sensors in the hand confirm some object still being there after lifting the arm. Ideally we'll have several ways to confirm some change. If we do it like that, no observation by the owner would be necessary while the system learns. I would also prefer to grab data from me doing things with some glove which measures hand movements and such, instead of programming it. However, the basic movements shouldn't be such a problem to write down anyways. After that it might work with pose estimation and the way I described above. Optimizing something like grabbing could then be defined by having more contact with the object at every time, but then not squeezing it, and holding it in a correct way (e.g. plates with food). Or not too much contact, but sufficient, maybe for reasons of hygiene. How to handle each object would be determined by the object detection and the knowledge (probably via some graph database) about the object.
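A bare-bones sketch of that try-and-check idea, just to show the structure. The actuator and sensor calls are stubs standing in for whatever servo controller and detection code a real build would use, and the angle ranges are guesses:

import random

def set_finger_angles(angles):
    pass    # stub: command the hand servos

def lift_arm():
    pass    # stub: small test lift

def object_still_held():
    # stub: in reality touch sensors and/or object detection would answer this
    return random.random() > 0.5

def learn_grasp(trials=20):
    best = None
    for _ in range(trials):
        # sample closure amounts for four fingers plus thumb (degrees, guessed range)
        angles = [random.uniform(20, 80) for _ in range(5)]
        set_finger_angles(angles)
        lift_arm()
        if object_still_held():
            best = angles    # keep any parameter set that was confirmed to work
            break
    return best

print(learn_grasp())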
First post on the board. I hope this is relevant for this thread but I had what I think is a very very important idea while reading this board about how things must be as abstracted as possible in the AI herself. For example, the "personality" component should not have to dictate the exact electrical parameters to every motor, but she should just have to will her arm to move as we do. Perhaps this would involve something like a "world model" like I found on this board with things like this https://worldmodels.github.io/ . This idea that the mind would only deliver abstract ideas to another "component", perhaps even a more basic machine learning model, kind of suggests the idea of multiple computers within the same AI system. Again I hope this is relevant enough as I couldn't think of anywhere else to put it, haven't seen anyone mention the concept except people talking about world models, and wanted to share the idea because it could make breaking down of the problem easier. I'm eager to hear your opinions.
Also, I'm considering making some diagrams. I already started one about all existing robowaifu projects. This is currently kind of stuck, because I would need to collect more data on each one. I will care about that one later. One new idea is to make a diagram for how to get started with robowaifu development. I post the first ideas here; maybe someone has suggestions how to improve it. I want to model it in text to some extent before I start putting it onto a PNG diagram. Of course, the diagram can't go too deep into each topic; this would be something for other diagrams.

| Python / basic math -- statistics / linear algebra -- ML -- DL -- NLP -- NLTK -- ML
| graph databases -- RDF primer -- programming
| electronic basics -- Arduino + embedded programming -- motors / servos / sensors / energy systems
| programming -- CPP/Python/Lisp -- <List of concepts>
| 3D design -- CAD / sculpting -- Blender / Solvespace -- 3D printing
| molding -- clay modelling -- silicone / plastic resins
| plastics -- 3D printing / thermoplastic modeling / resin molding
| conversational AI -- AIML (limitations) -- programming -- ML / NLP / graph databases / text generators (GPT) -- speech recognition / speech synthesis
| face design - 3D design / molding / generative networks
| motion -- electronics / programming + simulation -- actuators -- object detection / object categorization / situational awareness / navigation / sensors -- walking
| advanced AI -- psychology / philosophy / cognition
| skeleton -- plastics / 3D modelling
| vision -- electronics -- object detection
| skin -- silicone / textiles / sensors
>>10357 (me)
>>10400
>What does chatbot-style-AI mean? Some already existing system?
Basically, Cleverbot, Evie, Replika, anything that has a user input text, then responds with an AI-derived response to mimic a back-and-forth conversation. I had the thought of a model like this to allow hot-swappable AIs, just so if a newer, better-coded AI comes to light, as long as it has the same basic text-in, text-out system, it can be swapped in and trained to utilize the rest of its body.
>>10427
>This idea that the mind would only deliver abstract ideas to another "component", perhaps even a more basic machine learning model
This is what I was trying to get at. Instead of forcing the chatbot AI (which is designed first and foremost to speak like a human, not move like a human) to learn the nitty gritty of each action down at the metal (move ABC servo XYZ degrees, move DEF servo XYZ degrees, etc.), it calls out an abstract command that other code can pick up on and carry out in place of the AI directly. The chatbot isn't moving, the action handler is, and all the chatbot has to do is invoke a command, and the action handler can then carry it out. Granted, this leaves a whole lot open to interpretation by the action handler, but there can be other information that text analysis can give that can influence how the action handler carries out its actions aside from just the command invocation (like those ML scripts that can predict the emotion behind words as a set of confidence values, which can be plugged in and used to further give emotion to movement by understanding the AI's mood). Full, direct control of the waifubody by the AI would be cool, but the level of computing power, effort in training data, and effort in building a virtual training environment needed to train the AI both to SOUND human and ACT human seems improbable for a proof-of-concept.
>>10400
>if it's a high-level command which has many sub-commands and requires recognizing objects and planning motion, it's way more difficult. What happens in your text analyzer, and from there to the action, will be very complex.
This feels like a more achievable goal than native control, at least to me. For lack of a better way of explaining it, Boston Dynamics' Spot can scan its environment, create a model, and determine the best way to move around without falling over or bumping into things, and the end user can code movement in without having to manually tell each servo how to move and where to step -- it's all abstracted away, and, without code, is simple enough to use with a gamepad-style controller. Granted, this is a bit of an unfair comparison since Spot is an engineering masterpiece with over 30yrs of development (and is indeed very complex), but considering Spot-like bots exist, and Replika-like AI exist, but robowaifus don't yet, I think this model is a good way of cross-breeding these two technologies together if direct control isn't viable. At least to me, coding movement control in this way seems way easier than trying to wrap my head around AI and ML trying to learn to walk in a virtual environment and then trying to translate virtual movement to IRL movement.
FYI, the "Also" in >>10428 doesn't refer to >>10427. It's just a new sub-topic.
>>10427
Welcome on board. I think it kind of fits in here; it certainly doesn't harm. Though we also have the meta thread for the general topic of robowaifus, and the chatbot thread >>22 became more about cognition in general, while this here is for making diagrams, plans and such, at least in my understanding. In reality this is all kind of fuzzy and mixed together, so it's not such a big deal anyways. We started talking general development here recently, so we can figure out how to make plans for it. However, please try to avoid unnecessary empty lines in your posts. This is sometimes called 'Reddit spacing'. You're right about having abstractions. "Cognition" should call some specific movement as some high-level function. We need to have all kinds of models interacting with each other to figure out what and how to do things. I'll take a look at your suggestion soon.
>>10429
>chatbot ... learn the nitty gritty of each action down at the metal
Nooo, of course not. But thinking, imagination and acting are interconnected. Think about the inner voice. So something very close to the conversational AI (/chatbot) needs to be able to call actions on a high level (simple commands) with some parameters like context. The part controlling the movements needs to look for other contexts like risks and obstructions, or rather have such information ahead of receiving any movement command. The system should always know which movements could be done before they even happen, like defining safe zones every 500 milliseconds or so.
>Full, direct control of the waifubody by the AI would be cool, but..
The whole system is the robowaifu AI, not the chatbot (/conversational AI). I'm quite sure humans don't plan movements down to every detail with their speech center. The parts here should be seen like specialized parts of the human brain.
>coding movement control in this way seems way easier than trying to wrap my head around AI and ML trying to learn to walk in a virtual environment and then trying to translate virtual movement to IRL movement
I don't know myself how we'll figure it out eventually, yet. But I think you're making the wrong distinctions. One is training in reality vs. in simulations: eventually we want both, the latter for her to learn in her dreams, based on experiences in real life or things she saw on TV. The other is coding vs. ML: I think the first step is coding some basic movements, then using ML in reality or simulation to train all the little deviations and considerations of sensory data; then we have a model that takes commands from another part of the system and executes them, with regard to the situation and sensory data. Then she should analyze her experiences in simulations while she has nothing else to do, or simply on an external server at home. Maybe that's too simple, and it will require even more parts, but that's the basic idea so far.
Open file (245.47 KB 1915x1443 getting_started.png)
>>10428
So, I was actually working on this. Here is the result, which is probably not the last version. I didn't post it in the prototype thread since we have this thread here on the topic of organization. Maybe the arrows in my diagram should go the other way, idk. I made it starting from the point of a beginner, who then finds paths to move along. It could be better to think of an endpoint and build it that way. I'm not sure. For now, I just publish it, before it's one more project which I don't finish and publish because something could be improved. Also, I viewed this on my computer as an SVG file with a black background, but since I can't upload that format here, I upload it as a PNG file. Newer versions of PlantUML can use external resources to change the look; maybe mine can do this as well, but I don't know how yet. I want to use this program for getting a better overview over the whole topic we are covering, of course not by putting it all in one diagram. It's quite complex already, just by covering the surface.
PlantUML code
@startuml
(*) --> "Python"
(*) --> "basic math"
"Python" --> "basic algebra"
"basic math" --> "statistics"
"basic math" --> "basic algebra"
"Python" ...> "concept: natural language processing"
"concept: natural language processing" -right-> "NLTK"
"NLTK" -right-> "machine learning"
"linear algebra" --> "machine learning"
"statistics" --> "machine learning"
"machine learning" --> "deep learning"
"machine learning" ..> "concept: graph databases"
"deep learning" ..> "concept: graph databases"
(*) --> "basic algebra"
"basic algebra" --> "statistics"
"basic algebra" --> "linear algebra"
(*) ..> "concept: graph databases"
"concept: graph databases" --> "RDF primer"
"RDF primer" ...> "programming"
"programming" --> "SparQL/Neo4J"
"programming" ...> "concept: ontologies"
"concept: ontologies" ...> "concept: knowledge graphs"
"concept: ontologies" --> "SparQL/Neo4J"
"SparQL/Neo4J" ...> "concept: ontologies"
"concept: natural language processing" ...> "concept: graph databases"
"concept: knowledge graphs" ...> "concept: natural language processing"
(*) --> "electronics basics"
"electronics basics" --> "Arduino (/embedded programming)"
"Arduino (/embedded programming)" --> "actuators"
sensors --> "Arduino (/embedded programming)"
"energy systems" --> "Arduino (/embedded programming)"
"electronics basics" --> "sensors"
"electronics basics" --> "energy systems"
"energy systems" ...> walking
"programming" --> "CPP/Python/Lisp/Swift"
"CPP/Python/Lisp/Swift" --> "<List of concepts>"
(*) ..> "3D design"
"3D design" --> CAD
"3D design" --> sculpting
"3D design" --> "clay modelling"
"3D design" --> "3D model extraction"
sculpting --> Blender
sculpting --> Fusion3D
CAD --> "Solvespace"
CAD --> "Blender"
CAD --> "Fusion3D"
"Blender" ...> "3D printing"
"Solvespace" ...> "3D printing"
"Fusion3D" ...> "3D printing"
"3D printing" ...> molding
"clay modelling" ...> molding
molding --> silicone
molding --> "plastic resins"
"clay modelling" --> silicone
"clay modelling" --> "plastic resins"
(*) ..> plastics
plastics --> "3D printing"
plastics --> "thermoplastic modeling"
plastics --> "plastic resins"
(*) ..> "conversational AI"
"conversational AI" ...> "response generation"
"response generation" --> "AIML (scripted responses)"
"conversational AI" ...> "concept: natural language processing"
"conversational AI" ...> "text generators"
"conversational AI" ...> "speech recognition"
"conversational AI" ...> "speech synthesis"
"text generators" --> "AIML (scripted responses)"
"machine learning" --> "text generators"
"text generators" --> "concept: natural language processing"
programming --> "AIML (scripted responses)"
"AIML (scripted responses)" ...> "concept: graph databases"
(*) ..> "face design"
"face design" --> "3D design"
"face design" --> "generative networks"
"deep learning" --> "generative networks"
(*) ..> motion
motion ...> "electronics basics"
motion ...> "programming"
simulation --> motion
programming ...> simulation
motion --> "actuators"
"actuators" --> "dc motors"
"actuators" --> "pneumatics/hydraulics"
"actuators" --> "dielectric elastomers"
"dc motors" ..> walking
"pneumatics/hydraulics" ..> walking
"dielectric elastomers" ..> walking
motion ...> walking
(*) ..> "computer vision"
"computer vision" --> "object detection"
"computer vision" ...> "electronics basics"
"object detection" --> "object categorization"
"machine learning" ...> "object detection"
"concept: ontologies" ...> "object categorization"
"computer vision" ...> "situational awareness"
"situational awareness" --> navigation
"computer vision" ...> navigation
simulation ...> navigation
navigation ...> walking
(*) ..> skin
skin --> silicone
skin --> textiles
silicone --> textiles
silicone --> sensors
textiles --> sensors
(*) ..> skeleton
skeleton --> plastics
skeleton --> "3D design"
skeleton --> metals
"concept: knowledge graphs" ...> "advanced AI"
simulation ...> "advanced AI"
"advanced AI" --> "psychology/philosophy/cognition"
@enduml
>>10670
>It's quite complex already, just by covering the surface.
That it is, and no fault of yours Anon. That's rather a good first attempt at assembling a mindmap of sorts for robowaifu technicians Anon, and I'm glad you didn't just shelve it b/c it's not """perfect""" yet. On that topic of perfectionism, it's been the downfall of many would-be robowaifuists who have >tableflip.exe'd the entire deal simply b/c some little thing or other didn't work out just the way they envisioned.
As your initial diagram amply brings to light, there are tonnes of subjects at hand when devising robowaifus. A mature outlook would suggest that if one thing isn't working out just right ATM, then just switch gears and move onto another track for now. My own experience tells me that very often, while you're working on some different topic or other, a flash of insight occurs relating back to an earlier roadblock. Many's the time I've jumped back into something from earlier after taking days (or even weeks!) to 'chew things over' in my mind. More often than not, I actually solve the issue successfully and can tick it off the list. I've even solved problems entirely in a dream, jumped up to work on them when I woke up, and it worked!
'Forward momentum' (in the euphemistic sense) is a very, very important thing to maintain when you're tackling something expansive like this. "Every little helps" as King Aragorn would say, and before long you'll look back and see how far you've come if you just keep plodding away at it.
>tl;dr
The main point is simply to keep moving forward.
I also want to look into Mermaid: https://mermaid-js.github.io/mermaid/#/
This one translates something close to markdown into graphs of different kinds. It works within websites (Node.js) or on the command line. Here's the command line version; the first line downloads 50+ MB of modules and builds it, the second one shows the help: https://github.com/mermaid-js/mermaid-cli (I used yarnpkg for that, which I had to install first.)
yarn add @mermaid-js/mermaid-cli
./node_modules/.bin/mmdc -h
At first I got stuck with the installation and wanted to write that I'm done for today. It needs some other stuff which takes time, and it seems to need a program named Puppeteer, which needs to install a whole instance of Chrom(ium?)?!? Whatever. Nodejs has a bad reputation. I wonder why. But then it finished after a fresh start. Not gonna test it today, though.
Open file (584.89 KB 1900x2236 StateOfRobowaifu.png)
>>10716
The command was actually yarnpkg, not yarn, btw. But that might depend on the distro or OS.
Also, since I couldn't sleep anyways, I could at least upload these two. One is the file from >>10670 with some minor modifications and new colors, which I added in Dia. The other is a first draft of an overview of all open-source robowaifus, with more details on their current skills. It's meant to be posted on imageboards or social media. This one is completely made in Dia, so no code. I might try out Mermaid first, before working on that in Dia (if ever).
Open file (7.89 KB 287x329 simple-er.png)
Open file (26.31 KB 686x413 class.png)
Open file (6.07 KB 158x247 flow.png)
>>10716
I wonder what I should use to model certain things:
- notes on how the human mind works
- options to use in building some part, e.g. different options of cameras for building a vision system, but also putting options into it like 'eyes with cameras' or 'cameras elsewhere', or '3D cameras' vs 'two webcams'
I could choose a flow, entity relationship, or class diagram. There are also others for other use cases: git, gantt, sequence and user journey diagrams.
>>10776 Some form of UML is obviously going to be a common choice Anon, but IMHO, ORM is much superior in both expressiveness, and in not locking you into a fixed-count relationship schema. (>>2303, >>2307, ...) BTW, nice chart about robowaifu progress Anon. I hope you flesh it out more fully at some point. >>10719
>>10787
I think class diagrams might work best. I would look into ORM if it were supported by Mermaid. >>10719 might get fleshed out more at some point, but I would need to keep track of all the developments. If it were code, we could collaborate on it, but the code-based approaches don't seem to support pictures, and Dia doesn't support code import.
Open file (343.07 KB 426x590 Kosaka-san.png)
Open file (87.10 KB 680x1024 Medio Kosaka-san.jpg)
>>10719
Nice write-up you got there on Sophie anon :D
But don't forget the queen of the robowaifus: Kokona Kosaka! She's made by the Japanese company Speecys Ltd., and runs on a Linux system. Apparently something called 'MOFI-OS ver3.0'. But I'm not sure if that's the name of the actual OS or just referring to a MoFi network (WiFi and Ethernet LAN with mobile phone and USB connectivity). Kosaka-san is 155cm tall and can sing. She can also pose or dance by using .VPD (Vocaloid Pose Data) files from Miku Miku Dance.
Basically, the Japanese have already done it. A perfect hard-shell robowaifu. (No offence to Sophie, but I am just one dev.) Kosaka-san just needs better A.I. now!
>>10796 >A perfect hard-shell robowaifu Not quite perfect Anon. Not to denigrate Kokona Kosaka or her masters in the slightest, but she is still affixed to her base unit, and is basically heavy af. It's highly impressive as an achievement so far, and gives us all here something to strive for. But our ultimate goal should be reasonably inexpensive, power-efficient, mobile, autonomous gynoid companions. Just like in my Chinese Cartoon Documentaries. :^) All that aside, she is a marvelous robowaifu for sure.
>>10796 Such a kawaii outfit and pose!
>>10798 >Not quite perfect Anon. LOL true, sorry. I just go all Lord Katsumoto from 'The Last Samurai' when I see Kosaka-san singing and dancing.
>>10801 Kek, fair enough. It was a great moment!
>>10796 Thanks, and yes I'm going to put her into the next version.
>related crosspost >>1997
>>10716 Crosslink to bringing up Mermaid and UML again (in the Meta thread): >>18716
Open file (47.29 KB 495x404 ClassDiagram_Mermaid.png)
Open file (13.22 KB 377x205 mermaid_test.png)
>>18740
I realized the problem with getting started using Mermaid is just doing it. We can simply start making these connections and writing them down all the time. I don't have it installed yet, but their online editor works fine for now. If something needs to be changed, it can be done later. Maybe I'll make an account on GitLab for it as well. At some point I want to go through threads and organize some of the knowledge we've gathered, since it's easy to forget things.
Most important question for the start: What kind of class is an option? Like all the options for a neck design or all the ways to make an eye? Should "link" be the standard way to go for an option? We might be able to change things later using some code, but only if we want to switch all of one type to another. Otherwise it would get messy and labor-intensive.
classDiagram
Eyes -- FollowMeEyes
Eyes -- ResinEyes
class FollowMeEyes{
  +reqMaterial(paper)
}
class ResinEyes{
  +reqMaterial(resin)
}
>>19039 I'm glad you're asking these questions in advance Anon. ATM, I'd suggest you proceed as best you see fit. We'll all adjust (or not) and any potential debate can best ensue once there's a baseline concrete example to explore from. >$.02
>>19048
After thinking about it, I guess every part could be described as an aggregation, and then many of those are also aggregations. Maybe at some high level we could assume it's a composition.
I'd like to organize all my collected data around robowaifus in a better way, ideally making it into a forkable collaborative effort. Though I might only really get started with this on a bigger scale at some point, when I have to travel from time to time. That said, when I encounter something now, I'd like to put it into some kind of database already.
>>19055
>I'd like to organize all my collected data around robowaifus in a better way, ideally making it into a forkable collaborative effort.
I certainly support this idea. It's why we make the full archives of /robowaifu/ readily available (and also why we favor development of new robowaifu-oriented communities). This is bigger than any one of us and needs to be both promoted & preserved against attacks.
>That said, when I encounter something now, I'd like to put it into some kind of database already.
Then would you please consider contributing to our own local, ongoing 'database' right here Anon? Namely, our pinned Library thread (>>7143). It's little more than an index, but it's already valuable to a few of us as a recall tool. If it were expanded out with much more comprehensive Waifusearches, then it would be all the more valuable. I simply don't have the time to manage that solely by myself, so its potential has languished tbh.
Open file (38.11 KB 419x559 mermaid_test.png)
>>19055
Played around a bit more. We'll see if I get used to thinking about it enough to make at least some notes that way, so I can work with them later. I would still have to learn the rules about how to make such diagrams correctly. But I hope you can see from my example that it would be beneficial.
classDiagram
Eyes --o FollowMeEyes
Eyes --o ResinEyes
ResinEyes --o Mechanism
Mechanism --o Servo
FollowMeEyes --o Paper
Eyes: +reqMaterial()
class FollowMeEyes{
  +reqMaterial(Paper)
  +optTool(uvLight)
  +vidYT(3BwAM_V2Jhg)
  +vidYT(QuFWAq5rssM)
  +feature(autofollow)
}
class ResinEyes{
  +reqTool(3dPrinter)
  +reqMaterial(resin)
  +vidYT(zQO7Dkjr22A)
  +feature(humanGaze)
}
class Paper{
  +optMethod(lamination)
}
>>19056
>consider contributing to our own local, ongoing 'database' right here Anon?
Yes, I should do this more often. We simply don't have enough active users here. As you can see in >>19057 it would be something complementary to the database. I like doing things in some high-level code also because it could later be used by some script or DL model.
>I simply don't have the time to manage that solely by myself, so its potential has languished tbh.
I found it useful, though I didn't install it on my new computer yet. Well, another thing I should be doing. I thought the program was kinda finished, aside from all the great stuff you can always build on top of something.
>>19058
>I thought the program was kinda finished, aside from all the great stuff you can always build on top of something.
Its Boolean algebra leaves much to be desired (ie, it only supports OR r/n), and it would be nice to somehow directly support posts to that thread. Sadly, I've never been able to figure out how to grab & display captcha images so POST'ing can be directly supported from Waifusearch. Pull-requests welcome! :^)
Regardless, sounds like you're on a highly-productive tack already Anon. Don't let me distract you. I'll try to support you any way you see fit that helps /robowaifu/ and its anons!
>===
-minor sp edit
Edited last time by Chobitsu on 01/26/2023 (Thu) 17:38:27.
>>19059 Actually, it's the 'grab' bit of a captcha image that's the problem; I simply have no idea how to do that using cURL alone. Displaying them in a local FLTK GUI would be trivial, as would be the solve input/send. Send halp plox >=== -add 'cURL' cmnt -add 'halp' funpost
Edited last time by Chobitsu on 01/26/2023 (Thu) 17:36:52.
>>19058
>so POST'ing can be directly supported
I know that you wanted to do that, and at some point with AI waifus this might be very useful. But it's not something I hugely missed. To me the search is fine. I don't remember if I missed AND or other booleans.
I think I already worked on a script a while ago to nearly automate the library. The idea I have, and I think I shared it already, is filtering out the new phrases from every posting by removing all the more common words and phrases. Then give the user doing this a list of the new phrases and terms; these could then be fed to Waifusearch on the same computer, and the results could be posted into the library.
>>19062
>Send halp plox
I wanted to suggest for similar reasons that you contact the creator of the LynxChan imageboard software. I wondered if you had already done that, so he would integrate Waifusearch into his software, or create an API to install it as a module. Imagine a field on top, to search the whole board with it.
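A rough sketch of that filtering idea in Python. The stopword list and thresholds here are arbitrary assumptions; a real version would probably reuse Waifusearch's own index of already-known terms instead.

from collections import Counter
import re

# Very small stand-in stopword list; a real run would use a proper one
# plus every term Waifusearch has already indexed.
COMMON_WORDS = {"the", "a", "an", "and", "or", "is", "are", "to", "of", "in",
                "it", "that", "this", "for", "on", "with", "be", "i", "you"}

def candidate_phrases(post_text, known_terms=COMMON_WORDS, min_length=4):
    """Return words from a post that look like new, index-worthy terms."""
    words = re.findall(r"[a-zA-Z][a-zA-Z0-9_-]+", post_text.lower())
    counts = Counter(w for w in words
                     if w not in known_terms and len(w) >= min_length)
    return [word for word, _ in counts.most_common()]

if __name__ == "__main__":
    post = "I tested a swerve drive base with brushless motors and an IMU for balancing."
    print(candidate_phrases(post))
    # A human (or later a script) could then feed these to Waifusearch
    # and post the results into the Library thread.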
>>19069
Good ideas Anon. I'll give some thought to your filtering concept when we're ready to update the tool.
>contact Lynx
The only real advantage that this tool has is that it's pretty blazing fast (typically < 100ns to search the entire board's text). While if I were a server dev I'd be extremely interested in that fact, I don't fancy he's a C++ or systems guy in general. An API is probably a good thing, but I'm just not interested in learning much about JS, since we have a mountain of stuff waiting to be worked through already for robowaifu's systems code. JS will obviously not be anywhere near this code, ofc! :^)
Again, pull-requests welcome. If someone wants to help him integrate an API that could be tied to a Waifusearch module for Lynxchan, it would be a big plus for anyone on any IB using his software.
>>19073
I didn't mean that you should rewrite it in JS, nor that you need to learn it (only The Good Parts, anyways). Waifusearch would run on the server, and anons could still download their own version. But any visitor could search the board(s) just using a search field and getting a result. Lynx might of course use JS to implement that connection into the board software, but who knows. It should work with the board calling Waifusearch on the server or as an external service. It would improve Lynxchan in general, so maybe he would add some API.
>Captcha
There might be some module for getting around captchas. Maybe not in CPP, but maybe yes. You will survive using some Python glue code. I would look around KurobaEX. Looking it up... Oh, it's more likely to be Kotlin: https://github.com/K1rakishou/Kuroba-Experimental
>>19190 Thanks for the clarification Anon. I agree that it would be a nice improvement to the IB experience in general (as we ourselves have proven on our own board), but I just don't see Stephen Lynx as being an innovator, and this is something that I've never seen any IB software do personally. Thanks for the tips about dealing with captcha. It's a sticky business in general, but we'll need to address it well if we're going to have our robowaifus be able to funpost alongside us. Obviously the pertinent HTTP calls are there to be broken down and reversed, I just CBA to tackle it myself ATM. Hopefully someone else who can program in libcurl will provide a solution for it soon. :^)
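For whoever picks this up: the general shape of the problem is just "GET the captcha image, show it, then include the solved text and its id in the POST". A very rough Python sketch of the fetch half follows; the URL, path, and cookie name below are placeholders I'm assuming, not LynxChan's documented API, so they'd have to be confirmed by inspecting the board's actual HTTP traffic first.

import requests

# Placeholder endpoint and field names - confirm against the real board's traffic first.
BOARD_URL = "https://example-board.invalid"
CAPTCHA_PATH = "/captcha.js"  # assumed path; whatever URL the board serves its captcha image from

def fetch_captcha(session):
    """Download the current captcha image and return (image_bytes, captcha_id)."""
    response = session.get(BOARD_URL + CAPTCHA_PATH)
    response.raise_for_status()
    # Many boards hand back an id via a cookie; the exact cookie name is an assumption here.
    captcha_id = session.cookies.get("captchaid", "")
    return response.content, captcha_id

if __name__ == "__main__":
    with requests.Session() as s:
        image, cid = fetch_captcha(s)
        with open("captcha.png", "wb") as f:
            f.write(image)  # display this to the user, then POST the solution along with cid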
Open file (44.87 KB 1301x398 robowaifu-15972.png)
>>15972
classDiagram
internal_frame --o rods
internal_frame --o 3DpParts
clothes --o size
shell --o clothes
shell --o plasticShell
plasticShell --o 3DpParts
shell --o siliconeRubber
siliconeRubber --o 3DpParts
constrain -- storage
constrain -- transport
storage -- optionFolding
storage -- optionTransform
storage -- optionSeparation
transport -- optionFolding
transport -- optionTransform
transport -- optionSeparation
optionFolding -- optionBox
optionTransform -- optionBox
optionSeparation -- optionBox
capabilities -- basicLocomotion
basicLocomotion --o swerveDrive
basicLocomotion --o balancingWheels
swerveDrive -- balancingWheels
>>19329 Nice, you're bringing things together!
Open file (104.55 KB 1088x596 Path2RW.png)
classDiagram
Robowaifu <|-- LoveDoll_Improvements
Robowaifu <|-- DesktopWaifu
Robowaifu <|-- AI_Waifu
Robowaifu <|-- RoboMaid
Robowaifu: +bool isRobot
Robowaifu: +bool hasAI
Robowaifu: +bool physicalBody
Robowaifu: +bool isSexEnabled
Robowaifu: +mobility(walking)
Robowaifu: +improvements()
Robowaifu: +additionalSkills()
class LoveDoll_Improvements{
  +jointed_hands()
  +jointRepairs()
  +DIY_framework(body)
  -bool isRobot
  +bool isSexEnabled
}
class DesktopWaifu{
  +physicalBody()
  +bool isRobot
  -bool isSexEnabled
  +AI(physical_interactions)
  +AI(conversation)
  +AI(recognition)
  +DIY_framework(body)
  +DIY_framework(personality)
}
class AI_Waifu{
  -bool isRobot
  -bool isSexEnabled
  +AI(conversation)
  +AI(recognition)
  +DIY_framework(avatar)
  +DIY_framework(personality)
}
class RoboMaid{
  +physicalBody()
  +bool isRobot
  -bool isSexEnabled
  +bool isSmall
  +AI(physical_interactions)
  +AI(simple_conversation)
  +AI(recognition)
  +DIY_framework(body)
  +DIY_framework(simple_personality)
  +DIY_framework(skills)
  +mobility(wheels)
}
>>19866
This is nice work Anon. I can see this eventually growing into something dramatically larger in the end. I'd recommend you think now about ways to do abstractions. For example, in Blender and other big software packages with a node system, it's commonplace that you can 'group' nodes & interconnection systems together, then iconically collapse that all down to a single idea. This is quite similar (at a conceptual level) to the way object-oriented software development also works.
Keep going Anon, you're doing great! Cheers.
>>19882
Thanks, but I think I will just go on doing this from time to time: making overviews of some area, and by doing so hopefully also encouraging others to do something similar. Over time it might grow into an interconnected system. I'm also not sure about how to use the system the best way; maybe I should learn more about class diagrams first, but I think it's already a useful tool for creating better access to certain areas of knowledge or viewpoints we have here.
Why has this >>16378 never been crosslinked here? https://github.com/SoaringMoon/WaifuNetwork
>WaifuNetwork is an Obsidian-powered searchable repository of information for the construction of gynoid robots. Using Obsidian, wikilinks, and tagging, a vast network of connected topics, tags and pages can be visualized as a nodal map of topics. Acting as a second brain for public use.
>WaifuNetwork is NOT an archive of an imageboard. It is an attempt to distill the information within multiple sources into a searchable format.
>>20069 Good point Anon, thanks.
All papers mentioned on /robowaifu/ are listed in a related wiki: >>22353
I once posted a little snippet to retrieve this and more data, e.g. comment or summary, based on the paper id: >>10317
Open file (55.95 KB 768x1024 Imagepipe_4.jpg)
Open file (54.02 KB 768x1024 Imagepipe_3.jpg)
Open file (93.35 KB 768x1024 Imagepipe_2.jpg)
I started making a diagram about building a human-like AI, to get me into thinking about that more often, keeping track of my ideas, and not forgetting about areas, details and how they connect. Also to show how much is still to do. My camera quality is bad, but I'll upload some pics anyways.
Open file (215.49 KB 909x1000 FvzJsBbaQAAaJKk.jpg)
We too are taking a "bite" from this problem, working on an AI system. It's an order of magnitude more complex than an LLM, but it incorporates LLMs. I don't want to give too much away this early, but know that we are working on something. Have faith, we are doing this for the entire /robowaifu/ movement and aren't going to commodify it to the point where it's inaccessible - the end product will be completely User Owned, that is the entire point. It's yours, your waifu. While it will be a paid product (with an option for a cloud service, but not required), the capital will be reinvested in robowaifus b/c I care about robowaifus, not status among normies - my end desire is space exploration, and I see robowaifus as a necessary step (as a merger of man and machine, and the freedom to embark from Earth without the troubles XX's add to the equation).
>>22493
You can't own software completely without having the source code. I hope you're wise enough to make it only a module with an API, but we'll see.
>>22494 details
>>22498
You want an explanation of how I meant it? I thought it was obvious. Or did you mean you'll post details later? Are you planning a full system that controls the robowaifu and converses with the world? As a subscription-based service? Then it would need to be on an external server, or be closed-source software.
>>22480 Nice Noidodev, thanks! Good luck with your concept designs Anon! :^)
>>22493 Good luck Meta Ronin, I wish for your team's good success! Cool pic BTW. :^)
AI and tools as your Second Brain, helping with organization, searching, overviews, understanding papers (Matt Wolfe): https://youtu.be/3u80Zuwf2Zo
StatQuest made a map about learning everything around machine learning: https://www.reddit.com/r/statistics/comments/wfwl9c/e_statquest_released_a_free_map_of_his_videos/
Mindmaps about relationships by ChatGPT: https://www.reddit.com/r/ChatGPT/comments/13cld5l/created_using_chatgpt/
>>9555
More like 3-5 sex positions. Waifu v1 should be a mechanical sex doll. The Wright brothers didn't make a jumbo jet, you know.
Open file (114.47 KB 800x800 Mimi.jpg)
>>22827
Different guys might have different priorities. Think about different areas, and then sort how hard it might be to get there. Then these are options. Depth or height is the (assumed) difficulty to develop and implement each option, but width is all the options in different areas (skills) to choose from.
I somewhat agree with you, an early gynoid would only need to be able to move into different positions. That said, we try not to call them sexbots, since this exaggerates that point too much. Even owners of such dolls often rather call them lovedolls. In the case of a gynoid, she would at least somewhat hear and speak some sentences, so there would be some emotional bonding involved for some guys. The term sexdoll also implies a sex toy which would be hidden away, while a synthetic girlfriend would of course sit on the couch and be shown to at least some visitors, and also be used for some AI conversations.
>Give me a list of all degrees of freedom in movement a human has, sorted by body parts.
ChatGPT:
Here is a list of the degrees of freedom in movement for various body parts of a human, sorted by body region:
Head and Neck:
1. Neck flexion and extension (forward and backward movement)
2. Neck lateral flexion (side-to-side movement)
3. Neck rotation (turning left and right)
Upper Body:
1. Shoulder flexion and extension (raising and lowering the arm)
2. Shoulder abduction and adduction (moving the arm away from or towards the body)
3. Shoulder internal and external rotation (rotating the arm inward or outward)
4. Elbow flexion and extension (bending and straightening the arm)
5. Forearm pronation and supination (rotating the forearm to face down or up)
6. Wrist flexion and extension (bending the wrist up or down)
7. Wrist radial and ulnar deviation (bending the wrist towards the thumb or little finger)
8. Hand and finger movements, including individual finger flexion and extension, abduction and adduction, and thumb opposition
Spine and Torso:
1. Spinal flexion and extension (forward and backward bending)
2. Spinal lateral flexion (side bending)
3. Spinal rotation (twisting)
4. Pelvic tilt (tilting the pelvis forward or backward)
Lower Body:
1. Hip flexion and extension (lifting the leg forward or backward)
2. Hip abduction and adduction (moving the leg away from or towards the midline)
3. Hip internal and external rotation (rotating the leg inward or outward)
4. Knee flexion and extension (bending and straightening the leg)
5. Ankle dorsiflexion and plantarflexion (pointing the foot up or down)
6. Ankle inversion and eversion (turning the foot inward or outward)
7. Toe movements, including flexion and extension, abduction and adduction
It's important to note that the number of degrees of freedom can vary depending on the joint and individual variation. Additionally, this list provides a general overview and may not encompass every subtle movement possible.
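If someone wants to use a list like this for planning, a simple way is to keep it as a data table and sum up how many actuated joints a given build actually needs. A minimal Python sketch follows; the numbers just mirror the list above (and the hand count is a very rough guess), and which joints you actually actuate is your own design choice.

# Rough degrees-of-freedom budget per body region, mirroring the list above.
HUMAN_DOF = {
    "neck": 3,            # flexion/extension, lateral flexion, rotation
    "shoulder": 3,        # per arm
    "elbow": 1,
    "forearm": 1,         # pronation/supination
    "wrist": 2,
    "hand_fingers": 15,   # very rough placeholder for a fully articulated hand
    "spine": 3,
    "pelvis": 1,
    "hip": 3,             # per leg
    "knee": 1,
    "ankle": 2,
    "toes": 1,
}

# Which regions a hypothetical first prototype actually actuates:
PROTOTYPE_PLAN = {"neck": 3, "shoulder": 2, "elbow": 1, "wrist": 1}

def total_dof(plan):
    """Sum the actuated degrees of freedom in a build plan."""
    return sum(plan.values())

if __name__ == "__main__":
    print("Human-like budget (one side only):", total_dof(HUMAN_DOF))
    print("Prototype actuator count:", total_dof(PROTOTYPE_PLAN))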
>>23494 I think you're on to something here Anon. Please keep going! :^)
>>23497
Well, I hoped for years that some fresh people would help me organize things better: make overviews and to-do lists, and go from there. Now I've found a tool to do the job.
>>23498
Here's mine:
august 2023: speech recognition and exoskeleton
december 2023: mouth and head
february 2024: hands
may 2024: arms
august 2024: body movement and balancing
december 2024: skin
2025 and optional: legs and walking
It doesn't need to be super elaborate really; what ChatGPT suggested includes ankle and toe movements.
>>23529 Nice list Anon. I like that you've attached time goals to it all. Work hard and we all look forward to seeing your progress over the next couple years. Good luck! :^)
>>23494
That's really helpful. Thanks. I need to save this. I've got a file where I've been saving stuff like this.
Open file (186.13 KB 850x429 Screenshot_66.png)
Using LLMs for work: https://youtu.be/h8Jth_ijZyY?t=1004
- Code
- Jargon explainers
- Name generation
- API design
- Thesaurus
- Brainstorming buddy
>>24529 Nice stuff NoidoDev, thanks!
>>24546
>If you want a husk for a robowaifu then sure stick llama2 in and call it a day. Large language models are not magic or a full mind.
>Your waifu will not have robust memory or opinions that are consistent. She will have no model of the world. There will be no internal emotional state, nor will she be aware of the passage of time and be able to passively observe things.
Since "we" won't be able to successfully simulate a human brain, I think the next best thing is to provide an LLM with as much context as possible. This could be done with an automatic prompt generator that pulls in "sensory data" (Camera -> OpenCV -> "You see a basketball. You see Anon." | Microphone -> Speech to Text -> "Hello Waifu, how are you doing?") and "memories"/context (Clock -> "The time is 5:00 pm." | Memory Bank -> "You are in Anon's Office. Anon is a robot technician. You are his companion.") ...
I wonder if you could subdivide this between different AIs? There could be a text file that stores every event the robowaifu recognizes; it gets fed to an LLM to summarize it, and this summary gets stored in the "long-term memories" file...
>>24559 Continuing this thought, the first step is to see if prompt engineering works well enough in the first place before creating a machine that can auto-generate them... Once you have a good structure for prompts, then engineering the prompt generator is easier.
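A tiny sketch of what such a prompt generator might eventually look like, once the prompt structure itself has proven out. All the sensor strings and the memory file name here are invented placeholders; real ones would come from OpenCV, a speech-to-text engine, and whatever memory store ends up being used.

import datetime

def gather_sensory_lines():
    """Stand-ins for vision and hearing; a real version would call OpenCV, STT, etc."""
    return ["You see a basketball. You see Anon.",
            'Anon said: "Hello Waifu, how are you doing?"']

def gather_memory_lines(memory_file="long_term_memories.txt", limit=3):
    """Pull a few lines of stored context; the file name is just an assumption."""
    try:
        with open(memory_file) as f:
            lines = [line.strip() for line in f if line.strip()]
        return lines[-limit:]
    except FileNotFoundError:
        return ["You are in Anon's office. Anon is a robot technician. You are his companion."]

def build_prompt(user_input):
    now = datetime.datetime.now().strftime("%I:%M %p")
    parts = [f"The time is {now}."]
    parts += gather_memory_lines()
    parts += gather_sensory_lines()
    parts.append(f"Anon: {user_input}")
    parts.append("Waifu:")
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_prompt("How are you doing?"))
    # The resulting text would be sent to the LLM; its reply (and notable events)
    # could then be summarized and appended back into long_term_memories.txt.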
After some reflection: An LLM is never going to have human cognition, spatial awareness, instincts, etc. Creating a holistic brain like the human one would require an entirely different approach. Humans aren't born with a dictionary, they use words to express subconscious, innate desires and "feelings". They don't reason their way into climbing over a rock, they just do it as a matter of organic learning and instinct. In order to have a Lizard brain, you need to evolve a perfect lizard. For now, the most we can have is a good-looking level 2 robowaifu (can perform a limited set of high-level pre-programmed functions IRL). A robot that can learn, ask questions and make observations like a human is not currently feasible. The most I can see being done is attempting to emulate memories and feelings, and automatically feeding it limited info like "You are in the kitchen" and "It is hot".
>>24562
Thoughts on Mary's Room?
>>24566
First time I've heard of it.
>Does she learn anything new?
Yes, because now she has empirical experience of what "blue" or "red" is. You cannot speak red or blue into the consciousness of someone who only knows black and white. If Mary's only job was to study the physical properties, it wouldn't matter if she knew what "red" was. She's not interested in the subjective experience.
So... A robot waifu LLM could have a high-level model of what anger is "supposed" to feel like, or the correct conditions for jealousy, but it's all abstract and hypothetical. Mary knows what red is "supposed" to do, but she doesn't know how it feels. In order to have a robot be angry like a human, it would have to have a near-perfect model of the human brain and body. Without a perfect physical model of the creature it's trying to replicate, a robot's "emotions" will always be different from an organic creature's. But if it's all good enough for companion work, it doesn't practically matter.
>>24567 >>24566 Or expanding this to the idea of the "holistic brain", we don't know the language the subconscious uses to communicate. How does the visual cortex cause the monkey to be scared when it sees a predator? If you know, how do you translate that to an LLM input when the original consists of many different modules working concurrently? The LLM would have to be one "center" of the brain, with a bunch of unconscious supporting centers connected to it. Otherwise, you're spiralling into explaining literally every function of the human body with language and shoving that into an LLM (which seems bloated, if not impossible). Human cognition isn't a single train of thought, it's a bunch of things happening all at once. Seeing a person is not the same as thinking "I see this person"
>>24568
You could have the LLM as an executive function, but how does it understand "what am I doing?" Let's say your unconscious functions have caused you to pick up a box. How do you know that you have picked up a box? In the current moment, you need to see yourself holding a box, feel the box in your hands, and optionally remember yourself not having the box and then picking it up. A robot has to poll every single piece of information and conclude what is going on, as well as decide what it should do next. Is a language model capable of discernment? I guess the only way to find out is to throw a bunch of 'em together and see what happens.
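One low-tech way to give the language model that "what am I doing right now" picture is a world-state blackboard that every subsystem writes into, and which the executive polls when building its next prompt. A toy sketch; the module names and events are invented for illustration only.

class Blackboard:
    """Central world state: modules publish observations, the executive polls them."""

    def __init__(self):
        self.state = {}
        self.subscribers = []

    def publish(self, source, key, value):
        self.state[key] = value
        for callback in self.subscribers:
            callback(source, key, value)

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def snapshot(self):
        """What the executive/LLM sees when it asks 'what is going on?'"""
        return dict(self.state)

def log_event(source, key, value):
    print(f"[{source}] {key} = {value}")

if __name__ == "__main__":
    board = Blackboard()
    board.subscribe(log_event)
    # Hypothetical modules reporting in:
    board.publish("vision", "object_in_hands", "box")
    board.publish("proprioception", "arms_pose", "holding")
    board.publish("planner", "current_task", "carry box to shelf")
    print(board.snapshot())  # a text rendering of this could be fed into the prompt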
>>24567
Good points Anon, but
>"what anger is "supposed" to feel like"
>"or the correct conditions for jealousy"
seems to me a rather odd choice of examples for
>"But if it's all good enough for companion work"
TBH, it seems a little like a feminist-oriented attempt at gaslighting here. But I'll give you the benefit of the doubt as a newcomer here. :^)
I'd also suggest at least trying to use more male-oriented, upbeat robowaifu concepts like
>"what love is "supposed" to feel like"
>"or the correct conditions for enthusiastic care"
Cheers Anon. :^)
>>24568
>Human cognition isn't a single train of thought, it's a bunch of things happening all at once.
I would suggest that what is going on within the human soul is vastly deeper and more complex than we can ever imagine in this life. But I'm also rather confident we can pull off satisfyingly-realistic simulacrums for much of its observable effects before all is said and done here! :^)
>>24569
>I guess the only way to find out is to throw a bunch of 'em together and see what happens.
Ahh, the old 'just keep throwing stuff against the wall, and see what sticks' approach to design & engineering, ehh? :^) Heh, it's actually worked out a few times in history before, so who knows?
>===
-add add'l reply
-minor edit
Edited last time by Chobitsu on 08/11/2023 (Fri) 19:25:14.
>>24559
This thread here is about organization. Your ideas and the responses relate to "cognitive architecture" and might have come up and been addressed in some ways in >>77 >>250 >>18 >>11102 >>17125 >>16217 >>15047 >>2731 >>22 >>111 >>107 >>152 ...
>>24605 I'll probably relo this whole convo at some point Anon. Thanks for the great crosslinks list BTW! Cheers. :^) >=== -minor edit
Edited last time by Chobitsu on 08/15/2023 (Tue) 14:25:28.
>>24559
I feel people are taking the wrong message from my post. I never said that I want to simulate a human mind, nor do I think that is a useful goal for robowaifus. My message is that machine learning is a useful tool in your toolbox, but should not be the only thing you use.
>I wonder if you could subdivide this between different AIs?
Yes absolutely, and that will be very important.
>There could be a text file that stores every event the robowaifu recognizes, it gets fed to an LLM to summarize it, this summary gets stored to the "long-term memories" file...
A better approach would be to identify important things to remember (you could ask the LLM), create an embedding of the information and then store the embedding with the text inside a database. To retrieve a memory, take the input before it's fed into the LLM and query the DB for related memories to be inserted into the pre-prompt. (This is not a new idea.) Skim this readme, I think you will find it useful: https://github.com/wawawario2/long_term_memory
>>24568
I don't think we need to copy the monkey meat to succeed; AI NNs are vaguely inspired by real neurons but are not that similar in reality. So there is no reason the waifu brain needs to be accurate. I don't think there is a "subconscious language", but the brain for sure is a collection of many independent systems that share information and is not a single system. The reason I speculate this is the existence of conditions like "callosal syndrome" (where the connection between the two hemispheres is damaged) and the types of behaviors that are associated with it. So one way this idea could be applied is that a waifu brain would have some sort of event bus, with different "modules" that listen for and publish events.
I would love some input right now; here are the things on my "TODO" list:
- I need to do a deep dive into psychology, this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice)
- I need to get more hands-on experience with actually working on ML models, I have a background in programming but nothing to do with ML. (requesting opinions and advice)
I don't know where to start when it comes to psychology, the entire field looks like it's filled with traps and is very pozzed. I have at random run into interesting ideas, I learned about "misattribution of arousal" via a book on game design. I don't know where to start looking. I would love input!
For ML, I would like to take a smaller-is-better approach. I don't think it's a nicety, I think it's essential for success. GloboHomo is begging for regulation and we can't continue to rely on the cloud and absurd amounts of compute. Even if things won't get that bad, it's still good to keep costs down to make it accessible to a single anon's budget. Even if we could collectivize the efforts, centralized groups are more easily subverted and attacked; this also robs the individual of the ability to make their waifu in their own singular image.
What makes LLMs so powerful is that they are trained on absurd amounts of unlabeled data. Given enough data, compute and network size, an LLM learns the patterns contained within language, including facts, ideas and bits of general reasoning (to better predict text). It's important to point out that LLMs are slow, expensive and hard to control. Creating smaller and simpler networks is hard because finding labeled data is hard. But we can take advantage of something really nice.
We already have LLMs: you can use large LLMs to create the labeled data and basically extract what they learned. We are already seeing exciting developments where LLMs are being used to train other models.
I ask anons to look at this problem from a high level; what works for them may not work for individual anons or even small groups of "us" (I'm a newfag so I won't speak for everyone here). Big tech is not going to push this, it's the reason why they don't allow it in their API/usage agreements; they do not want this. I think it's important to understand that we are smaller than big tech and that we play by different sets of rules. We may not have as many resources, but we also have advantages that globohomo corporations do not. Remember, there is no moat :^)
A current task that could be of a lot of use to "us" is figuring out what kind of structured and labeled data would be really useful for our mission. It would be a good move to generate it now, while access to very big LLMs is still quite open. I would love input on this!
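As a concrete illustration of the embedding-memory idea from the post above, here's a bare-bones sketch. The embed() function is a toy placeholder (in practice it would be a real sentence-embedding model or an LLM embedding endpoint), and the in-memory list stands in for an actual vector database.

import numpy as np

def embed(text):
    """Placeholder embedding: hash characters into a small vector.
    Swap in a real sentence-embedding model for actual use."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class MemoryStore:
    """Stores (embedding, text) pairs and retrieves the most related memories."""

    def __init__(self):
        self.items = []  # list of (vector, text)

    def remember(self, text):
        self.items.append((embed(text), text))

    def recall(self, query, top_k=2):
        q = embed(query)
        scored = sorted(self.items, key=lambda item: -float(np.dot(q, item[0])))
        return [text for _, text in scored[:top_k]]

if __name__ == "__main__":
    store = MemoryStore()
    store.remember("Anon's favorite tea is green tea.")
    store.remember("The workshop key is under the red toolbox.")
    store.remember("Anon dislikes loud music in the morning.")
    # Before querying the LLM, fetch related memories and prepend them to the pre-prompt.
    print(store.recall("What tea should I make for Anon?"))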
>>24612
My apologies for not responding sooner Anon.
>nor do I think that is a useful goal for robowaifu.
While I think that's a matter of degree, I'll agree with the sentiment in general. BTW, we now have a new thread on-topic with this: (>>24783).
>I don't think we need to copy the monkey meat to succeed, AI NNs are vaguely inspired by real neurons but are not that similar in reality.
Interestingly, the so-called 'monkey meat' as you put it is now being used in conjunction with in-silico devices, just as predicted by /robowaifu/ ahead of time. (>>24827)
>Only time will tell how useful this approach will be for robowaifus, but I think there is little doubt we'll see these systems being used in guided missiles and drones within the decade.
>my ML TODO
I'd suggest starting with the recommended post from our /meta threads, Anon.
>-How to get started with AI/ML for beginners (>>18306)
>But we can take advantage of something really nice. We already have LLMs: you can use large LLMs to create the labeled data and basically extract what they learned. We are already seeing exciting developments where LLMs are being used to train other models.
Yes! This is a really nice innovation going on. I, for one, am eager to hear about this from one of our resident AI researchers, RobowaifuDev.
>A current task that could be of a lot of use to "us" is figuring out what kind of structured and labeled data would be really useful for our mission. It would be a good move to generate it now, while access to very big LLMs is still quite open.
Indeed. And time may be running short for everyone in this regard as well: (>>24704).
Thanks for making such an effort-post here Anon. Those are always welcomed on /robowaifu/! Cheers. :^)
>>24612
>- I need to do a deep dive into psychology, this would be useful for figuring out what "mechanics" this mind should be governed by. (requesting opinions and advice)
I answered here: >>24861
I upgraded this: >>10317. Though nobody seems to care, since it was broken and no one complained or fixed it. It's not for downloading the files, just for getting the metadata for those which you already have. It doesn't work for renamed files, for example where the title of the paper was put into the name. I want to use this to extract the metadata and be able to use it in something like Obsidian, so I can have the description of the paper there and the title with the link. At some point, making keywords into tags automatically would also be interesting. (Indentation might be botched in the code, since copying from Emacs seems to not work very well.)
# pip install arxiv first
import os
import arxiv

# Getting metadata for your ArXiv.org documents
AI_PAPERS_DIR = os.path.expanduser("~/Games/Not-Games/AI_Papers/")
if not os.path.isdir(AI_PAPERS_DIR):
    # fall back to asking for the folder if the default one isn't there
    AI_PAPERS_DIR = os.path.expanduser(input("The dir with papers: "))

filenames = os.listdir(AI_PAPERS_DIR)
id_list = []
for filename in filenames:
    # files named like "2106.01234.pdf" split into three parts: ["2106", "01234", "pdf"]
    if len(filename.split('.')) == 3:
        id_list.append('.'.join([string for string in filename.split('.') if string != "pdf"]))
id_list = list(set(id_list))

client = arxiv.Client()
# query by arXiv id (id_list=...), not by free-text search
result = next(client.results(arxiv.Search(id_list=[id_list[0]])))
print(vars(result).keys())
# dict_keys(['entry_id', 'updated', 'published', 'title', 'authors', 'summary', 'comment', 'journal_ref', 'doi', 'primary_category', 'categories', 'links', 'pdf_url', '_raw'])
print(f"{result.title}")
print(f"{result.summary}")
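To take that a step further toward the Obsidian idea, here's a rough continuation sketch. It reuses client, id_list and AI_PAPERS_DIR from the snippet above; the note layout, the tag style, and the "notes" output folder are my own assumptions, not a finished tool.

# Continuation of the snippet above: write one Markdown note per paper,
# so Obsidian can index titles, links and auto-generated tags.
NOTES_DIR = os.path.join(AI_PAPERS_DIR, "notes")  # assumed output folder
os.makedirs(NOTES_DIR, exist_ok=True)

for result in client.results(arxiv.Search(id_list=id_list)):
    # Turn arXiv categories (e.g. "cs.CL") into Obsidian-style tags.
    tags = " ".join("#" + cat.replace(".", "/") for cat in result.categories)
    note = (
        f"# {result.title}\n\n"
        f"{tags}\n\n"
        f"Link: {result.entry_id}\n\n"
        f"{result.summary}\n"
    )
    # Use the short id part of the entry URL as the filename.
    paper_id = result.entry_id.rsplit("/", 1)[-1]
    with open(os.path.join(NOTES_DIR, f"{paper_id}.md"), "w") as f:
        f.write(note)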
>>26312 Thanks kindly, NoidoDev. :^)
