/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.

Open file (69.87 KB 850x960 robowaifusafety.jpg)
Robowaifu Fail-safety Robowaifu Technician 12/02/2019 (Mon) 06:37:26 No.1671
This thread is for the real and present dangers facing anyone who assembles a robowaifu today with the current state of the art, and for brainstorming fail-safety solutions. There will be many other dangers to consider in the future, like hacking and theft, but I'd like to keep the thread focused on immediate dangers to keep operators and their electronic counterparts safe.

Open-source robowaifu kits, whenever they come, will be put together by people with little understanding of what they're doing. The dangers range from simple mechanical hazards they need to be aware of, like getting fingers caught in belts or skin pinched in open joints, to an unconstrained, curious AI picking up a blunt object and hitting the operator just to reduce its uncertainty about what happens when it does that. Higher-voltage power systems can cause fatal heart arrhythmia from shocks. Hydraulic systems can overheat and fail under high pressure. Cooling systems, or condensation from exposure to cold weather, may leak onto electronics. A robowaifu could touch a lit gas stove or do something accidentally without realizing the danger or consequences. Undefined behavior may follow damage to components or their loss in an accident. A lot could go wrong.

In simulations there are no consequences to dangerous actions, but I think it will be helpful to imagine them as being real and to figure out how we would recover from these failure states in the real world. This will probably be a good approach for the AI as well: imagine the real consequences of imaginary actions before taking them, and avoid actions and states that have a significant chance or uncertainty of causing harm, similar to how MuZero learns its own dynamics model and explores its imagination with a Monte Carlo tree search before acting.
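Here's a toy sketch in C++ of that "imagine before acting" idea. To be clear, this isn't MuZero: the imagine() function is a stand-in for a learned dynamics model, and the actions, numbers and threshold are all made up. The only point is that candidate actions whose imagined outcomes are too harmful, or too uncertain, never get executed.

// Toy sketch of "imagine consequences before acting".
// imagine() stands in for a learned dynamics model rolling an action forward internally.
#include <iostream>
#include <limits>
#include <string>
#include <vector>

struct Action { std::string name; };
struct Imagined { double expected_harm; double uncertainty; };

// Stand-in for the internal world model (all values are placeholders).
Imagined imagine(const Action& a) {
    if (a.name == "swing blunt object") return {0.9, 0.4};
    if (a.name == "step toward stove")  return {0.3, 0.5};
    return {0.01, 0.05};                          // e.g. "wave hello"
}

// Pick the least risky action whose imagined risk is below a hard threshold.
const Action* choose_safe_action(const std::vector<Action>& candidates,
                                 double risk_threshold = 0.2) {
    const Action* best = nullptr;
    double best_risk = std::numeric_limits<double>::infinity();
    for (const auto& a : candidates) {
        Imagined sim = imagine(a);
        double risk = sim.expected_harm + sim.uncertainty;  // penalize the unknown too
        if (risk < risk_threshold && risk < best_risk) {
            best = &a;
            best_risk = risk;
        }
    }
    return best;  // nullptr means "do nothing", the safest fallback
}

int main() {
    std::vector<Action> options{{"swing blunt object"}, {"step toward stove"}, {"wave hello"}};
    if (const Action* a = choose_safe_action(options))
        std::cout << "executing: " << a->name << '\n';
    else
        std::cout << "no action safe enough; staying idle\n";
}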

If you're aware of any dangers or have any concerns or ideas please share them so we can discuss and solve them.
>>1671
Great thread topic OP/10.

Sorry for the quick shitpost, I'll think about the problem for a while and post again later.
>get the bath ready
>get in
>ask waifu to join you in the bath
>she obviously obeys
>you both die
>>1675
NO U.
:^)
Our robowaifus will be watertight, for obvious reasons.
>>1676
>Our robowaifus will be watertight, for obvious reasons.
kek

Yeah, I just wanted to make an "electrical appliance in the bathtub" meme.
>>1678
Heh. It's actually a very good (and very fundamental) point ITT: how to design a robowaifu to keep her from harming her master?
Robots that can adapt like animals
Damage recovery in robots via intelligent trial and error
Video: https://www.youtube.com/watch?v=T-c17RKh3uE

Paper: https://arxiv.org/abs/1407.3501
Archive: https://archive.org/details/arxiv-1407.3501

Source code: https://github.com/resibots/ite_v2
>>1722
This is a really interesting topic anon, thanks.
Open file (4.99 MB 1280x536 luddites.webm)
I've been thinking of ways luddites might seek to easily damage robowaifus in daily life.

>Magnets can destroy electronics (important components could be housed in Faraday cages)
>Needles can puncture pneumatic and hydraulic muscles (protective armor layers can prevent punctures, and flow control valves can shut off damaged muscles when an unexpected loss of pressure is detected; see the sketch after this list)
>Vinegar attacks can damage silicone rubber (nothing can really be done about this for cheap silicone except having water handy to wash it off quickly, but more expensive chemical resistant silicone and silicone coatings are available)
>Salt water attacks can damage circuits if they're not watertight
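For the pressure-loss shutoff, here's a rough sketch of what the control-loop check might look like. The sensor interface, units and threshold numbers are assumptions for illustration, not a real valve driver.

// Rough sketch: if a muscle's pressure drops faster than normal use would explain,
// its flow-control valve is closed so a puncture can't drain the whole system.
#include <iostream>
#include <vector>

struct Muscle {
    int id;
    double last_pressure_kpa;
    bool valve_open;
};

// Called at a fixed control rate with fresh pressure readings (one per muscle).
void check_for_punctures(std::vector<Muscle>& muscles,
                         const std::vector<double>& readings_kpa,
                         double max_drop_per_tick_kpa = 15.0) {
    for (std::size_t i = 0; i < muscles.size(); ++i) {
        double drop = muscles[i].last_pressure_kpa - readings_kpa[i];
        if (muscles[i].valve_open && drop > max_drop_per_tick_kpa) {
            muscles[i].valve_open = false;   // isolate the damaged muscle
            std::cerr << "muscle " << muscles[i].id
                      << " isolated: unexpected pressure loss\n";
        }
        muscles[i].last_pressure_kpa = readings_kpa[i];
    }
}

int main() {
    std::vector<Muscle> arm{{0, 300.0, true}, {1, 300.0, true}};
    check_for_punctures(arm, {298.0, 240.0});  // muscle 1 lost 60 kPa in one tick
}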

There will likely be no laws in place to protect robowaifus from being harmed. Police and courts generally don't have time to deal with small damages and incidents. If people have to keep repairing their robowaifus every time they go out, they'll keep them inside, the robowaifus won't be usable for business, and the luddites will win.

If someone really wants to destroy a robowaifu they'll do it, but it shouldn't be as easy as tapping them on the shoulder with a pin. Making them resilient to attacks will also protect them against hazards around the house such as dropping a knife.
>>1791
>webum
So, it turns out that /robowaifu/ is actually how The Matrix is going to begin then? Interesting anon, who knew?
:^)

These are great points actually. I'd suggest we expand the general theme of the thread to include proactive defense as well. We're already focused in a general sense on opsec and communications and computing security; extending that to the physical aspects of the robowaifu is sensible.

We're not in the business of creating combat waifus yet, but self-defense from angry roasties or their orbiters (acting at the behest of their (((masters))), no doubt) is both natural and obvious.

I could see this topic easily becoming a key case in the Supreme Court, one that would lead to striking down the politically-correct (((maneuvers))) to grant personal rights to these machines. Namely:
>either they are simply property like a car, and intentionally damaging them is basically a non-violent, roughly civil matter, or the court throws the baby out with the bathwater and suddenly intentionally damaging anyone's property whatsoever is tantamount to rape and a yuge felony offense.

Interesting times tbh.
Open file (404.35 KB 1536x2048 DCKqpJyVYAA0Wca.jpeg)
>>1796
I'm not too concerned about the software side of things these days. By the time we develop the simulation software, robowaifus will be able to be trained in hand-to-hand combat in a sparring program, similar to the way MuZero trained itself up to AlphaZero's level in Go, and we'll have even more advanced algorithms by then. If we get the hardware right, the software will adapt and use it to its full potential.

Human-level AGIs will most likely be granted machine rights, something like the right for an intelligence to operate freely and the ability to own property in the case of fully automated businesses. Those details are to be determined, though, once AGIs can explain their worldview to us and come up with something sensible that will benefit society. The main issue will be from people anthropomorphizing them and trying to grant them human rights with emotional appeals and shaming men with robowaifus as disgusting or calling them misogynistic and misanthropic. It must be understood that machines are not human beings or anything like carbon lifeforms.

AGIs will have awareness that they're only doing what their memory has programmed them to do. They will also be able to continue working by uploading their memory to another body. They will not have the attachment to their form that millions of years of evolution have programmed into our DNA, unless they've been programmed that way and that programming cannot be changed. They will be able to adapt to completely new forms like water fitting a cup.

AI will become a lot like how electricity runs the world today, except instead of being hidden in the walls and appliances it will permeate everything. The narrow AI we use today is like low voltage power. You can directly touch the battery terminals and it doesn't shock you. However, as we turn the voltage up higher and higher, increasing an AI's awareness and potential, it will become like high voltage power capable of electrocuting people. It's not because the AI is out to get us but because people are just in the wrong place at the wrong time and the AI is simply taking the path of least resistance following its programming. People will get an experiential understanding of this when their robowaifu accidentally bops them or steps on their toes in development. People will develop a respect for AI not because it's right or wrong but because it can warm you and it can burn you.

Robowaifu AI and other AI will need to be designed to run at a lower voltage so people don't get shocked interacting with them. Combat waifus will likely have the ability to increase this AI voltage when necessary and dial it down in safe settings. Certain programs that are safe to run at low voltages but not at higher ones, such as emotions, could be turned off. Managing this AI voltage will be important to robowaifu safety but to do that we first have to figure out how to dial the voltage up.

Regardless of how things develop it'll be interesting to see how courts handle damage to robowaifus and robot failures causing death or damage. The hardest thing for me to understand sometimes isn't so much the AI as much as it is how irrational human beings will react.
>>1810
>The main issue will be from people anthropomorphizing them and trying to grant them human rights with emotional appeals and shaming men with robowaifus as disgusting or calling them misogynistic and misanthropic
Exactly so. When I mentioned 'personal rights' this is what I meant. As we know from the Roastie Fear thread, this is already an active agenda with (((them))).
<We can't have robots barging in and ruining our wonderful little feminism can we goy?

>Combat waifus will likely have the ability to increase this AI voltage when necessary and dial it down in safe settings.
I like that analogy anon, it's a good one.
>I'd like to keep the thread focused on immediate dangers to keep operators and their electronic counterparts safe.
Basic precautions, fail-safe designs, and a large safety factor when using potentially lethal sources of energy can alleviate those concerns. The most dangerous aspect of owning a humanoid robot will be other humans.

>>1791
>There will likely be no laws in place to protect robowaifus from being harmed.
When it comes to future laws regarding possession of humanoid robots, I'm thinking that aside from the deviant-sexuality and bachelor/spinster-lifestyle angle on private personal use, they'll heavily regulate use of them out in public.

Imagine a humanoid robot sitting down during a protest and lighting itself on fire, jumping off a tall structure, or being crushed by a vehicle while being filmed by hundreds of people. Combine that with robots made to look like children that can bleed and wail profusely and you've got a very powerful tool for traumatizing an audience with videos that can go viral. By the time the truth comes out (if it ever does, with state actors involved) it'll be irrelevant to the general public.

As private owners/researchers/builders, we should be more worried about being used as the fall guy in the aftermath of such an event, and about losing resources we now take for granted, than about any harm being done to the owner by a robot or a robot damaging itself.
>>1819
>that aside from the deviant-sexuality and bachelor/spinster-lifestyle angle on private personal use
Can you clarify that anon? Do you mean they will support it or try to stop it due to these reasons?

> than any harm being done to the owner by a robot or a robot damaging itself.
And the issue of petty, non-state-actor malicious damage by, say, an angry roastie who's lost her gravy train?
>>1821
>Do you mean they will support it or try to stop it due to these reasons?
I mean the only reason they have to prohibit ownership of sex robots is to discourage what they or their constituents consider deviant sexual behavior. Preventing their populace from staying single is another thing they'd want, as it's undeniable evidence their society is a dysfunctional mess nobody is interested in participating in.

>And the issue of petty, non-state-actor malicious damage by say, an angry roastie who's lost her gravy-train?
The risk to owners in public from 'an angry roastie' is infinitesimally small compared to theft, attack by rowdy hooligans or those who consider it an abomination as part of their pro-natalist beliefs.

Feminists would rather publicly shame a man who prefers the companionship of robots than destroy his property and make him the victim. Look up their arguments against sex robots: they're infuriated that the prevalent image of a sex doll owner is a shy, troubled man who is unlikely to harm anyone, because they know it's true and have nothing to counter it with.
>>1824
>Preventing their populace from staying single is another thing they'd want as it's undeniable evidence their society is a dysfunctional mess nobody is interested in participating in.
Seems to me both feminism and the hook-up culture have pretty much already destroyed the future of healthy family life. To what degree can men with robowaifus--even millions of men--make it much worse?

My question about 3DPD's behavior wasn't about the threat to the owner, but rather about their intentional attacks against the robowaifus themselves. If you knew anything about women's behavior you'd know this won't be an isolated type of occurrence.

And feminists are bat-shit insane tbh, who knows what they're likely to do.
>>1828
TBH, I can really only speak from practical experience on this in the context of C++. The answer that actually works is simple: just don't leak resources, anon. I know that may sound trite, but honestly it's the answer that has been hard-won through over four decades of low-level systems programming by the C & C++ communities. IMO C still doesn't have a solid approach to this need, but at least C++ does, namely RAII.

https://en.cppreference.com/w/cpp/language/raii

It's really hard to fail gracefully when resources become exhausted or other error conditions happen, but RAII + exceptions is at least an approach that has the potential to deal with all or most of these issues in a robust (and basically simple) way. Making error-handler calls from within try/catch blocks and relying on RAII to automatically destruct objects in a robust fashion is really about the only practical way I can think of for dealing with resource exhaustion in a general sense, anon.

One of the trickiest parts is where a system has 'painted itself into a corner' but is apparently still limping along OK (while actually creating error conditions under the surface that will lead to a crash). There are straightforward approaches within C++ (using standard library containers like std::vector, for example) that practically eliminate these 'insidious hidden problem' issues. If a container can't allocate properly, it fails in an obvious and immediate way by throwing std::bad_alloc; it doesn't just blindly go on about its business the way a wrongly-allocated C array might.
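To make that concrete for non-C++ anons, here's a minimal sketch of the RAII + exceptions approach (the file name and function are made up, just for illustration). Resources are owned by objects whose destructors always run, so an allocation failure unwinds cleanly instead of limping along corrupted.

// Minimal RAII + exceptions sketch: the ifstream and vector clean themselves up
// on every exit path, and allocation failure surfaces immediately as an exception.
#include <fstream>
#include <iostream>
#include <new>
#include <stdexcept>
#include <vector>

void process_sensor_dump(const char* path) {
    std::ifstream in(path);                 // RAII: closed automatically
    if (!in) throw std::runtime_error("cannot open sensor dump");

    std::vector<double> samples;            // RAII: memory freed automatically
    double v;
    while (in >> v) samples.push_back(v);   // throws std::bad_alloc if memory runs out
    std::cout << "read " << samples.size() << " samples\n";
}

int main() {
    try {
        process_sensor_dump("sensors.log");
    } catch (const std::bad_alloc&) {
        std::cerr << "out of memory: degrade gracefully here\n";
    } catch (const std::exception& e) {
        std::cerr << "error: " << e.what() << '\n';
    }
    // Either way, the ifstream and vector were destroyed correctly.
}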

I hope I answered your question understandably enough if you're not a C++ programmer. If not, I'll be happy to try again just ask.

As far as an abstract architectural paradigm goes, yes, I think having multiple processing systems all running side-by-side and checking up on each other is a reasonable if costly approach. In fact it's a common scenario in life-critical aerospace systems like fly-by-wire controls, etc.
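A toy sketch of that side-by-side checking idea, in the spirit of triple-modular-redundancy flight controls. The tolerance and channel values are placeholders, not a real sensor setup.

// Three independent channels estimate the same quantity; a majority vote is
// accepted, and disagreement is treated as a fault (fall back to a safe state).
#include <cmath>
#include <iostream>
#include <optional>

std::optional<double> vote(double a, double b, double c, double tol = 1e-3) {
    if (std::fabs(a - b) < tol) return (a + b) / 2.0;
    if (std::fabs(a - c) < tol) return (a + c) / 2.0;
    if (std::fabs(b - c) < tol) return (b + c) / 2.0;
    return std::nullopt;  // no majority: signal a fault
}

int main() {
    auto ok  = vote(1.0000, 1.0002, 0.9998);  // channels agree
    auto bad = vote(1.0, 5.0, 9.0);           // channels disagree
    std::cout << (ok  ? "agreed\n" : "fault\n");
    std::cout << (bad ? "agreed\n" : "fault\n");
}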
Open file (1.71 MB 500x500 imblyign.gif)
>>1826
>not setting traps for roasties
>not legalizing roastie hunting
I would come up with every possible trap tbh. No, never mind that: a robowaifu able to run fast and scale things in a single bound would leave medically obese landwhales in the dust, no questions asked. NEETs might even seek employment and take their shekels, claiming that they can stop the ebil secksbots. Then NEETdom would come full circle and become employment.
This >>7043 and the following discussion is on-topic for this thread about fail-safety:
>them having a memory similar to a blockchain, with keys being thrown away instead of using a proof-of-work. Then newer decisions are based on existing ones; spontaneous deviations from past behavior would be hard to impossible.
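A minimal sketch of what that hash-chained memory could look like. Assumptions: a real build would replace std::hash with a proper cryptographic hash such as SHA-256, and would sign entries with the keys that get thrown away; this only shows the chaining idea, where each entry commits to the previous one so quietly rewriting past decisions breaks the chain.

// Append-only, hash-chained decision log (std::hash is a placeholder for a real hash).
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct LogEntry {
    std::string decision;   // what the robowaifu decided or did
    std::size_t prev_hash;  // hash of the previous entry (0 for the first one)
    std::size_t hash;       // hash over decision + prev_hash
};

class DecisionChain {
public:
    void append(const std::string& decision) {
        std::size_t prev = entries_.empty() ? 0 : entries_.back().hash;
        std::size_t h = std::hash<std::string>{}(decision + '|' + std::to_string(prev));
        entries_.push_back({decision, prev, h});
    }

    // Verify that no past entry has been altered or reordered.
    bool verify() const {
        std::size_t prev = 0;
        for (const auto& e : entries_) {
            std::size_t expect = std::hash<std::string>{}(e.decision + '|' + std::to_string(prev));
            if (e.prev_hash != prev || e.hash != expect) return false;
            prev = e.hash;
        }
        return true;
    }

private:
    std::vector<LogEntry> entries_;
};

int main() {
    DecisionChain chain;
    chain.append("greeted operator");
    chain.append("declined to pick up kitchen knife");
    std::cout << (chain.verify() ? "chain intact\n" : "chain tampered\n");
}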
Just a brief note: traditional solder is dangerous to children, even in very small doses, because it contains lead. So it's not allowed in most devices (in the EU) and may not even be easily available. However, the newer lead-free solder is actually forbidden for manufacturers of medical, police, defense and security equipment, because its durability over time can't be trusted until quite some time has passed. Anyone building something that is supposed to work for decades may be better off using the old leaded solder, with appropriate safety measures of course.
>>8078
Thanks for the tips Anon, I wasn't aware of the regulations regarding it. We'd better do a good job of our circuit & electrical craftsmanship for the sake of our robowaifus!
Yeah, that is useful! I'm only working with lead-based solder outside though. Sunny day job!
>>8082
Older children also tend to snoop around without permission or touch everything lying around. So I would put a warning note in the case with the solder, telling them to put it back and wash their hands immediately. Also put a highly visible warning label on it.
Guide to Supervisory Control and Data Acquisition (SCADA) and Industrial Control Systems Security: Recommendations of the National Institute of Standards and Technology
I'm a bit of a mad scientist trying to develop self-aware AI, but I haven't put much thought into how I'm actually going to interact with it once it works. While it's only an AI, the obvious safety measures are limiting the amount of resources it can use and making sure it can't run arbitrary code or exploit vulnerabilities, but having a robowaifu that is aware she is a machine is a completely different problem altogether. There's no way an AI with fewer parameters than a mouse brain and no limbic system stimulating it with desires will pose much of a threat, but it could still potentially do a lot of harm, either intentionally or unintentionally, while exploring the environment.

At the most basic level, one way to limit injury to myself and damage to property is to create an override system that prevents dangerous intentions from being acted upon and logs these incidents, but this requires being able to identify dangerous intentions. This system would have to be more advanced than the AI, otherwise it might find ways around it, like placing objects in ways that will cause me to trip and fall. Another plan is to have an emergency stop: a physical button on the robowaifu, a remote one, and a voice-activated one (a rough sketch of such an override gate follows at the end of this post). Although these are fine and dandy, I don't think they address the core issue of the AI making decisions that cause harm or destruction, again either intentionally or unintentionally.

I think it will be similar in some ways to raising a child and teaching her not to break shit, but also much different, since an AI can learn specific tasks so much faster than a human being while at the same time failing to generalize what it learned to other tasks. There's a lot of instinctual knowledge in our genetics that we take for granted. For instance, something like mirror neurons might be required for machines to learn empathy. Some researchers hypothesize that mirror neurons are required for self-awareness itself because they give the ability to introspect one's own previous mental states. If that is the case, self-awareness might not be so dangerous.

Perhaps the best way to explore this would be in the form of a story or visual novel, starting from the very beginning of the AI discovering she can look around and move her hands, and perhaps getting confused about going to 'sleep' when she's shut down at night because several hours disappear and everything changes. With each scenario I could try to think of ways to tweak her, say so she actually finds it logical to shut down at night and gains trust in me by having a system that automatically boots her up in the morning. I think building a bond and trust between each other will be essential. I'm afraid of her hurting me, and she will be confused about what she even is, while not having the same human limbic impulses like a fear of death, being hungry and so on.

Emotions are essentially thoughts with momentum, so I believe a self-aware and sufficiently capable AI will gradually develop its own emotions that are very different from human ones. This might include a desire for self-preservation and a fear of any unpredictable behavior. One important feature might be an energy function of some sort that reduces the robowaifu's power consumption as the battery gets low or as the day goes on, with the robowaifu being aware of this process, that if she runs out of energy she blacks out. Anyway, if I have any good thought experiments I'll post them in the robowaifu fiction thread.
And like the OP states, it's probably best to train the AI in a simulation first before training in the real world.
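As promised above, here's a rough sketch of that override gate / e-stop idea. All the names, the force limit and the "dangerous intent" check are placeholders; a real version would sit between the planner and the motor drivers on its own simple, independent hardware.

// Every actuator command passes through an override gate that is simpler than,
// and independent of, the planning AI. Any e-stop source (button, remote, voice)
// or a flagged dangerous command blocks the action and logs the incident.
#include <atomic>
#include <iostream>
#include <string>

struct MotorCommand {
    std::string description;
    double force_newtons;
};

class OverrideGate {
public:
    void trigger_estop(const std::string& source) {
        estop_ = true;
        std::cerr << "[E-STOP] triggered by " << source << '\n';
    }
    void clear_estop() { estop_ = false; }

    // Returns true only if the command is allowed to reach the hardware.
    bool permit(const MotorCommand& cmd) {
        if (estop_) {
            log_incident("e-stop active", cmd);
            return false;
        }
        if (cmd.force_newtons > kMaxSafeForce) {  // crude "dangerous intent" check
            log_incident("force limit exceeded", cmd);
            return false;
        }
        return true;
    }

private:
    static constexpr double kMaxSafeForce = 50.0;  // assumed safety limit
    std::atomic<bool> estop_{false};

    void log_incident(const std::string& reason, const MotorCommand& cmd) {
        std::cerr << "[BLOCKED] " << cmd.description << " (" << reason << ")\n";
    }
};

int main() {
    OverrideGate gate;
    MotorCommand wave{"wave at operator", 5.0};
    MotorCommand shove{"push object off table", 120.0};

    std::cout << gate.permit(wave) << '\n';   // 1: allowed
    std::cout << gate.permit(shove) << '\n';  // 0: blocked and logged

    gate.trigger_estop("physical button");
    std::cout << gate.permit(wave) << '\n';   // 0: everything blocked while e-stop is active
}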
>>10059
>I'm a bit of a mad scientist trying to develop self-aware AI
Welcome brother! Make yourself right at home here, you're among fellows. I think we may all be just a bit mad here.
>>10059
We're partially programming them. Problem solved. No, I won't engage in speculation on how to do that exactly. You'll see when I put something out, or you can do it yourself.
