/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality

Open file (485.35 KB 1053x1400 0705060114258_01_Joybot.jpg)
Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155
What would be a good RW simulator? I guess I'd like to start with some type of PCG solution that just builds environments first, then build up from there to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with few modifications. Sort of like the OpenAI Gym concept, but for waifubots.
https://gym.openai.com/
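>ed. for illustration, a minimal sketch of what a Gym-style interface for a waifubot environment could look like in C++. All names here are hypothetical, not an existing API:

#include <vector>

// Hypothetical observation/reward bundle returned each tick.
struct StepResult {
    std::vector<float> observation;  // joint angles, contact sensors, etc.
    float reward;                    // task-specific score for this step
    bool done;                       // episode over (e.g., waifu fell down)
};

// Gym-style environment interface, waifubot edition.
class WaifuEnv {
public:
    virtual ~WaifuEnv() = default;
    // Reset the simulation and return the initial observation.
    virtual std::vector<float> reset() = 0;
    // Apply one action (e.g., motor torques) and step the physics.
    virtual StepResult step(const std::vector<float>& action) = 0;
};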
Open file (153.16 KB 450x450 0705060114258_05_24f.gif)
>>155
>PCG
Start by making cute anime rooms for cute anime waifus.

>>>/v/12703642
Maybe it should go to school to learn how to be a good waifu among other things?

www.goodai.com/school-for-ai

>ed. careful w/ that (((brainwashing))) anon
This looks like a great resource for a robowaifu AI Training Academy, OP. It's a yuge, tagged, textured 3D dataset based on IRL environments. You have to sign a license agreement with the company to use it, but it's pretty unique atp.

http://archive.fo/Hawz2
You need a way for the AI to have a 'Matrix' of some sort, so it can be trained (with millions of runs?) and serve as a proof of concept and working model for an IRL robot wife body.
I guess the movie The Matrix is all about the reverse: feeding the AI's version of simulated reality into humans' minds. Anything there that can be used?
Open file (26.41 KB 480x360 0.jpg)
Berkeley robotics research uses predictive modeling based on past observations. This would be a good idea to incorporate into a RW simulator tbh.
The pozfest is calling it 'robot imagination'. Eeh.

news.berkeley.edu/2017/12/04/robots-see-into-their-future/

https://www.invidio.us/watch?v=Li_vZVpiFSA
>>1089
>inb4 we create skynet and swarm the world with Waifu-1000 terminators
>>1092
Topkek.
Meme it!
>>155
>some type of PCG solution
<PCG
I'll link this here instead of the C++ thread since it's probably more on topic here.

https://www.invidio.us/watch?v=F9tGa-hbmTU
>related
>>1478
blogs.unity3d.com/2018/03/15/ml-agents-v0-3-beta-released-imitation-learning-feedback-driven-features-and-more/
>>1097
That is pretty effin cool anon thanks.
>related
>>2301
>>155
>PCG
en.wikipedia.org/wiki/Procedural_generation
pcg.wikidot.com/
>>1092
>and swarm the world with Waifu-1000 terminators
Would.. would this really be a bad thing, anon?
[[2732
I thought this might be relevant to this board.
It basically means that if you gave this A.I. a body, it'd learn to walk awkwardly on its own. If it were smart enough, we could program our waifu just by giving feedback, like telling her not to swing her arms too much and not to run as fast as possible.

https://www.invidio.us/watch?v=gn4nRCC9TwQ

>ed. probably more on-topic in the bipedal thread anon >>237
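>ed. to make the 'program her with feedback' idea concrete, RL setups usually encode that feedback as a shaped reward. A toy sketch; every field and weight below is made up:

// Toy reward shaping for "don't swing your arms too much, don't run
// too fast". All fields and weights here are made-up illustrations.
struct WaifuState {
    float forward_speed;     // m/s
    float arm_swing_rate;    // rad/s, averaged over both arms
    bool  upright;           // still standing?
};

float reward(const WaifuState& s) {
    float r = s.forward_speed;            // base reward: make progress
    r -= 0.1f * s.arm_swing_rate;         // feedback: less arm swinging
    if (s.forward_speed > 2.0f)
        r -= (s.forward_speed - 2.0f);    // feedback: don't sprint
    if (!s.upright) r -= 10.0f;           // falling over is always bad
    return r;
}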
>>1102
Yes, that's very relevant anon, and thanks for taking the time to check the catalog and put the link into the right thread. If this is an area of interest to you, then you might check out the OP in the bipedal bread as well. There is some simulation-based research that has made some progress with automated bipedal kinematics training.
>>1103
Isn't that thread more focused on the mechanical side of legs and bipedal motion? I'm kind of more into the teaching side of A.I. What's the best thread for posting this sort of suggestion?
>>1104
>focused on the mechanical side of legs and bipedal motion?
That's probably a legitimate observation. From my perspective the two (AI & Mechanics) are inextricably intertwined [for robowaifus]. The OP has images and links to related research.

>what's the best thread for posting these sort of suggestions?
There are about 6 or 7 AI-specific breads on the board at last count. >>4658

This bread is good if the goal is a 'virtual training gym'. Using any of the others or creating your own if none of them are specific enough is fine.
>related
>>4789
Open file (316.69 KB 620x360 tabletennis.webm)
Does anyone know a good open-source game engine we could use for building a robowaifu simulation? The ones I've found so far depend on their own scripting languages, which makes them garbage for machine learning. It'd be useful if we could combine a physics engine with a PCB and electronics simulator. Here are some physics engines we could use:

Bullet
C++ https://github.com/bulletphysics/bullet3
Python https://pybullet.org/wordpress/
>Rigid body and soft body simulation with discrete and continuous collision detection
>Collision shapes include: sphere, box, cylinder, cone, convex hull using GJK, non-convex and triangle mesh
>Soft body support: cloth, rope and deformable objects
>A rich set of rigid body and soft body constraints with constraint limits and motors
>Plugins for Maya, Softimage, integrated into Houdini, Cinema 4D, LightWave 3D, Blender and Godot and import of COLLADA 1.4 physics content
This one is designed and used for robotics. Some robotics simulations and papers using PyBullet:

TossingBot: Learning to Throw Arbitrary Objects with Residual Physics
https://www.youtube.com/watch?v=f5Zn2Up2RjQ

Hierarchical Policy Design for Sample-Efficient Learning of Robot Table Tennis Through Self-Play
Website: https://www.cs.utexas.edu/~reza/
Paper: https://arxiv.org/pdf/1811.12927.pdf

Godot 3.0 is open-source and uses Bullet for physics, but its scripting language is even slower than Python. There's a Godot fork with Tensorflow 2.0 support: https://github.com/godot-extended-libraries/godot-tensorflow-workspace

PhysX
Website https://developer.nvidia.com/gameworks-physx-overview
Github https://github.com/NVIDIAGameWorks/PhysX
>It supports rigid body dynamics, soft body dynamics (like cloth simulation, including tearing and pressurized cloth), ragdolls and character controllers, vehicle dynamics, particles and volumetric fluid simulation.
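>ed. a minimal sketch of setting up a Bullet world in C++ and dropping a sphere, assuming the stock Bullet3 build linked above:

#include <btBulletDynamicsCommon.h>
#include <cstdio>

int main() {
    // Standard Bullet boilerplate: config, dispatcher, broadphase, solver.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // A 0.5 m sphere with 1 kg mass, dropped from 10 m up.
    btSphereShape sphere(0.5f);
    btVector3 inertia(0, 0, 0);
    sphere.calculateLocalInertia(1.0f, inertia);
    btDefaultMotionState motion(
        btTransform(btQuaternion::getIdentity(), btVector3(0, 10, 0)));
    btRigidBody body(1.0f, &motion, &sphere, inertia);
    world.addRigidBody(&body);

    // Step at 60 Hz for two simulated seconds and watch it fall.
    for (int i = 0; i < 120; ++i) {
        world.stepSimulation(1.0f / 60.0f);
        btTransform t;
        body.getMotionState()->getWorldTransform(t);
        std::printf("t=%.2fs y=%.3f\n", i / 60.0f, t.getOrigin().getY());
    }
    world.removeRigidBody(&body);
    return 0;
}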
>>1638
>It'd be useful if we could combine a physics engine with a PCB and electronics simulator.
Can you clarify that anon? Is there a particular benefit you're going for by combining these two in the context of a Robowaifu Simulator? Also, I assume PCB == 'printed circuit board' here?

As far as an engine, ROS has Gazebo, which iirc is a simulation system for robot designs.
>>268
>>541
DeepMimic
>Abstract: A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals. Our method handles keyframed motions, highly-dynamic actions such as motion-captured flips and spins, and retargeted motions. By combining a motion-imitation objective with a task objective, we can train characters that react intelligently in interactive settings, e.g., by walking in a desired direction or throwing a ball at a user-specified target. This approach thus combines the convenience and motion quality of using motion clips to define the desired style and appearance, with the flexibility and generality afforded by RL methods and physics-based animation. We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills. We demonstrate results using multiple characters (human, Atlas robot, bipedal dinosaur, dragon) and a large variety of skills, including locomotion, acrobatics, and martial arts.

https://xbpeng.github.io/projects/DeepMimic/index.html
https://github.com/xbpeng/DeepMimic

https://www.invidio.us/watch?v=2_CO82KObQY
OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
>Abstract: Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints.
>Index Terms—2D human pose estimation, 2D foot keypoint estimation, real-time, multiple person, part affinity fields.

https://github.com/CMU-Perceptual-Computing-Lab/openpose

https://www.invidio.us/watch?v=pW6nZXeWlGM
>>1643
Yeah, I mean like being able to simulate damaged components and electronics. Falls, vibrations, and mishaps could easily cause damage and undefined behavior within a system in real-world scenarios. It's probably beyond what any of us could create at the moment, but maybe one day, when there are lots of people working on robots, somebody will make open-source software for such simulations.
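>ed. one cheap way to approximate that today is fault injection: randomly degrade a component mid-run and see how the control software copes. A toy sketch; the actuator interface here is made up:

#include <random>

// Toy fault-injection wrapper: with some probability per step, an
// actuator "fails" and outputs zero torque from then on.
class FaultyActuator {
    bool failed_ = false;
    std::mt19937 rng_{std::random_device{}()};
    std::bernoulli_distribution fail_chance_{1e-5};  // per-step failure odds
public:
    // Returns the torque actually delivered for a commanded torque.
    float apply(float commanded) {
        if (!failed_ && fail_chance_(rng_)) failed_ = true;
        return failed_ ? 0.0f : commanded;  // a dead motor delivers nothing
    }
};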

>>1668
Great find, anon. This is what I'm looking for. Thanks!

Once I get my robowaifu model finished I'll see if I can get a generic version rigged up in this and let everyone experiment with it.
>>1672
>Once I get my robowaifu model finished I'll see if I can get a generic version rigged up in this and let everyone experiment with it.
nprb anon. i'm rebuilding a box from scratch using manjaro to see if i can get it running myself.
>>1672
Just leaving a note here for the future. Once we get some test data on artificial muscles (>>1692) we can emulate them inside a simulation too.
>>1693
>Just leaving a note here for the future.
Heh that's what I've been doing here since the beginning anon
Open file (29.91 KB 641x521 progress.png)
>>1694
It's a bit like working on a spaceship but we're still drafting the parts.
Open file (67.02 KB 960x720 hideki_chii.jpeg)
Open file (49.81 KB 617x413 Selection_010.png)
Alright, I decided to try my hand at crafting my own simulator in OpenGL using GLFW, since I can't seem to get the MATLAB files working under Octave. Wish me luck anons.
https://gitlab.com/Chobitsu/muh-robowaifu-simulator
>>1814
Good luck, anon. For great justice.
AR waifu simulations when?
>>1818
Thanks anon. I will take off every zig.
>>1818
>AR waifu simulations when?
That adds a lot of additional complexity to a simulator, but it's an admirable goal tbh. I imagine you could start off small:
-with a library of primitively-modeled everyday items like chairs, tables, couches, etc.
-train the vrwaifu to interact with them properly
-then move to an AR overlay system using something like OpenCV + OpenPose to keep track of the people and things in the video for her to engage with.

Or something like that heh. :^)
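>ed. the capture side of that AR pipeline could start as simple as this OpenCV loop; the pose-estimation/compositing step is left as a hypothetical comment:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);            // default webcam
    if (!cap.isOpened()) return -1;
    cv::Mat frame;
    while (cap.read(frame)) {
        // Here you'd run pose estimation (e.g., OpenPose) on `frame`
        // and composite the vrwaifu into the scene before display.
        cv::imshow("ar-waifu", frame);
        if (cv::waitKey(1) == 27) break;  // ESC to quit
    }
    return 0;
}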
Added FPS rendering to the MRS
Added to MRS
- a training pavilion gym floor
- multiple lights (27 atm)
- camera flying controls w/ arrow keys
- the m key toggles mouse capture on and off for camera tumbling.

Slow going, having to learn all these details to get everything working. I probably couldn't even do this without the examples from learnopengl.com. Anyway, I'm making slow progress and I'll keep trying to improve it.
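>ed. for anons following along, the m-key toggle and arrow-key flying could be wired up with GLFW roughly like this (the camera object is hypothetical); register it with glfwSetKeyCallback(win, key_callback):

#include <GLFW/glfw3.h>

bool mouse_captured = true;

void key_callback(GLFWwindow* win, int key, int scancode, int action, int mods) {
    if (key == GLFW_KEY_M && action == GLFW_PRESS) {
        mouse_captured = !mouse_captured;
        glfwSetInputMode(win, GLFW_CURSOR,
                         mouse_captured ? GLFW_CURSOR_DISABLED
                                        : GLFW_CURSOR_NORMAL);
    }
}

// Per-frame, polled movement (dt = seconds since the last frame):
// if (glfwGetKey(win, GLFW_KEY_UP)    == GLFW_PRESS) camera.forward(dt);
// if (glfwGetKey(win, GLFW_KEY_DOWN)  == GLFW_PRESS) camera.back(dt);
// if (glfwGetKey(win, GLFW_KEY_LEFT)  == GLFW_PRESS) camera.left(dt);
// if (glfwGetKey(win, GLFW_KEY_RIGHT) == GLFW_PRESS) camera.right(dt);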
>>1859
a recent screen-cap
Open file (160.26 KB 1100x1015 juCi++_022.png)
-Added a 'toybox' that slowly rotates. >1 related
-Expanded the area of the floor nine-fold. I'd guess it's roughly the area of a soccer field now, though square, not rectangular. (not shown in cap)
-Did a refactoring of the code pretty much everywhere to help manage the complexity of dealing with direct OpenGL calls. This approach should be a big help in the future, I hope. Here's an example of the game loop atm (though it will be more complicated later ofc). >2 related
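>ed. for anons without the screencap, the skeleton of a GLFW game loop like the one described looks roughly like this; the update/render calls are placeholders:

#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return -1;
    GLFWwindow* win = glfwCreateWindow(1280, 720, "MRS", nullptr, nullptr);
    if (!win) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(win);

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(win)) {
        double now = glfwGetTime();
        float dt = static_cast<float>(now - last);  // frame delta, seconds
        last = now;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // update_scene(dt);   // placeholder: rotate the toybox, etc.
        // render_scene();     // placeholder: draw floor, lights, toybox

        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}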
>>1861
You might want to switch over to SDL2 once you start expanding your application, as SDL handles many more things, such as sound, keyboard input, events, etc. The switch from GLFW to SDL is straightforward and I have done it myself. The function calls are pretty much named the same; just look up "sdl with opengl" on DuckDuckGo.
I added the beginnings of a help system, and added a simulation pause feature. I plan to add children's toys for the robowaifu AI to learn to recognize and play with.
>>1862
Thanks for the advice anon, I'll check into it.
Been studying OpenGL hard and learning. Also added a couple of changes.
A) I got things worked out so I can load .obj files from, say, Blender now. I've added an 'orbiting moon' that goes around the training pavilion every ~2 1/2 minutes.
B) I've been learning about texturing and set up the toybox to cycle textures/vert colors, just as something colorful. I anticipate that the objects in the gym will give the AI a chance for object recognition and interaction.
>pics related
Anyway, made a new push. Have a happy new year /robowaifu/.
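>ed. a bare-bones Wavefront .obj loader, vertex positions and triangle faces only (no normals/UVs/materials). A sketch, not the loader actually in the MRS repo:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Mesh {
    std::vector<float> positions;   // x,y,z per vertex
    std::vector<unsigned> indices;  // 0-based triangle indices
};

bool load_obj(const std::string& path, Mesh& out) {
    std::ifstream in(path);
    if (!in) return false;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {                   // vertex position line
            float x, y, z;
            ss >> x >> y >> z;
            out.positions.insert(out.positions.end(), {x, y, z});
        } else if (tag == "f") {            // face line (assume triangles)
            for (int i = 0; i < 3; ++i) {
                std::string v;
                ss >> v;
                // "f 1/2/3 ..." -> stoul parses the index before the '/'
                out.indices.push_back(std::stoul(v) - 1);  // .obj is 1-based
            }
        }
    }
    return true;
}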
Added a wireframe mode and did a lot of cleanups and refactoring preparing for expansion and adding multiple windows and a rigged character.
>>1893
You probably have to enable and disable wireframe between rendering the UI elements and rendering the actual scene.
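>ed. concretely, that advice looks something like this; draw_scene()/draw_ui() stand in for the MRS render passes:

#include <glad/glad.h>  // or whatever GL loader the project uses

// Placeholders for the MRS render passes.
void draw_scene();
void draw_ui();

void render_frame(bool wireframe) {
    // Wireframe applies to the scene pass only.
    glPolygonMode(GL_FRONT_AND_BACK, wireframe ? GL_LINE : GL_FILL);
    draw_scene();
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  // UI is always drawn solid
    draw_ui();
}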
I've added a fair number of code updates, and added the ability to fly the camera up and down. I'm currently learning about vector & matrix math (linear algebra?) to be able to build a skeletal system of joints to do animations with. I want us to be able to do the work ourselves as we move forward, so I'm trying to learn to do it by hand instead of relying on some kind of 3rd-party game engine.
>pics related
>>1894
Thanks for the tip anon.
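>ed. the payoff of that matrix math is forward kinematics: composing per-joint transforms down a chain. A sketch using GLM (the math library learnopengl.com uses); the names are illustrative:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vector>

struct Joint {
    float angle;      // current rotation, radians
    glm::vec3 axis;   // rotation axis in the parent's frame
    glm::vec3 offset; // bone vector to the next joint
};

// World-space position of the end of the chain (e.g., a hand).
glm::vec3 end_effector(const std::vector<Joint>& chain) {
    glm::mat4 m(1.0f);
    for (const Joint& j : chain) {
        m = glm::rotate(m, j.angle, j.axis);  // spin about the joint
        m = glm::translate(m, j.offset);      // walk down the bone
    }
    return glm::vec3(m * glm::vec4(0, 0, 0, 1));  // chain tip in world space
}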
