/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality

JulayWorld fallback document - SAVE LOCALLY


Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155
What would be a good RW simulator? I guess I'd like to start with some type of PCG (procedural content generation) solution that builds environments first, then build from there up to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with little modification. Sort of like the OpenAI Gym concept, but for waifubots.
https://gym.openai.com/
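To make the Gym analogy concrete: the core of that concept is just a reset()/step() interface between the agent and the world. Here's a rough Python sketch of what a waifubot environment could look like; everything in it (the class name, the toy 1-D "walk to a target" state) is hypothetical, not from any existing library.

```python
# Hypothetical Gym-style environment sketch for a robowaifu simulator.
# The reset()/step() convention mirrors OpenAI Gym; the 1-D state is a
# stand-in for real simulator state.

class WaifuEnv:
    def __init__(self):
        self.position = 0.0   # toy 1-D state: how far the waifu has walked
        self.target = 5.0     # where we want her to end up

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.position = 0.0
        return self.position

    def step(self, action):
        """Apply an action; return (observation, reward, done, info)."""
        self.position += action
        reward = -abs(self.target - self.position)   # closer is better
        done = abs(self.target - self.position) < 0.1
        return self.position, reward, done, {}


# The usual Gym-style control loop
env = WaifuEnv()
obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(0.5)   # fixed toy policy
    if done:
        break
```

The point is the interface, not the physics: any robowaifu simulator that exposes reset()/step() like this can be trained against with standard RL code.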
That is pretty effin cool anon thanks.
>and swarm the world with Waifu-1000 terminators
Would.. would this really be a bad thing anon?
I thought that this might be relevant to this board.
It basically means that if you gave this A.I. a body, it'd learn to walk awkwardly on its own. If it were smart enough, we could program our waifu just by giving feedback like 'don't swing your arms too much' or 'don't run as fast as possible'.


>ed. probably more on-topic in the bipedal thread anon >>237
Yes, that's very relevant anon, and thanks for taking the time to check the catalog and put the link into the right thread. If this is an area of interest to you, then you might check out the OP in the bipedal bread as well. There is some simulation-based research that has made some progress with automated bipedal kinematics training.
Isn't that thread more focused on the mechanical side of legs and bipedal motion? I'm kind of more into the teaching side of A.I. What's the best thread for posting this sort of suggestion?
>focused on the mechanical side of legs and bipedal motion?
That's probably a legitimate observation. From my perspective the two (AI & Mechanics) are inextricably intertwined [for robowaifus]. The OP has images and links to related research.

>what's the best thread for posting these sort of suggestions?
There are about 6 or 7 AI-specific breads on the board at last count. >>4658

This bread is good if the goal is a 'virtual training gym'. Using any of the others or creating your own if none of them are specific enough is fine.
Does anyone know a good open-source game engine we could use for building a robowaifu simulation? The ones I've found so far depend on their own scripting languages, which makes them garbage for machine learning. It'd be useful if we could combine a physics engine with a PCB and electronics simulator. Here are some physics engines we could use:

C++ https://github.com/bulletphysics/bullet3
Python https://pybullet.org/wordpress/
>Rigid body and soft body simulation with discrete and continuous collision detection
>Collision shapes include: sphere, box, cylinder, cone, convex hull using GJK, non-convex and triangle mesh
>Soft body support: cloth, rope and deformable objects
>A rich set of rigid body and soft body constraints with constraint limits and motors
>Plugins for Maya, Softimage, integrated into Houdini, Cinema 4D, LightWave 3D, Blender and Godot and import of COLLADA 1.4 physics content
This one is designed and used for robotics. Some robotics simulations and papers using PyBullet:

TossingBot: Learning to Throw Arbitrary Objects with Residual Physics

Hierarchical Policy Design for Sample-Efficient Learning of Robot Table Tennis Through Self-Play
Website: https://www.cs.utexas.edu/~reza/
Paper: https://arxiv.org/pdf/1811.12927.pdf

Godot 3.0 is open-source and uses Bullet for physics, but its scripting language is even slower than Python. There's a Godot fork with Tensorflow 2.0 support: https://github.com/godot-extended-libraries/godot-tensorflow-workspace

PhysX
Website: https://developer.nvidia.com/gameworks-physx-overview
Github: https://github.com/NVIDIAGameWorks/PhysX
>It supports rigid body dynamics, soft body dynamics (like cloth simulation, including tearing and pressurized cloth), ragdolls and character controllers, vehicle dynamics, particles and volumetric fluid simulation.
>It'd be useful if we could combine a physics engine with a PCB and electronics simulator.
Can you clarify that anon? Is there a particular benefit you're going for by combining these two in the context of a Robowaifu Simulator? Also, I assume PCB == 'printed circuit board' here?

As far as an engine, ROS has Gazebo, which iirc is a simulation system for robot designs.
DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills
>Abstract: A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals. Our method handles keyframed motions, highly-dynamic actions such as motion-captured flips and spins, and retargeted motions. By combining a motion-imitation objective with a task objective, we can train characters that react intelligently in interactive settings, e.g., by walking in a desired direction or throwing a ball at a user-specified target. This approach thus combines the convenience and motion quality of using motion clips to define the desired style and appearance, with the flexibility and generality afforded by RL methods and physics-based animation. We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills. We demonstrate results using multiple characters (human, Atlas robot, bipedal dinosaur, dragon) and a large variety of skills, including locomotion, acrobatics, and martial arts.


OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
>Abstract: Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints.
>Index Terms—2D human pose estimation, 2D foot keypoint estimation, real-time, multiple person, part affinity fields.


Yeah, I mean like being able to simulate damaged components and electronics. Falls, vibrations, and mishaps could easily cause damage and undefined behavior within a system in real-world scenarios. It's probably beyond what any of us could create at the moment, but maybe one day, when there are lots of people working on robots, somebody will make open-source software for such simulations.
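For what it's worth, a crude version of that damage simulation doesn't need special software; you can fold a fault model into each simulated component. A hypothetical Python sketch of an actuator that can jitter or die outright (the 1% failure chance and +/-5% noise are made-up numbers for illustration):

```python
import random

# Hypothetical fault-injection sketch for one simulated actuator. The
# fault model (1% dead-failure chance per command, +/-5% output jitter)
# is invented for illustration, not taken from any real robot.

class SimActuator:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)   # seeded for reproducible runs
        self.failed = False

    def command(self, torque):
        """Return the torque actually applied, given possible faults."""
        if self.failed:
            return 0.0                    # a dead actuator outputs nothing
        if self.rng.random() < 0.01:      # 1% chance of permanent failure
            self.failed = True
            return 0.0
        noise = self.rng.uniform(-0.05, 0.05)
        return torque * (1.0 + noise)     # healthy output with jitter
```

Control code trained against components like this has to learn to notice and compensate for undefined behavior, which is the whole point of simulating damage.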

Great find, anon. This is what I'm looking for. Thanks!

Once I get my robowaifu model finished I'll see if I can get a generic version rigged up in this and let everyone experiment with it.
>Once I get my robowaifu model finished I'll see if I can get a generic version rigged up in this and let everyone experiment with it.
nprb anon. i'm rebuilding a box from scratch using manjaro to see if i can get it running myself.
Just leaving a note here for the future. Once we get some test data on artificial muscles (>>1692) we can emulate them inside a simulation too.
>Just leaving a note here for the future.
Heh that's what I've been doing here since the beginning anon
It's a bit like working on a spaceship but we're still drafting the parts.
Alright, I decided to try my hand at crafting my own simulator in OpenGL using GLFW, since I can't seem to get the MATLAB files working under Octave. Wish me luck anons.
Good luck, anon. For great justice.
AR waifu simulations when?
Thanks anon. I will take off every zig.
>AR waifu simulations when?
That adds a lot of additional complexity to a simulator, but it's an admirable goal tbh. I imagine you could start off small:
- with a library of primitively-done everyday items like chairs, tables, couches, etc.
- train the vrwaifu to interact with them properly
- then move to an AR overlay system using something like OpenCV + OpenPose to keep track of the people and things in the video for her to engage with.

Or something like that heh. :^)
Added FPS rendering to the MRS
Added to MRS
- a training pavilion gym floor
- multiple lights (27 atm)
- camera flying controls w/ arrow keys
- m key will toggle the mouse capture for camera tumbling or not.

Slow going, having to learn all these details to get everything working. I probably couldn't even do this without the examples from learnopengl.com. Anyway, making slow progress; I'll keep trying to improve it.
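For anyone curious, the fly-camera controls above come down to a bit of vector math: build forward/right vectors from the camera's yaw and nudge the position along them. A rough Python sketch of just that idea (the names and the yaw-0-faces-+x convention are made up; the actual MRS code is C++/GLFW):

```python
import math

# Hypothetical sketch of fly-camera key handling: derive forward/right
# vectors from the yaw angle and move the position along them.
# Convention here: yaw 0 means the camera faces +x, and y is up.

def move_camera(pos, yaw_deg, key, speed=0.1):
    """Return a new (x, y, z) position after one arrow-key press."""
    yaw = math.radians(yaw_deg)
    forward = (math.cos(yaw), 0.0, math.sin(yaw))
    right = (-forward[2], 0.0, forward[0])   # forward rotated 90 degrees
    if key == "up":
        d = forward
    elif key == "down":
        d = (-forward[0], 0.0, -forward[2])
    elif key == "left":
        d = (-right[0], 0.0, -right[2])
    elif key == "right":
        d = right
    else:
        d = (0.0, 0.0, 0.0)
    return (pos[0] + d[0] * speed, pos[1], pos[2] + d[2] * speed)
```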
a recent screen-capp
- Added a 'toybox' that slowly rotates. >1 related
- Expanded the area of the floor nine-fold. I'd guess it's roughly the area of a soccer field now, though square, not rectangular. (not shown in capp)
- Did a refactoring of the code pretty much everywhere to help manage the complexity of dealing with direct OpenGL calls. This approach should be a big help in the future, I hope.
Here's an example of the game loop atm (though it will be more complicated later ofc). >2 related
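As an aside, the usual shape of such a game loop is a fixed-timestep update with rendering in between, so the physics stays deterministic regardless of frame rate. A hypothetical Python sketch (update, render, and get_time are placeholder callbacks, not the real MRS API; get_time is injected so the loop can be driven by a fake clock):

```python
# Hypothetical fixed-timestep game-loop sketch. A slow frame just runs
# update() more times to catch up, so simulation results don't depend
# on rendering speed.

def run_loop(update, render, get_time, dt=1.0 / 60.0, max_time=1.0):
    """Step the simulation in fixed dt increments until max_time."""
    t = 0.0
    accumulator = 0.0
    last = get_time()
    while t < max_time:
        now = get_time()
        accumulator += now - last     # real time elapsed since last frame
        last = now
        while accumulator >= dt:      # catch up in fixed physics steps
            update(dt)
            accumulator -= dt
            t += dt
        render()                      # draw once per frame
```

The accumulator is what decouples physics from frame rate, which matters if the same control code is ever meant to run on a real robowaifu.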
>>1861 You might want to switch over to SDL2 once you start expanding on your application, as SDL handles many more things such as sound, keyboard input, events etc. The switch from glfw to SDL is straightforward and I have done it myself. The function calls are pretty much named the same, just look up "sdl with opengl" on DuckDuckGo.
I added the beginnings of a help system, and added a simulation pause feature. I plan to add children's toys for the robowaifu AI to learn to recognize and play with.
>>1862 Thanks for the advice anon, I'll check into it.
Been studying OpenGL hard and learning. Also added a couple of changes:
A) I got things worked out so I can load .obj files from, say, Blender now. I've added an 'orbiting moon' that goes around the training pavilion every ~2 1/2 minutes.
B) I've been learning about texturing and set up the toybox to cycle textures/vert colors as just something colorful. I anticipate that the objects in the gym will give the AI a chance for object recognition and interaction.
>pics related
Anyway, made a new push. Have a happy new year /robowaifu/.
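For anyone wanting to try the same, a minimal .obj loader only needs the 'v' (vertex) and 'f' (face) records; real Blender exports also carry normals, UVs and materials, which this hypothetical Python sketch ignores (the actual loader here is C++):

```python
# Minimal Wavefront .obj parser sketch covering only 'v' (vertex) and
# 'f' (face) records. Face indices in .obj are 1-based and may appear
# as "v", "v/vt" or "v/vt/vn"; only the vertex index is kept here.

def parse_obj(text):
    """Return (vertices, faces) parsed from .obj source text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue                  # skip blanks and comments
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # convert 1-based "v/vt/vn" references to 0-based vertex indices
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces
```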
Added a wireframe mode and did a lot of cleanups and refactoring preparing for expansion and adding multiple windows and a rigged character.
>>1893 You probably have to enable and disable wireframe between rendering the UI elements and rendering the actual scene.
I've added a fair amount of code updates, and added the ability to fly the camera up and down. I'm currently learning vector & matrix math (linear algebra) to be able to build a skeletal system of joints to do animations with. I want us to be able to do the work ourselves as we move forward, so I'm trying to learn to do it by hand instead of relying on some kind of 3rd-party game engine.
>pics related
>>1894 Thanks for the tip anon.
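For other anons studying the same material: the core of a skeletal system is composing one rotate-then-translate matrix per joint down the chain (forward kinematics). A hypothetical 2-D Python sketch using 3x3 homogeneous matrices; the 3-D case is the same idea with 4x4s:

```python
import math

# Hypothetical 2-D forward-kinematics sketch: each joint rotates by its
# angle and then advances along the bone, and the chain pose is the
# product of these 3x3 homogeneous matrices.

def joint_matrix(angle, length):
    """Transform for one joint: rotate by angle, then advance by bone length."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, length * c],
            [s,  c, length * s],
            [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def end_effector(angles, lengths):
    """World (x, y) of the chain tip for the given joint angles."""
    m = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for angle, length in zip(angles, lengths):
        m = mat_mul(m, joint_matrix(angle, length))
    return (m[0][2], m[1][2])
```

For example, a two-bone arm with both joints at 0 reaches (2, 0), and bending the first joint 90 degrees swings the whole arm up to (0, 2).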
>>1899 Good shit. Can I ask though, why you decided to write basic graphics routines and raw OpenGL yourself instead of using an engine or framework? I know all the open-source engines out there are huge balls of cruft, but minimal ones surely exist.
>>2015 thanks. i've kind of gotten sidetracked & stalled trying to wrap my head around linear algebra to build a skeletal animation system for us. i think i've gotten a reasonable approach worked out now, it's just a matter of picking the project back up.
>why you decided to write basic graphics routines and raw OpenGL yourself
because A) i'm using a small, under-powered box (an atom processor + integrated graphics) which isn't much more powerful than, say, a Pi4. B) i want this project to be able to literally run on a toaster. C) from a personal perspective, i like the challenge of doing it myself, because i know that will only strengthen me as a developer and as a man, and therefore better equip me to help everyone here through that personal improvement and increased knowledge. D) i want this to be an actual simulator, ie, the code we write here should be transferable nearly intact onto a real irl prototype robowaifu--at least that's the plan. keeping the code lean and mean (to the degree i know how) should help facilitate that transition. i think that covers most of the reasons anon.
>but minimal ones surely exist.
perhaps so, but i'm not aware of them tbh.
>>2018 Fair enough. By the way, were you around for the failed/stalled waifu simulator from a few years back? We tried to do a similar thing using Urho3D. The idea was to build a small, enclosed environment, in our case the interior of a small house, and work on navigation within the space and interaction with common objects and the user. Also kicked around speech recognition and synthesis, and using SmartBody for procedural animations.
>but minimal ones surely exist.
>perhaps so, but i'm not aware of them tbh.
Damn. I thought you might have come across some before deciding to start from scratch. Personally, I had thought that raylib would fit the bill for my graphical projects, but it quickly turned into segfault city when I tried to build larger applications on top of the examples.
>>2018
>raylib
that's probably the better choice, but frankly i suck at wrangling pointers in C. i'll literally go out of my way to write library code to encapsulate them if the proper use and side-effects aren't blatantly obvious, so i'm sure i'd crash and burn trying to use raylib effectively. i'll leave that to the graybeards who really know their shit on the frontier between hardware/C/C++. For example, I find this code for compiling a GLSL shader from scratch much easier to follow than almost all of the examples I've seen: https://gitlab.com/Chobitsu/muh-robowaifu-simulator/-/blob/master/src/muh_Shader.hpp
The previous project you mention doesn't ring a bell for me. Dang, wish I'd known about it. /robowaifu/ began ~3yrs ago. I was aware of the Tay debacle and /machinecult/, but that's about it as far as i recall. I don't suppose there are any archives of the project you mentioned anon?
>>2018 btw, i hadn't seen those so i looked them up. here's the links for anyone else who might be interested.
urho3d: https://urho3d.github.io/
smartbody: https://smartbody.ict.usc.edu/
>physically-based rendering book online http://www.pbr-book.org/3ed-2018/contents.html >immersive linear algebra book online (matrix ch) http://immersivemath.com/ila/ch06_matrices/ch06.html
>>2046 Why would you want to use ray tracing for rendering? That is very computationally expensive, and it would give you less room to have a more elaborate waifu A.I. Or is there any recent development that gives silky smooth 60 FPS now?
>>4072 It's a fair point. But the long-term goal, at least for the Visual Waifu sub-group's interests, is highly-immersive VR. Yes, there have been notable advances in GPU performance for (real or approximate) ray-tracing (though not photographically 'real' yet). Given advances in multicore CPUs, hybrid APUs, and ofc GPUs, the notion of adding raytracing to a sim isn't too difficult to envision. Along with concurrency and parallelism advances in the base C++ language, I'd estimate it will be quite feasible by the end of 2023.
>>2046 Well fuck that means it's gg no re for my toaster machine.
>>4076 Yea, it's really only for strong hardware. We have to think of both the future and the past here tbh.
