/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



Open file (485.35 KB 1053x1400 0705060114258_01_Joybot.jpg)
Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155
What would be a good RW simulator? I guess I'd like to start with some type of PCG solution that just builds environments, and build up from there to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors, but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL Robowaifu mechatronic systems with few modifications. Sort of like the OpenAI Gym concept, but for waifubots.
https://gym.openai.com/
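To make the idea concrete, here's a rough Python sketch of the kind of Gym-style interface I have in mind (all the names here are made up for illustration, it's not an existing package):

# Rough sketch of a Gym-style interface for a robowaifu sim core.
# Hypothetical names only -- the point is that the sim exposes step/reset
# like OpenAI Gym, so the same control code could later drive real hardware.
import numpy as np

class WaifuSimEnv:
    def __init__(self, n_joints=12):
        self.n_joints = n_joints
        self.joint_angles = np.zeros(n_joints)

    def reset(self):
        # Re-generate the PCG environment and zero the robot state.
        self.joint_angles = np.zeros(self.n_joints)
        return self._observe()

    def step(self, action):
        # 'action' = target joint commands; a real version would run
        # kinematics + collision checks here instead of this toy integration.
        self.joint_angles += np.clip(action, -0.1, 0.1)
        obs = self._observe()
        reward = -float(np.abs(self.joint_angles).sum())  # placeholder objective
        done = False
        return obs, reward, done, {}

    def _observe(self):
        return self.joint_angles.copy()

env = WaifuSimEnv()
obs = env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(np.random.uniform(-1, 1, env.n_joints))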
>>8486 Wow, thanks Anon! That's pretty inspiring actually. I'll make time to dig through this guy's work and see if I can figure out what he's doing. Also, thank you for using nitter instead.
>>8491 You're welcome. Teardown seems to be the name of the game now, but there's more:
>voxel engine / plugin
The guy reporting on it has more videos: https://youtu.be/Jnghq0yTmZo
>"Physics" engine / plugin
Primarily liquids, not sure how realistic: https://youtu.be/DBcqiQLp7lY
What we would need is rather a household with people and all kinds of things, where the behavior depends on the material, temperature, and so on. His vids are aimed more towards gaming and fun or art. At the end of the physics video he points out that more is possible.
>>8501 I like the idea of creating a robowaifu simulator using voxels. I assume they're a pretty efficient way to create interactive physics, just from looking at the gameplay in that vidya.
https://www.raylib.com/ might be interesting for simulators or virtual waifus. It's for gaming and works basically everywhere. It's very modular, and the modules can be used on their own.
>>8925 Thanks for the reminder Anon. Some other anons mentioned it here before, but I'd gotten so busy I'd forgotten about it. You've reminded me about it now. Here's a cleaned-up version of his animation example code; looks pretty simple to use:

/*******************************************************************************
*
*   raylib [models] example - Load 3d model with animations and play them
*
*   This example has been created using raylib 2.5 (www.raylib.com)
*   raylib is licensed under an unmodified zlib/libpng license (View raylib.h for details)
*
*   Example contributed by Culacant (@culacant) and reviewed by Ramon Santamaria (@raysan5)
*
*   Copyright (c) 2019 Culacant (@culacant) and Ramon Santamaria (@raysan5)
*
********************************************************************************
*
*   To export a model from blender, make sure it is not posed, the vertices need
*   to be in the same position as they would be in edit mode, and that the scale
*   of your models is set to 0. Scaling can be done from the export menu.
*
*******************************************************************************/

#include "raylib.h"
#include <stdlib.h>

int main(void)
{
    // Initialization
    //--------------------------------------------------------------------------
    const int screenWidth = 800;
    const int screenHeight = 450;

    InitWindow(screenWidth, screenHeight, "raylib [models] example - model animation");

    // Define the camera to look into our 3d world
    Camera camera = { 0 };
    camera.position = (Vector3){ 10.0f, 10.0f, 10.0f };    // Camera position
    camera.target = (Vector3){ 0.0f, 0.0f, 0.0f };         // Camera looking at point
    camera.up = (Vector3){ 0.0f, 1.0f, 0.0f };             // Camera up vector (rotation towards target)
    camera.fovy = 45.0f;                                   // Camera field-of-view Y
    camera.type = CAMERA_PERSPECTIVE;                      // Camera mode type

    // Load the animated model mesh and basic data
    Model model = LoadModel("resources/guy/guy.iqm");

    // Load model texture and set material
    Texture2D texture = LoadTexture("resources/guy/guytex.png");

    // Set model material map texture
    SetMaterialTexture(&model.materials[0], MAP_DIFFUSE, texture);

    Vector3 position = { 0.0f, 0.0f, 0.0f };               // Set model position

    // Load animation data
    int animsCount = 0;
    ModelAnimation *anims = LoadModelAnimations("resources/guy/guyanim.iqm", &animsCount);
    int animFrameCounter = 0;

    SetCameraMode(camera, CAMERA_FREE);                    // Set free camera mode

    SetTargetFPS(60);                                      // Set our game to run at 60 frames-per-second
    //--------------------------------------------------------------------------

    // Main game loop
    while (!WindowShouldClose())                           // Detect window close button or ESC key
    {
        // Update
        //----------------------------------------------------------------------
        UpdateCamera(&camera);

        // Play animation when spacebar is held down
        if (IsKeyDown(KEY_SPACE))
        {
            animFrameCounter++;
            UpdateModelAnimation(model, anims[0], animFrameCounter);
            if (animFrameCounter >= anims[0].frameCount) animFrameCounter = 0;
        }
        //----------------------------------------------------------------------

        // Draw
        //----------------------------------------------------------------------
        BeginDrawing();

            ClearBackground(RAYWHITE);

            BeginMode3D(camera);

                DrawModelEx(model, position, (Vector3){ 1.0f, 0.0f, 0.0f }, -90.0f, (Vector3){ 1.0f, 1.0f, 1.0f }, WHITE);

                for (int i = 0; i < model.boneCount; i++)
                {
                    DrawCube(anims[0].framePoses[animFrameCounter][i].translation, 0.2f, 0.2f, 0.2f, RED);
                }

                DrawGrid(10, 1.0f);                        // Draw a grid

            EndMode3D();

            DrawText("PRESS SPACE to PLAY MODEL ANIMATION", 10, 10, 20, MAROON);
            DrawText("foo_bar_baz", screenWidth - 200, screenHeight - 20, 10, GRAY);

        EndDrawing();
        //----------------------------------------------------------------------
    }

    // De-Initialization
    //--------------------------------------------------------------------------
    UnloadTexture(texture);                                // Unload texture

    // Unload model animations data
    for (int i = 0; i < animsCount; i++) UnloadModelAnimation(anims[i]);
    RL_FREE(anims);

    UnloadModel(model);                                    // Unload model

    CloseWindow();                                         // Close window and OpenGL context
    //--------------------------------------------------------------------------

    return 0;
}

https://www.raylib.com/examples.html
>>8926 Ah, didn't look for it. Here: >>5810 >>2018
I am currently building a framework for 2D simulations, similar to Dwarf Fortress. The goal is to have a simulation that is as realistic as possible while using as little compute as reasonably possible. I want to train multi-agents with evolutionary model optimization in the simulation, to test a few ideas on how to solve artificial general intelligence without having to bother with RobotVision and ArtificialMuscleMovement. The main goal is to solve general AI, but I also want to try and solve NaturalLanguage using a new (?) approach. Going to be open source one day. (It's not my day-job to code this, so progress is kinda slow. Anyone interested in it?)
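To give a rough idea of what I mean by evolutionary model optimization, here's a toy sketch in Python (all names invented; the real framework is 2D/grid-based and far richer than this):

import random

# Toy evolutionary loop over agent parameters: score, keep the best, mutate.
def fitness(genome):
    # Placeholder objective: reward genomes whose parameters sum close to a target.
    return -abs(sum(genome) - 10.0)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.uniform(0, 1) for _ in range(8)] for _ in range(50)]

for generation in range(100):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: len(scored) // 5]          # keep the top 20%
    population = [mutate(random.choice(survivors))  # refill with mutated copies
                  for _ in range(len(scored))]

best = max(population, key=fitness)
print(generation, fitness(best))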
>>9032 >Anyone interested in it? Sure ofc, Anon. We're always interested in that kind of thing here. Give us more details please.
>>9032 When I first got started with neural networks twenty years ago, I made a simple evolution simulation in Game Maker with prey and predators. I've always wondered whether, if they'd been given the ability to make different sounds and hear each other walking, they would've eventually formed their own crude language. I think that would be a fun project. I also want to solve general AI, and I feel that AI needs a more complex environment to really learn and grow. Most AI research focuses on maxing something out, but in real life there's always something more to learn.
>>9034
>When I first got started with neural networks twenty years ago
Wow, that's pretty remarkable; I had no idea anyone here had that kind of experience. It would be really nice if you conducted a beginner's class on these topics, if you're up for something like that, Anon. Cheers.
>>9045 Nah, I just played around with them for a bit as a kid after watching Chobits. I didn't really get into AI until around 2015. I could probably still teach things but there's lots of material for that on the web and I'd rather work on my own stuff.
>>9047 Oh haha I see. Well, it's still good to know you're here. Good luck with your 'stuff' Anon. I too found Chobits inspirational when I was young.
Open file (281.77 KB 1441x768 physx.jpg)
Has anyone tried PhysX? The most interesting feature for me is the GPU acceleration, because PyBullet isn't really ideal for training thousands of iterations simultaneously. https://www.youtube.com/watch?v=K1rotbzekf0 https://github.com/petrikvladimir/pyphysx I'd like to start moving on from just learning about AI to actually applying it to some real problems that can be transferred to the real world. My goal is to start assembling my own robowaifu by the end of the year that can do some basic task, like locating my hand and giving me a high five or something.
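The parallelism is the part I care about; whatever the engine, the training side would look roughly like this (a generic Python sketch using multiprocessing, not pyphysx-specific -- the rollout/policy/reward bits are placeholders):

from multiprocessing import Pool
import random

# Generic sketch of running many simulation rollouts in parallel.
# Engine-agnostic placeholder; with PhysX the inner loop would be backed by
# the GPU-accelerated scene instead of this toy arithmetic.
def rollout(seed, steps=200):
    random.seed(seed)
    total_reward = 0.0
    state = 0.0
    for _ in range(steps):
        action = random.uniform(-1, 1)   # stand-in for a policy
        state += action                  # stand-in for a physics step
        total_reward += -abs(state)      # stand-in for a reward
    return total_reward

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        returns = pool.map(rollout, range(1000))   # thousands of iterations
    print(sum(returns) / len(returns))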
>>9708 Yes, I've played with it a bit (cloth sims & rigid body collisions mostly). Nvidia is plainly an evil Big Tech/Gov entity like the others, but they have made a remarkable push to make their GPU ecosystem easy to use if you at least have the basics of C programming under your belt.
While I wait for stuff to train, I started working on a side-project for playing video games with hindsight experience replay, and became fascinated with the idea of repurposing the model afterwards for a piano simulation, similar to how they trained a robot hand to solve a Rubik's cube in simulation and then did transfer learning to the real world. A ton of progress has been made in making computer vision and reinforcement learning more efficient. Just two years ago it was probably too early to make any useful simulation and reasonably train an agent in it to do something interesting, but now I think it's within reach. Maybe it's only an exciting idea to me, but I think it would be a great way to demonstrate how far along AI has progressed, and that anyone can make their own waifu play piano given some MIDI files. We could demo other capabilities with it too, like making her say the title of the song before playing, and generate some virtual waifu hype. The simulation code would also provide a base for others to get started on other custom simulations such as moving arms, balancing, and walking.
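For anyone unfamiliar, the core trick in hindsight experience replay is just relabeling failed trajectories with goals they actually reached; here's a minimal toy sketch in Python (the environment and names are mine, not from any particular paper's code):

import random

# Toy sketch of hindsight experience replay (HER): after an episode that
# failed to reach its intended goal, we also store copies of the transitions
# relabeled with a goal that *was* achieved, so the agent still gets signal.
replay_buffer = []

def reward_fn(achieved, goal):
    return 0.0 if achieved == goal else -1.0

def store_episode(transitions, goal):
    # transitions: list of (state, action, next_state, achieved_goal)
    hindsight_goal = transitions[-1][3]   # goal actually reached at episode end
    for (s, a, s2, achieved) in transitions:
        # 1) the original transition with the intended goal
        replay_buffer.append((s, a, s2, goal, reward_fn(achieved, goal)))
        # 2) the same transition relabeled with the hindsight goal
        replay_buffer.append((s, a, s2, hindsight_goal,
                              reward_fn(achieved, hindsight_goal)))

# Fake episode: integer states, never reached the intended goal (42).
episode = [(i, +1, i + 1, i + 1) for i in range(5)]
store_episode(episode, goal=42)
print(len(replay_buffer))   # 10 stored transitions: 5 original, 5 relabeled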
>>10073 >Maybe it's only an exciting idea to me No, I think it's a great idea Anon. I agree this would both be an innovative and useful way to build up training for AIs/Robowaifus.
Over the past few days I've gotten the latest Blender and UE4 working on my machines.
Someone mentioned Webots: Robot Simulator here >>10531
Link: https://cyberbotics.com/
>It has been designed for professional use, and it is widely used in industry, education and research. Cyberbotics Ltd. has maintained Webots as its main product continuously since 1998.
>Webots core is based on the combination of a modern GUI (Qt), a physics engine (ODE fork) and an OpenGL 3.3 rendering engine (wren). It runs on Windows, Linux and macOS. Webots simulations can be exported as movies, interactive HTML scenes or animations, or even be streamed to any web browser using webgl and websockets.
>Robots may be programmed in C, C++, Python, Java, MATLAB or ROS with a simple API covering all the basic robotics needs.
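For reference, a Webots controller is just a small program written against their API; in Python it looks roughly like this (I'm going from memory of the API and the device name is made up, so treat it as a sketch and check the Webots docs):

# Rough sketch of a Webots Python controller, from memory of their API.
# 'left_motor' is a made-up device name that would have to exist in your robot model.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

motor = robot.getDevice("left_motor")
motor.setPosition(float("inf"))   # switch the motor to velocity-control mode
motor.setVelocity(0.0)

while robot.step(timestep) != -1:
    motor.setVelocity(1.0)        # simple constant-velocity command each tick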
>>1708 I apologize Anon, for not thanking you for this image when you posted it. I actually did appreciate it back then, but I was too distracted to share my gratitude at the time. So thanks! :^)
>>10073 We'd be interested to hear how both the training and your 'side-project' worked out Anon. Any news to share with us?
Open file (1.31 MB 2400x3600 Roastinator.png)
>>1092 >>1093 >>1101
Already been meme'd. I think something very similar to this was originally proposed as a "troll" on a certain chan, which led to some articles and "focus groups", and eventually to the creation of a monitoring group that has eyes on all "robot" threads/convos across the internet.
>The roastie fears the robot-samurai.
Open file (56.08 KB 500x356 beewatcher_lg.gif)
>>11010 Haha, well that's interesting Anon! Can you give us more details on these 'watchers'? Somebody might need to keep eyes on them tbh.
>"And now all the Hawtchers who live in Hawtch-Hawtch are watching on watch watcher watchering watch, watch watching the watcher who's watching that bee. You're not a Hawtch-Watcher you're lucky you see!"
>t. ― Dr. Seuss, Did I Ever Tell You How Lucky You Are?
>>11013 Hmmm. Use machines to keep an eye on machines? In a time of deepfake video and digitally altered footage, I just hope they believe the camera feeds they're watching. 😉
What looks to be a very useful header-only C++ wrapper around the OpenGL C API. I'll try to make some time to have a look at it over the summer. https://oglplus.org/oglplus/html/index.html https://github.com/matus-chochlik/oglplus
Open file (407.22 KB 892x576 MoxRigForBlender.png)
Open file (23.94 KB 566x698 momo_rig.png)
Here's something I found through our Japanese colleagues. MoxRig / MomoRig https://mox-motion.com/ - I didn't try it out, but it seems to be useful for animation of human-like movement and simulation of robots.
>>11497 Neat! Thanks Anon, I'll give it a lookover.
>>11022
>emoji
Friendly islamic reminder: this is a chan, don't use emoji please.
Open file (1.18 MB 640x360 mujoco_02.webm)
Open file (574.04 KB 640x360 mujoco_05.webm)
Open file (837.88 KB 640x360 mujoco_06.webm)
Open file (567.63 KB 640x360 mujoco_04.webm)
MuJoCo's entire codebase has just been open-sourced: https://github.com/deepmind/mujoco This is the same physics engine OpenAI used to train a real robot hand to solve a Rubik's cube. https://www.youtube.com/watch?v=x4O8pojMF0w
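For anyone who wants to poke at it: DeepMind also ships official Python bindings now (pip install mujoco), and a minimal "load a model and step it" looks something like the sketch below. This is from memory of the package, so double-check against their docs:

# Minimal sketch using the official 'mujoco' Python bindings.
# A free-floating box with no floor simply falls under gravity.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body>
      <joint type="free"/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

for _ in range(1000):
    mujoco.mj_step(model, data)   # advance the physics by one timestep

print(data.qpos)                  # generalized coordinates after the fall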
>>16415 Amazing. Thanks very much Anon!
Open file (100.78 KB 1200x675 fb_habitat.jpg)
>>16415 Mujoco is state of the art in ~real-time jointed rigid body physics simulation, nice taste, anon. Still, it's not a complete environmental simulator; it's mainly useful for limited-domain hard-dynamics manipulation and movement experiments.
>>155 I think FAIR's habitat simulation environment[1][2] is the most sensible choice for our needs. It's a complete system with physics, robot models, a rendering stack and ML integrations. It would be of major help to the project if we developed a waifu-specific randomized (to facilitate sim2real generalization) sim-environment, and collected enough behavioral data traces to pinpoint the necessary behavioral patterns, similar to DeepMind's recent imitation learning (IL) tour de force: https://www.deepmind.com/publications/creating-interactive-agents-with-imitation-learning
If you choose this tool, feel free to ask for help if it somehow breaks.
1. https://github.com/facebookresearch/habitat-lab
2. https://github.com/facebookresearch/habitat-sim
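To give a feel for how it's driven: habitat-sim is scripted from Python, roughly as below. I'm going from memory of their examples, the scene path is a placeholder, and the API shifts between versions, so treat this as an outline rather than working code:

# Rough outline of driving habitat-sim from Python, based on their examples.
# Scene path is a placeholder; names may differ slightly between versions.
import habitat_sim

sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "data/scene_datasets/example/apartment.glb"   # placeholder path

rgb_sensor = habitat_sim.CameraSensorSpec()
rgb_sensor.uuid = "color"
rgb_sensor.sensor_type = habitat_sim.SensorType.COLOR

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_sensor]

sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

observations = sim.step("move_forward")   # default agent action space
print(observations["color"].shape)        # RGB frame from the agent's camera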
>>16446 Thanks Anon! We'll have a look into it. >that ambient occlusion Nice. Cheers.
>>16446 So far, assimp is breaking the build. After a recursive checkout, merging the submodules with
git submodule foreach git merge origin master
led to these errors during that process:
CONFLICT (content): Merge conflict in code/Common/Version.cpp
...
CONFLICT (modify/delete): assimpTargets-release.cmake.in deleted in origin and modified in HEAD. Version HEAD of assimpTargets-release.cmake.in left in tree.
Pressing on with abandon, I did get a little way before it failed with:
FAILED: deps/assimp/code/CMakeFiles/assimp.dir/Common/Version.cpp.o
I wanted to give it a shot at least once, but ATM I can't afford the time required to dig in and try to fix such a complex system's build from source. But thanks Anon! It certainly looks interesting and I'm pleased to see a Globohomo behemoth such as F*cebook put out something this big with an MIT license.
>>16453 I managed to build and run it on debian sid with a script inspired by this document: https://github.com/facebookresearch/habitat-sim/blob/main/BUILD_FROM_SOURCE.md
Basically you clone the repo, checkout the latest stable tag, and update submodules recursively via the usual command:
git submodule update --init --recursive
I had to comment out a section in setup.py that deals with the cmake path to make it work, like this:
#try:
#    import cmake
#
#    # If the cmake python package is installed, use that exe
#    CMAKE_BIN_DIR = cmake.CMAKE_BIN_DIR
#except ImportError:
CMAKE_BIN_DIR = ""
Ensure you have cmake, the debian packages and the python libraries they require, then do:
python3 setup.py install --bullet
It should build several hundred source files via cmake and install the package. I managed to avoid using conda for this; it's simply installed system-wide. When I/we run multiple data-generating simulations at scale, some form of automated reproducible build & distribution system will be necessary, such as nix/guix or a container/VM.
>>16462 Thanks! I appreciate that you avoided conda for this. I prefer to stick closer to the hardware when feasible. I'll give your instructions a shot at some point. I'm going to have to set up a dedicated machine at some point (hopefully soon).
>nix/guix or a container/vm
Do you have any preferences? I'm certainly averse to anything proprietary tbh.
>palace
Fancy rooms for fancy robowaifus! :^)
<those portraits are fabulous
Open file (127.08 KB 769x805 GardevoirNierAutomata.jpg)
Found out about DALL·E mini: https://huggingface.co/spaces/dalle-mini/dalle-mini Can generate cute robowaifus. Like this example of Gardevoir in Nier Automata.
>>16648 Excellent find Pareto Frontier. Any advice on running a local instance?
>>16652 It's hard but doable, boils down to making this notebook work https://github.com/brokenmold/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb I don't have time to bring it up rn
>>16645 >>16648 Nice. Thanks Anon.
OpenSimulator thread: >>12066
Unreal Engine file with the MaidCom project mannequin: >>25776
Open file (63.55 KB 765x728 Screenshot_149.png)
>An encyclopedia of concept simulations that computers and humans can learn from. An experimental project in machine learning.
https://concepts.jtoy.net
>Examples of concepts that are grounded in the physical world:
https://blog.jtoy.net/examples-of-concepts-that-are-grounded-in-the-physical-world/
Somewhat related: https://blog.jtoy.net/log-of-machine-learning-work-and-experiments/
I have a hunch that adding patterns like the ones shown in the video might be the key to simulating bodies, especially facial expressions: https://youtu.be/UOjVNT25eHE
>>29163 Thanks a lot, NoidoDev. This is a gem. Blue Sky is one of the best studios out there. They have a ton of talented individuals.
I think I looked around long enough and will start small with the following plan for the next weeks:
- explore the robotic learning kit provided by Epic
- write an interface to get data in and out of a running Unreal application
The robotic learning project seems to have some basic sensors and motors implemented, perfect stuff to develop the interface against. After some googling it seems like a UDP/TCP connection will be the way to go. After that I'll figure out the next steps.
Today I successfully wrote a TCP client/server as a Windows Forms application and was able to set up a TCP connection to a running Unreal project. For the Unreal TCP part I used this plugin: https://github.com/getnamo/TCP-Unreal This way any other software can be connected via TCP (just IP and port) to the running application. Next I will have a look into how to send data from and to Unreal. Maybe JSON? I made the TCP client/server for debugging purposes; both are written in C#.
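If anyone wants to poke at it from a language other than C#, the same idea is only a few lines of Python. This is just a generic socket/JSON sketch against a made-up host/port and message format, not the plugin's actual protocol:

# Generic sketch of talking JSON over TCP to a running Unreal instance that
# listens via the TCP-Unreal plugin. Host/port and the message fields are
# made up for illustration; newline-delimited JSON is just one possible framing.
import json
import socket

HOST, PORT = "127.0.0.1", 3000   # whatever the Unreal listener is bound to

with socket.create_connection((HOST, PORT)) as sock:
    msg = {"type": "set_joint", "name": "elbow_r", "angle": 42.0}
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

    reply = sock.recv(4096)       # e.g. sensor readings coming back
    if reply:
        print(reply.decode("utf-8", errors="replace"))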
>>29242 >>29296 That's great. Keep us updated. Though, if you start creating something the new prototype thread might be the better place to post it: >>28715
>>29242 >>29296 That sounds very encouraging, SchaltkreisMeister! Good luck getting this system up and running successfully. Cheers. :^)
David Browne did some fast muscle design simulation: https://youtu.be/J7RxSPLLw-s
>>29390 Cool. I'm going to check this out NoidoDev, thanks!
