/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality.



OpenGL Resources Robowaifu Technician 09/12/2019 (Thu) 03:26:10 No.158
Good VR is an important interim goal for a fully-realized RoboWaifu, and it's also much easier to grasp an algorithm at a glance when it's visualized graphically than when you're 'trying to second-guess a mathematical function'. OpenGL is by far the most portable approach to graphics atm.

>"This is an excellent site for starting with OpenGL from scratch. It covers from very introductory topics all the way to advanced topics like Deferred Shading. Liberal usage of accompanying images and code. Strongly recommended."

learnopengl.com/
https://archive.is/BAJ0e

The two related books below are also worth a look, but learnopengl.com is particularly well-suited for beginners.

www.openglsuperbible.com/
https://archive.is/NvdZQ

opengl-redbook.com/
https://archive.is/xPxml

www.opengl.org
https://archive.fo/EZA0p
Edited last time by Chobitsu on 09/26/2019 (Thu) 08:14:37.
Open file (16.54 KB 480x360 0.jpg)
Nominally geared for beginners, although I consider it very difficult to fit something this complex into a single video, even one over 3 hrs in length! Presented live at SIGGRAPH '13.

www.youtube.com/watch?v=6-9XFm7XAT8&index=6&list=PLUPhVMQuDB_aWSKj7L_-3Ot_nxBze_YMy
https://archive.is/J0YXd

https://www.invidio.us/watch?v=6-9XFm7XAT8
open.gl/transformations
https://archive.is/wV8ak
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:33:45.
As a learning exercise, I think I'll take a shot at creating an OGL generator to take away some of the burden of dealing with graphics and scene complexity.

If it goes well, then I might try my hand at extending it to a full environment builder for the simulator idea in this thread.
>>155
>>683
Well, it was surprisingly difficult, but I finally managed to get a system set up where I can make a simple OpenGL window with a spinning triangle. I got it working after switching to Manjaro (Arch) Linux, and I'm using GLFW + GLAD. I installed GLFW from the software manager, and I had already cloned and built both GLFW and GLAD from GitHub. I generated the GLAD files specific to my machine with the command:
[code]
python main.py --generator c --no-loader --out-path output
[/code]
and then copied the two include directories into /usr/include/ and the glad.c file into my project directory (as per the GLAD instructions). I used the code example from here:
www.glfw.org/docs/latest/quick.html
The image is a cap of the simple OpenGL window result. Here's the simplest blank-window code (w/o the animation) that I managed to get working:
[code]
// based on code at:
// http://www.glfw.org/docs/latest/quick.html

#include <glad/glad.h>  // NOTE: must be #include'd before glfw3.h
#include <GLFW/glfw3.h>

#include <iostream>

//------------------------------------------------------------------------------
static void key_callback(GLFWwindow* window, int key, int scancode, int action,
                         int mods)
{
  if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
    glfwSetWindowShouldClose(window, GLFW_TRUE);
}

//------------------------------------------------------------------------------
int main()
{
  if (!glfwInit()) {
    std::cerr << "Failed to init OpenGL\n";
    return -1;
  }

  GLFWwindow* window =
      glfwCreateWindow(640, 480, "Even simpler example", nullptr, nullptr);
  if (!window) {
    std::cerr << "Failed to create a window\n";
    glfwTerminate();
    return -1;
  }

  glfwSetKeyCallback(window, key_callback);
  glfwMakeContextCurrent(window);

  // NOTE: Leaving this directive out causes a failure with a return code 139
  // on my box
  if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
    std::cerr << "Failed to initialize GLAD\n";
    glfwDestroyWindow(window);  // NOTE: Are these needed here?
    glfwTerminate();            //
    return -1;
  }

  glfwSwapInterval(1);

  while (!glfwWindowShouldClose(window)) {
    // DESIGN: Any rendering code would go here:

    glfwSwapBuffers(window);
    glfwPollEvents();
  }

  glfwDestroyWindow(window);
  glfwTerminate();
}
[/code]
Here's my basic CMakeLists.txt file to go with that code:
[code]
cmake_minimum_required(VERSION 2.8)
project(simple)

set(CMAKE_CXX_FLAGS
    "${CMAKE_CXX_FLAGS} -std=c++14 -Wall -Wextra -Wno-unused-parameter")

add_executable(simple main.cpp glad.c)

find_package(PkgConfig REQUIRED)
pkg_search_module(GLFW REQUIRED glfw3)
target_link_libraries(simple ${GLFW_LIBRARIES})
[/code]
Simple I know, but hey, we all have to start somewhere, yea? Now I'll begin trying to figure out how to manage creating, saving, and loading geometry data in files on disk from inside the code. That should let me start exploring how to create and display environment and object geometries to use in my OpenGL windows. Once that's basically working, I can then start thinking about things like better shaders, etc. But on my old notebook machine right now I only have OpenGL v2.1 (an Intel Mobile 4 series integrated graphics controller), so I don't expect to do anything remarkable atm. OTOH, this also means my stuff should be compatible w/ basically everyone's graphics setup if it becomes something I package up sometime. Also, now that I have a reasonably usable OpenGL dev box, I've begun working through this book:
learnopengl.com/
>=== -julay fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:29:28.
>>684
>(as per the GLAD instructions)
Actually, check that. I got the command instruction from the GLFW site instead.
www.glfw.org/docs/latest/context_guide.html#context_glext_auto
>=== -julay fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:29:46.
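Regarding the 'saving and loading geometry data on disk' step mentioned above, here is a minimal sketch of one possible approach: just dump a flat vector of vertex floats as raw binary with a length prefix. The names (save_verts/load_verts) and the file format are made up for illustration only, not anything from the thread's actual code.
[code]
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Write a flat array of vertex floats (e.g. x,y,z triples) to a binary file,
// prefixed with the element count so it can be read back without guessing.
bool save_verts(const std::string& path, const std::vector<float>& verts) {
  std::ofstream out(path, std::ios::binary);
  if (!out) return false;
  const std::size_t count = verts.size();
  out.write(reinterpret_cast<const char*>(&count), sizeof count);
  out.write(reinterpret_cast<const char*>(verts.data()),
            static_cast<std::streamsize>(count * sizeof(float)));
  return static_cast<bool>(out);
}

// Read the same format back; returns an empty vector on failure.
std::vector<float> load_verts(const std::string& path) {
  std::ifstream in(path, std::ios::binary);
  if (!in) return {};
  std::size_t count = 0;
  in.read(reinterpret_cast<char*>(&count), sizeof count);
  std::vector<float> verts(count);
  in.read(reinterpret_cast<char*>(verts.data()),
          static_cast<std::streamsize>(count * sizeof(float)));
  return in ? verts : std::vector<float>{};
}
[/code]
The loaded array could then be handed straight to glBufferData() when setting up a VBO.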
Modern computer graphics recommendation thread on HN.

news.ycombinator.com/item?id=14652936
Here's a WordPress blog w/ lots of D3D shudder and OGL stuff.
fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:30:11.
Some srsly great shaders can be found here if you dig around.

www.shadertoy.com/
The Cinder library now has official support for Ubuntu Linux and Raspberry Pi 2 & 3 as of version 0.9.1.
libcinder.org/download

I consider this the best general-purpose library for the Creative Coding field that I've ever seen for C++ devs. I've gotten it successfully built on Ubuntu, and I'll be playing around with it a bit for various experiments related to visualizing my facial animation code. The results should be runnable on Windows, Linux, and Mac.

libcinder.org/gallery
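For anyone curious what Cinder code looks like, below is roughly the canonical 'hello window' app shape from the 0.9-era samples, written from memory; check the samples/ directory in the Cinder distribution for the exact headers and current API before relying on it.
[code]
#include "cinder/app/App.h"
#include "cinder/app/RendererGl.h"
#include "cinder/gl/gl.h"

using namespace ci;
using namespace ci::app;

// Minimal Cinder app: clear the window and draw a filled circle each frame.
class MuhCinderApp : public App {
 public:
  void draw() override {
    gl::clear(Color(0.2f, 0.3f, 0.3f));
    gl::drawSolidCircle(getWindowCenter(), 100.0f);
  }
};

// Cinder's macro generates main() and wires up the OpenGL renderer.
CINDER_APP(MuhCinderApp, RendererGl)
[/code]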
>>684
So, I made this post a couple of years ago now, and I thought I'd do another one like it since I understand things a little more now and can probably explain it just a bit better (or maybe not heh). The next couple of posts are all about how I just made a simple (but working) OpenGL sample program on a fresh install of Manjaro Linux using the GLAD/GLFW sample code.
>>1723
First, I'll install GLAD. The repo is at:
https://github.com/Dav1dde/glad
but I'll install it using pip (as per the recommendation). First I need to make sure pip is installed; either pacman or the Pamac GUI can be used. Install 'python-pip' from the official extra repo.
>1

-Then I install the latest GLAD version directly from the repo:
[code]
sudo pip install --upgrade git+https://github.com/dav1dde/glad.git#egg=glad
[/code]
>2

-Now I'll generate the OpenGL C-loader files using GLAD. Since the learnopengl.com book focuses on the OpenGL 3.3 core profile (for good reasons), I'll specify that in particular.
[code]
python glad --generator=c --out-path=GL --api="gl=3.3" --profile=core
[/code]
-This will create 3 files in 3 different subdirectories under GL/.
[code]
tree GL
[/code]
>3

-I'll copy the two include directories into the /usr/include/ directory for access by my own OpenGL code.
[code]
sudo cp -r GL/include/* /usr/include/
[/code]
-I can confirm the new directories are in place now.
[code]
ls /usr/include/ | grep -e KHR -e glad
[/code]
>4

-The third file is under the GL/src directory and is named 'glad.c'. It should be a little under 9'000 lines long. I'll copy this file into whatever directory my own source code is located in, for each of my projects. It defines all the C declarations describing the OpenGL 3.3 core spec, and makes sure they are all loaded properly for the program. Have a good look at the file, and you'll see why I use the GLAD generator; b/c it saves metric shittons of time, effort, and fubar grief when dealing with OpenGL.
>5
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:30:34.
Open file (72.60 KB 922x572 pamac-manager_004.png)
Open file (55.21 KB 922x572 pamac-manager_005.png)
Open file (22.62 KB 385x350 Menu_007.png)
Open file (123.26 KB 1116x787 juCi++_008.png)
Open file (73.26 KB 1758x745 Selection_010.png)
>>1723
-Next, I'll install GLFW, and be sure to use the X11 version. I'll add the documents too. Here's the GLFW site:
https://www.glfw.org/docs/latest/build_guide.html
I'll just use Pamac.
>1, 2

-Now I'll create a simple test project, just using the one provided with the glad repo. Using my favorite editor Juci, I'll create a new C++ project, delete the default main.cpp file, and copy the glad.c & hellowindow2.cpp files into the project directory.
>3, 4

-Here's the contents of the new CMakeLists.txt file:
[code]
cmake_minimum_required(VERSION 2.8)
project(muh_ogl_test)

set(CMAKE_CXX_FLAGS
    "${CMAKE_CXX_FLAGS} -std=c++1z -Wall -Wextra -Wno-unused-parameter")

add_executable(muh_ogl_test hellowindow2.cpp glad.c)

find_package(glfw3 3.3 REQUIRED)
target_link_libraries(muh_ogl_test glfw dl)
[/code]
-Now, just pressing Ctrl+Return in Juci saves, builds, and runs the OpenGL sample.
>5

-Or, build and run from the terminal:
[code]
g++ hellowindow2.cpp glad.c -lglfw -ldl && ./a.out
[/code]
-I made a zip for you to look at.
https://files.catbox.moe/s54kmo.gz

Hope that helps anon. Cheers.
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:30:49.
>>1725
Oops, I forgot to mention the 'clone the GLAD git repo first before installing it with pip' step, which goes between the 'install pip' and 'install glad' steps.
[code]
git clone https://github.com/Dav1dde/glad.git
[/code]
Apologies.
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:33:12.
>>1727
Double-duh. Actually, I guess it's more about running the generator locally rather than installing it, so the clone is needed just before the generation step. Hmm, I suppose you could use the DL'd repo from the pip install step too. IDK if you can direct that download to somewhere other than /tmp/ with pip.
>>1725
I guess to complete the circle I should go ahead and post a copy of the newer code as well. I also plan to wrap all this mess in a clean and simple-to-use muh_gl.h file later.
[code]
#include <cstdlib>
#include <iostream>
#include <string>

// GLAD
#include <glad/glad.h>  // NOTE: #include before glfw

// GLFW
#include <GLFW/glfw3.h>

//------------------------------------------------------------------------------
using std::cerr;
using std::cout;
using std::exit;
using std::string;

//------------------------------------------------------------------------------
static void err_cb(int, const char*);
static void key_cb(GLFWwindow* window, int key, int scancode, int action,
                   int mode);

//------------------------------------------------------------------------------
int main() {
  cout << "Starting GLFW context init, OpenGL 3.3 core..." << std::endl;

  const GLuint w{800}, h{600};
  const string title{"Muh /robowaifu/ Simulator Title"};

  // Init GLFW (& OGL)
  glfwSetErrorCallback(err_cb);

  if (!glfwInit()) {  // Attempt OGL init
    cerr << "\nFailed to init GLFW/OGL\n";
    exit(-1);
  } else {  // Set all the required options for GLFW
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
  }

  // Setup the window
  auto win{glfwCreateWindow(w, h, title.c_str(), nullptr, nullptr)};

  if (!win) {  // Failed
    cerr << "\nFailed to create OGL window\n";
    glfwTerminate();
    exit(-1);
  } else {  // Attempt configs
    glfwMakeContextCurrent(win);

    if (!gladLoadGLLoader(GLADloadproc(glfwGetProcAddress))) {
      cerr << "\nFailed to get the OGL context process address\n";
      glfwDestroyWindow(win);
      glfwTerminate();
      exit(-1);
    } else {
      glfwSetKeyCallback(win, key_cb);
      glViewport(0, 0, w, h);
    }
  }

  cout << "GLFW Initialization complete, begin game loop..." << std::endl;

  //-------------------
  // Game Loop
  //
  while (!glfwWindowShouldClose(win)) {
    glfwPollEvents();

    // Just clear the buffer for now
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glfwSwapBuffers(win);
  }
  //
  //-------------------

  cout << "Clean exit from game loop, cleaning up GLFW..." << std::endl;

  // Clean up
  glfwDestroyWindow(win);
  glfwTerminate();
}

//------------------------------------------------------------------------------
void err_cb(int err, const char* desc) {
  cerr << '\n' << err << " error: '" << desc << "'\n";
}

//------------------------------------------------------------------------------
void key_cb(GLFWwindow* win, int key, int scancode, int action, int mode) {
  if (action == GLFW_PRESS) {
    cout << "GLFW key code: " << key << " pressed." << std::endl;

    if (key == GLFW_KEY_ESCAPE) glfwSetWindowShouldClose(win, GL_TRUE);
  }
}
[/code]
>//------------------------------
edit: added an over-abundance of diagnostic output.
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:31:04.
Obviously not OpenGL, but this seems like it might be of interest for onboard robowaifu graphical displays. It's an ANSI C immediate-mode GUI toolkit.
github.com/Immediate-Mode-UI/Nuklear
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:31:26.
Open file (232.93 KB 1920x1080 Workspace 1_007.png)
>>1731
>I also plan to wrap all this mess in a clean and simple-to-use muh_gl.h file later.
Done. Here's what the same code looks like now:
[code]
#include <cstdlib>
#include <iostream>

#include "muh_gl.h"

using std::cerr;
using std::cout;
using std::exit;

//------------------------------------------------------------------------------
int main() {
  cout << "Starting GLFW context init, OpenGL 3.3 core..." << std::endl;

  if (!muh_gl::init_glfw()) exit(-1);

  auto win{muh_gl::init_win()};

  cout << "GLFW Initialization complete, begin game loop..." << std::endl;

  //-------------------
  // Game Loop
  //
  while (!glfwWindowShouldClose(win)) {
    glfwPollEvents();

    // Just clear the buffer with a background color for now
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glfwSwapBuffers(win);
  }
  //-------------------

  cout << "Clean exit from game loop, cleaning up GLFW..." << std::endl;

  // Clean up
  glfwDestroyWindow(win);
  glfwTerminate();
}
[/code]
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:31:38.
TBH SMH FAM I just want to establish firmly here and now, that I'm the originator of this clever literary device, in fact of course.
[code]
// Say Hi to the MRS. Anon,
// Muh Robowaifu Simulator.
//=========================
//

#include <iostream>

#include "muh_gl.h"

using std::cout;

//------------------------------------------------------------------------------
int main() {
  const int w{800}, h{600};

  auto win{muh_gl::init_win(w, h, "Muh Robowaifu Simulator")};

  cout << "GLFW init successful, begin game loop..." << std::endl;

  //-------------------
  // Game Loop
  //
  while (!glfwWindowShouldClose(win)) {
    glfwPollEvents();

    // Just clear the buffer with a background color for now
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glfwSwapBuffers(win);
  }
  //-------------------

  cout << "Clean exit from game loop, cleaning up GLFW...\n";

  // Clean up
  glfwDestroyWindow(win);
  glfwTerminate();
}

// Copyright (2019)
// License (MIT) https://opensource.org/licenses/MIT
[/code]
:^)
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:31:52.
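Since muh_gl.h itself doesn't appear in the thread, here's a rough sketch of what a wrapper header along these lines could look like. The init_glfw()/init_win() names come from the snippets above; everything else (the default arguments, the exact error handling) is guesswork rather than the repo's actual contents.
[code]
// muh_gl.h -- hypothetical sketch only, not the file from the actual repo.
#pragma once

#include <cstdlib>
#include <iostream>

#include <glad/glad.h>  // NOTE: #include before glfw
#include <GLFW/glfw3.h>

namespace muh_gl {

// Init GLFW and request an OpenGL 3.3 core-profile context.
inline bool init_glfw() {
  if (!glfwInit()) {
    std::cerr << "Failed to init GLFW\n";
    return false;
  }
  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
  glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
  return true;
}

// Create a window, make its context current, load GL via GLAD, set viewport.
inline GLFWwindow* init_win(int w = 800, int h = 600,
                            const char* title = "Muh Robowaifu Simulator") {
  // Safe even if init_glfw() was already called; glfwInit() is a no-op then.
  if (!init_glfw()) std::exit(-1);

  GLFWwindow* win = glfwCreateWindow(w, h, title, nullptr, nullptr);
  if (!win) {
    std::cerr << "Failed to create OGL window\n";
    glfwTerminate();
    std::exit(-1);
  }

  glfwMakeContextCurrent(win);

  if (!gladLoadGLLoader(reinterpret_cast<GLADloadproc>(glfwGetProcAddress))) {
    std::cerr << "Failed to load the OGL function pointers\n";
    glfwDestroyWindow(win);
    glfwTerminate();
    std::exit(-1);
  }

  glViewport(0, 0, w, h);
  glfwSwapInterval(1);  // vsync
  return win;
}

}  // namespace muh_gl
[/code]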
Went ahead and created a public repo for this work.
>>1814
Open file (236.01 KB 1920x1080 Workspace 1_012.png)
Well, it was kind of convoluted to work out all the details, but I've managed to add quality TTF text rendering to the MRS.
>>1815
BTW, I removed GLAD as an external dependency. There's an option to create all-local files, so I moved MRS to that approach. It means anons don't have to deal with GLAD now.
Added a void rndr_tick(Shader txt_shdr, const unsigned fps_tick) function to ensure the MRS was really getting an honest 60fps (it is). However, occasionally I'd notice a little artifact that made me think tearing was happening, even though I set the swap interval to 1:
[code]
glfwSwapInterval(1);  // 1 should prevent tearing
[/code]
Now I'm not so sure it's actually tearing:
A) It only occurred infrequently.
B) It's likely limited to only the fast-transitioning digit at the end, afaict.
This makes me think it's not tearing, but rather that the screen-capture program I'm using (Shutter) is simply 'snapping' the image right at the instant a digit is being redrawn. I'd welcome insight if anyone knows more about this stuff.
>pic related
>=== -julay-era fmt bug edit
Edited last time by Chobitsu on 10/27/2023 (Fri) 05:32:11.
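In case anyone wants to sanity-check their own frame rate the same way, here's a minimal sketch of an FPS counter driven by glfwGetTime(). This is not the actual rndr_tick() from the MRS repo, just one common way to do it; call it once per frame inside the game loop.
[code]
#include <iostream>

#include <GLFW/glfw3.h>

// Counts frames and publishes a new FPS value once per second.
// Assumes GLFW has already been initialized (glfwGetTime() needs that).
unsigned update_fps() {
  static double last_time = glfwGetTime();
  static unsigned frames = 0;
  static unsigned fps = 0;

  ++frames;
  const double now = glfwGetTime();
  if (now - last_time >= 1.0) {
    fps = frames;  // frames counted over the last ~second
    frames = 0;
    last_time = now;
    std::cout << "fps: " << fps << '\n';
  }
  return fps;
}
[/code]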
>>1848
Unrelated: Does the system redraw from the bottom-up? For some reason I thought the drawing happened from the upper-left corner down, but apparently not. Given that the coordinate origin for the system starts in the lower-left corner (similar to a Cartesian system) I guess that makes sense. Live and learn.
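A quick way to see the lower-left origin for yourself, if it helps: drop something like the fragment below into the game loop of any of the samples above, before glfwSwapBuffers(). glScissor() measures its x,y from the bottom-left, so the red patch ends up in the lower-left corner of the window.
[code]
// Clear the whole window to dark grey first...
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// ...then restrict clearing to a 200x150 box anchored at (0, 0).
// Since window coordinates start at the lower-left, the red region
// appears at the bottom-left, not the top-left.
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 200, 150);
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);
[/code]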
>>26098
This post about the Lobster language is related to OpenGL. That's why it is fast; I don't think the fact that it's compiled matters much for speed, though it might have other advantages. For now I'm trying to use Python with OpenGL to do the same thing (not to be confused with OpenCL, which I also need to use). I found the term "delta compression" for calculating the differences between frames; I hope I can make the animations smaller that way. My current way of "programming" this is asking ChatGPT about every step while learning how it works. With basic knowledge of Python it works relatively well, even with GPT-3: I'm getting the terms I need to look up how to do things, plus code that needs some polishing.
>>26106
That's how most video codecs like WebM work: you keep only keyframes and replace what's in between them with just the transformations required to produce the next frames. There must be lots of libraries for this.
Open file (18.98 KB 768x768 delta_frame_15.png)
Open file (33.69 KB 768x768 delta_frame_37.png)
>>26108
Thanks, that's what I asked ChatGPT, so I have something to ask about further. I need to make it interactive, though, or use something like very short GIFs with sound.
>>26108
Okay, I fell into a rabbit hole here. It was interesting, but probably useless; I'd rather need something like a collection of GIFs. I wondered why no one else had had this idea before, and I'm starting to understand where the problem is and why it might be hard or impossible.
[quote]
You are correct. In a delta compression approach where you keep the original frames and use delta frames to reconstruct, the primary advantage is not in reducing storage requirements but rather in potentially reducing the processing load when displaying or working with the frames in real-time. The advantage lies in the efficiency of processing and transmitting the frames, especially when dealing with limited computational resources or bandwidth. Here's a clearer explanation of the advantage:

1. Reduced Processing Load: By storing and transmitting only the delta frames, you can reduce the amount of data that needs to be processed, especially when displaying or working with the frames in real-time. Instead of working with full frames, you process the smaller delta frames, which can be computationally less intensive.

2. Real-Time Efficiency: In applications where real-time processing or streaming is crucial, delta compression can be advantageous. It allows for quicker decoding and display of frames, which is important in video conferencing, surveillance, and interactive applications.

3. Bandwidth Efficiency: When transmitting video data over a network, delta compression can reduce the required network bandwidth, making it feasible to stream video even with limited bandwidth.

However, it's important to note that you still need the original frames to apply the delta frames and reconstruct the complete frames. The advantage is in processing efficiency, not in storage efficiency. You trade off storage efficiency for computational and bandwidth efficiency. If your priority is purely reducing storage requirements and you don't need real-time processing or streaming, then traditional video codecs that achieve high compression ratios while storing complete frames might be more suitable for your use case.
[/quote]
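FWIW, the core of the delta idea itself is tiny; it's the surrounding codec machinery (keyframes, motion compensation, entropy coding) that gets hard. Here's a minimal sketch treating each frame as a flat byte buffer; the names and the byte-wise wrap-around arithmetic are just illustrative, not from anyone's actual code in this thread.
[code]
#include <cstdint>
#include <vector>

// One frame as raw bytes (e.g. RGB8: width * height * 3).
using Frame = std::vector<std::uint8_t>;

// delta[i] = current[i] - previous[i], with wrap-around arithmetic on bytes.
// A mostly-unchanged frame produces a delta that is nearly all zeros.
Frame make_delta(const Frame& previous, const Frame& current) {
  Frame delta(current.size());
  for (std::size_t i = 0; i < current.size(); ++i)
    delta[i] = static_cast<std::uint8_t>(current[i] - previous[i]);
  return delta;
}

// Reconstruct the current frame from the previous frame plus its delta.
Frame apply_delta(const Frame& previous, const Frame& delta) {
  Frame current(delta.size());
  for (std::size_t i = 0; i < delta.size(); ++i)
    current[i] = static_cast<std::uint8_t>(previous[i] + delta[i]);
  return current;
}
[/code]
The near-zero deltas then compress very well with any general-purpose compressor, but as the ChatGPT quote notes, you still need a full keyframe to start reconstructing from.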
