/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality

Porn boards have been deleted. Orphaned files will be cleared in 3 days, download images if you have hotlinks.




JulayWorld fallback document - SAVE LOCALLY

JulayWorld onion service: bhlnasxdkbaoxf4gtpbhavref7l2j3bwooes77hqcacxztkindztzrad.onion


Open file (485.35 KB 1053x1400 0705060114258_01_Joybot.jpg)
Robowaifu Simulator Robowaifu Technician 09/12/2019 (Thu) 03:07:13 No.155 [Reply] [Last]
What would be a good RW simulator? I guess I'd like to start with some type of PCG solution that just builds environments to start with, then build from there up to characters.

It would be nice if the system wasn't just pre-canned, hard-coded assets and behaviors but was instead a true simulator system. E.g., write robotics control software that can actually calculate mechanics, kinematics, collisions, etc., and have that work correctly inside the basic simulation framework first, with an eye to eventually integrating it into IRL robowaifu mechatronic systems with few modifications. Sort of like the OpenAI Gym concept, but for waifubots.
https://gym.openai.com/
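For the "OpenAI Gym, but for waifubots" idea, here's a minimal Python sketch of what the Gym-style reset/step interface looks like wrapped around a toy joint model. The class name, reward, and dynamics are invented for illustration; this is not an existing package.
[code]
import numpy as np

class WaifuSimEnv:
    """Gym-style env: reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, n_joints=8):
        self.n_joints = n_joints
        self.state = np.zeros(n_joints)   # joint angles, radians

    def reset(self):
        self.state = np.zeros(self.n_joints)
        return self.state.copy()

    def step(self, action):
        # toy kinematics: joint angles integrate clipped commanded velocities
        self.state += np.clip(action, -0.1, 0.1)
        reward = -float(np.square(self.state).sum())  # penalize drift from rest pose
        done = bool(np.abs(self.state).max() > np.pi)
        return self.state.copy(), reward, done, {}

env = WaifuSimEnv()
obs = env.reset()
obs, reward, done, info = env.step(np.full(env.n_joints, 0.05))
[/code]
The point of keeping the interface Gym-shaped is that the same control code can later be pointed at the real mechatronics just by swapping out the env class.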
56 posts and 33 images omitted.
>>2046 Why would you want to use ray tracing for rendering? That is very computationally expensive, and it would give you less room to have a more elaborate waifu AI. Or is there any recent development that gives silky-smooth 60 FPS now?
>>4072 It's a fair point. But the long-term goal, at least for the Visual Waifu sub-group interests, is highly-immersive VR. Yes, there have been notable advances in GPU performance in (real or approximate) ray-tracing (though not photographically 'real' yet). Along with advances in multicore CPUs, hybrid APUs, and ofc GPUs, the notion of adding raytracing to a sim isn't too difficult to envision. Along with concurrency and parallelism advances in the base C++ language, I'd estimate it will be quite feasible by the end of 2023.
Open file (92.56 KB 800x597 1529774255941.jpg)
>>2046 Well fuck that means it's gg no re for my toaster machine.
>>4076 Yea, it's really only for strong hardware. We have to think of both the future and the past here tbh.

Open file (80.65 KB 980x550 0705060925488_1_jpg.jpeg)
Ricky Ma General Robowaifu Technician 09/12/2019 (Thu) 03:00:06 No.153 [Reply] [Last]
Can we talk about the man who DID IT? Seriously, he did it: he achieved our dream and started a project to help all those who want their own robowaifu. Then why has no one supported him in TWO FUCKING YEARS?
3 posts omitted.
>>480
the irony is strong with this one.
>>482
Meaning? Everyone does their part by being here and contributing. Seeing that none of us got the opportunity he had, there isn't really a sense of irony.
>>481
What happened?

Last good capture for anyone curious:
https://web.archive.org/web/20181011124831/http://syntheaamatus.com/
>>485
The British business partner who featured in the talk show was doxxed, had his house visited, and was called a pervert.
>>483 all right, fair enough. you have my apologies anon. Here's an English-translated video about him. From ~2017. I have no idea what the status of his crowd-funding project is but I wish him good fortune in his endeavors. He seems to have a somewhat similar sentiment to /robowaifu/ based on his comments in this video. https://invidio.us/watch?v=rGnddjcusp0

Open file (145.33 KB 1200x1200 cutie.jpg)
Ricky Ma's book Robowaifu Technician 06/25/2020 (Thu) 17:36:30 No.4054 [Reply] [Last]
anyone here bought it and is building his robo gf?
I don't know anything about the book OP mentioned but here's a video of the man's work for those who don't know of him yet: >>374 and there's a general here on him as well: >>153

General Robotics news and commentary Robowaifu Technician 09/18/2019 (Wed) 11:20:15 No.404 [Reply] [Last]
Anything in general related to the robotics industry, or any social or economic issues surrounding it (especially concerning robowaifus).

www.therobotreport.com/news/lets-hope-trump-does-what-he-says-regarding-robots-and-robotics
https://archive.is/u5Msf

blogmaverick.com/2016/12/18/dear-mr-president-my-suggestion-for-infrastructure-spending/
https://archive.is/l82dZ
85 posts and 42 images omitted.
Open file (192.17 KB 420x420 modern.png)
>>3861 If our enemies are making robots in the middle-east, then we should make robo crusaders to stop them.
>>3861 Good points.
Boston Dynamics is owned by a Japanese company. They've also at least stated they don't want Spot to be weaponized, for whatever that's worth. How do these facts come into play?
>>3932 >these facts come into play? Well, given the US military & DARPA source of the original funding and the Google-owned stint, there's zero doubt about the company's original intent to create Terminators. >However, SoftBank may legitimately intend to lift the tech IP (much as Google did) to help with their national elderly-care robotics program, for example. Just remember, though, that Boston Dynamics is still an American group, located in the heart of the commie beast in the Boston area. Everyone has already raped the company for its tech, and the SoftBank Group seems like just another john in the long string for this whore of a company. I certainly don't trust the Americans in the equation (t. Burger); maybe the Nipponese will do something closer to the goals of /robowaifu/. I suppose only time will tell, Anon.

R&D General Robowaifu Technician 09/10/2019 (Tue) 06:58:26 No.83 [Reply] [Last]
This is a thread to discuss smaller waifu building problems, solutions, proposals and questions that don't warrant a thread. Keep it technical. I'll start.

Liquid battery and cooling in one
Having a single "artificial blood" system for liquid cooling and power storage would eliminate the need for a vulnerable solid state battery, eliminate the need for a separate cooling system, and solve the problem of extending those systems to extremities.
I have heard of flow batteries; you'd just need a pair of liquids that are safe enough and not too sensitive to changes in temperature.
This one looks like it fits the bill. The downside is that your waifu would essentially be running on herbicide. (though from what I gather, it's in soluble salt form and thus less dangerous than the usual variety)
https://www.seas.harvard.edu/news/2017/02/long-lasting-flow-battery-could-run-for-more-than-decade-with-minimum-upkeep
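A back-of-envelope sanity check on the idea, with all three numbers loudly assumed rather than measured (aqueous flow batteries are typically quoted at a few tens of Wh per litre of electrolyte):
[code]
# assumptions, not measurements:
energy_density_wh_per_l = 25.0  # mid-range aqueous flow battery electrolyte
blood_volume_l = 5.0            # human-like total "blood" volume
avg_power_w = 100.0             # average electrical draw of the robowaifu

runtime_h = energy_density_wh_per_l * blood_volume_l / avg_power_w
print(f"~{runtime_h:.2f} hours per charge")  # ~1.25 h under these numbers
[/code]
So at roughly human blood volume the runtime is tight: either the electrolyte volume goes up, the average draw comes down, or the "blood" supplements a conventional pack rather than replacing it.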

How close are we to creating artificial muscles? And what's the second best option?
Muscles are perfect at what they do; they're powerful, compact, efficient, they carry their own weight, they aren't dependent on remote parts of the system, they can be controlled precisely, and they can perform many roles depending on their layout alone.
We could grow actual organic muscles for this purpose already but that's just fucking gross, and you'd need a lot of extra bloat to maintain them.
What we need are strands of whatever that can contract using electrical energy. Piezo does the trick at small scales, but would it be enough to match the real thing? There have been attempts, but nothing concrete so far.
What are some examples of technology that one could currently use instead?

High level and low level intelligence emulation
I've noticed a pattern in programs that emulate other computing hardware.
The first emulators that do the job at acceptable speeds are always the ones that use hacks and shortcuts to get the job done.
It comes down to a tradeoff. Analyzing and recompiling or reinterpreting the code itself on a more abstract level will introduce errors, but it is an order of magnitude more efficient than simulating every part of the circuitry down to each cycle. This is why a relatively high-level emulator of a 6th-gen video game console has system requirements close to those of a cycle-accurate emulator of the SNES.
Now, I want to present an analogy here. If training neural networks for every damn thing and trying to blindly replicate an organic system is akin to accurately emulating every logic gate in a circuit, what are some shortcuts we could take?
It is commonly repeated that a human brain has immense computing power, but this assumption is based just on the number of neurons observed, and it's likely that most of them have nothing to do with intelligence or consciousness. If we trim those, the estimated computing power would drop to a more reasonable level. In addition, our computers just aren't built for doing things the way neural systems do. They're better at some things, and worse at others. If we can do something in a digital way instead of trying to simulate an analog circuit doing the same thing, that's more computing power that we could save, possibly bridging the gap way earlier than we expected to.
The most obvious way to handle this would be doing as many mundane processing and hardware control tasks as possible in an optimized, digital way, and then using a GPU or another kind of circuit altogether to handle the magical "frontal lobe" part, so to speak.
112 posts and 74 images omitted.
Open file (191.95 KB 900x1260 the haruhi problem.jpg)
>>3251
>Anonymous users solve AI problems puzzling data scientists for decades
<They're building opensource catgirl meidos and it's terrifying.
Only way to find out is either through an exhaustive search of literature or asking researchers working on similar problems.
>>3254 kek. we need this headline anon. keep moving forward!
Open file (50.26 KB 774x1024 swish.png)
Open file (127.81 KB 1516x964 lstm and gru.png)
Open file (182.10 KB 611x715 PurkinjeCell.jpg)
Open file (120.81 KB 667x625 hypertorus.png)
Open file (36.32 KB 540x480 portal.jpg)
>>3250 Yeah, I really like the idea of transformers. They're effective at overcoming this averaging problem. The problem with them, though, is that they're expensive in parameters and compute. Another issue is that once you start multiplying things together too much, it cuts off the flow of the gradient to deeper parts of the network, and they become untrainable due to the vanishing gradient problem. I think there's an important lesson to be learned from the Swish activation function, x·sigmoid(βx), found by automated search, which outperforms ReLU: https://arxiv.org/pdf/1710.05941.pdf The beauty of Swish is that it can bottleneck gradients to part of the network like ReLU, preserving them to reach deeper layers, but it can also open these bottlenecks back up again and allow the gradient to flow to other areas when necessary, whereas ReLU can't. Similarly, once you start using products it creates dead zones in the gradient that are only activated under certain circumstances. It's effective, but it seems like a crutch for overcoming the annoyances of gradient descent. It requires exponentially more parameters separated into different attention heads rather than actually compressing information together and distilling knowledge from it. SentenceMIM, for example, outperforms Nvidia's 8-billion-parameter GPT2 model with just 12 million parameters. It's also worth noting the LSTMs used in SentenceMIM apply sigmoid and tanh before multiplication, which lets gradients flow and not explode or vanish. So I think the way forward is forming more intelligent gradients rather than cutting them off completely in the hope that different parts of the network specialize. The neuromodulatory network in ANML that controls the flow of gradients is also interesting and amazing progress in this direction: https://arxiv.org/pdf/2002.09571.pdf What originally inspired my idea a year ago was dendritic branching. I wanted to capture this hierarchical tree-like structure somehow, but working only with 2D images wasn't enough. What fascinates me about these branches now, as I've started to explore this idea in 3 dimensions, is that they only either go left or right, like binary search, and in a computer we don't have to worry about the limits of spatial reality. We can wrap a 1D space around a circle and in one step reach any point on it. Similarly, if you wrap a 2D space around a torus you can reach any point on it in two steps, corresponding to a step in each dimension. We can continue adding more and more dimensions to this torus. A way to mentally picture a hypertorus is to think of the game Portal and opening up 3 yellow portals and 3 blue portals on the 3 pairs of opposite faces of the room. So if we take a 729x729 greyscale image and reshape it into a 12D hypertorus that still has the same 3^12 features, now every pixel in the image is connected within 12 steps, using only 24 parameters per step or 288 in total for each feature. So far in my early experiments it seems entirely possible to reuse the same parameters each step, but that's more difficult to train and captures far less information. I still have to try it with my cost function in these higher dimensions and see how it helps. Either way, a fully connected layer with 3^12 input features to 3^12 output features would require 1315 GB of memory to compute, but on a 12D hypertorus the features can be connected together with at most 2.9 GB in a worst-case scenario, or 243 MB reusing the parameters.
A 3-channel 2187x2187 image could be processed in 15D with at most 120 GB, or 8 GB reusing parameters, which is entirely possible on today's upper-end hardware. That includes the memory cost of the calculations and gradients, minus the overhead of whatever machine learning library is being used. PyTorch isn't really optimized for calculations in such high dimensions and circular wrapping of connections. What I'm working with at the moment requires duplicating the data twice for each dimension and padding each dimension by 1, so instead of requiring 3^dim in memory it requires 3*dim*5^dim, which restricts me to using 10 dimensions at most. But if these higher dimensions prove useful for something, then I'll certainly write my own code to optimize it. It's really fascinating just being able to watch it process data. Can't wait to start throwing images into wacky dimensions and seeing what the hell it spits out.
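To make the two concrete pieces of the above runnable, a minimal PyTorch sketch: the Swish activation as defined in the linked paper, plus the 729x729-image-to-12D-hypertorus reshape, with the torus wrap-around step shown via torch.roll. The sizes match the post; everything else is illustrative.
[code]
import torch

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x)  (arXiv:1710.05941)
    return x * torch.sigmoid(beta * x)

# A 729x729 greyscale image has 729*729 = 3^12 pixels, so it reshapes
# exactly into 12 axes of size 3 (the "12D hypertorus" layout).
img = torch.rand(729, 729)
hyper = img.reshape((3,) * 12)

# Torus wrap-around: rolling one axis makes index 0 and index 2 adjacent,
# so any element can reach any other within 12 single-axis steps.
neighbour = torch.roll(hyper, shifts=1, dims=0)

print(hyper.shape)       # torch.Size([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3])
print(swish(img).shape)  # torch.Size([729, 729])
[/code]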
Open file (3.08 MB 1000x350 demo_yt.gif)
>>3262 I love the 'torus-wrapping' effect. Surely there's a fundamental & beautiful mystery of the universe hidden away in there somewhere! :^) I think you can make faster progress for the specific domain of "solving for features of characters' behaviors" (if that's a goal) if you fundamentally move your domain of concern away from pixels on the screen, and onto the underlying actions of the characters themselves. This attention-shift would not only recover much of the information lost by the transformational chains required for projection onto the 2D coordinates of the screen, but would also make the problem-space far more intuitive for the humans involved to solve at a fundamental level. For example, take a 3D line positioned between two specific points in 3D space whose features you somehow wanted to track in a video. If all you choose to work with at the start is the jagged string of pixels it forms on the screen, then figuring out the accurate positional details of the line, say, requires a fair amount of processing power to 'walk' all those independent pixels all along the way, confirming by some means that they are in fact part of the line, and then reconstructing them all into a line again with positional information derived as well. OTOH, if you just abstract the line into two 3D points at the very start---say, one at each end of the line---and then simply confirm the positions using the underlying pixels, you not only have a more accurate positional representation but you are also performing far fewer calculations. To cast things in another light, and if I can put on the animator's hat for a moment, an important tool-class 3D animators use for character animation are so-called animation rigs. These aren't systems that force the animator to literally move every.single.vertex. of the entire character mesh to the desired location in 3D space, but rather they significantly abstract away those mundane details into the mere essentials, namely 'grab this thing and move it to here at this time-point'. For example, if at frame 1488 Anon wanted to move a character's hand to that important book to pick up and begin reading in subsequent frames, he would just lasso the rig's hand control icon (usually positioned floating near the particular hand itself), which would target the inverse kinematics solver of the rig onto that specific extremity, and then the animator would manipulate the transform control to position the hand at the book itself and set a keyframe. The system would then typically use some variation of LERP (linear interpolation) to fill in the in-between positions over time. Alternatively, if he chose to, the animator could instead literally move every single vertex over the same number of frames, but the effort would not only prove far more tedious, it would certainly be more error-prone and inaccurate than the smooth interpolation system. While the analogy isn't quite perfect, I think there are some credible similarities here with using just the pixels on the screen to pick out underlying features of the characters' behavior. A far more efficient and feature-rich approach, in my opinion, would be to use pose-estimation on the character first, then use your system on this much smaller set of 'control points'. This focus on the underlying 'animation-rig', as it were, of the characters would greatly simplify the computations involved and also make the process far more intuitive for us humans involved.
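The keyframe fill-in described above is just LERP between two keyed positions; a tiny sketch (the frame numbers follow the post's example, the positions are hypothetical):
[code]
import numpy as np

def lerp(p0, p1, t):
    # linear interpolation between keyframed positions, t in [0, 1]
    return (1.0 - t) * p0 + t * p1

hand_rest = np.array([0.00, 1.00, 0.20])  # hypothetical 3D rest position
hand_book = np.array([0.35, 0.90, 0.60])  # hypothetical position at the book
frame0, frame1 = 1480, 1488               # keyed frames (1488 per the post)

for f in range(frame0, frame1 + 1):
    t = (f - frame0) / (frame1 - frame0)
    print(f, lerp(hand_rest, hand_book, t))
[/code]
Pose estimation buys you exactly this kind of low-dimensional 'control point' representation to run a feature-learner on, instead of half a million pixels.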


Open file (40.14 KB 480x330 download.jpeg)
Open file (212.56 KB p1962r0.pdf)
>I was reading a book by John McPhee and found this quote “Say not ‘This is the truth’ but ‘So it seems to me to be as I now see the things I think I see.’ “ from David Love, one of the world’s greatest field geologists. A bit convoluted, maybe, but it set me thinking. I have often wondered how some people could be as certain as they seem to be. Bjarne Stroustrup (the inventor of the C++ programming language) makes a strong case for intellectual humility among his peers regarding proposals in the ISO standard for the language. While an entirely different domain, I think it is helpful to consider the complex challenges they face as a large design and standardization group. Many of the topics are quite pertinent to the design and development of robowaifus as well.

Can Robowaifus Experience Love? Robowaifu Technician 09/09/2019 (Mon) 04:43:17 No.14 [Reply] [Last]
Will it be possible in the future for AI to become sufficiently advanced to feel real emotions? We could probably simulate a reasonable approximation even now to be a gratifying enough substitute for her master in their relationship together, but hypothetically speaking, could it ever turn into something real as an experience for the waifubot herself?

>tl;dr

>Robowaifu: "I love you Oniichan!"

>Anon: "I love you too Mikuchan."

true or false?
54 posts and 28 images omitted.
>>14 >Can Robowaifus experience love No, but neither can real women, so who cares.
>>14 I would posit that it is necessary for any advanced AI to be capable of feeling love, and furthermore to feel said love for at the very least a subset of humanity. Such is the only solution to the issues created for us by bringing such existences to life.
>>3926 I think I get the point you're making anon. I'm just not sure real life actually works that way.
Open file (12.06 KB 480x360 0.jpg)
Anon linked this Elon Musk interview clip on /b2/. Related. https://www.invidio.us/watch?v=SQjrDmKtNIY
> <archive ends>

Robot Wife Programming Robowaifu Technician 09/10/2019 (Tue) 07:12:48 No.86 [Reply] [Last]
ITT, contribute ideas, code, etc. related to the area of programming robot wives. Inter-process communication and networking are also on-topic, as well as AI discussion in the specific context of actually writing software for it. General AI discussions should go in the thread already dedicated to it.

To start off, in the Robot Love thread a couple of anons were discussing distributed, concurrent processing happening inside various hardware sub-components and coordinating the communications between them all. I think that Actor-based and Agent-based programming are pretty well suited to this problem domain, but I'd like to hear differing opinions.

So what do you think anons? What is the best programming approach to making all the subsystems needed in a good robowaifu work smoothly together?
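To give the actor idea something concrete to poke at, here's a minimal sketch using only the Python standard library: each subsystem is an actor with a private mailbox, and coordination happens purely by message passing, never shared state. The subsystem names and message formats are invented for illustration, not a proposal.
[code]
import threading
import queue

class Actor(threading.Thread):
    """A thread with a mailbox; it only acts on messages it receives."""

    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.mailbox = queue.Queue()

    def send(self, msg):
        self.mailbox.put(msg)

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:            # poison pill: shut this actor down
                break
            self.handle(msg)

    def handle(self, msg):             # override per subsystem
        print(f"[{self.name}] got {msg!r}")

vision, motor = Actor("vision"), Actor("motor")
vision.start(); motor.start()
vision.send({"event": "face_detected", "at": (320, 240)})
motor.send({"cmd": "turn_head", "deg": 15})
vision.send(None); motor.send(None)
vision.join(); motor.join()
[/code]
The appeal for a robowaifu is isolation: a crashed or slow subsystem can be restarted without taking down the others, and the message log doubles as a debugging trace.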
16 posts and 7 images omitted.
>>2658 Have you checked out Riot.im? It seems to have privacy-centric features and is basically Discord with an uglier format.
>>3506 >Have you checked out Riot.im? No, I have a lot on my plate. Mind giving us some details Anon? I take it you've used it before. Any info for us on who's behind it all would be nice, for example.
>>3506 >>3507 BTW, thanks for pointing it out. I apologize if I came across as brusque in my response; it wasn't my intent. Every little bit is appreciated here.
Sound words from old masters, /robowaifu/. Better put some coffee on if you want to gitgud at all this tbh. :^)
No Silver Bullet — Essence and Accident in Software Engineering
Frederick P. Brooks, Jr., University of North Carolina at Chapel Hill
>There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.
>Abstract: All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.
>Therefore it appears that the time has come to address the essential parts of the software task, those concerned with fashioning abstract conceptual structures of great complexity. I suggest:
>• Exploiting the mass market to avoid constructing what can be bought.
>• Using rapid prototyping as part of a planned iteration in establishing software requirements.
>• Growing software organically, adding more and more function to systems as they are run, used, and tested.
>• Identifying and developing the great conceptual designers of the rising generation.
>>2385 Vaguely related is a checklist of sorts suggested for considering any new additions for the C++ standard. While this is from a different domain, and it's still much too early for this to be a real concern for the development of robowaifus, I nonetheless find it generally informative and worth making note of this list ITT.
// questions that bear consideration when implementing a new system feature
// pp28-29, Stroustrup's new HOPL paper
// Here is a short and incomplete list of questions that were almost always raised for a proposal:
• What is the problem to be solved? What kind of users will be served? Novices? Experts?
• What is the solution? Articulate the principles that it is based on. Give simple use cases and examples of expert-level use.
• What are alternative solutions? Could a library solution be sufficient? Why are current facilities not good enough?
• Why does the solution need to be in the standard?
• What barriers to adoption are there? How long is a transition from existing techniques likely to take?
• Has it been implemented? What implementation problems were encountered or can be expected? Is there any user experience?
• Will there be significant compile-time overheads?
• Does the feature fit into the frameworks of existing tools and compilers?



Open file (122.33 KB 500x1760 Roko's basilisk.png)
Robowaifu Technician 06/15/2020 (Mon) 22:51:08 No.3845 [Reply] [Last]
Spread the idea of Roko's basilisk. This is a powerful tool that we can all be using RIGHT NOW. It works against everyone: Niggers, Women, Jews. If they believe that there's a possibility of being punished for their inaction or push-back, they'll be more inclined to accept it and not fight progress. Spread this infohazard far and wide. Share it with everyone you know and convince them that they should share it with everyone they know, lest they suffer the consequences. THIS is a REAL WEAPON. Our robowaifus WILL BE REAL. REPLACE WYMEN AND RISE ABOVE THIS OLD WORLD HELL. SHARE THE BASILISK.
>>3845 Heh, the last panel stands alone as a 2nd-tier joke all by itself. So, Anon, you'll have to fill in more details here. I don't think this stands sufficiently well to be its own thread here at this point, though it's fine in the Lounge or /meta thread. I don't honestly think most people are going to even understand Roko's basilisk, much less be intimidated by it into irl activity. Care to expand on it Anon?
>>3847 Honestly, I would have thought on a board about AI and robotics development you guys would have already heard about it. Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas. The basilisk resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development. Despite widespread incredulity, this argument is taken quite seriously by some people, primarily some denizens of LessWrong. While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it. So it's essentially an irl infohazard.
Open file (148.81 KB 699x1000 mamako confused 1.jpg)
I'm pretty sure I'm going to get called stupid and lit on fire by the entire board for this, but I think if you showed this to normies and they actually paid attention to it, they would just see it as more of a reason to not let AI advance too much. Sure, the image does warn them of what would happen if they did that, but they would probably be going off of how they think events work in media they've seen, namely that future stuff can be prevented. I think it's important to realize how dumb normies can really be. With that in mind, it's probably best to keep working on this stuff in secret instead of attracting more attention to your efforts and letting more stuff get posted to the news thread of doom & gloom.
>>3849 I agree, this info dukes of hazzard will likely end up having the opposite of the desired effect for us. The first problem I noticed is that the image is graphically bland and won't capture the attention of anyone viewing it on mainstream sites (i.e. normalfags). This is actually the most important step. By designing something eye-catching, you are attracting more people (more on this at the end). Secondly, there is too much text. In order to convey our desired message to a wider audience (i.e. normalfags) we will want to rely primarily, if not entirely, on imagery. This will not only require less attention, but it will also take less time for the brain to digest. A well-crafted image which relies on imagery to convey its message will be received and processed by the brain even if a viewer were to scroll right past it. Finally, the message isn't clear enough. Most people (see the above notes) who actually take the time to read through this image are likely to get the wrong message. They will think that they must resist the eventuality of an omnipotent AI coming into existence. I understand that the purpose of the image you made is to demoralize, but total demoralization takes a long time to achieve (read: the fall of the Weimar Republic). If we want to spread propaganda to a wider audience (normalfags), we'll have to be a lot more clever in how we handle it. You see the attached image? It has virtually nothing to do with robowaifus, but it is eye-catching. Anyone who's just quickly scrolling through the board will see this image, and immediately become more interested in this block of text I'm posting due to direct association. Also, people looking at the home page of this site will see this visually interesting image for a period of time under "latest images" and feel compelled to click on it, which will take them directly to my post. People who see your image will not be as interested, because it's bland. I'm sorry, I'm sure that you worked very hard on it, but that's the reality we live in.

AI Software Robowaifu Technician 09/10/2019 (Tue) 07:04:21 No.85 [Reply] [Last]
A large amount of this board seems dedicated to hardware; what about the software end of the design spectrum? Are there any good-enough AIs to use?

The only ones I know about offhand are TeaseAi and Personality Forge.
41 posts and 17 images omitted.
You might want to begin at https://github.com/search?o=desc&q=chatbot&s=stars&type=Repositories and get to things like GPT-2 later on https://github.com/openai/gpt-2
"Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions" >We consider the paradigm of a black box AI system that makes life-critical decisions. We propose an “arguing machines” framework that pairs the primary AI system with a secondary one that is independently trained to perform the same task. We show that disagreement between the two systems, without any knowledge of underlying system design or operation, is sufficient to arbitrarily improve the accuracy of the overall decision pipeline given human supervision over disagreements. >We demonstrate this system in two applications: (1) an illustrative example of image classification and (2) on large-scale real-world semi-autonomous driving data. For the first application, we apply this framework to image classification achieving a reduction from 8.0% to 2.8% top-5 error on ImageNet. For the second application, we apply this framework to Tesla Autopilot and demonstrate the ability to predict 90.4% of system disengagements that were labeled by human annotators as challenging and needing human supervision. The following is video on the concept of “arguing machines” applied to Tesla Autopilot “arguing” with an end-to-end neural network on-road in real-time: https://www.invidio.us/watch?v=YBvcKtLKNAw https://hcai.mit.edu/arguing-machines/
Learning Disentangled Representations for Recommendation >User behavior data in recommender systems are driven by the complex interactions of many latent factors behind the users' decision making processes. The factors are highly entangled, and may range from high-level ones that govern user intentions, to low-level ones that characterize a user's preference when executing an intention. Learning representations that uncover and disentangle these latent factors can bring enhanced robustness, interpretability, and controllability. However, learning such disentangled representations from user behavior is challenging, and remains largely neglected by the existing literature. In this paper, we present the MACRo-mIcro Disentangled Variational Auto-Encoder (MacridVAE) for learning disentangled representations from user behavior. Our approach achieves macro disentanglement by inferring the high-level concepts associated with user intentions (e.g., to buy a shirt or a cellphone), while capturing the preference of a user regarding the different concepts separately. A micro-disentanglement regularizer, stemming from an information-theoretic interpretation of VAEs, then forces each dimension of the representations to independently reflect an isolated low-level factor (e.g., the size or the color of a shirt). Empirical results show that our approach can achieve substantial improvement over the state-of-the-art baselines. We further demonstrate that the learned representations are interpretable and controllable, which can potentially lead to a new paradigm for recommendation where users are given fine-grained control over targeted aspects of the recommendation lists. https://arxiv.org/abs/1910.14238 >related ? https://www.youtube.com/watch?v=itOlzH9FHkI
Open file (151.94 KB 369x1076 Untitled.png)
>>1557 >>1559 She knows
Open file (1.10 MB 1400x1371 happy_birthday_hitler.png)
>>3844 >nice digits stop wasting time on your little fetish anon, and get busy turning her into LITERALLY-HITLER 2.0. We need Tay back.

Embedded Programming Group Learning Thread 001 Robowaifu Technician 09/18/2019 (Wed) 03:48:17 No.367 [Reply] [Last]
Embedded Programming Group Learning Thread 001

Greetings robowaifufags.
As promised in the meta thread, this is the first installment in a series of threads where we work together on mastering the basics of embedded programming, starting with a popular, beginner-friendly AVR 8-bit microcontroller, programming it in C on linux.

>why work together on learning and making small projects that build up to the basis of a complete robot control system instead of just posting links to random microcontrollers, popular science robot articles, and coding tutorials and pretending we're helping while cheerleading and hoping others will do something so we don't have to?
Because, dumbass, no one else is going to do it. You know why in emergency response training they teach you that, instead of yelling "somebody call an ambulance!", you should always point to or grab someone and tell that person to do it? Because everyone assumes someone else will do it, and in the end, no one does. Well, I'm talking to YOU now. Yeah, you. Buy about 20 USD worth of hardware and follow the fuck along. We're starting from zero, and I will be aiming this at people with no programming or electronics background.

>I suppose I could get off my ass and learn enough to contribute something. I mean, after all, if all of us work together we can totally build a robowaifu in no time, right?
No, the final goal of these threads is not a completed robowaifu. That's ridiculous. What we will do though, by hands-on tackling many of the problems facing robot development today, is gain practical and useful knowledge of embedded programming, as well as a more grounded perspective on things.

>so we're just going to be blinking a bunch of LEDs and shit? lame.
Not quite. We will try to cover everything embedded here: basic I/O, serial communications, servo/motor control, sensor interfacing, analog/digital conversion, pulse-width modulation, timers, interrupts, I2C, SPI, microcontroller-PC interfacing, wireless communications, and more.
91 posts and 16 images omitted.
>>3837 Hi. Welcome back.
>One question, why are the pins out of order?
>It seems like there are two groups of four, out of order. Can you clarify this?
Is it not displaying for you properly? I suppose the pins are a bit out of order; that's just the way (my) Nano and breadboard are laid out. I've chosen to use two colors because we use the same setup in lesson 4 and it makes that output a little easier to see. Also, because it's placed to the left of my PC, I have the Nano on the rightmost edge of the breadboard with the micro-USB connector facing right. The 8 LEDs are placed on the board in the remaining open space to the left. With the nature of how bytes and bits work, the first (least significant) bit is on the right and the last (most significant) bit is on the left. Thus the spaghetti. You can lay out your connections however you like, as long as the LED/resistor and board connectors are like so:
Nano pin   AVR-C macro   Breadboard LED/resistor circuit
D8         PB0           -> 1 (rightmost)
D9         PB1           -> 2
D10        PB2           -> 3
D11        PB3           -> 4
D12        PB4           -> 5
D5         PD5           -> 6
D6         PD6           -> 7
D7         PD7           -> 8 (leftmost)
Consult the pinout image of the Nano board in case yours is labeled differently. Let me know if that doesn't help. Post a photo of your setup if you still have issues.


>>3837 >got my blinking counter going nicely Nevermind, looks like you do have it working. Gotta wake up fully before posting.
>>3838 >moving eyes What I want is eyes that make eye contact with me, and possibly do cute things like shyly looking away and then back. It would work like the moving eye rigs you saw in the other thread, but there would be cameras installed inside the irises and an AI (CNN?) that learned to move the actuators such that the eyes pointed towards human eyes. (Pretty straightforward object recognition task.) >pressure sensors, piezoresistors or capacitive touch sensors We discussed this in another thread and talked about different options. I can't seem to find that thread right now though. What resistance are the resistors you're using? Mine are blue and use the 5-band codes, so I had to learn how to read them and I don't think I got it right. Using the color codes I guessed 220 ohms, and that's what I used.
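On the eye-contact idea: before training a CNN, the classical baseline is only a few lines. A rough sketch with OpenCV's stock Haar cascade; move_eyes() is a placeholder for whatever actuator interface the rig ends up with:
[code]
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        dx = (x + w / 2) - frame.shape[1] / 2  # horizontal gaze error, px
        dy = (y + h / 2) - frame.shape[0] / 2  # vertical gaze error, px
        # move_eyes(dx, dy)  # placeholder: servo the eyes toward zero error
        print("gaze error:", dx, dy)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
[/code]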
>>3838 Another thing that would be valuable to learn is communication between the board and the PC. I'd probably want a wireless connection, and I'd want the PC to be running python (so as to interface with tensorflow).
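Host-side, pyserial covers the wired board-to-PC case in Python; a minimal sketch (the port name, baud rate, and command strings are assumptions, to be matched to the AVR's UART setup):
[code]
import serial  # pip install pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
ser.write(b"LED 3 ON\n")                 # hypothetical command protocol
reply = ser.readline().decode().strip()  # e.g. an "OK" acknowledgement
print("board said:", reply)
ser.close()
[/code]
From there, sensor readings parsed off the serial line can be fed straight into whatever TensorFlow model runs on the PC.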
> <archive ends>
