/robowaifu/ - DIY Robot Wives

Advancing robotics to a point where anime catgrill meidos in tiny miniskirts are a reality





JulayWorld fallback document - SAVE LOCALLY

JulayWorld onion service: bhlnasxdkbaoxf4gtpbhavref7l2j3bwooes77hqcacxztkindztzrad.onion



/robowaifu/meta-2: Electric Boogaloo Robowaifu Technician 05/13/2020 (Wed) 16:22:03 No.3108 [Reply] [Last]
Please Note: I'm going to continue occasionally repopulating our board with old posts. Feel free to post responses to them regardless, as I will generally be watching and can often respond appropriately to your posts. But be aware these will be 'log dumps', as it were, from our original board. Specifically, these bump-bot re-posts will all have rw@bump.bot as the email sig. Cheers.
>---
This thread is for off-topic and general discussions. FYI, our bunker is at https://anon.cafe/robowaifu/catalog.html probably should bookmark that now Anon. :^)
Also, let's discuss ways to get more people involved. How can we grow this board? As well, let's share general robowaifu ideas, etc., to help inspire each other. This thread is meant to improve /robowaifu/ in a general way and be a place to hang out with loosely off-topic talk.
>previous /meta thread:
>>38
Edited last time by Chobitsu on 06/07/2020 (Sun) 07:18:28.
27 posts and 10 images omitted.
>>4049 Alright, thanks I'll have a look at it. Have a good one, see you again soon I suppose.
See, now we're talking right here >>4047
So I thought about it last night, and actually the thing /robowaifu/ needs most is to make some paper. That's the flag to rally around. Everything that could fix this board stems from that, and the lack of obvious business opportunities with a good barrier to entry (too high for individuals, low enough for a distributed development team) is thus the sole preoccupation.
>>4050
So if you installed the libraries, the only thing to do is grab the CMake lists and main.cpp and mountainCar.cpp/.h. There's really not much to look at; I threw it together in like 45 minutes, then added some comments and uploaded it. Here's an interesting facet: you can turn the learning off in the step(cs, ...) function and the network will actually still improve on novel tasks; this is more apparent in the ALE.
Anyway, I have a lot of work to do. Catch y'all later.
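(The mountainCar.cpp source itself isn't reproduced in this log, so the following is only a rough C++ sketch of what a classic MountainCar environment with a step() and a learning toggle might look like. The Agent interface, the physics constants, and the random placeholder policy are assumptions for illustration, not the anon's actual code.)
[code]
// Minimal sketch of a MountainCar-style environment plus an agent whose
// step() can have learning switched off, loosely following the post above.
#include <algorithm>
#include <cmath>
#include <random>

struct State { double position; double velocity; };

class MountainCar {
public:
    State reset() {
        std::uniform_real_distribution<double> start(-0.6, -0.4);
        s_ = {start(rng_), 0.0};
        return s_;
    }
    // action: 0 = push left, 1 = no push, 2 = push right
    State step(int action, double& reward, bool& done) {
        s_.velocity += 0.001 * (action - 1) - 0.0025 * std::cos(3.0 * s_.position);
        s_.velocity = std::clamp(s_.velocity, -0.07, 0.07);
        s_.position = std::clamp(s_.position + s_.velocity, -1.2, 0.6);
        if (s_.position <= -1.2) s_.velocity = 0.0;   // hit the left wall
        done = s_.position >= 0.5;                    // reached the goal flag
        reward = done ? 0.0 : -1.0;                   // -1 per step until the goal
        return s_;
    }
private:
    State s_{};
    std::mt19937 rng_{std::random_device{}()};
};

// Stand-in for the network in the post: pick an action from the current state,
// and only update weights when `learn` is true (the toggle mentioned above).
class Agent {
public:
    int step(const State& cs, double reward, bool learn) {
        if (learn) { /* update network weights from (cs, reward) here */ }
        return policy_(rng_);                         // placeholder random policy
    }
private:
    std::uniform_int_distribution<int> policy_{0, 2};
    std::mt19937 rng_{std::random_device{}()};
};

int main() {
    MountainCar env;
    Agent agent;
    bool learn = true;                                // flip to false to test "learning off"
    for (int episode = 0; episode < 10; ++episode) {
        State cs = env.reset();
        double reward = 0.0; bool done = false;
        while (!done) {
            int action = agent.step(cs, reward, learn);
            cs = env.step(action, reward, done);
        }
    }
}
[/code]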
>>4051 Thanks for the details, Iggy. >the network will actually still improve on novel tasks; this is more apparent in the ALE That's interesting. I wonder how that works.
>>4051
>So I thought about it last night, and actually the thing /robowaifu/ needs most is to make some paper. That's the flag to rally around. Everything that could fix this board stems from that, and the lack of obvious business opportunities with a good barrier to entry (too high for individuals, low enough for a distributed development team) is thus the sole preoccupation.
So take up the mantle of 'Business Manager' or something for /robowaifu/, Iggy. This board has always been about herding cats in a general direction. Anons will work on a subset they are currently interested in. For me right now, that means mastering concurrency and parallelism using C++20. For you atm, that apparently means finding business opportunities for the community. "See a need, fill a need" as the old saying goes. Good luck. Whoever can capitalize on this progression to a full-fledged robowaifu market will become a very wealthy man indeed.
brb hell

/robowaifu/ Embassy Thread Chobitsu Board owner 05/08/2020 (Fri) 22:48:24 No.2823 [Reply] [Last]
This is the /robowaifu/ embassy thread. It's a place where Anons from all other communities can congregate and talk with us about their groups & interests, and also network with each other and with us. ITT we're all united under the common banner of building robowaifus as we so desire. Welcome.
Since this is the ambassadorial thread of /robowaifu/, we're curious what other communities who know of us are up to. So w/o doxxing yourselves, if this is your first time posting ITT, please tell us about your home communities if you wouldn't mind, Anons. What do you like about them, etc.? What brought you to /robowaifu/ today? The point here is to create a connection and find common ground for outreach with each other's communities. Also, if you have any questions or concerns for us, please feel free to share them here as well.
Edited last time by Chobitsu on 05/23/2020 (Sat) 23:13:16.
26 posts and 16 images omitted.
>>3743 Oh nice. I was thinking about reloading another one of the bigger threads from days of yore (I worked my way up from smallest to largest). Any requests from the catalog you'd like to see again /f/? >best gravity falls episode my dude.
Open file (139.97 KB 515x301 bionicfeel.png)
>>3744 IIRC, the Can Robowaifus Experience Love? thread had some great discussion, which I had a small part in. Of course, that topic has been touched upon in other threads to varying extents as well. Alternatively, the old Visual Waifu thread was massive, and also had a lot of valuable discussion. I've never actually seen Gravity Falls. Is it any good?
>>3745 >Can Robowaifus Experience Love? >Visual Waifu got it. i'll try to have them back up within a few days Anon. it's a tedious process... >gravity falls yea, i like it. but there's no accounting for taste haha. generally, it's a light-hearted and unpretentious show.
>>3746 Thanks fam, I appreciate all your hard work.
>>3745 >>3747 >Can Robowaifus Experience Love? >>14 >Visual Waifu >>240 There you go, fam.

Welcome to /robowaifu/ Anonymous 09/09/2019 (Mon) 00:33:54 No.3 [Reply] [Last]
Why Robowaifu? Most of the world's modern women have failed their men and their societies, feminism is rampant, and men around the world have been looking for a solution. History shows there are cultural and political solutions to this problem, but we believe that technology is the best way forward at present – specifically the technology of robotics. We are technologists, dreamers, hobbyists and geeks looking forward to a day when any man can build the companionship he desires in his own home. Not content to wait for the future, however, we are bringing that day forward. We are creating an active hobbyist scene of builders, programmers, artists and designers, using the technology of today, not tomorrow. Join us!
NOTES & FRIENDS
> Notes:
-This is generally a SFW board, given our primarily engineering focus. On-topic NSFW content is OK, but please spoiler it.
-Our bunker is located at: https://anon.cafe/robowaifu/catalog.html Please make note of it.
> Friends:
-/clang/ - currently at TBA, toaster-love NSFW. Metal clanging noises in the night.
-/monster/ - currently at https://smuglo.li/monster/, bizarre NSFW. Respect the robot.
-/tech/ - currently at >>>/tech/, installing Gentoo Anon? They'll fix you up.
-/britfeel/ - currently at https://anon.cafe/britfeel/, some good lads. Go share a pint!
-/server/ - currently at https://anon.cafe/server/, multi-board board. Eclectic thing of beauty.
-/f/ - currently at https://anon.cafe/f/res/4.html#4, doing flashtech old-school.
What is a Robowaifu?


Edited last time by Chobitsu on 06/13/2020 (Sat) 19:08:23.

Visual Waifus Robowaifu Technician 09/15/2019 (Sun) 06:40:42 No.240 [Reply] [Last]
Thoughts on waifus which remain 2D but have their own dedicated hardware. This is more on the artistry side, though AI is still involved. An example of an actual waifu product is the Gatebox.
gatebox.ai/sp/

My favorite example is Ritsu, she's a cute AI from Assassination Classroom whose body is a giant screen on wheels.
127 posts and 70 images omitted.
>>4135 I suppose I should clarify this a bit better. The account owner that posted that video is a poseur; he didn't author the software, nor is it his project. Here's the most current sauce I've found, posted by Jacco Bikker (also apparently the author of the software):
https://archive.org/details/WinAliceVersion2.2
The original project was a research tool done at Carnegie-Mellon:
https://en.wikipedia.org/wiki/Artificial_Linguistic_Internet_Computer_Entity
Led by Richard Wallace:
https://en.wikipedia.org/wiki/Richard_Wallace_(scientist)
I'll assume that clarifies things Anon.
>>4136 Nice research agent. From a quick glance the program performs worse than the GPT-2 based chatbot; its only good merit is that it has very good performance and doesn't take 10-30~ seconds to respond, though given its size it seems obvious that its vocabulary capabilities are severely limited.
>The account owner that posted that video is a poseur, he didn't author the software, nor is it his project.
So then all he did was modify the lines to make it more like a typical anime character, then. I looked through its text files and oh man does it look like hell to edit.
>>4135
>I'm going to leave it up for now unless something intentionally exploitative is discovered about it later on, which I don't really anticipate with it ATP.
Well, that's clarified then, since the source code of this program is available, though I have no idea how the hell those .aiml files are used, and the readme.md the author provides is highly informative.
>>4137 Yeah, it's a throwback to the old-school 'expert systems' approach (thus probably why it was basically abandoned). The reason it performs quicker with fewer resources is that it's mostly relying on pre-architected, canned responses with very little by way of statistical processing. Which brings me to your next point:
>though I have no idea how the hell those .aiml files are used
That's the actual encoding mechanism for these pre-canned responses. It's an XML markup variant created by this professor to support his research project with ALICE.
https://en.wikipedia.org/wiki/AIML
https://github.com/drwallace/aiml-en-us-foundation-alice
IMO, this entire approach is mostly a dead-end from the dark ages of AI, unless some automated way was devised to program these AIML files in advance--or some hyper-autist literally spent most of his entire life devoted to building 100'000s of response variations. Statistical approaches already are the 'future' today, and particularly once we can integrate neuromorphics along with the NLP processes.
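(AIML itself is XML, so rather than invent .aiml contents, here's a tiny C++ sketch of the underlying 'canned responses' idea being described: a hand-written pattern-to-template table with crude input normalization. It stands in for what one AIML category does, minus wildcards and recursion; it is not the ALICE engine.)
[code]
// Toy illustration of the "pre-architected, canned responses" approach:
// normalize the input, then look it up in a fixed pattern -> template table.
#include <algorithm>
#include <cctype>
#include <iostream>
#include <map>
#include <string>

std::string normalize(std::string s) {
    // Uppercase and strip punctuation, roughly how AIML normalizes input.
    std::string out;
    for (char c : s)
        if (std::isalnum(static_cast<unsigned char>(c)) || c == ' ')
            out += std::toupper(static_cast<unsigned char>(c));
    return out;
}

int main() {
    // Each entry is the moral equivalent of one AIML category:
    // pattern on the left, canned template on the right.
    const std::map<std::string, std::string> categories = {
        {"HELLO",             "Hi there, Anon."},
        {"WHAT IS YOUR NAME", "You may call me whatever you like."},
        {"DO YOU LOVE ME",    "Of course. That is what I was built for."},
    };

    std::string line;
    while (std::getline(std::cin, line)) {
        auto it = categories.find(normalize(line));
        std::cout << (it != categories.end()
                          ? it->second
                          : std::string("I have no pattern for that yet."))
                  << "\n";
    }
}
[/code]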
>>4139
>It's an XML markup variant created by this professor to support his research project with ALICE.
>XML
Big gay.
>Yeah, it's a throwback to the old-school 'expert systems' approach (thus probably why it was basically abandoned). The reason it performs quicker with fewer resources is that it's mostly relying on pre-architected, canned responses with very little by way of statistical processing.
Figures, so that's why its "intelligence" is severely limited; it's a surprise that this AI even won a prize three times for that. So if I get it right, it just quickly finds a pattern in the user's response and then scans over its own text files to find the closest match and makes a response based on that, so in short a very primitive form of chatbot.
>or some hyper-autist literally spent most of his entire life devoted to building 100'000s of response variations.
Sounds like a waste of time; it would probably be better to devise some kind of algorithm or, uh, malleable objects/entity component system that defines several aspects of how the AI should respond, or whatever those fancy terms are that get used by the likes of GPT, BERT and so on. It sounds like madness to me, editing thousands upon thousands of text files just to have more varied responses.
>>4140 Yep, you pretty much understand it all Anon.
>it's a surprise that this AI even won a prize three times for that.
It just shows you where the state of AI research in NLP was before 2005. GPGPU was just becoming an idea forming in the minds of researchers, and Nvidia hadn't released its ground-breaking CUDA toolkits yet either. Once the iPhone opened up the smartphone market ugghh, the demand for high-efficiency computational performance really began picking up steam, to the point where today TensorFlow is basically the state of the art. As is easy to tell, we still have a ways to go yet, but things are dramatically different now than in the days of yore when Chomsky's ideas ruled the roost.

Robowaifu Systems Engineering Robowaifu Technician 09/11/2019 (Wed) 01:19:46 No.98 [Reply] [Last]
Creating a functional Robowaifu is a yuge Systems Engineering problem. It is arguably the single most complex technical engineering project in history, bar none, IMO. But don't be daunted by the scale of the problem anon (and you will be if you actually think deeply about it for long, hehe), nor discouraged. Like every other major technical advance, it's a progressive process. A little here, a little there. In the words of Sir Isaac Newton, "If I have seen further it is by standing on the shoulders of Giants." Progress in things like this happens not primarily by leaps of genius--though ofc that also occurs--but rather comes chiefly by incremental steps towards the objective. If there's anything I'm beginning to recognize in life it's that the key to success lies mainly in one unwavering agenda for your goals: Just don't quit.

>tl;dr
Post SE and Integration resources ITT.

www.nasa.gov/sites/default/files/atoms/files/nasa_systems_engineering_handbook.pdf
Edited last time by Chobitsu on 09/26/2019 (Thu) 11:46:43.
>>98
From §1.1:
>"This handbook should be used as a companion for implementing NPR 7123.1, Systems Engineering Processes and Requirements,…"
Version 1B, Effective Date: April 18, 2013, is here:
snebulos.mit.edu/projects/reference/NASA-Generic/NPR_7123_1B.pdf

>pic sauce
linuxgizmos.com/linux-based-robonaut-2-preps-for-active-iss-duty/
Edited last time by Chobitsu on 09/26/2019 (Thu) 11:47:46.
>Systems Engineering
The kikepedia article is a pretty good intro to why SE is its own discipline.
en.wikipedia.org/wiki/Systems_engineering
Edited last time by Chobitsu on 09/26/2019 (Thu) 11:48:14.
Open file (31.59 KB 318x400 mCRL2book.jpg)
Open file (78.64 KB 786x812 ClipboardImage.png)
One of the goals we need to be striving for is effective modeling and behavioral analysis of the complex & interdependent systems within robowaifus. There is already a large body of literature on this topic. After a little research I've settled on mCRL2 as a good place to start, as well as the Actor-based approach to behavior. I'll be spending some time over the summer seeing about modeling some behavior like a robowaifu arm/hand/ipcnet subsystem as a test case. Pics related are the book & site for the tool, which is designed to facilitate creating and simulating communicating-process algebras.
https://www.mcrl2.org/web/user_manual/index.html
https://github.com/mCRL2org/mCRL2
I built the tool successfully on Manjaro Linux from the repo, and there are some pre-builts available in different package managers like Ubuntu's.
https://en.wikipedia.org/wiki/Algebra_of_Communicating_Processes
https://en.wikipedia.org/wiki/Process_calculus
https://en.wikipedia.org/wiki/Calculus_of_communicating_systems
https://en.wikipedia.org/wiki/Calculus_of_broadcasting_systems
https://en.wikipedia.org/wiki/Actor_model
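(mCRL2 specifications are written in its own process-algebra language, so this is not mCRL2 code; it's just a small C++ sketch of the Actor idea mentioned above, where each subsystem owns a mailbox and handles one message at a time. The subsystem names and message strings are made up for illustration.)
[code]
// Minimal actor: one worker thread per subsystem, draining a mailbox.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class Actor {
public:
    explicit Actor(std::string name)
        : name_(std::move(name)), worker_([this] { run(); }) {}
    ~Actor() { send("quit"); worker_.join(); }

    // Any other actor (or the main controller) can drop a message in the mailbox.
    void send(std::string msg) {
        { std::lock_guard<std::mutex> lk(m_); mailbox_.push(std::move(msg)); }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !mailbox_.empty(); });
            std::string msg = std::move(mailbox_.front());
            mailbox_.pop();
            lk.unlock();
            if (msg == "quit") return;
            std::cout << name_ << " handling: " << msg << "\n";
        }
    }

    std::string name_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> mailbox_;
    std::thread worker_;                 // started last, after the mailbox exists
};

int main() {
    Actor arm("arm"), hand("hand");
    arm.send("raise 30deg");
    hand.send("grip cup");
    // Destructors send "quit" and join the worker threads.
}
[/code]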
>>3720 Book related contains a paper outlining both the mCRL2 Toolset language and the workflow, starting on p21.
Interesting ideas about using ML to automate optimization of various system resources, primarily related to compute cores.
>Abstract—Efficient sharing of system resources is critical to obtaining high utilization and enforcing system-level performance objectives on chip multiprocessors (CMPs). Although several proposals that address the management of a single microarchitectural resource have been published in the literature, coordinated management of multiple interacting resources on CMPs remains an open problem.
>We propose a framework that manages multiple shared CMP resources in a coordinated fashion to enforce higher-level performance objectives. We formulate global resource allocation as a machine learning problem. At runtime, our resource management scheme monitors the execution of each application, and learns a predictive model of system performance as a function of allocation decisions. By learning each application's performance response to different resource distributions, our approach makes it possible to anticipate the system-level performance impact of allocation decisions at runtime with little runtime overhead. As a result, it becomes possible to make reliable comparisons among different points in a vast and dynamically changing allocation space, allowing us to adapt our allocation decisions as applications undergo phase changes.
>Our evaluation concludes that a coordinated approach to managing multiple interacting resources is key to delivering high performance in multiprogrammed workloads, but this is possible only if accompanied by efficient search mechanisms. We also show that it is possible to build a single mechanism that consistently delivers high performance under various important performance metrics.
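(For readers skimming the abstract, a very rough C++ sketch of the general idea: learn a model that predicts performance from an allocation decision, search candidate allocations with that model, apply the best one, then update the model from what was actually observed. The linear model and random search below are placeholder assumptions, not the paper's actual mechanism.)
[code]
// Toy model-guided resource allocation loop.
#include <array>
#include <iostream>
#include <random>

constexpr int kResources = 3;                       // e.g. cache ways, BW, power
using Allocation = std::array<double, kResources>;  // fraction given to one app

struct PerfModel {
    Allocation w{};                                  // learned weights
    double predict(const Allocation& a) const {
        double p = 0.0;
        for (int i = 0; i < kResources; ++i) p += w[i] * a[i];
        return p;
    }
    void update(const Allocation& a, double observed, double lr = 0.1) {
        double err = observed - predict(a);          // simple online regression
        for (int i = 0; i < kResources; ++i) w[i] += lr * err * a[i];
    }
};

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> frac(0.0, 1.0);
    PerfModel model;

    for (int interval = 0; interval < 20; ++interval) {
        // Search a handful of candidate allocations using the model only.
        Allocation best{}; double bestPred = -1e9;
        for (int c = 0; c < 16; ++c) {
            Allocation cand;
            for (double& x : cand) x = frac(rng);
            double p = model.predict(cand);
            if (p > bestPred) { bestPred = p; best = cand; }
        }
        // Stand-in for actually running the workload under `best`:
        double observed = 2.0 * best[0] + 1.0 * best[1] + 0.5 * best[2];
        model.update(best, observed);
        std::cout << "interval " << interval << " predicted " << bestPred
                  << " observed " << observed << "\n";
    }
}
[/code]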

AI, chatbots, and waifus Robowaifu Technician 09/09/2019 (Mon) 06:16:01 No.22 [Reply] [Last]
What resources are there for decent chatbots? Obviously I doubt there would be anything passing the Turing Test yet. Especially when it comes to lewd talking. How close do you think we are to getting a real-life Cortana? I know a lot of you guys focus on the physical part of robowaifus, but do any of you have anything to share on the intelligence part of artificial intelligence?
138 posts and 71 images omitted.
>>4121 Glad to hear you cleaned things up. I too started on Linux Mint when I escaped the MicroShat/NSA Wangblows Gulag. It was a huge relief to leave that bondage behind tbh. Now, I've since moved on to Manjaro+XFCE and my toaster box runs much faster for it.
>how do I use it?
I think the link I gave you from the PyTorch github gives the example. IIRC, it starts like conda with an argument or two.
>Is it going to be only for C++?
Well, the point isn't necessarily to exclude any other languages from being used (for example, C is already used on this hardware) but simply to enable a single language with great abstraction and performance characteristics to be used everywhere. Right now such a thing doesn't exist at all, which (indirectly) is one important reason why you and I are having a hard time getting things working correctly: so many different dependencies, slow languages, different standards, etc. etc. Once C++ can run literally everywhere on everything, then it will make things like this go much smoother in the future (and be cheaper too).
Open file (1.50 MB 1920x1080 Stalker.jpg)
>>4122 (checked)
>Glad to hear you cleaned things up. I too started on Linux Mint when I escaped the MicroShat/NSA Wangblows Gulag. It was a huge relief to leave that bondage behind tbh. Now, I've since moved on to Manjaro+XFCE and my toaster box runs much faster for it.
I made the switch to Linux Mint when I was still using Windows 7 and Pajeetsoft were developing Windows 8 at the time, which I think was about 2.5 years ago. It took me around 2 weeks to get used to how Linux worked, and I messed up my system only once. The main gripe I have with Linux is that Wine sometimes requires tons of fiddling with values to get a game to work properly, and the common issue I had was that Wine fucked up the game window position/size, which I think is probably related to shitty X11 coding. It's a shame that Linux Mint decided to follow the herd and switch to systemdicks instead of using an alternative init system; it would have proven more valuable as a viable alternative to Ubuntu. The second issue is that Linux provides no support whatsoever for keeping an older version of a program, which can sometimes be necessary.
>I think the link I gave you from the PyTorch github gives the example. IIRC, it starts like
Ah right, I thought conda was a package manager or something. Is there even any benefit to using anaconda over python(3)? Also, for some unknown reason the train.sh batch script is working now, but I forgot to leave my computer on last night so I didn't get to check the result with gpt2-medium; a quick test with distilgpt2 seems to work... till it crashes with the KeyError: 'loss'.
>Right now such a thing doesn't exist at all, which (indirectly) is one important reason why you and I are having a hard time getting things working correctly: so many different dependencies, slow languages, different standards, etc. etc. Once C++ can run literally everywhere on everything, then it will make things like this go much smoother in the future (and be cheaper too).
Hmm, that's a shame. I hope in the future when such technology is available it won't leave toaster machines behind. I find it weird that out of all the programming languages in existence, the data scientists decided to use Python for their heavy-duty programming instead of using a faster scripting language.
>AMD rocm
Ah hell, I've only got a (((Ivy Bridge i7-3770K))) CPU, which is 3rd generation clocked at 3.50GHz, and rocm requires at least a 4th generation one, so it's gg no re for me with HPI processing support; feels 2012-toaster-tier, man. The damn warning message should have reflected that my CPU/mainboard is unsupported instead of shitting out a vague memory allocation error. Welp, all that effort to get it working was in vain.
>>4126
>Till it crashes with the KeyError: 'loss'.
Well, just searching KeyError: 'loss' led me to understand it's probably python having a dictionary lookup issue:
https://wiki.python.org/moin/KeyError
and that maybe it has something to do with the epochs?
https://stackoverflow.com/questions/56847576/keyerror-val-loss-when-training-model
>I find it weird that out of all the programming languages in existence, the data scientists decided to use Python for their heavy-duty programming instead of using a faster scripting language.
Very few scientists are actually coders. They just want to use something simple that allows them to move forward with their research so they can publish their papers (if they want to stay alive as a scientist). Python is pretty easy in general, so lots of non-devs sort of gravitated towards it. Do that for a couple of decades and you have the current situation today, I suppose. But yea, we need to optimize everything to even have a narrow chance at succeeding in building our own robowaifus. Power consumption and compute efficiency are certainly near the top of the stack for that need, and C++ was (and will be even more so) the best all-around choice for us in general. Heh, microcontrollers are even less powerful than your toaster, and there will probably be at least a dozen of them inside a robowaifu kit.
>Welp, all that effort to get it working was in vain.
Sorry to hear it Anon. If it's any consolation, your processor is better than my Atom processor w/ integrated graphics. :^)
Open file (57.62 KB 565x542 you-235.jpg)
Open file (207.12 KB 1400x1000 154823489234.jpg)
>>819 Well shit, I tried out this program using the Renamon voice from the zandronum forum, dl link: https://files.catbox.moe/qedypl.wad (can be opened with slade), and all I got is just robotic gibberish; using the xbox hueg datasets is not improving the output. I should probably scavenge more voices of her from the series to get a better result, I suppose. Within 6 months.
Also, did the author leave out the ability to save the result to an .ogg file on purpose? At least running the GUI program there wasn't a button available for it. Meh, it doesn't matter that much, considering it takes a good amount of time to generate the TTS from only a few lines of text, so using it in combination with the talktowaifu program is out of the question, unless an anon's got a NASA computer, heh.
>>4127
>Well, just searching KeyError: 'loss' led me to understand it's probably python having a dictionary lookup issue:
>and that maybe it has something to do with the epochs?
Could be. I guess when Kokubunji comes back he will be able to chime in and have a more elaborate response for this problem. I don't know how to fiddle around with the python script to fix it because I'm too inexperienced with the language.
>Power consumption and compute efficiency are certainly near the top of the stack for that need,
Why not Uranium-235 powered Waifubots? High power with only 3.6 roentgens; the radiation level is nothing to worry about as the human body is capable of getting used to it, and with some vodka it can be reduced even more :-----DD, just ask any stalker for further information.
>But yea, we need to optimize everything to even have a narrow chance at succeeding in building our own robowaifus.
I would be content if I had a robowaifu in the form of a virtual desktop AI assistant capable of doing several tasks, whatever is needed to further enhance the operation of an operating system, including text-to-speech support and random chatter.
>If it's any consolation, your processor is better than my
So does this make my computer the kang of toasters? :^)
>>4132
>I would be content if I had a robowaifu in the form of a virtual desktop AI assistant capable of doing several tasks, whatever is needed to further enhance the operation of an operating system, including text-to-speech support and random chatter.
yeah I think we all basically came to roughly that conclusion in the Visual Waifu thread.
>>4132
>So does this make my computer the kang of toasters? :^)
sure absolutely!

Open file (259.83 KB 1024x576 2-9d2706640db78d5f.png)
Single board computers & micro-controllers Robowaifu Technician 09/09/2019 (Mon) 05:06:55 No.16 [Reply] [Last]
Robotic control and data systems can be run by very small and inexpensive computers today. Please post info on SBCs & micro-controllers.

en.wikipedia.org/wiki/Single-board_computer
https://archive.is/0gKHz

beagleboard.org/black
https://archive.is/VNnAr
24 posts and 20 images omitted.
Open file (108.92 KB 911x1277 scheme.png)
>>3851 That does look interesting Anon, thanks. A bit pricey atm for our purposes IMO, but still worth a look-see.
I suppose this is the most relevant thread atm on /robowaifu/ to post this. There has been a slow push for open sauce RISC chip designs & manufacturing. This movement has recently gotten a potentially big boost from Google (what could possibly go wrong, Anon?) with their Skywater PDK (process design kit).
https://github.com/google/skywater-pdk
https://en.wikipedia.org/wiki/Process_design_kit
> via nano /g/4908
Open file (11.66 KB 305x155 Selection_117.png)
>>4128 >yfw the main tech company Google is fronting on this project starts the actual skynet
>>4128 related. https://invidio.us/watch?v=EczW2IWdnOM fair warning, this group he represents are hyper CoC-suckers

C++ General Robowaifu Technician 09/09/2019 (Mon) 02:49:55 No.12 [Reply] [Last]
C++ Resources general

The C++ programming language is currently the primary AI-engine language in use.

isocpp.org/get-started
https://archive.is/hp4JR

stackoverflow.com/questions/388242/the-definitive-c-book-guide-and-list
https://archive.is/OHw9L

en.cppreference.com/w/
https://archive.is/gt73H

www.youtube.com/user/CppCon
https://archive.is/QGynC

BTW if you're new to C++ and you're stuck on Windows (either you can't or won't upgrade to Linux) then you can at least incorporate a good, open shell into your system to begin with so you can follow along. Start at this link, and if you have any questions just ask ITT:
www.msys2.org/
https://archive.fo/p3EUc
Edited last time by Chobitsu on 10/05/2019 (Sat) 20:16:32.
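(If you're following along after installing a shell, here's a minimal sanity-check program; the exact MSYS2 package names and install commands are on the site linked above rather than reproduced here.)
[code]
// A first program to confirm your toolchain works, whichever shell you use.
// Typical build & run:  g++ -std=c++17 -Wall hello.cpp -o hello && ./hello
#include <iostream>
#include <string>

int main() {
    std::string who = "Anon";
    std::cout << "Hello, " << who << "! /robowaifu/ C++ general checking in.\n";
    return 0;
}
[/code]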
119 posts and 46 images omitted.
>>3855 >Actually, this looks to be an addition to ACM's next HOPL, which is an important and relatively rare (the next HOPL will be only the 4th in the ACM's 70+ years) event. If so, and if I'm not entirely mistaken, then that puts Bjarne into the sole unique club of having 3 entries in HOPL. Along with his Turing Award that marks quite a legacy for his career. Both now confirmed. >"...With this paper, C++ becomes the first and only language to be presented three times at HOPL and I become the first and only person to present three times. HOPL happens every 15 years. The other papers are now accessible as the Proceedings of the ACM on Programming Languages, Volume 4, Issue HOPL, June 2020: https://dl.acm.org/toc/pacmpl/" http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2184r0.pdf
>>4116 final edit here. >
How are you supposed to run this on a microcontroller inside a robot's head?
>>4123 Hey Anon, so we're kind of divided as a group about where to place the compute resources for a robowaifu. Some of us think we should keep the sensitive SBCs inside a shielded little 'breadbox' inside the robowaifu's chest to keep them safe from RF interference and physical shock/damage. Others of us think the biomimetic approach is best and want to put the SBCs inside the head. The little microcontrollers can (and will) be distributed around the robowaifu's body, generally near where they are needed, and all networked together. Just fyi, we have a thread about these devices. >>16
>Single board computers & micro-controllers
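(There's no agreed-upon /robowaifu/ wire format yet, so the following is only an illustrative C++ sketch of the kind of compact, fixed-size status frame a limb microcontroller might send back to the central SBC over a shared bus. Every field name and size here is an assumption.)
[code]
// One small, fixed-size telemetry frame from a joint controller to the SBC.
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct JointStatus {
    uint8_t  node_id;        // which microcontroller (e.g. 0x12 = left wrist)
    uint8_t  joint_id;       // joint index local to that controller
    int16_t  angle_cdeg;     // joint angle in centi-degrees
    int16_t  velocity_cdeg;  // angular velocity in centi-degrees / s
    uint16_t current_ma;     // motor current draw, for safety monitoring
    uint8_t  flags;          // bit 0: fault, bit 1: endstop hit
    uint8_t  crc8;           // simple integrity check over the bytes above
};
#pragma pack(pop)

static_assert(sizeof(JointStatus) == 10, "keep the frame small for slow buses");

// CRC-8 (polynomial 0x07) over the first len bytes of the frame.
uint8_t crc8(const uint8_t* data, std::size_t len) {
    uint8_t crc = 0;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc & 0x80) ? static_cast<uint8_t>((crc << 1) ^ 0x07)
                               : static_cast<uint8_t>(crc << 1);
    }
    return crc;
}

int main() {
    JointStatus msg{0x12, 3, 1500, -200, 350, 0, 0};
    msg.crc8 = crc8(reinterpret_cast<const uint8_t*>(&msg), sizeof(msg) - 1);
    // ...hand the 10-byte frame to whatever bus driver (CAN, RS-485, etc.)
}
[/code]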
>>648 Not sure why I didn't just post the book itself.

New machine learning AI released Robowaifu Technician 09/15/2019 (Sun) 10:18:46 No.250 [Reply] [Last]
OPEN AI / GPT-2
This has to be one of the biggest breakthroughs in deep learning and AI so far. It's extremely skilled in developing coherent humanlike responses that make sense, and I believe it has massive potential; it also never gives the same answer twice.
>GPT-2 generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like—it adapts to the style and content of the conditioning text. This allows the user to generate realistic and coherent continuations about a topic of their choosing
>GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets.
Also, the current public model shown here only uses 345 million parameters; the "full" AI (which has over 4x as many parameters) is being withheld from the public because of its "potential for abuse". That is to say, the full model is so proficient in mimicking human communication that it could be abused to create new articles, posts, advertisements, even books, and nobody would be able to tell that there was a bot behind it all.
<AI demo: talktotransformer.com/
<Other Links:
github.com/openai/gpt-2
openai.com/blog/better-language-models/
huggingface.co/


Edited last time by robi on 03/29/2020 (Sun) 17:17:27.
36 posts and 19 images omitted.
>>2073 it was just ironic shitposting anon. we appreciate the input. i was merely poking fun at their choice of names and thematics.
>>2037
>Textual Entailment
A human reading some text and inferring that a hypothesis is most likely true is textual entailment. It's different from logical consequence in that it's just a hypothesis. If an anon was working on a robowaifu with big tiddies, you might hypothesize he's a tiddie man. Robowaifus need this to gain insight from text and process it to summarize information and answer questions. Typically chatbots emulate this by predicting things from the semantics they've been trained on, but this is not true textual entailment. People have the ability to imagine and hypothesize things they've never seen or even thought about before. Progress in curious AI that can imagine possibilities will help with this.
>Semantic Similarity
This refers to the meaningful relationships between concepts. Steering wheel and car are closer together physically than cat and car, but cat and car are much more similar in spelling. Robowaifus need this for understanding context, metaphors and euphemisms. Usually this is implemented by creating embeddings for words, giving each a vector of continuous values. Each dimension in the vector separates words by their most gross common differences first and moves towards learning the more subtle and uncommon nuances. In my opinion this is going to be a dead end though, because it isn't really how the brain connects concepts. We can invent completely new concepts with original differences and already know how similar other concepts are to them, because our brains are densely connected in intricate interrelated networks where not only the connections are important but also the timing of firings. I expect progress to come in this from applying spiking neural networks to natural language processing.
>Reading Comprehension
Is the ability to read text and integrate it with what you already know to grasp its meaning. It requires being able to know the meaning of the words and understand all the relations between them. If you read a book when you're young and enjoy it one way, then read it when you're older and enjoy it on a much deeper level, that's increased reading comprehension. This is important for robowaifus to grasp deeper meanings, such as for a research assistant reading difficult texts to gain insights. Most chatbots have no reading comprehension. They're just making statistical predictions instead of processing and reasoning about what they're reading. I feel this could be improved in the short term by giving algorithms some agency over the text they choose to read and time to process and lower their uncertainty before outputting a prediction. Unfortunately most NLP approaches are trained in a way that makes them extremely fragile to small changes, and they aren't capable of doing online learning to quickly absorb information in one shot. Online learning in NLP hasn't received much research attention yet because large-scale differentiable memory hasn't been feasible until recently, so there should be some exciting progress in this coming in the next few years.
>Commonsense Reasoning
Similar to textual entailment. It's based on common experience. If you're holding an object and let go of it, it's common sense that it's going to fall. Robowaifus need this to make predictions about the world from their experiences. A robowaifu playing and learning about the world needs to be able to intuit that letting go of a grasped object causes it to fall.
Very little AI research has gone into this, but a major breakthrough was made with hindsight experience replay, which can continuously learn from all its experiences.
>Sentiment Analysis
This is being able to grasp the emotion of text and understand if it's positive, neutral or negative, or if it's angry, sad, ironic, happy, excited, etc. Troll farms use this to find sites and posts speaking against the things they're being paid to defend and to discover tensions within a community to split it apart. Social 'scientists' also use it to study and critique internet communities. With sentiment analysis robowaifus can understand the emotional context of what you're saying and respond appropriately, knowing when to give you hugs and when to tell you you're being a wimp.
>Linguistic Acceptability
Just a fancy term for grammaticality. Robowaifus have to understand the rules of a language to construct grammatically correct sentences for communicating clearly with others. Most sentences people write are completely new, but we can make sense of what others are saying because we follow agreed-upon rules. Like this if talking started I did. It becomes much more difficult to understand what I'm trying to say. A symbolic approach to this is identifying the parts being said, deconstructing them into a sentence tree and checking that the structure follows grammar rules. Most approaches don't even care about this. They just leave it to the language model to figure out what to pay attention to and estimate what the next word should be.
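(A tiny C++ sketch of the word-embedding idea described under 'Semantic Similarity' above: each word gets a vector, and similarity is the cosine of the angle between vectors. The 4-dimensional toy vectors are invented for illustration; real embeddings are learned from data and have hundreds of dimensions.)
[code]
// Cosine similarity between toy word embeddings.
#include <cmath>
#include <iostream>
#include <map>
#include <string>
#include <vector>

double cosine(const std::vector<double>& a, const std::vector<double>& b) {
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

int main() {
    // Invented values; dimensions might loosely capture "vehicle-ness",
    // "animal-ness", "is-a-part", "is-soft".
    const std::map<std::string, std::vector<double>> emb = {
        {"car",            {0.9, 0.0, 0.1, 0.0}},
        {"steering wheel", {0.7, 0.0, 0.9, 0.0}},
        {"cat",            {0.0, 0.9, 0.0, 0.8}},
    };
    std::cout << "car vs steering wheel: "
              << cosine(emb.at("car"), emb.at("steering wheel")) << "\n";
    std::cout << "car vs cat:            "
              << cosine(emb.at("car"), emb.at("cat")) << "\n";
}
[/code]
With these toy vectors, "car" and "steering wheel" come out close (about 0.7) while "car" and "cat" come out at 0, which is the gist of the spelling-vs-meaning point above.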
>>2220 Sorry I never got back to thanking you for this detailed response Anon. At first I wanted to wait until I had studied everything you mentioned in depth so I would have a cogent response without being embarrassing. Then I plainly forgot about the post among the other distractions here and IRL. Obviously this was rude of me, and even though I still don't have a cogent response ready, at the least I'd like to thank you since I just rediscovered my oversight. Cheers.
>>2220 >>4084 Well, I guess it can be screencapped at least for posterity's sake, for when other anons come in and ask a similar question.
>>4106 yes, good thinking. we'll be making a general glossary type thread as well, so we can add this to it.

Mycroft: Open Source Alexa Robowaifu Technician 09/18/2019 (Wed) 11:18:41 No.402 [Reply] [Last]
We could install a modified version of Mycroft as the personalities of our waifus, at least until we get something better:

www.youtube.com/watch?v=Ud3XLEGIu8U

A Raspberry Pi and a display can be gotten for $70:
www.amazon.com/LANDZO-Touch-Screen-320480-Raspberry/dp/B01IGBDT02/

Of course the battery is another ~$50.
8 posts omitted.
>>4074 >what a let down. It's kind of a botnet if it's relying on a cloud anyway, so yea we'll need to devise our own Anon.
Open file (12.45 KB 796x128 YOU HAVE NOTHING.png)
>>4078 I agree, it seems to be a bit useless at its current stage, and the "chatbot", if it can even be called that, is even worse than the ParlAI one I tested. I tried following the steps that remove this cloud feature and it doesn't seem to be working, welp.
>>4080 You might have a look at the TacoTron thread Anon. The more current version (not sure if it's linked in that thread yet) seems to be doing a breddy gud job of TTS. I think a simple patch to, say, TalkToWaifu to turn its output into speech should be both doable and would be a nice addition to what we currently have going here. Good luck Anon.
>>4081
>You might have a look at the TacoTron thread Anon. The more current version (not sure if it's linked in that thread yet) seems to be doing a breddy gud job of TTS.
Alright, I'll check that one out soon.
>I think a simple patch to, say, TalkToWaifu to turn its output into speech should be both doable and would be a nice addition to what we currently have going here. Good luck Anon.
That would definitely be great, though I'm just a programmer pleb myself and I'm busy doing other types of projects which are more text-game related, not (robo)waifu, so I cannot add those functionalities myself.
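(As a sketch of the 'patch TalkToWaifu to speak its output' idea: hand each generated reply to an external TTS program. Where exactly this would hook into TalkToWaifu is an assumption, and espeak is only a stand-in for the TacoTron-based TTS discussed in that thread.)
[code]
// Speak one generated reply by shelling out to an external TTS program.
#include <cstdlib>
#include <string>

void speak(const std::string& reply) {
    // Naive quoting, fine for a demo; a real patch would pass text via stdin
    // or call directly into the TTS of choice instead of std::system.
    std::string cmd = "espeak \"" + reply + "\"";
    std::system(cmd.c_str());
}

int main() {
    speak("Welcome home, Anon.");
}
[/code]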
>>4082 >so I cannot add those functionalities myself. perfectly ok, we're all busy with our own interests here. don't worry about being 'pleb', it's a skill that gets better with practice. just don't quit is how you get better all the time.
