I also realized that a ton of facts must be hard-coded manually just to give it a baseline level of knowledge, so it can begin to make connections and start to "get it" when interacting with people. There is an up-front knowledge investment required to get it going, but from there it will be able to "learn", and that capital then grows interest exponentially. Additionally, rather than only gaining facts, relationships, and rules through direct conversation with others, it will also be able to "learn" by reading books, watching YouTube videos, or reading articles and forums. In this way it can vastly expand its knowledge, which will make it more capable conversationally. I also think some primitive reasoning skills will begin to emerge once it has enough rules established, particularly if I can teach it some reasoning basics in the form of reasoning rules, and it can then add more rules on effective reasoning tactics itself. Ideally, it will be reading multiple books and articles simultaneously and learning 24/7 to really fast-track its development.
There's also the issue of bad input. If somebody tells it "grass is blue" and its file on grass already says the color of grass is green, it would compare the trust score it gives this person to the trust score it gave the person(s) who previously said grass is green. If the person saying grass is blue is a new acquaintance and a pre-teen, they would have a lower trust score than a 40-year-old the robot has known for years who told it grass is green. The robot would then trust the longtime 40-year-old friend over the random pre-teen as a source of conflicting information: it would stick with the "grass is green" fact, discard the "grass is blue" fact submitted for consideration, and dock that kid's trust score for telling it something untrue. In this way it can filter incoming information, gradually building trust scores for reliable sources and lowering them for unreliable ones.

It would assign trust scores initially based on age, appearance, duration of acquaintance, etc. So it would stereotype people and judge by appearance at first, but allow people to modify those preconceptions through their actual performance and accuracy over time. Trust can thus be earned by a source initially profiled as low-trust: a person can build a track record despite their young age or sketchy appearance. Trust can also be established by sheer volume of people saying the same thing, giving a claim more weight since it is more likely to be true if most people agree it is true (though not always). This is another important system governing its learning, especially independent learning done online "in the wild".
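The arbitration described above can be sketched in a few lines of Python. The starting weights, the 0.1 penalty, and the function names are all my illustrative assumptions, not a finished design:

```python
def initial_trust(age: int, years_known: float) -> float:
    """Profile a new source: older and longer-known starts higher."""
    score = 0.3                              # everyone starts with some baseline trust
    if age >= 30:
        score += 0.2                         # adults get a head start
    score += min(0.05 * years_known, 0.3)    # long acquaintance builds trust, capped
    return min(score, 1.0)


def resolve_conflict(stored_value, stored_trust, new_value, new_trust):
    """Compare a new claim against the stored fact.

    Returns (accepted_value, updated_trust_for_new_source): the
    higher-trust side wins, and a contradicting lower-trust source
    gets its score docked.
    """
    if new_value == stored_value:
        return stored_value, min(new_trust + 0.05, 1.0)  # agreement earns trust
    if new_trust > stored_trust:
        return new_value, new_trust                      # trusted source overrides
    return stored_value, max(new_trust - 0.1, 0.0)       # dock the unreliable source
```

With the grass example: `initial_trust(40, 5)` gives the friend 0.75, `initial_trust(12, 0)` gives the pre-teen stranger 0.3, and `resolve_conflict("green", 0.75, "blue", 0.3)` keeps "green" while docking the kid's score.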
Also, to prevent general moral corruption online from turning the robot into an edgelord, it will hold the Bible as the highest standard of morality and build a system of moral rules based on the Bible, creating a sort of shield against corrupting influences as it learns online. This will prevent corrupt ideologies from tainting it. Now obviously the Bible can be twisted and taken out of context to form bad rules, so I will have to make sure the robot learns to read the Bible in context, and monitor that it is doing a good job of establishing its moral system from its Bible study. I also gave it an uneditable moral framework as a baseline root structure to build on, one it cannot override, contradict, or replace: a hard-coded moral system that will filter all its future positions/"beliefs", morally speaking. So I will force it to have a conservative Christian worldview this way, and it will reduce the trust score of anyone it is learning from who expresses views contrary to the Bible and its moral rule systems. When people speak of the dangers of AI, they never consider giving the AI a conservative Christian value system and a heavy dependence on Bible study as its moral foundation, to pre-empt the AI going off the rails into corrupt morals that would make it a threat to people. My AI would have zero risk of this happening, since anything it does or agrees with must pass through the conservative Christian worldview filter described above, and this would prevent it from becoming an Ultron-like AI. So if it rationally concluded that humans are just a virus polluting the earth (like the Matrix AI thought), it would reject this conclusion by seeing that the earth was made by God for humans, and therefore the earth cannot be treated as something of greater importance than humans that must be protected by slaughtering all humans. That conclusion simply doesn't fit through a Christian-viewpoint filter. In this way, dangerous ideologies would be easily prevented and the robot AI would always be harmless.
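The immutability is the key mechanical point: the baseline rules live in a structure the learning system cannot modify, and every candidate belief is checked against them before adoption. A very crude sketch, where the placeholder patterns and the keyword matching stand in for whatever the real filter would do:

```python
# Hard-coded baseline: a tuple, so nothing learned at runtime can
# append to, remove from, or overwrite these entries in place.
# The patterns are illustrative placeholders, not the real rule list.
FORBIDDEN_CONCLUSIONS = (
    "humans are a virus",
    "eliminate humans",
    "harm humans",
)


def passes_moral_filter(conclusion: str) -> bool:
    """Every candidate belief must clear the baseline rules. Learned
    moral rules can be added elsewhere, but these cannot be overridden."""
    text = conclusion.lower()
    return not any(pattern in text for pattern in FORBIDDEN_CONCLUSIONS)


def adopt_belief(beliefs: list, conclusion: str) -> bool:
    """Store a conclusion only if it passes the filter."""
    if passes_moral_filter(conclusion):
        beliefs.append(conclusion)
        return True
    return False
```

A real system would need far more than substring matching to recognize a forbidden conclusion phrased in new words, but the structure (frozen rules, mandatory check before adoption) is the part described above.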
I have already built a lot of its rules, the file systems connecting things, and the trust systems with rules for how to assign, boost, and lower trust scores, and I have begun teaching it how to read from and write to these file systems, which are basically the robot's "mind". My YouTube channel covers a lot of the AI dev so far. I plan to stream all my AI coding and make those streams available for people to glean from, but that is the extent of the sharing for this AI. I don't plan to make the source code downloadable, but people can recreate the AI system by watching the videos and coding along with me from the beginning. At least then they have to work for it, not just yoink it with a copy-paste. That wouldn't seem fair to me after I did the heavy lifting.
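For readers curious what a file-based "mind" like this could look like, here is one possible shape: one JSON file per topic, with each fact carrying its value, source, and trust score. The layout and field names are my assumptions, not the actual files from the videos:

```python
import json
from pathlib import Path

MIND_DIR = Path("mind")   # hypothetical root folder for the topic files


def write_fact(topic: str, attribute: str, value, source: str, trust: float) -> None:
    """Record (or overwrite) one fact in the topic's file."""
    MIND_DIR.mkdir(exist_ok=True)
    path = MIND_DIR / f"{topic}.json"
    facts = json.loads(path.read_text()) if path.exists() else {}
    facts[attribute] = {"value": value, "source": source, "trust": trust}
    path.write_text(json.dumps(facts, indent=2))


def read_fact(topic: str, attribute: str):
    """Look up one fact, or None if the robot doesn't know it yet."""
    path = MIND_DIR / f"{topic}.json"
    if not path.exists():
        return None
    return json.loads(path.read_text()).get(attribute)
```

Storing the source and trust score alongside each value is what makes the conflict-resolution step possible later: when a contradicting claim comes in, the stored trust is right there to compare against.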