So I'm thinking about this more from a robot/AGI ethical perspective, with a view of sort of "intelligence mimicking in a shallow form" (the externalities) vs "intelligence mimicking in a deep form" (mimics the cause of the externalities along with the normal external aspects) vs actual AGI.
My understanding, at least, is that reason and our intellectual capabilities are the highest parts of us, but they can't really function apart from the rest of the person. The reason why is the moral aspect. I'm coming from a very Greek perspective, so I should probably clarify how the two relate.
The basics would be: you have a completeness to your being, a harmony to your parts, such that when your parts function as they ought, they better show forth your being/your human nature. So the different parts of a person (eyes, the ability to speak, etc.) have to be used in such a way as to bring about the full expression of that nature. That ties in the intellect, in that it involves using the intellect to understand those parts, how they relate to the whole, and the whole itself. Add to that your choosing to understand that relationship; then, seeing the good of the whole functioning harmoniously, you will act to bring it about, and that's the essence of moral action.
The proper usage of those powers ends up getting complicated, as it involves your personal history, culture, basically every externality, so there aren't really hard and fast rules. It's more about understanding the possibilities of things, recognizing them as valuable, and then seeking to bring them about (out of gratitude for the good they can provide).
Now the full expression of that human nature, or any particular nature, or at least getting closer to it, the full spectrum of potential goods it brings about, is only known over a whole lifetime. That's what I was referring to by "personal history": as you move along in your lifetime, the ideal rational person would be understanding everything in front of them in relation to how it could contribute to a good life. That's the entire reason we have the ability to conceptualize at all: in everything you see and every possible action, you see your history with it, and how that history can be lifted up and made beautiful in a complete life, in a sort of narrative sense.
The memory thing isn't that hard technically; it's more the seeing of the good of the thing itself. The robot, or AGI, would need to take on an independent form of existence and have a good that is not merely instrumental to another person. Basically, it would need to be, in some form, a genuine synthetic life form. (Most people who hold these views just think any actual AGI is impossible.)
One of the key things is that this good of the thing itself, human nature or whatever, is not something anyone has a fully explicit understanding of; there is no definition that can be provided. It's an embodied thing, a particular existence with a particular history. Each individual's expression of their being will differ based on the actual physical things they relate to, and things like the culture they participate in. The nature is an actual real thing constituting a part of all things (living and otherwise), and all intellectual activity is a byproduct of a thing's nature interacting with the natures of other things, and those natures aren't something that can ever be made explicit. (This ties in with all the recent extended-mind cog-sci and philosophy of mind.)
(One author I want to read more of is John Haugeland, who writes about the Heideggerian AI stuff; he just calls this the ability to give a damn. A machine cannot give a damn, and apart from giving a damn you have no capacity for intellectual activity, for the reasons I stated above.)
That's sort of the initial groundwork.