TBH, I can only really speak from practical experience on this in the context of C++ primarily. The answer that actually works is simple: just don't leak resources, anon.
I know that may sound trite, but honestly it really is the answer that has been hard-won through over four decades of low-level systems programming by the C and C++ communities. IMO C still doesn't have a solid approach to this need, but at least C++ does, namely RAII.
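In case you're not a C++ person, here's a minimal sketch of what I mean by RAII, wrapping a plain C FILE* (the FileHandle name and the little writeLog function are just things I made up for illustration). The whole point is that the resource is acquired in the constructor and released in the destructor, so it can't leak no matter how the scope is exited.

```cpp
#include <cstdio>
#include <stdexcept>

// Minimal RAII wrapper around a C FILE* (hypothetical name, illustration only):
// the resource is acquired in the constructor and released in the destructor.
class FileHandle {
public:
    FileHandle(const char* path, const char* mode)
        : f_(std::fopen(path, mode)) {
        if (!f_) throw std::runtime_error("could not open file");
    }
    ~FileHandle() { if (f_) std::fclose(f_); }

    // Non-copyable so the handle can't be closed twice.
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;

    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};

void writeLog(const char* path) {
    FileHandle log(path, "w");            // acquired here
    std::fputs("starting up\n", log.get());
    // ... anything here can throw ...
}                                         // released here, even if an exception flew past
```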
It's really hard to fail gracefully when resources become exhausted or other error conditions occur, but RAII + exceptions is at least an approach that has the potential to deal with most of the issues in a robust (and basically simple) way. Making error-handling calls from within try/catch blocks, and relying on RAII to destruct objects automatically as the stack unwinds, is really about the only practical way I know of for dealing with resource exhaustion in a general sense, anon.
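Here's a rough sketch of that pattern, assuming a made-up processBatch function: an allocation failure surfaces as std::bad_alloc, the catch block does the graceful-degradation part, and every local object in between has already been destructed by the time you get there.

```cpp
#include <cstddef>
#include <cstdio>
#include <new>
#include <vector>

// Sketch only: let allocation failures surface as exceptions, catch them at a
// level where you can actually do something, and rely on the destructors of
// everything in between to release what was already acquired.
bool processBatch(std::size_t n) {
    try {
        std::vector<double> work(n);     // may throw std::bad_alloc
        // ... fill and process 'work'; any locals with destructors are safe ...
        return true;
    } catch (const std::bad_alloc&) {
        // 'work' (and any other locals) have already been destroyed by the
        // time we get here, so nothing has leaked. Report and degrade.
        std::fprintf(stderr, "out of memory, skipping batch of %zu\n", n);
        return false;
    }
}
```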
One of the trickiest parts is when a system has 'painted itself into a corner' but is apparently still limping along OK (while actually creating error conditions under the surface that will eventually crash it). There are straightforward approaches within C++ (using standard library containers like std::vector, for example) that practically eliminate these 'insidious hidden problem' issues. If a container can't allocate, the failure is obvious and immediate, std::bad_alloc gets thrown right there; it doesn't just blindly go on about its business the way a wrongly allocated C array might.
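A quick contrast of the two behaviours, with made-up names and an exaggerated size purely for illustration:

```cpp
#include <cstdlib>
#include <vector>

void contrast(std::size_t huge) {
    // The C-style route can "succeed" silently and only blow up later:
    // malloc returns NULL on failure (and huge * sizeof(int) can even wrap
    // around), and if you forget the check, the crash happens far away from
    // the real problem.
    int* raw = static_cast<int*>(std::malloc(huge * sizeof(int)));
    if (raw) {                  // this check is easy to omit, and often is
        raw[0] = 42;
        std::free(raw);
    }

    // The std::vector route fails loudly at the point of allocation: if the
    // memory isn't there it throws immediately (std::bad_alloc, or
    // std::length_error for an absurd size), so the program can't limp on
    // holding a bogus pointer.
    std::vector<int> v(huge);
    v[0] = 42;
}
```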
I hope I answered your question understandably enough if you're not a C++ programmer. If not, I'll be happy to try again, just ask.
As far as an abstract architectural paradigm goes, yes, I think having multiple processing systems running side by side and checking up on each other is a reasonable, if costly, approach. In fact it's a common scenario in life-critical aerospace systems like fly-by-wire controls, etc.
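If it helps to see the shape of the 'checking up on each other' idea, here's a very rough single-process sketch (all names and timings are invented, and real redundant systems use separate processors and voting logic, not two threads): one thread publishes a heartbeat counter, and a monitor declares it stalled if the count stops advancing.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Illustration only: a worker advertises liveness via an atomic counter, and a
// separate monitor watches for the counter to stop moving.
std::atomic<unsigned long> heartbeat{0};

void worker() {
    for (;;) {
        // ... do a slice of real work ...
        heartbeat.fetch_add(1, std::memory_order_relaxed);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

void monitor() {
    unsigned long last = heartbeat.load();
    for (;;) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        unsigned long now = heartbeat.load();
        if (now == last) {
            std::fprintf(stderr, "worker stalled, taking recovery action\n");
            // ... switch to a backup channel, restart the worker, etc. ...
        }
        last = now;
    }
}

int main() {
    std::thread w(worker), m(monitor);
    w.join();   // in a real system these run for the life of the process
    m.join();
}
```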