Looking to the future, if we delegate responsibility for creating engaging, motivating, entertaining, "fun" experiences to intelligent computational systems, we must also ask: what happens if the system fails? As designers, can we trust systems to reason about fun on behalf of the user? Can we build systems that can be trusted to always "do the right thing" or at least fail gracefully?
In learning systems, trust is an issue of quality assurance. A system that fails to educate or train, or fails to motivate someone to learn or change their behavior, requires human monitoring (which could become an issue for scalability).