In a probabilistic (i.e. real) world, a rational agent will maximise expected utility.
Even if the utility function is not explicit, any agent that behaves rationally must
act as if it were maximising one.
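Concretely, in the standard decision-theoretic formulation (the symbols below are assumed here for clarity, not part of the original notes): given a utility function $U$ over outcome states $s$ and outcome probabilities $P(s \mid a)$ for each action $a$, the agent chooses

$$a^{*} = \arg\max_{a} \; EU(a), \qquad EU(a) = \sum_{s} P(s \mid a)\, U(s).$$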
# Challenges:
- How to define the model and the expected utility of actions
- How to choose the utility-maximising actions (see the sketch after this list)
- How to define the utility function correctly
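As a minimal sketch of choosing the utility-maximising action: the code below computes $EU(a)$ for each action and picks the best one. The states, actions, utilities, and probabilities are made-up illustrative values, not part of the original notes.

```python
# Minimal sketch of maximum-expected-utility action selection.
# All states, actions, utilities, and probabilities are hypothetical.

# Utility of each outcome state (assumed utility function U).
utility = {"goal": 100.0, "detour": 20.0, "crash": -500.0}

# P(outcome | action): probability of each outcome for each action.
outcome_probs = {
    "fast_route": {"goal": 0.70, "detour": 0.20, "crash": 0.10},
    "safe_route": {"goal": 0.60, "detour": 0.39, "crash": 0.01},
}

def expected_utility(action: str) -> float:
    """EU(a) = sum over states s of P(s | a) * U(s)."""
    return sum(p * utility[s] for s, p in outcome_probs[action].items())

# The rational choice maximises expected utility.
best_action = max(outcome_probs, key=expected_utility)

for a in outcome_probs:
    print(f"EU({a}) = {expected_utility(a):.2f}")
print("Chosen action:", best_action)
```

With these illustrative numbers, the safer route wins because the small chance of a very negative outcome drags down the expected utility of the fast route, which is exactly the trade-off the model is meant to capture.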