Rational agents are not omniscient (i.e., they do not know the actual outcomes of their actions in advance)
- We can only maximize expected performance (see the sketch after this list)
- We only know the percept sequence to date (not the future)
- But the percept sequence itself must be generated rationally: the agent's choices determine which percepts it will receive
- Omniscience is impossible in reality
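
A minimal sketch of what "maximize expected performance" means here, assuming a toy model where each action maps to believed (probability, utility) outcome pairs; all names and numbers below are illustrative, not from the notes:

```python
# A rational agent picks the action with the highest *expected* utility
# over its believed outcome distribution, since the actual outcome is
# unknown (no omniscience).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_action(action_model):
    """action_model: dict mapping action -> [(probability, utility), ...]."""
    return max(action_model, key=lambda a: expected_utility(action_model[a]))

# Crossing the street: a tiny chance of disaster outweighs the time saved.
model = {
    "cross_now": [(0.99, 10.0), (0.01, -1000.0)],  # EU = -0.1
    "wait":      [(1.00, 5.0)],                    # EU =  5.0
}
print(rational_action(model))  # -> "wait"
```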
#Information Gathering
- Doing actions in order to modify the future percept sequence
- e.g. opening a door to see what’s behind it
- The goal is to shape future percepts so decisions maximize expected utility
- Exploration (trying unknown actions to discover their outcomes) is an example of this; see the sketch below
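
A minimal sketch of exploration as information gathering, assuming a toy epsilon-greedy setup where `None` marks an action with an unknown outcome (the function and parameter names are hypothetical):

```python
import random

def choose_action(estimates, epsilon=0.1):
    """estimates: dict mapping action -> estimated utility, or None if unknown."""
    unknown = [a for a, u in estimates.items() if u is None]
    known = {a: u for a, u in estimates.items() if u is not None}
    # With probability epsilon (or if nothing is known yet), deliberately
    # try an unexplored action to modify the future percept sequence.
    if unknown and (not known or random.random() < epsilon):
        return random.choice(unknown)   # explore: gather information
    return max(known, key=known.get)    # exploit: act on current knowledge

print(choose_action({"open_door": None, "wait": 5.0}))
```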
#Learning
- Incorporate knowledge acquired through perception (a minimal sketch follows this list)
- Non-learning agents lack autonomy (the ability to compensate for partial or incorrect prior knowledge)
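
A minimal sketch of a learning agent, assuming a simple running-average update of utility estimates from observed percepts (class, method, and parameter names are hypothetical):

```python
# The agent revises its estimates from what it perceives, compensating
# for partial or incorrect prior knowledge (the basis of autonomy).

class LearningAgent:
    def __init__(self, actions, alpha=0.2):
        # Prior estimates may be wrong; learning corrects them over time.
        self.estimates = {a: 0.0 for a in actions}
        self.alpha = alpha  # learning rate

    def update(self, action, observed_utility):
        """Move the estimate for `action` toward the observed percept."""
        old = self.estimates[action]
        self.estimates[action] = old + self.alpha * (observed_utility - old)

agent = LearningAgent(["open_door", "wait"])
agent.update("open_door", 8.0)       # percept: opening the door paid off
print(agent.estimates["open_door"])  # 1.6: the estimate shifts with experience
```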