Agents must reason and act in an uncertain world. There may be uncertainty about the state of the world, uncertainty about the effects of actions, and uncertainty about other agents' actions. Since agents simply cannot address all possible situations that might occur, they must make defeasible assumptions, that is, assumptions that may turn out to be false. In most situations, we do not focus on the agents' defeasible assumptions, but rather on their consequences. We call the logical consequences of defeasible assumptions beliefs. We say that an agent believes $\phi$ if she acts as though $\phi$ is true. As time passes and new evidence is observed, changes in an agent's defeasible assumptions lead to changes in her beliefs. Thus, the question of belief change---that is, how beliefs change over time---is a central one for understanding systems that can make and modify defeasible assumptions.
In this dissertation, we propose a new approach to the question of belief change, based on developing a semantics for beliefs. This semantics is embedded in a framework that models agents' knowledge (or information) as well as their beliefs, and how both change over time. We argue, and demonstrate by examples, that this framework can naturally model any dynamic system (\eg agents and their environment). Moreover, the framework allows us to consider what properties well-behaved belief change should have.
As we show, such a framework can give us a much deeper understanding of how and why beliefs change. In particular, we can gain a better understanding of the two main existing approaches to belief change---belief revision and belief update. Roughly speaking, revision treats a surprising observation (one that is inconsistent with the agent's current beliefs) as a sign that those beliefs are incorrect, while update treats a surprising observation as an indication that the world itself has changed. We show how belief revision and belief update can be captured in the proposed framework. This allows us to compare the assumptions made by each method and to better understand the principles underlying them.
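The contrast between the two methods can be made concrete with a toy possible-worlds example. The following sketch is our own illustration, not the formal framework developed in this dissertation: worlds are pairs of propositions (book on table, magazine on table), revision keeps the most plausible worlds consistent with the observation, and update moves each believed world to its nearest observation-worlds.

```python
# Toy illustration (ours, not the dissertation's formalism) of revision
# versus update.  Worlds are tuples (book_on_table, magazine_on_table).

WORLDS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def revise(beliefs, plausibility, obs):
    """AGM-style revision: if the observation is consistent with the
    current beliefs, just filter them; otherwise treat the old beliefs
    as mistaken and fall back to the most plausible obs-worlds."""
    consistent = {w for w in beliefs if obs(w)}
    if consistent:
        return consistent
    candidates = [w for w in WORLDS if obs(w)]
    best = min(plausibility(w) for w in candidates)
    return {w for w in candidates if plausibility(w) == best}

def update(beliefs, obs, dist):
    """KM-style update: treat the observation as evidence that the
    world changed, moving each believed world to its nearest obs-worlds."""
    result = set()
    for w in beliefs:
        candidates = [v for v in WORLDS if obs(v)]
        d = min(dist(w, v) for v in candidates)
        result |= {v for v in candidates if dist(w, v) == d}
    return result

def hamming(w, v):
    """Number of propositions on which two worlds differ."""
    return sum(a != b for a, b in zip(w, v))

saw_book = lambda w: w[0] == 1   # observation: the book is on the table

# The agent believes exactly one of the two items is on the table.
beliefs = {(1, 0), (0, 1)}
# Revision concludes the magazine is not on the table; update remains
# agnostic about it, since the world may have changed unobserved.
revised = revise(beliefs, lambda w: 0, saw_book)
updated = update(beliefs, saw_book, hamming)
```

On this example revision yields only the world where the magazine is absent, while update retains both magazine possibilities, matching the intuition that update attributes the surprise to change rather than to an earlier error.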
This analysis shows that revision and update are only two points on a spectrum. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We describe a novel approach to belief change that allows both, by applying ideas from probability theory in qualitative settings. This approach is based on a qualitative analogue of the Markov assumption, which yields a well-behaved notion of belief change without the occasionally unreasonable assumptions that belief revision and update impose. In particular, it allows a user to weigh the relative plausibility that a given observation is due to a change in the world or to an inaccuracy in previous beliefs.
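The spirit of this trade-off can be sketched in miniature. The sketch below is our own (the ranks and state names are invented for illustration): each state sequence is assigned an implausibility rank equal to its initial rank plus the sum of its transition ranks, so "the world changed" and "my earlier belief was wrong" compete on a single scale, and conditioning on an observation keeps the minimally ranked sequences.

```python
# Hedged sketch (ours) of a qualitative Markov-style ranking: the rank
# of a trajectory is its initial implausibility plus the sum of its
# transition implausibilities; observing phi keeps the minimal-rank
# trajectories consistent with phi.
from itertools import product

def best_trajectories(states, init_rank, trans_rank, horizon, obs):
    """Return the minimally ranked state sequences satisfying obs."""
    best, kept = None, []
    for traj in product(states, repeat=horizon):
        if not obs(traj):
            continue
        r = init_rank(traj[0]) + sum(trans_rank(a, b)
                                     for a, b in zip(traj, traj[1:]))
        if best is None or r < best:
            best, kept = r, [traj]
        elif r == best:
            kept.append(traj)
    return kept

STATES = ("off", "on")
init = lambda s: 0 if s == "off" else 1   # agent believes the light is off
saw_on = lambda t: t[-1] == "on"          # ...but now observes it on

# If a change in the world and an initial belief error are equally
# surprising (both rank 1), both explanations survive:
flip1 = lambda a, b: 0 if a == b else 1
# If change is more surprising (rank 2), the agent instead revises its
# initial belief and concludes the light was on all along:
flip2 = lambda a, b: 0 if a == b else 2
```

For instance, `best_trajectories(STATES, init, flip1, 2, saw_on)` keeps both the "light was switched on" and "light was on all along" trajectories, while the same call with `flip2` keeps only the latter: exactly the kind of weighing between world change and belief error described above.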