Students who have completed this course master the core theory of dynamic decision making under uncertainty, with and without strategic interactions between decision makers (“agents”). They understand and can apply some standard econometric methods for estimating the unknown parameters of such decision problems and games using data on actual decisions and outcomes. They can numerically solve decision problems that are quantified using such empirical analysis and/or expert opinion. They can apply the theory and methods taught to actual decision problems in economics and business.
We will focus on Markov decision problems and stochastic (Markov) games in discrete time. In these workhorse models, agents’ choices affect future outcomes through state variables that follow a Markov process controlled by those choices. The resulting stochastic dynamics are sufficiently rich for most economic and business applications and sufficiently structured to facilitate powerful theoretical, empirical, and numerical analysis. Throughout the course, we will motivate models by, and apply methods (using MATLAB and/or R) to, practical examples of economic and business decision problems and games. These may include (but are not limited to)
- life cycle analysis of human capital formation, savings, and pensions;
- dynamic demand analysis;
- dynamic treatment choice and evaluation;
- investment decision making using (binomial) decision trees;
- the analysis of learning, competition, and antitrust policy in the market for wide-bodied commercial aircraft;
- welfare analysis of environmental regulation of oligopolies;
- the analysis of sunk costs, barriers to entry, and toughness of competition in dynamic oligopolies from market level panel data; and
- the analysis of optimal advertising dynamics.
We will first consider individual decision problems in which agents choose from a finite set of actions and state variables are finite (or, in the case of econometric models that require continuous errors, have continuous components only if they enter in a sufficiently simple way). We will study
- Markov decision rules, Bellman’s equation and principle of optimality, the contraction mapping theorem, and Blackwell’s sufficient conditions;
- two solution methods, value iteration (successive approximations) and policy iteration;
- the extent to which various types of data (for example, data on actual choices and outcomes in a similar decision problem) can be used to learn about the unknown parameters of the decision problem (identification); and
- a maximum likelihood estimation procedure and (briefly) some alternative methods for the empirical analysis of dynamic discrete choice problems.
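To give a concrete sense of the two solution methods, here is a minimal sketch in Python/NumPy (the course itself uses MATLAB and/or R) of value iteration and policy iteration on a hypothetical two-state, two-action Markov decision problem. The payoffs, transition probabilities, and discount factor below are illustrative assumptions, not course material.

```python
import numpy as np

# Illustrative primitives: 2 states, 2 actions, discount factor beta.
beta = 0.95
u = np.array([[1.0, 0.0],                  # u[s, a]: flow payoff of action a in state s
              [0.5, 2.0]])
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P[a, s, s']: transition probabilities
              [[0.5, 0.5], [0.1, 0.9]]])

def bellman(V):
    # Q[s, a] = u[s, a] + beta * E[V(s') | s, a]; return max value and greedy policy
    Q = u + beta * np.column_stack([P[a] @ V for a in range(2)])
    return Q.max(axis=1), Q.argmax(axis=1)

# Value iteration (successive approximations): iterate the contraction until convergence.
V = np.zeros(2)
for _ in range(1000):
    V_new, policy = bellman(V)
    err = np.max(np.abs(V_new - V))
    V = V_new
    if err < 1e-10:
        break

# Policy iteration: alternate exact policy evaluation (a linear solve) and improvement.
pol = np.zeros(2, dtype=int)
for _ in range(50):
    Ppol = np.array([P[pol[s], s] for s in range(2)])   # transitions under the policy
    upol = u[np.arange(2), pol]
    Vpol = np.linalg.solve(np.eye(2) - beta * Ppol, upol)
    _, pol_new = bellman(Vpol)
    if np.array_equal(pol_new, pol):
        break
    pol = pol_new
```

On this toy problem both methods converge to the same value function and policy; policy iteration typically needs far fewer (but individually more expensive) iterations than value iteration.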
Next, we will consider dynamic games in which each agent solves such a dynamic decision problem and has payoffs that depend on other agents’ choices. We will discuss
- payoff-relevant variables, Markov strategies, Markov perfect equilibrium, and the one-shot deviation principle (the principle of optimality for games);
- equilibrium multiplicity, a theoretical and computational problem specific to games; and
- computational and empirical methods that are closely related to those for individual decision problems, in the context of a simple example of dynamic oligopolistic competition.
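The one-shot deviation principle also gives a direct way to verify a candidate equilibrium numerically. The sketch below (in Python/NumPy; all game primitives are illustrative assumptions, not an example from the course) checks whether a pure Markov strategy profile in a toy two-player stochastic game is immune to one-shot deviations. Action 1 is strictly dominant in the stage game and transitions are action-independent, so “both always play 1” passes the check while “both always play 0” fails.

```python
import numpy as np

beta = 0.9
nS, nA = 2, 2

# u[i, s, a0, a1]: illustrative stage payoff of player i at state s under (a0, a1).
# Each player's own action adds 1; the rival's action subtracts 0.5.
u = np.zeros((2, nS, nA, nA))
for s in range(nS):
    for a0 in range(nA):
        for a1 in range(nA):
            u[0, s, a0, a1] = s + a0 - 0.5 * a1
            u[1, s, a0, a1] = s + a1 - 0.5 * a0

# Action-independent state transitions Ptrans[s, s'] (an assumption for simplicity).
Ptrans = np.array([[0.7, 0.3],
                   [0.4, 0.6]])

def is_mpe(sigma, tol=1e-8):
    """One-shot deviation check for a pure Markov profile sigma[i, s]."""
    for i in range(2):
        # Value of following the profile: V = u_sigma + beta * Ptrans @ V
        usig = np.array([u[i, s, sigma[0, s], sigma[1, s]] for s in range(nS)])
        V = np.linalg.solve(np.eye(nS) - beta * Ptrans, usig)
        for s in range(nS):
            for a in range(nA):
                prof = sigma[:, s].copy()
                prof[i] = a  # deviate once, then revert to sigma forever
                dev = u[i, s, prof[0], prof[1]] + beta * Ptrans[s] @ V
                if dev > V[s] + tol:
                    return False
    return True
```

State-by-state checks of this kind are what make the principle computationally convenient: verifying a candidate profile takes only one linear solve per player plus a scan over one-shot deviations.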
To the extent time permits, we will study extensions to large and continuous action and state spaces. These may include (but are not necessarily limited to)
- generalized method-of-moments estimation of continuous choice models using their intertemporal first-order conditions (stochastic Euler equations) as moments;
- discrete and smooth approximation methods to solve continuous choice problems; and
- computational methods (neural networks, reinforcement learning, etcetera) for handling problems with large action and state spaces.
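As a taste of the last bullet, here is a minimal tabular Q-learning sketch in Python/NumPy (the course uses MATLAB and/or R; the MDP and all tuning constants are illustrative assumptions). Q-learning estimates action values from simulated transitions alone, which is what makes it attractive when the model is too large to solve exactly, although the tabular version below still stores one value per state-action pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state, 2-action MDP: action 1 pays more in every state and
# moves the agent toward the high-payoff state 1.
beta = 0.9
u = np.array([[0.0, 1.0],                  # u[s, a]: flow payoff
              [0.5, 2.0]])
P = np.array([[[0.8, 0.2], [0.8, 0.2]],    # P[a, s, s']: transitions
              [[0.2, 0.8], [0.2, 0.8]]])

# Tabular Q-learning: the update uses only simulated transitions, never u and P
# directly, mimicking a setting where the model can only be sampled.
Q = np.zeros((2, 2))
lr, eps = 0.1, 0.3                         # learning rate, exploration rate
s = 0
for _ in range(100_000):
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s_next = int(rng.choice(2, p=P[a, s]))
    target = u[s, a] + beta * Q[s_next].max()
    Q[s, a] += lr * (target - Q[s, a])
    s = s_next

print(Q.argmax(axis=1))   # greedy policy implied by the learned Q
```

With enough simulated transitions, the greedy policy implied by the learned Q matches the analytic optimum of this toy problem (action 1 in both states).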
Students are expected to actively prepare for and participate in all sessions (lectures and tutorials). The course will be taught over twelve weeks, with two two-hour sessions each week. During these twelve weeks, students are expected to work in small groups on four computational problem sets (4 × 5% = 20% of the final grade). The final exam will be a closed-book, sit-in written exam (80% of the final grade) covering all material discussed in the lectures and tutorials, the computational problem sets, and any required reading. Following current standards of the MSc EME program, we will use MATLAB and/or R for all computational work (though we should be able to support students who wish to work in, e.g., Python).
Required and recommended reading will be announced on Blackboard and will include online MATLAB and/or R tutorials, lecture notes, selected papers, and (possibly) a textbook. Representative textbooks on individual decision problems include
- Adda, J., and R.W. Cooper (2003): Dynamic Economics: Quantitative Methods and Applications, Cambridge: MIT Press.
- Stachurski, J. (2009): Economic Dynamics: Theory and Computation, Cambridge: MIT Press.
Examples of the theory of and methods for stochastic games that will be taught can be found in
- Abbring, J.H., J.R. Campbell, J. Tilly, and N. Yang (2018): “Very Simple Markov-Perfect Industry Dynamics: Theory,” Econometrica, 86, 721-735.
- Abbring, J.H., J.R. Campbell, J. Tilly, and N. Yang (2017): “Very Simple Markov-Perfect Industry Dynamics: Empirics,” Discussion Paper 2017-021, CentER, Tilburg University.
Examples of documented MATLAB code (with a stronger focus on econometric methods than this course) can be found at http://ddc.abbring.org.
Course available for exchange students (Master level, conditions apply).

Written test opportunities:
- Written exam (EXAM_01), SM 2, opportunity 1: 09-06-2021
- Written exam (EXAM_01), SM 2, opportunity 2: 07-07-2021