Agents have objectives they value
Agents face constraints
Agents make tradeoffs to maximize objectives within constraints
Agents compete with others over scarce resources
Agents adjust behaviors based on prices
Stable outcomes when adjustments stop
Traditional economic models are often called “Decision theory”:
Optimization models ignore all other agents and focus simply on how you can maximize your objective within your constraints
Outcome: optimum: a decision where you have no better alternative
Equilibrium models assume that there are so many agents that no agent’s decision can affect the outcome
Outcome: equilibrium: where nobody has any better alternative
Game theory models directly confront strategic interactions between players
Outcome: Nash equilibrium: where nobody has a better strategy given the strategies everyone else is playing
What is a player's objective in a game?
“De gustibus non est disputandum” (“In matters of taste, there can be no disputes”)
We need to know a player's preferences over game outcomes
Choose: < a consumption bundle >
In order to maximize: < utility >
Subject to: < income and market prices >
Choose: < output >
In order to maximize: < profits >
Choose: < inputs >
In order to minimize: < cost >
Subject to: < producing the optimal output >
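The consumer's problem above can be sketched numerically. A minimal brute-force version, assuming a hypothetical Cobb-Douglas utility u(x,y) = xy, an income of 12, and prices (1, 2) — none of which come from the slides:

```python
from itertools import product

def best_bundle(u, p_x, p_y, m, grid=range(0, 25)):
    """Brute-force the consumer's problem: pick the affordable
    bundle (x, y) that maximizes utility u(x, y)."""
    affordable = [(x, y) for x, y in product(grid, grid)
                  if p_x * x + p_y * y <= m]
    return max(affordable, key=lambda b: u(*b))

# Hypothetical Cobb-Douglas utility, income m = 12, prices (1, 2)
u = lambda x, y: x * y
print(best_bundle(u, 1, 2, 12))  # → (6, 3)
```

With Cobb-Douglas preferences the consumer spends half of income on each good, so the grid search recovers the textbook answer (6, 3).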
Example: Between any two outcomes (a,b):
a≻b: (Strictly) prefer a over b
a≺b: (Strictly) prefer b over a
a∼b: Indifferent between a and b
See my ECON 306 class on preferences for more.
Long ago (1890s), utility was considered a real, measurable, cardinal scale†
Utility thought to be lurking in people's brains
Obvious problems
† "Neuroeconomics" & cognitive scientists are re-attempting a scientific approach to measure utility
More plausibly infer people's preferences from their actions!
Principle of Revealed Preference: if a person chooses x over y when both are affordable, then they must (weakly) prefer x to y: x⪰y
Flawless? Of course not. But extremely useful approximation!
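A sketch of the revealed-preference logic; the prices, income, and bundles here are hypothetical illustrations, not from the slides:

```python
def reveals_preference(chosen, alt, prices, income):
    """If `chosen` was picked while `alt` was also affordable,
    the choice reveals chosen ⪰ alt."""
    cost = lambda bundle: sum(p * q for p, q in zip(prices, bundle))
    return cost(chosen) <= income and cost(alt) <= income

# Hypothetical: at prices (1, 2) with income 10, a consumer buys (4, 3)
# while (2, 4) was also affordable — revealing (4, 3) ⪰ (2, 4)
print(reveals_preference((4, 3), (2, 4), (1, 2), 10))  # → True
```

Note the asymmetry: if the alternative was unaffordable, the choice reveals nothing about preferences.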
A utility function u(⋅)† represents preference relations (≻,≺,∼)
Assign utility numbers to bundles, such that, for any bundles a and b: a≻b⟺u(a)>u(b)
† The ⋅ is a placeholder for whatever goods we are considering (e.g. x, y, burritos, lattes, dollars, etc)
Example: Imagine three alternative bundles of (x,y): a=(1,2), b=(2,2), c=(4,3)

u(⋅) | v(⋅)
---|---
u(a)=1 | v(a)=3
u(b)=2 | v(b)=5
u(c)=3 | v(c)=7
Utility numbers have an ordinal meaning only, not cardinal
Both are valid utility functions:
Only the ranking of utility numbers matters!
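A quick check of the ordinal point, using the u and v numbers from the example above:

```python
# Two utility functions assigning different numbers but the same ranking
bundles = {"a": (1, 2), "b": (2, 2), "c": (4, 3)}
u = {"a": 1, "b": 2, "c": 3}
v = {"a": 3, "b": 5, "c": 7}   # v = 2u + 1, a monotone transformation

# Rank bundles from most- to least-preferred under each function
rank = lambda util: sorted(bundles, key=lambda k: util[k], reverse=True)
print(rank(u))  # → ['c', 'b', 'a']
print(rank(v))  # → ['c', 'b', 'a']  (same ranking: only order matters)
```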
We want to apply utility functions to the outcomes in games, often summarized as “payoff functions”
Using the ordinal interpretation of utility functions, we can rank player preferences over game outcomes
Take a prisoners' dilemma and consider the payoffs to Player 1
u1(D,C) > u1(C,C)
Take a prisoners' dilemma and consider the payoffs to Player 2
u2(C,D) > u2(C,C)
This is the same game, so long as a>b>c>d
We commonly assume, for a game:
Players understand the rules of the game
Players behave rationally: try to maximize payoff
Game theory does not permit us to consider true uncertainty
But we can talk about risk: distribution of outcomes occurring with some known probability
In such cases, what do players maximize in the presence of risk?
E[p] = ∑ᵢ₌₁ⁿ πᵢpᵢ
pa≻pb⟺E[u(pa)]>E[u(pb)]
Allows for different risk attitudes:
makes utility functions cardinal (but still not measurable!)
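A sketch of expected utility over lotteries. The payoffs are hypothetical, and a concave u (here the square root) stands in for risk aversion:

```python
from math import sqrt

def expected_utility(lottery, u):
    """Lottery: list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

safe   = [(1.0, 50)]              # $50 for sure
gamble = [(0.5, 100), (0.5, 0)]   # fair coin flip over $100 or $0

# A risk-neutral player (linear u) is indifferent between the two...
print(expected_utility(safe, lambda x: x),
      expected_utility(gamble, lambda x: x))  # → 50.0 50.0
# ...but a risk-averse player (concave u) prefers the sure thing:
# sqrt(50) ≈ 7.07 beats 0.5·sqrt(100) + 0.5·sqrt(0) = 5.0
print(expected_utility(safe, sqrt) > expected_utility(gamble, sqrt))
```

This is the sense in which vNM utility becomes cardinal: the curvature of u now carries meaning (risk attitude), even though the units still don't.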
Von Neumann & Morgenstern (vNM)'s Theory of Games and Economic Behavior (1944) establishes "Game theory"
Solve for outcomes only of 2-player zero-sum games
Minimax method (we'll see below)
John Forbes Nash
1928–2015
Economics Nobel 1994
Nash's Non-Cooperative Games (1950) dissertation invents idea of "(Nash) Equilibrium"
A Pure Strategy Nash Equilibrium (PSNE) of a game is a set of strategies (one for each player) such that no player has a profitable deviation from their strategy given the strategies played by all other players
Each player's strategy must be a best response to all other players' strategies
N.E. ≠ the “best” or optimal outcome
Game may have multiple N.E.
Game may have no N.E. (in “pure” strategies)
Players need not all be playing the same strategy
Each player makes the same choice each time the game is played (we'll later allow mixed strategies, which randomize)
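The PSNE definition can be checked mechanically: a profile is an equilibrium when each player's strategy is a best response to the others'. A brute-force sketch, using a hypothetical coordination game (payoffs invented here) to illustrate that a game may have multiple equilibria:

```python
from itertools import product

def pure_nash(payoffs):
    """payoffs[(r, c)] = (u1, u2) for row strategy r, column strategy c.
    Returns the profiles where neither player has a profitable deviation."""
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in product(rows, cols):
        u1, u2 = payoffs[(r, c)]
        best_row = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)  # player 1's best response?
        best_col = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)  # player 2's best response?
        if best_row and best_col:
            equilibria.append((r, c))
    return equilibria

# Hypothetical coordination game: two PSNE, one per matched pair
bos = {("Opera", "Opera"): (2, 1), ("Opera", "Boxing"): (0, 0),
       ("Boxing", "Opera"): (0, 0), ("Boxing", "Boxing"): (1, 2)}
print(sorted(pure_nash(bos)))  # → [('Boxing', 'Boxing'), ('Opera', 'Opera')]
```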
Suppose we start from some initial allocation (A)
Pareto Improvement: at least one party is better off, and no party is worse off
Pareto optimal/efficient: no possible Pareto improvements
†I’m simplifying...for full details, see class 1.8 appendix about applying consumer theory!
Take the prisoners’ dilemma
Nash Equilibrium: (Defect, Defect)
Why can’t they both cooperate?
Main feature of prisoners’ dilemma: the Nash equilibrium is Pareto inferior to another outcome (Cooperate, Cooperate)!
How can we ever get rational cooperation?
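A minimal check of the dilemma, with hypothetical payoffs satisfying the a>b>c>d ranking above (5 > 3 > 1 > 0):

```python
# Prisoners' dilemma payoffs (hypothetical numbers), (player 1, player 2)
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# (D, D) is the Nash equilibrium: neither player gains by switching alone
assert pd[("D", "D")][0] >= pd[("C", "D")][0]  # Player 1: D beats C against D
assert pd[("D", "D")][1] >= pd[("D", "C")][1]  # Player 2: D beats C against D

# Yet (C, C) Pareto-dominates it: both players are strictly better off
assert all(pd[("C", "C")][i] > pd[("D", "D")][i] for i in (0, 1))
print("(D, D) is the NE, but (C, C) Pareto-dominates it")
```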
This is far from the last word on solution concepts, or even Nash equilibrium!
But sufficient for now, until we return to simultaneous games
Next week, sequential games!