What is dolo?¶
Dolo is a tool to describe and solve economic models. It provides a simple classification scheme to describe many types of models, allows models to be written as simple text files, and compiles these files into efficient Python objects representing them. It also provides many reference solution algorithms to find the solution of these models under rational expectations.
Dolo understands several types of nonlinear models with occasionally binding constraints (with or without exogenous discrete shocks), as well as local perturbation models, like Dynare. It is a very adequate tool to study zero-lower-bound issues or sudden-stop problems, for instance.
Sophisticated solution routines are available: local perturbations up to third order, perfect foresight solution, policy iteration, value iteration. Most of these routines are either parallelized or vectorized. They are written in pure Python, and can easily be inspected or adapted.
Thanks to the simple and consistent Python API for models, it is possible to write models in pure Python, or to implement other solution algorithms on top of it.
Getting started¶
Installation¶
A scientific Python environment is required to run dolo, for instance Anaconda Python.
In order to install the latest stable version of dolo and its dependencies, open a command line and run:
`pip install dolo`
It is also possible to install the development version directly from Github with:
`pip install git+git://github.com/econforge/dolo.git`
Step-by-step instructions on windows¶
Download the Anaconda installer (choose the 64-bit / Python 2.7 version)
Install it for the current user only, so that you will be able to install and update Python packages easily.
Open a Powershell console and type
pip install dolo
then Enter. When connected to the internet, this command pulls and installs the latest stable version.
Running dolo¶
After dolo is installed, try to solve a model by typing the following commands in an IPython shell:
from dolo import * # load the library
model = yaml_import("https://raw.githubusercontent.com/EconForge/dolo/master/examples/models/rbc.yaml")
# import the model
display(model) # display the model
dr = time_iteration(model, verbose=True) # solve
sim = simulate(model, dr) # simulate
Setting up a work environement¶
Anylising dolo models, is usually done by editing a model file with an (.yaml
) extension, then running and formating the calculations inside a Jupyter notebook. There are many other worflows, but Jupyter notebooks are becoming a de facto standard in opensource computational research, so that we strongly advise to try them first. Some chapters of this documentation are actually written as notebook, can be downloaded and run interactively.
The only setup step consists in choosing a folder to store the model and the notebooks. Then open a terminal in this folder and launch the notebook server using:
`jupyter notebook`
A browser window should pop up: it is Jupyter’s dashboard.
It lists the files in that folder. Clicking on a model file (with a .yaml extension) opens it in a new tab.
Note
Despite the fact that the files are edited in the browser through a local webserver, they are still regular files on the hard drive. In particular, it is possible to edit them directly, using a good text editor (vim, emacs, atom…).
To create a new notebook click on “..” and choose IPython. This creates a new tab, containing the notebook ready to be edited and run. It consists of a succession of cells that can be run in any order by pressing Shift+Enter after a cell has been selected. More information [TODO: link]
The dolo language¶
The easiest way to code a model in dolo consists in using specialized Yaml files also referred to as dolo model files.
YAML format¶
YAML stands for “YAML Ain’t Markup Language”. It is a serialization language that represents complex data structures in a human-readable way. Atomic elements are floats, integers and strings. An ordered list can be defined by separating elements with commas and enclosing them with square brackets:
[1,2,3]
Equivalently, it can be done on several lines, by prepending - to each element:
- 'element'
- element # quotes are optional when there is no ambiguity
- third element # this is interpreted as ``'third element'``
Associative arrays map keys (simple strings) to arbitrary values, as in the following example:
{age: 18, name: peter}
Mappings can also be defined on several lines, and as in Python, structures can be nested by using indentation (use spaces, not tabs):
age: 18
name: peter
occupations:
  - school
  - guitar
friends:
  paula: {age: 18}
The correspondence between the YAML definition and the resulting Python object is very transparent. YAML mappings and lists are converted to Python dictionaries and lists respectively.
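To make the correspondence concrete, here is a sketch using the third-party PyYAML package (dolo uses its own loader internally, so this is purely illustrative):

```python
# Illustrative only: parse the example above with PyYAML (pip install pyyaml).
import yaml

text = """
age: 18
name: peter
occupations:
  - school
  - guitar
friends:
  paula: {age: 18}
"""

data = yaml.safe_load(text)
print(type(data))               # mappings become dicts
print(data['occupations'])      # -> ['school', 'guitar']  (lists become lists)
print(data['friends']['paula']) # -> {'age': 18}           (nesting is preserved)
```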
Note
In dolo, we use the additional convention that a dictionary key is interpreted as a Python object if:
it begins with an uppercase letter
it is at least two characters long
it is the only key in its dictionary.
For instance,
- AR1:
    rho: 0.9
    sigma: 0.1
- TakeAList:
    - 0
    - 1
- notanobject:
    a: 1
    b: 2
will be interpreted in Python as:
[
    AR1(rho=0.9, sigma=0.1),
    TakeAList(0, 1),
    {'notanobject': {'a': 1, 'b': 2}}
]
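The convention can be restated as a small predicate; the helper below is hypothetical (not part of dolo's API) and only illustrates the three rules:

```python
def is_object_key(mapping):
    # A dict read from YAML is treated as a Python object when it has a
    # single key that starts with an uppercase letter and is at least
    # two characters long.
    if len(mapping) != 1:
        return False
    key = next(iter(mapping))
    return len(key) >= 2 and key[0].isupper()

print(is_object_key({'AR1': {'rho': 0.9, 'sigma': 0.1}}))  # -> True
print(is_object_key({'notanobject': {'a': 1, 'b': 2}}))    # -> False (lowercase)
print(is_object_key({'A': 1}))                             # -> False (too short)
```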
Any model file must be syntactically correct in the YAML sense before the content is analysed further. More information about the YAML syntax can be found on the YAML website, especially in the language specification.
Example¶
Here is an example model contained in the file examples/models/rbc.yaml
name: Real Business Cycle

symbols:
    exogenous: [e_z]
    states: [z, k]
    controls: [n, i]
    expectations: [m]
    values: [V]
    parameters: [beta, sigma, eta, chi, delta, alpha, rho, zbar, sig_z]
    rewards: [u]

definitions:
    y: exp(z)*k^alpha*n^(1-alpha)
    c: y - i
    rk: alpha*y/k
    w: (1-alpha)*y/n

equations:
    arbitrage:
        - chi*n^eta*c^sigma - w                     | 0.0 <= n <= inf
        - 1 - beta*(c/c(1))^(sigma)*(1-delta+rk(1)) | 0.0 <= i <= inf

    transition:
        - z = rho*z(-1) + e_z
        - k = (1-delta)*k(-1) + i(-1)

    value:
        - V = c^(1-sigma)/(1-sigma) - chi*n^(1+eta)/(1+eta) + beta*V(1)

    felicity:
        - u = c^(1-sigma)/(1-sigma) - chi*n^(1+eta)/(1+eta)

    expectation:
        - m = beta/c(1)^sigma*(1-delta+rk(1))

    direct_response:
        - n = ((1-alpha)*exp(z)*k^alpha*m/chi)^(1/(eta+alpha))
        - i = exp(z)*k^alpha*n^(1-alpha) - (m)^(-1/sigma)

calibration:
    # parameters
    beta: 0.99
    phi: 1
    delta: 0.025
    alpha: 0.33
    rho: 0.8
    sigma: 5
    eta: 1
    sig_z: 0.016
    zbar: 0
    chi: w/c^sigma/n^eta
    c_i: 1.5
    c_y: 0.5
    e_z: 0.0

    # endogenous variables
    n: 0.33
    z: zbar
    rk: 1/beta-1+delta
    w: (1-alpha)*exp(z)*(k/n)^(alpha)
    k: n/(rk/alpha)^(1/(1-alpha))
    y: exp(z)*k^alpha*n^(1-alpha)
    i: delta*k
    c: y - i
    V: log(c)/(1-beta)
    u: c^(1-sigma)/(1-sigma) - chi*n^(1+eta)/(1+eta)
    m: beta/c^sigma*(1-delta+rk)

exogenous: !UNormal
    σ: 0.01

domain:
    z: [-2*sig_z/(1-rho^2)^0.5, 2*sig_z/(1-rho^2)^0.5]
    k: [k*0.5, k*1.5]

options:
    grid: !Cartesian
        n: [20, 20]

# options:
#     grid: !Smolyak
#         mu: 3
#         # orders: [5, 50]
This model can be loaded using the command:
model = yaml_import("examples/models/rbc.yaml")
The function yaml_import will raise errors until the model satisfies basic compliance tests. [more of it below]. In the following subsections, we describe the various syntactic rules that apply when writing yaml files.
Sections¶
A dolo model consists of the following parts:
a symbols section where all symbols used in the model must be defined
an equations section containing the list of equations
a calibration section providing numeric values for the symbols
a domain section, with the information about the solution domain
an options section containing additional information
an exogenous section where exogenous shocks are defined.
These sections have context-dependent rules. We now review each of them in detail:
Declaration section¶
This section is introduced by the symbols keyword. All symbols appearing in the model must be defined there.
Symbols must be valid Python identifiers (alphanumeric, not beginning with a number) and are case sensitive. Greek letters (save for lambda, which is a Python keyword) are recognized. Subscripts and superscripts can be denoted by _ and __ respectively. For instance beta_i_1__d will be pretty printed as \(\beta_{i,1}^d\).
Symbols are sorted by type as in the following example:
symbols:
states: [a, b]
controls: [u, v]
exogenous: [e]
parameters: [rho]
Note that each type of symbol is associated with a symbol list (as [a,b]).
Note
A common mistake consists in forgetting the commas and using spaces only. This doesn’t work, since the two symbols are then recognized as one.
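The effect of the mistake is easy to reproduce with a plain YAML loader (the third-party PyYAML package here, for illustration only):

```python
import yaml

print(yaml.safe_load("[a, b]"))  # -> ['a', 'b']  two symbols
print(yaml.safe_load("[a b]"))   # -> ['a b']     a single, invalid symbol name
```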
The exact list of symbols to declare depends on which algorithm is meant to be used. In general, one needs to supply at least states (endogenous states), exogenous (exogenous shocks), controls (decision variables), and parameters (scalar parameters that the model depends on).
Declaration of equations¶
The equations section contains blocks of equations sorted by type.
Expressions follow (roughly) the Dynare conventions. Common arithmetic operators (+,-,*,/,^) are allowed with conventional priorities as well as usual functions (sqrt, log, exp, sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh). The definitions of these functions match the definitions from the numpy package. All symbols appearing in an expression must either be declared in the symbols section or be one of the predefined functions. Any symbol s that is not a parameter is assumed to be considered at date t. Values at date t+1 and t-1 are denoted by s(1) and s(-1) respectively.
All equations are implicitly enclosed by the expectation operator \(E_t\left[\cdots \right]\). Consequently, the law of motion for capital is written as:
k = (1-delta)*k(-1) + i(-1)
while the Euler equation is translated as:
1 = beta*(c/c(1))^(sigma)*(1-delta+rk(1))
Note
In Python, the exponent operator is denoted by ** while the caret operator ^ represents bitwise XOR. In dolo models, we ignore this distinction and interpret both as an exponent.
Note
The default evaluator in dolo preserves the evaluation order. Thus (c(1)/c)^(-gamma)
is not evaluated in the same way (and is numerically more stable) than c(1)^(-gamma)/c^(-gamma)
. Currently, this is not true for symbolically computed derivatives, as expressions are automatically simplified, implying that execution order is not guaranteed. This impacts only higher order perturbations.
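The difference is easy to demonstrate with numpy, using a deliberately large exponent so that the separately evaluated powers underflow (the numbers are illustrative, not from the model):

```python
import numpy as np

c, c1, gamma = 200.0, 201.0, 200.0

# evaluation order preserved: the ratio is computed first, then the power
stable = (c1 / c) ** (-gamma)

# reordered form: each power underflows to 0.0 before the division
with np.errstate(invalid='ignore'):
    naive = np.float64(c1) ** (-gamma) / np.float64(c) ** (-gamma)

print(stable)  # a perfectly ordinary number (about 0.369)
print(naive)   # nan, from 0.0/0.0
```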
An equation can consist of one expression, or two expressions separated by =. There are two types of equation blocks:
condition blocks
In these blocks, each equation lhs = rhs defines the scalar value (rhs)-(lhs). A list of such equations, i.e. a block, defines a multivariate function of the appearing symbols. Certain condition blocks can be associated with complementarity conditions, separated by |, as in rhs-lhs | 0 < x < 1. In this case it is advised to omit the equal sign, in order to make the complementarity easier to interpret. Also, when complementarity conditions are used, the ordering of variables appearing in the complementarities must match the declaration order (more in section Y).
definition blocks
Definition blocks differ from condition blocks in that they define a group of variables (states or auxiliaries) as a function of the right-hand side.
The types of variables appearing on the right-hand side depend on the block type. The variables enumerated on the left-hand side must appear in the declaration order.
Note
In the RBC example, the auxiliary block defines variables (y, c, rk, w) that can be directly deduced from the states and the controls:
auxiliary:
    - y = z*k^alpha*n^(1-alpha)
    - c = y - i
    - rk = alpha*y/k
    - w = (1-alpha)*y/n
Note that the declaration order matches the order in which variables appear on the left-hand side. Also, these variables are defined recursively: c, rk and w depend on the value of y. In contrast to the calibration block, the definition order matters. Had the variables been listed as (c, y, rk, w), the following block would produce an incorrect result, since y is not known when c is evaluated.
auxiliary:
    - c = y - i
    - y = z*k^alpha*n^(1-alpha)
    - rk = alpha*y/k
    - w = (1-alpha)*y/n
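A minimal sketch of why order matters: evaluating each definition in sequence against a growing namespace fails as soon as a right-hand side refers to a variable that has not been defined yet (a toy evaluator, not dolo's implementation; ^ is rewritten as Python's **):

```python
import math

def eval_definitions(defs, namespace):
    # evaluate (name, expression) pairs in order; each expression only
    # sees the initial namespace plus the variables defined before it
    ns = dict(namespace)
    for name, expr in defs:
        ns[name] = eval(expr, {'exp': math.exp, '__builtins__': {}}, ns)
    return ns

base = {'z': 0.0, 'k': 2.9, 'n': 0.33, 'i': 0.07, 'alpha': 0.33}

# correct order: y is computed before c uses it
ok = eval_definitions([('y', 'exp(z)*k**alpha*n**(1-alpha)'),
                       ('c', 'y - i')], base)

# wrong order: c refers to y, which does not exist yet
try:
    eval_definitions([('c', 'y - i'),
                      ('y', 'exp(z)*k**alpha*n**(1-alpha)')], base)
except NameError as exc:
    print("failed as expected:", exc)
```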
Calibration section¶
The role of the calibration section is to provide values for the parameters and the variables. The calibration of all parameters appearing in the equations is of course strictly necessary, while the calibration of other types of variables is useful to define the steady-state, or an initial guess for the steady-state.
The calibrated values are also substituted in other sections, including the shocks and options section. This is particularly useful to make the covariance matrix depend on model parameters, or to adapt the state-space to the model’s calibration.
The calibration is given by an associative dictionary mapping symbols to values. The values can be either scalars or expressions. All symbols are treated in the same way, and values can depend upon each other as long as they can be resolved recursively.
In particular, it is possible to define a parameter in order to target a special value of an endogenous variable at the steady-state. This is done in the RBC example, where steady-state labour is targeted with n: 0.33 and the parameter chi is calibrated so that the optimal labour supply equation holds at the steady-state (chi: w/c^sigma/n^eta).
All symbols that are defined in the symbols section but do not appear in the calibration section are initialized with the value nan without issuing any warning.
Note
No clear policy has been established yet about how to deal with undeclared symbols in the calibration section. Avoid them.
Domain section¶
The domain section contains boundaries for each endogenous state as in the following example:
domain:
    k: [0.5*k, 2*k]
    z: [-σ_z*3, σ_z*3]
Note
In the above example, values can refer to the calibration dictionary. Hence, 0.5*k means 50% of steady-state capital. Keys, however, are not replaced.
Exogenous shocks specification¶
Note
This section is out-of-date. Many more shocks options are allowed. See [dolark documentation](http://www.econforge.org/dolark/shocks/) for a more recent description of the shocks.
The type of exogenous shock associated to a model determines the kind of decision rule which will be obtained by the solvers. Shocks can pertain to one of the following categories: continuous i.i.d. shocks (normal law), continuous autocorrelated processes (VAR1 process) or a discrete Markov chain. The type of the shock is specified using yaml type annotations (starting with an exclamation mark).
The exogenous shock section can refer to parameters specified in the calibration section. Here are some examples for each type of shock:
Normal¶
For Dynare and continuous-states models, one has to specify the multivariate distribution of the i.i.d. process for the vector of shocks (otherwise shocks are assumed to be constantly 0). This is done in the distribution section. A Gaussian distribution (the only one supported so far) is specified by supplying the covariance matrix as a list of lists, as in the following example.
exogenous: !Normal
    Sigma: [[sigma_1, 0.0],
            [0.0, sigma_2]]
Note
The shocks syntax is currently rather unforgiving. Normal shocks expect a covariance matrix (i.e. a list of lists), and the keyword is Sigma, not sigma.
Markov chains¶
Markov chains are constructed by providing a list of nodes and a transition matrix.
exogenous: !MarkovChain
    values: [[-0.01, 0.1], [0.01, 0.1]]
    transitions: [[0.9, 0.1], [0.1, 0.9]]
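To see what such a chain encodes, here is a sketch that simulates it directly with numpy (dolo performs this kind of simulation internally; this is not a dolo function):

```python
import numpy as np

values = np.array([[-0.01, 0.1], [0.01, 0.1]])    # one row of exogenous values per node
transitions = np.array([[0.9, 0.1], [0.1, 0.9]])  # transitions[i, j] = prob of moving i -> j

rng = np.random.default_rng(0)
T, state = 1000, 0
path = np.empty(T, dtype=int)
for t in range(T):
    path[t] = state
    state = rng.choice(2, p=transitions[state])

shocks = values[path]  # (T, 2) array: the exogenous values along the simulated path
print(shocks.shape)    # -> (1000, 2)
```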
It is also possible to combine Markov chains together.
exogenous: !Product
    - !MarkovChain
      values: [[-0.01, 0.1], [0.01, 0.1]]
      transitions: [[0.9, 0.1], [0.1, 0.9]]
    - !MarkovChain
      values: [[-0.01, 0.1], [0.01, 0.1]]
      transitions: [[0.9, 0.1], [0.1, 0.9]]
Options¶
The options section contains all the information necessary to solve the model. It can also contain arbitrary additional information. The section follows the mini-language convention, with all calibrated values replaced by scalars and all keywords allowed.
Global solutions require the definition of an approximation space. The lower, upper bounds and approximation orders (number of nodes in each dimension) are defined as in the following example:
options:
    grid: !Cartesian
        n: [10, 50]
    arbitrary_information
Model API¶
For numerical purposes, models are essentially represented as a set of symbols, a calibration and functions representing the various equation types of the model. This data is held in a NumericalModel object, whose API is described in this chapter. Models are usually created by writing a Yaml file as described in the previous chapter, but as we will see below, they can also be written directly.
Numerical Model Object¶
As previously, let’s consider the Real Business Cycle example from the introduction. The model object can be created from the yaml file:
model = yaml_import('models/rbc.yaml')
The object contains a few pieces of meta-data:
display( model.name ) # -> Real Business Cycles
display( model.model_type ) # -> `dtcc`
display( model.model_specs ) # -> `(f,g,v)`
The model.name field contains a possibly long string identifying the model.
The model.model_specs field summarizes which equation types are provided, which determines the solution algorithms that can be used to solve the model. Here (f,g,v) means that arbitrage (short f), transition (short g) and value equations were provided, meaning that time-iteration and value function iteration can both be used to solve the model. When using a yaml file, the model_type and model_specs properties are set automatically.
Note
The model_type field is now always dtcc. Older model types ('dtmscc', 'dtcscc', 'dynare') are not used anymore.
The various attributes of the model directly echo the sections from the Yaml file.
Symbols¶
Symbols are held in the model.symbols dictionary, with each symbol type mapping to a list of symbol strings that will be used in equations. Although these symbols are not needed stricto sensu for computations, they are very useful to calibrate the steady-state or to label graphs and simulations:
display(model.symbols)
Note
Although dictionaries read from the yaml file are unordered, the structure representing them in Python is actually an OrderedDict rather than a dict object. This allows for more predictability and consistency in outputs. The order is conventional: the keys are ordered after the list ‘variables, states, controls, auxiliaries, values, parameters’ (missing types are omitted from the list).
Calibration¶
Each model stores calibration information as model.calibration. It is a special dictionary-like object which contains values for the parameters and initial values (or steady-state values) for all other variables of the model.
It is possible to retrieve one or several variables calibrations:
display( model.calibration['k'] ) # -> 2.9
display( model.calibration['k', 'delta'] ) # -> [2.9, 0.08]
When a key corresponds to one of the symbol groups, one gets one or several vectors of variables instead:
model.calibration['states'] # - > np.array([2.9, 0]) (values of states [z, k])
model.calibration['states', 'controls'] # -> [np.array([2.9, 0]), np.array([0.29, 1.0])] (values of states [z,k] and controls [i,n])
To get a regular dictionary mapping symbol groups to vectors, one can use the .grouped attribute. The values are vectors (1d numpy arrays) of values for each symbol group. For instance, the following code will print the calibrated values for each group:
for (variable_group, variables) in model.calibration.grouped.items():
    print(variable_group, variables)
In order to get a flat dictionary of (key, value) pairs with all the values of the model, one can use model.calibration.flat.
One uses the model.set_calibration() routine to change the calibration of the model. It takes either a dict as an argument, or a set of keyword arguments. All the following calls are valid:
model.set_calibration( {'delta':0.01} )
model.set_calibration( {'i': 'delta*k'} )
model.set_calibration( delta=0.08, k=2.8 )
This method also understands symbolic expressions (as strings), which makes it possible to define symbols as functions of other symbols:
model.set_calibration(beta='1/(1+delta)')
print(model.get_calibration('beta')) # -> nan
model.set_calibration(delta=0.04)
print(model.get_calibration(['beta', 'delta'])) # -> [0.96, 0.04]
Under the hood, the method stores the symbolic relations between symbols. It is precisely equivalent to use the set_calibration method or to change the values in the yaml file. In particular, the calibration order is irrelevant, as long as all parameters can be deduced from one another.
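The order-independent resolution can be sketched as repeated evaluation passes over the symbolic definitions (a toy illustration, not dolo's actual implementation):

```python
def resolve(calibration):
    # calibration maps names to numbers or expression strings; keep
    # sweeping over the unresolved expressions until all succeed
    values = {k: v for k, v in calibration.items() if not isinstance(v, str)}
    pending = {k: v for k, v in calibration.items() if isinstance(v, str)}
    while pending:
        progress = False
        for name, expr in list(pending.items()):
            try:
                values[name] = eval(expr, {'__builtins__': {}}, values)
            except NameError:
                continue  # depends on a symbol not resolved yet
            del pending[name]
            progress = True
        if not progress:
            raise ValueError(f"circular or missing definitions: {sorted(pending)}")
    return values

# the definition order is irrelevant: beta is found from delta either way
print(resolve({'beta': '1/(1+delta)', 'delta': 0.04}))
print(resolve({'delta': 0.04, 'beta': '1/(1+delta)'}))
```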
Functions¶
A model of a specific type can feature various kinds of functions. For instance, a continuous-states-continuous-controls models, solved by iterating on the Euler equations may feature a transition equation \(g\) and an arbitrage equation \(f\). Their signature is respectively \(s_t=g(s_{t-1},x_{t-1},e_t)\) and \(E_t[f(s_t,x_t,s_{t+1},x_{t+1})]\), where \(s_t\), \(x_t\) and \(e_t\) respectively represent a vector of states, controls and shocks. Implicitly, all functions are also assumed to depend on the vector of parameters \(p\).
These functions can be accessed by their type in the model.functions dictionary:
g = model.functions['transition']
f = model.functions['arbitrage']
Let’s call the arbitrage function on the steady-state value, to see the residuals at the deterministic steady-state:
m = model.calibration['exogenous']
s = model.calibration['states']
x = model.calibration['controls']
p = model.calibration['parameters']
res = f(m,s,x,m,s,x,p)
display(res)
The output (res) is a two-element vector representing the residuals of the two arbitrage equations at the steady-state. It should be full of zeros. Is it? Great!
By inspecting the arbitrage function (f?), one can see that its call API is:
f(m,s,x,M,S,X,p,diff=False,out=None)
Since m, s and x are the short names for exogenous shocks, states and controls, their values at date \(t+1\) are denoted by M, S and X. This simple convention prevails in most of the dolo source code: when possible, vectors at date t are denoted with lowercase, and future vectors with uppercase. We have already commented on the presence of the parameter vector p.
The generated functions also give the option to perform in-place computations, when an output vector is given:
out = numpy.ones(2)
f(m,s,x,m,s,x,p,out=out) # out now contains zeros
It is also possible to compute derivatives of the function by setting diff=True
. In that case, the residual and jacobians with respect to the various arguments are returned as a list:
r, r_m, r_s, r_x, r_M, r_S, r_X = f(m,s,x,m,s,x,p,diff=True)
Since there are two states and two controls, the variables r_s, r_x, r_S, r_X
are all 2 by 2 matrices.
The generated functions also allow for efficient vectorized evaluation. In order to evaluate the residuals \(N\) times, one needs to supply matrix arguments, instead of vectors, so that each line corresponds to one value to evaluate as in the following example:
N = 10000
vec_m = m[None,:].repeat(N, axis=0) # we repeat each line N times
vec_s = s[None,:].repeat(N, axis=0)
vec_x = x[None,:].repeat(N, axis=0)
vec_p = p[None,:].repeat(N, axis=0)
# actually, except for vec_s, repeating is not needed since broadcast rules apply
vec_s[:,0] = linspace(2,4,N) # we provide various guesses for the steady-state capital
vec_M = vec_m
vec_S = vec_s
vec_X = vec_x
out = f(vec_m, vec_s, vec_x, vec_M, vec_S, vec_X, vec_p) # now a 10000 x 2 array
out, out_m, out_s, out_x, out_M, out_S, out_X = f(vec_m, vec_s, vec_x, vec_M, vec_S, vec_X, vec_p, diff=True)
The vectorized evaluation is optimized so that it is much faster to make a vectorized call rather than iterate on each point. By default, this is achieved by using the excellent numexpr library.
Note
In the preceding example, the parameters are constant for all evaluations, yet they are repeated. This is not mandatory, and the call f(vec_m, vec_s, vec_x, vec_M, vec_S, vec_X, p) should work exactly as if p had been repeated along the first axis. We follow numba’s guvectorize conventions here, even though they differ slightly from numpy’s.
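A plain numpy analogue of this broadcasting behaviour (not dolo-specific): combining an (N, d) array with a (d,) vector applies the vector to every row, which is why repeating p along the first axis is unnecessary:

```python
import numpy as np

N = 5
s = np.linspace(2, 4, N)[:, None]  # (N, 1): N candidate values of one state
p = np.array([0.33, 0.025])        # (2,): a parameter vector, e.g. [alpha, delta]

res = s ** p                             # p is broadcast along the first axis
rep = s ** p[None, :].repeat(N, axis=0)  # explicit repetition gives the same result
assert np.allclose(res, rep)

print(res.shape)  # -> (5, 2)
```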
Exogenous shock¶
TODO: expand
The exogenous field contains information about the driving process. To get its default discretized version, one can call model.exogenous.discretize().
Options structure¶
The model.options structure holds information required by a particular solution method. For instance, for global methods, model.options['grid'] is supposed to hold the boundaries and the number of nodes at which to interpolate.
display( model.options['grid'] )
Model Specification¶
Variable types¶
The following types of variables can be used in DTCSCC models:
shocks (e) (can be autocorrelated)
states (s)
controls (x)
auxiliaries (y)
rewards (r)
values (v)
expectations (z)
parameters (p)
Symbol types that are present in a model are always listed in that order.
State-space¶
The unknown vector of controls \(x\) is a function \(\varphi\) of the states, both exogenous (\(e\)) and endogenous (\(s\)). In general we have:
\[x_t = \varphi(e_t, s_t)\]
The function \(\varphi\) is typically approximated by the solution algorithm. It can be either a Taylor expansion, or an interpolating object (splines, Smolyak). In both cases, it behaves like a numpy gufunc and can be called on a vector or a list of points:
# for an iid model
dr = approximate_controls(model)
s0 = model.calibration['states']
dr(s0) # evaluates on a vector
dr(s0[None,:].repeat(10, axis=0) ) # works on a list of points too
Valid equations¶
The various equations that can be defined using these symbol types are summarized in the following table. They are also reviewed below in more detail.
When present, these functions can be accessed from the model.functions dictionary using their standard name. For instance, to compute the auxiliary variables at the steady-state, one can run:
# recover steady-state values
e = model.calibration['exogenous']
s = model.calibration['states']
x = model.calibration['controls']
p = model.calibration['parameters']
# compute the vector of auxiliary variables
a = model.functions['auxiliary']
y = a(e,s,x,p)
# it should correspond to the calibrated values:
calib_y = model.calibration['auxiliaries']
assert( abs(y - calib_y).max() < 0.0000001 )
Transitions¶
- name: `transition`
- short name: `g`
Transitions are given by a function \(g\) such that at all times:
\[s_t = g(s_{t-1}, x_{t-1}, \epsilon_t)\]
where \(\epsilon_t\) is a vector of i.i.d. shocks.
Note
In the RBC model, the vector of states is \(s_t=(a_t,k_t)\). The transitions are:
\[a_t = \rho a_{t-1} + \epsilon_t\]
\[k_t = (1-\delta) k_{t-1} + i_{t-1}\]
The yaml file is amended with:
symbols:
    states: [a, k]
    controls: [i]
    shocks: [epsilon]
    ...
equations:
    transition:
        - a = rho*a(-1) + epsilon
        - k = k(-1)*(1-delta) + i(-1)
Note that the transitions are given in the declaration order.
Auxiliary variables¶
- name: `auxiliary`
- short name: `a`
In order to reduce the number of variables, it is useful to define auxiliary variables \(y_t\) using a function \(a\) such that:
\[y_t = a(s_t, x_t)\]
When they appear in an equation, they are automatically substituted by the corresponding expression in \(s_t\) and \(x_t\). Note that auxiliary variables are not explicitly listed in the following definition. Implicitly, wherever states and controls are allowed with the same date in an equation type, auxiliary variables are also allowed with the same date.
Note
In the RBC model, four auxiliary variables are defined: \(y_t, c_t, r_{k,t}\) and \(w_t\). They are a closed-form function of \(a_t, k_t, i_t, n_t\). Defining these variables speeds up computations, since they don’t need to be solved for or interpolated.
Utility function and Bellman equation¶
- name: `utility`
- short name: `u`
The (separable) value equation defines the value \(v_t\) of a given policy as:
\[v_t = u(s_t, x_t) + \beta E_t \left[ v_{t+1} \right]\]
This gives rise to the Bellman equation:
\[v_t = \max_{x_t} \left( u(s_t,x_t) + \beta E_t \left[ v_{t+1} \right] \right)\]
These two equations are characterized by the reward function \(u\) and the discount rate \(\beta\). Function \(u\) defines the vector of symbols rewards.
Since the definition of \(u\) alone is not sufficient, the parameter used for the discount factor must be given to routines that compute the value. Several values can be computed at once, if \(u\) is a vector function and \(\beta\) a vector of discount factors, but in that case it cannot be used to solve for the Bellman equation.
Note
Our RBC example defines the value as \(v_t = \frac{(c_t)^{1-\gamma}}{1-\gamma} + \beta E_t v_{t+1}\). This information is coded using: ## TODO add labour to utility
symbols:
    ...
    rewards: [r]

equations:
    ...
    utility:
        - r = c^(1-gamma)/(1-gamma)

calibration:
    ...
    beta: 0.96 # beta is the default name of the discount
Value¶
- name: `value`
- short name: `w`
A more general updating equation can be useful to express non-separable utilities or prices. The vector of (generalized) values \(v^{\star}_t\) is defined by a function \(w\) such that:
As in the separable case, this function can either be used to compute the value of a given policy \(x=\varphi()\), or to solve the generalized Bellman equation:
Note
Instead of defining the rewards of the RBC example, one can define a value updating equation:
symbols:
    ...
    values: [v]

equations:
    ...
    value:
        - v = c^(1-gamma)/(1-gamma)*(1-n...) + beta*v(1)
Boundaries¶
- name: `controls_lb` and `controls_ub`
- short name: `lb` and `ub`
The optimal controls must also satisfy bounds that are functions of the states. There are two functions \(\underline{b}()\) and \(\overline{b}()\) such that:
\[\underline{b}(s_t) \leq x_t \leq \overline{b}(s_t)\]
Note
In our formulation of the RBC model, we have excluded negative investment, implying \(i_t \geq 0\). On the other hand, labour cannot be negative, so we add lower bounds to the model:
equations:
    ...
    controls_lb:
        - i = 0
        - n = 0
Specifying the lower bound on labour actually has no effect, since agents endogenously choose to work a positive amount of time in order to produce some consumption goods.
As for upper bounds, it is not necessary to impose any: the maximum amount of investment is limited by the Inada conditions on consumption. As for labour n, it can be arbitrarily large without creating any paradox. Thus the upper bounds are omitted (and internally treated as infinite values).
Euler equation¶
- name: `arbitrage` (`equilibrium`)
- short name: `f`
A general formulation of the Euler equation is:
\[E_t \left[ f(s_t, x_t, s_{t+1}, x_{t+1}) \right] = 0\]
Note that the Euler equation and the boundaries interact via “complementarity equations”. Evaluated at one given state, with the vector of controls \(x=(x_1, ..., x_i, ..., x_{n_x})\), the Euler equation gives us the residuals \(r=(f_1, ..., f_i, ..., f_{n_x})\). Suppose that the \(i\)-th control \(x_i\) is supposed to lie in the interval \([ \underline{b}_i, \overline{b}_i ]\). Then one of the following conditions must be true:
\(f_i = 0\)
\(f_i<0\) and \(x_i=\overline{b}_i\)
\(f_i>0\) and \(x_i=\underline{b}_i\)
By definition, this set of conditions is denoted by:
\(f_i = 0 \perp \underline{b}_i \leq x_i \leq \overline{b}_i\)
These notations extend to a vector setting, so that the Euler equations can also be written:
\[E_t \left[ f(s_t, x_t, s_{t+1}, x_{t+1}) \right] = 0 \perp \underline{b}(s_t) \leq x_t \leq \overline{b}(s_t)\]
Specifying the boundaries together with Euler equation, or providing them separately is exactly equivalent. In any case, when the boundaries are finite and occasionally binding, some attention should be devoted to write the Euler equations in a consistent manner. In particular, note that the Euler equations are order-sensitive.
The Euler conditions, together with the complementarity conditions, typically come from the Kuhn-Tucker conditions associated with the Bellman problem, but that is not true in general.
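The three cases can be checked mechanically. Here is a sketch of such a test for a single control (a hypothetical helper, not part of dolo; the tolerance is arbitrary):

```python
def complementarity_holds(f_i, x_i, lb_i, ub_i, tol=1e-8):
    # check the complementarity condition f_i = 0 with lb_i <= x_i <= ub_i
    if abs(f_i) <= tol:                 # interior case: zero residual, x in bounds
        return lb_i - tol <= x_i <= ub_i + tol
    if f_i < 0:                         # negative residual: upper bound must bind
        return abs(x_i - ub_i) <= tol
    return abs(x_i - lb_i) <= tol       # positive residual: lower bound must bind

print(complementarity_holds(0.0, 0.5, 0.0, 1.0))  # -> True  (interior solution)
print(complementarity_holds(0.3, 0.0, 0.0, 1.0))  # -> True  (lower bound binds)
print(complementarity_holds(0.3, 0.5, 0.0, 1.0))  # -> False (positive residual off the bound)
```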
Note
The RBC model has two Euler equations associated with investment and labour supply respectively. They are added to the model as:
arbitrage:
- 1 - beta*(c/c(1))^(sigma)*(1-delta+rk(1)) | 0 <= i <= inf
- w - chi*n^eta*c^sigma | 0 <= n <= inf
Putting the complementarity conditions next to the Euler equations, instead of entering them as separate equations, helps to check the sign of the Euler residuals when constraints are binding. Here, when investment is less desirable, the first expression becomes larger. When the representative agent is prevented from investing less because of the constraint (i.e. \(i_t=0\)), the expression is positive, consistently with the complementarity conventions.
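As a standalone illustration (this helper is hypothetical, not part of dolo), the three admissible complementarity cases can be checked numerically for a vector of residuals and controls:

```python
# Hypothetical helper (not part of dolo): check whether residual/control pairs
# satisfy the complementarity condition  f_i = 0  ⊥  lb_i <= x_i <= ub_i.
import numpy as np

def complementarity_ok(f, x, lb, ub, tol=1e-8):
    """True if one of the three admissible cases holds for every component."""
    f, x, lb, ub = map(np.asarray, (f, x, lb, ub))
    interior = np.abs(f) < tol                      # f_i = 0
    at_upper = (f < 0) & (np.abs(x - ub) < tol)     # f_i < 0 and x_i = ub_i
    at_lower = (f > 0) & (np.abs(x - lb) < tol)     # f_i > 0 and x_i = lb_i
    return bool(np.all(interior | at_upper | at_lower))

# Example: i = 0 binding with a positive Euler residual is admissible
print(complementarity_ok(f=[0.3], x=[0.0], lb=[0.0], ub=[np.inf]))  # True
print(complementarity_ok(f=[0.3], x=[0.5], lb=[0.0], ub=[np.inf]))  # False
```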
Expectations¶
- name: `expectation`
- short name: `h`
The vector of explicit expectations \(z_t\) is defined by a function \(h\) such that:
\(z_t = E_t \left[ h\left(s_{t+1}, x_{t+1}\right) \right]\)
In the RBC example, one can define the expected marginal value of one additional unit of good invested today:
\(m_t = \beta \, E_t\left[ (c_{t+1})^{-\sigma} (1-\delta+r_{k,t+1}) \right]\)
It is a pure expectational variable in the sense that it is solely determined by future states and decisions. In the model file, it would be defined as:
symbols:
...
expectations: [m]
equations:
...
- m = beta*(c(1))^(-sigma)*(1-delta+rk(1))
Generalized expectations¶
- name: `expectation_2`
- short name: `h_2`
The vector of generalized explicit expectations \(z_t\) is defined by a function \(h^{\star}\) such that:
Euler equation with expectations¶
- name: `arbitrage_2` (`equilibrium_2`)
- short name: `f_2`
If expectations are defined using one of the two preceding definitions, the Euler equation can be rewritten as:
\(0 = f\left(s_{t}, x_{t}, z_t \right)\)
Note
Given the definition of the expectation variable \(m_t\), today’s consumption is given by \(c_t = m_t^{-\frac{1}{\sigma}}\), so the Euler equations are rewritten as:
arbitrage_2:
- 1 - (c)^(sigma)*m | 0 <= i <= inf
- w - chi*n^eta*c^sigma | 0 <= n <= inf
Note the type of the arbitrage equation (arbitrage_2
instead of arbitrage
).
However \(c_t\) is not a control itself,
but the controls \(i_t, n_t\) can be easily deduced:
\(n_t = \left( (1-\alpha)\, z_t\, k_t^{\alpha}\, m_t / \chi \right)^{\frac{1}{\eta+\alpha}}\)
\(i_t = z_t\, k_t^{\alpha}\, n_t^{1-\alpha} - m_t^{-\frac{1}{\sigma}}\)
This translates into the following YAML code:
equations:
- n = ((1-alpha)*z*k^alpha*m/chi)^(1/(eta+alpha))
- i = z*k^alpha*n^(1-alpha) - m^(-1/sigma)
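As a numerical sanity check of the closed forms above, a standalone sketch (not dolo code; the parameter values below are illustrative, not the model's calibration) recovers \((n, i)\) from the state \((z, k)\) and the expectation variable \(m\):

```python
# Recover the controls from the state and the expectation variable, using the
# closed forms above. Illustrative parameter values (assumptions, not the
# RBC calibration used elsewhere in this document).
alpha, eta, sigma, chi = 0.33, 1.0, 1.0, 2.0

def direct_response(z, k, m):
    n = ((1 - alpha) * z * k**alpha * m / chi) ** (1 / (eta + alpha))
    c = m ** (-1 / sigma)                  # consumption implied by m
    i = z * k**alpha * n**(1 - alpha) - c  # resource constraint
    return n, i

n, i = direct_response(z=1.0, k=9.0, m=1.1)
```

By construction, the labour Euler condition \(w = \chi n^{\eta} c^{\sigma}\) holds exactly at the returned values, which makes the formula easy to verify.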
Direct response function¶
- name: `direct_response`
- short name: `d`
In some simple cases, there is a function \(d\) giving an explicit definition of the controls:
\(x_t = d\left(m_t, s_t\right)\)
Compared to the preceding Euler equation, this formulation saves computational time by removing the need to solve a nonlinear system to recover the controls implicitly defined by the Euler equation.
Terminal conditions¶
- name: `terminal_condition`
- short name: `f_T`
When solving a model over a finite number \(T\) of periods, there must be a terminal condition defining the controls for the last period. This is a function \(f^T\) such that:
Terminal conditions¶
- name: `terminal_condition`
- short name: `f_T_2`
Sometimes the terminal condition is given as an explicit choice for the controls in the last period. This defines function \(f^{T,\star}\) such that:
Solution algorithms¶
Steady-state¶
The deterministic steady state of a model corresponds to the steady-state value \(\overline{m}\) of the exogenous process together with values \(\overline{s}\), \(\overline{x}\) of the states and controls satisfying:
\(\overline{s} = g\left(\overline{m}, \overline{s}, \overline{x}, \overline{m} \right)\)
\(0 = f\left(\overline{m}, \overline{s}, \overline{x}, \overline{m}, \overline{s}, \overline{x} \right)\)
where \(g\) is the state transition function, and \(f\) is the arbitrage equation. Note that the shocks, \(\epsilon\), are held at their deterministic mean.
Finding the steady state consists in solving the system of arbitrage equations for the steady-state values of the controls, \(\overline{x}\), which can then be used along with the transition function to find the steady-state values of the state variables, \(\overline{s}\).
dolo.algos.steady_state.residuals(model: dolo.compiler.model.Model, calib=None) → Dict[str, List[float]]¶
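Outside of dolo, the steady-state computation amounts to a small root-finding problem. A minimal sketch for the neoclassical growth model (not dolo's internal code; illustrative parameter values, and `scipy` is assumed available):

```python
# Solve the steady-state system  s = g(s, x)  and  0 = f(s, x, s, x)
# for the growth model, with state s = k and control x = i.
import numpy as np
from scipy.optimize import fsolve

alpha, beta, delta = 0.33, 0.99, 0.025

def transition(k, i):                    # g: capital accumulation
    return (1 - delta) * k + i

def arbitrage(k, i):                     # f: Euler residual at constant (k, i)
    rk = alpha * k**(alpha - 1)          # rental rate of capital
    return 1 - beta * (1 - delta + rk)   # consumption cancels at steady state

def residuals(v):
    k, i = v
    return [transition(k, i) - k, arbitrage(k, i)]

kbar, ibar = fsolve(residuals, x0=[10.0, 0.5])
# closed form for comparison: kbar = (alpha/(1/beta - 1 + delta))**(1/(1-alpha))
```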
Perturbation¶
dolo.algos.perturbation.perturb(model, verbose=False, steady_state=None, eigmax=0.999999, solve_steady_state=False, order=1, details=True)¶
Compute first order approximation of optimal controls.
Parameterized expectations¶
We consider an fgh model, that is, a model of the form:
\(s_t = g\left(s_{t-1}, x_{t-1}, \epsilon_t \right)\)
\(0 = f\left(s_{t}, x_{t}, E_t[h(s_{t+1}, x_{t+1})] \right)\)
where \(g\) is the state transition function, \(f\) is the arbitrage equation, and \(h\) is the expectations function (more accurately, it is the function over which expectations are taken).
The parameterized expectations algorithm consists in approximating the expectations function, \(h\), and solving for the associated optimal controls, \(x_t = x(s_t)\).
- At step \(n\), with a current guess of the control, \(x(s_t) = \varphi^n(s_t)\), and of the expectations function, \(\psi^n(s_t) \approx h(s_t, \varphi^n(s_t))\):
Compute the conditional expectation \(z_t = E_t\left[\psi^n(s_{t+1})\right]\)
Given the expectation, update the optimal control by solving \(0 = f\left(s_{t}, \varphi^{n+1}(s_t), z_t \right)\)
Given the optimal control, update the expectations function \(\psi^{n+1}(s_t) = h(s_t, \varphi^{n+1}(s_t))\)
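The three steps can be reproduced on a toy model. The sketch below (not dolo code) runs the parameterized-expectations loop on the Brock-Mirman model (log utility, full depreciation), whose exact policy \(c(k) = (1-\alpha\beta)k^{\alpha}\) is known. A damping factor is added to the policy update, as is common for this algorithm, and the expectation is trivial here because the toy model is deterministic; with shocks, `z` would be a quadrature average over next period's exogenous states.

```python
# Parameterized expectations on Brock-Mirman: log utility, full depreciation,
# so the Euler equation is 1/c_t = z_t with z_t = psi(k_{t+1}) and
# h(k, c) = beta * alpha * k**(alpha-1) / c.
import numpy as np

alpha, beta = 0.33, 0.96
kgrid = np.linspace(0.05, 0.5, 200)
lam = 0.5                                # damping factor for the update

def h(k, c):                             # term inside the Euler expectation
    return beta * alpha * k**(alpha - 1) / c

c = 0.5 * kgrid**alpha                   # initial guess for the control
psi = h(kgrid, c)                        # initial guess for the expectation

for _ in range(1000):
    k_next = kgrid**alpha - c            # capital tomorrow (full depreciation)
    z = np.interp(k_next, kgrid, psi)    # step 1: z = psi^n(k')
    c_new = 1.0 / z                      # step 2: solve 0 = 1/c - z for c
    if np.max(np.abs(c_new - c)) < 1e-10:
        break
    c = lam * c_new + (1 - lam) * c      # damped policy update
    psi = h(kgrid, c)                    # step 3: psi^{n+1}(k) = h(k, c(k))

exact = (1 - alpha * beta) * kgrid**alpha   # known closed-form policy
```

Without damping the update oscillates on this calibration; with it, the loop converges to the exact policy up to interpolation error.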
Perfect foresight¶
Consider a series for the exogenous process \((m_t)_{0 \leq t \leq T}\). The perfect foresight problem consists in finding the path of optimal controls \((x_t)_{0 \leq t \leq T}\) and corresponding states \((s_t)_{0 \leq t \leq T}\) such that:
\(s_t = g\left(m_{t-1}, s_{t-1}, x_{t-1}, \epsilon_t \right)\)
\(0 = E_t \left( f\left(m_{t}, s_{t}, x_{t}, m_{t+1}, s_{t+1}, x_{t+1}\right) \right) \ \perp \ \underline{u} \leq x_t \leq \overline{u}\)
Special conditions apply for the initial state and controls. The initial state \(s_0\) is given exogenously. Final states and controls are determined by assuming that the exogenous process satisfies \(m_t=m_T\) for all \(t\geq T\) and that the optimality conditions are satisfied in the last period:
\(f(m_T, s_T, x_T, m_T, s_T, x_T) \perp \underline{u} \leq x_T \leq \overline{u}\).
We assume that \(\underline{u}\) and \(\overline{u}\) are constant. This is not a strong restriction, since the model can always be reformulated to meet it by adding more equations.
The stacked system of equations satisfied by the solution is:
\(s_0 = s_0\)
\(f(s_0, x_0, s_1, x_1) \perp \underline{u} \leq x_0 \leq \overline{u}\)
\(s_1 = g(s_0, x_0, \epsilon_1)\)
\(f(s_1, x_1, s_2, x_2) \perp \underline{u} \leq x_1 \leq \overline{u}\)
\(\vdots\)
\(s_T = g(s_{T-1}, x_{T-1}, \epsilon_T)\)
\(f(s_T, x_T, s_T, x_T) \perp \underline{u} \leq x_T \leq \overline{u}\)
The system is solved using a nonlinear solver.
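For a concrete picture, the stacked system can be assembled and handed to a generic nonlinear solver. The sketch below (not dolo's implementation) does this for the growth model with \(\sigma=1\), ignoring the complementarity constraints for simplicity; `scipy` is assumed available and the calibration is illustrative.

```python
# Stacked-time system for the growth model: unknowns are the whole paths
# (k_1..k_T, i_0..i_T); the terminal Euler equation pins down k_T at the
# steady state. Complementarity constraints are ignored in this sketch.
import numpy as np
from scipy.optimize import root

alpha, beta, delta = 0.33, 0.99, 0.025
kbar = (alpha / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))
T = 50
k0 = 0.9 * kbar                          # start below the steady state

def euler(k, i, k1, i1):                 # Euler residual, log utility
    c, c1 = k**alpha - i, k1**alpha - i1
    return 1 - beta * (c / c1) * (1 - delta + alpha * k1**(alpha - 1))

def stacked(v):
    k = np.concatenate([[k0], v[:T]])    # k_0 .. k_T (k_0 given)
    i = v[T:]                            # i_0 .. i_T
    trans = k[1:] - ((1 - delta) * k[:-1] + i[:-1])      # T transitions
    eul = [euler(k[t], i[t], k[t + 1], i[t + 1]) for t in range(T)]
    terminal = euler(k[T], i[T], k[T], i[T])             # last period
    return np.concatenate([trans, eul, [terminal]])

# initial guess: a linear capital path with consistent investment
k_guess = np.linspace(k0, kbar, T + 1)
i_guess = np.empty(T + 1)
i_guess[:T] = k_guess[1:] - (1 - delta) * k_guess[:T]
i_guess[T] = delta * kbar
sol = root(stacked, np.concatenate([k_guess[1:], i_guess]))
k_path = np.concatenate([[k0], sol.x[:T]])
```

The terminal equation forces \(k_T\) to the steady-state value, so the solved path converges toward `kbar` over the horizon.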
Note
TODO: explain the subtle timing convention concerning the exogenous shock.
dolo.algos.perfect_foresight.deterministic_solve(model, shocks=None, s1=None, T=100, ignore_constraints=False, maxit=100, initial_guess=None, verbose=True, solver='ncpsolve', tol=1e-06)¶
Computes a perfect foresight simulation using a stacked-time algorithm.
The initial state is specified either by providing a series of exogenous shocks and assuming the model is initially in equilibrium with the first value of the shock, or by specifying an initial value for the states.
- Parameters
- model : Model
Model to be solved
- shocks : array-like, dict, or pandas.DataFrame
A specification of the shocks to the model. Can be any of the following (note that by “declaration order” below we mean the order of model.symbols[“shocks”]):
A 1d numpy array-like specifying a time series for a single shock, or all shocks stacked into a single array.
A 2d numpy array where each column specifies the time series for one of the shocks in declaration order. This must be an N by number of shocks 2d array.
A dict where keys are strings found in model.symbols[“shocks”] and values are a time series of values for that shock. For model shocks that do not appear in this dict, the shock is set to the calibrated value. Note that this interface is the most flexible, as it allows the user to pass values for only a subset of the model shocks and allows the passed time series to be of different lengths.
A DataFrame where columns map shock names into time series. The same assumptions and behavior as in the dict case apply here.
If nothing is given here, shocks is set equal to the calibrated values found in model.calibration[“shocks”] for all periods.
If the length of any time series in shocks is less than T (see below), it is assumed that that particular shock will remain at the final given value for the duration of the simulation.
- s1 : ndarray or dict
a vector with the values of the initial states
- T : int
horizon for the perfect foresight simulation
- maxit : int
maximum number of iterations for the nonlinear solver
- verbose : boolean
if True, the solver displays iterations
- tol : float
stopping criterion for the nonlinear solver
- ignore_constraints : bool
if True, complementarity constraints are ignored.
- Returns
- pandas.DataFrame
a DataFrame with T+1 observations of the model variables along the simulation (states, controls, auxiliaries). The first observation is the steady state corresponding to the first value of the shocks. The simulation should return to a steady state corresponding to the last value of the exogenous shocks.
Time iteration¶
We consider a model with the form:
\(s_t = g\left(m_{t-1}, s_{t-1}, x_{t-1}, m_t \right)\)
\(0 = E_t \left[ f\left(m_t, s_{t}, x_{t}, m_{t+1}, s_{t+1}, x_{t+1} \right) \right]\)
where \(g\) is the state transition function, and \(f\) is the arbitrage equation.
The time iteration algorithm consists in approximating the optimal controls as a function of the exogenous and endogenous states: \(x_t = \varphi(m_t,s_t)\).
At step \(n\), the current guess for the control, \(x_t = \varphi^n(m_t, s_t)\), serves as the control exercised next period:
Given the current guess, find the current period’s controls \(\varphi^{n+1}(m_t,s_t)\) for any \((m_t,s_t)\) by solving the arbitrage equation: \(0 = E_t \left[ f\left(m_t, s_{t}, \varphi^{n+1}(m_t, s_t), m_{t+1}, s_{t+1}, \varphi^{n}(m_{t+1}, s_{t+1}) \right) \right]\), where \(s_{t+1} = g(m_t, s_t, \varphi^{n+1}(m_t, s_t), m_{t+1})\).
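The iteration above can be reproduced on a toy model. The sketch below (not dolo code) applies it to the Brock-Mirman model (log utility, full depreciation), where the exact policy \(c(k)=(1-\alpha\beta)k^{\alpha}\) provides a check; there is no exogenous process, so the expectation is trivial, and `scipy` is assumed available for the inner scalar solve.

```python
# Backward time iteration: at each grid point, solve the Euler equation for
# today's consumption, taking next period's policy phi^n as given.
import numpy as np
from scipy.optimize import brentq

alpha, beta = 0.33, 0.96
kgrid = np.linspace(0.05, 0.5, 50)
phi = 0.5 * kgrid**alpha                 # initial guess phi^0

def euler_residual(c, k, phi_n):
    k1 = k**alpha - c                    # transition (full depreciation)
    c1 = np.interp(k1, kgrid, phi_n)     # next period's control, phi^n(k')
    return 1 / c - beta * alpha * k1**(alpha - 1) / c1

for _ in range(200):
    phi_new = np.array([
        brentq(euler_residual, 1e-8, k**alpha - 1e-8, args=(k, phi))
        for k in kgrid                   # inner solve at every grid point
    ])
    if np.max(np.abs(phi_new - phi)) < 1e-10:
        break                            # successive approximations converged
    phi = phi_new

exact = (1 - alpha * beta) * kgrid**alpha   # known closed-form policy
```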
dolo.algos.time_iteration.time_iteration(model, dr0=None, dprocess=None, with_complementarities=True, verbose=True, grid={}, maxit=1000, inner_maxit=10, tol=1e-06, hook=None, details=False)¶
Finds a global solution for model using backward time iteration. This algorithm iterates on the residuals of the arbitrage equations.
- Parameters
- model : Model
model to be solved
- verbose : boolean
if True, display iterations
- dr0 : decision rule
initial guess for the decision rule
- dprocess : DiscretizedProcess (model.exogenous.discretize())
discretized process to be used
- with_complementarities : boolean (True)
if False, complementarity conditions are ignored
- grid : grid options
overload the values set in the options:grid section
- maxit : int
maximum number of iterations
- inner_maxit : int
maximum number of iterations for the inner solver
- tol : float
tolerance criterion for successive approximations
- hook : Callable
function to be called within each iteration, useful for debugging purposes
- Returns
- decision rule :
approximated solution
dolo.algos.time_iteration.residuals_simple(f, g, s, x, dr, dprocess, parms)¶
Value function iteration¶
dolo.algos.value_iteration.evaluate_policy(model, mdr, tol=1e-08, maxit=2000, grid={}, verbose=True, dr0=None, hook=None, integration_orders=None, details=False, interp_type='cubic')¶
Compute the value function corresponding to policy dr.
dolo.algos.value_iteration.value_iteration(model, grid={}, tol=1e-06, maxit=500, maxit_howard=20, verbose=False, details=True)¶
Solve for the value function and associated Markov decision rule by iterating over the value function.
Inspecting the solution¶
The output of most solution methods is a decision rule dr for the controls as a function of the exogenous and endogenous states. This decision rule can be called using one of the following methods:
dr.eval_s(s: array): function of the endogenous state. Works only if the exogenous process is i.i.d.
dr.eval_ms(m: array, s: array): function of exogenous and endogenous values. Works only if the exogenous process is continuous.
dr.eval_is(i: int, s: array): function of the exogenous index and endogenous values. Works only if indexed discrete values are associated with the exogenous process.
There is also a __call__ method, which tries to make a sensible call based on the argument types. Hence dr(0, s) will behave as in the third example.
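The dispatch logic of __call__ can be illustrated with a mock object (hypothetical, not dolo's actual class):

```python
# A minimal mock showing how an integer first argument can be routed to
# eval_is while an array-like goes to eval_ms. Not dolo's implementation.
import numpy as np

class MockDecisionRule:
    def eval_is(self, i, s):
        return f"indexed node {i}, state {np.asarray(s)}"

    def eval_ms(self, m, s):
        return f"exogenous {np.asarray(m)}, state {np.asarray(s)}"

    def __call__(self, a, s):
        if isinstance(a, (int, np.integer)):   # discrete exogenous index
            return self.eval_is(a, s)
        return self.eval_ms(a, s)              # continuous exogenous value

dr = MockDecisionRule()
dr(0, [9.35])        # routed to eval_is
dr([1.0], [9.35])    # routed to eval_ms
```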
Tabulating a decision rule¶
Dolo provides a convenience function to plot the values of a decision rule against different values of a state:
dolo.algos.simulations.tabulate(model, dr, state, bounds=None, n_steps=100, s0=None, i0=None, m0=None, **kwargs)¶
Stochastic simulations¶
Given a model object and a corresponding decision rule, one can run N stochastic simulations over T periods using the simulate function. The resulting object is a 3-dimensional DataArray with the following labelled axes:
- T: date of the simulation (range(0,T))
- N: index of the simulation (range(0,N))
- V: variables of the model (model.variables)
dolo.algos.simulations.simulate(model, dr, process=None, N=1, T=40, s0=None, i0=None, m0=None, driving_process=None, seed=42, stochastic=True)¶
Simulate a model using the specified decision rule.
- Parameters
- model: Model
- dr: decision rule
- process:
- s0: ndarray
initial state where all simulations start
- driving_process: ndarray
realization of exogenous driving process (drawn randomly if None)
- N: int
number of simulations
- T: int
horizon for the simulations
- seed: int
used to initialize the random number generator. Use it to replicate the exact same results across runs
- discard: boolean (False)
if True, all simulations containing at least one non-finite value are discarded
- Returns
- xarray.DataArray:
a T x N x n_v array, where n_v is the number of variables.
Impulse response functions¶
For continuously valued exogenous shocks, one can compute impulse response functions:
dolo.algos.simulations.response(model, dr, varname, T=40, impulse: float = None)¶
Graphing nonstochastic simulations¶
Given one or many nonstochastic simulations of a model, obtained with response or deterministic_solve, it is possible to quickly create an IRF plot for multiple variables.
dolo.misc.graphs.plot_irfs(sims, variables=None, titles=None, layout=None, horizon=None, figsize=None, plot_options={}, line_options=None)¶
Examples¶
Sudden Stop Model¶
In this notebook we replicate the baseline model exposed in
From Sudden Stops to Fisherian Deflation, Quantitative Theory and Policy
by Anton Korinek and Enrique G. Mendoza
The file sudden_stop.yaml, which is printed below, describes the model and must be included in the same directory as this notebook.
importing necessary functions¶
%pylab inline
Populating the interactive namespace from numpy and matplotlib
from dolo import *
from dolo.algos.dtmscc.time_iteration import time_iteration
from dolo.algos.dtmscc.simulations import plot_decision_rule, simulate
writing the model¶
cd ../models
C:\Users\Pablo\Documents\GitHub\dolo\examples\models
filename = 'https://raw.githubusercontent.com/EconForge/dolo/master/examples/models/compat/sudden_stop.yaml'
filename = 'sudden_stop.yaml'
# the model file is coded in a separate file called sudden_stop.yaml
# note how the borrowing constraint is implemented as complementarity condition
pcat(filename)
# This file adapts the model described in
# "From Sudden Stops to Fisherian Deflation, Quantitative Theory and Policy"
# by Anton Korinek and Enrique G. Mendoza

name: Sudden Stop (General)

model_spec: mfga

symbols:
    markov_states: [y]
    states: [l]
    controls: [b, lam]
    auxiliaries: [c]
    values: [V, Vc]
    parameters: [beta, R, sigma, a, mu, kappa, delta_y, pi, lam_inf]

equations:
    transition:
        - l = b(-1)
    arbitrage:
        - lam = b/c
        - beta*(c(1)/c)^(-sigma)*R - 1   | lam_inf <= lam <= inf
    auxiliary:
        - c = 1 + y + l*R - b
    value:
        - V = c^(1.0-sigma)/(1.0-sigma) + beta*V(1)
        - Vc = c^(1.0-sigma)/(1.0-sigma)

discrete_transition:
    MarkovChain:
        - [[ 1.0-delta_y ],   # bad state
           [ 1.0 ]]           # good state
        - [[ 0.5, 1-0.5 ],    # probabilities [p(L|L), p(H|L)]
           [ 0.5, 0.5 ]]      # probabilities [p(L|H), p(H|H)]

calibration:
    beta: 0.95
    R: 1.03
    sigma: 2.0
    a: 1/3
    mu: 0.8
    kappa: 1.3
    delta_y: 0.03
    pi: 0.05
    lam_inf: -0.2
    y: 1.0
    c: 1.0 + y
    b: 0.0
    l: 0.0
    lam: 0.0
    V: c^(1.0-sigma)/(1.0-sigma)/(1.0-beta)
    Vc: c^(1.0-sigma)/(1.0-sigma)

options:
    approximation_space:
        a: [-1.0]
        b: [ 1.0]
        orders: [10]
importing the model¶
Note that the residuals are not zero at the calibration we supply. This is because the representative agent is impatient (\(\beta < 1/R\)). In this case it doesn’t matter.
By default, the calibrated values of the endogenous variables are used as a (constant) starting point for the decision rules.
model = yaml_import('sudden_stop.yaml')
model
Model type detected as 'dtmscc'
Model object:
------------
- name: "Sudden Stop (General)"
- type: "dtmscc"
- file: "sudden_stop.yaml"
- residuals:
transition
1 : 0.0000 : l = b(-1)
arbitrage
1 : 0.0000 : lam = b/c
2 : -0.0215 : beta*(c(1)/c)**(-sigma)*R - 1 | lam_inf <= lam <= inf
auxiliary
1 : 0.0000 : c = 1 + y + l*R - b
value
1 : 0.0000 : V = c**(1.0-sigma)/(1.0-sigma) + beta*V(1)
2 : 0.0000 : Vc = c**(1.0-sigma)/(1.0-sigma)
# to avoid numerical glitches we choose a relatively high number of grid points
mdr = time_iteration(model, verbose=True, orders=[1000])
Solving WITH complementarities.
------------------------------------------------
| N | Error | Gain | Time | nit |
------------------------------------------------
| 1 | 5.014e-01 | nan | 1.878 | 7 |
| 2 | 1.600e-01 | 0.319 | 0.235 | 6 |
| 3 | 7.472e-02 | 0.467 | 0.221 | 6 |
| 4 | 4.065e-02 | 0.544 | 0.198 | 5 |
| 5 | 2.388e-02 | 0.587 | 0.204 | 5 |
| 6 | 1.933e-02 | 0.809 | 0.354 | 9 |
| 7 | 1.609e-02 | 0.832 | 0.234 | 6 |
| 8 | 1.370e-02 | 0.852 | 0.200 | 5 |
| 9 | 1.187e-02 | 0.867 | 0.148 | 4 |
| 10 | 1.049e-02 | 0.883 | 0.112 | 3 |
| 11 | 9.381e-03 | 0.894 | 0.138 | 3 |
| 12 | 8.467e-03 | 0.903 | 0.120 | 3 |
| 13 | 7.711e-03 | 0.911 | 0.126 | 3 |
| 14 | 7.060e-03 | 0.916 | 0.123 | 3 |
| 15 | 6.503e-03 | 0.921 | 0.078 | 2 |
| 16 | 6.016e-03 | 0.925 | 0.102 | 2 |
| 17 | 4.611e-03 | 0.766 | 0.083 | 2 |
| 18 | 8.356e-04 | 0.181 | 0.101 | 2 |
| 19 | 8.879e-05 | 0.106 | 0.056 | 1 |
| 20 | 1.449e-05 | 0.163 | 0.060 | 1 |
| 21 | 2.483e-06 | 0.171 | 0.056 | 1 |
| 22 | 2.605e-07 | 0.105 | 0.056 | 1 |
------------------------------------------------
Elapsed: 4.91300010681 seconds.
------------------------------------------------
# produce the plots
n_steps = 100
figure(figsize=(10,6))
subplot(121)
plot_decision_rule(model, mdr, 'l', 'b', i0=0, n_steps=n_steps, label='$b_t$ (bad state)' )
plot_decision_rule(model, mdr, 'l', 'b', i0=1, n_steps=n_steps, label='$b_t$ (good state)' )
plot_decision_rule(model, mdr, 'l', 'l', i0=1, n_steps=n_steps, linestyle='--', color='black' )
#plot(df['l'], df['l'], linestyle='--', color='black')
# to plot the borrowing limit, we produce a dataframe df which contains all series
# (note that we don't supply a variable name to plot, only the state 'l')
lam_inf = model.get_calibration('lam_inf')
df = plot_decision_rule(model, mdr, 'l', i0=0, n_steps=n_steps)
plot(df['l'], lam_inf*df['c'], linestyle='--', color='black')
xlabel('$l_t$')
legend(loc= 'upper left')
subplot(122)
plot_decision_rule(model, mdr, 'l', 'c', i0=0, n_steps=n_steps, label='$c_t$ (bad state)' )
plot_decision_rule(model, mdr, 'l', 'c', i0=1, n_steps=n_steps, label='$c_t$ (good state)' )
legend(loc= 'lower right')
xlabel('$l_t$')
suptitle("Decision Rules")
<matplotlib.text.Text at 0x179751d0>
## stochastic simulations
i_0 = 1 # we start from the good state
sim = simulate(model, mdr, i_0, s0=0.5, n_exp=1, horizon=100) # markov_indices=markov_indices)
subplot(211)
plot(sim['y'])
subplot(212)
plot(sim['b'])
[<matplotlib.lines.Line2D at 0x18f07668>]
Sensitivity analysis¶
Here we want to compare the saving behaviour as a function of risk aversion \(\sigma\). We contrast the baseline \(\sigma=2\) with the high aversion scenario \(\sigma=16\).
# we solve the model with sigma=16
model.set_calibration(sigma=16.0)
mdr_high_gamma = time_iteration(model, verbose=True, orders=[1000])
Solving WITH complementarities.
------------------------------------------------
| N | Error | Gain | Time | nit |
------------------------------------------------
| 1 | 5.133e-01 | nan | 0.395 | 10 |
| 2 | 1.703e-01 | 0.332 | 0.295 | 8 |
| 3 | 8.435e-02 | 0.495 | 0.284 | 7 |
| 4 | 5.005e-02 | 0.593 | 0.277 | 7 |
| 5 | 3.292e-02 | 0.658 | 0.281 | 7 |
| 6 | 2.313e-02 | 0.703 | 0.281 | 7 |
| 7 | 1.702e-02 | 0.736 | 0.268 | 7 |
| 8 | 1.295e-02 | 0.761 | 0.267 | 7 |
| 9 | 1.011e-02 | 0.780 | 0.286 | 7 |
| 10 | 8.045e-03 | 0.796 | 0.271 | 7 |
| 11 | 6.501e-03 | 0.808 | 0.283 | 7 |
| 12 | 5.316e-03 | 0.818 | 0.268 | 7 |
| 13 | 4.387e-03 | 0.825 | 0.249 | 6 |
| 14 | 3.647e-03 | 0.831 | 0.294 | 7 |
| 15 | 3.048e-03 | 0.836 | 0.279 | 7 |
| 16 | 2.558e-03 | 0.839 | 0.256 | 6 |
| 17 | 2.206e-03 | 0.863 | 0.235 | 6 |
| 18 | 2.010e-03 | 0.911 | 0.334 | 6 |
| 19 | 1.842e-03 | 0.916 | 0.330 | 5 |
| 20 | 1.699e-03 | 0.922 | 0.307 | 5 |
| 21 | 1.580e-03 | 0.930 | 0.314 | 5 |
| 22 | 1.472e-03 | 0.932 | 0.316 | 5 |
| 23 | 1.374e-03 | 0.933 | 0.302 | 5 |
| 24 | 1.289e-03 | 0.938 | 0.303 | 5 |
| 25 | 1.210e-03 | 0.939 | 0.316 | 5 |
| 26 | 1.137e-03 | 0.940 | 0.310 | 5 |
| 27 | 1.073e-03 | 0.944 | 0.263 | 4 |
| 28 | 1.013e-03 | 0.944 | 0.259 | 4 |
| 29 | 9.575e-04 | 0.945 | 0.202 | 3 |
| 30 | 9.075e-04 | 0.948 | 0.204 | 3 |
| 31 | 8.600e-04 | 0.948 | 0.194 | 3 |
| 32 | 8.166e-04 | 0.950 | 0.211 | 3 |
| 33 | 7.764e-04 | 0.951 | 0.185 | 3 |
| 34 | 7.384e-04 | 0.951 | 0.186 | 3 |
| 35 | 7.035e-04 | 0.953 | 0.204 | 3 |
| 36 | 6.705e-04 | 0.953 | 0.145 | 2 |
| 37 | 6.396e-04 | 0.954 | 0.150 | 2 |
| 38 | 6.108e-04 | 0.955 | 0.152 | 2 |
| 39 | 5.835e-04 | 0.955 | 0.142 | 2 |
| 40 | 5.579e-04 | 0.956 | 0.138 | 2 |
| 41 | 5.338e-04 | 0.957 | 0.153 | 2 |
| 42 | 5.110e-04 | 0.957 | 0.134 | 2 |
| 43 | 4.895e-04 | 0.958 | 0.151 | 2 |
| 44 | 4.691e-04 | 0.958 | 0.135 | 2 |
| 45 | 4.499e-04 | 0.959 | 0.149 | 2 |
| 46 | 4.316e-04 | 0.959 | 0.135 | 2 |
| 47 | 4.143e-04 | 0.960 | 0.138 | 2 |
| 48 | 3.978e-04 | 0.960 | 0.143 | 2 |
| 49 | 3.821e-04 | 0.961 | 0.152 | 2 |
| 50 | 3.598e-04 | 0.941 | 0.133 | 2 |
| 51 | 3.132e-04 | 0.871 | 0.151 | 2 |
| 52 | 2.476e-04 | 0.790 | 0.146 | 2 |
| 53 | 1.782e-04 | 0.720 | 0.134 | 2 |
| 54 | 1.190e-04 | 0.668 | 0.141 | 2 |
| 55 | 7.541e-05 | 0.634 | 0.140 | 2 |
| 56 | 4.634e-05 | 0.615 | 0.176 | 2 |
| 57 | 2.802e-05 | 0.605 | 0.145 | 2 |
| 58 | 1.684e-05 | 0.601 | 0.146 | 2 |
| 59 | 1.010e-05 | 0.600 | 0.086 | 1 |
| 60 | 6.072e-06 | 0.601 | 0.081 | 1 |
| 61 | 3.659e-06 | 0.603 | 0.077 | 1 |
| 62 | 2.211e-06 | 0.604 | 0.098 | 1 |
| 63 | 1.340e-06 | 0.606 | 0.081 | 1 |
| 64 | 8.141e-07 | 0.607 | 0.086 | 1 |
------------------------------------------------
Elapsed: 13.4159998894 seconds.
------------------------------------------------
UserWarning: c:\users\pablo\documents\github\dolo\dolo\numeric\optimize\newton.py:150 Did not converge
# now we compare the decision rules with low and high risk aversion
plot_decision_rule(model, mdr, 'l', 'b', i0=0, n_steps=n_steps, label='$b_t$ (bad)' )
plot_decision_rule(model, mdr, 'l', 'b', i0=1, n_steps=n_steps, label='$b_t$ (good)' )
plot_decision_rule(model, mdr_high_gamma, 'l', 'b', i0=0, n_steps=n_steps, label='$b_t$ (bad) [high gamma]' )
plot_decision_rule(model, mdr_high_gamma, 'l', 'b', i0=1, n_steps=n_steps, label='$b_t$ (good) [high gamma]' )
plot(df['l'], df['l'], linestyle='--', color='black')
plot(df['l'], -0.2*df['c'], linestyle='--', color='black')
legend(loc= 'upper left')
<matplotlib.legend.Legend at 0x192abac8>
RBC Tutorial¶
Solving the rbc model¶
This worksheet demonstrates how to solve the RBC model with the dolo library and how to generate impulse responses and stochastic simulations from the solution.
This notebook is distributed with dolo in examples\notebooks\ and was opened and run from that directory. The model file is in examples\global_models\.
First we import the dolo library.
%pylab inline
Populating the interactive namespace from numpy and matplotlib
from dolo import *
The RBC model is defined in a YAML file which we can read locally or pull off the web.
filename = ('https://raw.githubusercontent.com/EconForge/dolo'
'/master/examples/models/compat/rbc.yaml')
#filename='../models/rbc.yaml'
pcat(filename)
name: RBC

model_spec: fga

symbols:
    states: [z, k]
    controls: [i, n]
    auxiliaries: [c, rk, w]
    values: [V]
    shocks: [e_z]
    parameters: [beta, sigma, eta, chi, delta, alpha, rho, zbar, sig_z]

equations:
    arbitrage:
        - 1 = beta*(c/c(1))^(sigma)*(1-delta+rk(1))   | 0 <= i <= inf
        - w - chi*n^eta*c^sigma                       | 0 <= n <= inf
    transition:
        - z = (1-rho)*zbar + rho*z(-1) + e_z
        - k = (1-delta)*k(-1) + i(-1)
    auxiliary:
        - c = z*k^alpha*n^(1-alpha) - i
        - rk = alpha*z*(n/k)^(1-alpha)
        - w = (1-alpha)*z*(k/n)^(alpha)
    value:
        - V = log(c) + beta*V(1)

calibration:
    beta: 0.99
    phi: 1
    chi: w/c^sigma/n^eta
    delta: 0.025
    alpha: 0.33
    rho: 0.8
    sigma: 1
    eta: 1
    zbar: 1
    sig_z: 0.016
    z: zbar
    rk: 1/beta-1+delta
    w: (1-alpha)*z*(k/n)^(alpha)
    n: 0.33
    k: n/(rk/alpha)^(1/(1-alpha))
    i: delta*k
    c: z*k^alpha*n^(1-alpha) - i
    V: log(c)/(1-beta)

covariances: [ [ sig_z**2] ]

options:
    approximation_space:
        a: [ 1-2*sig_z, k*0.9 ]
        b: [ 1+2*sig_z, k*1.1 ]
        orders: [10, 50]
yaml_import(filename)
reads the YAML file and generates a model
object.
model = yaml_import(filename)
The model file already has values for steady-state variables stated in the calibration section so we can go ahead and check that they are correct by computing the model equations at the steady state.
model.residuals()
OrderedDict([('transition', array([ 0.00000000e+00, 2.50466314e-13])), ('arbitrage', array([ -1.01030295e-14, -3.78141962e-12])), ('auxiliary', array([ -3.28626015e-13, 7.63278329e-17, 4.48352466e-12])), ('value', array([ 7.81597009e-14]))])
Printing the model also lets us have a look at all the model equations and check that all residual errors are 0 at the steady state, but with less display precision.
print( model )
Model object:
------------
- name: "RBC"
- type: "fga"
- file: "https://raw.githubusercontent.com/EconForge/dolo/master/examples/models/compat/rbc.yaml"
- residuals:
transition
1 : 0.0000 : z = (1-rho)*zbar + rho*z(-1) + e_z
2 : 0.0000 : k = (1-delta)*k(-1) + i(-1)
arbitrage
1 : 0.0000 : 1 = beta*(c/c(1))**(sigma)*(1-delta+rk(1)) | 0 <= i <= inf
2 : 0.0000 : w - chi*n**eta*c**sigma | 0 <= n <= inf
auxiliary
1 : 0.0000 : c = z*k**alpha*n**(1-alpha) - i
2 : 0.0000 : rk = alpha*z*(n/k)**(1-alpha)
3 : 0.0000 : w = (1-alpha)*z*(k/n)**(alpha)
value
1 : 0.0000 : V = log(c) + beta*V(1)
Next we compute a solution to the model using a second order perturbation method (see the source for the approximate_controls function). The result is a decision rule object. By decision rule we refer to any callable object that maps states to decisions. This particular decision rule object is a TaylorExpansion (see the source for the TaylorExpansion class).
dr_pert = approximate_controls(model, order=2)
There are 2 eigenvalues greater than 1. Expected: 2.
We now compute the global solution (see the source for the time_iteration function). It returns a decision rule object of type SmolyakGrid (see the source for the SmolyakGrid class).
dr_global = time_iteration(model, pert_order=1, smolyak_order=3)
Decision rule¶
Here we plot optimal investment and labour for different levels of capital (see the source for the plot_decision_rule function).
bounds = [dr_global.smin[1], dr_global.smax[1]]
figsize(8,3.5)
subplot(121)
plot_decision_rule(model, dr_global, 'k', 'i', label='Global', bounds=bounds)
plot_decision_rule(model, dr_pert, 'k', 'i', label='Perturbation', bounds=bounds)
ylabel('i')
title('Investment')
legend()
subplot(122)
plot_decision_rule(model, dr_global, 'k', 'n', label='Global', bounds=bounds)
plot_decision_rule(model, dr_pert, 'k', 'n', label='Perturbation', bounds=bounds)
ylabel('n')
title('Labour')
legend()
tight_layout()
show()
It would seem, according to this, that second order perturbation does very well for the RBC model. We will revisit this issue more rigorously when we explore the deviations from the model’s arbitrage equations.
Let us repeat the calculation of investment decisions for various values of the depreciation rate, \(\delta\). Note that this is a comparative statics exercise, even though the models compared are dynamic.
original_delta=model.calibration_dict['delta']
drs = []
delta_values = linspace(0.01, 0.04,5)
for val in delta_values:
model.set_calibration(delta=val)
drs.append(approximate_controls(model, order=2))
figsize(5,3)
for i,dr in enumerate(drs):
plot_decision_rule(model, dr, 'k', 'i',
label='$\delta={}$'.format(delta_values[i]),
bounds=bounds)
ylabel('i')
title('Investment')
legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
show()
model.set_calibration(delta=original_delta)
There are 2 eigenvalues greater than 1. Expected: 2.
There are 2 eigenvalues greater than 1. Expected: 2.
There are 2 eigenvalues greater than 1. Expected: 2.
There are 2 eigenvalues greater than 1. Expected: 2.
There are 2 eigenvalues greater than 1. Expected: 2.
We find that more durable capital leads to higher steady-state investment and slows the rate of convergence for capital. The slopes are roughly the same, which implies that, relative to steady-state capital, investment responds more strongly at higher \(\delta\), in addition to the direct effect of depreciation.
Use the model to simulate¶
We will use the deterministic steady-state as a starting point.
s0 = model.calibration['states']
print(str(model.symbols['states'])+'='+str(s0))
['z', 'k']=[ 1. 9.35497829]
We also get the covariance matrix, just in case. This is a one-shock model, so all we have is the variance of \(e_z\).
distrib = model.get_distribution()
sigma2_ez = distrib.sigma
sigma2_ez
array([[ 0.000256]])
Impulse response functions¶
Consider a 10% shock on productivity.
s1 = s0.copy()
s1[0] *= 1.1
print(str(model.symbols['states'])+'='+str(s1))
['z', 'k']=[ 1.1 9.35497829]
The simulate function is used both to trace impulse response functions and to compute stochastic simulations. Choosing n_exp>=1 will result in that many “stochastic” simulations. With n_exp = 0, we get one single simulation without any stochastic shock (see the source for the simulate function). The output is a pandas DataFrame of size \(H \times n_v\), where \(n_v\) is the number of variables in the model and \(H\) is the number of dates.
irf = simulate(model, dr_global, s1, n_exp=0, horizon=40 )
print(irf.__class__)
print(irf.shape)
<class 'pandas.core.frame.DataFrame'>
(40, 7)
Let us plot the response of consumption and investment.
figsize(8,4)
subplot(221)
plot(irf['z'])
title('Productivity')
subplot(222)
plot(irf['i'])
title('Investment')
subplot(223)
plot(irf['n'])
title('Labour')
subplot(224)
plot(irf['c'])
title('Consumption')
tight_layout()
Note that the plotting is made using the wonderful
matplotlib
library. Read the online
tutorials to learn how
to customize the plots to your needs (e.g., using
latex in annotations). If
instead you would like to produce charts in Matlab, you can easily
export the impulse response functions, or any other matrix, to a
.mat
file.
irf_array = array( irf )
import scipy.io
scipy.io.savemat("export.mat", {'table': irf_array} )
Stochastic simulations¶
Now we run 1000 random simulations. The result is an array of size \(H \times n_{exp} \times n_v\), where \(H\) is the number of dates, \(n_{exp}\) the number of simulations, and \(n_v\) the number of variables.
sim = simulate(model, dr_global, s0, n_exp=1000, horizon=40 )
print(sim.shape)
(40, 1000, 7)
model.variables
('z', 'k', 'i', 'n', 'c', 'rk', 'w')
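To keep this indexing convention straight before plotting, here is a toy array with the same \((H, n_{exp}, n_v)\) layout (pure numpy, no dolo needed; the shapes mirror the simulation above): sim[:, j, i] is the time path of variable i in simulation j.

```python
import numpy as np

H, n_exp, n_v = 40, 1000, 7               # dates, simulations, variables
toy = np.zeros((H, n_exp, n_v))
toy[:, 3, 2] = np.linspace(0.0, 1.0, H)   # path of variable 2 in simulation 3

path = toy[:, 3, 2]     # one variable, one simulation: a length-H vector
cross = toy[-1, :, 2]   # variable 2 at the last date, across all simulations
```

The plotting loop below uses exactly the sim[:, i, i_v] slice pattern.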
We plot the responses of consumption, investment and labour to the stochastic path of productivity.
i_z = model.variables.index('z')
i_i = model.variables.index('i')
i_n = model.variables.index('n')
i_c = model.variables.index('c')
figsize(8,4)
for i in range(1000):
subplot(221)
plot(sim[:, i, i_z], color='red', alpha=0.1)
subplot(222)
plot(sim[:, i, i_i], color='red', alpha=0.1)
subplot(223)
plot(sim[:, i, i_n], color='red', alpha=0.1)
subplot(224)
plot(sim[:, i, i_c], color='red', alpha=0.1)
subplot(221)
title('Productivity')
subplot(222)
title('Investment')
subplot(223)
title('Labour')
subplot(224)
title('Consumption')
tight_layout()
We find that while the distribution of investment and labour converges quickly to the ergodic distribution, that of consumption takes noticeably longer. This is indicative of higher persistence in consumption, which in turn could be explained by permanent income considerations.
Descriptive statistics¶
The success of the RBC model is in being able to mimic patterns in the descriptive statistics of the real economy. Let us compute some of these descriptive statistics from our sample of stochastic simulations. First we compute growth rates:
dsim = log(sim[1:,:,:]/sim[:-1,:,:])
print(dsim.shape)
(39, 1000, 7)
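As a quick sanity check on the log-difference convention (not part of the original notebook): for small changes, \(\log(x_{t+1}/x_t)\) is approximately the percentage growth rate.

```python
import numpy as np

x = np.array([100.0, 102.0, 104.04])   # a series growing 2% per period
g = np.log(x[1:] / x[:-1])             # log growth rates, one per transition
# g is constant and close to 0.02, the true growth rate
```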
Then we compute the volatility of growth rates for each simulation:
volat = dsim.std(axis=0)
print(volat.shape)
(1000, 7)
Then we compute the mean and a confidence interval for each variable. In the generated table the first column contains the standard deviations of growth rates. The second and third columns contain the lower and upper bounds of the 95% confidence intervals, respectively.
table = column_stack([
volat.mean(axis=0),
volat.mean(axis=0)-1.96*volat.std(axis=0),
volat.mean(axis=0)+1.96*volat.std(axis=0) ])
table
array([[ 0.01667413, 0.01280193, 0.02054634],
[ 0.00296542, 0.00175695, 0.00417388],
[ 0.09196494, 0.06834055, 0.11558933],
[ 0.01028367, 0.00788583, 0.01268152],
[ 0.00313835, 0.00236476, 0.00391193],
[ 0.02426923, 0.01861151, 0.02992694],
[ 0.01303212, 0.01002955, 0.01603469]])
We can use the pandas library to present the results in a nice table.
model.variables
('z', 'k', 'i', 'n', 'c', 'rk', 'w')
import pandas
df = pandas.DataFrame(table, index=model.variables,
columns=['Growth rate std.',
'Lower 95% bound',
'Upper 95% bound' ])
pandas.set_option('precision', 4)
df
|    | Growth rate std. | Lower 95% bound | Upper 95% bound |
|----|------------------|-----------------|-----------------|
| z  | 0.017            | 0.013           | 0.021           |
| k  | 0.003            | 0.002           | 0.004           |
| i  | 0.092            | 0.068           | 0.116           |
| n  | 0.010            | 0.008           | 0.013           |
| c  | 0.003            | 0.002           | 0.004           |
| rk | 0.024            | 0.019           | 0.030           |
| w  | 0.013            | 0.010           | 0.016           |
Error measures¶
It is always important to get a handle on the accuracy of the solution. The omega function computes and aggregates the errors for the model’s arbitrage section equations. For the RBC model these are the investment demand and labor supply equations. For each equation it reports the maximum error over the domain and the mean error using ergodic distribution weights (see the source of the omega function).
from dolo.algos.dtcscc.accuracy import omega
print("Perturbation solution")
err_pert = omega(model, dr_pert)
err_pert
Perturbation solution
Euler Errors:
- max_errors : [ 0.00019241 0.00045583]
- ergodic : [ 1.37473238e-04 1.69920101e-05]
print("Global solution")
err_global=omega(model, dr_global)
err_global
Global solution
Euler Errors:
- max_errors : [ 1.38008607e-04 2.28991817e-06]
- ergodic : [ 1.32367122e-04 6.62075500e-07]
The result of omega is a subclass of dict. omega fills that dict with some useful information that the default print does not reveal:
err_pert.keys()
['domain', 'errors', 'densities', 'ergodic', 'max_errors', 'bounds']
In particular, the domain field contains information, such as bounds and shape, that we can use to plot the spatial pattern of errors.
a = err_pert['domain'].a
b = err_pert['domain'].b
orders = err_pert['domain'].orders
errors = concatenate((err_pert['errors'].reshape( orders.tolist()+[-1] ),
err_global['errors'].reshape( orders.tolist()+[-1] )),
2)
figure(figsize=(8,6))
titles=["Investment demand perturbation errors",
"Labor supply perturbation errors",
"Investment demand global errors",
"Labor supply global errors"]
for i in range(4):
subplot(2,2,i+1)
imgplot = imshow(errors[:,:,i], origin='lower',
extent=( a[0], b[0], a[1], b[1]), aspect='auto')
imgplot.set_clim(0,3e-4)
colorbar()
xlabel('z')
ylabel('k')
title(titles[i])
tight_layout()
Online examples¶
How to get a global solution of the RBC model.
How to compute the response to a tax under perfect foresight.
Redo figures 11.9.1, 11.9.2 and 11.9.3 from Ljungqvist and Sargent's RMT4 (perfect foresight exercise contributed by Spencer Lyon).