## page was renamed from TestingProgram
<>

= Status of test bench =

The program performs the following tests:

 1. Accuracy on a sphere in the state space
 1. Accuracy on a stochastic simulation
 1. Den Haan and Marcet (DHM) statistics

The following solutions are implemented:

 * Smolyak-collocation method of Krueger, Kubler and Malin (SMOL)
 * Perturbation method at 2nd order of Kim, Kim and Kollmann (PER2)
 * Perturbation method at 1st order, i.e. log-linear approximation (PER1)
 * Monomial-rule Galerkin method of Pichler (MRGAL)
 * Cluster grid algorithm of Maliar, Maliar and Judd (CGA)
 * Stochastic simulation algorithm of Maliar, Maliar and Judd (SSA1)

= Installation =

The testing program is designed to run on the two following platforms: GNU/Linux and Windows/Cygwin.

You need the following software to run the program:

 * GNU C++ compiler (g++)
 * GNU Fortran 95 compiler (gfortran) (now available in Cygwin)
 * development files for the GNU Scientific Library (GSL), which should be in a package called {{{libgsl0-dev}}}
 * GNU make
 * MATLAB

Download the source of the testing program: [[attachment:JV2010-test-bench-1.0.tar.gz]]. Unpack it and enter the subdirectory:

{{{
tar xvf JV2010-test-bench-1.0.tar.gz
cd JV2010-test-bench-1.0
}}}

Then configure the package. Under GNU/Linux, type:

{{{
./configure --with-matlab=/usr/local/matlab76
}}}

(where you should give the right MATLAB installation directory)

Under Windows/Cygwin, type:

{{{
./configure --with-matlab=/cygdrive/c/Progra~1/MATLAB/R2008b
}}}

(note that you should use "Progra~1" instead of "Program Files" when giving the path to the MATLAB directory, since spaces in pathnames are not supported)

Finally, compile the testing program by typing:

{{{
make
}}}

This should create a program called {{{tester}}}.
= Running the program =

Under Cygwin, you first need to add the MATLAB DLLs to the path, with something like:

{{{
export PATH=$PATH:/cygdrive/c/Progra~1/MATLAB/R2008b/win32
}}}

The testing program is run with:

{{{
./tester
}}}

It is important to run it from the directory where it was built.

The program performs the tests for each of the 30 specifications, and, for each specification, for each solution method. For accuracy tests 1 and 2, it displays the relative error for every equation. For accuracy test 3, and for all Euler equations (first separately, then together), it computes the DHM statistic several times, and displays the fraction of times that the statistic falls outside the bilateral 5% confidence interval.

<<Anchor(options)>>
The program accepts several options:

 * {{{-a}}}: use an alternative specification for the aggregate resource constraint error, as suggested by Benjamin Malin (see [[#arcresid|below]])
 * {{{-A}}}: use an alternative specification for the aggregate resource constraint error, as suggested by Paul Pichler (see [[#arcresid|below]])
 * {{{-b INTEGER}}}: start computations at a given specification (designated by an integer between 1 and 30); indices 1 to 5 are for A1 by increasing number of countries, indices 6 to 9 are for A2, ...
 * {{{-B INTEGER}}}: only do computations for a given specification (same numbering scheme as for the {{{-b}}} option)
 * {{{-d}}}: for each model and each participant solution, create a CSV file containing simulated data for accuracy tests 1 and 2 (see [[#csv|below]])
 * {{{-g}}}: always use product 4-point Gauss-Hermite quadrature for numerical integration (see [[#integration|below]])
 * {{{-m}}}: always use the monomial degree 5 rule for numerical integration (see [[#integration|below]])
 * {{{-q}}}: always use quasi-Monte Carlo for numerical integration (see [[#integration|below]])
 * {{{-r INTEGER}}}: seed for the random number generator used in tests 2 and 3 (default: 0)
 * {{{-s NAME}}}: compute tests only for solution {{{NAME}}}. Possible values are {{{per1}}}, {{{per2}}}, {{{smol}}}, {{{mrgal}}}, {{{cga}}} and {{{ssa1}}}
 * {{{-t INTEGER}}}: compute only the specified accuracy test. Possible values are 1, 2 or 3
 * {{{-u INTEGER}}}: number of points to be used in test 1 (default: 1,000)
 * {{{-v INTEGER}}}: number of simulations to be used in test 2 (default: 10,000)
 * {{{-w INTEGER}}}: number of simulations to be used in test 3 (default: 10,000)

<<Anchor(csv)>>
= CSV files =

== Test 1 ==

The first <> columns (where <> is the number of countries) contain the state variable for the capital stock (the stock accumulated at the end of the previous period). The next <> columns contain the current endogenous variables as simulated by the participant's solution (note that these include the state variable for the TFP level). The last <> columns are the residuals of all equations.

== Test 2 ==

The first <> columns contain the current endogenous variables (note that capital follows the end-of-period stock timing convention). The next <> columns contain the simulated shocks (the last one is the global shock). The last <> columns are the residuals of all equations. Note that the first line contains initial values, obtained after dropping the first 500 simulated periods.

<<Anchor(integration)>>
= Numerical integration =

Three methods of numerical integration are implemented in the testing program:

 * product 4-point Gauss-Hermite quadrature
 * the monomial degree 5 rule, as described in Judd (1998), "Numerical Methods in Economics", p. 275, eq. 7.5.11
 * quasi-Monte Carlo using 1000 points; the sequence of points is drawn from a Niederreiter generator

Note that for <> countries, the dimension of the integration problem for the Euler errors is <>. The default behaviour of the program is to use Gauss-Hermite quadrature up to dimension 6 (i.e. when <>), and the monomial degree 5 rule above that. It is possible to alter this behaviour by forcing the program to use a given integration method (see [[#options|options]]).
<<Anchor(arcresid)>>
= Aggregate resource constraint residual =

There are three possible ways of computing the aggregate resource constraint residual. The default, used in the published papers, is:

<>

(where <> stands for the capital adjustment cost)

An alternative, triggered by the {{{-a}}} option, and suggested by Benjamin Malin, is:

<>

Another alternative, triggered by the {{{-A}}} option, and suggested by Paul Pichler, is:

<>

Benjamin Malin's version mechanically gives lower approximation errors in absolute value, since it has the greatest denominator.

= Code overview =

The class {{{ModelSpec}}} implements the abstract representation of a given specification, and has 8 subclasses ({{{ModelSpecA1}}}, {{{ModelSpecA2}}}, ...) corresponding to the 8 models. An instance of such a class contains all the parameters of the model and the logic for computing the relative errors of all equations, given the values of the variables and the shocks.

Note that in class {{{ModelSpec}}}, the convention is to work with a vector of variables <> of length <> (where <> is the number of countries). The vector <> contains the consumption levels <>, labor <>, investment <>, capital (end-of-period stock) <>, technology levels <> and <> (the Lagrange multiplier of the aggregate budget constraint); see {{{ModelSpec.hh}}} for more details. Shocks are in a vector <> of size <> (idiosyncratic shocks plus the global shock). Relative errors are computed using <>, <>, <> and <>.

The forward-looking part of the Euler equation is computed separately by {{{ModelSpec::forward_part()}}}, so that it can be integrated over, and then fed back to {{{ModelSpec::errors()}}} (which computes the <> errors).

Solution methods are implemented via subclasses of {{{ModelSolution}}} (for an example, see {{{smol/SmolSolution.hh}}} and {{{smol/SmolSolution.cc}}}). The purpose of these classes is to provide a uniform wrapper around the participants' solutions. The main method of these classes is the policy function, which provides <> given <> and <>.
The three tests are implemented in the class {{{SolutionTester}}}. The main function is in {{{tester.cc}}}: it constructs the abstract representations of the 30 model specifications, and then performs the tests for all solution methods.

== SMOL solution ==

Source for the SMOL solution is in the {{{smol/}}} subdirectory. It consists of a non-linear solver (all the Fortran files in {{{smol/}}}, combined into {{{libhybrid.a}}} by the Makefile). Each of the {{{TestA*}}} subdirectories contains three Fortran 90 files, and a CSV file with Chebyshev polynomial coefficients for each number of countries. Since SMOL's code makes use of global variables, it was necessary to create a dynamically loadable object for each specification, and to load the objects on the fly (see class {{{SmolSolution}}}). Note that since SMOL doesn't provide a value for <>, the testing program uses the value of <> (the marginal utility of consumption of the first country) as a replacement.

== PER1 and PER2 solutions ==

Data for the PER1 and PER2 solutions are in the {{{per/}}} subdirectory. There is one MAT-file for each specification, which contains the coefficients of the approximated policy function up to 2nd order. The class {{{PerSolution}}} reads the files, and computes the first- or second-order approximation.

Note that the simulated time paths computed by the testing program do not use the technique of "pruning" the 3rd and higher order terms (as described in Kim, Kim, Schaumburg and Sims (2007)), while "pruning" is used in the Kim, Kim and Kollmann paper. This is simply because the testing program uses a generic simulation routine, based only on the one-period-ahead policy function provided by the participants; it does not exploit the peculiarities of a given solution.

The log-linear (PER1) solution is implemented by the same class: it only consists in shutting down the second-order terms in the approximation (using the {{{first_order}}} argument in the class constructor).
== MRGAL solution ==

Data for the MRGAL solution is in the {{{mrgal/}}} subdirectory. There is one MAT-file for each specification. The class {{{MRGalSolution}}} reads the MAT-files, and computes the policy function according to the MATLAB files provided by Pichler.

== CGA and SSA1 solutions ==

Data for the two solutions provided by Maliar, Maliar and Judd are in the {{{mmj/}}} subdirectory. The simulation code is the same for the two methods: only the data, provided in MAT-files, differ. The classes {{{CGASolution}}} and {{{SSA1Solution}}} read the MAT-files, and compute the policy function according to the MATLAB files. As with SMOL, these solutions don't provide a value for <>: the value used here is the mean of the weighted marginal utilities of consumption.