<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0120-4483</journal-id>
<journal-title><![CDATA[Ensayos sobre POLÍTICA ECONÓMICA]]></journal-title>
<abbrev-journal-title><![CDATA[Ens. polit. econ.]]></abbrev-journal-title>
<issn>0120-4483</issn>
<publisher>
<publisher-name><![CDATA[Banco de la República]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0120-44832011000300008</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[OVERCOMING THE FORECASTING LIMITATIONS OF FORWARD-LOOKING THEORY BASED MODELS]]></article-title>
<article-title xml:lang="es"><![CDATA[SUPERANDO LOS LÍMITES PREDICTIVOS DE LOS MODELOS BASADOS EN LA TEORÍA CON VISIÓN DE FUTURO]]></article-title>
<article-title xml:lang="pt"><![CDATA[SUPERANDO OS LIMITES PREDITIVOS DOS MODELOS BASEADOS NA TEORIA COM VISÃO DE FUTURO]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[González]]></surname>
<given-names><![CDATA[Andrés]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Mahadeva]]></surname>
<given-names><![CDATA[Lavan]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Rodríguez]]></surname>
<given-names><![CDATA[Diego]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Rojas]]></surname>
<given-names><![CDATA[Luis]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Departamento de Modelos Macroeconómicos, Banco de la República]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
</aff>
<aff id="A02">
<institution><![CDATA[Oxford Institute for Energy Studies, Oxford University]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>12</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>12</month>
<year>2011</year>
</pub-date>
<volume>29</volume>
<numero>66</numero>
<fpage>246</fpage>
<lpage>294</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_arttext&amp;pid=S0120-44832011000300008&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_abstract&amp;pid=S0120-44832011000300008&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.org.co/scielo.php?script=sci_pdf&amp;pid=S0120-44832011000300008&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="es"><p><![CDATA[Los modelos teóricamente consistentes deben mantenerse modestos para ser útiles. Si su fin es pronosticar eficazmente, tienen que basarse en datos ruidosos, irregulares, no modelados y que se traten del futuro. Los agentes también pueden usar estos datos para formular sus propias expectativas. En este artículo ilustramos un esquema para condicionar de manera simultánea los pronósticos y expectativas internas de los modelos DSGE lineales con visión de futuro, con los datos a través de un filtro de Kalman de intervalos fijos suavizado. También ensayamos con algunos diagnósticos de este método; específicamente, las descomposiciones que revelan cuando una predicción condicionada sobre un juego de variables implica los cálculos de otras variables que son inconsistentes con los precedentes económicos]]></p></abstract>
<abstract abstract-type="short" xml:lang="en"><p><![CDATA[Theory-consistent models have to be kept small to be tractable. If they are to forecast well, they have to condition on data that are unmodelled, noisy, patchy and about the future. Agents can also use these data to form their own expectations. In this paper we illustrate a scheme for jointly conditioning the forecasts and internal expectations of linearised forward-looking DSGE models on data through a Kalman Filter fixed-interval smoother. We also trial some diagnostics of this approach, in particular decompositions that reveal when a forecast conditioned on one set of variables implies estimates of other variables which are inconsistent with economic priors.]]></p></abstract>
<abstract abstract-type="short" xml:lang="pt"><p><![CDATA[Os modelos teoricamente consistentes devem ser mantidos modestos para serem úteis. Se a sua finalidade é prognosticar eficazmente, eles têm que estar baseados em dados barulhentos, irregulares, não modelados e que se refiram ao futuro. Os agentes também podem utilizar estes dados para formular as suas próprias expectativas. Neste artigo, ilustramos um esquema para condicionar, de maneira simultânea, os prognósticos e expectativas internas dos modelos DSGE lineares com visão de futuro, com os dados através de um filtro de Kalman de intervalos fixos suavizado. Também tentamos com alguns diagnósticos deste método; especificamente, as decomposições que revelam quando uma predição condicionada sobre um jogo de variáveis implica os cálculos de outras variáveis que são inconsistentes com os precedentes econômicos.]]></p></abstract>
<kwd-group>
<kwd lng="es"><![CDATA[predicción condicional]]></kwd>
<kwd lng="es"><![CDATA[DSGE]]></kwd>
<kwd lng="es"><![CDATA[filtro de Kalman]]></kwd>
<kwd lng="en"><![CDATA[Conditional forecast]]></kwd>
<kwd lng="en"><![CDATA[DSGE]]></kwd>
<kwd lng="en"><![CDATA[Kalman filter]]></kwd>
<kwd lng="pt"><![CDATA[Predição condicional]]></kwd>
<kwd lng="pt"><![CDATA[DSGE]]></kwd>
<kwd lng="pt"><![CDATA[filtro de Kalman]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[<font face="verdana" size="2"> <p align="center"><b><font size="4">OVERCOMING THE FORECASTING LIMITATIONS OF FORWARD-LOOKING THEORY BASED MODELS</font></b></p> <p align="center"><b><font size="3">SUPERANDO LOS L&Iacute;MITES PREDICTIVOS DE LOS MODELOS BASADOS EN LA TEOR&Iacute;A CON VISI&Oacute;N DE FUTURO*</font></b></p> <p align="center"><b><font size="3">SUPERANDO OS LIMITES PREDITIVOS DOS MODELOS BASEADOS NA TEORIA COM VIS&Atilde;O DE FUTURO</font></b></p> <p>Andr&eacute;s Gonz&aacute;lez<br /> Lavan Mahadeva<br /> Diego Rodr&iacute;guez<br /> Luis Rojas</p> <p>*Este art&iacute;culo expresa exclusivamente las opiniones de los autores y no las del Banco de la Rep&uacute;blica ni de su Junta Directiva.</p> <p>Los autores son respectivamente: director, Departamento de Modelos Macroecon&oacute;micos del Banco de la Rep&uacute;blica; Senior Research Fellow, Oxford Institute for Energy Studies, Oxford University; jefe de Modelos Macroecon&oacute;micos, Departamento de Modelos Macroecon&oacute;micos del Banco de la Rep&uacute;blica y estudiante del Doctorado en Econom&iacute;a del European University Institute.</p> <p><b>Correo electr&oacute;nico:</b> <A href="mailto:agonzago@banrep.gov.co">agonzago@banrep.gov.co</A></p> <p><b>Documento recibido</b>: 25 de mayo de 2011; versi&oacute;n final aceptada: 8 de noviembre de 2011.</p> <hr size="1"> <p>Los modelos te&oacute;ricamente consistentes deben mantenerse modestos para ser &uacute;tiles. Si su fin es pronosticar eficazmente, tienen que basarse en datos ruidosos, irregulares, no modelados y que se traten del futuro. Los agentes tambi&eacute;n pueden usar estos datos para formular sus propias expectativas. 
En este art&iacute;culo ilustramos      un esquema para condicionar de manera simult&aacute;nea     los pron&oacute;sticos y expectativas internas de los modelos     DSGE lineales con visi&oacute;n de futuro, con los datos a trav&eacute;s de un filtro de Kalman de intervalos fijos suavizado.</p>       ]]></body>
<body><![CDATA[<p>Tambi&eacute;n ensayamos con algunos diagn&oacute;sticos de este m&eacute;todo; espec&iacute;ficamente, las descomposiciones que revelan cuando una predicci&oacute;n condicionada sobre un juego de variables implica los c&aacute;lculos de otras variables que son inconsistentes con los precedentes econ&oacute;micos.</p> <p><b>Clasificaci&oacute;n JEL:</b> F47, E01, C61.</p> <p><b>Palabras clave:</b> predicci&oacute;n condicional, DSGE, filtro de Kalman.</p> <hr size="1"> <p>Theory-consistent models have to be kept small to be tractable. If they are to forecast well, they have to condition on data that are unmodelled, noisy, patchy and about the future. Agents can also use these data to form their own expectations. In this paper we illustrate a scheme for jointly conditioning the forecasts and internal expectations of linearised forward-looking DSGE models on data through a Kalman Filter fixed-interval smoother. We also trial some diagnostics of this approach, in particular decompositions that reveal when a forecast conditioned on one set of variables implies estimates of other variables which are inconsistent with economic priors.</p> <p><b>JEL classification:</b> F47, E01, C61.</p> <p><b>Keywords:</b> Conditional forecast, DSGE, Kalman filter.</p> <hr size="1" /> <p>Os modelos teoricamente consistentes devem ser mantidos modestos para serem &uacute;teis. Se a sua finalidade &eacute; prognosticar eficazmente, eles t&ecirc;m que estar baseados em dados barulhentos, irregulares, n&atilde;o modelados e que se refiram ao futuro. Os agentes tamb&eacute;m podem utilizar estes dados para formular as suas pr&oacute;prias expectativas. 
Neste artigo, ilustramos 	  um esquema para condicionar, de maneira 	  simult&acirc;nea, os progn&oacute;sticos e expectativas internas 	  dos modelos DSGE lineares com vis&atilde;o de futuro, 	  com os dados atrav&eacute;s de um filtro de Kalman de 	  intervalos fixos suavizado. Tamb&eacute;m tentamos com 	  alguns diagn&oacute;sticos deste m&eacute;todo; especificamente, 	  as decomposi&ccedil;&otilde;es que revelam quando uma predi&ccedil;&atilde;o 	  condicionada sobre um jogo de vari&aacute;veis implica os 	  c&aacute;lculos de outras vari&aacute;veis que s&atilde;o inconsistentes 	  com os precedentes econ&ocirc;micos.	  </p> 	    <p><b>Classifica&ccedil;&atilde;o JEL:</b> F47, E01, C61.	  </p> 	    <p><b>Palavras-chave:</b> Predi&ccedil;&atilde;o condicional, DSGE, filtro 	  de Kalman. </p> 	<HR size="1" />       <p><b>I. INTRODUCTION</b></p>     ]]></body>
<body><![CDATA[<p>  Forecasting is a very different exercise from simulating, especially when forecasts   are used to guide and explain policy. A policy forecast should relate to all the data   that features in the public debate, even if that comes in an awkward variety of shapes   and forms. After all, agents in the real world could also be reacting to this data, after   taking account of their measurement error.</p>     <p>The need to reach out and connect with the relevant data is especially important     when the forecast is based on a dynamic stochastic general equilibrium (henceforth,     DSGE) model. DSGE models are distinguished by having a greater theoretical input.      Indeed, this explains their appeal to policy; with more theory it is easier to use the     forecast as a basis for discussion and explanation. Yet, more theory often comes at a     cost in terms of worse forecast performance because it is harder to match more rigid     theoretical concepts to available data and extending the model comes at great cost.      But, if any policy model ultimately does not link to the real world, it cannot be of much use as a policy tool.</p>     <p>    There are two broad categories of reasons why we should expect the useful data set     for a policy forecast to be awkward:</p>       <p>1) Real world data is unbalanced.</p>       <blockquote>         <p>a) Real world data comes with different release lags: we have more up to        date information on some series than on other series.<br />       b) Real world data comes with different frequencies. Some relevant economic        information is only available annually whilst some is in real        time.<br />       c) There may be useful off-model information on the expected current        and future values of exogenous and endogenous variables from other       sources which cannot be incorporated into the model system, at least        not without making the model cumbersome. 
Sometimes this information is patchy: we have information on values at some future point. But sometimes the information may be complete for the whole horizon, as would be the case if we use the forecasts from other sources for the exogenous variables.</p> </blockquote> <p>2) Real world data is subject to time-varying measurement uncertainty. This also has many aspects.</p> <blockquote> <p>a) Real world data is imperfectly measured. For this reason published data are often revised.<br /> b) Real world data that is available to agents may differ from the economic concept that matters to their decisions.<br /> c) Forecasts from other models and judgement also come with a measurement error.</p> </blockquote> Graph 1 shows us some examples of the kind of awkward data that one can expect to deal with in the real world. Assume we are at time <i>t</i> and planning to forecast up until time <i>T</i>.<br /> <blockquote> ]]></body>
<body><![CDATA[<p>&bull; First, we have series, such as employment, which come to us after a longer delay.<br /> &bull; In contrast, series such as CPI are very up to date. And then, often it is the case that a combination of small monthly models &mdash;monthly data on some prices and even information from the institutions which set regulated prices&mdash; can give a decent forecast of consumer prices into the next quarter.<br /> &bull; Some data is only available annually. For example, in Colombia the only national salary series can be constructed from the income side of annual national accounts.<br /> &bull; Then, there is information on what can happen quite far ahead. The government can preannounce its VAT plans, which will have a first-round effect on inflation. In monetary policy conditioning scenarios, the central bank credibly announces its interest rate path (Las&eacute;en and Svensson, 2011).<br /> &bull; Also, we have implied forecasts from financial markets data. For example, removing risk premia and other irrelevant effects from a yield curve gives us data on what the risk-free expectation of future interest rates may be. Some other important special cases of this type of information would be world interest rates, world commodity prices and expected monetary policy rates. The forecast should then be conditioned on the useful information contained in this data.<br /> &bull; Finally, there are forecasts from other models. A good example is that of remittances. Using information on migration trends, exchange rate movements, and some disaggregated capital flow data, a specialist can come up with a good forecast for remittances, or at least a forecast that is better than one that a DSGE model can generate internally. Population growth and relative
Population growth and relative 	    food prices are two other examples of variables that might also be best 	    forecast separately. Similarly, forecasts for the GDP of important trading 	    partners should probably come from forecasters in those countries or from<br /> 	    international institutions, such as the IMF, which are more capable of forecasting 	    those series.</p> </blockquote> 	    <p align="center" > <img src="img/revistas/espe/v29n66/v29n66a07g01.jpg"></p>       <p>The contention of this paper is that a model can only provide both decent predictions     and useful explanations     if it takes account of the real world data set. Obviously, if     there is valuable information is in this awkward data set then the better forecasts     should not ignore it. But more subtly, if the agents whose behaviour we are trying     to model might be using this type of information when they form expectations, we     would also need to mimic them if we are to expect to pick up their behaviour. For     example, if the data is noisy and is infected by short-term movements which have     nothing to do with the fundamentals, agents will ignore some of the movements in the data.. So, then should the forecast also do so.</p>       <p>Through the method described in this paper, a forecast can bring this rich but     awkward information to bear in forming a policy forecast with a forward-looking     DSGE model. The basic idea is to first solve the model for rational expectations     under the assumption that the data up until the end of the forecast horizon is perfectly     known. Then, in a second stage, the data uncertainty problem is wrapped around     these solutions which are the state equations of a Kalman filter. In Kalman filter     terminology, the solution is fixed-interval smoothing.</p>       <p>    This method has advantages over existing strategies.</p>       <p>    First it allows for measurement error of future data. 
If that future data consists of forecasts produced by other models, then this model's forecast should take account of the external models' errors. If that future data were instead announcements or plans made by external institutions, this method allows for the important possibility of imperfect credibility by incorporating that as measurement error. For example, a preannounced tax change may not be fully credible.</p> <p>Another important special case is financial market data. Financial market data is noisy in the sense that the price the data measures diverges from the economic concept that matters to agents. If agents have a longer holding horizon than financial market participants, they would not react to short-term, reversible movements in financial data. Not to adjust for this would mean that forecasts might bounce around with financial data unrealistically. An influential paper by Lettau and Ludvigson (2004) emphasises that movements in short-term financial market data will be smoothed by agents, and this should also matter when policy modelers assess their impact on macroeconomic variables.</p> <p>Policy forecasters commonly smooth financial market data by imposing prior views on the initial values and forecasts of financial variables; these are the controversial constant exchange rate and interest rate assumptions. Economic shocks are then made endogenous to make the model's solutions comply with those priors. Our proposal is an alternative way of attaining the same result, and one which does not involve rewriting the economic model to make structural shocks endogenous, because those deviations are now classified as data noise terms.</p> <p>A second advantage of this method is that the model does not need to be rewritten each time the shape of the data of each given series changes. 
One only has to fill     the parts of the data set where data exists and put blanks (NAs!) in where there is     nothing available. With this method, one can imagine how the task of building the     data set and that of maintaining the forecasting model can become separate and hence, carried out at different intervals and by different groups of specialists.</p>       <p>    Third, we can be much more imaginative with what data we use with this method.     In this paper, we show how the measurement equations can be adapted to push the     forecast towards where there is interesting information without having to extend the     economic part of the model. In this way, we can exploit the interesting information     in money data without making that part of the endogenous core of the model, for     example. We can allow for the peculiarities of the national accounts data, and especially     what we know about its revisions policy. We can bring in information from surveys that we know are important in monetary policy decisions.</p>       ]]></body>
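<body><![CDATA[<p>To make the mechanics concrete: with the solved model as the state equation, conditioning on this awkward data set amounts to running a fixed-interval smoother over a series in which unavailable entries are simply marked as missing. The following is a minimal scalar sketch, not the paper's multivariate implementation; the parameter values, the toy series and the placement of the gaps are all invented for illustration.</p>

```python
import numpy as np

def kalman_smooth(y, F, Q, H, R, x0, P0):
    # Forward Kalman filter plus a Rauch-Tung-Striebel backward pass,
    # i.e. fixed-interval smoothing, for a scalar state.  Entries of y
    # equal to np.nan play the role of the blanks (NAs) in the text:
    # the update step is simply skipped where no data exists.
    T = len(y)
    xp, Pp = np.zeros(T), np.zeros(T)   # one-step-ahead predictions
    xf, Pf = np.zeros(T), np.zeros(T)   # filtered estimates
    x, P = x0, P0
    for t in range(T):
        x, P = F * x, F * P * F + Q     # predict
        xp[t], Pp[t] = x, P
        if not np.isnan(y[t]):          # update only where data exists
            K = P * H / (H * P * H + R)
            x = x + K * (y[t] - H * x)
            P = (1.0 - K * H) * P
        xf[t], Pf[t] = x, P
    xs = xf.copy()                      # backward smoothing pass
    for t in range(T - 2, -1, -1):
        J = Pf[t] * F / Pp[t + 1]
        xs[t] = xf[t] + J * (xs[t + 1] - xp[t + 1])
    return xs

# Invented unbalanced data set: a release gap in the middle of the
# sample and one noisy "future" observation at the end of the horizon.
y = np.array([1.0, 0.8, np.nan, np.nan, 0.3, np.nan, np.nan, 0.1])
smoothed = kalman_smooth(y, F=0.9, Q=0.05, H=1.0, R=0.01, x0=0.0, P0=1.0)
```

<p>The smoother fills the release gaps and lets the noisy end-of-horizon observation pull the whole path, which is the behaviour the conditioning scheme exploits.</p>]]></body>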
<body><![CDATA[<p>Another advantage of this approach is that it allows us access to the whole toolkit that comes with the Kalman filter. We show how we can derive decompositions of forecasts according to the contributions of data, and not just according to the contributions of economic shocks, as is standard. In fact, we can show how estimates of the contributions of the shocks depend on data. Going further, we also show how we can try to spot where forecasts will not be well identified in terms of shocks. The information content of different series for the forecast can be measured. And then, retrospectively, we can use this technology to compare the forecasts across different vintages of data to see if policy mistakes were due to data mismeasurement, as suggested by Orphanides (2001) and Boragan Aruoba (2004). This means that we can present the differences between forecasts across policy rounds in terms of the contribution of news in the data. This incremental way of presenting the forecast to a busy Monetary Policy Committee is at least more efficient. All these outputs help us to integrate the model's forecast into the central bank's policy decisions.</p> <p>This paper belongs to a branch of the forecasting literature that addresses the problem that it is simply not viable for one model to incorporate all useful information. An important paper by Kalchbrenner, Tinsley, Berry, and Garrett (1977) formalises how to bring in auxiliary information of a different frequency and with patchy observations into a backward-looking forecasting model &mdash;model-pooling. Papers by Leeper and Zha (2003) and more recently Robertson, Tallman, and Whiteman (2005) and Andersson, Palmqvist, and Waggoner (2010) have implemented conditioning in VAR models. 
Working with a forward-looking DSGE model, Monti (2010) demonstrates how to pool the model forecast with judgemental forecasts, and Bene&scaron;, Binning, and Lees (2008) present tests of how plausible these pooled forecasts are. Schorfheide et al. (2011) discuss appending forecasts for non-modelled variables onto a DSGE model. But these papers do not consider how the agents in the DSGE model might themselves be using future information.</p> <p>This crucial step was taken in a recent paper by Maih (2010) &mdash;the closest to our work. Maih allows agents to incorporate uncertain future data in forming expectations and considers conditioning on future information in the form of a truncated normal distribution which, having upper and lower bounds as well as a covariance and a mean as parameters, is a more general model of conditioning than ours. However, as the variance of a truncated normal variable is a nonlinear function of the bounds, the reader may prefer an approach in terms of means and variances only. Maih did not discuss ways to get around the potentially serious computational costs of solving a forward-looking model with anticipated future data, nor did he discuss unbalanced data. We cover these gaps. We also derive and trial some revealing outputs and diagnostics, new to the conditioning literature.</p> <p>A systematic conditioning strategy is already common practice among those central banks that pioneered the use of DSGE models to forecast, in Norway, for example. We acknowledge their contribution without being able to cite unpublished work.</p> <p>The rest of this paper is organized as follows. Section II summarises our key assumptions and clarifies our notation. Section III presents the solved economic model in the absence of data uncertainty; there, it is also extended to allow for future data. 
In Section IV, this is extended to introduce the data into our forecast. Section V discusses the different strategies for modelling data uncertainty. Section VI presents several useful different ways of presenting the policy forecast. Section VII allows for reporting variables. Section VIII discusses the problem of state identification. Section IX presents some forecasts using this strategy on Colombian data. Section X concludes.</p> <p><b>II. THE KEY ASSUMPTIONS, THE SEQUENCING OF SOLUTIONS AND NOTATION</b></p> <p><b>A. CRUCIAL ASSUMPTIONS</b></p> <p>The input of this paper is the outcome of a micro-founded general equilibrium problem. We assume that in a prior stage, the relevant decisions of agents have been formulated as optimizing problems, that the first and second-order conditions have been derived and that those conditions have then been aggregated to match the data and transformed into a linear dynamic system with all variables only in terms of log deviations from time-invariant steady-state values. See for example Uhlig (1995). This paper begins at the next stage: where we need to solve the model and then to match it to available data. By solving the model, we mean that we need to model how the monetary policymakers choose their policy instrument and how agents form rational expectations. By forecasting, we mean how we want to use this model to fit and predict available data. We want all these decisions to reflect what data are really available. Even then, as we shall now see, we make use of a separation theorem to focus on the data part of the solution, assuming that the model has been solved and is given to us in a standard form.</p> <p><b>B. 
THE SEQUENCE OF EVENTS AND THE INFORMATION SETS</b></p> <p>The choices of policymakers and agents happen at a current time <i>s = t</i> which lies between 1 and the end of the forecast horizon, <i>T = t + k</i>. Unlike conventional expositions, it is assumed that the potential data set that was available to agents and policymakers at any time in the past up until now, that is at time <i>u</i> (<i>u</i> = 1, <i>..., t</i>), could have included possibly useful off-model information on variables timed from 1 to <i>u + k</i>, and also could have included information about data uncertainty. Later on, we specify exactly what is in the data sets, but here we just mention that they could contain values of future variables which we interpret as forecasts from other models. As we move from the past to the current date, information is never forgotten. The information sets are written as <b><i>F</i></b><sub>0</sub>,..., <i><b>F</b></i><sub>t</sub> with the property that <img src="img/revistas/espe/v29n66/v29n66a07s01.jpg"> for any <i>i &lt; j</i>. The macroeconomic forecaster makes his/her optimal forecasts based on the same data set as the one used by economic agents.</p> ]]></body>
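<body><![CDATA[<p>The structure of these information sets can be sketched with a toy example in the spirit of Graph 1: rows are series, columns are dates, and an entry is missing whenever it lies outside the information set. The series, release lags and announced path below are all invented for illustration.</p>

```python
import numpy as np

t, k = 6, 3          # current date and forecast horizon (illustrative)

def info_set(u, k, rng):
    # Toy information set F_u: rows are series, columns are dates
    # 1,...,u+k, and np.nan marks entries outside F_u.  The series and
    # release lags are invented: a CPI-like series available up to u,
    # an employment-like series with a two-period release lag, and a
    # policy path credibly announced through u+k.
    F = np.full((3, u + k), np.nan)
    F[0, :u] = rng.normal(size=u)
    F[1, :max(u - 2, 0)] = rng.normal(size=max(u - 2, 0))
    F[2, :] = 0.05
    return F

rng = np.random.default_rng(1)
observed_t = ~np.isnan(info_set(t, k, rng))
observed_u = ~np.isnan(info_set(4, k, rng))   # an earlier set, u = 4
# Information is never forgotten: F_4 is contained in F_6.
nested = bool(np.all(observed_u <= observed_t[:, :4 + k]))
```

<p>The nesting check mirrors the property that the information sets expand over time, including their future-dated (announced or externally forecast) entries.</p>]]></body>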
<body><![CDATA[<p>In this setup, the information set is common, the dynamic model linear, the objective function quadratic and expectations rational. Then, the literature on optimal policy under data uncertainty tells us that there is a <i>separation property</i> such that the problem can be split into two artificial stages and solved recursively<sup><a href="#1" name="s1">1</a></sup>. The first stage, where the rational expectations of agents are solved for, is certainty equivalent; at this point, the second-order properties of the data measurement and economic shocks do not matter. Our scheme allows for future data. For that reason, agents and the forecaster must be allowed to see shocks <i>k</i> periods in advance in this artificial first stage, as in Schmitt-Groh&eacute; and Uribe (2008). In a second stage, this partial solution is combined with a description of the data and the second-order properties of the data measurement and economic shocks to give the final solution.</p> <p><b>C. NOTATION</b></p> <p>The operator <i><b>E</b><sub>i</sub></i> means that we are taking expectations of a variable with respect to an information set <i><b>F</b><SUB>i</SUB></i> of timing <i>i</i>. Matrices and vectors are in bold, with matrices in upper case. <b>I</b><i><SUB>M</SUB></i> refers to the <img src="img/revistas/espe/v29n66/v29n66a07s05.jpg"> identity matrix. 
0<i><SUB>MN</SUB></i> is an <img src="img/revistas/espe/v29n66/v29n66a07s07.jpg"> matrix with zero elements. <img src="img/revistas/espe/v29n66/v29n66a07s08.jpg"> refers to the trace of a matrix; <img src="img/revistas/espe/v29n66/v29n66a07s09.jpg"> refers to the transpose of a matrix; <img src="img/revistas/espe/v29n66/v29n66a07s10.jpg"> refers to the Moore-Penrose inverse of a matrix; and <img src="img/revistas/espe/v29n66/v29n66a07s11.jpg"> is the <i>ij</i><SUP>th</SUP> element of the matrix <img src="img/revistas/espe/v29n66/v29n66a07s13.jpg">.</p> <p><b>III. SOLVING THE MODEL WITH FUTURE DATA</b></p> <p>Incorporating future information substantially expands the number of variables in the model. The purpose of this section is to show how future information can be incorporated into a DSGE model solution without adding substantially to the computational cost, because we only need to carry out some of the less intensive computations on the extended model. Essentially, the solution with future information can be carried out using the commonplace solution without future information.</p> <p>Our starting point is a micro-founded general equilibrium problem that has already been expressed in terms of log-linearised deviations from a steady state. The exposition and method of solution follows Klein (2000) closely:</p> <p><img src="img/revistas/espe/v29n66/v29n66a07f01.jpg"></p> </FONT> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">There are <i>N</i> variables in the vector <b>X</b><SUP>0</SUP><SUB>s</SUB>. <b>X</b><SUP>0</SUP><SUB>s</SUB> includes all economic variables endogenous to the model. The <i>N&epsilon;</i> variables in <b>Z</b><sup>0</sup><sub>s</sub> are a set of exogenous variables that follow univariate first-order processes. We call them economic shocks. 
Then there is    a matrix equation</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f02.jpg"></p>   </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <b>B</b><sup>0</sup><sub>&epsilon;</sub>   and <b>D</b><sup>0</sup><sub>&epsilon;</sub> are both diagonal matrices, although <b>B</b><sup>0</sup><sub>&epsilon;</sub> may have a zero entry     on its diagonal. These exogenous variables could be extended to include those which     follow a VAR but for ease of exposition, and without any loss of generality, here we     assume that they are only univariate.   </font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">   </FONT>       ]]></body>
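<body><![CDATA[<p>As a concrete illustration of equation (2), the sketch below simulates a small set of univariate first-order exogenous processes with diagonal persistence and loading matrices; the persistence values, including the zero diagonal entry, are made up for illustration.</p>

```python
import numpy as np

# Illustrative diagonal matrices for equation (2): each exogenous
# variable follows its own univariate first-order process.  The zero
# diagonal entry in B_eps gives a purely transitory shock.
B_eps = np.diag([0.9, 0.5, 0.0])
D_eps = np.diag([1.0, 1.0, 1.0])

def simulate_shocks(B, D, T, seed=0):
    # Z_{s+1} = B Z_s + D eps_{s+1}, with eps_{s+1} a martingale
    # difference (here i.i.d. standard normal) innovation.
    rng = np.random.default_rng(seed)
    Z = np.zeros((T + 1, B.shape[0]))
    for s in range(T):
        Z[s + 1] = B @ Z[s] + D @ rng.standard_normal(B.shape[0])
    return Z[1:]

Z = simulate_shocks(B_eps, D_eps, T=20000)
# Sample first-order autocorrelations recover the diagonal of B_eps.
rho_hat = [np.corrcoef(Z[:-1, i], Z[1:, i])[0, 1] for i in range(3)]
```

<p>Because the matrices are diagonal, the processes never interact; replacing them with full matrices would give the VAR extension mentioned in the text.</p>]]></body>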
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Being exogenous, these variables have no expectational error. Any autoregressive component of the shock is captured in the process (2) such that &epsilon;<SUB>s+1</SUB> is a martingale difference process with respect to the information set at time <i>s</i>. In particular</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif"> <p><img src="img/revistas/espe/v29n66/v29n66a07f03.jpg"></p> <p>and, without loss of generality:</p> <p><img src="img/revistas/espe/v29n66/v29n66a07f04.jpg"></p> </FONT> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The trivial steady-state solution, defined by <b>X</b><sup>0</sup><sub>s</sub> = <b>0</b><sub>N,1</sub> and &epsilon;<sub>s</sub> = <b>0</b><sub>N&epsilon;,1</sub>, for all time <i>s</i>, exists and is unique. The model begins at that steady state:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif"> <p><img src="img/revistas/espe/v29n66/v29n66a07f05.jpg"></p> </FONT> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Then, if we include the variables in Z<sup>0</sup><sub>s</sub> in the vector X<sup>0</sup><sub>s</sub>, the solved model can be presented in the form:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif"> <p><img src="img/revistas/espe/v29n66/v29n66a07f06.jpg"></p> </FONT> <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The solution in the form of equation (6) can be written in partitioned form, with the exogenous variables <img src="img/revistas/espe/v29n66/v29n66a07s03.jpg"> recovered in the lower partition:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif"> <p><img src="img/revistas/espe/v29n66/v29n66a07f07.jpg"></p> </FONT> ]]></body>
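<body><![CDATA[<p>A hedged numerical sketch of the solved form: equations (6) and (7) appear only as images in the original, so the snippet below assumes the usual first-order reduced form in which the state vector is premultiplied by a transition matrix each period, with the exogenous block in the lower partition. The matrix and initial condition are invented for illustration.</p>

```python
import numpy as np

# Invented transition matrix standing in for the solved form of
# equation (6): upper row is an endogenous variable, lower row is a
# single exogenous first-order shock (the lower partition of eq. (7)).
M = np.array([[0.6, 0.2],
              [0.0, 0.9]])

def point_forecast(M, x0, horizon):
    # With expected future innovations equal to zero, the point
    # forecast just iterates the solved transition matrix.
    path = [np.asarray(x0, dtype=float)]
    for _ in range(horizon):
        path.append(M @ path[-1])
    return np.array(path)

# Response to a one-unit exogenous shock today: the unique steady
# state is the zero vector, so the forecast decays back to it.
path = point_forecast(M, [0.0, 1.0], horizon=40)
```

<p>This recursion is the certainty-equivalent first stage; the data-uncertainty machinery of the later sections is wrapped around exactly this kind of state equation.</p>]]></body>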
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where the matrices <b>B</b><sup>0</sup><sub>&epsilon;</sub> and <b>D</b><sup>0</sup><sub>&epsilon;</sub> are defined in equation (2).   </font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">       <p>In fact Klein (2000, pages 1417-1418, equations (5.20) and (5.21)) shows that the upper partition of equation (7) can be partitioned further, with the upper positions of the vector assigned to the non-predetermined variables and the lower positions to the predetermined endogenous variables. From now on it is assumed that the vector <b>X</b><sup>0</sup><sub>s</sub> is arranged in that way, and <i>N<sub>p</sub></i> is defined as the number of endogenous predetermined variables. A variable is predetermined if its generating process is <i>backward-looking</i>, following Klein's definition of a <i>backward-looking</i> process. We need not impose this prior designation, but if we did not, we would have to allow for more initial conditions for state variables (Sims, 2002), complicating the exposition.</p>       <p>According to this further partition:  </p>   <img src="img/revistas/espe/v29n66/v29n66a07f08.jpg">       <p>Where:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f09.jpg"></p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f10.jpg"></p>       <p>And:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f11.jpg"></p>   </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The matrices <b>S</b><sub><i>ij</i></sub>, <b>T</b><sub><i>ij</i></sub>, <b>Z</b><sub><i>ij</i></sub>, <b>Q</b><sub>i</sub> for <i>i, j</i> = 1, 2 are determined by a generalized Schur factorization of <b>A</b> and <b>B</b>. 
For this paper, it is crucial to note that these matrices are     independent of the matrices <b>B</b><sup>0</sup><sub>&epsilon;</sub> and <b>D</b><sup>0</sup><sub>&epsilon;</sub> where the error process enters. Thus, it will     always be the case that the matrix <img src="img/revistas/espe/v29n66/v29n66a07s04.jpg">  is independent of the parameters of the exogenous   shock processes (of matrices <b>B</b><sup>0</sup><sub>&epsilon;</sub> and <b>D</b><sup>0</sup><sub>&epsilon;</sub>). </font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">       <p>    The matrices <img src="img/revistas/espe/v29n66/v29n66a07s02.jpg">, <img src="img/revistas/espe/v29n66/v29n66a07s06.jpg"> and <img src="img/revistas/espe/v29n66/v29n66a07s12.jpg"> do depend on <b>B</b><sup>0</sup><sub>&epsilon;</sub> and <b>D</b><sup>0</sup><sub>&epsilon;</sub> however.</p>       ]]></body>
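This independence can be checked numerically: a generalized Schur (QZ) factorization takes only **A** and **B** as inputs, so the resulting matrices never see **B**⁰ε or **D**⁰ε. A minimal sketch with a hypothetical two-variable system, using SciPy's ordered QZ routine (the structural matrices below are invented for illustration):

```python
import numpy as np
from scipy.linalg import ordqz

# Hypothetical structural matrices of a two-variable system A E[x'] = B x + ...
A = np.array([[1.0, 0.2],
              [0.0, 1.0]])
B = np.array([[0.9, 0.0],
              [0.1, 0.8]])

# Ordered QZ: generalized eigenvalues inside the unit circle are sorted first,
# as in Klein's solution method.
S, T, alpha, beta, Q, Z = ordqz(A, B, sort='iuc')

# A = Q S Z' and B = Q T Z': the factorization is built from A and B alone,
# so the shock-process matrices play no role in it.
```

The point of the sketch is only that `S`, `T`, `Q` and `Z` are functions of `A` and `B`; changing the shock persistence matrices would leave them untouched.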
<body><![CDATA[<p>    To allow for the possibility that future data affect current behaviour, agents must be able to anticipate shocks up until the forecast horizon. In this section, the model and its solution are extended to allow for that possibility, along the lines of Schmitt-Groh&eacute; and Uribe (2008).</p> </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">    It has been assumed that at each time <i>u</i> when a forecast is made, shocks can in principle be anticipated <i>k</i> periods ahead<img src="img/revistas/espe/v29n66/v29n66a07s14.jpg">. <img src="img/revistas/espe/v29n66/v29n66a07s17.jpg" width="44" height="25" /> is defined as perfect information on the vector of shocks &epsilon;<sub>s+1</sub>, but known earlier, at time <i>s &ndash; i</i> for <i>i</i> = 0, ..., <i>k</i> &ndash; 1. Then, the extended vector of exogenous shocks can be written as:</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f12.jpg"></p>       <p>and the extended vector of economic variables becomes:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f13.jpg"></p>       <p>The extended system of exogenous variables is now:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f14.jpg"></p>       <p>It immediately follows that the solution to the extended system is of the form:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f15.jpg"></p>       <p>Comparing the two solutions, we can see that only the matrices <img src="img/revistas/espe/v29n66/v29n66a07s06.jpg"> and <img src="img/revistas/espe/v29n66/v29n66a07s12.jpg"> need to be recalculated. 
These matrices can be derived by applying the same formulae as in the model without future shocks, (17), (18) and (3), but with <b>B</b><sup>0</sup><sub>&epsilon;</sub> replaced by <b>B</b><sub>&epsilon;</sub>, <b>D</b><sup>0</sup><sub>&epsilon;</sub> replaced by <b>D</b><sub>&epsilon;</sub> and <b>C</b> replaced by <img src="img/revistas/espe/v29n66/v29n66a07s18.jpg">.  </p>       ]]></body>
<body><![CDATA[<p><img src="img/revistas/espe/v29n66/v29n66a07f16.jpg"></p>       <p>Where:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f17.jpg"></p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f18.jpg"></p>       <p>We have shown that extending the model to incorporate future information does not require carrying out a new Schur factorization on a system with many more endogenous variables reflecting information on each future period for each variable.</p>       <p>The computational cost would also be high if the vectorised formula (3) were actually used in the larger extended model. Luckily, as Klein (2000) explains, there is an alternative recursive method for calculating <b>M</b> that can be applied to the extended system to keep the computational cost manageable. In summary, the solution to the model with future shocks can be carried most of the way by the solution to the model without future shocks.</p>       <p><b>IV. ALLOWING FOR AWKWARD FEATURES OF THE POLICY   FORECASTING DATA SET</b></p>       <p>    Up until now, the solution has abstracted from the data set. In this section, the data are introduced. Data can be uncertain and unbalanced. The idea is to wrap the system (15) around an observation system which relates the true values to the noisy observed data values. We will now describe the observation system.    </p>       <p><b>A. INTRODUCING THE DATA SET </b></p>   </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Let the vector <b>y</b><sup>u</sup><sub>s</sub> of size <img src="img/revistas/espe/v29n66/v29n66a07s19.jpg"> be the observed data that pertains to variables at time <i>s</i> available in information set <i>u</i>. There can be holes in this data: some observations may not be available at time <i>s</i> in the information set of time <i>u</i>. 
Thus <img src="img/revistas/espe/v29n66/v29n66a07s25.jpg">     where <i>N<sub>Dmax</sub></i> is the maximum number of data series possibly available.</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    ]]></body>
<body><![CDATA[<p>    This measurement system is written as: </p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f19.jpg"></p>       <p>With:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f20.jpg"></p>   </FONT>       <p><font size="2"> <img src="img/revistas/espe/v29n66/v29n66a07s26.jpg"></font><font size="2" face="Verdana, Arial, Helvetica, sans-serif"> is the vector of deterministic variables such as dummies, constants or trends     which can affect data measurement.     (Economic adjustments to the model are already     incorporated in the states).  <b>V</b><sup>u</sup><sub>s</sub> is the normally distributed stochastic component of     the data errors which have variance-covariance matrices <img src="img/revistas/espe/v29n66/v29n66a07s28.jpg"> that vary both with the   time of the data and also with the information set.</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p>Note that the data errors are assumed to be independent of the economic shocks.     This assumption is contestable: in many cases, one would expect data measurement     problems to be related to the economic cycle or the entry or exit of members of the     sample during booms and recessions<sup><a href="#2" name="s1">2.</a></sup> It can easily be relaxed in the Kalman filter     algorithms. However, the greater generality makes it harder to identify shocks. And     so, the possibly strong restriction that the economic and data uncertainty shocks are     independent is imposed in order to avoid getting embroiled in identification problems at this point. Problems of identification are discussed later on.</p>   <b>R</b><sup>u</sup><sub>s</sub> is a selector matrix which alters the number of rows to suit the number of series     with time s data observations within the time u information set: <i>s =</i> 1,...,<i>T</i> and     <i>u =</i> 1,...,<i> t</i>. 
It is assumed that data are available in principle from time 0 to time <i>T</i>, where <i>T</i> is the end of the forecast, but this is quite general, as <b>R</b><sup>u</sup><sub>s</sub> can fill in the holes.   </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Then, the selector matrices <b>R</b><sup>u</sup><sub>s</sub> are formed by taking the <img src="img/revistas/espe/v29n66/v29n66a07s29.jpg"> identity matrix and deleting the rows corresponding to the data series that are not available at time <i>s</i> within information set <i>u</i>.   </font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p>Naturally, then:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f21.jpg"></p>   </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">With <img src="img/revistas/espe/v29n66/v29n66a07s32.jpg"> being a mythical data set where the maximum number of data series is always available (although with measurement error).    </font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    ]]></body>
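The construction of the selector matrices can be sketched directly: start from the identity and drop the rows of series that are missing at time <i>s</i> in information set <i>u</i>. The availability pattern and data values below are hypothetical.

```python
import numpy as np

def selector(available):
    """Build a selector matrix by deleting rows of the identity
    for the series that are unavailable (hypothetical helper)."""
    eye = np.eye(len(available))
    return eye[np.asarray(available, dtype=bool)]

# Hypothetical availability: the second of three series is missing at this (s, u).
R = selector([True, False, True])
y_full = np.array([1.2, 9.9, -0.4])   # the "mythical" full data vector
y_obs = R @ y_full                    # the observations actually available
```

Applying `R` to the full data vector returns only the first and third entries, which is exactly how the holes in the data set are handled in the measurement equation.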
<body><![CDATA[<p>That the variance-covariance matrices of the data error <img src="img/revistas/espe/v29n66/v29n66a07s28.jpg"> are heteroscedastic across both time and information sets means that available data can be weighted according to how reliable that data is thought to be, and also in real time; that is, relative to when decisions are made and expectations formed.</p>       <p><b>B. THE INFORMATION SET</b></p>       <p>    The information set available to agents, policymakers and forecasters alike at time <i>u</i> is given by:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f22.jpg"></p>       <p>for <i>u =</i> 1,...,<i>t, t + k = T</i> and <img src="img/revistas/espe/v29n66/v29n66a07s33.jpg">. This structure satisfies the descriptions given in section II.B. As shall be seen shortly, this information setup allows for a data set with the particular characteristics mentioned in the introduction.</p>       <p><b>C.  THE SOLUTION TO THE DATA UNCERTAINTY PROBLEM </b></p>       <p>    The idea is to derive the expectations and forecasts of the economic variables in the model as the fixed-interval smoothed state estimates of the economic system (6), conditional on the measurement system (19) and (20) and the information structure (22).</p>   </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">    Although the exposition of the Kalman filter is standard (see Harvey, 1991; Durbin and Koopman, 2001, for example), it is worth repeating here to clarify our particular notation. Given the assumption that the residuals are Gaussian, the Kalman filter produces the minimum mean squared linear estimator of the state vector <b>x</b><sub>t+1</sub> using the set of observations for time <i>t</i> in information set <i><b>F</b><sub>u</sub></i>. 
Let us call that estimate <img src="img/revistas/espe/v29n66/v29n66a07s15.jpg"> and its associated covariance <img src="img/revistas/espe/v29n66/v29n66a07s16.jpg">. The series of these estimates is a building block towards what we are really interested in: the fixed-interval smoothed estimates. We carry out this calculation for each information set, although typically we will only be interested in the forecasts from the last (the current) information set, <i>u = t</i>.    </font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">       <p>Begin with the initial values contained in information set <i><b>F</b><sub>1</sub></i>. Then the recursion runs from <i>s =</i> 1 to <i>s = T &ndash; </i>1 over:</p>   <img src="img/revistas/espe/v29n66/v29n66a07f23a.jpg">       <p>Where:</p>       ]]></body>
<body><![CDATA[<p><img src="img/revistas/espe/v29n66/v29n66a07f23b.jpg"></p>   </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">And the <img src="img/revistas/espe/v29n66/v29n66a07s41.jpg"> gain matrices are:</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f24.jpg"></p>   </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">the <img src="img/revistas/espe/v29n66/v29n66a07s34.jpg">covariance matrices of one-step estimation error are given by the Riccati   equation:</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f24b.jpg"></p>   </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">and the <img src="img/revistas/espe/v29n66/v29n66a07s21.jpg">covariance matrices of the one-step-ahead prediction errors in   the observation data <i><b>L</b><sub>us</sub></i>  are defined as:</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f24c.jpg"></p>       <p>The initial values are given as:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f24d.jpg"></p>       <p>We also need an updated estimate at least at time <i>T</i>: </p>       ]]></body>
<body><![CDATA[<p><img src="img/revistas/espe/v29n66/v29n66a07f24e.jpg"></p>       <p>with a series of variance-covariance matrices:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f24f.jpg"></p>       <p>Where:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f24h.jpg"></p>       <p>The fixed-interval smoothed estimates are instead given by working backwards from     <img src="img/revistas/espe/v29n66/v29n66a07s20.jpg"> with the following recursion:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f25.jpg"></p>   </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">With <b>r</b><i><sub>uT</sub></i> for <i>s = T</i> &ndash; 1, <i>T &ndash; 2</i>,...,1 and the associated variance-covariance matrices   of the smoothed prediction error given by:</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f26.jpg"></p>       <p>The one-step ahead predictions of the data are given by:</p>       ]]></body>
<body><![CDATA[<p><img src="img/revistas/espe/v29n66/v29n66a07f27a.jpg"></p>       <p>and the smoothed predictions follow the process:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f27b.jpg"></p>       <p>The <img src="img/revistas/espe/v29n66/v29n66a07s21.jpg"> variance-covariance matrix of the forecast error in the smoothed predictions of the fitted data at time <i>s</i> conditional on the information set <i>u</i>, <img src="img/revistas/espe/v29n66/v29n66a07s22.jpg">, is given by:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f27c.jpg"></p>       <p>The forecasts and fitted values of our model, which are identical to the expectations of agents, are given as:</p>   </FONT>       <p><img src="img/revistas/espe/v29n66/v29n66a07f28.jpg"></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">       <p>Equation (28) is possibly the most important in the paper, linking the estimates from the algorithm using the data to the expectations of agents. It can be thought of as an assumption, as it would not hold if the information sets of agents and the policy modeller were not symmetric. The Kalman filter algorithm also gives us other useful statistics which we can use in analyzing and presenting the forecast, as we shall do later on.</p>       <p><b>V. PHILOSOPHIES FOR FINDING PARAMETER VALUES FOR   THE DATA MEASUREMENT EQUATION</b></p>   </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">To make this idea operational, we need to describe the schemes used to find values for the matrices <img src="img/revistas/espe/v29n66/v29n66a07s23.jpg"> and <img src="img/revistas/espe/v29n66/v29n66a07s24.jpg"> that describe the data measurement system.</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    ]]></body>
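Whichever scheme is chosen, the measurement loadings and data-error variances enter the forecast only through the filter recursions of the previous section. A scalar sketch with hypothetical parameter values shows where they act: a missing observation (a hole handled by the selector matrix) simply skips the update step, and uncertainty grows through that period.

```python
import numpy as np

# Hypothetical scalar system: state x' = M x + eta, data y = H x + v.
M, var_eta = 0.9, 1.0        # transition coefficient and shock variance
H, var_v = 1.0, 0.5          # measurement loading and data-error variance
ys = [1.2, None, 0.4]        # None marks a hole in the data set

x, P = 0.0, 1.0              # initial estimate and its variance
filtered = []
for y in ys:
    # predict
    x, P = M * x, M * P * M + var_eta
    # update only if data is available (the role of the selector matrix)
    if y is not None:
        K = P * H / (H * P * H + var_v)
        x, P = x + K * (y - H * x), (1.0 - K * H) * P
    filtered.append((x, P))
```

In the run above, the estimation variance rises through the period with no data and falls again once an observation arrives, which is exactly the mechanism by which unreliable or missing data receive less weight.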
<body><![CDATA[<p><b>A.  TENDER LOVING CARE </b></p>       <p>    Priors for each of the elements of these matrices could be justified and imposed on a case-by-case basis, and the rest estimated. Provided the estimations are identified, a large degree of generality could be allowed for here. The model could be estimated in separate parts (ignoring some of the interrelations), estimated together, or estimated in blocks. In any case, an appropriate estimation method, probably Bayesian, which balances prior information against data information, could be applied to the problem. As the priors are not automatic, and both the state-space and the data set could be very large, this could be costly to carry out and possibly even to maintain. But this might well be the most accurate method for building the data uncertainty part of the model, and may be what is needed if the demands of policy are such that the forecasts have to be consistent with all monitored indicators.</p>       <p><b>B. PURPOSE-BUILT DATA SET </b></p>       <p>    The most common method used to construct the data set for a forecasting model is to find one data series to match each important state variable. However, there will be some series &mdash;for example, the Lagrange multipliers or the flexible price variables&mdash; for which no good data series is available at all. These could be left to be solved within the model.</p>       <p>For those variables for which there is a series in the data set, the choice of <b>H</b> is straightforward: where the model concept has a series to match a state variable, the corresponding row and column of <b>H</b> would be those of an identity matrix. 
For example, if the national     accounts GDP data corresponds to the value-added output in the model and GDP     is the nth data series and value-added output the m<SUP>th</SUP> state variable, <b>H</b> will have a 1     in entry<i> (m, n) </i>and zero elsewhere in the m<sup>th</sup> row and the n<sup>th</sup> column. Even if it can     be assumed that that data series might on average rise and fall alongside the model     concept, there is less reason to argue that it will be unbiased or without some noise.     On these grounds, some systematic bias, <img src="img/revistas/espe/v29n66/v29n66a07s30.jpg">, and some data measurement     error,<img src="img/revistas/espe/v29n66/v29n66a07s31.jpg">, might be allowed to interfere in these relations.</p>       <p>Where the model concept has no data, the consistent and transparent solution would     be to eliminate that row in <b>H</b> and let the model solve for these series, by combining     the available data with knowledge of model structure and parameter estimates.  </p>       <p>This method is less costly, as data series are chosen essentially based on only what     state variables are important in the model. But the forecasts may be much worse     than an approach that also considers what useful data is available in designing the     match. The purpose-built data set ignores the useful technology that is in this paper     for bringing in other useful data.</p>       <p><b>C.  EXTENDING THE MODEL TO FIT IN DATA </b></p>       <p>    Yet, there may be variables which are not needed in the core solution of the model,     but for which there is useful data. It can be very valuable to extend the model to     include data from these variables for two reasons. First, if that data is informative it     should directly improve the forecast. Second, if that data is what agents and policymakers     use, the model will need to incorporate that data if it is to predict their behaviour     well. 
One example is that of money aggregates, which on occasion provide very useful and timely information for monetary policy but which do not play a critical part in the solution of many DSGE models. Another example is when only annual data are available for a quarterly model.</p>       <p>It is assumed that there are <i>N<sub>M</sub></i> of these variables, called <b>m</b><sub>s</sub>, which can be written as a static function of the variables in the model:</p>       ]]></body>
<body><![CDATA[<p><img src="img/revistas/espe/v29n66/v29n66a07f29a.jpg"></p>       <p>with each &epsilon;<sub>m,s</sub> uncorrelated with the others and with &epsilon;<sub>s</sub>, and <img src="img/revistas/espe/v29n66/v29n66a07s56.jpg"> non-singular. The key assumption here is that the solution for these variables can take place in a second stage, after the main model has been solved. For example, assume that annual data is only available on a flow series, such as GDP:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f29b.jpg"></p>   </FONT>       <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <i>y<sub>as</sub></i> is the data on the annual series and <i>Z<sub>q,s-i</sub></i> is the unobserved (seasonally adjusted) quarterly state, and where both the annual and quarterly series are expressed in terms of log deviations from steady state. This could be introduced into the model in the form of equation (29).</font></p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p>Let us assume that these variables are related to the data according to the equation:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f29c.jpg"></p>       <p>Therefore, this extension of the model can be built in by adapting the measurement equation according to:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f29d.jpg"></p>       <p>The rest of the results of the paper would follow if the following replacements were made in the measurement equation:</p>       <p><img src="img/revistas/espe/v29n66/v29n66a07f29e.jpg"></p>       ]]></body>
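The annual-GDP example above can be folded into the measurement matrix as one extra row. A minimal sketch follows, under two stated assumptions that are not in the original: that the four most recent quarterly GDP states occupy positions 0-3 of the state vector, and that the annual log deviation is the average of the quarterly ones (whether it is the sum or the average depends on the chosen normalisation).

```python
import numpy as np

n_states = 6
H = np.eye(n_states)              # one-to-one block of a purpose-built data set

# Extra measurement row: annual flow data as the average of four quarterly
# log deviations (hypothetical state ordering: quarterly GDP in slots 0..3).
h_annual = np.zeros(n_states)
h_annual[0:4] = 0.25
H_ext = np.vstack([H, h_annual])  # this H_ext replaces H in the measurement equation
```

The filter then treats the annual figure like any other noisy observation, updating the four quarterly states jointly whenever the annual series is observed.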
<body><![CDATA[<p>If the form (29) is not justified, then the original system would have to be enlarged to incorporate this information.</p>       <p><b>D.  DATA INTENSIVE METHODS </b></p>       <p>    At the other extreme, very many data series, many more than there are states, could be included, as in Boivin and Giannoni (2006). With very many data series, it will be infeasible to impose separate priors on the elements of the matrices linking each data series to each unobserved state variable. What might be possible is to group the data series in terms of which state variable they contain some useful information for, with the groups not necessarily being exclusive. There would be, at most, <i>N</i> groups of data.</p>       <p>One possible identifying assumption is that the common information that this group contains is proportional to the state variable that indexes that group. Standard dynamic factor analysis methods can be used to estimate and forecast each state variable, with the added twist here that the unobserved behaviour of the state variables is restricted to be consistent with the economic model.</p>       <p><b>E.  SOME GENERAL PRINCIPLES TO CHOOSE THE DATA MEASUREMENT UNCERTAINTY </b></p>       <p>A more difficult decision is the choice of <img src="img/revistas/espe/v29n66/v29n66a07s28.jpg">. Imposing case-by-case priors and estimating will almost certainly be infeasible, and also unidentifiable. It would be better to impose some structure to reduce the degrees of freedom. One solution could be to assume some process by which the variance increases slowly over the sample, starting from time 0 up until the end of the forecast, but with that rate of increase slowing down. The idea is then to estimate, or calibrate, that process, taking one series at a time &mdash;or, much more ambitiously&mdash; estimating the series of the matrices as a whole.     
But there may be some uneven heteroscedasticity around the current period because of the calendar of national accounts releases. This could be dealt with by splitting the data set into three periods and by putting slope dummies into the variance process to adjust for the regime shift.</p>       <p>&bull; The first period goes from time 0 to time <i>T</i>1, and takes us up to two quarters before the current quarter, for which national accounts data is available.<br />     &bull; The second period is from time <i>T</i>1 to time <i>T</i>2, two quarters after the current quarter. Within this year, centered on the current time, judgement plays a major role.<br />   &bull; The final period is from time <i>T</i>2 up until the end of the forecast at time <i>T</i>.</p>   </FONT>     <p><img src="img/revistas/espe/v29n66/v29n66a07f29f.jpg"></p>       <p>For each data series, the <img src="img/revistas/espe/v29n66/v29n66a07s39.jpg"> data measurement variances across both time and information sets are summarised by five parameters: a starting value <img src="img/revistas/espe/v29n66/v29n66a07s35.jpg">, a final value <img src="img/revistas/espe/v29n66/v29n66a07s36.jpg">, a rate of improvement &alpha;, and two slope dummies, <img src="img/revistas/espe/v29n66/v29n66a07s37.jpg"> and <img src="img/revistas/espe/v29n66/v29n66a07s38.jpg">, to separate out the window surrounding the current time. If a real-time database were available, it could be used to estimate the scale of data uncertainty at a particular lag, for example at <i>t &ndash;</i> 3, and through that same route to estimate <img src="img/revistas/espe/v29n66/v29n66a07s40.jpg">. These kinds of estimates should help pin down values of the parameters here.</p>   <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">     <p><b>VI. USEFUL OUTPUTS FROM THE FORECASTING SYSTEM</b></p>     ]]></body>
<body><![CDATA[<p>    A policy forecast is judged not just on its predictive accuracy, but also on how well it tells stories. Indeed, often the job of the policy forecaster is to explain what went wrong! In order to do this, it is important to be able to decompose the forecast along two dimensions: first, in terms of the data and, second, in terms of the economic and data measurement shocks. The contributions along either dimension are myriad, and so it is also important to think of interesting ways of summarising this information. Finally, it would be useful to have a metric for assessing the information worth of individual pieces of data or of whole series.</p> </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><br />   <b>A.  DATA DECOMPOSITIONS </b><br />   Consider the <img src="img/revistas/espe/v29n66/v29n66a07s41.jpg"> multiplier matrices <img src="img/revistas/espe/v29n66/v29n66a07s42.jpg">, which, when multiplied by the selector matrix <img src="img/revistas/espe/v29n66/v29n66a07s43.jpg">, determine how much each piece of information (data plus adjustments) from the mythical full data set at time <i>s</i> based on the information set <img src="img/revistas/espe/v29n66/v29n66a07s32.jpg"> affects the smoothed estimate at time <i>s</i>. These multipliers are defined by the relationship:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f30a.jpg"></p> </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Define the <img src="img/revistas/espe/v29n66/v29n66a07s41.jpg"> matrices <img src="img/revistas/espe/v29n66/v29n66a07s46.jpg">:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f30b.jpg"></p>     <p>Then, the multiplier matrices are calculated in part backwards and in part forwards from time <i>s</i>. 
The recursion formulas from Koopman and Harvey (2003) are:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f31.jpg"></p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f32.jpg"></p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f33.jpg"></p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f34.jpg"></p>     ]]></body>
<body><![CDATA[<p><img src="img/revistas/espe/v29n66/v29n66a07f35.jpg"></p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f36.jpg"></p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f37.jpg"></p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f38.jpg"></p>     <p>The <i>contributions</i> of each piece of data to these estimates are then simply that piece of data multiplied by its multiplier, except that here they are also indexed by the information set through <i>u</i>.</p>     <p>    Publishing forecasts is about communications as much as anything else. But, if the central bank wants to communicate with the public, it should always refer to data that is publicly and objectively available<sup>1</sup>. For that reason, it is also important to present the contributions of data to the smoothed estimates of the data series themselves. These are given by:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f39.jpg"></p>     <p>A popular maxim in policy forecasting is that it is the source of the shock that matters, meaning that the explanations of the forecast depend very much on what shocks are believed to be driving the forecast. It would therefore be useful to know how we determine these shocks: in terms of which pieces of data tell us that there is a demand rather than a supply-side shock, for example. The economic shocks are given as the last <img src="img/revistas/espe/v29n66/v29n66a07s47.jpg"> members of the state vector, and so equation 6.1 can be used to generate this interesting decomposition.</p>     <p><b>    1. Summarizing the Information Given by the Data Decompositions</b></p>     <p>The decompositions in this section provide a myriad of information. It is usually convenient to summarize that information along some relevant dimension. 
There are many interesting possibilities, but several standard aggregations spring to mind:<br />   1) <i>The contribution of each individual data series</i> in a given set is given by adding up the contributions of observations on each series over time.<br />   2) <i>The impact of the news in each data series on each forecasted variable</i> can be calculated by subtracting the contribution of each data series as it was in the previous information set from the contribution of that series as it is in the current data set.<br />   3) The data set can be split into the part that reflects off-model judgement and the rest. <i>The role of judgements</i> is then the sum of the contributions of those pieces of data which are designated to be judgement.<br />   4) These calculations can be restricted to <i>the contribution to key variables</i>, for example inflation and GDP, and then only <i>over the more interesting periods</i>, such as the forecast.</p>     ]]></body>
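Aggregations 1) and 2) amount to simple sums over the contribution arrays. A sketch with hypothetical multiplier and data arrays (periods by series), holding the data fixed between information sets for simplicity; in practice the data themselves also change between vintages.

```python
import numpy as np

rng = np.random.default_rng(1)
omega_now = rng.standard_normal((8, 3))    # multipliers, current information set (hypothetical)
omega_prev = rng.standard_normal((8, 3))   # multipliers, previous information set (hypothetical)
y = rng.standard_normal((8, 3))            # observed data, periods x series

contrib = omega_now * y                    # contribution of each observation
by_series = contrib.sum(axis=0)            # 1) total contribution of each data series
news = by_series - (omega_prev * y).sum(axis=0)  # 2) impact of the news in each series
```

Aggregation 3) would follow the same pattern, summing `contrib` over the columns designated as judgement rather than over time.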
<body><![CDATA[<p><b>2. The Information Contribution of Each Piece of Data to the Forecast</b></p>     <p>    The expected contributions of particular pieces of data to the forecasted variable     were derived previously in this section. It is interesting to complement that with     some measure of how much useful information each data series brings to the forecast, which can be thought of as a forecast error variance decomposition in terms of     the data series.</p>     <p>    Tinsley, Spindt, and Friar (1980) and later Coenen, Levin, and Wieland (2004)     describe how the information     contribution of a piece of data to a forecast is related to     the reduction in uncertainty that using that data brings to the forecast. The expected     uncertainty in two sets of variables x and y is defined as:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f40a.jpg"></p>     <p>where <i>f (y,x)</i> is their joint density. Consistently the uncertainty in just <i>y</i> is:</p> </FONT>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><img src="img/revistas/espe/v29n66/v29n66a07f40b.jpg"></font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">     <p>where<i> f (y)</i> is the marginal density of <i>y</i>, and the conditional uncertainty of <i>y</i>  given <i>x</i> is:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f40c.jpg"></p> </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <i>f (y | x)</i> is the conditional density of y given <i>x</i>. The mutual information content     of <i>x</i> and y is then defined by the reduction in expected uncertainty when <i>x</i> is used to   predict <i>y:</i></font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f40d.jpg"></p>     ]]></body>
<body><![CDATA[<p>and is always non-negative. If <i>y</i> and <i>x</i> are jointly normally distributed, Tinsley, Spindt, and Friar (1980) show that the mutual information content is then:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f40e.jpg" width="580" height="71" /></p>     <p>where &Omega;<sub>z1z2</sub> is the covariance matrix between two vectors <i>z</i><SUB>1</SUB> and <i>z</i><SUB>2</SUB>. If <i>x</i> is partitioned into two sets of information, the extra gain from using <i>x</i><SUB>2</SUB> alongside <i>x</i><SUB>1</SUB>, over using <i>x</i><SUB>1</SUB> alone, is <i>G(y | x<sub>1</sub>, x<sub>2</sub>)</i>, where:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f40f.jpg"></p>     <p>This is the percentage difference between the square root of the determinant of the conditional variance-covariance matrix of <i>y</i> using just <i>x</i><SUB>1</SUB> and the square root of the determinant of the conditional variance-covariance matrix using both <i>x</i><SUB>1</SUB> and <i>x</i><SUB>2</SUB>. As such, it seems to be a good measure of the marginal information content of <i>x</i><SUB>2</SUB> over and above <i>x</i><SUB>1</SUB>. Note that it is not the same as the mutual information content of <i>y</i> and <i>x</i><SUB>2</SUB>, which is <i>I (y | x<sub>2</sub>)</i>.</p>     <p>This measures the information value for estimating all the states in the forecast over the whole sample. But what is more likely to be of interest is to assess the information content of a data series in terms of predicting just one state variable, and even then, over a particular time window. For example, the relevant question might be: how useful is a particular data series in predicting inflation over the two-year forecast horizon? An appropriate answer would then be given by the following statistic based on a scalar version of (40) above.</p>     <p><br />   <b>Proposition</b>. 
The extra information content of the <i>j</i><SUP>th</SUP> data series in predicting the <i>i</i><SUP>th</SUP> variable on average over the two-year forecast horizon could then be measured by:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f41.jpg"></p>     <p>where the <i>i</i><sup>th</sup> variable is the one of interest, <i>t</i> is the current time, the frequency is quarterly, and <i>T</i><SUP>-j</SUP> indicates that all data series except the <i>j</i><sup>th</sup>, whose information content we want to measure, are used to calculate the covariance of the states.</p>     <p><b>B.  EXTRACTING INFORMATION ON THE ECONOMIC CONTENT OF THE MODEL </b></p>     ]]></body>
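<body><![CDATA[<p>Under the joint-normality assumption, both the mutual information content and the marginal-gain statistic can be computed directly from a partitioned covariance matrix. A minimal sketch; the matrix names and example numbers are illustrative, not taken from the model:</p>

```python
import math
import numpy as np

def cond_cov(S_yy, S_yx, S_xx):
    """Gaussian conditional covariance of y given x."""
    return S_yy - S_yx @ np.linalg.solve(S_xx, S_yx.T)

def mutual_information(S_yy, S_yx, S_xx):
    """Reduction in expected uncertainty from using x to predict y,
    for jointly normal y and x; always non-negative."""
    return 0.5 * math.log(np.linalg.det(S_yy)
                          / np.linalg.det(cond_cov(S_yy, S_yx, S_xx)))

def marginal_gain(S_yy, S_yx1, S_x1, S_yx12, S_x12):
    """Percentage fall in the square root of the determinant of the
    conditional covariance of y when x2 is used alongside x1."""
    d1 = math.sqrt(np.linalg.det(cond_cov(S_yy, S_yx1, S_x1)))
    d12 = math.sqrt(np.linalg.det(cond_cov(S_yy, S_yx12, S_x12)))
    return 100.0 * (d1 - d12) / d1

# Joint covariance of (y, x1, x2), partitioned by hand.
S = np.array([[2.0, 0.8, 0.5],
              [0.8, 1.0, 0.2],
              [0.5, 0.2, 1.0]])
S_yy, S_yx1, S_x1 = S[:1, :1], S[:1, 1:2], S[1:2, 1:2]
S_yx12, S_x12 = S[:1, 1:], S[1:, 1:]
# Here x2 carries extra information about y, so the gain is positive.
```

<p>A zero gain would indicate that, conditional on the first block of data, the candidate series adds no information about the variable of interest.</p>]]></body>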
<body><![CDATA[<p>It is also interesting to see to what extent the forecasted values are driven by economic shocks. The economic content of the model can also be presented as impulse responses or variance decompositions as in Gerali and Lippi (2003).</p>     <p><b>1. Economic Decompositions</b></p> </FONT>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">The last <i><img src="img/revistas/espe/v29n66/v29n66a07s47.jpg"></i> state variables describe the exogenous economic shocks, also in terms of when they were first spotted. <img src="img/revistas/espe/v29n66/v29n66a07s48.jpg"> refers to the vector of state estimates of those state variables at time <i>s</i> using information set <i>u</i>. The lower partition described in equation (15) can be combined with these smoothed state estimates to recover smoothed estimates of the white noise components of these shocks as:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif"><img src="img/revistas/espe/v29n66/v29n66a07f42.jpg">     <p>Then the contribution of the <i>i<SUP>th</SUP></i> shock to the smoothed estimate at time <i>s</i> on information set <i>u</i> is:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f43a.jpg"></p> </FONT>     <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where <i>R<sup>&epsilon;i</sup></i> is a <img src="img/revistas/espe/v29n66/v29n66a07s55.jpg" width="202" height="26" /> matrix formed by taking an identity matrix and putting zeros in all but the <i>i<sup>th</sup></i> diagonal element. 
The contribution to the smoothed fitted values of the data series will be:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f43b.jpg"><br />   Here, it should be remembered that the sum of the contributions across all shocks will equal the smoothed estimate of the data series minus the mean bias terms <img src="img/revistas/espe/v29n66/v29n66a07s49.jpg"></p>     <p><b>2. Impulse Responses</b></p>     <p>To obtain the economic decomposition of the states, the decomposition is first written as:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f44a.jpg"></p> </FONT>    ]]></body>
<body><![CDATA[<p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">where the <img src="img/revistas/espe/v29n66/v29n66a07s50.jpg"> matrices <img src="img/revistas/espe/v29n66/v29n66a07s51.jpg"> are given by <img src="img/revistas/espe/v29n66/v29n66a07s52.jpg">.</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p>Substituting out for the true states gives:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f44b.jpg"></p>     <p>which decomposes the state estimate into the contribution of each economic shock and each data noise shock. Impulse responses to economic shocks or data errors can now be derived from expression (44).</p>     <p>Similarly, the smoothed estimates of the data and the policy interest rate can be decomposed into shocks with a view to obtaining the impulse responses of the fitted data values:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f44c.jpg"></p>     <p><b>3. Variance Decompositions</b></p>     <p>The variance decomposition of a state estimate is the proportion of variance explained by the variance of each type of shock at each horizon. The unconditional variance of the state estimates is first derived as:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45.jpg"></p>     <p>Given a suitable calibration of the variance-covariance matrix of economic shocks <img src="img/revistas/espe/v29n66/v29n66a07s53.jpg">, which could be an identity matrix, expression (45) permits the decomposition.</p>     ]]></body>
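<body><![CDATA[<p>The selection-matrix decomposition and the impulse responses can be sketched for a generic linear transition of the form <i>x<sub>t+1</sub> = Ax<sub>t</sub> + B&epsilon;<sub>t</sub></i>; the matrices below are illustrative stand-ins, not the model's own:</p>

```python
import numpy as np

def selection_matrix(n, i):
    """Identity with zeros in all but the i-th diagonal element,
    isolating the i-th shock as in the contribution formulas."""
    R = np.zeros((n, n))
    R[i, i] = 1.0
    return R

def shock_contributions(eps):
    """Split a smoothed shock vector into one piece per shock;
    the pieces sum back to the whole vector."""
    n = eps.shape[0]
    return [selection_matrix(n, i) @ eps for i in range(n)]

def impulse_response(A, B, shock_index, horizons):
    """Response of the states to a unit impulse in one shock,
    under x_{t+1} = A x_t + B eps_t."""
    e = np.zeros(B.shape[1])
    e[shock_index] = 1.0
    responses, x = [], B @ e
    for _ in range(horizons):
        responses.append(x.copy())
        x = A @ x
    return np.array(responses)

A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
irf = impulse_response(A, np.eye(2), 0, 8)   # decays for a stable A
```

<p>The same propagation, weighted by a calibrated shock covariance, underlies the variance decomposition.</p>]]></body>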
<body><![CDATA[<p>It would be useful to draw bands of uncertainty around the forecasts. The forecast error of the estimated state variable is:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45a.jpg"></p> </FONT>    <p><font size="2" face="Verdana, Arial, Helvetica, sans-serif">So, by defining the matrices <img src="img/revistas/espe/v29n66/v29n66a07s54.jpg"> such that:</font></p> <FONT size=2 face="Verdana, Arial, Helvetica, sans-serif">    <p><img src="img/revistas/espe/v29n66/v29n66a07f45b.jpg"></p>     <p>the forecast error covariance of the state estimates can be written as:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45c.jpg"></p>     <p>Similarly, the covariance of the estimates of the fitted variables is:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45d.jpg"></p>     <p>with the covariance of the forecast error of fitted data in terms of shocks as:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45e.jpg"></p>     ]]></body>
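<body><![CDATA[<p>Such bands could be accumulated horizon by horizon from the same kind of linear transition; a sketch under an illustrative transition matrix and shock covariance (like the expressions above, it carries no estimation uncertainty):</p>

```python
import numpy as np

def forecast_bands(A, B, Q, horizons, width=1.96):
    """Forecast-error standard-deviation bands for the states,
    accumulating shock uncertainty via P_h = A P_{h-1} A' + B Q B'."""
    n = A.shape[0]
    P = np.zeros((n, n))
    bands = []
    for _ in range(horizons):
        P = A @ P @ A.T + B @ Q @ B.T
        bands.append(width * np.sqrt(np.diag(P)))
    return np.array(bands)

A = np.array([[0.9, 0.0],
              [0.2, 0.7]])
bands = forecast_bands(A, np.eye(2), np.eye(2), 12)
# Bands widen with the horizon and converge for a stable model.
```
]]></body>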
<body><![CDATA[<p>It should be noted, however, that these expressions do not take account of estimation uncertainty. That would seem to be more obviously deficient in a setup where the data are formally separated from the model variables. In our opinion, then, these expressions should be superseded by a formal combined treatment of uncertainty and estimation, and by one which is designed for policy. Such a scheme is outlined in Sims (2002) and applied by Adolfson, Andersson, Lind&eacute;, Villani, and Vredin (2005).</p>     <p><b>VII. ALLOWING FOR REPORTING VARIABLES</b></p>     <p>One way to better exploit this wealth of information is to introduce variables into the economic model which are there just for reporting. This is a very common practice in policy forecasting as, round by round, the way the forecast is presented changes depending on the particular issues at hand.</p>     <p>Formally, the reporting variables could follow the process:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45f.jpg"></p>     <p>where <img src="img/revistas/espe/v29n66/v29n66a07s57.jpg"> is a vector of white noise residuals, allowing for the fact that reporting is not exact. Schorfheide, Sill, and Kryshko (2010) discuss the importance of allowing for these non-modelled variables and suggest how such auxiliary equations can be estimated. Here, more simply, the variance of these residuals is assumed to be given.</p>     <p>In keeping with the spirit of what a reporting variable is, it is also assumed that these residuals are independent of the shocks and data noise terms in the rest of the model. Otherwise, these variables would have to be incorporated into the model. 
The smoothed estimates of the reporting variables are:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45g.jpg"></p>     <p>Then, the forecast of these variables can be decomposed into the contributions of   economic shocks with:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45h.jpg"></p>     ]]></body>
<body><![CDATA[<p>from equation (43).</p>     <p>The impulse responses are:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45i.jpg"></p>     <p>and the variance decomposition follows:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45j.jpg"></p>     <p>The variance-covariance matrix of forecast errors is:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f45k.jpg"></p>     <p><b>VIII. THE STATE IDENTIFICATION PROBLEM</b></p>     <p>An identification problem arises when the values of parameters cannot be separately estimated given the combination of a theoretical model and a data set. This type of identification problem bedevils the estimation of DSGE models: see Canova and Sala (2006), Fukac and Pagan (2006), Guerron-Quintana (2007) and, more recently, Koop, Pesaran, and Smith (2011). Here, it is assumed that the parameter values are known, and so that particular problem has been set aside. However, it is still very possible that there is an identification problem in the estimates of state variables: the data and model together might fail to separately identify some states. Indeed, as the solution for the states is a first step in the recursions used to estimate parameters (for example, in maximum likelihood estimation), state identification could be seen as a necessary condition for parameter identification. These problems are not generic to the proposal in this paper, but this procedure will certainly bring them to the fore. Therefore, it is worthwhile to spend some time discussing the problem of state identification.</p>     <p>Weak identification can be expected to especially affect our estimates of the economic shocks. Remember that economic shocks have no direct data counterpart to which their value can be tied down. 
It then becomes more likely that the forecast of a variable &mdash;even one on which good data is available&mdash; is based on very unreliable estimates of which shocks cause its movements.</p>     ]]></body>
<body><![CDATA[<p>There are many different ways in which these problems can show themselves, if one digs deep enough.</p>     <p>One classic symptom would be if two shocks that affect a variable for which there is a data counterpart were estimated to have large, but offsetting, contributions. Another would be if the estimate of a shock failed to update from its initial prior value. A different way of looking at this problem is to note that the data series are not separately informative about state estimates, and are in this sense multicollinear. This points to yet another symptom: estimates of particular states that are too sensitive to changes in the data set, as explained in Watson (1983).</p>     <p>One subcategory of this shock identification problem is when data measurement errors cannot be separately estimated from economic shocks. This could reveal itself when the estimate of a data measurement error is negatively correlated with the economic shock. Remember that the covariances between data measurement errors and economic shocks are assumed to be zero. Relaxing that assumption would be more realistic but would also expose us to more identification errors of this type.</p>     <p>It is difficult to offer general solutions to state identification problems. It is important to remember that a failure of identification has to do with the flawed combination of model and data &mdash;and with neither individually&mdash;.</p>     <p>Thus, there may be some shocks that can be identified with some data but not with others. And there may be some shocks which are not identifiable by any data that are available! The solution to identification may lie either in changing the model or in looking for new data, and will probably involve some compromises in either or both directions. 
Solutions would seem to be case by case.</p>     <p>But it is possible to formulate general tests to detect identification problems. Burmeister and Wall (1982) offer one test for identification problems in parameter estimates in state space models. Their suggestion is to keep an eye out for very large correlations among parameter estimates. Analogously, here, high correlations among the estimates of the economic shocks would reveal poor identification.</p>     <p>The variance-covariance matrix of the impact of the economic shocks (which are included in the state vector) is given in equation (45). Then, the following test can be constructed.</p>     <p><b>Proposition</b>. Given an information set <i>u</i>, the estimates of the economic shock impacts <i>i</i> and <i>j</i> at time <i>s</i> are not likely to be separately well identified if:</p>     <p><img src="img/revistas/espe/v29n66/v29n66a07f46.jpg"></p>     <p>The identification problem can also be assessed from the point of view of whether adding a particular series to a given data set brings in more information for identifying a group of variables, using test (41).</p>     ]]></body>
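<body><![CDATA[<p>The proposed check can be sketched as a scan of the correlation matrix implied by the covariance of the shock-impact estimates; the threshold and the numbers below are illustrative:</p>

```python
import numpy as np

def weakly_identified_pairs(cov, threshold=0.95):
    """Return pairs of shock impacts whose estimates are so highly
    correlated that they are unlikely to be separately identified."""
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    n = cov.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(corr[i, j]) > threshold]

# Two shocks with a near-perfect negative correlation are flagged,
# while a third, weakly correlated shock is not.
cov = np.array([[1.00, -0.99, 0.10],
                [-0.99, 1.00, 0.00],
                [0.10, 0.00, 1.00]])
pairs = weakly_identified_pairs(cov)   # [(0, 1)]
```
]]></body>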
<body><![CDATA[<p><b>IX. SOME APPLICATIONS</b></p>     <p>In this section, some experiments from applying this approach to a quarterly DSGE model for Colombia are reported. The model (PATACON) is described in G&oacute;mez, Mahadeva, Sarmiento, and Rodr&iacute;guez (2011). It is an open-economy model which has been calibrated to fit Colombian data for the period 2003-2006. The database, of quarterly frequency, runs from 1994Q1 to 2006Q4 and is described in Mahadeva and Parra (2008).</p>     <p><b>A. A COMPLETE FORECAST </b></p>     <p>To begin with, the aim is to show that this model, and its database, is at least capable of generating a decent forecast. One can, in principle, generate a forecast with very little data, but that forecast could be quite poor. Adding more data can in principle improve that forecast.</p>     <p>Graphs 2 and 3 show the forecasts of the model for quarterly real value-added income (nominal GDP divided by the CPI) growth. Twelve series are included in the database up until period 40 (10 years of data), and it is assumed that they are all perfectly measured. In this experiment, the model allows for 12 structural shocks, all of which seem to be reasonably well identified.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g02.jpg"></p>     <p>The first Graph presents the forecast with no other data beyond the 40-quarter cutoff point. The forecast in the second Graph is based on an extra two quarters of inflation, interest rate, and real exchange rate data.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g03.jpg"></p>     <p>Comparing the two forecasts makes the simple point that this model is capable of making a decent forecast, but only if the information on recent data is included. 
This is true even if that information is from other series, given that the theoretical part of the model is able to interpret the linkages between these series.</p>     <p><b>B. EXPERIMENT TO SHOW THE IMPORTANCE OF IMPERFECTLY MEASURED AWKWARD DATA </b></p>     <p>This next set of experiments is about the advantage of allowing for measurement error in unbalanced data. The first experiment uses data on consumption and inflation only up until period 47, and assumes that both series are well measured. The results are in Graph 4. Then, three extra quarters of data on CPI inflation are introduced, while still assuming that both series are well measured. This is a very similar exercise to that of the previous section. Comparing Graph 4 with Graph 5, it seems that the extra CPI data ensures that the boom in consumption between periods 47 and 52 is picked up.</p>     ]]></body>
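<body><![CDATA[<p>The role of the measurement-error assumption can be seen in a scalar filtering update: the weight placed on a new observation shrinks as its assumed measurement-noise variance grows. A toy sketch with illustrative numbers, not model output:</p>

```python
def filtered_estimate(prior_mean, prior_var, obs, noise_var):
    """Scalar update: weight the observation by its relative precision."""
    gain = prior_var / (prior_var + noise_var)
    return prior_mean + gain * (obs - prior_mean)

prior = 2.0   # model-based view before the new data point
obs = 4.0     # a surprising new observation
exact = filtered_estimate(prior, 1.0, obs, 0.0)   # trusted fully: chased
noisy = filtered_estimate(prior, 1.0, obs, 1.0)   # discounted: compromise
# The mismeasured case lands between the prior and the observation.
```
]]></body>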
<body><![CDATA[<p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g04.jpg"></p>     <p align="center" ><img src="img/revistas/espe/v29n66/v29n66a07g05.jpg"></p>     <p>However, we should be careful about leaping to the conclusion that more data always improves a forecast. All that has been shown is that the extra data on prices helps the forecast track consumption better. If those extra CPI data points were badly measured, then what we have produced is an even worse forecast. This is a very real possibility in Colombia because the statistical authority only updates its aggregate CPI weights once every five years, and then even near-term forecasts are subject to measurement error.</p>     <p>To expand further on this point, the next experiment compares the previous two forecasts with one in which we allow for data mismeasurement in the extra three CPI data points, in Graph 6. This forecast is now a compromise between the other two, and perhaps reflects a better balance between not ignoring important data and not chasing it too closely.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g06.jpg"></p>     <p><b>C. EXPERIMENTS TO SHOW THE IMPORTANCE OF OFF-MODEL FORECASTS </b></p>     <p>The next set of Graphs illustrates the use of the data decompositions. In what follows, there are only two economic shocks, a demand and a supply shock, and it is assumed that the supply shock dominates. In the first experiment there is only noisy data on consumption. The task is to predict all variables in the model, including true consumption itself. Graph 7 shows the multipliers of consumption data in predicting true consumption in period 45 in this experiment. Notice that the distribution is symmetric about the current period. 
Given that there are measurement errors, surrounding data helps to estimate the unobserved state at the current period; here, that pattern also reflects in part the dynamics of the DSGE model.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g07.jpg"></p>     <p>The next experiment looks into exactly how the information from an off-model forecast of inflation in the short term helps in predicting consumption now. The experiment assumes perfectly measured data on consumption and inflation only until period 40, and then inflation data only for the next two years. These extra two years of inflation data are assumed to have measurement error, just as if they were forecasts from a separate inflation model.</p>     <p>The multipliers on estimating consumption in period 45, shown in Graph 8, describe how inflation data &mdash;even a year ahead&mdash; plays some part in helping us understand what is happening to consumption in the absence of timely consumption data. The multipliers are negative because higher prices imply a lower level of consumption, conditional on the supply shock being important. Shown this figure, a policymaker would understand how his or her off-model inflation forecast is consistent with his or her forecast for consumption, and so ultimately with GDP.</p>     ]]></body>
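<body><![CDATA[<p>The sign of the multipliers can be checked with a toy projection. Suppose, purely for illustration, that consumption and inflation are driven mainly by a common supply shock entering with opposite signs; the best linear predictor of consumption given observed inflation then carries a negative weight:</p>

```python
# Illustrative variances: the supply shock dominates.
var_s = 1.0    # supply shock variance
var_n = 0.1    # measurement noise in observed inflation
# With c = -s (plus small demand noise) and pi = s + n,
# Cov(c, pi) = -Var(s) and Var(pi) = Var(s) + Var(n).
cov_c_pi = -var_s
var_pi = var_s + var_n
multiplier = cov_c_pi / var_pi   # projection weight on inflation data
# Negative: higher observed inflation lowers the consumption estimate.
```
]]></body>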
<body><![CDATA[<p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g08.jpg"></p>     <p>This can be compared to a forecast with data on both consumption and inflation up until period 53. Consumption is imperfectly measured, but the inflation data is assumed to be without error. Graph 9 plots the multipliers on inflation data in predicting consumption at period 45. It is interesting to see that the multipliers on future inflation data are now positive in predicting current consumption. This has to do with the economic dynamics of the model. Given that the supply shock is important, higher prices now mean a lower level of consumption now, and so higher prices in the future mean lower expected consumption in the future. Intertemporal consumption smoothing dictates that lower expected consumption in the future would raise consumption now, and so the multiplier on future inflation data is positive.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g09.jpg"></p>     <p>Data about the future in the form of forecasts from external sources can be useful but is uncertain. To bring this out, the next experiment examines the role of imperfectly credible announcements of future data outturns. In this experiment, the forecaster is assumed to be in period 40 and trying to forecast consumption a year ahead. Consumption and inflation data only up until period 40 are available, and both those series are perfectly measured. Now, information is provided to agents and forecaster alike on what remittances are going to be for the next two years (from periods 41 to 49). Those announcements are not perfectly credible, though; there is measurement error. The multipliers reveal how that future imperfect information on an important exogenous variable matters for the forecast of consumption in period 45. 
This could also be presented by picking out the contribution of the remittance data to consumption growth in period 45. In more sophisticated experiments, this technology can be used to explain the importance of financial market data on the expected future values of asset prices for forecasts of macroeconomic variables.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g10.jpg"></p>     <p><b>D. EXPERIMENT TO REVEAL THE IDENTIFICATION PROBLEM </b></p>     <p>Graph 11 illustrates the identification problems that we discussed in section VIII. The figure shows the contribution of the two main shocks to consumption growth when we do not allow for measurement error in the CPI, and when it is assumed that there are only two economic shocks. The model has picked up what it identifies as very large offsetting contributions of the two shocks. It does not seem likely that these have really had offsetting effects on consumption, but rather that the model and data cannot separately identify the contribution of each, only their linear combination.</p>     <p><b>X. CONCLUSIONS</b></p>     <p>The data that is informative for making monetary policy decisions comes in many shapes and sizes and is uncertain. In this paper, we propose one possible way of putting that awkward, but still useful, data set to work in forecasting from a linear dynamic forward-looking model.</p>     <p align="center"  ><img src="img/revistas/espe/v29n66/v29n66a07g11.jpg"></p>     ]]></body>
<body><![CDATA[<p>There are some important practical advantages to this approach. First, it allows theoretical models access to the rich set of real-world data that is actually in use. The method also stimulates different ways of presenting and explaining the forecast. For example, we can present a forecast in terms of what data explains the decisions over key variables, and not just what set of shocks causes that forecast. The difference is that, as the data is observed, the forecast becomes more transparent. We also show how this method can cope with less than perfectly credible announcements of future information, such as that contained in financial market data. Last, but not least, this method has the advantage of separating the data preparation process from the model formulation and solution process. This would better suit how central banks carry out their policy forecasts in practice, given that these two activities are quite specialized.</p>     <p>That said, although we consider this approach to be a step forward, there still remain some important aspects of the policy forecasting problem which have not been taken into account.</p>     <p>First, we do not deal with the possibility of inevitable misspecification, a recent preoccupation of the DSGE literature (Del Negro and Schorfheide, 2009); even though, as Maih (2010) shows, the benefits of conditioning depend crucially on misspecification. An important source of misspecification and forecast error in DSGE models is down to drift in economic relations, which are assumed to be fixed in the steady state of these models. For example, it is often observed that the imports-to-GDP and exports-to-GDP ratios rise persistently with greater openness. Our data uncertainty method is not designed to deal with such trending economic shocks.
</p>     <p>Second, we do not discuss how the parameters of the economic model are estimated or calibrated. We discuss alternative procedures for estimating the measurement system that links the data to the unobserved economic model, but we do not do that in much depth. As such, we do not allow for parameter estimation effects in this decomposition of the contributions of different data series. Neither have we yet tackled the problem of estimating the whole distribution of these impulse responses and decompositions of the forecasts, and not just the modal value, as this would need to take account of parameter estimation uncertainty. We do provide some expressions for the asymptotic conditional uncertainty of the forecast; but, without a formal treatment of estimation uncertainty, we cannot claim that this is a serious presentation of the forecast distribution.</p>     <p>We do not allow for second-order approximations of the form popularised by Schmitt-Groh&eacute; and Uribe (2004). Nor do we allow for asymmetric risks in the forecast, which have become common central bank practice following Britton, Fisher, and Whitley (1998).</p>     <p>Finally, as the setup here is based on a linear model, or more precisely a linearised version of a nonlinear model, this strategy does not apply to nonlinear solution methods, such as those presented in Laxton and Juillard (1996) and Pichler (2008). 
It is our hope that we, and others, will be able to apply some common solutions to these problems within our apparatus.</p>     <p><b>NOTES</b></p>     <p><sup><a href="#s1" name="1">1</a> </sup> See a parallel literature on assessing optimal policies in linear rational expectations models under data uncertainty, following papers by Gerali and Lippi (2003), Pearlman (1986), Svensson and Woodford (2003) and Svensson and Woodford (2004).</p>     <p><sup><a href="#s2" name="2">2</a> </sup> Data and economic shocks will also be correlated in the complicated case that information is not symmetric between agents and the modeller.</p>     <p><b>REFERENCES</b></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>1. Adolfson, M.; Andersson, M.; Lind&eacute;, J.; Villani, M.; Vredin, A. &quot;Modern Forecasting Models in Action: Improving Macroeconomic Analyses at Central Banks&quot;, Working Paper Series, num. 188, Sveriges Riksbank (Central Bank of Sweden), 2005.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000323&pid=S0120-4483201100030000800001&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    2. Andersson, M. K.; Palmqvist, S.; Waggoner, D. F. &quot;Density-Conditional Forecasts in Dynamic Multivariate Models&quot;, Working Paper, num. 247, Sveriges Riksbank, 2010.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000325&pid=S0120-4483201100030000800002&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    3. Bene&scaron;, J.; Binning, A.; Lees, K. &quot;Incorporating Judgement with DSGE Models&quot;, Reserve Bank of New Zealand, Discussion Paper Series, num. DP2008/10, Reserve Bank of New Zealand, 2008.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000327&pid=S0120-4483201100030000800003&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    4. Boivin, J.; Giannoni, M. &quot;DSGE Models in a Data-Rich Environment&quot;, Working Paper, num. T0332, NBER, 2006.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000329&pid=S0120-4483201100030000800004&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    5. Boragan Aruoba, S. &quot;Data Uncertainty in General     Equilibrium&quot;, <i>Computing in Economics     and Finance</i>, num. 131, Society for Computational     Economics, 2004.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000331&pid=S0120-4483201100030000800005&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>    6. Britton, E.; Fisher, P.; Whitley, J. &quot;The Inflation     Report Projections: Understanding the     Fanchart&quot;, <i>Quarterly Bulletin</i>, num. 1, Bank of     England, 1998.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000333&pid=S0120-4483201100030000800006&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    7. Burmeister, E.; Wall, K. &quot;Kalman Filtering Estimation     of Unobserved Rational Expectations     with An Application to the German Hyperinflation&quot;,   <i>Journal of Econometrics</i>, vol. 20, num. 2,     pp. 255-284, 1982.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000335&pid=S0120-4483201100030000800007&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    8. Canova, F.; Sala, L. &quot;Back to Square One: Identification     Issues in DSGE Models&quot;, Working Paper     Series, num. 583, European Central Bank,     2006.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000337&pid=S0120-4483201100030000800008&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    9. Coenen, G.; Levin, A.; Wieland, V. &quot;Data Uncertainty     and the Role of Money as an Information     Variable for Monetary Policy&quot;, <i>European     Economic Review</i>, vol. 49, num. 4, pp. 975-1006,   2004.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000339&pid=S0120-4483201100030000800009&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref -->  </p>     <!-- ref --><p>10. Durbin, J.; Koopman, S. <i>Time Series Analysis by     State Space Methods</i> (vol. 24), Oxford Statistical     Series, Oxford University Press, 2001.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000341&pid=S0120-4483201100030000800010&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>    11. Fukac, M.; Pagan, A. &quot;Issues in Adopting DSGE Models for Use in the Policy Process&quot;, Working Papers, num. 2006/6, Czech National Bank, Research Department, 2006.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000343&pid=S0120-4483201100030000800011&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    12. Gerali, A.; Lippi, F. &quot;Optimal Control and Filtering in Forward-Looking Economies&quot;, Working Paper, num. 3706, Centre of Economic Policy Research, 2003.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000345&pid=S0120-4483201100030000800012&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    13. G&oacute;mez, A. G.; Mahadeva, L.; Sarmiento, J. D. P.; Rodr&iacute;guez, D. G. &quot;Policy Analysis Tool Applied to Colombian Needs: PATACON Model Description&quot;, Borradores de Econom&iacute;a, n&uacute;m. 008698, Banco de la Rep&uacute;blica, 2011.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000347&pid=S0120-4483201100030000800013&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    14. Guerron-Quintana, P. A. &quot;What You Match Does Matter: The Effects of Data on DSGE Model Estimation&quot;, Working Paper, North Carolina State University, 2007.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000349&pid=S0120-4483201100030000800014&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    15. Harvey, A. <i>Forecasting, Structural Time Series Models and the Kalman Filter</i>, Cambridge, Cambridge University Press, 1991.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000351&pid=S0120-4483201100030000800015&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     ]]></body>
<body><![CDATA[<!-- ref --><p>    16. Kalchbrenner, J. H.; Tinsley, P. A.; Berry, J.; Garrett, B. &quot;On Filtering Auxiliary Information in Short-Run Monetary Policy&quot;, <i>Carnegie-Rochester Conference Series on Public Policy</i>, vol. 7, num. 1, pp. 39-84, 1977.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000353&pid=S0120-4483201100030000800016&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    17. Klein, P. &quot;Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model&quot;, <i>Journal of Economic Dynamics and Control</i>, vol. 24, num. 10, pp. 1405-1423, 2000.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000355&pid=S0120-4483201100030000800017&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    18. Koop, G.; Pesaran, M. H.; Smith, R. &quot;On Identification of Bayesian DSGE Models&quot;, Working Papers, num. 1108, University of Strathclyde Business School, Department of Economics, 2011.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000356&pid=S0120-4483201100030000800018&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    19. Koopman, S.; Harvey, A. &quot;Computing Observation Weights for Signal Extraction and Filtering&quot;, <i>Journal of Economic Dynamics and Control</i>, vol. 27, num. 7, pp. 1317-1333, 2003.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000358&pid=S0120-4483201100030000800019&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    20. Las&eacute;en, S.; Svensson, L. E. O. &quot;Anticipated Alternative Policy Rate Paths in Policy Simulations&quot;, <i>International Journal of Central Banking</i>, vol. 7, num. 3, pp. 1-35, 2011.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000360&pid=S0120-4483201100030000800020&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    21. Laxton, D.; Juillard, M. &quot;A Robust and Efficient Method for Solving Nonlinear Rational Expectations Models&quot;, IMF Working Papers, num. 96/106, International Monetary Fund, 1996.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000362&pid=S0120-4483201100030000800021&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    22. Leeper, E. M.; Zha, T. &quot;Modest Policy Interventions&quot;, <i>Journal of Monetary Economics</i>, vol. 50, num. 8, pp. 1673-1700, 2003.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000364&pid=S0120-4483201100030000800022&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    23. Lettau, M.; Ludvigson, S. 
&quot;Understanding Trend and Cycle in Asset Values: Reevaluating the Wealth Effect on Consumption&quot;, <i>American Economic Review</i>, vol. 94, num. 1, pp. 276-299, 2004.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000366&pid=S0120-4483201100030000800023&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    24. Mahadeva, L.; Parra, J. C. &quot;Testing a DSGE Model and its Partner Database&quot;, Borradores de Econom&iacute;a, num. 004507, Banco de la Rep&uacute;blica, 2008.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000368&pid=S0120-4483201100030000800024&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    25. Maih, J. &quot;Conditional forecasts in DSGE models&quot;, Working Paper, num. 2010/07, Norges Bank, 2010.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000370&pid=S0120-4483201100030000800025&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    26. Monti, F. &quot;Combining Judgment and Models&quot;, <i>Journal of Money, Credit and Banking</i>, vol. 42, num. 8, pp. 1641-1662, 2010.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000372&pid=S0120-4483201100030000800026&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    27. Del Negro, M.; Schorfheide, F. 
&quot;Monetary Policy Analysis with Potentially Misspecified Models&quot;, <i>American Economic Review</i>, vol. 99, num. 4, pp. 1415-1450, 2009.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000374&pid=S0120-4483201100030000800027&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    28. Orphanides, A. &quot;Monetary Policy Rules Based on Real-Time Data&quot;, <i>American Economic Review</i>, vol. 91, num. 4, pp. 964-984, 2001.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000376&pid=S0120-4483201100030000800028&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    29. Pearlman, J. &quot;Diverse Information and Rational Expectations Models&quot;, <i>Journal of Economic Dynamics and Control</i>, vol. 10, num. 1-2, pp. 333-338, 1986.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000378&pid=S0120-4483201100030000800029&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    30. Pichler, P. &quot;Forecasting with DSGE Models: The Role of Nonlinearities&quot;, <i>The B.E. Journal of Macroeconomics</i>, vol. 8, num. 1, p. 20, 2008.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000380&pid=S0120-4483201100030000800030&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    31. Robertson, J. 
C.; Tallman, E. W.; Whiteman, C.     H. &quot;Forecasting Using Relative Entropy&quot;, <i>Journal     of Money, Credit and Banking</i>, vol. 37, num.   3, pp. 383-401, 2005.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000382&pid=S0120-4483201100030000800031&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    32. Schmitt-Groh&eacute;, S.; Uribe, M. &quot;Solving Dynamic     General Equilibrium Models Using a Second-     Order Approximation to the Policy Function&quot;,   <i>Journal of Economic Dynamics and Control</i>,   vol. 28, num. 4, pp. 755-775, 2004.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000384&pid=S0120-4483201100030000800032&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    33. Schmitt-Groh&eacute;, S.; Uribe, M. &quot;What's News in     Business Cycles&quot;, Working Paper, num. 14215,   NBER, 2008.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000386&pid=S0120-4483201100030000800033&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    34. Schorfheide, F. &quot;Estimation and Evaluation     of DSGE Models: Progress and Challenges&quot;,     Working Papers, num. 16781, National Bureau   of Economic Research, Inc., 2011.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000388&pid=S0120-4483201100030000800034&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    35. Schorfheide, F.; Sill, K.; Kryshko, M. &quot;DSGE     Model-Based Forecasting of Non-Modelled     Variables&quot;,<i> International Journal of Forecasting</i>,   vol. 26, num. 2, pp. 348-373, 2010.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000390&pid=S0120-4483201100030000800035&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    36. Sims, C. A. &quot;Solving Linear Rational Expectations     Models&quot;, <i>Computational Economics</i>, vol.   20, num. 1-2, pp. 1-20, 2002.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000392&pid=S0120-4483201100030000800036&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    37. Svensson, L. E.; Woodford, M. &quot;Indicator Variables     for Optimal Policy&quot;, <i>Journal of Monetary   Economics</i>, num. 50, pp. 691-720, 2003.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000394&pid=S0120-4483201100030000800037&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    38. Svensson, L. E.; Woodford, M. 
&quot;Indicator Variables     for Optimal Policy under Asymmetric Information&quot;,   <i>Journal of Economic Dynamics and   Control</i>, vol. 28, pp. 661-690, 2004.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000396&pid=S0120-4483201100030000800038&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    39. Tinsley, P.; Spindt, P.; Friar, M. &quot;Indicator and     Filter Attributes of Monetary Aggregates: A     Nit-Picking Case for Disaggregation&quot;, <i>Journal     of Econometrics</i>, vol. 14, num. 1, pp. 61-91,   1980.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000398&pid=S0120-4483201100030000800039&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    40. Uhlig, H. &quot;A Toolkit for Analyzing Nonlinear     Dynamic Stochastic Models Easily&quot;, Discussion     Paper, num. 9597, Tilburg University Center,   1995.    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000400&pid=S0120-4483201100030000800040&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p>     <!-- ref --><p>    41. Watson, P. &quot;Kalman Filtering as an Alternative     to Ordinary Least Squares-Some Theoretical     Considerations and Empirical Results&quot;, <i>Empirical   Economics</i>, vol. 8, num. 2, pp. 71-85, 1983.    
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;[&#160;<a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000402&pid=S0120-4483201100030000800041&lng=','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a>&#160;]<!-- end-ref --></p> </FONT>      ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Adolfson]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Lindé]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Villani]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Vredin]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Modern Forecasting Models in Action: Improving Macroeconomic Analyses at Central Banks']]></article-title>
<source><![CDATA[Working Paper Series]]></source>
<year>2005</year>
<numero>188</numero>
<issue>188</issue>
<publisher-name><![CDATA[Sveriges Riksbank (Central Bank of Sweden)]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Andersson]]></surname>
<given-names><![CDATA[M. K]]></given-names>
</name>
<name>
<surname><![CDATA[Palmqvist]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Waggoner]]></surname>
<given-names><![CDATA[D. F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Density-Conditional Forecasts in Dynamic Multivariate Models']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2010</year>
<numero>247</numero>
<issue>247</issue>
<publisher-name><![CDATA[Sveriges Riksbank]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Benes]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Binning]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Lees]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Incorporating Judgement with DSGE Models']]></article-title>
<source><![CDATA[Discussion Paper]]></source>
<year>2008</year>
<numero>DP2008/10</numero>
<issue>DP2008/10</issue>
<publisher-name><![CDATA[Reserve Bank of New Zealand]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Boivin]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Giannoni]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['DSGE Models in a Data-Rich Environment']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2006</year>
<numero>T0332</numero>
<issue>T0332</issue>
<publisher-name><![CDATA[NBER]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Boragan Aruoba]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Data Uncertainty in General Equilibrium']]></article-title>
<source><![CDATA[Computing in Economics and Finance]]></source>
<year>2004</year>
<volume>131</volume>
<publisher-name><![CDATA[Society for Computational Economics]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Britton]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Fisher]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Whitley]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['The Inflation Report Projections: Understanding the Fanchart']]></article-title>
<source><![CDATA[Quarterly Bulletin]]></source>
<year>1998</year>
<numero>1</numero>
<issue>1</issue>
<publisher-name><![CDATA[Bank of England]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Burmeister]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Wall]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Kalman Filtering Estimation of Unobserved Rational Expectations with An Application to the German Hyperinflation']]></article-title>
<source><![CDATA[Journal of Econometrics]]></source>
<year>1982</year>
<volume>20</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>255-284</page-range></nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Canova]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Sala]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Back to Square One: Identification Issues in DSGE Models']]></article-title>
<source><![CDATA[Working Paper Series]]></source>
<year>2006</year>
<volume>583</volume>
<publisher-name><![CDATA[European Central Bank]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Coenen]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Levin]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Wieland]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Data Uncertainty and the Role of Money as an Information Variable for Monetary Policy']]></article-title>
<source><![CDATA[European Economic Review]]></source>
<year>2004</year>
<volume>49</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>975-1006</page-range></nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Durbin]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Koopman]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Time Series Analysis by State Space Methods]]></source>
<year>2001</year>
<volume>24</volume>
<publisher-name><![CDATA[Oxford University Press]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Fukac]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Pagan]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Issues in Adopting DSGE Models for Use in the Policy Process']]></article-title>
<source><![CDATA[Working Papers]]></source>
<year>2006</year>
<numero>2006/6</numero>
<issue>2006/6</issue>
<publisher-name><![CDATA[Czech National Bank, Research Department]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gerali]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Lippi]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Optimal Control and Filtering in Forward-Looking Economies']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2003</year>
<numero>3706</numero>
<issue>3706</issue>
<publisher-name><![CDATA[Centre of Economic Policy Research]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gómez]]></surname>
<given-names><![CDATA[A. G]]></given-names>
</name>
<name>
<surname><![CDATA[Mahadeva]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Sarmiento]]></surname>
<given-names><![CDATA[J. D. P]]></given-names>
</name>
<name>
<surname><![CDATA[Rodríguez]]></surname>
<given-names><![CDATA[D. G]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Policy Analysis Tool Applied to Colombian Needs: PATACON Model Description']]></article-title>
<source><![CDATA[Borradores de Economía]]></source>
<year>2011</year>
<numero>008698</numero>
<issue>008698</issue>
<publisher-name><![CDATA[Banco de la República]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Guerron-Quintana]]></surname>
<given-names><![CDATA[P. A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['What You Match Does Matter: The Effects of Data on DSGE Model Estimation']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2007</year>
<publisher-name><![CDATA[North Carolina State University]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Harvey]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<source><![CDATA[Forecasting, Structural Time Series Models and the Kalman Filter]]></source>
<year>1991</year>
<publisher-loc><![CDATA[Cambridge ]]></publisher-loc>
<publisher-name><![CDATA[Cambridge University Press]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kalchbrenner]]></surname>
<given-names><![CDATA[J. H]]></given-names>
</name>
<name>
<surname><![CDATA[Tinsley]]></surname>
<given-names><![CDATA[P. A]]></given-names>
</name>
<name>
<surname><![CDATA[Berry]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Garrett]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['On Filtering Auxiliary Information in Short-Run Monetary Policy']]></article-title>
<source><![CDATA[Carnegie-Rochester Conference Series on Public Policy]]></source>
<year>1977</year>
<volume>7</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>39-84</page-range></nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Klein]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model']]></article-title>
<source><![CDATA[Journal of Economic Dynamics and Control]]></source>
<year>2000</year>
<volume>24</volume>
<numero>10</numero>
<issue>10</issue>
<page-range>1405-1423</page-range></nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Koop]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Pesaran]]></surname>
<given-names><![CDATA[M. H]]></given-names>
</name>
<name>
<surname><![CDATA[Smith]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['On Identification of Bayesian DSGE Models']]></article-title>
<source><![CDATA[Working Papers]]></source>
<year>2011</year>
<numero>1108</numero>
<issue>1108</issue>
<publisher-name><![CDATA[University of Strathclyde Business School, Department of Economics]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Koopman]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Harvey]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Computing Observation Weights for Signal Extraction and Filtering']]></article-title>
<source><![CDATA[Journal of Economic Dynamics and Control]]></source>
<year>2003</year>
<volume>27</volume>
<numero>7</numero>
<issue>7</issue>
<page-range>1317-1333</page-range></nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Laséen]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Svensson]]></surname>
<given-names><![CDATA[L. E. O]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Anticipated Alternative Policy Rate Paths in Policy Simulations']]></article-title>
<source><![CDATA[International Journal of Central Banking]]></source>
<year>2011</year>
<volume>7</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>1-35</page-range></nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Laxton]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Juillard]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['A Robust and Efficient Method for Solving Nonlinear Rational Expectations Models']]></article-title>
<source><![CDATA[IMF Working Papers]]></source>
<year>1996</year>
<numero>96/106</numero>
<issue>96/106</issue>
<publisher-name><![CDATA[International Monetary Fund]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Leeper]]></surname>
<given-names><![CDATA[E. M]]></given-names>
</name>
<name>
<surname><![CDATA[Zha]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Modest Policy Interventions']]></article-title>
<source><![CDATA[Journal of Monetary Economics]]></source>
<year>2003</year>
<volume>50</volume>
<numero>8</numero>
<issue>8</issue>
<page-range>1673-1700</page-range></nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lettau]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Ludvigson]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Understanding Trend and Cycle in Asset Values: Reevaluating the Wealth Effect on Consumption']]></article-title>
<source><![CDATA[American Economic Review]]></source>
<year>2004</year>
<volume>94</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>276-299</page-range></nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mahadeva]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Parra]]></surname>
<given-names><![CDATA[J. C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Testing a DSGE Model and its Partner Database']]></article-title>
<source><![CDATA[Borradores de Economía]]></source>
<year>2008</year>
<numero>004507</numero>
<issue>004507</issue>
<publisher-name><![CDATA[Banco de la República]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Maih]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Conditional forecasts in DSGE models']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2010</year>
<numero>2010/07</numero>
<issue>2010/07</issue>
<publisher-name><![CDATA[Norges Bank]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Monti]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Combining Judgment and Models']]></article-title>
<source><![CDATA[Journal of Money, Credit and Banking]]></source>
<year>2010</year>
<volume>42</volume>
<numero>8</numero>
<issue>8</issue>
<page-range>1641-1662</page-range></nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Del Negro]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Schorfheide]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Monetary Policy Analysis with Potentially Misspecified Models']]></article-title>
<source><![CDATA[American Economic Review]]></source>
<year>2009</year>
<volume>99</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>1415-1450</page-range></nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Orphanides]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Monetary Policy Rules Based on Real-Time Data']]></article-title>
<source><![CDATA[American Economic Review]]></source>
<year>2001</year>
<volume>91</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>964-984</page-range></nlm-citation>
</ref>
<ref id="B29">
<label>29</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pearlman]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Diverse Information and Rational Expectations Models']]></article-title>
<source><![CDATA[Journal of Economic Dynamics and Control]]></source>
<year>1986</year>
<volume>10</volume>
<numero>1-2</numero>
<issue>1-2</issue>
<page-range>333-338</page-range></nlm-citation>
</ref>
<ref id="B30">
<label>30</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pichler]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Forecasting with DSGE Models: The Role of Nonlinearities']]></article-title>
<source><![CDATA[The B.E. Journal of Macroeconomics]]></source>
<year>2008</year>
<volume>8</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>20</page-range></nlm-citation>
</ref>
<ref id="B31">
<label>31</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Robertson]]></surname>
<given-names><![CDATA[J. C]]></given-names>
</name>
<name>
<surname><![CDATA[Tallman]]></surname>
<given-names><![CDATA[E. W]]></given-names>
</name>
<name>
<surname><![CDATA[Whiteman]]></surname>
<given-names><![CDATA[C. H]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Forecasting Using Relative Entropy']]></article-title>
<source><![CDATA[Journal of Money, Credit and Banking]]></source>
<year>2005</year>
<volume>37</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>383-401</page-range></nlm-citation>
</ref>
<ref id="B32">
<label>32</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Schmitt-Grohé]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Uribe]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function']]></article-title>
<source><![CDATA[Journal of Economic Dynamics and Control]]></source>
<year>2004</year>
<volume>28</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>755-775</page-range></nlm-citation>
</ref>
<ref id="B33">
<label>33</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Schmitt-Grohé]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Uribe]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['What's News in Business Cycles']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2008</year>
<numero>14215</numero>
<issue>14215</issue>
<publisher-name><![CDATA[NBER]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B34">
<label>34</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Schorfheide]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Estimation and Evaluation of DSGE Models: Progress and Challenges']]></article-title>
<source><![CDATA[Working Paper]]></source>
<year>2011</year>
<numero>16781</numero>
<issue>16781</issue>
<publisher-name><![CDATA[National Bureau of Economic Research, Inc.]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B35">
<label>35</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Schorfheide]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Sill]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Kryshko]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['DSGE Model-Based Forecasting of Non-Modelled Variables']]></article-title>
<source><![CDATA[International Journal of Forecasting]]></source>
<year>2010</year>
<volume>26</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>348-373</page-range></nlm-citation>
</ref>
<ref id="B36">
<label>36</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Sims]]></surname>
<given-names><![CDATA[C. A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Solving Linear Rational Expectations Models']]></article-title>
<source><![CDATA[Computational Economics]]></source>
<year>2002</year>
<volume>20</volume>
<numero>1-2</numero>
<issue>1-2</issue>
<page-range>1-20</page-range></nlm-citation>
</ref>
<ref id="B37">
<label>37</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Svensson]]></surname>
<given-names><![CDATA[L. E]]></given-names>
</name>
<name>
<surname><![CDATA[Woodford]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Indicator Variables for Optimal Policy']]></article-title>
<source><![CDATA[Journal of Monetary Economics]]></source>
<year>2003</year>
<volume>50</volume>
<page-range>691-720</page-range></nlm-citation>
</ref>
<ref id="B38">
<label>38</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Svensson]]></surname>
<given-names><![CDATA[L. E]]></given-names>
</name>
<name>
<surname><![CDATA[Woodford]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Indicator Variables for Optimal Policy under Asymmetric Information']]></article-title>
<source><![CDATA[Journal of Economic Dynamics and Control]]></source>
<year>2004</year>
<volume>28</volume>
<page-range>661-690</page-range></nlm-citation>
</ref>
<ref id="B39">
<label>39</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Tinsley]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Spindt]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Friar]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Indicator and Filter Attributes of Monetary Aggregates: A Nit-Picking Case for Disaggregation']]></article-title>
<source><![CDATA[Journal of Econometrics]]></source>
<year>1980</year>
<volume>14</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>61-91</page-range></nlm-citation>
</ref>
<ref id="B40">
<label>40</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Uhlig]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily']]></article-title>
<source><![CDATA[Discussion Paper]]></source>
<year>1995</year>
<numero>9597</numero>
<issue>9597</issue>
<publisher-name><![CDATA[Tilburg University Center]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B41">
<label>41</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Watson]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA['Kalman Filtering as an Alternative to Ordinary Least Squares: Some Theoretical Considerations and Empirical Results']]></article-title>
<source><![CDATA[Empirical Economics]]></source>
<year>1983</year>
<volume>8</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>71-85</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
