Note: A previous version of this blog post included an incorrect claim that the Bertelsmann study was funded by the German government. It was not.


Ever since two econometric studies (from CEPR and Bertelsmann) purporting to estimate the gains from a successful Transatlantic Trade and Investment Partnership agreement were published in 2013, revealing positive, but vastly disparate, outcomes, TTIP opponents have been on the offensive, dismissing economic modelling as a subjective and politically motivated exercise.


Although estimating the benefits and costs of a massive trade agreement whose terms remain unknown can hardly be considered an exact science, there is value to the public and to policymakers in understanding the range of possibilities. In other words, what should concern us is perhaps not the production of econometric estimates but, rather, the manner in which those estimates can be misused or misinterpreted.


At the Cato TTIP conference earlier this month, there was a whole session devoted to the topic: Understanding the Economic Models and the Estimates They Produce. That discussion is fleshed out a bit in two Cato Online Forum essays, which I want to bring to your attention.


The first is a critique of the models from University of Manchester economics professor Gabriel Siles-Brugge, who articulates his perception of the problem and suggests some remedies. The second is a defense and broader explanation of the models from University of Munich professor and director of the Ifo Center for International Economics Gabriel Felbermayr, who is the primary modeller/​author of the Bertelsmann study.


Other conference-related essays, including a couple more on econometric models (from Laura Baughman and Dan Pearson), can be found here.