Back in March 2016, Noah Smith had a blog post titled "Russ Roberts and the new empirical world." In the post, Smith mentions six episodes of Russ Roberts' podcast EconTalk that together form a series on econometrics. I enjoyed listening to this series because it's informative to hear verbal explanations of econometric concepts.
This post is a collection of my favorite quotes from the six episodes.
Ed Leamer on the State of Econometrics (May 10, 2010)
"Economists don't observe feathers in a vacuum. They observe feathers when the wind is blowing, when humidity varies, eagle feathers, duck feathers. Tons of things that will affect the result."
"So when somebody says in the past $100 billion of spending had such and such impact on the U.S. economy, if there was this level of unemployment and this level of growth in the previous period, people are presuming that the same structural relationships that help when those estimates were made will still hold. So even though the cause of the recession might be totally different, even though what the money is spent on might be totally different, implicit in those multiplier arguments is the presumption that it doesn't matter."
"If you want to know: does government spending have a multiplier? Then you need a treatment group and a control group. You need to randomly subject an economy to a burst in spending and see what happens to that economy and contrast that to the control groups that did not get that spending."
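Leamer's ideal can be sketched with simulated data. Everything below is hypothetical (the sample size, the effect size, the noise level are invented for illustration): with genuinely random assignment, a simple difference in group means recovers the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: randomly assign half the units to a spending "treatment".
treated = rng.integers(0, 2, size=n).astype(bool)
true_effect = 1.5  # invented effect size

# Outcome = baseline noise plus the effect for treated units.
outcome = rng.normal(0.0, 1.0, size=n) + true_effect * treated

# With random assignment, the difference in group means estimates the effect.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(round(estimate, 2))
```

The point of the simulation is what randomization buys you: because assignment is independent of everything else, no control variables are needed.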
"Econometric analysis is really journalism. A journalist's job is to marshal facts and put them together persuasively."
"To think that designing experiments is suddenly going to change economics into an empirical scientific discipline. That doesn't seem likely to happen."
"All we have, especially in macro, is opinions, and they're either persuasive and well thought out or not."
Joshua Angrist on Econometrics and Causation (December 22, 2014)
"We talk as an ideal the kind of randomized trial or field trial that's often used in medicine to determine cause and effect or to gauge cause and effect."
"Regression is just a way to control for things, to try and hold characteristics of groups that you're trying to compare fixed."
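A minimal sketch of what Angrist means by regression as a way to hold things fixed, using made-up data where a single confounder `z` drives both the treatment `x` and the outcome `y`:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical data: a confounder z drives both treatment x and outcome y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)  # true effect of x is 2.0

# Naive regression of y on x alone is biased by the omitted confounder.
X_naive = np.column_stack([np.ones(n), x])
b_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]

# Adding z as a control "holds it fixed" and recovers the effect of x.
X_ctrl = np.column_stack([np.ones(n), x, z])
b_ctrl = np.linalg.lstsq(X_ctrl, y, rcond=None)[0]

print(round(b_naive[1], 2), round(b_ctrl[1], 2))
```

The naive coefficient absorbs part of `z`'s effect; once `z` enters the regression, the coefficient on `x` lands near the true value of 2.0.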
"I can't imagine seeing an empirical paper about cause and effect which doesn't at least show me the author's best effort at some kind of regression estimates where they control for the observed difference between groups."
"Each of these is an attempt to generate some kind of apples to apples comparison out of observational data."
"Sometimes instrumental variables is a method for leveraging experimental random assignment in complicated experiments where the treatment itself cannot be manipulated but there's an element of manipulation in the treatment."
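A toy two-stage least squares sketch of the instrumental-variables idea Angrist describes. The instrument `z`, the unobserved factor `u`, and all coefficients are invented: `z` shifts the treatment but affects the outcome only through it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical setup: an unobserved factor u confounds treatment and outcome;
# the instrument z moves treatment x but touches y only through x.
u = rng.normal(size=n)
z = rng.normal(size=n)
x = 0.5 * z + u + rng.normal(size=n)
y = 2.0 * x + u + rng.normal(size=n)  # true effect of x is 2.0

# Stage 1: predict the treatment from the instrument.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the predicted treatment.
X2 = np.column_stack([np.ones(n), x_hat])
b_iv = np.linalg.lstsq(X2, y, rcond=None)[0]

# For contrast: OLS on x directly is biased upward by the shared factor u.
X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print(round(b_iv[1], 2), round(b_ols[1], 2))
```

The first stage keeps only the variation in `x` induced by the instrument, which by construction is unrelated to `u`, so the second-stage coefficient recovers the true effect while plain OLS does not.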
"Regression discontinuity designs are non-experimental research designs that attempt to mimic an experiment by using the rules that determine allocation to treatment states."
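A crude sketch of the regression discontinuity idea, with an invented assignment rule and a simple comparison of means in a narrow band around the cutoff (real RD work would fit local regressions on each side rather than raw means):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical rule: units with running variable r >= 0 receive treatment.
r = rng.uniform(-1, 1, size=n)
treated = r >= 0.0
jump = 1.0  # invented treatment effect at the cutoff

# Outcome trends smoothly in r, plus a discontinuous jump for treated units.
y = 0.5 * r + jump * treated + rng.normal(0.0, 0.3, size=n)

# Local comparison: mean outcome just above vs. just below the cutoff.
h = 0.1  # bandwidth around the cutoff
above = y[(r >= 0) & (r < h)].mean()
below = y[(r < 0) & (r >= -h)].mean()
print(round(above - below, 2))
```

Because the smooth trend in `r` barely changes within the narrow band, the gap between the two local means is dominated by the jump at the cutoff, which is exactly the "mimicked experiment" the quote describes.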
"So some of the evidence in these areas is stronger or weaker. There is certainly a lot of interesting evidence here that's worth discussing. That's my standard."
"Well science is done by human beings. When you come at it from an idealistic view, you're bound to be disappointed."
"One of the most influential documents in the history of social science is Friedman and Schwartz... It's an effort to get at the causes of the Depression. I think we can do better than Friedman and Schwartz with the tools today. Their work is a benchmark and a worthy benchmark."
Noah Smith on Whether Economics is a Science (December 28, 2015)
"You can often exploit quirks of how policies happen or how the real process works to get clean identification."
"The most important difference with natural experiments is you can't replicate or repeat them, because each natural experiment happens only once."
"It's extremely hard to find these clean natural experiments in macro."
"If you're going to believe the results of an experiment you always have to make a leap of faith that all the reasonable stuff has been controlled for, right? That the experimenter has good controls and that's an assumption and a leap of faith you have to make in every science experiment."
"I think the thing about the stimulus is... it's not at all clear. The stimulus was not an actual experiment at all."
"There are some thunderbolt studies but what's much more common is an accumulated weight of studies that all have consistent results... So don't rely too much on these thunderbolt studies even though sometimes they do exist but they're pretty rare."
Adam Ozimek on the Power of Econometrics and Data (February 8, 2016)
"Evidence is good even if it doesn't mean the same thing to everyone."
"I think the research comes out and looks at slightly different angles and adjusts for slightly different mechanisms and we know so much more about the minimum wage than we did 10 years ago... If you look at the literature closely it doesn't look like a draw where two sides just lob evidence back and forth. It looks like progress to me."
"Macro's definitely harder. There's no doubt about that. There's less data. It's harder to isolate partial equilibrium."
"The stimulus act isn't something like the minimum wage. It's not a discrete policy where you turn the fiscal lever up and it goes from 0 to a 1 to a 2 to a 3. The stimulus act was like a dozen different things, and so to say that research hasn't told us whether the stimulus act was good or not or increased jobs, well I mean you could write a 100-page paper on just what was in it... but I do think that within the stimulus package there are things we can learn and have learned."
James Heckman on Facts, Evidence and the State of Econometrics (January 25, 2016)
"The new techniques are not so new. They involve instrumental variables which I think go back to Philip Wright in 1928. Instrumental variables have been a central part of econometrics for the last 70-80 years."
"Unfortunately the credibility revolution has taken this notion that there's some missing variable out there, some unobservable and we want to control for that unobservable to a new level, to a new extreme so much so there seems to be an obsession with making sure that we don't have this unobservable contaminating our result without asking what question that we're getting from this instrument."
"I think there's been a huge shift away from understanding behavior and moving towards statistical artifacts that are hard to interpret as responses to economic questions. So I think the credibility revolution has been somewhat overstated and not properly appreciated as having really turned focus away from serious economic analysis towards something I think is purely statistical."
"Calibrated models are models looking at some stylized facts that are putting together different pieces of data that are not mutually consistent. I mean literally you take estimates of this area, and estimates from that area and you assemble something that's like a Frankenstein..."
"I think every successful body of social science that uses basic models, they've gotten to kind of the core of the idea; not bells and whistles. And bells and whistles are kind of second generation, third generation add-ons. Bells and whistles are often professionally and privately rewarding but maybe not so rewarding for the subject as a contributor to economic knowledge generally and public policy."
"Milton Friedman raised to me some of my first concerns about the credibility revolution. I remember telling him about some work I was doing and you remember the work I was doing was very complicated for its time. He looked at me and said, 'It looks like that kind of work has lots of room for fraud.'"
David Autor on Trade, China and U.S. Labor Markets (March 14, 2016)
"It's not that I think our estimates are cooked or even that sensitive. It's that they might miss other important margins. And I'm happy to concede that point. I mean, we've tried. It's not that we've sort of agreed to just sort of punt on that question. There's probably a lot of ways to look for those missing margins. We really haven't found them."