Paper: The illusion of predictability: How regression statistics mislead experts

Emre Soyer and Robin M. Hogarth:

Abstract:

Does the manner in which results are presented in empirical studies affect perceptions of the predictability of the outcomes? Noting the predominant role of linear regression analysis in empirical economics, we asked 257 academic economists to make probabilistic inferences given different presentations of the outputs of this statistical tool. Questions concerned the distribution of the dependent variable conditional on known values of the independent variable. Answers based on the presentation mode that is standard in the literature led to an illusion of predictability; outcomes were perceived to be more predictable than could be justified by the model. In particular, many respondents failed to take the error term into account. Adding graphs did not improve inferences. Paradoxically, when only graphs were provided (i.e., no regression statistics), respondents were more accurate. The implications of our study suggest, inter alia, the need to reconsider how to present empirical results and the possible provision of easy-to-use simulation tools that would enable readers of empirical papers to make accurate inferences.
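To make the error-term point concrete, here is a minimal, made-up sketch (the model and numbers are my own, not the paper's stimuli): given a fitted regression, the distribution of the outcome at a known value of the predictor is centered on the fitted value but spread out by the residual standard error, so the outcome is far less predictable than the fitted line alone suggests.

```python
# Minimal illustration of the error-term point (illustrative numbers only,
# not the paper's actual questions): with y = a + b*x + e, the conditional
# distribution of y at a known x is roughly fitted value +/- 2 residual SDs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000
x = rng.normal(size=n)
y = 1.0 + 1.0 * x + rng.normal(scale=1.0, size=n)  # true R^2 is about 0.5

b, a = np.polyfit(x, y, deg=1)     # OLS slope and intercept
resid = y - (a + b * x)
sigma = resid.std(ddof=2)          # residual standard error
r2 = 1.0 - resid.var() / y.var()
print(f"R^2 = {r2:.2f}, residual SD = {sigma:.2f}")

# Conditional inference at x = 1: a 95% interval for y is the point forecast
# plus or minus about two residual SDs -- much wider than the forecast alone.
fit = a + b * 1.0
lo, hi = norm.interval(0.95, loc=fit, scale=sigma)
print(f"point forecast at x=1: {fit:.2f}, 95% interval: ({lo:.2f}, {hi:.2f})")
```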

HT: Brad DeLong

Oregon Medicaid Study: Need More Power

If you missed it last week, a two-year follow-up from the Oregon Medicaid experiment was published and caused a bit of a stir, as the study concluded that any health gains from Medicaid were statistically insignificant. The only problem is that the study doesn’t appear to have enough participants to detect health gains for any specific condition (barring any large effects).

Kevin Drum:

Let’s do the math. In the Oregon study, 5.1 percent of the people in the control group had elevated GH [glycated hemoglobin, aka A1C, or colloquially, blood sugar] levels. Now let’s take a look at the treatment group. It started out with about 6,000 people who were offered Medicaid. Of that, 1,500 actually signed up. If you figure that 5.1 percent of them started out with elevated GH levels, that’s about 80 people. A 20 percent reduction would be 16 people.

So here’s the question: if the researchers ended up finding the result they hoped for (i.e., a reduction of 16 people with elevated GH levels), is there any chance that this result would be statistically significant? […] The answer is almost certainly no. It’s just too small a number.
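Drum’s arithmetic, spelled out as a trivial sketch using the numbers in the quote:

```python
# The back-of-the-envelope arithmetic from the quote above.
control_rate = 0.051   # share of the control group with elevated GH
enrollees = 1500       # people who actually signed up for Medicaid

elevated = control_rate * enrollees   # roughly 80 people with elevated GH
reduction = 0.20 * elevated           # a 20% reduction: roughly 16 people
print(f"elevated: {elevated:.0f}, hoped-for reduction: {reduction:.0f}")
```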

Austin Frakt:

I plugged these numbers into Stata’s sample size calculation program (sampsi) to do a power calculation for the difference between two proportions. I found that the probability of this result occurring under the null hypothesis that Medicaid would have no effect on GH levels is 0.35. The null cannot be rejected. We knew this from the paper, and, hence, all the hubbub. (Never mind that we also cannot reject a much larger effect. The authors cover this in their discussion.)

The standard level of statistical significance is rejecting the null with 0.95 probability. Assuming the same baseline 5.1% elevated GH rate and a 20% reduction under Medicaid, what sample size would we need to achieve a 0.95 level of significance? Plugging and chugging, I get about 30,000 for the control group and a 7,500 treatment (Medicaid) group. (I’ve fixed the Medicaid take-up rate at 25%, as found in the study.) This is a factor of five bigger than the researchers had.

You should read the entirety of both Drum’s and Frakt’s posts. Frakt also has a follow-up post to the one quoted above with some technical explanation of his power calculation.
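For readers who want to poke at these numbers without Stata, a rough analogue of the calculation can be done in Python with statsmodels. The sketch below treats the comparison as a simple difference of two proportions and ignores the take-up adjustment, so it will not exactly reproduce Frakt’s sampsi output; the control rate and the 20% reduction are the figures quoted above.

```python
# Rough analogue of the power calculation described above, using statsmodels
# instead of Stata's sampsi. It illustrates the method rather than
# reproducing Frakt's exact figures (no take-up adjustment here).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.051           # elevated GH rate in the control group
p_medicaid = 0.051 * 0.80   # the hoped-for 20% relative reduction

effect = proportion_effectsize(p_control, p_medicaid)  # Cohen's h
analysis = NormalIndPower()

# Power with a sample on the order of what the study had (~1,500 per arm).
power = analysis.solve_power(effect_size=effect, nobs1=1500,
                             ratio=1.0, alpha=0.05)
print(f"power with ~1,500 per arm: {power:.2f}")

# Sample size per arm needed for 80% power at alpha = 0.05.
n_needed = analysis.solve_power(effect_size=effect, power=0.80,
                                ratio=1.0, alpha=0.05)
print(f"n per arm for 80% power: {n_needed:,.0f}")
```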

PS: Matt Yglesias has a post on the kind of experiment that would be needed to determine the effectiveness of Medicaid. He also notes that Florida and Texas missed out on this opportunity when they rejected the Medicaid expansion entirely.

Data Science of the Facebook World

While I highly doubt that the subset of Facebook users who use Wolfram|Alpha is a representative sample of Facebook users as a whole, Stephen Wolfram’s post is interesting and full of charts.

Snippet from the beginning of his post:

More than a million people have now used our Wolfram|Alpha Personal Analytics for Facebook. And as part of our latest update, in addition to collecting some anonymized statistics, we launched a Data Donor program that allows people to contribute detailed data to us for research purposes.

A few weeks ago we decided to start analyzing all this data. And I have to say that if nothing else it’s been a terrific example of the power of Mathematica and the Wolfram Language for doing data science. (It’ll also be good fodder for the Data Science course I’m starting to create.)

We’d always planned to use the data we collect to enhance our Personal Analytics system. But I couldn’t resist also trying to do some basic science with it.

I’ve always been interested in people and the trajectories of their lives. But I’ve never been able to combine that with my interest in science. Until now. And it’s been quite a thrill over the past few weeks to see the results we’ve been able to get. Sometimes confirming impressions I’ve had; sometimes showing things I never would have guessed. And all along reminding me of phenomena I’ve studied scientifically in A New Kind of Science.

So what does the data look like? Here are the social networks of a few Data Donors—with clusters of friends given different colors. (Anyone can find their own network using Wolfram|Alpha—or the SocialMediaData function in Mathematica.)

[Image: social networks of a few Data Donors, with clusters of friends given different colors]

High School Report Card in Economics

From Real Time Economics:

The nation’s “Report Card in Economics,” released Wednesday, found no improvement in high-school seniors’ economics knowledge from six years ago, on the eve of the crisis. The U.S. Department of Education project surveyed and tested nearly 11,000 12th-graders in 480 American public and private schools.

Researchers also found few differences in the distribution among the three levels of students’ economics knowledge: basic, proficient or advanced. Basic knowledge means a student can identify and recognize concepts such as gross domestic product; proficient entails a more comprehensive set of concepts, including opportunity costs and interest rates. Advanced translates into an understanding of fiscal and monetary policy as well as exchange rates. In both the 2006 and 2012 assessments, only 3% of students were advanced. The other two levels inched up: 39% of students were at the basic level in 2012, from 38% in 2006; the percentage of proficient students rose to 40% in 2012 from 39% in 2006.

Takeaway: only three percent of high-school seniors reach the “advanced” level, i.e., understand fiscal and monetary policy. Unfortunately, if they don’t take any economics in college, they probably never will, since we cannot trust pundits or the news to explain (or understand) these topics.

PS: How well do you suppose Congress would score on this test?