“A new MIT study found that not only do rideshares increase congestion, but they also made traffic jams longer, led to a significant decline in people taking public transit, and haven’t really impacted car ownership,” reports Gizmodo. As I’ve noted previously, transit advocates blame ride hailing for all sorts of problems in order to justify taxes and other restrictions to limit competition.

The new study from MIT is frankly unpersuasive. First of all, it says very little about the methodology used: page 1 of the study is an introduction, and page 2 immediately begins presenting results. It appears the writers compared data for 44 urban areas before and after the introduction of ride hailing into those areas between 2012 and 2016, but exactly which data and which years were used is never made clear.

Second, the writers appear to have made no effort to correct for, or even consider, any other variables. Although Uber began operating in San Francisco in 2010, ride hailing didn't really begin growing until 2014. But the other thing that happened in 2014 was a huge drop in gasoline prices: in some areas, prices fell by 50 percent. The paper doesn't even mention this drop, even though it could have produced most of the same effects the paper attributes to ride hailing.
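To make the missing-variable problem concrete, here is a minimal sketch in Python (toy numbers, not the study's data; the effect sizes are pure assumptions) of why a simple before-and-after comparison cannot separate two changes that arrive everywhere at roughly the same time:

```python
# Illustrative toy simulation, not the MIT study's data or method.
# Ride hailing and cheap gas both arrive in every area around 2014, so a
# before/after comparison credits their combined effect to ride hailing.
import numpy as np

rng = np.random.default_rng(0)
n_areas = 44  # the number of urban areas the study reportedly covered

# Hypothetical ground truth: ride hailing adds 0.5 points to congestion,
# cheaper gas adds 2.0 points (people drive more when gas is cheap).
RIDEHAIL_EFFECT = 0.5
GAS_EFFECT = 2.0

ridehail = np.ones(n_areas)   # every area adopted ride hailing
cheap_gas = np.ones(n_areas)  # every area also saw the gas price drop
noise = rng.normal(0.0, 0.5, n_areas)

congestion_change = RIDEHAIL_EFFECT * ridehail + GAS_EFFECT * cheap_gas + noise

# The naive before/after estimate: the average change, all of it
# attributed to ride hailing because that is the only variable considered.
print(f"naive 'ride-hailing effect': {congestion_change.mean():.2f} points")
# Prints roughly 2.50, five times the assumed true effect of 0.50.
```

Because both changes happened in essentially every study area over the same window, no amount of before-and-after averaging can tell them apart; the design would need areas or periods where one changed and the other didn't.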

Third, and a fact not mentioned in many of the press reports about the paper: the writers conclude that ride hailing increased congestion by just 0.9 percent. Less than 1 percent! That's smaller than the margin of error in much of the data the paper probably relied on.
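For a sense of scale, here is the standard margin-of-error arithmetic for a survey-based estimate (the sample sizes below are hypothetical, not figures from the study; congestion and travel estimates typically rest on surveys of this magnitude):

```python
# Illustrative: 95% margin of error for an estimated proportion at a few
# assumed sample sizes. The point is that 0.9 percent sits inside typical
# survey error bars, not that these are the study's actual samples.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion, in percent."""
    return z * math.sqrt(p * (1.0 - p) / n) * 100.0

for n in (500, 2000, 10000):
    print(f"n={n:>6}: +/- {margin_of_error(0.5, n):.2f} percentage points")
# n=   500: +/- 4.38
# n=  2000: +/- 2.19
# n= 10000: +/- 0.98
```

Even a sample of 10,000 leaves a margin of about one percentage point, which is as large as the entire effect the paper claims to have found.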

Fourth, the paper also blames an 8.9 percent drop in transit ridership on ride hailing. If ride hailing really reduced transit ridership by that much, we should be grateful that someone is substituting for-profit transportation, which goes where people want to go when they want to go there, for money-losing transportation that serves only a limited number of destinations on rigid schedules. However, that drop, which began in most areas in 2014, is more likely due to the decline in gas prices than to ride hailing.

If this study weren’t from MIT, I would call it junk science. Maybe it should be called that anyway: its results aren’t replicable, since the methodology is never explicitly described, and it fails to account for alternative explanations of the changes it observed.