Higher education, with all its rankings and competition, is kind of like a sport. Well, now the federal government is going to help us keep score with the College Scorecard, which promises, according to President Obama in his weekly radio address, to give all Americans “access to reliable data on every institution of higher education.” Of course, if we have “reliable data,” we’ll know which institutions are good or bad. Right?

Nope.

For starters, some schools are missing data, including in the three areas highlighted in initial, at-a-glance search results: cost, graduation rate, and salary ten years after entering. Also, while one can search for schools by program, the earnings data — arguably the key output — aren’t for those specific programs. They are the median salary for the entire school’s federal aid recipients. In other words, the Scorecard, at least right now, offers only very broad, somewhat incomplete stats.

More important, the Scorecard leaves out crucial context you need to determine whether an institution is really bad — meaning it doesn’t actually add value for students — or whether it does add value but is working with more challenging folks. It’s akin to suggesting the Little League world champion’s coach isn’t very good because the Yankees hit bigger homers.

I’ve written about the huge importance of outside-school realities before, but a new report from the Brookings Institution examining big increases in student loan defaults — the timing of which dovetails nicely with the Scorecard — helps to provide more context.

When they think of “bad” schools, most edu-policy people probably think “for-profits,” and some may think “community colleges.” But it is clear that these institutions tend very strongly to be the ones taking the high-risk, marginal students to whom the federal government happily hands big loans and says “go to college.” What is not clear, especially given their student bodies and the programs they offer, is whether such schools actually add less value than schools with much higher-paid grads, potentially all the way up to Harvard or Yale.

The Brookings report furnishes graphs showing, at least among borrowers, which schools have the toughest challenges. For-profit borrowers have by far the lowest parental income: around $30,000 per year, eyeballing the graph, compared with roughly $48,000 for community colleges and nonselective four-year schools; $65,000 for somewhat selective four-year institutions; and $80,000 for selective four-year schools. The median age of borrowers at entry is about 24 at for-profit schools, 23 at community colleges, 20 at nonselective schools, and 18 at the top two levels of selectivity. To put a bow on it, the share of borrowers who are first-generation college attendees is around 58 percent at for-profits, 50 percent at community colleges, 42 percent at nonselective four-year schools, 38 percent at somewhat selective institutions, and 25 percent at selective colleges.

Data behind the College Scorecard — into which I am just starting to wade — also help reveal the importance of factors outside of the schools themselves, though the Scorecard doesn’t feature such analysis. Future salaries, for instance, correlate powerfully with schools’ SAT and ACT scores. The correlation between salaries and average SAT score is 0.62, a “strong” correlation. The correlation between salaries and midpoint ACT math score is 0.69. And the correlation with the percentage of students using Pell Grants — a rough approximation of student poverty — is −0.49, very close to a “strong” correlation.
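
If you want to poke at the numbers yourself, here is a minimal sketch of how such correlations could be computed with pandas from the Scorecard’s institution-level data file. The file name and the column labels (SAT_AVG, ACTMTMID, PCTPELL, MD_EARN_WNE_P10) are my assumptions based on the public data dictionary, so adjust them to match whatever release you actually download.

```python
# Minimal sketch: Pearson correlations between median earnings and
# admissions/poverty measures in the College Scorecard data.
# File name and column labels are assumptions; adjust to your download.
import pandas as pd

df = pd.read_csv("Most-Recent-Cohorts-Institution.csv", low_memory=False)

cols = {
    "MD_EARN_WNE_P10": "median earnings, 10 yrs after entry",
    "SAT_AVG": "average SAT score",
    "ACTMTMID": "midpoint ACT math score",
    "PCTPELL": "share of students receiving Pell Grants",
}

# Coerce to numeric; the raw file stores suppressed values as text.
data = df[list(cols)].apply(pd.to_numeric, errors="coerce")

# Pairwise Pearson correlations with median earnings.
print(data.corr()["MD_EARN_WNE_P10"].rename(index=cols))
```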

This is essential context. But the Scorecard doesn’t focus on it at all, giving the impression that school quality is just about cost, graduation rates, and earnings. That’s great for superficially demonizing some schools and lionizing others, but terrible for deciding which colleges are good and which maybe shouldn’t even be in the game.