We’re number 6 through 15!

The new NRC rankings have now been released.  It’ll be fun to dig through these — I don’t yet see a giant spreadsheet available online, but you can use the search tool at phds.org to see how the rankings look for math departments.  As the title suggests, the rankings have big error bars around them:  the top of the list looks like

  • 1-4 Princeton
  • 1-3 Berkeley
  • 2-5 Harvard
  • 2-6 NYU
  • 4-9 Stanford
  • 4-12 Michigan
  • 4-11 Yale
  • 5-11 MIT
  • 5-16 Penn State
  • 6-15 Wisconsin
  • 8-28 UCLA
  • 9-25 Columbia
  • 10-28 UCSD
  • 9-32 Cal Tech
  • 9-30 Texas
  • 10-37 Brown

Some departments are already complaining about the quality of data used in the rankings — with some justice, sounds like.

Update: Nathan Dunfield was kind enough to post a spreadsheet with the data on math departments to Google Docs.


21 thoughts on “We’re number 6 through 15!”

  1. Noah Snyder says:

    Looks like one of their main criteria is “not located in Chicago,” what with Chicago at 12-39 (one ahead of Michigan State?!) and Northwestern at 29-57.

  2. The full spreadsheet is available from the NRC site (35 Mb of data). I pulled out just the math part and put it in Google Docs here.

    Not sure what it all means yet. There are two scores (R and S), and each one is given via 5% and 95% markers. The ones JSE posted are the R variant, though I’m not sure how one imputes the linear order from the 5% and 95% stats…

  3. Richard Kent says:

    You should make the spreadsheet public, Nathan.

  4. Noah Snyder says:

    Nathan, you don’t seem to have made that file publicly accessible?

  5. Hmm, Google Docs claims that it’s accessible to all, and there are 5 anonymous viewers right now.

    Here’s the link as a plain webpage.

  6. Richard Kent says:

    Now I can see the Google doc.

  7. Ok, I understand the difference between the R and the S rankings now. Both are based on the same overall quantitative data (pubs, grants, etc.) that the NRC collected. The “S” rankings weight these based on a survey of faculty. The “R” rankings (which are the default) instead impute these weights by regressing the data against an opinion-based survey. (It turns out that what people *claim* to use when judging a department has very little to do with how they actually do it.)

    The rank ranges (e.g. 12-34) come from a Monte Carlo thingy. Basically they add some random noise to both the data and the weights, and then order the departments. The smaller number is the 5th percentile result, and the larger is the 95th percentile. (A rough sketch of this step is at the end of this comment.)

    I’m not clear on how PhDs.org goes from the (5, 95) data to the linear ordering. (They’re not averaging them, I think.) I assume the reason they don’t give the mean or median ranking is political…
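
    To make the Monte Carlo step concrete, here is a rough sketch of what I think they are doing. The normal noise model, the 0.1 noise scale, and the made-up data and weights below are my guesses for illustration, not the NRC’s actual procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        n_departments, n_measures, n_trials = 10, 4, 500

        data = rng.random((n_departments, n_measures))   # made-up standardized measures
        weights = rng.random(n_measures)                 # made-up regression weights

        ranks = np.empty((n_trials, n_departments), dtype=int)
        for t in range(n_trials):
            noisy_w = weights + rng.normal(scale=0.1, size=n_measures)   # perturb the weights
            noisy_x = data + rng.normal(scale=0.1, size=data.shape)      # perturb the data
            scores = noisy_x @ noisy_w
            ranks[t] = scores.argsort()[::-1].argsort() + 1              # rank 1 = highest score

        low = np.percentile(ranks, 5, axis=0)    # "best plausible" rank (5th percentile)
        high = np.percentile(ranks, 95, axis=0)  # "worst plausible" rank (95th percentile)
        for d in range(n_departments):
            print(f"dept {d}: {int(low[d])}-{int(high[d])}")

    Run a few hundred trials like this and you get exactly the kind of 5th-95th percentile rank ranges quoted in the post.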

  8. David says:

    I am surprised this blog post hasn’t gotten more comments.

    The universal human lust for hierarchy is disappointing me today.
    It must be feeling its age.

  9. Tom Nevins says:

    There’s a potentially interesting interactive tool now on the Chronicle of Higher Ed’s web site:

    http://chronicle.com/page/NRC-Rankings/321/

  10. Here’s a possible explanation for why the University of Chicago was only #21. From the full data, you can see they counted postdocs (Dickson instructors) as faculty. In particular, they counted 66 faculty, of which only 54% are tenured (roughly matching the 32 permanent faculty listed on their webpage). So this increases the denominator in all the faculty-related things, and they did poorly in publications per faculty (99 of 125), though decent in cites per pub (21 of 125). Also the proportion of faculty with grant support was low (58%, or 62 of 125), but their award level was decent (21 of 125).

    This doesn’t explain why Northwestern is only #46 instead of, say, #15, as most of the count there is tenured faculty.

  11. David says:

    An interesting twist on Nathan’s observation on Dickson instructors at Chicago: at Yale, 100% of the faculty reported are tenured faculty, i.e. the Gibbs instructors were not counted at all.

    So while Chicago counted all of its non-tenured faculty as faculty, Yale did not. And apparently the rules for self-reporting this information were vague enough, and the people sorting the data knew little enough, that the disparity was simply allowed and no correction was made for it.

    Does make one wonder how many other poor judgement calls were made in the course of designing and implementing the survey.

  12. SQ says:

    Another thing is that they don’t seem to concern themselves with a) the relative quality of the journals being published in and b) the fact that some fields publish many more papers than others by their nature. So a professor who publishes a typical amount in, say, applied math, but in average journals, will come out seeming much better than the number theorist who publishes papers in top journals every year or two.

    I hate to say it, but it looks like an awesome (if unintentional) “troll”. They put most departments vaguely where people expect them, adding a sense of legitimacy, and then put a small number either way higher or way lower than where they deserve, to maximize the enragement effect.

  13. Eric Rowell says:

    University of Delaware, like Penn State, appears to have made significant strides. They are number 1 if you only consider citations/faculty! Also, “interdisciplinary faculty” is a terrible statistic: some schools have separate applied math departments, while some math depts. include applied math and even computer science! Given that CS and applied math people often publish so many more papers than folks in more theoretical fields, this skews things in their favor.

  14. Evan Bullock says:

    According to this, only 50% of first-year graduate students at Rice have support.

  15. Amie says:

    OK, so I asked my dad (a statistician) to look at the data, and he had the following basic observation:

    “I haven’t seen the methodology description yet, but I have a preliminary suspicion. I factored all the variables that seemed to go into the composite ranking and they do not correlate very well. It seems to me they have taken a rather heterogeneous set of measures (representing at least 3 to 5 dimensions), and tried to combine them into a single rating. This is a common problem with aggregate scales. What some claim to be IQ, for example, is a composite of fairly heterogeneous skills. Intelligence is multidimensional.

    I’m especially suspicious when they combine (if they did) percent of graduate students supported with faculty citations. We all know that what is an outstanding department for an undergraduate is a different animal for a graduate student and different for a post-doc, different for assistant professor, different for tenured professor, different for senior faculty. A single ranking can’t capture this diversity.

    In the past, rankings like Carnegie were done by having faculty rate other departments. This is admittedly subjective, but it does capture a single dimension — prestige, or what faculty aspire to — as opposed to a statistical composite.

    I’m suspicious of composites in general. What does it mean if someone is hyperactive, rich, narcissistic, aggressive, and intelligent? Is that combination of traits caused by one thing? I think not, even though these traits may be highly correlated. On the other hand, it’s even more meaningless to combine traits that are unrelated. What does it mean if someone is hyperactive, outgoing, creative, introspective, and assertive? Just because some people score low on all these traits and others high doesn’t mean they all measure a unitary trait. Same with departments.”

  16. JSE says:

    “What does it mean if someone is hyperactive, rich, narcissistic, aggressive, and intelligent?”

    They’ve got a future in politics?

  17. Since Northwestern came up in some of the above comments, I thought it would be appropriate to remark on some of the statistics reported for Northwestern in the spreadsheet that Nathan posted.

    (i) All of our graduate students are supported by some combination of TAships, NSF funding, and internal (i.e. Northwestern-funded) fellowships. On the other hand, according to the spreadsheet, in Fall 2005 44.7% of our students had TAships and none had research fellowships.

    (ii) As far as I can tell, the faculty information is taken from the 2005-6 academic year, in which case the count of 29 includes two lecturer positions (one being our calculus coordinator, the other being the person in charge of computing — both non-research positions) as well as 27 tenured and tenure-track faculty. Of these, 3 tenure/tenure-track faculty were women, as was our calculus coordinator. At that time, Harvard had one female faculty member (a professor of the practice of mathematics). However, Harvard was ranked above us in terms of percentage of female faculty. (The explanation for this seems to be that BPs are included in Harvard’s faculty count.)

    (iii) Northwestern’s department has one member who holds a joint appointment with Statistics, and another who frequently shares grants jointly with engineers and others doing applied research, yet it is listed as having no interdisciplinary faculty.

    I don’t have enough information available off-hand to decide on the accuracy of the other pieces of data figuring into Northwestern’s ranking, but the discrepancies I’ve mentioned here don’t inspire confidence.

  18. A more general comment on the data: as has already been noted, it seems that the way faculty is counted is completely unsystematic. Both Chicago’s and Harvard’s counts seem to include non-tenure-line terminal positions (Dicksons and BPs). Northwestern’s count seems to include non-research faculty. Some other counts seem to include just the tenure-line faculty. Given the obvious difference this can make to things like publications, citations, and grants per faculty, this seems to be a fairly fundamental flaw in the data collection.

    Does anyone have any explanation for why such a non-uniform approach was taken?

    The publications and citations per faculty number is also curious. For example (looking on MathSciNet), Yau has more than 5000 citations, Mazur has almost 2000, and Gross has more than 1000. For Northwestern faculty, Manin has 2500, and Friedlander (at Northwestern in 2005) has about 1000; however Northwestern also has (and had in 2005) several tenure-track and recently tenured faculty with much smaller citation numbers (whereas I would guess that all Harvard faculty have rather high numbers — e.g. Richard Taylor, a comparatively younger faculty member at Harvard, also has 1000 citations).

    The publications per faculty number for Northwestern is 0.9, and the citations per publication number is 0.7, which gives an average of 0.63 citations per faculty member. (Over some time period? I am confused on this.) For Harvard, the numbers are 0.73 and 1.81, for a total of 1.32. Looking at the raw citation numbers for Harvard vs. Northwestern, I’m surprised that Harvard’s overall citation figure is only about twice that of Northwestern. (Perhaps Harvard’s number is being diluted by the inclusion of BPs?) For the University of Delaware, on the other hand, the corresponding number is 0.93 × 2.36, roughly 2.2.

    These numbers may well be correct according to the chosen measures, but I’m not sure that they reflect reality very well. For example, University of Delaware seems to be primarily an applied department (and perhaps their high citation number reflects something about publications in applied mathematics; I’m not well enough informed to know).
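
    Just to spell out the arithmetic behind those composite figures (reading the product of the two ratios as citations per faculty member is my own interpretation of the columns), a quick sketch:

        # citations per faculty is roughly (publications per faculty) * (citations per publication)
        ratios = {"Northwestern": (0.9, 0.7), "Harvard": (0.73, 1.81), "Delaware": (0.93, 2.36)}
        for dept, (pubs_per_fac, cites_per_pub) in ratios.items():
            print(dept, round(pubs_per_fac * cites_per_pub, 2))
        # prints: Northwestern 0.63, Harvard 1.32, Delaware 2.19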

    Does anyone understand better what these numbers are purporting to measure?

  19. “Does anyone have any explanation for why such a non-uniform approach was taken?”

    My guess is that the situation in mathematics, with many non-tenure-line positions whose title is “(string of modifiers) assistant professor”, is rare (often unheard of) in most other disciplines, and so the NRC didn’t think to come up with uniform criteria for this. Even if they had, there might still have been oddities. The BP positions at Harvard are, as you say, terminal, but BPs are actually full voting members of the Faculty of Arts and Sciences and might well look reasonably like faculty in the official books to an uninformed observer, especially with the title of “assistant professor” rather than “instructor”.

    I do know there was some flexibility at the other end, namely whether (and which) emeritus faculty were included, which could conceivably be manipulated to a department’s advantage.

    I also assume that the citation rates were gathered through some generic tool rather than MathSciNet. Probably, with proper design one could generate an interesting ranking system if one could mine the full MathSciNet database. Before the NRC rankings came out, I did a little experiment by looking at

    (Papers published in Annals/Inventiones/JAMS in last five years) / (number of faculty)

    It’s surely a cold comfort, but Northwestern did really well by this measure…
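
    In case anyone wants to reproduce or extend it, the computation is easy once you have publication counts in hand. Here is a rough sketch; the input file, its column names, the journal-name strings, and the faculty counts are all hypothetical placeholders rather than the data I actually used.

        import csv
        from collections import Counter

        TOP_JOURNALS = {"Ann. of Math. (2)", "Invent. Math.", "J. Amer. Math. Soc."}
        YEARS = range(2005, 2010)   # a five-year window; the exact years are an assumption

        # hypothetical input: one row per paper, with columns department, journal, year
        top_papers = Counter()
        with open("papers.csv") as f:
            for row in csv.DictReader(f):
                if row["journal"] in TOP_JOURNALS and int(row["year"]) in YEARS:
                    top_papers[row["department"]] += 1

        faculty = {"Dept A": 30, "Dept B": 45}   # hypothetical faculty counts
        scores = {d: top_papers[d] / n for d, n in faculty.items()}
        for dept, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(dept, round(score, 2))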

  20. Anonymous says:

    Nathan — it may indeed be cold comfort, but I wouldn’t mind seeing the “Annals/Inventiones/JAMS” rankings nonetheless.

  21. Bobo says:

    Maybe it means that Chicago (Northwestern) is not so much better than other universities as is thought by folks working at Chicago (Northwestern).

    There are even good mathematicians in Kansas.
