Ratings and Rankings Game, Part 1: Postsecondary Institutional Ratings System
Published by: Russ Poulin | 2/12/2014
Hooray! College football in the United States is finally doing away with the Bowl Championship Series (BCS) to anoint a national champion. Meanwhile, a mania remains for rating and ranking systems of collegiate performance in academics, financial aid administration, and employment of graduates. I can understand the need to gauge performance, but it is easier to imagine than to engineer.
Last week, I was invited to participate in two meetings regarding ratings and rankings systems. I was invited to sit on a technical panel assembled by the U.S. Department of Education to give input on the proposed Postsecondary Institutional Ratings System (PIRS). I was also asked to a meeting with our friends from U.S. News & World Report on their rankings of online programs. In this post, I’ll focus on PIRS and report on the U.S. News meeting in an upcoming blog post.
The PIRS Methodology Panel
PIRS seems to be aimed at two purposes:
1) creating accountability measures for how institutions administer federal financial aid; and
2) providing consumers with information to help them select an institution.
Our charge was not to discuss whether such a system was needed, but to focus on how the task might be accomplished methodologically… or the barriers to doing so. The following are some highlights from the day. While many technical experts gave great advice, I will not dwell on the technical details; instead, I will give you a sense of the main points. Spoiler alert: there were lots of questions about whether the Department should create the ratings at all.
In an overview, Hans L’Orange of the State Higher Education Executive Officers (SHEEO) reminded us, “It’s complicated.” Don Hossler of Indiana University cited the heuristic that research can be accurate, simple, or generalizable, but only two of those at once. John Pryor, now working on the Gallup-Purdue Index on alumni outcomes, cited research showing that students tend not to pay attention to rankings. Sean Corcoran of New York University asked, “What problem are we trying to solve?”
Accountability vs. Consumer Information
While there is some overlap in the data needed for the two purposes of PIRS (accountability and consumer information), students searching for a college are looking for data that go far beyond measures such as the percentage of former students who defaulted on their loans.
YOU Need Better Data!
Tod Massa of the State Council of Higher Education for Virginia was very direct in an admonishment that echoed the sentiments of several who spoke throughout the day. In saying “YOU need better data,” he meant that the Department has not collected the data necessary to meet its goals. He went on to demonstrate the seemingly robust collection of data, and the accompanying dashboard, that Virginia maintains on its institutions.
A Unit Record System is Needed
To really accomplish the goals, a student unit record system across K-12 and higher education is needed. While this has been proposed, it has been politically unpopular with enough people on both the right and the left to keep it from happening. Which leads us to…
Decisions are Political as Much as Methodological
Patrick Kelly of the National Center for Higher Education Management Systems (NCHEMS) reminded us that some of the barriers are political. Some don’t want a unit record system. All want measures that make their institution shine.
We Should Focus on Outcomes, Especially Learning Outcomes
There was general agreement on this point. One presenter praised standardized tests, but several speakers cited the need for much work to create a common understanding of learning outcomes.
The Problems with Post-Graduation Outcomes
Several problems were cited with these statistics, including how to measure salary or employment across disciplines, how to account for transfer and eventual graduation elsewhere, and whether to count attending graduate school as a success. The latter might have a negative impact on measured income, at least in the short term.
This is an Opportunity for Improvement
Patrick Kelly of NCHEMS said, “If every institution performs like its peers, we will never be the most educated country in the world.” A few speakers opined that this could be an opportunity for institutions to benchmark and improve their practices.
Identify Top and Low Performers; Avoid Rankings
Tom Bailey of the Community College Research Center urged the Department: “Don’t look at the fine differences between institutions,” but focus instead on the top versus the low performers. Christine Keller of APLU went on to describe a system with three broad performance categories.
The Importance of Location
Patrick Perry of the California Community Colleges asked what would happen if the local community college lost its ability to award federal aid. The student would then be left either attending a higher-cost college or not going to college at all.
Remember the Non-Traditional
I spent most of my time focused on non-traditional students and non-traditional institutions, and cited several concerns specific to them.
I also talked about the Fall 2012 IPEDS data showing that about 13% of all students are enrolled fully at a distance and another 13% take some of their courses at a distance. As rankings are created, we need to remember this seemingly growing population of students who are not studying on campus.
Unfortunately, I had to leave before Bob Morse of U.S. News spoke. We will talk more about U.S. News in our next blog post. Meanwhile, let’s continue to watch what develops. The Department seems keen on moving forward. Lacking the necessary data, what will they do?