By Brad Farnsworth and Patti McGill Peterson
The debate about accountability for higher education—particularly the controversy over outcomes measurement—extends well beyond the United States. With the extraordinary growth in higher-education demand and investment, especially in developing countries, the interest shown by organizations and governments has been correspondingly strong.
In the absence of other measures, global rankings of universities have for some time served as a surrogate for quality measurement. Such rankings, however, have generated considerable frustration. Critics argue that less-wealthy countries’ investments in higher education are not reflected in the rankings, which appear to favor elite institutions and research production.
Prospects for alternative measures of institutional achievement have therefore been welcomed by many for their potential, for example, to focus on institutional progress through the lens of student learning outcomes assessment. Several international organizations have taken an especially strong interest in this aspect of accountability, particularly the Organisation for Economic Co-operation and Development (OECD), which officially represents 34 developed nations. Among educators in the United States, OECD is probably best known for its Programme for International Student Assessment, or PISA, which tests 15-year-old students in science, math, and reading in 70 countries.
In the sphere of higher education, OECD recently completed a feasibility study for a project called Assessment of Higher Education Learning Outcomes, or AHELO. The general goal of AHELO is to develop a universally accepted standard for testing the knowledge and ability of students in higher education.
AHELO focuses on the measurement of student learning in three areas, or “strands”: engineering, economics, and generic skills. The generic skills strand is designed to test a broad set of skills, including written communication and analytical reasoning—skills broad enough to serve as a potential proxy for overall institutional quality. AHELO represents a monumental logistical undertaking. The feasibility study used computers to test nearly 23,000 students in 17 countries and regions. Some of the strongest support for the feasibility study came from developing countries, most of which are not members of OECD. They tend to view AHELO as a way of leveling the playing field for their educational systems. In light of the results of the feasibility phase and its extraordinary expense (some estimates place the cost in excess of $13 million), the question quickly arises whether AHELO is really an accountability and quality-measurement tool that colleges and universities would welcome.
ACE, joined by other higher-education associations, has raised a number of concerns about AHELO. From the beginning of the project, ACE has asked administrators at OECD whether a global assessment tool could effectively take into account national, cultural, linguistic, and institutional variation to produce reliable and meaningful results. ACE has also raised questions regarding the purpose of AHELO: Is it intended to provide a form of transnational accountability? Is it meant to establish baseline data for institutions to foster improvement of their educational programs? Are the assessments intended primarily for institutions or governments? Will it simply become another ranking system?
At the end of the feasibility study, it became clear that many of ACE’s expressed concerns were warranted. The ability of the testing instruments to adequately account for differences in culture and institutions remains in question. The purpose of AHELO remains unclear. The goal of institutional improvement continues to be juxtaposed with the goal of providing comparative information to governments and policymakers for their decision-making processes.
It is unclear at this point whether and how AHELO might move forward, but it is not the only international project intended to measure the effectiveness of higher education. The European Commission—part of the European Union—has been instrumental in helping to develop U-Multirank, which is designed to measure institutional performance on a wide range of variables, including spending on instruction, time to degree, and rates of employment. These are not unlike the elements of a rating system President Obama recently proposed as a way to measure the performance of U.S. institutions. The International Organization for Standardization (ISO), whose main task is to harmonize technical specifications of products and services, has also recently joined the movement, and is in the early stages of developing a set of standards for higher education.
One of the hallmarks of U.S. higher education in the twenty-first century is its interconnectedness with higher education around the world. This is characterized by increasing mobility for students and faculty, international research networks, and MOOCs that gather students from all around the world into a virtual global classroom. The quest for accountability, the measurement of outcomes, and the pursuit of quality represent another borderless frontier for higher education, one that should prompt us to have regular and deeper discussions with our colleagues in other countries about how we measure what we do. One striking feature of the AHELO feasibility study is the negligible extent to which it sought input from college and university leaders. As part of its responsibility for accountability, the global higher education community needs to step forward and take leadership in framing the measurement of learning outcomes and institutional quality. Otherwise, it will be left to others.
Reprinted from the Winter 2014 edition of ACE’s flagship magazine, The Presidency.