Original publication: April 2015
Authors: Bernd Wächter (ACA), Maria Kelo (ENQA), Queenie K.H. Lam (ACA), Philipp Effertz (DAAD), Christoph Jost (DAAD), Stefanie Kottowski (DAAD)
Short link to this post: http://bit.ly/2pyJo6j

This study, titled “University Quality Indicators: A Critical Assessment”, was jointly produced by the German Academic Exchange Service (DAAD) and the Academic Cooperation Association (ACA) on behalf of the European Parliament. It takes stock of the latest developments in higher education quality approaches, i.e. quality assurance (QA) and (global) rankings.

Scope and methodology

The research team conducted extensive desk research on QA and rankings, as well as a systematic text analysis of policy documents, websites and literature. In addition, interviews with key stakeholders served to validate and complete the findings.

Chapter 1 presents the background of the Study, and Chapter 2 the methodology. Chapter 3 is devoted to QA: it introduces the European context, compares the QA systems of eight EU and three non-EU countries across various dimensions, and examines the consequences of different QA systems as well as upcoming trends and challenges. Chapter 4 covers rankings. After a brief introduction on the growing importance of global rankings, the analysis covers the institutional frameworks and methods of six global instruments, including the EU-sponsored U-Multirank, three major national rankings and a global business school ranking, before discussing the intended and unintended impacts of rankings. Chapter 5 compares the differences and commonalities of QA and rankings. Chapter 6 contains the recommendations.

Emergence and growing importance of QA and rankings

QA and university rankings have existed for decades. Historically, however, external QA was the responsibility of the ministry in charge of higher education, and rankings existed only at national level. With the massification of higher education, the first independent QA agencies emerged in the early 1990s. Global university rankings first appeared in 2003, when a team of researchers in China produced the Academic Ranking of World Universities to ‘benchmark’ Chinese universities against top universities worldwide. The growth of global rankings coincides with the advance of globalisation, the new role of higher education as a beacon for mobile capital and talent, the marketisation of higher education, and the advancement of digital media. As of 2014, ten global rankings were identified. The European response to this phenomenon is a multi-dimensional, EU-funded university mapping and ranking project, U-Multirank. Both independent QA agencies and global university rankings have become prevalent today.

Purposes of QA and rankings

QA and rankings have fundamentally different purposes. The (stated) purpose of most rankings sampled for the Study is to identify ‘excellence’, in the sense of the best higher education institutions (HEIs). In addition, rankings often have (unstated) commercial purposes. In contrast, the main purposes of external QA are to guarantee compliance with (minimum) standards and to support quality enhancement. By providing independent information, QA is meant to help build trust in higher education, which is expected to provide a better basis for recognition and thus to facilitate mobility.

Implementation of QA and rankings

The different purposes of QA and rankings are reflected in their legal and institutional frameworks. Global rankings are typically run by private companies and have no legal consequences for HEIs. QA agencies are independent non-profit organisations, and their work does have legal consequences. In most EU countries, all study programmes are subject to external quality assessments; in some, the agencies assess entire HEIs. A successful quality assessment is often a requirement for operating a programme or an institution and, in some countries, affects public funding. Despite many differences between countries, the European Standards and Guidelines for Quality Assurance (ESG) provide a shared framework. The European Quality Assurance Register for Higher Education (EQAR) lists ESG-compliant national QA agencies. Most European agencies comply with the ESG, which is, at EU level, considered a precondition for operating abroad. But legal and political hurdles at national level still hamper the emergence of a genuine European QA market.

The mainly private institutional set-up of global rankings, in contrast to the public framework of QA, is also linked to the quality criteria applied: global rankings primarily apply research-related criteria, for which data are readily available, whereas QA tends to focus on teaching and learning, for which data have to be collected. The majority of global rankings use data from one single data broker. Since a comparable data set on teaching-related indicators does not exist, QA criteria are of a more qualitative nature. They are presented in a self-assessment report drafted by the HEI, verified and enhanced by an external peer review, and published in an external assessment report.

QA reports are hard to understand, while global ranking results appear easy to read. This is, however, a ‘fake simplicity’: several methodological shortcomings limit the usefulness of rankings in measuring the ‘quality’ of higher education. These include the reliance on a single data source, the focus on publications and citations, the exclusion of certain academic fields, and the limitation to English-language publications. Further, the student surveys used for rankings are not representative. Finally, the differences between ranked institutions are often only marginal. Different initiatives have been set up to address these shortcomings: the International Ranking Expert Group (IREG) has introduced a ranking audit, although the methodology of the audit itself is still in need of improvement. Another attempt is the creation of U-Multirank, which seeks to do justice to the diversity of higher education. However, U-Multirank requires a major resource input, and its high degree of differentiation also stands in the way of easy readability.

Consequences and impacts of QA and rankings

Hard evidence on the impacts of both QA and rankings is in short supply. QA largely aims at securing compliance with minimum standards and at quality enhancement. Negative consequences include unattractive reporting and, in some cases, excessive bureaucracy, which may hamper the development of a quality culture. Rankings are viewed as creating a whole set of intended and unintended effects. Evidenced impacts of rankings have been found on student recruitment and admission, higher education marketing, the reputation and legitimacy of HEIs, the governance and operation of HEIs, and academic publication practices. Undesirable impacts of rankings include ‘data massage’ to improve ranking positions, the homogenisation of higher education provision, and academic drift.

Interrelation between QA and rankings

Despite their differences, there are tendencies of QA learning from rankings and vice versa. Some QA agencies apply quality ratings which indicate a quality better than required (e.g. ‘excellent’ or ‘exceeding’), thereby entering the area of ‘excellence’ that rankings classically view as their habitat. Global rankings, on the other hand, are moving in the direction of ‘multiranks’, allowing users to adapt rankings to their own preferences. Also noteworthy is the introduction of the IREG audit, a QA instrument for rankings.

Recommendations

In the area of QA, seven recommendations (REC) have been formulated. REC 1 proposes the furthering of the European dimension in QA, inclusive of its instruments (ESG, EQAR, etc.). Likewise, and as a step beyond this, a European QA area should be developed step by step (REC 5). All QA efforts will ultimately fail if HEIs do not develop their own quality culture (REC 2). QA methodologies must also be constantly adapted to educational developments such as lifelong learning and massive open online courses (REC 3). The current trend in QA to move towards enhancement-oriented methodologies is welcome and to be reinforced (REC 4). QA reports should be understandable to informed lay persons (REC 6). We also recommend strengthening empirical research into the impacts of QA measures (REC 7).

With a view to global rankings in general, we recommend improving information on what the rankings measure and what they do not (REC 8) and entrusting this task to a to-be-created European mechanism for the QA of rankings (REC 9). We propose to simplify and scale down U-Multirank in order for it to become sustainable (REC 10), to create a new business model for it for the time when EU support will have run out (REC 11), to enhance its visibility, possibly through cooperation with a media company (REC 12), and to conduct more research on globally comparable teaching-related indicators and the data collection for such indicators (REC 13).

Link to the full study: http://bit.ly/563-377

Please give us your feedback on this publication
