
9. Indicators


9.1 Can We Evaluate Openness?

Openness requires transparency. This principle applies both internally and externally. The members of an open knowledge institution need to know about the status of their organisation and their relationships with other institutions, groups and individuals. They also need to be able to assess their own progress towards the goals that the institution has established via an open process of consultation and deliberation. This creates internal imperatives of accountability of the organisation vis-à-vis its members. At the same time, as a public interest institution, an Open Knowledge Institution is externally accountable to its relevant communities and to society. Both forms of accountability require organisational procedures and protocols for assessing the status of the institution by means of indicators.

However, establishing such protocols always involves a trade-off between the possible accuracy and quantifiability of certain indicators and their effects on perceptions and resulting behaviour. As has been well theorised and empirically demonstrated (Holmström, 2017), closed quantitative indicator systems necessarily result in two problems which are especially detrimental in the context of openness: (i) indicators lead to the relative neglect of all activities that are not measured by the indicator, and (ii) agents will try to game the system. If both effects come together, serious organisational pathologies occur.
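As a purely illustrative sketch (not drawn from the text, and with invented numbers), the first effect can be made concrete: a stylised agent with a fixed effort budget shifts effort towards the measured task as the weight placed on the indicator grows, even when measured and unmeasured work contribute equally to the organisation's real value.

```python
# Illustrative sketch only: a stylised agent splits a fixed effort budget
# between a measured task (rewarded by the indicator) and an unmeasured one.
# All numbers are hypothetical; the point is the qualitative effect that
# Holmström's multi-tasking analysis predicts.

def best_split(indicator_weight, steps=1000):
    """Grid-search the effort split that maximises the agent's payoff."""
    best = (float("-inf"), 0.0)
    for i in range(steps + 1):
        m = i / steps            # effort on the measured task
        u = 1.0 - m              # effort on the unmeasured task
        # Agent payoff: indicator-based reward plus intrinsic motivation on both tasks.
        payoff = indicator_weight * m + m ** 0.5 + u ** 0.5
        if payoff > best[0]:
            best = (payoff, m)
    return best[1]

for w in (0.0, 1.0, 2.0):
    m = best_split(w)
    u = 1.0 - m
    real_value = m ** 0.5 + u ** 0.5  # organisational value weights both tasks equally
    print(f"indicator weight {w}: measured effort {m:.2f}, "
          f"unmeasured effort {u:.2f}, real value {real_value:.2f}")
```

As the indicator weight rises, the agent's effort on the unmeasured task falls towards zero and the organisation's overall value declines, even though the indicator score itself keeps improving.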

As a consequence, an effective system of indicators for Open Knowledge Institutions must combine internal progress evaluation with qualitative indicators for external communication and reporting. Such a system remains open in the sense that it leaves room for contextualised adaptation to the environment and nature of each open knowledge institution. Furthermore, it is imperative that the indicators be matched to the concepts the institution hopes to incentivise and remain in keeping with an open ideology.

9.2 Challenges in Evaluation

Within the broader scope of open knowledge and the institutions that support and sustain it, a wide range of qualities and practices can contribute to a university's shift towards becoming an Open Knowledge Institution. Many of these activities and qualities already exist in some institutions and in some places, but with inconsistent implementation and without coherence across activities. In the specific case of universities and colleges, these activities and qualities are often not directly supported by the organisation as a whole or as a strategic priority. Rather, they are undertaken by individuals within the institution, often without recognition and in addition to their other (metricised) responsibilities.

The practices and qualities necessary to support universities in a transition towards becoming open knowledge institutions are progressive and forward-looking. They involve a spectrum of activities that includes engaging with new communities and mediating new forms of conversation in order to reach new audiences and participants. These forward-facing practices are often at odds with the dominant, if unstated, expectations about what universities and colleges do, and who they are for.

Existing forms of evaluation generally reinforce dominant perspectives and power structures, including the geographical dominance of the global North in traditional metrics. The conservative orientation of existing evaluation systems in universities today is further reinforced by the growth of external threats. Funding is tightening, and knowledge itself is increasingly politicised and contested. National and governmental goals can seem aligned with these agendas. This creates disincentives for experimentation and innovation in relation to collaboration beyond the institution and can restrict new approaches to scholarly communication. Risk management favours conservatism. At the same time, open knowledge agendas offer a route to increasing the diversity of university funding and support sources, as well as engaging with broader publics, including policy and opinion makers, and becoming part of a more collectively determined and knowledge-guided future.

Existing rankings and their relation to 'quality' signalling are, of course, seen as crucially important for universities and their administrators. Universities direct their knowledge and research output towards the defined set of activities and dissemination formats that feature in high-profile rankings. They do this in the hope of signalling status and prestige – and in so doing, ensuring their appeal to students and research funders. The exclusive use of specific data providers in some ranking systems can drive university policy to demand publication in specific – invariably traditional, Western, and STEM-focused – venues. One example is the use of Scopus data by the THES World University Rankings. This narrows incentives for publication in formats and venues that might be more accessible to wider publics – for example, in scholar-led open access journals, popular media, policy papers, or reports to government. It also reinforces existing regional power hierarchies between the North and the South, as well as disciplinary divisions and practices, and increases the ability of disciplines to enforce boundaries by determining what can, and cannot, be published within influential journals.

Efforts to prevent unfair comparisons when measuring the ‘reach and impact’ of individual scholars and their work are also problematic. One example is the normalisation of citation scores with reference to an author’s home discipline. Such strategies do not necessarily increase the fairness of the evaluation system; rather, they can themselves reinforce assumptions and biases, particularly for those conducting research across disciplinary boundaries. Even those within traditionally defined disciplines can be disadvantaged if they work in ways that do not conform to disciplinary norms. Work on issues considered to be local concerns by prestigious institutions, including, for instance, neglected tropical diseases, is often discounted. Activities involving mediation and communication are also often neglected, including the creative and performing arts, many forms of research-led teaching and community engagement. This system of evaluation can push work directed at community building, including activities to support diversity, into the background or sometimes even underground.
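To make the problem concrete, here is a minimal sketch, with hypothetical field baselines, of a simple field-normalised citation score (citations divided by the assigned field's average). An interdisciplinary paper's score swings sharply depending on which discipline it is normalised against, which is one way such normalisation can disadvantage work that crosses disciplinary boundaries.

```python
# Illustrative sketch with invented numbers: a field-normalised citation
# score depends heavily on the field a paper is assigned to.

FIELD_MEAN_CITATIONS = {          # hypothetical field baselines
    "molecular biology": 40.0,
    "history": 4.0,
}

def normalised_score(citations, field):
    """Citations relative to the assigned field's average."""
    return citations / FIELD_MEAN_CITATIONS[field]

paper_citations = 12  # a hypothetical paper spanning both fields
for field in FIELD_MEAN_CITATIONS:
    print(f"assigned to {field}: normalised score = "
          f"{normalised_score(paper_citations, field):.2f}")
# -> 0.30 if judged as molecular biology, 3.00 if judged as history
```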

Rankings create additional issues for universities with medium and lower world rankings, which seek to distinguish themselves not by being the same as traditionally highly rated institutions, but by being different. Creators of current ranking algorithms and reports are unlikely either to recognise or to validate new measures that showcase such differences to the advantage of these universities. The desire of such institutions to demonstrate their difference is thus countered by their simultaneous need to continue to place themselves as well as possible within existing ranking systems. Once again, this disadvantages many universities outside the traditional centres of academic power and prestige.

The homogenising effect of rankings and their perverse effects on university strategies and decision making pose a serious challenge to any effort to refine or redefine the role of a university or of universities – including providing incentives for universities to change in ways that are congruent with the principles and protocols of Open Knowledge Institutions.

9.3 A Framework for Open Indicators

In order to take practical steps towards transforming universities into Open Knowledge Institutions, we must acknowledge the ways in which current rankings drive organisational change. This understanding can then be used to anticipate how openness might be introduced into existing rankings. Comparison across institutions may be crucial to developing interest in and momentum for system-wide changes. Any framework must balance these needs, providing for the positive opportunities that arise from competition and aspirational comparisons, while allowing an institution to follow its own path, shaped by local needs, towards its future.

There are three strands of evidence that we might use to evaluate a university. The first of these is evidence that a university is developing and implementing policy that speaks to this overall shift, whether in response to external pressure from government and funders, from community or public demands, or internally. Policy, strategy, and other public position statements are clearly not a direct sign of change occurring, but they are a signal of intent and a proxy for organisational and institutional support of change.

The second strand of evidence emerges in the university's actions. Is the institution putting in place platforms and systems that support mediation, engagement, diversity and network building? Is an institutional repository provided and appropriately resourced? Is there visible support for data management and sharing? Is there support and expertise provided for crafting communications to speak with, and effectively listen to, appropriate communities?

The final strand will be evidence of outcomes and change. What evidence can we draw on as indicators or proxies of actual change? In some areas, such as assessing the degree of open access to formal traditional publications, this is becoming easier. As shown earlier, the open access share of outputs from many institutions has increased significantly over the past decade.

A framework for open indicators, therefore, must include these three strands of a developmental framework: (i) policy and narrative signalling intent; (ii) action and investment that signals a prioritisation of change; and (iii) measurable outcomes that result from these efforts:

Table 1: A framework for open indicators
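As a hypothetical illustration (the entries below are invented and are not the contents of Table 1), the three strands might be recorded for an institution in a structure along these lines:

```python
# Hypothetical sketch of how the three strands might be recorded for an
# institution; the example entries are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class OpenIndicatorRecord:
    policy: list = field(default_factory=list)    # strand (i): stated intent
    action: list = field(default_factory=list)    # strand (ii): investment and platforms
    outcomes: dict = field(default_factory=dict)  # strand (iii): measurable results

record = OpenIndicatorRecord(
    policy=["open access policy adopted", "open data position statement"],
    action=["institutional repository resourced", "data-sharing support service"],
    outcomes={"open_access_share": 0.46},  # e.g. share of outputs openly available
)
print(record.outcomes["open_access_share"])
```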

One particular objection to this framework might be that action often precedes policy and organisational statements. Individuals will often be acting, sometimes without organisational sanction, to pursue an open agenda. However, such activities are not 'organisational' precisely because they are not incorporated into the organisational narrative. The principle of subsidiarity supports the development of these local initiatives in the sense that it seeks to create an environment in which they are not prevented; but until they are adopted by the organisation, they do not signal organisational activity. They are not yet institutionalised.

9.4 Institutionalising Open Indicators

As suggested above, the aspects of an open knowledge institution that we have identified are currently disregarded as valuable or measurable criteria within existing rankings and university evaluations. To some extent, it might be argued that publication numbers or citations function as proxies for the mediation of knowledge. However, these numbers have become so associated with concepts of excellence that they are now regarded as defining it. Their value as measures of knowledge mediation is questionable on two grounds: first, the limitations of citations in themselves as measures of any single quality; and second, the severely limited range of the users of research that citations report on. Other proxies that might have value for evaluating progress towards being an Open Knowledge Institution have been investigated (e.g. via the EU Open Science Platform), but these are frequently limited in scope when compared to the ambition of the Open Knowledge Institution agenda.

To avoid replicating past mistakes, an open knowledge framework must adhere to several principles:

  • Adaptability and like-to-like comparison: Since the aim of a scoring system is to unify and compare across various entities, its framework (or underlying proxies) needs to be flexible enough to adapt to different geographical settings in the target group. This is a particularly difficult challenge, as institutions have vastly different management models, financial structures, student input, etc. In situations where homogeneity is neither desired nor possible, classification and normalisation systems should exist to allow like-to-like comparison.
     

  • Generalisability: The test of a good indicator is the degree to which it serves as an adequate proxy for the underlying concept. For global indicators, it is important that the indicators represent the whole of the theoretical population in order to make inferences. For example, existing university rankings often fail to fully represent disciplines such as humanities and social science, and research from the global South is less likely to feature than that from the North. Global indicators must have global reach.
     

  • Standardisation: Several indicators/proxies can be imagined which demonstrate varying dimensions of openness. An open framework should be careful in standardisation not to give undue priority to certain dimensions over others. Furthermore, it should avoid combining indicators where the underlying concept is not the same.
     

  • Orthogonality: Information provided across various indicators is highly likely to overlap. Existing ranking systems often aggregate across such indicators without addressing this problem and hence create a bias towards some criteria while undervaluing performance in other areas. A framework for open indicators should include a well-defined process for indicator selection, utilising appropriate statistical procedures to ensure that the data underlying the indicators are as orthogonal as possible (a simple check of this kind is sketched after this list).
     

  • Qualitative vs quantitative data: Qualitative data should not be neglected in favour of quantitative metrics. Although this complicates other aspects of the framework (e.g., standardisation and generalisation), a framework for open knowledge indicators must triangulate several sources of data to represent the complex and dynamic system of knowledge production.
     

  • Thoughtful design of scoring systems: A further issue, potentially an important one, concerns the way in which scores are assigned. Most current scoring systems use a bottom-to-top approach, in which the baseline score for an indicator is zero and points are awarded for activities signalling the desired outcome. Sedigh (2017) proposed the opposite approach, in which each indicator starts with a score of 100 and points are deducted for the absence of desired activities. Both approaches are sketched below.
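The standardisation and orthogonality principles above can be made concrete with a minimal sketch, using invented indicator scores for four institutions: indicators are first placed on a common scale and then checked for overlap before any aggregation.

```python
# Illustrative sketch with hypothetical scores for four institutions on three
# indicators measured on different scales.

import numpy as np

# Rows: institutions; columns: three hypothetical indicators.
scores = np.array([
    [0.80, 78.0, 30.0],
    [0.55, 60.0, 70.0],
    [0.30, 35.0, 90.0],
    [0.65, 62.0, 50.0],
])

# Standardisation: put each indicator on a comparable scale (z-scores), so no
# dimension dominates simply because of its units.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# Orthogonality: check how much the indicators duplicate one another.
corr = np.corrcoef(z, rowvar=False)
print(np.round(corr, 2))
# A high off-diagonal value (here, between the first two indicators) means two
# indicators carry largely the same information; aggregating both would
# silently double-weight that dimension, so one should be dropped, merged, or
# decorrelated (e.g. via principal component analysis) before scoring.
```

The two scoring approaches from the final bullet can also be contrasted in a short sketch; the checklist of desired activities is hypothetical, and the point is only the difference in starting point between awarding and deducting points.

```python
# Hypothetical sketch of the two scoring approaches described above.
# `desired` lists activities an indicator looks for; `observed` is what an
# institution actually does. Both lists are invented for illustration.

desired = ["open access policy", "data sharing support", "open peer review",
           "community partnerships"]
observed = {"open access policy", "community partnerships"}

points_each = 100 / len(desired)

# Bottom-to-top: start at 0 and award points for each desired activity found.
bottom_up = sum(points_each for d in desired if d in observed)

# Top-down (after Sedigh, 2017): start at 100 and deduct points for each
# desired activity that is missing.
top_down = 100 - sum(points_each for d in desired if d not in observed)

print(bottom_up, top_down)
# The totals can coincide, but the top-down approach treats openness as the
# default and makes each missing practice an explicit, visible deduction.
```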
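Whichever approach is taken, the design choice matters less than making the rules of the scoring system themselves open: the activities checked, the weights applied, and the deductions made should all be inspectable by the institutions being scored.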

A Forward-Looking Framework

Most existing evaluation frameworks for institutions today look backwards, based on reporting of a limited set of outputs. Here we want to assess the orientation of a university to an unknown future, where new communities are engaged in ways that are difficult to predict. The explicit challenge to doing so is that any fixed framework for evaluation will be inherently conservative.

There is much interest in predictive analytics of academic work, but there is little evidence that these do more than reinforce existing power structures, rents and inequities. Systems that predict future performance from currently available data – data shaped by present preoccupations and past trajectories of success – do little to help us challenge and diversify existing systems and their closed nature.

If we only examine those examples of success found in our own local traditions, we can easily miss developments in other spaces and systems. The examples of Action Dialogues in South Africa have already been discussed. Another example is SciELO, the Open Access publishing platform in Latin America, which remains relatively unknown outside of South America and Southern Africa. More than that, many of the activities and practices that align with Open Knowledge Institutions may be deliberately hidden within the institution, operating under the radar to avoid scrutiny based on more traditional objectives.

We will need to identify institutions that bring those activities to the forefront, both locally and more widely. We will need a model that helps us identify the signals that an institution is supporting these activities.

9.5 Signals of Openness

As noted above, a strong framework for open knowledge indicators requires the incorporation of many categories of information. Paramount to this is expanding the value proposition for research: not only incentivising publication in open access venues but also investing in and rewarding work that is translational and focused on broader impacts beyond the research community. The way in which the campus engages with the outside community will be a major dimension of openness, which can be measured through active engagement (e.g., partnerships) as well as more passive engagement (e.g., social media).

Investment in infrastructure is a key element in a strategy for openness. The creation of repositories is historically an element of openness; however, a global connection of open knowledge institutions will give rise to fundamentally more advanced and expanded networked opportunities for making research available to other scholars and to the public. The physical campus is also an element that can be investigated, looking at physical accessibility as well as spaces for open engagement.

Universities are centres of learning. Therefore, openness will be evaluated in terms of the composition of the student body as well as engagement in open educational activities (e.g., participation in online courses). At the institutional level, the university will be rewarded for the adoption of policies for openness, not unlike those established for journals (https://cos.io/our-services/top-guidelines/). These standards should seek to be comprehensive: not only incentivising openness in one dimension of the university but cutting across all university activities. Table 2 provides an illustrative example of potential data sources. This is not meant to be comprehensive, but to provide examples of ways in which open knowledge indicators might be constructed.

Table 2: Examples of potential indicators
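As an invented illustration (not a reproduction of Table 2), the dimensions of openness discussed above might be paired with candidate data sources along the following lines:

```python
# Invented illustration only: pairing dimensions of openness mentioned in the
# text with the kinds of data sources that might underpin an indicator.

potential_indicators = {
    "open access publication": "share of outputs openly available (e.g. repository records)",
    "community engagement": "documented partnerships; public and social media interaction",
    "open infrastructure": "presence and resourcing of repositories and data services",
    "open education": "participation in open online courses and open educational resources",
    "institutional policy": "adoption of openness policies, e.g. TOP-style guidelines",
}

for dimension, source in potential_indicators.items():
    print(f"{dimension}: {source}")
```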
