QUALITY AND RELEVANCE OF RESEARCH
All scientific research ultimately pursues two objectives: quality and relevance. Quality concerns the research internally, within the field in which it is carried out: its depth, its scope, the extent to which it sheds light on different subjects and solves long-standing problems and challenges. As a rule, quality is judged by experts in the same field of research, through what is known as “peer judgment”. Relevance concerns applicability outside the field where the research is developed, and its importance to society.
Scientists tend to focus primarily on quality, although they appeal to relevance when applying for specific lines of funding. Both quality and relevance are imperfectly measured, and it could not be otherwise, as there is no exact way to measure either. Every measurement is therefore approximate; we can only point to indicators that, by common sense, seem well correlated with one or the other.
There are spectacular examples of mistakes made by the current evaluation system regarding both quality and relevance. This essential uncertainty must be kept in mind, since acting on absolute certainties, when none exist, leads to tragic errors. Quality assessment takes place essentially through peer judgment, when a research result is submitted for publication. This system has its own dynamics: it is imperfect, subject to opportunism, semi-fraud, the exchange of favors, and manipulation of various kinds. Yet nothing better has been found. Indeed, one may say, with a dose of irony, that its countless defects are, properly appreciated, virtues: they allow unsuccessful scientists to dismiss the evaluation system for its obvious flaws, sometimes with reason, and so find the motivation to keep working despite failure, a motivation they would not have had if rejected by a perfect system.
Quality assessment cannot be dispensed with; the risk would be falling back on far worse assessments. In its most simplistic version, the evaluation carried out today consists of counting publications and citations. The other side of the coin is “relevance”, by which we mean criteria that come from outside the field, since “internal” relevance would collapse into what we call “quality”. A correlation between quality and relevance exists, but it should not be overestimated: almost everything of high quality ends up being relevant, and probably nothing without quality will have any relevance. But there are notable exceptions. The fact remains that relevance is judged from a standpoint external to the field. The judgment of relevance typically results in funding, just as the judgment of quality results in publication, an award, or a laudatory citation.
When the university distributes resources internally, stimulates research areas, sets up laboratories, or hires, it inevitably becomes involved in judgments of quality and relevance. We are now witnessing a change in national funding policy, whose dominant criterion seems no longer to be quality but relevance. Like the judgment of quality, however, the judgment of relevance is subject to terrible, perhaps even greater, errors. It is therefore of fundamental importance that the set of support actions and programs remains balanced and does not make traditional development programs unfeasible, programs which are not conditioned on criteria of direct and immediate relevance. That is a danger embedded in the new model.
Given the impossibility of judging with perfect equity, recipes for making as few mistakes as possible must balance criteria of quality and relevance, and must include formal mechanisms that are democratic, impartial (judgments are always external), and ethically irreproachable. If mistakes are to be made, it is better to make them with good intentions.