Authorship in science: room to improve
In science and academia, authorship is everything: ‘publish or perish’. Those three words convey a very clear message about the pressure researchers and academics face: if you do not publish, you perish.
Some institutions have quotas: a minimum number of papers that a researcher should publish per year. Quotas such as these are problematic, but they are not the biggest problem.
Given that more and more researchers are competing for the same number of jobs (or fewer), the pressure to do more than the next person is high. The same can be said about research grants and fellowships. With funding success rates often below 10-15%, researchers must distinguish themselves from the rest of the pack.
The quantity vs quality dilemma
I don’t believe there is a direct relationship between quantity and quality. Some researchers publish very few papers, and they are of poor quality. Other researchers publish a large number of papers, and they are of excellent quality. However, everything else being equal, I do feel that continuously striving to publish more papers can lead to lower quality research, and it definitely leads to the dilution of authorship and scientific contributions.
Personally, I have felt this dilemma in action. As I progress in my career, I now have the opportunity to contribute to more projects. And, given the current incentives and pressures, I feel pressure to say yes to every opportunity that comes my way. I see successful researchers involved in a myriad of studies, collaborations and other professional activities. If I want to be like them, successful, surely I have to take on as much (or possibly more) than them.
But the more I take on, the more I feel I am not able to give each project and task my best effort. I increasingly feel spread thin, bouncing around from task to task. I often find myself wishing I had more time to read about a given topic. I would like to learn some of the new experimental or computational techniques, and see how they might help me solve new problems. But the pressure to do more, all geared towards publishing more papers, is chronic and overwhelming. It also makes me wonder how others are doing it. How can other researchers publish so many papers? Have they discovered a time vortex that doubles or triples the amount of time they have in a day?
A few consequences
Poorly conducted research
The pressure to publish more papers, often with no additional time devoted to research, can lead to poorly conducted research. Much has already been said about questionable research practices and irreproducible results, so I won’t reiterate those points here.
Poorly written and poorly prepared papers
Another consequence of the pressure to continuously publish more is that papers are at times rushed. I have reviewed well over 100 papers, many of which were submitted to top physiology journals. The thing that still surprises me is when a successful senior researcher authors a poorly prepared paper. On many occasions I wonder whether the researcher actually read the paper.
Dilution of authorship contributions
Given the pressures that exist, it is not surprising that some researchers actively seek out authorship, even when it is not warranted.
I have been forced on more than one occasion to add authors who did not deserve authorship. As the junior person, my objections were overruled. And yes, I am aware authorship is not a black and white issue. What I am talking about here is gift authorship, pure and simple. In one case, a well-funded senior researcher appeared to invest his research funds in equipment, graduate students and post-doctoral researchers in other labs. He contributed little to nothing to any part of the scientific process. Then, when the research was done, he expected a return on his investment, which in this case was authorship on the papers.
There are guidelines on what constitutes authorship, but these are open to interpretation. And at present, there is little incentive to stop the practice of gift authorship or, less nefariously, diluted authorship: where a researcher has contributed to a study, but only very little.
Do journals care how many authors appear on a paper? Probably not. Are journals in a position to ascertain whether someone does or does not deserve to be listed as an author? No.
Do universities and research institutes care how many authors appear on a paper? Probably not. They simply want someone from their institute to be listed on a paper because it can then be counted as output for them. Are universities and research institutes in a position to ascertain whether someone does or does not deserve to be listed as an author? Probably not.
Do funding agencies … you get the idea.
The problem is that the only people who actually know who deserves to be an author on a paper are the researchers themselves.
Given that all researchers know the key bean being counted by the bean counters is the number of papers you publish, there is a strong motivation to be listed as an author on papers, even when the contribution was very small.
A person can do 80% or 0.8% of the work on a study and still be a named author. In my field, it can be assumed that the first and last author contributed substantially to the study. But what about the other authors? How much did they actually contribute?
To be named as a middle author on a paper where you contributed 30-40% is, from a bean counting perspective, equivalent to a paper where you contributed 3-4%. Thus, unless a person is driving the research and will be the first or last author, there is little incentive to work harder. Yes, yes, I know lots of people who do drive the research, me included. But in a Darwinian, survival-of-the-fittest model of science, it would be more profitable, given the current incentive system, to devote the least amount of time to the largest number of papers. And if collaborators have figured this out, they could include each other on each other’s papers and greatly increase their research output for very little additional effort.
While this is an extreme view, one that only applies to a minority of researchers, a less extreme version likely applies to a good chunk of researchers, whether they are conscious of it or not.
I don’t want to sound overly negative, but I don’t see things changing any time soon. The pressure to publish took decades to get to this intensity. It will take time to change the policies (and people) that keep the current incentives in place.
The people who are in a position to create change got there because they found success in the current system and its incentives. Thus, they may not want major changes to the game they know so well.
Universities and research institutions are ranked in part by the number of papers they put out, especially in what are considered ‘top journals’. Some even hand out monetary prizes for each paper published in one of these top journals, the ones counted by the university ranking algorithms. Universities are literally dangling money in front of researchers in order to move up the ranks of their silly list. But rankings are big money when it comes to attracting new students, so a university’s bottom line ends up putting undue pressure on researchers. In some countries, universities also get money from the government for each paper their researchers publish. This creates a huge top-down pressure to publish more. Major changes will have to take place for these systems to change.
Another major problem that needs to be tackled is that of assessment. How are we to assess researchers if we cannot simply rely on their output? Some argue we should also consider the impact of their work. But impact is hard to assess. And if a researcher is named on five times more papers than the next person, despite contributing little to those papers, it surely increases the chances that one of them will lead to something that can be highlighted as a marker of impact. Personally, I would rather researchers be assessed on the quality of their work. Indicators of quality include the registration of research protocols, making code and other analysis tools publicly available, and complying with reporting quality checklists. Then, when all of these things are in place, impact and output can be assessed. Without a foundation built on sound research practices, there will be room for questionable research practices to creep in.
An afterthought…
But, change does not always have to be big to have a major impact.
An idea struck me when I was preparing this post. Currently, it is nearly impossible to determine a researcher’s contribution to a paper where they appear as a middle author. In fact, it is not always clear how much the first or last author contributed. Thus, as a way to remove this ambiguity, it would be trivial to ask authors, when submitting their paper, to numerically specify each author’s contribution.
Each paper would have a 100% contribution to distribute amongst its authors. Thus, it would become clear whether a person listed as an author contributed 0.1%, 1%, 10% or 100%.
Who would decide on these percentages? The people who are in a position to make this judgment: the authors themselves.
It is true that, if such a system were in place, people would use questionable tactics to try and get additional percentages assigned to them. But this would likely be the exception.
Another way to implement this idea would be to send out a survey to each author once a paper is submitted. Each author would be asked to independently distribute the 100% across all authors. Then, once all authors have responded, the median value (or some other clever approach) could be used to compute each author’s contribution to the paper.
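To make the idea concrete, here is a minimal sketch of that aggregation step. The author names, survey numbers and the final rescaling are all hypothetical; the only part taken from the text is the rule itself: each author independently splits 100% across all authors, and the per-author median is used as the consensus.

```python
from statistics import median

def aggregate_contributions(surveys):
    """Combine independent author surveys into one contribution split.

    `surveys` maps each respondent to their proposed allocation:
    a dict of author -> percentage, summing to 100.
    The per-author median is taken across all surveys, and the medians
    are then rescaled so the final split sums to 100% again.
    """
    authors = list(next(iter(surveys.values())))
    medians = {
        a: median(alloc[a] for alloc in surveys.values()) for a in authors
    }
    total = sum(medians.values())
    return {a: round(100 * m / total, 1) for a, m in medians.items()}

# Hypothetical three-author paper; each author submits their own split.
surveys = {
    "AB": {"AB": 50, "CD": 30, "EF": 20},
    "CD": {"AB": 45, "CD": 40, "EF": 15},
    "EF": {"AB": 50, "CD": 30, "EF": 20},
}
print(aggregate_contributions(surveys))  # {'AB': 50.0, 'CD': 30.0, 'EF': 20.0}
```

Note that the rescaling matters: medians taken column by column will not, in general, sum to exactly 100, so some normalization (or "other clever approach") is needed before the numbers are published.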
While not perfect, such an approach would go a long way to leveling the playing field. A person with 10 papers, each with a 70% contribution, could be compared to a person with 10 papers with a 40% contribution and 20 papers with a 5-10% contribution.
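The comparison above can be worked through with hypothetical numbers (taking 7.5% as the midpoint of the 5-10% range):

```python
def weighted_output(contributions):
    """Total contribution-weighted output, in 'paper-equivalents':
    the sum of a researcher's contribution percentages, divided by 100."""
    return sum(contributions) / 100

# Hypothetical records matching the example in the text.
deep = [70] * 10                 # 10 papers at 70% contribution
broad = [40] * 10 + [7.5] * 20   # 10 papers at 40%, 20 papers at ~7.5%

print(weighted_output(deep))   # 7.0 paper-equivalents
print(weighted_output(broad))  # 5.5 paper-equivalents
```

Under a raw paper count, the second researcher looks three times as productive (30 papers vs 10); under contribution weighting, the first comes out ahead.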
Importantly, this approach somewhat removes the grey area of whether or not someone contributed enough to be listed as an author. Someone who has done very little will simply have a very small percentage attached to their name. And in a paper with 100 authors, yes, some authors will necessarily have contributed less than 1% to the study.
So maybe journals should do away with asking for Author Contributions to be listed based on specific research items. Rather, they should simply have a list of initials and percentages. Even more dramatic would be to have the percentage appear in the list of authors on the first page!