A guest blog post by Sabina Alam (@Sab_Ra), Director of Publishing Ethics and Integrity, Taylor & Francis Group
Nothing fosters collaboration like a global emergency, and this is the effect we’re experiencing with the COVID-19 pandemic too. Ethics approval procedures for research studies at various institutions have been accelerated (e.g. by the Health Research Authority), with guidance also developed by the WHO, and thousands of volunteers have put themselves forward for vaccine trials to help increase the chances of developing a vaccine for COVID-19 as quickly as possible.
Editors and publishers have prioritized COVID-19 related content to ensure there’s no undue delay in assessing and disseminating peer-reviewed articles, and many publishers are also collaborating to ensure the content is rapidly reviewed and also freely available.
Informing the public
The lay public, who are hungry for the latest research regarding COVID-19, usually rely on media sources to keep up with the latest findings. This is why it’s crucial that the findings are reported in the context of the particular study design, setting and size, taking confounders and limitations into account.
Since the COVID-19 pandemic began, many of my friends and relatives who aren’t involved with the medical/scientific research communities have been asking questions: when will “the vaccine reported on the news” be available to everyone (and why does it take so long to develop anyway)? Why are there polarized views about wearing a mask in public? Why isn’t their doctor recommending the “alternative” remedies being touted as the cure? Was COVID-19 engineered in a lab? Is it safe for their children to go back to school?
Most people are aware of the prevalence of “fake news” but not everyone is aware of how to spot it. And what if it’s not really “fake” per se, but instead reported in a way that prioritizes selling hope or fear over fact, overstating the research findings while ignoring their limitations?
How has the published research been assessed?
The decision to trust research findings involves being aware of the mechanism of assessment based on how the information has been disseminated – i.e. is it in a reputable peer-reviewed journal, or on a pre-print server/repository where it most likely hasn’t been peer reviewed? If it’s in a journal that conducts open peer review, then the reviewer reports and/or the editorial decision letter may be available too (e.g. F1000 Research, BMC series medical journals, research articles in the BMJ). Also, were there data availability requirements to help inform editorial decision-making? The Surgisphere scandal in particular has highlighted the importance of verifying data underlying reported research, and different journals have different data availability requirements to support verification.
COVID-19 has also led to an increase in submissions to journals, placing additional pressure on an already burdened peer review system. For this reason, some journals are adapting how they handle peer review of COVID-19 articles; to ensure transparency, journals should state any different assessment processes applied (e.g. on journal homepages, or via footnotes). COPE has also provided guidance on this, as it’s important that we don’t drop scholarly standards by prioritizing speed over verification.
Research study designs come in many shapes and sizes
By its very nature, clinical research involves many ways of answering different types of clinical questions – i.e. there are many different study designs, and each one comes with pros and cons. See the AMS press release labelling system (which has been adopted by Taylor & Francis and other publisher press offices globally) for a summary of these, but broadly these can be observational or interventional/experimental in design.
When considering the level of evidence a study offers, one of the first questions to ask is: do these research findings report correlation or causation? In other words, does the study show that there could be a link between specific factors (a link that hasn’t been measured directly), or does it show that a specific factor caused an outcome?
It’s also important to look at the strength of the findings – e.g. is the sample size adequate? If the study was conducted in humans, are demographic factors, age ranges, health status, and other variables accounted for and appropriate? Can the findings be generalised to a wider population? If not, is this reflected within the conclusions? In other words, are the conclusions in line with the available data?
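To make the correlation-versus-causation point concrete, here is a minimal, purely illustrative Python simulation (all numbers and variable names are invented, not drawn from any real study): a hidden confounder drives both an “exposure” and an “outcome”, so the two correlate strongly even though neither causes the other.

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: a hidden confounder (say, age) influences
# both an exposure and an outcome. The exposure never affects the
# outcome directly, yet the two end up correlated.
n = 1000
confounder = [random.gauss(50, 10) for _ in range(n)]      # e.g. age
exposure = [c + random.gauss(0, 5) for c in confounder]    # tracks age
outcome = [c + random.gauss(0, 5) for c in confounder]     # also tracks age

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(exposure, outcome)
print(f"exposure-outcome correlation: {r:.2f}")
# Strong correlation, zero causation: the association exists only
# because both variables share the same hidden driver.
```

This is exactly why observational studies report “adjusted” estimates: once the analysis accounts for the confounder, the apparent association between exposure and outcome shrinks away.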
Case reports and series
When COVID-19 was first being reported, we saw this in the form of individual clinical case reports or small case series. At that time, the disease hadn’t even been given a name, but physicians were recording and reporting unusual symptoms which raised the alarm that there might be a new virus. As we continue to understand the disease, case reports remain a valuable way of disseminating clinical observations within a clinical setting. It’s important to remember these reports are based on the observed outcomes of patients treated according to standard clinical decision processes. So, although a collection of case reports can provide valuable insights into understanding COVID-19, the “evidence” for how effective a particular clinical pathway was is limited to observation.
Simulation or modelling studies
Once COVID-19 was recognised as a new disease, the next main phase of managing it was to try to predict how the virus would behave and be transmitted within the population. This is where simulation or modelling studies come in, which is a particularly helpful study design in situations where we’re still trying to understand a disease. Within the UK, modelling data from Imperial College directly impacted the policies set out by the government on how to manage the pandemic.
Studies can also be qualitative, e.g. questionnaire-based studies, surveys etc, which convey a set of experiences, opinions, etc. The value of these studies is that personal level information can be focused upon, but the conclusions or the findings shouldn’t be extended beyond the level of evidence the study can provide.
Analysing observational data
Some larger quantitative studies will be observational – i.e. where the researchers aren’t applying any specific interventions, but instead are observing the cohort of participants within a particular setting at a given time (cross-sectional) or over a period of multiple time-points. These types of studies can only measure correlation, and so any conclusions of the study findings need to be reported within that context. In epidemiological studies, however, the study designs allow for much more comprehensive (often policy-influencing) results, as these will systematically measure the distributions, patterns and determinants of health and disease events in specified populations. This is one of the ways the track and trace system for COVID-19 works, by monitoring rising incidence of COVID-19 positive cases in specific areas.
To help us combat the disease, we need effective drugs and/or vaccines. As none yet exist for COVID-19, the race is on to develop them. As any new potential treatments need to be tested for safety and efficacy, clinical trials need to be conducted. These are studies (following protocols approved by ethics committees) which are designed to test new treatments on a health outcome under controlled conditions. A clinical trial comes in many forms, with the “ideal” being a randomised, placebo-controlled trial (RCT), as it’s designed to reduce bias and remove as many confounders as possible. But it’s not always possible to implement an RCT, and so the trial might not be randomised, or it might not have a control arm, etc. This means that although the trial might report interesting and high-impact findings, there will be more confounders and limitations that need to be accounted for.
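Why does randomisation reduce bias? A toy Python sketch (an illustration only, with an invented “age” attribute – nothing like a real allocation procedure) shows that randomly shuffling participants into two arms tends to balance a confounder across the arms without anyone choosing who goes where.

```python
import random

random.seed(0)

# Illustrative toy only: random assignment tends to balance confounders
# (here, a made-up "age" attribute) across the two trial arms.
participants = [{"id": i, "age": random.randint(18, 80)} for i in range(500)]
random.shuffle(participants)
treatment, control = participants[:250], participants[250:]

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

print(f"treatment arm mean age: {mean_age(treatment):.1f}")
print(f"control arm mean age:   {mean_age(control):.1f}")
# Neither arm was hand-picked, yet their age profiles come out similar,
# so any difference in outcomes is less likely to be down to age.
```

The same logic applies to confounders nobody thought to measure, which is what makes randomisation so powerful compared with observational designs.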
Because of the level of risk clinical trials come with, these are also carefully conducted in phases (typically up to 4), with the progression to the next phase being dependent on the success and safety of the previous phase. To ensure transparency, most medical journals will only publish results from clinical trials which have been prospectively registered in a suitable registry, which freely provides information about the protocol and intended primary outcomes. An example of current COVID-19 related studies can be found at the WHO International Clinical Trials Registry Platform.
Synthesizing all available evidence is another important way of putting the impact of separate research findings into context. This can be done via carefully designed systematic reviews which appraise all relevant findings related to a specific research question, using strict inclusion and exclusion criteria. The combined data derived from the systematic review can then be further statistically analysed using meta-analysis techniques to determine the statistical significance of the combined findings. To ensure transparency and avoid any unnecessary duplication of effort, researchers conducting systematic reviews are encouraged to register their studies in the PROSPERO database, which already includes a lot of COVID-19 related content.
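As a rough sketch of the idea behind meta-analysis (with entirely made-up numbers, and none of the rigour of a real systematic review), a fixed-effect meta-analysis pools study estimates by weighting each one by the inverse of its variance, so that more precise studies count for more:

```python
# Toy fixed-effect meta-analysis using inverse-variance weighting.
# The three (estimate, standard error) pairs are invented for
# illustration - e.g. they could be log odds ratios from three trials.
studies = [
    (0.40, 0.20),
    (0.25, 0.15),
    (0.55, 0.30),
]

weights = [1 / se ** 2 for _, se in studies]          # precise study => big weight
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                 # pooled standard error

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
# The pooled estimate sits closest to the most precise study, and its
# standard error is smaller than any single study's.
```

Real meta-analyses add much more (heterogeneity checks, random-effects models, publication-bias assessment), but the weighting principle is the core of how separate findings are combined.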
With the wealth of research going on across the globe at the bench as well as the bedside, and the race against time to understand the disease, it’s never been more important to report findings within the context of the actual strength of the data. This is not an exhaustive list of all the ways research is being conducted, but it reflects some of the main study types, all with the aim of leading us out of this pandemic.