Best practices for assessing publications.
In science and medicine, understanding the historical context and cornerstone findings of a field or topic is crucial to developing new hypotheses, unlocking next-level insights, and informing strategic planning such as pipeline development. The leading approach for exploring past and current knowledge is the systematic literature review; however, with the advent of the internet and the overwhelming expansion of for-profit journals and alternative publication methods, it has become difficult to identify the optimal inclusion criteria for an impactful review.
In a world migrating toward data-driven decisions, it is crucial to have ‘good’ data supporting those decisions. Now, more than ever, understanding what ‘good’ means in science and data is an imperative question. This article seeks to evaluate the utility of commonly used strategies for choosing which publications to incorporate into a literature review and to provide guidance on best practices for assessing publications and their data for that purpose.
A literature review is a compilation of key published information regarding a specific topic, often focused within a specific period of time. Dissemination of information in the form of literature reviews is a pivotal part of the scientific process—it allows new researchers to quickly gain a high-level understanding of a field, and it compiles and condenses potentially decades of research into an easily digestible format. For life sciences companies (e.g., clinical trial sponsors, CROs, etc.), literature reviews serve two main purposes: 1) to ‘get a leg up’ on the competition by understanding the present and potential future state of a field, and 2) to support strategic decisions within pipeline development.
Systematic literature reviews are a specific type of review that utilizes a set of defined and reproducible ‘rules’ developed prior to undertaking the review to ensure a rigorous methodology and to minimize bias. One approach that has been used to date to determine the value of a publication is the calculated impact factor.
The Impact Factor was initially developed by Eugene Garfield at the Institute for Scientific Information (ISI). Conceptualized as a way to track the citations of individual publications, Garfield argued that the index would make it easier for scientists to find relevant articles and assess how the scientific community at large received various publications.1 Impact Factor, as compiled today by Clarivate Analytics, is the number of citations received in a given year to a journal’s publications from the previous two years, divided by the number of “citable” items the journal published in those two years.
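As a concrete illustration of that calculation, here is a minimal sketch in Python. The citation and publication counts below are invented for the example, not real journal data:

```python
# Hypothetical illustration of the Impact Factor calculation.
# The 2021 Impact Factor = citations received in 2021 to items published
# in 2019-2020, divided by the 'citable' items published in 2019-2020.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# Suppose a journal published 150 citable items in 2019 and 170 in 2020,
# and those items were cited 800 times during 2021:
if_2021 = impact_factor(citations_to_prev_two_years=800,
                        citable_items_prev_two_years=150 + 170)
print(round(if_2021, 2))  # 2.5
```

Note that the numerator counts citations to *all* content, while the denominator counts only “citable” items—which is precisely the loophole discussed below for journals that publish many reviews and letters.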
In essence, if a journal published papers that were then cited by many other publications, the Impact Factor would be higher. While Garfield’s intentions were good—creating a systematic approach to assess the inherent utility of published works—there were unintended consequences to Impact Factor popularity. The scientific community began to uphold the Impact Factor of a journal as the ‘gold standard’ metric when assessing the quality of research and researchers alike.
The heavy reliance on Impact Factor has drawn criticism over the years for many reasons, though the most salient issues are as follows:
The Impact Factor calculation only accounts for citation frequency of articles within the first two years post publication.2 Fields and journals that generate articles that get cited over a longer period of time rather than immediately are at a serious disadvantage based on the two-year time limit.
There are several types of published articles, such as review articles and letters to the editor, that are not counted in a journal’s total article count but do count toward the journal’s citation total. If a journal publishes many of these article types, its Impact Factor may be dramatically inflated.2
It is perhaps not surprising that some fields can generate large datasets faster than others. This may give researchers publishing in these fields an artificial boost in the number of publications they can generate in a short amount of time. For instance, researchers in computational modeling may be able to generate one large data set that generates multiple journal articles in quick succession. These researchers would be likely to publish in the same journals and cite their own previous works along with the works of others in the field.
Meanwhile, clinical journals may have low citation counts in part because the intended audience does not generally publish prolifically, and because clinical research timelines tend to be longer, often with years between publications.
This turnaround-time bias puts many journals catering to specific fields at a disadvantage compared with fields that have higher citation counts. The good news here is that Clarivate Analytics is actively trying to mitigate this particular issue with the 2021 launch of the new Journal Citation Indicator (JCI), meant to account for the different rates of publication and citation across fields; however, critics of the Impact Factor maintain that this new approach has its own potential issues and can be misused.3
Anyone who has gone through the peer-review process for their own published work can tell you that the peer reviewers often fundamentally shape the end product prior to publication. In some cases, reviewers may request certain citations be incorporated into an article before it can be published. Researchers often wear many hats in their careers. A reviewer may also be a journal editor, or the author of a recommended paper, which is an inherent conflict of interest.
While fields such as cellular biology and physics tend to disseminate new information almost exclusively through journal articles, social science fields such as sociology and history are more likely to use alternative dissemination strategies. Books, wikis, and the like are completely excluded from Impact Factor calculations, which negatively impacts the humanities specifically.
While the Impact Factor was a good idea in theory, the world of science has become too vast and our publication methodologies too diverse to depend on this singular metric in judging good science.
Closely related to the Impact Factor is the number of citations an individual publication accumulates over time. Many researchers consider this a treasured metric of the utility of published findings. The thought process is that if many other researchers find the work important enough to warrant a citation, then it must be a pivotal work, in theory making it ‘good science.’
Unfortunately, this assumption discounts the human element inherent in journal-based publication. Famous or popular researchers will often be cited far more often than others. Papers that feature a ‘hot topic’ or new and exciting methodology may be incorporated simply to capitalize on public interest. Friends and colleagues may cite each other as a courtesy. The human element makes it especially critical to assess the research thoughtfully and rigorously.
To conduct a productive systematic literature review that considers more than just the Impact Factor or the frequency an article has been cited, it is important to begin with a clear understanding of why the review is needed, such as the needs of a potential audience or the data required for a pending decision, and to formulate a well-structured research question before beginning. Sometimes the most challenging task of a literature review is determining the review’s focus and creating a well-defined question the research aims to answer. This is an important step that will guide many aspects of the review and streamline an often-overwhelming process.
While there are several strategies for developing a clear and focused research question, in clinical research and development, using the PICOTS (Patient Population, Intervention, Comparator, Outcome, Timing/Setting, Study Design/Study Characteristics) framework is helpful.4-6 Each element in the PICOTS framework focuses the question, ultimately yielding the most applicable search results. Additionally, well-defined PICOTS reduce the risk of potential bias. Common examples of bias in research include selection bias, interview bias, recall bias, citation bias, and performance bias. Common bias mitigation strategies include blinding, randomization, and increased sample sizes.5
After the question is clearly defined, it is crucial to know how and where to search for information. First the well-defined research question needs to be translated into keywords (Hint: words used in your PICOTS framework work well!).6 Having descriptive and relevant keywords and phrases is the cornerstone to an effective search and using synonyms and alternative text often elicits more material.
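One lightweight way to combine PICOTS keywords and their synonyms into a boolean search string is to OR synonyms together within each element and AND across elements, as is typical for PubMed-style queries. The sketch below is illustrative only; the keyword groups are invented examples, not a recommended search strategy:

```python
# Sketch: build a boolean query from PICOTS keyword groups.
# Synonyms within an element are OR'd; elements are AND'ed together.

def build_query(keyword_groups: list[list[str]]) -> str:
    clauses = []
    for synonyms in keyword_groups:
        # Quote multi-word phrases so they are searched as exact phrases.
        quoted = [f'"{term}"' if " " in term else term for term in synonyms]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

query = build_query([
    ["type 2 diabetes", "T2DM"],      # Population
    ["metformin"],                    # Intervention
    ["placebo", "standard of care"],  # Comparator
    ["HbA1c", "glycemic control"],    # Outcome
])
print(query)
# ("type 2 diabetes" OR T2DM) AND (metformin) AND (placebo OR "standard of care") AND (HbA1c OR "glycemic control")
```

Keeping the query construction explicit like this also makes the search reproducible, which is a core requirement of a systematic review.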
Next, it is important to set some search boundaries before beginning research, as they establish the inclusion and exclusion criteria for the literature review. Developing inclusion and exclusion criteria will help determine which information is relevant to the review results. Some researchers suggest defining inclusion and exclusion criteria for each PICOTS component to reduce potential bias. Standard inclusion and exclusion criteria include language, date, exposure of interest, geographic location, participants, peer review, reported outcome, setting, study design, and publication type.7-9
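Applied programmatically to a list of candidate records, this screening step might look like the following sketch. The record fields, thresholds, and study designs are hypothetical assumptions for illustration, not a standard schema:

```python
# Sketch: screen candidate articles against simple inclusion criteria
# (date window, language, peer review, study design). All values here
# are illustrative; a real review would define its own criteria up front.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    year: int
    language: str
    peer_reviewed: bool
    study_design: str

def meets_criteria(a: Article) -> bool:
    return (a.year >= 2015                             # date window
            and a.language == "en"                     # language
            and a.peer_reviewed                        # peer review
            and a.study_design in {"RCT", "cohort"})   # study design

candidates = [
    Article("Trial A", 2020, "en", True, "RCT"),
    Article("Old review", 2009, "en", True, "review"),
    Article("Preprint B", 2022, "en", False, "RCT"),
]
included = [a.title for a in candidates if meets_criteria(a)]
print(included)  # ['Trial A']
```

Defining the criteria as an explicit, documented rule before screening begins—rather than judging each paper ad hoc—is what keeps the review reproducible and minimizes selection bias.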
Once the criteria are set, it is time to start searching. There are web-based search engines like PubMed or Google Scholar and numerous databases available for literature searches, including evidence-based databases for integrated information available as systematic reviews and databases for articles initially published in peer-reviewed journals. At the same time, it is essential to be as comprehensive as possible; knowing where to go and where not to go for information is vital. As no one database can explore all the literature in a field, several different databases should be utilized to provide the most comprehensive and balanced review.6,9
The National Institutes of Health (NIH) recommends that, at a minimum, PubMed/MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials be searched.9 Having a list of journals and databases relevant to your research questions helps keep the process organized and moving forward.
Extracting information from research papers can be tedious and time-consuming, and it helps to have guidance on how to approach and break down a research paper to gather and record information efficiently and effectively. It is pertinent to take notes and document the process and progress while executing research. Below is a suggested method for the identification and validation of article selection for literature reviews.
Typically, the abstract would have already been reviewed in the process of deciding whether or not the article meets the literature review inclusion and exclusion criteria. However, if the abstract has not been reviewed yet, it is important to read the abstract, if available, to obtain a high-level overview of why and how the research was conducted and the outcome of that research. Between the title, abstract and any listed keywords the authors have chosen to provide, you should be able to determine whether a publication may be a good fit for your review.
We would recommend skipping the introduction section when determining whether a publication should be included in a literature review. The introduction of a research paper often includes an overview of the research topic under investigation and why the research was conducted, and it may summarize prior research; however, it will rarely offer evidentiary support for how well the current research was performed or how relevant the work is to your review topic.
At this point in the assessment of the article under review, there is a general understanding of the topic of investigation, why and how the research was conducted, and the implications of the findings. While reading this section, it is important to look for red flags that indicate poor-quality research, including incomplete or unclear methods, small sample sizes, or a missing description of the statistical analysis performed. Ask yourself: Do I understand the experiment or intervention well enough to repeat it? The methods section of a research paper should go into enough detail to allow another researcher to reproduce the research. If the answer is no or unclear, consider excluding the publication from your review.
Data visualization can provide readers with a general understanding of what was measured during the investigation without reading through multiple paragraphs. If the methods appear to be sound, the next step is to look at the data representations in the article, including tables and figures. Tables and figures often contain bulk data without the bias of interpretation from the investigators.
It is important to consider whether the figures and tables match up with the methods used—all methods used should be reflected somewhere in the figures of the publication. Figures should also reflect the statistical analysis that has been completed, with error bars, p-values, or asterisks denoting significance in the footnotes. Review all figures before reading the researchers’ interpretation of the data to get an unbiased picture of the outcomes. Only once you have a solid understanding of the data should you move on to the results section to see if your takeaways align with the authors’.
The results section should describe the data as displayed in the figures and tables and the statistical analysis used to interpret the data into findings. Read the results section to fill in any gaps in understanding the high-level details of the research paper, and to determine whether your understanding of the data aligns with that of the authors. If your interpretation of the data does not match the authors’, but you have sufficient subject matter understanding of the methodology applied, you may consider not including the article in your review.
The conclusion, often called the discussion in a research paper, summarizes the results of the research, discusses limitations, and describes how the research was conducted and how the findings expand, oppose, or validate current knowledge of the topic under investigation. Occasionally the discussion will also include the implications of the findings. It is important to note any limitations of the research described in this section, as they can give insight into the quality of the research. If, for instance, a clinical study started out with a diverse patient population but most of the patients who completed the study were white, non-Hispanic, the discussion section should note this as a potential limitation to interpreting the results broadly. This would not be a ‘deal breaker’ for inclusion in your review if the article meets your other criteria, but it should be taken into consideration and potentially noted in your review.
Lastly, review the citations/references and pay attention to the references used in the introduction. References used in different sections of a publication often signify different things to the reader. For instance, references used in the introduction can give a baseline for comparing findings of the current research with historical research. The references/citations section can be a good source to find additional articles to include in your literature review that you may have missed in your initial keyword search.
Once you have completed these five steps and finalized your short list of publications that meet your literature review criteria, it is crucial to go back and read the papers in their entirety to ensure full comprehension. Taking notes regarding the key findings, limitations and implications of each article will provide you with a great starting point to outline your review.
Literature reviews are a pivotal part of the scientific process, allowing researchers to quickly gain a high-level understanding of a field, and giving life science businesses the information to make strategic, data-driven decisions. While there are ‘rules’ to follow to ensure rigorous methodology and minimized bias when conducting (or authoring) a literature review, the background information, the steps for creating a well-structured research question, and the methodology for effectively examining publications provided here can assist with adherence to those ‘rules’ and provide a straightforward, structured approach to the literature review.
Emily Moyers, Resource Manager, April Purcell, Consultant, Clinical Operations, and Cortney Miller, Senior Manager, Consulting Operations; all with Halloran Consulting Group