SRDR+ as a Tool to Teach Evidence Synthesis Principles and Methods


James Scott Parrott | August 26, 2021






When I first began teaching the principles and methods of evidence synthesis (i.e., how to conduct a systematic review) over a decade ago, I cast around to find an online platform learners could use to extract information from an article for their systematic review. At the time, I was not particularly mindful of the andragogical utility of any of the tools I considered. Rather, I was focused on finding something that


  1. Had a relational database structure
  2. Was customizable while still providing enough initial structure to support someone new to the process
  3. Was relatively inexpensive (free was ideal).

While something like Microsoft Excel® (which the prior instructor had been using) fit the bill on the last count, it was annoyingly fussy to set up a sheet or sheets to accommodate the many-to-one structure required to capture multiple measures per outcome, multiple outcomes per arm, multiple arms per study, and so on.


That’s when a colleague asked if I had ever heard of the Agency for Healthcare Research and Quality (AHRQ)-funded Systematic Review Data Repository (SRDR). I tried it, loved it, and ultimately adopted it for my course. Since that time, I have had over fifty learners successfully carry out their systematic review projects using SRDR (and now, SRDR+).


I have realized since that SRDR+ can be used as a scaffold to teach key concepts in evidence synthesis.


Part of the challenge of authoring a systematic review and meta-analysis is that you need to have a clear sense of what you need before you need it. In other words, we canonically imagine that the steps of the evidence synthesis process flow from our PICO to extraction template construction, to data extraction, to data analyses, and finally to the conclusion (the story, if you will). In fact, the process is much less linear.


  • How and why we formulate a PICO (or other type of evidence synthesis question) depends on where we imagine our conclusion—or story—will “fit” within the larger context of what is known and what is done. So, it is our (at this point) vaguely imagined story that informs our PICO.
  • When structuring the data extraction template, we must have a sense of what pieces of information we need to collect. Generally, learners are at a loss as to precisely what information their future story demands.
  • At the data extraction stage, we need to have a clear sense of the best format for this information (i.e., Is free text fine or should the information be collected in a more structured or discrete format?).
  • At the data analysis stage, the type of story we are aiming for influences our analytic methods (i.e., Is this a quantitative, qualitative, or mixed methods analysis? Do we anticipate effect modifiers? Might aspects of the context of the intervention have an influence?)

In other words, the imagined contours of our final story influence every step of the process (see figure).


Figure 1 - Systematic Workflow

So, how can using SRDR+ as a teaching tool help learners navigate these challenges?


In a word: "scaffolding." Scaffolding is the educational strategy of breaking learning into discrete steps and, importantly, providing a tool or structure for each step.


Taylor and Hamdy argue that scaffolding is “necessary because the sheer volume and complexity of knowledge to be acquired often leaves the learner standing on the threshold (in a state of liminality)”. In other words, learners new to the complex process of evidence synthesis may be bewildered by the challenge of determining where to start. A tool like SRDR+ provides a structure that can be leveraged to cut through some of this confusion (obviously, with direction).


But how? I’ve found that the structure of the SRDR extraction template forces learners to approach articles in a new way—reading analytically.


What types of information do I need? Many post-professional graduate learners seem to have only the vaguest notion of the structure of research articles, so the idea of extracting particular types of information from an article is a bit mystifying. “Information” is just a big, nebulous blob. Early in the course, when we begin talking about data extraction, the first step is to help learners understand the basic structure of research articles, and we’ve found that a helpful way to do that is to use the default tab structure of the SRDR+ extraction template as a framework. For learners who have developed the bad habit of “reading research” by glancing at abstracts and (maybe) Discussion sections, learning how to analyze (and begin to evaluate) research articles by focusing systematically on study design, arms, arm details, sample characteristics, outcomes, and results (all tabs in SRDR+) is a new experience. The tabular structure of the SRDR+ data extraction forms helps learners grasp the idea that research articles have a fairly standard structure. Rather than expect that learners know how to analyze an article, we demonstrate the act of analytically reading one by walking through the tabs of the extraction template and identifying the corresponding section of an example article. This helps learners grasp that there are standard types of information and that they appear in much the same sections of most research articles.


What questions should I be asking of the study? Even knowing that research articles have a reasonably standard structure and communicate common types of information does not mean that learners know which pieces of information they will need to be able to carry out their project. We typically run into two kinds of misunderstandings.


  • First, most learners initially think that they need to try to capture all the information authors report (e.g., extracting all arms even though some are of no interest for their project, or extracting all outcomes that authors report rather than only those needed to answer their PICO question). Demonstrating to learners how to set up the Suggested Arms and Suggested Outcomes in the template building process helps them focus on only those arms or outcomes that matter and ignore those that do not.
  • Second, learners are generally insensitive to potentially crucial aspects of intervention delivery or exposure characteristics that may affect outcomes. For instance, how might the setting of the treatment delivery make a difference? Might the timing of the intervention make a difference? Helping learners ask (and extract) detailed information on the intervention or exposure in the Arms Details tab sensitizes them to the notion that not all intervention approaches are “created equal” and that these differences can have important implications for reported results.

How should I capture information from the article? For most learners, the default approach to extracting information from articles is to capture it in free-text format. While this is unavoidable for some types of information, it will prove incredibly inefficient, even a hindrance, later in the process when the learner seeks to find patterns in the data (e.g., via subgroup analysis). We use the range of question structures available within SRDR+ to help learners think through possible classifications of different types of information. For instance, entering a free-text description of the treatment setting is likely to be much less useful analytically than creating a question (or series of questions) that captures distinct setting characteristics (e.g., clinic, hospital, community).
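A tiny sketch can make the analytic payoff concrete. Assuming (hypothetically) that a treatment setting was extracted as a discrete category rather than free text, a subgroup summary becomes a one-pass grouping operation; the study names, settings, and effect values below are invented for illustration, not drawn from any real review.

```python
from collections import defaultdict

# Hypothetical extracted records, with "setting" captured as a discrete
# category. Had it been free text ("outpatient rehab dept", "local VA
# clinic", ...), manual recoding would be needed before any grouping.
studies = [
    {"id": "Smith 2018", "setting": "clinic",    "effect": 0.42},
    {"id": "Lee 2019",   "setting": "hospital",  "effect": 0.31},
    {"id": "Diaz 2020",  "setting": "clinic",    "effect": 0.55},
    {"id": "Chen 2017",  "setting": "community", "effect": 0.12},
]

# Group effects by setting for a simple subgroup summary.
by_setting = defaultdict(list)
for s in studies:
    by_setting[s["setting"]].append(s["effect"])

subgroup_means = {k: sum(v) / len(v) for k, v in by_setting.items()}
```

The point is not the arithmetic but the shape of the data: discrete fields make subgroup questions mechanical, while free text makes them a recoding project.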


None of these challenges will be unfamiliar to those who teach and train others in evidence synthesis methodology. We have found, moreover, that merely telling learners or trainees the right way to do things is not nearly enough. Learners need to be shown how to avoid the typical pitfalls and provided with resources that channel them away from bad habits, and SRDR+ is a highly effective tool for doing this.


If you haven’t already done so, please give SRDR+ a shot. If you have any questions, contact us on this blog, via email (srdr@ahrq.hhs.gov), or via our Twitter page!


Disclaimer: The Systematic Review Data Repository Plus is funded by the Agency for Healthcare Research and Quality (AHRQ). Authors of this blog post are solely responsible for its content, which does not necessarily represent the views of AHRQ or the U.S. Department of Health and Human Services (DHHS). Readers should not interpret any statement in this blog post as an official position of AHRQ or of DHHS.




James Scott Parrott, MPH, CPH
SRDR Administrator
Brown Evidence-based Practice Center
Brown University School of Public Health




Enhancements to SRDR+, the Next-Level Replacement For SRDR

Bryant Smith | April 14, 2021


SRDR+ is a free, open-source, online, collaborative platform for extracting systematic review data, archiving the data after the review is complete, and sharing the data with others (including other systematic reviewers, clinical guideline developers, healthcare professionals, patients, and health policy makers).

Over the last couple of years, we’ve made several improvements to SRDR+. These advancements were based, in large part, on feedback from users of both SRDR+ and its predecessor (SRDR) regarding what they consider the main advantages of the newer platform. We summarize some of the main advancements below.

SRDR+ allows safe and secure data extraction for various types of systematic reviews
When data extractors are extracting data, they need not worry about losing unsaved extracted data. All data entered into SRDR+ forms are autosaved.


“The autosave feature on SRDR+ is a major advantage. On SRDR, there were times where I lost a page of outcomes due to a malfunction in the "Save" button. SRDR+ is also a lot faster in loading tabs and processing data than SRDR.”

Shivani Mehta, MPH candidate, Center for Evidence Synthesis in Health, Brown University School of Public Health


SRDR+ extraction forms can be tailored to meet the needs of various types of projects
We have made the data extraction form development capabilities in SRDR+ flexible so that the forms can be customized to various types of systematic reviews, such as those evaluating treatment effectiveness, diagnostic accuracy, and epidemiology. SRDR+ extraction forms support an array of question types, including text boxes, multiple-choice checkboxes, single-choice radio buttons, dropdown questions, and matrix formats. Entire sections of extraction forms can also now be added or removed.

We’ve also introduced data field dependencies, so that subsequent questions appear to data extractors based on their responses to previous questions. Data field dependencies can be used for any question in any section of SRDR+ extraction forms.
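The dependency idea can be sketched in a few lines. This is a minimal illustration of the concept only; the question texts, field names, and condition structure below are hypothetical and do not reflect SRDR+'s actual data model.

```python
# Hypothetical sketch of a data field dependency: a follow-up question is
# shown only when the answer to an earlier question meets a condition.
questions = [
    {"id": "q1", "text": "Was a risk-of-bias tool applied?", "depends_on": None},
    {"id": "q2", "text": "Which risk-of-bias tool was used?",
     "depends_on": {"question": "q1", "answer": "Yes"}},
]

def visible_questions(answers):
    """Return the IDs of questions an extractor should currently see."""
    shown = []
    for q in questions:
        dep = q["depends_on"]
        if dep is None or answers.get(dep["question"]) == dep["answer"]:
            shown.append(q["id"])
    return shown
```

In practice this is what keeps extractors from answering questions that do not apply, which is exactly the streamlining the quote below describes.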

“Building and customizing extraction forms on SRDR+ is much, much more intuitive than it was on SRDR. I love the ability to add extra columns and rows to any question type. Also, allowing dependencies among questions is a wish come true – provides the ability to streamline extractions (not all analysts are great at knowing whether a question sometimes applies/sometimes not), and also allows for the use of multiple risk of bias tools (with a screener question at the beginning of the ROB tab).”

James Scott Parrott, PhD, Professor, Rutgers University School of Professional Studies


The new and improved Data Comparison Tool in SRDR+ is useful for teams that conduct dual independent data extraction of studies and thus need those data compared and adjudicated
The Data Comparison Tool shows users, side by side, discrepancies between data extracted by multiple team members and allows users to resolve those discrepancies efficiently.

“The comparison tool is worth the price of admission! It used to take me ages with SRDR to try to consolidate the data offline after first exporting it to Microsoft Excel. Now, with the consolidation tool, much more streamlined. I love, love, love this tool!”

James Scott Parrott, PhD, Professor, Rutgers University School of Professional Studies


The Outcomes section of SRDR+ allows for clear definition of Outcomes
The structured approach that SRDR+ uses to define outcomes aligns with the outcome definition structure used by study registries such as ClinicalTrials.gov. Defining outcomes on SRDR+ extraction forms clarifies the data extraction process, promotes consistency in how data are extracted, and helps to reduce bias and errors in the systematic review process.

“SRDR+ provides ample space and options to define and edit outcomes and their definitions, which is great!”

Monika Reddy Bhuma, BDS, MPH, Research Associate, Center for Evidence Synthesis in Health, Brown University School of Public Health


The Results section of SRDR+ Extraction Forms supports entry of outcome measure data for multiple time points and for comparisons within and between study arms
Data extractors can choose from pre-existing outcomes or add new outcomes. This helps promote consistency in the data extraction process across what are typically multi-person data extraction teams.

“The Between Arm Comparison section of SRDR+ allows for easy input of multiple comparisons, and more clearly distinguishes when two or more study arms are being compared.”

Wangnan Cao, PhD, Research Associate, Center for Evidence Synthesis in Health, Brown University School of Public Health


SRDR+ allows data export into Microsoft Excel or Google Sheets
Individual sections of SRDR+ data extraction forms (e.g., Design, Sample Characteristics, Outcomes, Results) are exported as separate sheets in the same Excel file or Google Sheets document for easy handling, linking, and formatting for analyses.

“I like the flexibility that SRDR+ provides for data exports. Having completed four meta-analyses using the long download formatting, I think that it works pretty well.”

James Scott Parrott, PhD, Professor, Rutgers University School of Professional Studies
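Once the sections are exported as separate sheets, a typical next step is linking them back together by study for analysis. The sketch below shows that join with two sections represented as plain lists of dicts; the column names ("study_id", "design", etc.) are hypothetical stand-ins, not the exact SRDR+ export headers.

```python
# Two hypothetical exported sections (one row per study / per result).
design = [
    {"study_id": 101, "design": "RCT"},
    {"study_id": 102, "design": "cohort"},
]
results = [
    {"study_id": 101, "outcome": "pain", "effect": -0.8},
    {"study_id": 102, "outcome": "pain", "effect": -0.3},
]

# Index the Design section by study ID, then attach it to each result row,
# mirroring a spreadsheet VLOOKUP or a database join across sheets.
design_by_id = {row["study_id"]: row for row in design}
merged = [{**design_by_id[r["study_id"]], **r} for r in results]
```

The same join works in Excel, Google Sheets, or any analysis package; keeping one sheet per section is what makes it possible.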


If you haven’t already done so, please give SRDR+ a shot. If you have any questions, contact us on this blog, via email (srdr@ahrq.hhs.gov), or via our Twitter page!

Disclaimer: The Systematic Review Data Repository Plus is funded by the Agency for Healthcare Research and Quality (AHRQ). Authors of this blog post are solely responsible for its content, which does not necessarily represent the views of AHRQ or the U.S. Department of Health and Human Services (DHHS). Readers should not interpret any statement in this blog post as an official position of AHRQ or of DHHS.



Bryant Smith, MPH, CPH
SRDR Administrator
Brown Evidence-based Practice Center
Brown University School of Public Health




SRDR+ Presents New Importing Capability for FHIR-formatted Citation Files for Systematic Reviews

Ian Saldanha | March 5, 2021


Supported by AHRQ, the Systematic Review Data Repository Plus (SRDR+) is working towards eventually becoming the first FHIR-enabled platform for conducting and sharing the results of systematic reviews. As part of this work, handling citations of included studies in systematic reviews just got easier!

Study-related citation information (author names, article titles, page numbers, and other identifying information) is crucial for identifying studies and referencing them in systematic review reports.

Once researchers are ready to screen and extract data from eligible studies, they must first import their citations into a platform such as SRDR+. Citation files come in a variety of file formats.

For the purposes of screening and subsequent data extraction, SRDR+ is now capable of importing citations saved in the following file formats: Research Information Systems (.ris), comma-separated values (.csv), a list of PubMed IDs (.txt), EndNote (.enw), and, newly supported, the HL7 FHIR file format.
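To give a feel for what one of these formats looks like under the hood, here is a minimal sketch of parsing an RIS (.ris) file. Real RIS files use many more tags and edge cases than this; the parser below handles just enough to illustrate the record structure (a two-letter tag, two spaces, a hyphen, the value, with `ER` closing each record), and the sample citation is invented.

```python
def parse_ris(text):
    """Parse RIS-formatted text into a list of records (dicts of tag -> values)."""
    records, current = [], {}
    for line in text.splitlines():
        # RIS lines look like "AU  - Doe, Jane": tag, "  - ", value.
        if len(line) < 6 or line[2:6] != "  - ":
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":            # end-of-record marker
            records.append(current)
            current = {}
        else:
            current.setdefault(tag, []).append(value)  # tags can repeat
    return records

sample = """TY  - JOUR
AU  - Doe, Jane
TI  - An example trial
PY  - 2020
ER  - 
"""
citations = parse_ris(sample)
```

Reference managers and screening platforms do this parsing for you, of course; the sketch only shows why RIS is easy to exchange between tools.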

Fast Healthcare Interoperability Resources (FHIR) is a standard developed by Health Level Seven International (HL7) that defines how healthcare information can be exchanged between different computer systems, regardless of how the information is stored within those systems. Furthermore, healthcare information saved in the HL7 FHIR file format can be secured so that only those with a need, and assigned privileges, to access the information are allowed to do so.

We anticipate that the HL7 FHIR file format will eventually become widespread in the field of evidence synthesis.

PICO Portal is an example of a systematic review platform that now supports the HL7 FHIR file format. SRDR+ users will be happy to know that they can now seamlessly import citation data from such platforms directly into SRDR+, where they can continue working on their systematic reviews.

Give it a shot! Try importing citations using the HL7 FHIR file format and let us know how it’s going. Contact us on this blog, via email (srdr@ahrq.hhs.gov), or via our Twitter page!

The Systematic Review Data Repository Plus is funded by the Agency for Healthcare Research and Quality (AHRQ). Authors of this blog post are solely responsible for its content, which does not necessarily represent the views of AHRQ or the U.S. Department of Health and Human Services (DHHS). Readers should not interpret any statement in this blog post as an official position of AHRQ or of DHHS.



Ian Saldanha, MBBS, MPH, PhD
SRDR Director
Assistant Professor, Brown Evidence-based Practice Center
Brown University School of Public Health




The newly updated Systematic Review Data Repository Plus (SRDR+)

Ian Saldanha | March 7, 2019


A lot of human capital and monetary resources are spent conducting systematic reviews. At the same time, there has been strong momentum towards promoting open science and data sharing. Technology can help us realize these ideals and improve the efficiency of the systematic review enterprise. A particularly inefficient systematic review step is data extraction from primary studies. Data extraction can be improved, for both doers and users of systematic reviews, through a data system that is robust, user-friendly, and amenable to sharing.

The Brown University Evidence-based Practice Center (Brown EPC) recently launched an updated version of the Systematic Review Data Repository. This new version, called SRDR+ (https://srdrplus.ahrq.gov), is a free, powerful, online, collaborative system for extracting and archiving study data during systematic reviews. To our knowledge, this is the only system of its kind that is free to anybody around the world. We therefore consider SRDR+ to be a community resource.

The new features of SRDR+



We have made various improvements to SRDR+’s functionality as well as its look and feel in response to advancements in systematic review methodology and technology. SRDR+ has the flexibility to meet the needs of various types of systematic reviews, such as those evaluating treatment effectiveness, diagnostic accuracy, and epidemiology.

SRDR+ also allows users to delete and/or add “tabs.” Tabs help organize questions related to various elements of a study, such as design, participant characteristics, and quality. In addition, users can click a button to load current versions of entire quality assessment tools, with their specific items as well as instructions for filling out those items. Users can also delete specific items.

SRDR+ also makes it easier to define outcomes clearly. The new version offers a structured approach that is consistent with outcome definitions in study registries such as ClinicalTrials.gov. This can provide clarity about the data that need to be extracted, promote consistency in how those data are extracted, and help minimize bias in systematic reviews.

SRDR+’s new Data Comparison Tool is perhaps the most anticipated and exciting addition to SRDR+. This automated tool displays data extracted by multiple members of the team side-by-side and automatically flags discrepancies that need resolution. Anyone who’s done this process manually knows how incredibly time consuming it can be.

Another major efficiency gain is related to the complete revamp of SRDR+’s underlying open-source code. This revamped code enables considerably faster page-loading and saving.

More than just a data extraction tool



SRDR+ isn’t just a tool for organizing a systematic review’s data extraction process. SRDR+ also functions as a repository of previously-extracted data. As of March 2019, the data extracted in more than 130 systematic reviews (for more than 13,000 primary studies) have been made public. This means that future systematic review teams working on similar topics can reduce countless hours spent on data extraction by re-using these data. This resource is especially invaluable for teams conducting updates of systematic reviews.

SRDR+ also allows users of systematic reviews, such as guideline developers, policy-makers, patients, and the general public, to access study data that might be relevant to their decision-making processes. A list of systematic review projects with data that are publicly available through SRDR+ can be found here: https://srdr.ahrq.gov/projects/published.

We encourage you to try the new version of SRDR+ here: https://srdrplus.ahrq.gov. To learn more or request a demonstration session, please email the SRDR+ team at srdr@ahrq.hhs.gov. You can also follow SRDR+ on Twitter (@SRDRPlus).

The Agency for Healthcare Research and Quality (AHRQ) has funded the development and maintenance of SRDR since 2012.



Ian Saldanha, MBBS, MPH, PhD
SRDR Director
Assistant Professor, Brown Evidence-based Practice Center
Brown University School of Public Health