Johns Hopkins Nursing Evidence-Based Practice Model


Forming Your Practice Question

If your question doesn’t fit the PICO framework, review the Formulating Your Research Question page in our Expert Searching Guide.

When setting out to do an EBP project, you’ll need a well-developed research question. The JHNEBP Model’s Appendix A, the PET Management Guide, supplies a checklist to ensure that you have thought through all the steps and have a strong team in place before you begin. PET stands for Practice Question, Evidence, Translation.

When framing the EBP question, consider ideas such as:

  • What is the problem, and why is it important to fix it?
  • What is the current practice?
  • What kinds of evidence or study types will help answer the question?

Is your question a background question or a foreground question?

Background Questions - These are usually broad and used at the beginning of a project. Background questions can be refined and adjusted as you continue to develop the search, and they frequently assist in identifying best practices.

Foreground Questions - These questions are focused, with specific comparisons of ideas or interventions. Foreground questions can provide specific evidence related to the research question, and background questions often turn into foreground questions as the review progresses.

This process can be identified in the JHNEBP Model, Appendix B - Question Development Tool PICO. After you’ve completed Appendix A and Appendix B, complete Appendix C - Stakeholder Analysis Tool. This form is used to identify key stakeholders that can support decision-making, serve as subject matter experts, or implement change.

Searching for Evidence

To find the evidence, you will need to search for it. Where you search depends on what resources your institution provides access to, but it is always best practice to search more than one resource, because different resources index different topics and journals.

Use your question framework or JHNEBP Question Development Tool to determine the major elements of your question. Think about how authors might write about these concepts. There may be many terms to describe just one idea.

Use a structured search approach, such as Searching for Evidence with ABCDE, to conduct a thorough, effective, and comprehensive literature search, and be sure to work with a librarian if you can.


Appraising the Evidence

Now it’s time to critically appraise and act on the evidence your search uncovered. The JHNEBP Model has several tools available to help you grade the evidence and see the process through to the finish line.

  • Appendix D - The Evidence Level and Quality Guide outlines five levels of evidence with quality ratings and describes each in a rubric.
  • Appendix E - The Research Evidence Appraisal Tool helps you decide if the evidence is quantitative or qualitative, and how to use that evidence to support your topic.
  • Appendix F - Sometimes you’ll find literature that is not primary research. Appendix F walks you through the steps of grading non-research evidence with the Non-Research Evidence Appraisal Tool.
  • Appendix G - You’ve read the research and appraised the evidence. Now it’s time to put it all together with the Individual Evidence Summary Tool.
  • Appendix H - The Synthesis Process and Recommendations Tool helps you make sense of the strength of the evidence toward a particular recommendation.
  • Appendix I - The Action Planning Tool ensures that you have a team in place to help you champion and implement change.
  • Appendix J - Finally, the Dissemination Tool guides you through ways you can disseminate your findings at conferences, in publications, in social media, and more.

Additional Tools for Critical Appraisal

In addition to the evidence levels and grading found in the JHN EBP Model Tools, you may find that other tools can offer additional guidance and understanding:

  • CASP Checklists: This set of eight critical appraisal tools is designed for use when reading research. It includes tools for Systematic Reviews, Randomised Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative Studies, and Clinical Prediction Rules.
  • Cochrane Collaboration’s RoB 2: Version 2 of the Cochrane risk-of-bias tool for randomized trials (RoB 2) is the recommended tool to assess the risk of bias in randomized trials included in Cochrane Reviews.
  • The JADAD Scale for Reporting Randomized Controlled Trials: Jadad, A. R., Moore, R. A., Carroll, D., Jenkinson, C., Reynolds, D. J., Gavaghan, D. J., & McQuay, H. J. (1996). Assessing the quality of reports of randomized clinical trials: is blinding necessary? Controlled clinical trials, 17(1), 1–12.
  • GRADE Working Group: The Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group began in 2000 as an informal collaboration of people interested in addressing the shortcomings of grading systems in health care. The working group has developed a common, sensible, and transparent approach to grading the quality (or certainty) of evidence and the strength of recommendations.
  • JAMA Series on Step-by-Step Critical Appraisal: Links to the ‘User’s Guides to the Medical Literature’ series of articles designed to promote incorporation of evidence into practice.
  • JBI Critical Appraisal Tools: JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance, and results of published papers.
  • Newcastle-Ottawa Scale: The Newcastle-Ottawa Scale (NOS) is an ongoing collaboration between the Universities of Newcastle, Australia, and Ottawa, Canada. It was developed to assess the quality of nonrandomised studies, with its design, content, and ease of use directed to the task of incorporating quality assessments into the interpretation of meta-analytic results.
  • OHAT Risk of Bias Rating Tool: The OHAT Risk of Bias Rating Tool can be used for human and animal studies.
  • Oxford Centre for Evidence-Based Medicine Levels of Evidence: The CEBM Levels of Evidence framework sets out one approach to systematizing this grading process for different question types.
  • SYRCLE’s Risk of Bias Tool: This tool is based on the Cochrane RoB tool and has been adjusted for aspects of bias that play a specific role in animal intervention studies.
  • U.S. Preventive Services Task Force: The U.S. Preventive Services Task Force (USPSTF) assigns one of five letter grades (A, B, C, D, or I). The USPSTF changed its grade definitions based on a change in methods in May 2007 and again in July 2012, when it updated the definition of and suggestions for practice for the grade C recommendation.