Cases in context: levels of evidence
In thinking about how case reports "fit into" the types of evidence available to answer clinical questions, I thought it might be useful to do a quick "refresher" post on the kinds of literature available.
Many also turn to a graphic to represent how the levels of evidence fit together in terms of relative methodological strength (e.g. this evidence pyramid developed by the University of Washington Health Sciences Libraries, which was in turn adapted from this pyramid by the University of Virginia Health Sciences Library).
If you search for "evidence pyramids" or levels of evidence, you'll find there's a bit of "wobble" in how these are constructed -- the various authors arrange some of the levels differently, particularly at the bottom end of the pyramid.
So, here are the levels of evidence in rough hierarchical order, grouped into broad categories, starting at the top and working our way down (with links to additional definitions):
Summarizing/collating the "best" evidence (filtered for methodological rigor and relevance)
- systematic reviews and meta-analyses (also see "How to read a paper: Papers that summarise other papers")
- practice guidelines; consensus statements authored by groups of experts (e.g. NIH Consensus Development Program)
Primary literature, i.e. the evidence from actual clinical studies
- randomized clinical trials
- prospective or retrospective cohort studies; case control studies
- other observational studies (e.g. ecologic studies, cross-sectional designs)
- case series; case reports; reviews of reported cases (a case report or case series with a summary of other cases reported in the literature, usually accompanied by a table summarizing these other cases, e.g. this article from The Oncologist)
Though this is a rough approximation of how these types of evidence compare in relative quality, there are serious problems with treating these levels as "absolutes": the stated study design or article type provides only a rough indicator of potential quality. The true quality of a given study or article depends on the design, execution, and reporting of the study in the paper, as well as on how relevant it is to the question at hand.
Coming soon: we'll continue this discussion, considering the strengths and weaknesses of each type of evidence and where other kinds of literature fit in (e.g. traditional review articles, structured abstracts plus critique, textbooks, letters to the editor)...