Wednesday, February 06, 2008

An attending's perspective on information in medicine

This week's JAMA has an interesting piece written from the perspective of an attending physician, considering how the mentor's role in clinical medicine has evolved with the increasing availability of information (it mentions PDAs, UpToDate, and PubMed, among other things).

An excerpt:
It has become increasingly clear to me that with the information revolution in full throttle, the role of the clinical attending has changed drastically and continues to evolve. Besides using rounds to discuss many of the social, ethical, and professional issues surrounding a patient's care, I increasingly find myself teaching less about the current state of information and more about how things have changed and how our understanding of an illness or treatment has evolved to where it is currently. I teach about multiple portals—how there is no single way to approach a case and how the one we choose may not be the only or even the best strategy despite our attempts to get the facts right and review the relevant data. I have the distinct impression that my mentors possessed a degree of certainty that in hindsight I am not sure was warranted. In this era of evidence-based medicine, I am more likely to point out how scanty the evidence actually may be when making a decision. Although I may refer to the "classic" article in a particular field, all too often I will point out how in retrospect it looks much less convincing than when it was first published just 10 years ago. Rather than giving my team answers, I am more likely to ask them to formulate a question that interests them regarding a specific case, then investigate the data, and report back to the group. The group can then try to digest this information and place it in the context of the case at hand.
Reference:
Horowitz HW. The Interpreter of Facts. JAMA. 2008;299:497-498.


Thursday, November 01, 2007

October case posted: The evidence behind vancomycin dosing

The October 2007 issue of the JMLA is up in PubMed Central, including the October case study, "Approaching and analyzing a large literature on vancomycin monitoring and pharmacokinetics."

An excerpt from the case:

At morning rounds in your hospital's intensive care unit, a resident from the team presents a 55-year-old woman (weight 129 lbs) with a past medical history of multiple sclerosis, cerebellopontine angle meningioma, hypothyroidism, and a neurogenic bladder requiring a Foley catheter. This patient was transferred from her nursing home 3 days ago with a fever and altered mental status. Results from the nursing home bacterial culture of the patient's urine revealed Gram negative rods. Bacterial culture of blood drawn from her peripheral intravenous (IV) line at the nursing home indicated Gram positive cocci. Blood cultures redrawn upon hospital admission are still pending and require confirmation.

According to the patient's chart, she began empiric treatment at the nursing home with vancomycin (1,000 milligrams [mg] intravenously every 12 hours) and piperacillin-tazobactam (3.375 g IV every 6 hours) for urosepsis 4 days ago. The patient's current serum creatinine is 0.56 milligrams per deciliter (mg/dL) (normal range: 0.6–1.1 mg/dL) [1], and her estimated creatinine clearance is 104 milliliters per minute (mL/min) (normal range: 88–128 mL/min) [2]. Her current body temperature is 97.2° Fahrenheit. Today is day 4 of this patient's vancomycin and piperacillin-tazobactam regimen and hospital day 3.

In reviewing the plan for the next 24 hours, the attending physician notes that the patient currently has a standing order for a laboratory test of the vancomycin trough level in her serum, with the blood sample to be taken just prior to the next dose of the drug. On day 3 of antibiotic therapy, the patient's serum vancomycin trough level was 11 mcg/mL, and, on day 4, the trough was 18 mcg/mL. The institution's target range for the serum trough level of vancomycin is 5 to 20 mcg/mL.

The attending physician initiates a discussion with the team—including a fellow, three residents, a pharmacist, a dietitian, the unit's nurses, and you, as the team's librarian—about monitoring of vancomycin. The clinician queries the team about the rationale for the standing order for vancomycin trough monitoring. The residents indicate that they often order this lab test when a patient is receiving vancomycin in an attempt to ensure therapeutic effectiveness and to prevent adverse effects of the drug but are not aware of any documentation behind the practice. The pharmacist comments that clinical practice can sometimes evolve before supporting evidence exists and that standards of practice at a hospital may not always be supported by evidence from the literature. In response to this discussion, the group asks you to identify any evidence supporting or disproving the practice of routine monitoring of trough levels in patients being treated with vancomycin in the adult critical care setting. Figures 1 and 2 provide elaboration from the team's attending physician and pharmacist on the significance of this question to clinical practice on the unit.
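A quick aside on the numbers in that excerpt: the reported creatinine clearance estimate of 104 mL/min is consistent with the Cockcroft-Gault equation, although the case itself doesn't say which formula was used, so treat that as my assumption. Here is a minimal Python sketch of the calculation for anyone who wants to check it:

# Rough check of the estimated creatinine clearance in the case excerpt,
# assuming the Cockcroft-Gault equation (the case does not name the formula).
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    # Cockcroft-Gault estimate of creatinine clearance, in mL/min
    crcl = ((140 - age_years) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

weight_kg = 129 / 2.2046  # 129 lb is roughly 58.5 kg
estimate = cockcroft_gault(55, weight_kg, 0.56, female=True)
print(round(estimate))    # ~105 mL/min, in line with the 104 mL/min reported
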
Additional discussion to follow soon!


Reference: Lee P, DiPersio D, Jerome RN, Wheeler AP. Approaching and analyzing a large literature on vancomycin monitoring and pharmacokinetics. J Med Libr Assoc. 2007 Oct;95(4):374-380.


Friday, September 28, 2007

Clinical Guidelines and Evidence-Based Medicine

Several medical bloggers have recently been writing about clinical practice guidelines in the context of evidence-based medicine.

First, a refresher - clinical guidelines are intended to inform clinical decision-making. They are generally developed by experts in a particular field, or by an organization, after a review of the medical evidence. They do not create or present new evidence; rather, they summarize the quantity and quality of the existing evidence and add expert opinion on what the recommended course of treatment might be based on those findings. A lengthy definition of evidence-based medicine can be reviewed online, but it essentially boils down to using the triad of best evidence, clinical expertise, and patient preference to guide medical care.

Respectful Insolence points out that problems arise when a guideline attempts to apply findings from a very specific patient population to a broader one, or vice versa. DB's Medical Rants has a series of three posts on the topic, including discussion of how a patient with multiple diagnoses makes correct interpretation and application of a guideline on one specific diagnosis more difficult. Similarly, Notes from Dr RW reminds us of the PICO framework of EBM (patient/population, intervention, comparison, outcome), and how guidelines may not adequately represent the "P" part - the patient/population. Dr RW notes that simply following guidelines is *not* true evidence-based medicine.

All of these bloggers make an important point - guidelines alone do not evidence-based medicine make, because they may not take into account the patient's preferences, may not represent all (or the newest) of the evidence, and may not be appropriate to the specific patient's situation. Guidelines can serve as a good knowledge-building starting point on a topic, but applying them rigidly to every patient misses the three-fold nature of evidence-based medicine - patient, provider, and proof.

For more on EBM, check out these resources:
-Introduction to Evidence-Based Medicine, from the Duke University Medical Center Library
-Evidence based medicine: what it is and what it isn't, editorial in BMJ
-Evidence-based medicine: a commentary on common criticisms, commentary in CMAJ


Friday, August 17, 2007

More on the TRIP database

The JMLA this year has published a couple of items on the Turning Research into Practice database (TRIP), including a usability study and a resource review, linked below.

A summer entry in the TRIP blog, Liberating the literature, includes 10 tips for searching TRIP.


Related:
Meats E, Brassey J, Heneghan C, Glasziou P. Using the Turning Research Into Practice (TRIP) database: how do clinicians really search? J Med Libr Assoc. 2007 Apr;95(2):156-63. free via PubMed Central archives

Resource review by Trina Fyfe. J Med Libr Assoc. 2007 Apr;95(2):215-216. free via PubMed Central archives


(thanks to Stephen Barnett and the Evidence-Based Nursing and Midwifery blog)


Thursday, March 01, 2007

Cases in context: levels of evidence

In thinking about how case reports "fit in" to the types of evidence available to answer clinical questions, I thought it might be useful to do a quick "refresher" post on the kinds of literature available.

Many also turn to a graphic to represent how the levels of evidence all fit together, in terms of relative strength of methodology (e.g. this evidence pyramid developed by the University of Washington Health Sciences Libraries, which was in turn adapted from this pyramid by the University of Virginia Health Sciences Library).

If you search for "evidence pyramids" or levels of evidence you'll find there's a little bit of "wobble" in how these are constructed -- the various authors arrange some of the levels differently, particularly at the bottom end of the pyramid.

So, here are the levels of evidence in roughly hierarchical order, in broad categories, starting at the top and working our way down (with links to additional definitions):

Summarizing/collating the "best" evidence (by methodological rigor and relevance)
- systematic reviews and meta-analyses (also see "How to read a paper: Papers that summarise other papers")

- practice guidelines; consensus statements authored by groups of experts (e.g. NIH Consensus Development Program)

Primary literature, i.e. the evidence from actual clinical studies
- randomized clinical trials

- prospective or retrospective cohort studies; case control studies

- other observational studies (e.g. ecologic studies, cross-sectional designs)

- case series; case reports; reviews of reported cases (a case report or case series with a summary of other cases reported in the literature, usually accompanied by a table summarizing these other cases, e.g. this article from The Oncologist)

Though this is a rough approximation of how these types of evidence fit together in terms of relative quality, there are serious problems with treating these levels as "absolutes" - the stated study design or article type provides only a rough indicator of potential quality. The true quality of a given study or article depends on the design, execution, and reporting of the study in the paper, as well as how relevant it is to the question at hand.

Coming soon - We'll continue this discussion, considering strengths and weaknesses of each type of evidence, and where other kinds of literature fit in (e.g. traditional review articles, structured abstracts plus critique, textbooks, letters to the editor, etc.)...


Friday, February 16, 2007

When is observational data "enough"?

An article by Glasziou et al. in today's BMJ, "When are randomised trials necessary? Picking signal from noise," considers when observational data are sufficient to establish the effectiveness of a treatment.

It gives a number of examples to illustrate situations in which RCTs are probably unnecessary - cases where the treatment effect (the signal) is large, occurs in the right sequence, and is unlikely to be explained by bias or other influences (the noise). The examples range from laser treatment of port-wine stains to tracheostomy for tracheal obstruction to blood transfusion for severe hemorrhage-related shock.
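As a back-of-the-envelope illustration of the signal-to-noise idea (the numbers below are hypothetical and are not taken from the article): when improvement is rare without treatment but nearly universal with it, the observed effect is far larger than bias or confounding could plausibly produce, and a randomised trial adds little.

# Hypothetical numbers, made up for illustration only (not from Glasziou et al.)
spontaneous_improvement = 0.01      # assumed rate of improvement without treatment
improvement_with_treatment = 0.95   # assumed rate of improvement after treatment

rate_ratio = improvement_with_treatment / spontaneous_improvement
print(f"rate ratio: {rate_ratio:.0f}")  # 95 - an effect this large is hard to attribute to bias alone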

For a more tongue-in-cheek look at this issue, try this article: Smith GC, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ. 2003 Dec 20;327(7429):1459-61. PubMed abstract
CONCLUSIONS: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.
