Friday, September 28, 2007

A Healthy Dose of Skepticism

The Wall Street Journal recently published two pieces (one article and one post on the WSJ Health Blog) on the basic theme of whether scientific research can be trusted, both referring to an essay by John P. A. Ioannidis, "Why Most Published Research Findings Are False."

Ioannidis's points are neatly summed up by this passage in the introduction to the freely available essay:
The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.
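
The essay frames this in terms of the positive predictive value (PPV) of a research finding - roughly, the probability that a claimed relationship is real given the study's power, the significance threshold, and the pre-study odds (R) that a tested relationship is true. As a rough illustration (the inputs below are our own assumed values, not figures from the essay), a quick calculation shows how sharply that probability falls for small, exploratory studies:

```python
# Back-of-the-envelope sketch of the positive predictive value (PPV) framework
# discussed in the essay: the chance that a statistically significant finding
# reflects a true relationship. The example inputs are illustrative assumptions.

def ppv(power: float, alpha: float, r: float) -> float:
    """PPV = (1 - beta) * R / (R - beta * R + alpha), where power = 1 - beta,
    alpha is the significance threshold, and R is the pre-study ratio of true
    to no relationships among those tested in the field."""
    beta = 1.0 - power
    return (power * r) / (r - beta * r + alpha)

# A well-powered trial testing a plausible hypothesis (1-to-2 prior odds):
print(round(ppv(power=0.80, alpha=0.05, r=0.5), 2))   # ~0.89

# An underpowered, exploratory study where only 1 in 100 tested relationships is real:
print(round(ppv(power=0.20, alpha=0.05, r=0.01), 2))  # ~0.04
```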


In other words, the methods matter, as do bias and the type and size of the finding. In his recent Wall Street Journal article, Robert Lee Hotz points out that while research methods matter, it is also procedurally difficult for peer reviewers and editors to tease out inappropriate techniques and conclusions:

"...findings too rarely are checked by others or independently replicated. Retractions, while more common, are still relatively infrequent. Findings that have been refuted can linger in the scientific literature for years to be cited unwittingly by other researchers, compounding the errors...No one actually knows how many incorrect research reports remain unchallenged."

In addition, trials that do not produce findings favorable to their sponsors may be halted and never reported in the literature. A related and lengthy piece in a recent issue of the New York Times Magazine, "Do We Really Know What Makes Us Healthy?", used hormone replacement therapy as a running example to illustrate why it can be difficult to make sense of research findings, noting among other things that the first, dramatic reports are likely to garner media attention while later clarifications may not.

The bottom line? A healthy dose of skepticism is needed when reading research reports, and a good working knowledge of bias and research methodology can go a long way.

Clinical Guidelines and Evidence-Based Medicine

Several medical bloggers have recently been writing about clinical practice guidelines in the context of evidence-based medicine.

First, a refresher - clinical guidelines are intended to inform clinical decision-making. They are generally developed by experts in a particular field, or by an organization, after a review of the medical evidence. They do not create or present new evidence; rather, they summarize the quantity and quality of existing evidence and add a measure of expert opinion on what the recommended course of treatment might be based on those findings. A lengthy definition of evidence-based medicine can be reviewed online, but it essentially boils down to using the triad of best evidence, clinical expertise, and patient preference to guide medical care.

Respectful Insolence points out that problems arise when a guideline attempts to apply findings from a very specific patient population to a broader one, or vice versa. DB's Medical Rants has a series of three posts on the topic, including a discussion of how a patient with multiple diagnoses makes correct interpretation and application of a guideline for one specific diagnosis more difficult. Similarly, Notes from Dr RW reminds us of the PICO system of EBM (patient/population, intervention, comparison, outcome), and how guidelines may not adequately represent the "P" part - the patient/population. Dr RW notes that simply following guidelines is *not* true evidence-based medicine.

All of these bloggers make an important point - guidelines alone do not evidence-based medicine make, because they may not take into account the patient's preferences, may not represent all (or the newest) of the evidence, and may not be appropriate to the specific patient situation. Guidelines may serve as a good knowledge-building starting point on a topic, but applying them rigidly to every patient misses the three-fold nature of evidence-based medicine - patient, provider, and proof.

For more on EBM, check out these resources:
- Introduction to Evidence-Based Medicine, from the Duke University Medical Center Library
- Evidence based medicine: what it is and what it isn't, an editorial in BMJ
- Evidence-based medicine: a commentary on common criticisms, a commentary in CMAJ


Thursday, September 20, 2007

Drug safety newsletter from the FDA

The US FDA has launched the first issue of its new Drug Safety Newsletter this week. It will be published quarterly with the goal of keeping "our medical community posted ... about selected postmarketing drug safety reviews, important emerging drug safety issues, and recently approved pharmaceutical products."

The first issue covers postmarketing surveillance data on:
- rituximab (Rituxan), an immunosuppressant used primarily to treat non-Hodgkin's lymphoma or rheumatoid arthritis: reports of progressive multifocal leukoencephalopathy associated with this agent (a "rare fatal demyelinating disease that is caused by a viral infection of the brain following reactivation of the JC or BK polyomavirus (also known as papovavirus) present in about 80 percent of adults")

- modafinil (Provigil), a CNS stimulant used to treat narcolepsy, shift work sleep disorder, and sleepiness symptoms associated with obstructive sleep apnea/hypopnea syndrome: reports of serious skin reactions

- temozolomide (Temodar), used to treat astrocytomas, a kind of brain tumor: reports of aplastic anemia (in which the bone marrow does not produce enough blood cells)

- deferasirox (Exjade), an oral chelating agent used to treat chronic iron overload due to blood transfusions: reports of GI, renal, and hematologic adverse events

The main newsletter page will include an archive of issues, and you can also sign up for email updates as new issues are posted.


Tuesday, September 18, 2007

More on hospital quality

The WSJ Health blog discusses hospital quality today too ("More hospitals lag than leap on quality"), noting the release of the 2007 Leapfrog Top Hospitals and Survey Results.

On the survey page, there's a link to the survey methodology, which makes for interesting background reading about what indicators they used and how they gathered their data. As noted in the article mentioned in our post earlier today, the Leapfrog data tends to be more infrastructure-focused.


Judging the quality of hospital surgical services

A study by Leonardi et al., published in this month's Archives of Surgery, looks at the various sources of general surgery data for comparing hospital performance in the US.

The highlights:

- The authors identified 6 websites containing this kind of data, including 1 government site (CMS's Hospital Compare), 2 nonprofit sites (JCAHO's Quality Check and the Leapfrog Group's Hospital Quality and Safety Survey Results), and 3 proprietary sites (names withheld)

- Sites were rated on accessibility (cost, sign-up required/not required, visibility in terms of where the site appeared in their Google search results); data transparency (data source, statistical/analytical methods, risk adjustment methods); and appropriateness (variety of quality measures employed)

- Government and nonprofit sites fared better on accessibility and were also the most transparent in terms of their data sources, statistical methods, and risk adjustment descriptions.

- The proprietary sites were rated as more "complete" on the appropriateness measure, i.e., the variety and types of measures used to evaluate quality.

- None of the sources provided real-time data; all data was at least one year old.

- The study also looked at consistency, i.e., whether the different sites returned the same results for several procedures. For laparoscopic cholecystectomy, the sites were consistent; results for hernia repair varied more widely because of a lack of data; and the sites provided conflicting results on colectomy.

- Limitations of included sites: lack of working definitions for quality-related terms (e.g. complications); inadequate procedure-level information; concerns about timeliness of data; fragmented and inconsistent data sources supporting the sites

The authors' overall conclusions: current hospital quality data sources may provide inconsistent results and in some instances utilize "suboptimal" measures of quality. Given the apparent trend toward patients consulting such indicator sites, surgeon involvement in data gathering and in refining these resources may be key to improving their utility.

Reference: Leonardi MJ, McGory ML, Ko CY. Publicly Available Hospital Comparison Web Sites: Determination of Useful, Valid, and Appropriate Information for Comparing Surgical Quality. Arch Surg 2007;142:863-869. Abstract (full text requires subscription)


Wednesday, September 05, 2007

Learning more about genetics: SNPs

One of our future JMLA cases will focus on a genetics/molecular biology scenario. Until then, from time to time we'll post links to "background" resources and topics in this area as we gear up for the case study.

Sandra Porter of Discovering Biology in a Digital World has authored a great post explaining what a single nucleotide polymorphism is - Genetic Variation I: What is a SNP?

For those interested in even more explanation, here are a few great links:
- The Human Genome Project: SNP Fact Sheet
- NCBI: SNPs: Variations on a Theme

And if you want to know a few places where you can find SNPs (see also the query sketch below the list)...
- dbSNP and Entrez SNP
- OMIM
- Human Gene Mutation Database
- HapMap
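
For those who prefer to query rather than browse, dbSNP (like the other Entrez databases) can also be searched programmatically through NCBI's E-utilities. Below is a minimal sketch of what such a lookup might look like - the query term and field tags are assumptions for illustration, and NCBI's E-utilities usage guidelines should be followed for anything beyond occasional queries.

```python
# A minimal sketch of searching dbSNP programmatically via NCBI's Entrez
# E-utilities (esearch). Illustrative only: the query term and field tags
# below are assumptions.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_dbsnp(term, retmax=5):
    """Return up to `retmax` dbSNP record IDs matching an Entrez query term."""
    params = urllib.parse.urlencode({"db": "snp", "term": term, "retmax": retmax})
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
        tree = ET.parse(response)
    return [id_node.text for id_node in tree.findall(".//IdList/Id")]

if __name__ == "__main__":
    # Hypothetical query: SNPs annotated to a gene of interest in humans.
    print(search_dbsnp("BRCA1[GENE] AND human[ORGN]"))
```

The IDs returned by esearch can then be passed to the companion esummary utility for record details.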


Tuesday, September 04, 2007

MLA Launches Social Networking Blog

MLA's Task Force on Social Networking Software unveiled its new blog on August 21st, intended to serve as a "communication device between the task force and MLA members." According to the "About" page, "The scope of this blog includes current awareness information, social networking @ MLA, social networking applications evaluations, task force updates, and IT support information."

Early posts address the MLA social networking survey, anonymous blog authoring and commenting, and writing for the web.

MLA website redesign