Tuesday, May 15, 2007

More debate on the utility of case reports

The Scientist posted a news item yesterday, "Case reports: Essential or irrelevant?", discussing how editorial teams at other journals have reacted to the launch of the new Journal of Medical Case Reports (a BMC title).

Does the medical literature need more case studies? A new journal is betting it does, even as editors at other journals say the answer is no.

Historically, case reports have proven extremely valuable to clinicians faced with diseases they knew little about. But in an age where countries spend more on research than ever before investigating both rare and common diseases, some experts argue that the obscure nature of many case reports makes them of little value to the average practitioner.

The article includes editorial staff commentary from the Lancet, the British Medical Journal, the New England Journal of Medicine, and the American Journal of Medicine.


Monday, May 14, 2007

Positive and Negative Predictive Value

Our April case study also refers to positive predictive value and negative predictive value (see this post for discussion of sensitivity and specificity), and defines them as follows:
"The positive predictive value represents the probability of a positive test result indicating the true presence of disease."

"The negative predictive value represents the probability of a negative test result indicating that the disease is truly absent."
Thus, while sensitivity refers to the likelihood that a person with a disease will test positive for it, positive predictive value (PPV) refers to the likelihood that a person who tests positive actually has the disease. Likewise, while specificity refers to the likelihood that a person without the disease will test negative, negative predictive value (NPV) refers to the likelihood that a person who tests negative truly does not have the disease.

These values are calculated as follows:
Positive predictive value = (number of people who have disease and test positive for it)/(number of people who have disease and test positive for it PLUS the number of people who don't have the disease and test positive for it)
OR
The number of people who have a disease and test positive for it divided by the total number of people who test positive for the disease (regardless of whether they have it)
OR
How reliable is the test when it indicates that someone has a disease?
OR
When you test positive, how likely is it that you really have the disease?

Negative predictive value = (number of people who don't have the disease and test negative)/(number of people who don't have the disease and test negative PLUS the number of people who have the disease and test negative)
OR
The number of people who don't have the disease and test negative divided by the total number of people who test negative for the disease (regardless of whether they have it)
OR
How reliable is the test when it indicates that someone does not have a disease?
OR
When you test negative, how likely is it that you really don't have the disease?

So, if a test has a PPV of 95%, then 95% of people who test positive really have the disease (the other 5% test positive even though they don't have it). If it has an NPV of 85%, then 85% of people who test negative really don't have the disease (the remaining 15% actually have the disease despite testing negative).
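For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of those two formulas. The counts are invented purely for illustration and happen to reproduce the 95%/85% example above.

    # Hypothetical counts comparing test results to true disease status
    true_positives  = 95   # have the disease and test positive
    false_positives = 5    # don't have the disease but test positive
    true_negatives  = 85   # don't have the disease and test negative
    false_negatives = 15   # have the disease but test negative

    # PPV: of everyone who tests positive, what fraction truly has the disease?
    ppv = true_positives / (true_positives + false_positives)

    # NPV: of everyone who tests negative, what fraction truly does not?
    npv = true_negatives / (true_negatives + false_negatives)

    print(f"PPV = {ppv:.0%}")  # 95%
    print(f"NPV = {npv:.0%}")  # 85%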

Additional Information:
-How to read a paper: Papers that report diagnostic or screening tests
-Sensitivity and Specificity: Medical University of South Carolina (scroll down for PPV/NPV from an HIV testing example)
-Predictive Values: Michigan State University


Sensitivity and Specificity

The April case study mentions both sensitivity and specificity, statistical measures you may encounter as you read medical research papers. As the case states:

"Sensitivity represents the probability of a positive result for the novel diagnostic test in people who definitely have the disease in question, as defined by the gold standard test."

"Specificity is the probability of a negative test result for the novel diagnostic test in people who definitely do not have the disease, as defined by the gold standard."
In more detail:
Sensitivity is the likelihood that a test will be positive in those who really do have the disease. For example, if you took a pregnancy test, the sensitivity of that test would give you an idea of how often the test will turn up positive when you really are pregnant (and so the test would be accurate).

Specificity is the likelihood of getting a negative result in those who really do not have the disease. In our pregnancy test example, knowing the specificity would let you know how often you could expect a negative pregnancy test result when you really are not pregnant (and again, the test would be accurate).

Note that both definitions make reference to a "gold standard" test - that would be the reference test for determining whether someone does or does not have a disease. Measures of sensitivity and specificity for an alternative diagnostic test (e.g. a newly available type of diagnostic test) are measured against that gold standard.
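To make that gold-standard comparison concrete, here is a rough Python sketch (the paired results are made up) of how one might tally the four possible outcomes that feed into the formulas below.

    # Each pair is (new_test_result, gold_standard_status) for one person.
    # These data are invented purely for illustration.
    results = [
        ("positive", "disease"),
        ("negative", "no disease"),
        ("positive", "no disease"),
        ("negative", "disease"),
        ("positive", "disease"),
        ("negative", "no disease"),
    ]

    true_pos  = sum(1 for test, truth in results if test == "positive" and truth == "disease")
    false_pos = sum(1 for test, truth in results if test == "positive" and truth == "no disease")
    false_neg = sum(1 for test, truth in results if test == "negative" and truth == "disease")
    true_neg  = sum(1 for test, truth in results if test == "negative" and truth == "no disease")

    print(true_pos, false_pos, false_neg, true_neg)  # the four cells of the 2x2 table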

Calculating these values requires a bit of math:

Sensitivity = (number of people who have disease and test positive for it)/(number of people who have disease and test positive for it PLUS the number of people who have the disease and test negative for it)
OR
The number of people who have a disease and test positive for it divided by the total number of people with the disease who are tested
OR
How good is the test at identifying people who really do have the condition?

Specificity = (number of people who don't have the disease and test negative)/(number of people who don't have the disease and test negative PLUS the number of people who don't have the disease and test positive)
OR
The number of people who don't have the disease and test negative divided by the total number of people without disease who are tested
OR
How good is the test at identifying people who really don't have the condition?

Remember, when we talk about the number of people who actually have the disease, that determination comes from the gold standard test (such as an older, established test), against which we compare our new test of interest (for example, a home pregnancy test evaluated against clinic-based pregnancy testing).

Sensitivity and specificity are generally expressed as percentages or decimals. For example, if a pregnancy test has a sensitivity of 0.91 (or 91%), then 91% of those who are pregnant will test positive. If our specificity were only 0.50, then only 50% of those who are not pregnant would test negative (the other half would get a false positive result).
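As a quick check of that example, here is a small Python sketch; the counts are made up, chosen only so the numbers come out to 0.91 and 0.50.

    # Invented counts for the pregnancy-test example
    true_positives  = 91   # pregnant and test positive
    false_negatives = 9    # pregnant but test negative
    true_negatives  = 50   # not pregnant and test negative
    false_positives = 50   # not pregnant but test positive

    # Sensitivity: of those who really are pregnant, how many test positive?
    sensitivity = true_positives / (true_positives + false_negatives)

    # Specificity: of those who really are not pregnant, how many test negative?
    specificity = true_negatives / (true_negatives + false_positives)

    print(f"sensitivity = {sensitivity:.2f}")  # 0.91
    print(f"specificity = {specificity:.2f}")  # 0.50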

For more information and tables that may make the formulas a bit more clear, visit any of the following sites:
-Glossary of EBM Terms > More on Sensitivity & Specificity: Centre for Evidence-Based Medicine
-How to read a paper: Papers that report diagnostic or screening tests
-Sensitivity and Specificity: Medical University of South Carolina, using an HIV testing example


Friday, May 04, 2007

More on acute pancreatitis

Surgeonsblog is the blog of Sid Schwab, a "mostly retired general surgeon" who shares stories of past experiences with surgery, patients, families, and the healthcare system, offering interesting and informative anecdotes from a surgeon's perspective.

A couple of Surgeonsblog posts relevant to this month's Journal of the Medical Library Association case study, "Using the literature to evaluate diagnostic tests: amylase or lipase for diagnosing acute pancreatitis?":

- Surgeons and Sweetbreads: an in-depth consideration of the anatomy of the pancreas and of surgical intervention in the patient with acute pancreatitis:
The good news is most of us will never have a reason to find out. The bad news is we all walk around with a self-destruct button in us, and I'm not getting all Freudian here. Of all the vital organs, there's only one that can -- sometimes with only the slightest of provocations -- turn on us and literally become our worst nightmare: it can eat us alive, from the inside. All the while, doing only what it thinks it's supposed to do.
- and a follow-up post, Pancreas stuff, #2:

It's that combination of highly unfortunate location and the power of self-digestion that turns the upper abdomen into a seething and distorted mess. Imagine a nicely-tended garden overtaken by sewage. Think of trying to find your way through a mine-field, knowing a misstep could cause death, while wearing size twenty shoes, and blindfolded. Compare being required to reach into a shallow pan of water to find by feel a couple of well-defined objects, with groping into hot mush, mittened and scared...

...Tucked behind the stomach and colon, that space is clean and quiet, opens sort of magically; and its backside is -- ideally -- that pink and normally-firmer-than-normal organ, the pancreas. There for your viewing pleasure. With acute pancreatitis, not only is that space completely obliterated, it's filled with indistinguishable stinky goo, and the edges of the stomach and colon -- out of which you'd dearly like to stay -- are absolutely undecipherable, unrecognizable, and half-digested. Not good.
