Category Archives: Medical

New statistical model lets patient’s past forecast future ailments

Analyzing medical records from thousands of patients, statisticians have devised a model for predicting what other medical problems a patient might encounter.

Much as Netflix recommends movies and TV shows or Amazon.com suggests products to buy, the algorithm makes predictions based on what a patient has already experienced, as well as on the experiences of other patients with a similar medical history.

“This provides physicians with insights on what might be coming next for a patient, based on experiences of other patients. It also gives a prediction that is interpretable by patients,” said Tyler McCormick, an assistant professor of statistics and sociology at the University of Washington.

The algorithm will be published in an upcoming issue of the journal Annals of Applied Statistics. McCormick’s co-authors are Cynthia Rudin, Massachusetts Institute of Technology, and David Madigan, Columbia University.

McCormick said that this is one of the first times that this type of predictive algorithm has been used in a medical setting. What differentiates his model from others, he said, is that it shares information across patients who have similar health problems. This allows for better predictions when details of a patient’s medical history are sparse.
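
The paper's actual model is more elaborate than anything shown here, but the idea of pooling information across patients with similar histories can be illustrated with a minimal co-occurrence sketch. The condition names and records below are invented for the example, not taken from the study:

```python
from collections import Counter

# Toy, made-up condition histories; each patient is a set of past diagnoses.
histories = [
    {"hypertension", "diabetes", "kidney disease"},
    {"hypertension", "diabetes", "retinopathy"},
    {"hypertension", "stroke"},
    {"asthma", "allergic rhinitis"},
]

def predict_next(patient, histories, top_k=3):
    """Rank conditions the patient does not yet have by how strongly they
    co-occur with the patient's existing conditions in other histories."""
    scores = Counter()
    for other in histories:
        overlap = len(patient & other)
        if overlap == 0:
            continue  # this record shares no history with our patient
        for condition in other - patient:
            # Patients whose histories look more like this one count more.
            scores[condition] += overlap
    return scores.most_common(top_k)

print(predict_next({"hypertension", "diabetes"}, histories))
# e.g. [('kidney disease', 2), ('retinopathy', 2), ('stroke', 1)]
```

A real model would also account for how common each condition is overall and quantify its uncertainty, which is where sharing statistical strength across sparse records matters most.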

Continue reading


“Medical errors reportedly kill more people than traffic accidents”

The National Institute for Health and Welfare (THL) estimates that medical malpractice imposes costs of about a billion euros a year in Finland. THL puts the number of deaths resulting from medical error at between 700 and 1,700 a year, which significantly exceeds the number of traffic fatalities. About EUR 400 million is spent on additional hospital costs; THL calculates that half of these additional costs could be eliminated through better instruction.

Continue reading


Innumeracy

In one study, Gigerenzer and his colleagues asked doctors in Germany and the United States to estimate the probability that a woman with a positive mammogram actually has breast cancer, even though she’s in a low-risk group: 40 to 50 years old, with no symptoms or family history of breast cancer.  To make the question specific, the doctors were told to assume the following statistics — couched in terms of percentages and probabilities — about the prevalence of breast cancer among women in this cohort, and also about the mammogram’s sensitivity and rate of false positives:

The probability that one of these women has breast cancer is 0.8 percent.  If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram.  If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram.  Imagine a woman who has a positive mammogram.  What is the probability that she actually has breast cancer?

Gigerenzer describes the reaction of the first doctor he tested, a department chief at a university teaching hospital with more than 30 years of professional experience:

“[He] was visibly nervous while trying to figure out what he would tell the woman.  After mulling the numbers over, he finally estimated the woman’s probability of having breast cancer, given that she has a positive mammogram, to be 90 percent.  Nervously, he added, ‘Oh, what nonsense.  I can’t do this.  You should test my daughter; she is studying medicine.’  He knew that his estimate was wrong, but he did not know how to reason better.  Despite the fact that he had spent 10 minutes wringing his mind for an answer, he could not figure out how to draw a sound inference from the probabilities.”
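
For reference, the inference the doctor was groping for follows directly from Bayes' theorem; a quick check with the numbers quoted above:

```python
# Gigerenzer's mammogram figures, as quoted above.
prevalence = 0.008      # P(cancer) among 40- to 50-year-old, low-risk women
sensitivity = 0.90      # P(positive mammogram | cancer)
false_positive = 0.07   # P(positive mammogram | no cancer)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(cancer | positive mammogram) = {posterior:.1%}")  # about 9%
```

In natural frequencies, the format Gigerenzer recommends for such problems: of 1,000 such women, about 8 have cancer and roughly 7 of them test positive, while about 69 of the remaining 992 also test positive, so only about 7 of 76 positive mammograms are true positives, roughly 9 percent rather than 90.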

Continue reading


An Epidemic of False Claims

John Ioannidis writes:

“False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine. Many studies that claim some drug or treatment is beneficial have turned out not to be true. We need only look to conflicting findings about beta-carotene, vitamin E, hormone treatments, Vioxx and Avandia. Even when effects are genuine, their true magnitude is often smaller than originally claimed…
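
The excerpt does not spell out the arithmetic, but one standard way to see why false positives can dominate is to work out the expected share of "significant" findings that are real under assumed values for the prior, significance level and power. All three numbers below are illustrative assumptions, not figures from Ioannidis's article:

```python
# Illustrative assumptions, not data from the article.
prior_true = 0.10   # share of tested hypotheses that are actually true
alpha = 0.05        # false positive rate when the hypothesis is false
power = 0.80        # chance of detecting an effect when it is real

true_hits = prior_true * power
false_hits = (1 - prior_true) * alpha
ppv = true_hits / (true_hits + false_hits)

print(f"Share of significant findings that are real: {ppv:.0%}")  # ~64%
# Bias, multiple comparisons and small samples push this lower still.
```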

Continue reading


Medical error is the 3rd biggest cause of death in the US

In fact, 225,000 to 284,000 Americans die each year as a result of medical errors (the itemized figures quoted below add up to the lower bound):

“The health care system also may contribute to poor health through its adverse effects. For example, US estimates of the combined effect of errors and adverse effects that occur because of iatrogenic damage not associated with recognizable error include:
• 12,000 deaths/year from unnecessary surgery
• 7,000 deaths/year from medication errors in hospitals
• 20,000 deaths/year from other errors in hospitals
• 80,000 deaths/year from nosocomial infections in hospitals
• 106,000 deaths/year from nonerror, adverse effects of medications
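
As a quick check, the itemized figures in the quote account for the lower bound given above:

```python
# Deaths per year, as itemized in the quoted list.
deaths_per_year = {
    "unnecessary surgery": 12_000,
    "medication errors in hospitals": 7_000,
    "other errors in hospitals": 20_000,
    "nosocomial infections in hospitals": 80_000,
    "non-error adverse effects of medications": 106_000,
}
print(sum(deaths_per_year.values()))  # 225,000, the lower bound cited above
```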

Continue reading


Paul Meier, Statistician Who Revolutionized Medical Trials, Dies at 87

“Paul Meier, a leading medical statistician who had a major influence on how the federal government assesses and makes decisions about new treatments that can affect the lives of millions, died on Sunday at his home in Manhattan. He was 87.”

more (NYT)…


Suspicious pattern of too-strong replications of medical research

Howard Wainer writes:

This article from 2005 (by Zhenglun Pan, Thomas Trikalinos, Fotini Kavvoura, Joseph Lau, and John Ioannidis) is brilliant.

It is well established that the sizes of many experimental effects diminish over time. So we often see that an initial investigation of some new treatment has a large effect, but subsequent replications show a much smaller effect. The ‘blame’ for this is often laid at the door of publication bias — the sampling distribution of the effect might be Gaussian with a mean just slightly above zero, so many studies of the treatment can’t get published because they have small or null results. Suddenly a study gets results from the high tail of the dist’n and is published in an A-list journal with fanfare. Now that it is in the literature, subsequent attempts at replication can also get published — so out of the file drawers come the other studies, often done before the alpha study — but in B-list journals.
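
A minimal simulation of the mechanism Wainer describes, with an assumed true effect, sample size and a one-sided 5% publication filter (all values invented for illustration): only studies drawn from the high tail clear the bar, so the first published estimate greatly overstates the truth, while the unselected studies do not.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1    # assumed small true effect, in standard-deviation units
n_per_group = 50     # assumed sample size per arm
n_studies = 2_000

se = np.sqrt(2 / n_per_group)                       # standard error of each estimate
estimates = rng.normal(true_effect, se, n_studies)  # one estimate per study
published_first = estimates / se > 1.96             # only "significant" results get out first

print("true effect:               ", true_effect)
print("mean of all studies:       ", round(estimates.mean(), 3))
print("mean of 'published' subset:", round(estimates[published_first].mean(), 3))
# The file-drawer studies average close to 0.1; the selected ones average around 0.5.
```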

Enter the attached paper. The Chinese scientific literature is rarely read or cited outside of China. But the authors of these studies are usually knowledgeable about the non-Chinese literature — at least the A-list journals. And so they too try to replicate the alpha finding. But do they? One would think that they would find the same diminished effect size, but they don’t! Instead they replicate the original result, often with an even larger effect. Here’s one of the graphs:

Continue reading


Overdiagnosis and Overtreatment

Here is a nice article on overtreatment by Kevin McConway:

Screening for disease was in the news again in the UK last week. According to the BBC, a 20-year Swedish study of screening for prostate cancer showed that screening brought no benefit. (The actual study report didn’t put it quite so baldly, but effectively did conclude there was no benefit.) This came just a couple of days after the Alzheimer’s Disease Society asked that the NHS offer checks for dementia to everyone (in the UK) when they reach the age of 75. Both news items reported contrasting views on whether these screening checks are in fact advisable.

Why is that? You might think that it’s surely better to know whether someone has a disease than not to know, and if some sort of screening or check can give this information, well, why not just do it?

Continue reading


Testing for prostate cancer… Is that necessary?

From the Dance with Chance blog:

The recent publication of two large-scale studies of prostate screening in the US and Europe attracted our attention. After all, (a) this was something we wrote about in our book, and (b) we are – all three of us – men of a certain age.

In the US study, 38,343 men received annual PSA (Prostate Specific Antigen) blood tests while 38,350 men were assigned to a control group. After 7 years of follow-up, the incidence of prostate cancer death was 2.0 per 10,000 person-years in the screening group – that is, 50 deaths. In the control group it was 1.7 per 10,000 person-years, or 44 deaths. Not a particularly significant difference, then.

The European study involved some 180,000 men. This time, however, the screening group was billed as having a 20.7% lower rate of death from prostate cancer than the control group. This seems impressive and a good reason for continuing with PSA screening – in Europe at least.

So are European doctors better at screening than their US colleagues? Or is it just that numbers can be deceptive?

There were 214 deaths from prostate cancer in the screening group and 326 in the control group. This means 112 fewer deaths in the screening group, or a relative improvement of 20.7% for the screening group – found by dividing 112 by 540 (540 = 214 + 326).

In absolute terms, however, the improvement is much less impressive, since there were 72,890 subjects in the screening group and 89,353 in the control group. This means 7.1 fewer deaths in the screening group than in the control group for every 10,000 men. Given that the study lasted some nine years, this is a tiny improvement of less than one death averted per 10,000 men per year.
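
The arithmetic in the two preceding paragraphs, spelled out with the figures quoted above:

```python
# European PSA study figures as quoted above.
deaths_screened, n_screened = 214, 72_890
deaths_control, n_control = 326, 89_353

rate_screened = deaths_screened / n_screened   # ~29.4 deaths per 10,000 men
rate_control = deaths_control / n_control      # ~36.5 deaths per 10,000 men

absolute = (rate_control - rate_screened) * 10_000
relative = 1 - rate_screened / rate_control

print(f"absolute reduction: {absolute:.1f} deaths per 10,000 men")  # ~7.1
print(f"relative reduction: {relative:.1%}")                        # ~19.5%
```

Computed from the per-person rates, the relative figure comes to roughly 20 percent, close to the 20.7% quoted from the raw death counts; the absolute figure of about 7 fewer deaths per 10,000 men over roughly nine years is the same data seen from the other side.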

Continue reading
