Sunday, December 26, 2010

Follow-up on placebos

I wanted to do a follow-up to Friday’s post about a ridiculously flawed study of the effect of placebos. I have two points.

Point 1: The study is not blinded, but that is not a problem, as I see it. There are two types of blinding used in studies:

  1. Blinding relative to the subjects: the subjects don’t know which group they’re in (and sometimes don’t know specifically what’s being studied). This reduces the influence of the subjects’ expectations and biases on the results.
  2. Blinding relative to the researchers: the researchers don’t know which group a given subject is in. This reduces effects that the researchers’ behaviour toward the subjects will have on the results.

For this study, the whole point was not to blind it to the subjects. And since the subjects are either being treated or not, there’s nothing to blind relative to the researchers, either. The problem with the study isn’t the lack of blinding, but the bias that the instructions to the subjects introduce.
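
To make that concrete: blinding relative to the researchers usually means the researchers work only with opaque subject codes, and the code-to-group key stays sealed until the data are collected. Here’s a little Python sketch of the idea (the names and details are mine, purely for illustration, not anything from the study):

    import random

    def assign_double_blind(subject_ids, seed=2010):
        # Shuffle the group labels, not the subjects, so a subject's code
        # number reveals nothing about which group they're in.
        rng = random.Random(seed)
        n = len(subject_ids)
        groups = ["treatment"] * (n // 2) + ["control"] * (n - n // 2)
        rng.shuffle(groups)
        # What the researchers see: subject -> opaque code.
        codes = {sid: f"SUBJ-{i:04d}" for i, sid in enumerate(subject_ids)}
        # The sealed key: code -> true group, opened only for analysis.
        key = {codes[sid]: grp for sid, grp in zip(subject_ids, groups)}
        return codes, key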

Point 2: I see a reasonable way to evaluate what these researchers want to evaluate, which fixes the bias problem — or, more accurately, which makes that bias part of the study itself. It involves what I’ll call half-blinding relative to the researchers.

Get, say, 300 subjects, and divide them into three groups of 100. Group A gets no treatment, just as one group got in the existing study. Groups B and C each get identical bottles labelled PLACEBO. Each bottle contains identical pills. Each bottle contains an instruction sheet that only the subjects see, and each subject is told to read his sheet and not to discuss with anyone what it says.

The instruction sheets for group B would say, “This bottle contains placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes.”

The instruction sheets for group C would say, “This bottle contains placebo pills made of an inert substance, like sugar pills, that have no effect whatsoever. They will not affect your symptoms in any way.”

Now, if groups B and C have similar results, we can legitimately say that placebos have their effect even when the subjects know they’re getting them. But if, as I suspect, group C has results comparable to those of group A, then what we’ve shown is that telling people the placebos are effective is the same as not calling them placebos — setting up the expectation of effectiveness is what gives the results.
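
If the outcome is a self-reported symptom score for each subject, the comparisons I’m describing come down to ordinary two-sample tests. Here’s a rough Python sketch, using a simple permutation test and entirely hypothetical scores, of how the B-versus-C and C-versus-A comparisons would be read:

    import random

    def perm_test(x, y, n_iter=10_000, seed=1):
        # Permutation test on the difference of means: how often does a
        # random relabelling of the pooled scores produce a gap at least
        # as large as the one we observed?
        rng = random.Random(seed)
        observed = abs(sum(x) / len(x) - sum(y) / len(y))
        pooled = list(x) + list(y)
        hits = 0
        for _ in range(n_iter):
            rng.shuffle(pooled)
            a, b = pooled[:len(x)], pooled[len(x):]
            if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
                hits += 1
        return hits / n_iter  # p-value

    # scores = {"A": [...], "B": [...], "C": [...]}  # hypothetical self-reports
    # A large p for (B, C) says B and C are indistinguishable: placebos
    # "work" even when labelled as such. A large p for (C, A), with B
    # differing from both, says the instructions' expectation of
    # effectiveness is doing all the work.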

If you want to call that “mind-body self-healing processes”, that’s fine with me. Well, almost fine: it’s not healing; it’s symptom relief. That your mind can help you feel better is no surprise. But it’s not going to heal your cancer, repair your liver damage, or kill viruses. Unless, perhaps, someone can show, with a properly designed study, that it can.

Friday, December 24, 2010

Someone’s high on placebos

On Wednesday, this paper, published in PLoS ONE, hit the popular news in the medicine/science category, with articles such as this one from MedPage Today and this one from Reuters. The headlines are consistent, implying that the study has shown that the placebo effect works even when patients know that they’re getting placebos.

From the MedPage article:

While this may seem counterintuitive, the results suggest that physicians don’t have to resort to deception to harness the power of the placebo effect, Kaptchuk explained in an interview.

“Conventional wisdom says that in order for a placebo to work you have to convince the patient that they are taking a real drug,” he told MedPage Today.

However, the brain appears to respond to placebo in a way that conditions like IBS are susceptible to, Kaptchuk noted.

Counterintuitive, indeed, and enough so that it merits some serious scrutiny. My first thought was that the headlines are misrepresenting the study and the claims of the researchers, a common problem in the reporting of research in the popular press. But it seems that they are, indeed, reporting exactly what the researchers are saying, here.

I’ll note that the paper appears in a Public Library of Science (PLoS) journal, rather than in some other, more respected medical journal (such as NEJM or BMJ). I’ll also note that the lead researcher, Ted Kaptchuk, is associated with Harvard’s Division for Research and Education in Complementary and Integrative Medical Therapies, a connection the press just puts forth as Harvard, downplaying the fact that he does complementary medicine.

Those aren’t reasons to ignore his research or his results, of course. But they are points that should make us ask questions. On the other hand, we should ask questions with any study; that’s what science is about.

Now, the study is on IBS, which is a very subjective, catch-all condition of unknown etiology. They mention in the article that they expect this effect to work also with conditions such as fibromyalgia and chronic pain — also subjective conditions of unknown etiology — and depression and anxiety, more highly subjective stuff. And the study itself is entirely subjective, using self-reporting exclusively, and not measuring anything.

That could be OK. If we’re looking for cures, we need to measure; if we’re looking for symptom relief, well, if you think you feel better, then you feel better. Where I have to scratch my head, though, is in wondering what their hypothesis was.

When you go into a study, you go in with a hypothesis, which your study might support or disprove. Even if it seems like you’re just wildly trying any drug that might work, you have a hypothesis: “This drug might work.” What’s the hypothesis, here? “Placebos might work even if the patients know they’re placebos” is a valid hypothesis, I suppose, but to whom would it occur to even try that?

The answer shows up in the study: it would occur to people who think they can show “mind-body self-healing processes.” OK... again, a valid thing to consider. But that’s where we get into some problems.

We start with some very typical problems with medical studies... again, from MedPage:

Limitations of the trial included a relatively small sample size, too short duration to examine long-term effects, and the possibility of self-selection bias in that the trial may have selectively attracted IBS patients interested in mind-body interventions.

Many, many studies are too small, too brief, and suffer from various sorts of selection bias, and this one is no different. But to get the real kicker, we have to go into the paper itself and see how the placebos were presented to the patients:

Patients were randomized to either open-label placebo pills presented as “placebo pills made of an inert substance, like sugar pills, that have been shown in clinical studies to produce significant improvement in IBS symptoms through mind-body self-healing processes” or no-treatment controls with the same quality of interaction with providers.

Read that again. The patients were not told just that they were getting placebos. They were not told that what they were getting is an inert substance with no medical effect. They were told that what they were getting “has been shown in clinical studies to produce significant improvement in IBS symptoms.”

In other words, these pills are having exactly the same placebo effect as is well documented in pretty much every other medical study that involves placebos. And, to put the point forth directly, far from making this deception-free, they are deceiving the patients in the same way patients are deceived in every other administration of placebos.

Let’s line it all up:

  1. You recruit people who have a condition that makes them feel horrible, that no one can tell them the cause of, and that no one has effective treatment for.
  2. You give them a pill and you tell them that this has been shown to provide significant improvement for their condition.
  3. They report that the pills did, indeed, make them feel better.
  4. You claim that you did not deceive them, because you told them they were getting placebos.

Bullshit. Whether or not you put the label “placebo” on it, you told them they were getting effective treatment. That biased the outcome, and the result should be no surprise to anyone. And it says... nothing. This study is worthless garbage.

Wednesday, April 14, 2010

How about the Grapefruit and Avocado diet?

Last week, there was a small item in the New York Times science section, Science Times, answering the question, “Other than celiac disease, is there any reason to avoid gluten in the diet?” There are reasons other than celiac disease for people to be intolerant of gluten, a protein in wheat and some other grains, but, as the Times item says, the evidence is just not there to support health benefits of a gluten-free diet if you are not one of those affected.

Despite that, the gluten-free diet has become a fad, with all sorts of claims behind it, from general well-being to anti-cancer properties to autism prevention. None of the claims are supported by science, only by folk tales, by books and magazine articles and self-help lecturers.

But, of course, when you start talking about fad diets, you’ll get the faddists coming out of the woodwork. And so we have a letter to the editor, from a California reader:

Rather than casting doubt on the people who take on the economic, social and other hardships associated with a gluten-free diet, The New York Times should use its public platform to applaud them for taking responsibility for their health and encourage doctors to take a professional interest in what they can learn from these earnest patients.

— Tracy Haughton, Mill Valley, Calif.

Sigh. No.

Rather than quietly believing whatever bullshit people come up with, the New York Times should be using its public platform to cast doubt on that which has no evidence. And doctors should be looking at things that really work, and taking a professional interest in what they can learn from studying the real effects of foods and drugs. There’s no value in paying much attention to some vague story about how Aunt Gertrude took on the economic and social hardships of a fad diet, and her arthritis feels much better now.

People should take responsibility for their own health, and that includes eating properly. But before we start arbitrarily removing things from our diets — or, if you like, as we try removing them — we should be pushing for real studies of the effects, not accepting anecdotes and fuzzy science.

Wednesday, February 17, 2010

It increased by how much?

In yesterday’s meetings, someone presented some data, showing us rates of sending spam, month by month, broken down by country. The summary page showed the change from six months ago to last month. Here’s an example of the summary table, with made-up numbers:

Spam by country, as % of total

Country     Jul 09   Jan 10   Change
Parador       39%      17%     -22%
Slobovia       4%       8%      +4%
Gumana         2%     < 1%      -1%

Maybe you already see the problem with this.

A change from 4% to 8% does not represent a 4% increase. It’s a 100% increase — the spam from Slobovia has not gone up by 4% in six months, it’s doubled. And similarly for the other numbers: Parador showed a 56.4% decrease.

What is true is that Slobovia’s contribution to the total has increased by 4 percentage points (which is not the same as saying that it’s increased by 4 percent). And if it’s clear that that’s what you’re saying, it’s a fine thing to say... we can then debate which number is more useful, and the answer to that will depend upon what we’re using the numbers for.

But it’s very misleading to list those numbers in a table like that, and entirely wrong to say that “the amount of spam from Slobovia has increased by 4% since last July.”
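
The two quantities are easy to compute, and just as easy to keep apart. A quick sketch, using the made-up numbers from the table above:

    def point_change(old, new):
        # Change in share of the total, in percentage points.
        return new - old

    def relative_change(old, new):
        # Relative change of the quantity itself, in percent.
        return (new - old) / old * 100

    print(point_change(4, 8))        # 4.0 percentage points (Slobovia)
    print(relative_change(4, 8))     # 100.0 percent: it doubled
    print(relative_change(39, 17))   # about -56.4 percent (Parador), not "-22%"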

Gotta be careful with percentages.

Thursday, November 05, 2009

To err is human; to explode, divine

Interesting article in Tuesday’s New York Times. Apparently, the Iraqi security forces have taken to using divining rods to search for explosives, against the advice of U.S. trainers and advisors.

The small hand-held wand, with a telescopic antenna on a swivel, is being used at hundreds of checkpoints in Iraq. But the device works “on the same principle as a Ouija board” — the power of suggestion — said a retired United States Air Force officer, Lt. Col. Hal Bidlack, who described the wand as nothing more than an explosives divining rod.

Still, the Iraqi government has purchased more than 1,500 of the devices, known as the ADE 651, at costs from $16,500 to $60,000 each. Nearly every police checkpoint, and many Iraqi military checkpoints, have one of the devices, which are now normally used in place of physical inspections of vehicles.

“Nothing more than an explosives divining rod,” of course, presupposes that divining rods aren’t effective. The Iraqis think otherwise:

The Iraqis, however, believe passionately in them. “Whether it’s magic or scientific, what I care about is it detects bombs,” said Maj. Gen. Jehad al-Jabiri, head of the Ministry of the Interior’s General Directorate for Combating Explosives.

Hm. “I don’t care whether it’s magic or scientific,” certainly sets a skeptic’s BS-dar beeping wildly. Has anyone looked at this seriously? Well, yes, the U.S. government’s explosives-detection experts have:

Dale Murray, head of the National Explosive Engineering Sciences Security Center at Sandia Labs, which does testing for the Department of Defense, said the center had “tested several devices in this category, and none have ever performed better than random chance.”

Our Iraqi explosives minister, though, dismissed such studies with an appeal to authority — his own:

“I don’t care about Sandia or the Department of Justice or any of them,” General Jabiri said. “I know more about this issue than the Americans do. In fact, I know more about bombs than anyone in the world.”

And that settles that.

The company that’s selling these “devices” to the Iraqis, of course, has nothing to say to the Times:

Jim McCormick, the head of ATSC, based in London, did not return calls for comment.

...but here’s what their brochure says:

ATSC’s promotional material claims that its device can find guns, ammunition, drugs, truffles, human bodies and even contraband ivory at distances up to a kilometer, underground, through walls, underwater or even from airplanes three miles high. The device works on “electrostatic magnetic ion attraction,” ATSC says.
Human bodies, distinguishing dead ones from live ones, it seems. And truffles; that’s handy. But isn’t it interesting that the things it “finds” are all things that we want to find? The list is so diverse that it would certainly have to also include a lot of things we don’t care about, which would throw its false-positive rate through the roof. But maybe it’s only certain illicit stuff that has magic fairy-dust magnetic ions.

OK, so... what do we know about divining rods?

Well, we know that they were debunked so long ago that there don’t seem to be recent peer-reviewed papers on them. Nature has this study from 1971. It’s behind a paywall, but the summary isn’t:

Experiments organized by the British Army and Ministry of Defence suggest that results obtained by dowsing are no more reliable than a series of guesses.

But there’ve certainly been lots of controlled tests since then, even if they weren’t peer reviewed. And every credible test shows that divining rods are complete bunk. There’s no validity to them at all. And, in fact, the James Randi Educational Foundation made a statement about this very device last year:

No one will respond to this, because the ADE651® is a useless, quack, device which cannot perform any other function than separating naïve persons from their money. It’s a fake, a scam, a swindle, and a blatant fraud. The manufacturers, distributors, vendors, advertisers, and retailers of the ADE651® device are criminals, liars, and thieves who will ignore this challenge because they know the device, the theory, the described principles of operation, and the technical descriptions given, are nonsense, lies, and fraudulent.

That seems clear.

What studies have shown about divining rods is that they likely “work” through a form of unconscious suggestion known as the ideomotor effect. The length of the rod, along with how it’s held, amplifies small arm movements. Those movements are largely involuntary, but they can inadvertently — and without the knowledge of the user — direct the rod toward something we’re expecting to find. Of course, when we don’t know whether there’s really anything there, we’re either directing the rod randomly, or we’re guessing (which amounts to directing the rod randomly as well).
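
A toy model of the ideomotor effect reproduces both results: Sandia’s chance-level findings when the operator doesn’t know the answer, and perfect scores when he does. Here’s an illustrative simulation (the parameters are made up):

    import random

    def dowsing_hit_rate(knows_target, n_trials=10_000, n_boxes=5, seed=7):
        # One of n_boxes holds the target. If the operator doesn't know
        # which, the "device" (really the operator's involuntary arm
        # movements) can only guess; if the operator does know, those
        # same movements steer the rod straight to it.
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            target = rng.randrange(n_boxes)
            pick = target if knows_target else rng.randrange(n_boxes)
            hits += (pick == target)
        return hits / n_trials

    print(dowsing_hit_rate(knows_target=False))  # ~0.2: chance, 1 in 5
    print(dowsing_hit_rate(knows_target=True))   # 1.0: perfect, when you already know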

Or, perhaps put another way, we’ve been well trained. We’ll give General Jabiri the last word here:

During an interview on Tuesday, General Jabiri challenged a Times reporter to test the ADE 651, placing a grenade and a machine pistol in plain view in his office. Despite two attempts, the wand did not detect the weapons when used by the reporter but did so each time it was used by a policeman.

“You need more training,” the general said.

Yeah... more training. That’s the ticket.

Sunday, October 18, 2009

Limp telephones

[Image: a man using a soup-can telephone in a Progresso TV ad]

There’s a new series of TV ads for Progresso soups, in which people are depicted talking to kitchen staff using soup-can telephones. The image above comes from this ad in the series.

Didn’t the people who put the ads together ever actually do this in grade school? Doesn’t everybody know that you have to pull the string tight for it to work? You won’t hear a thing with the string limp, as it is in that image you see there.

Yes, I know it’s just a silly advert. But, well, how hard would it have been for them to get it right?

I ask you....