My 13-year-old son is home-schooled. Last week he started a science experiment of his own design. He decided to test a few of the additives that people put into fresh flowers to make them stay fresh longer. The protocol consisted of filling 4 identical glasses with 8 ounces of water at the same temperature. One glass was left alone as a control. The second glass had 1 tablespoon of sugar added. The next had two aspirins added. And in the last we added one packet of the plant food that came with the flowers. Into each glass we placed one small flower, with the stems cut to the same length. All the glasses were placed on the windowsill in the kitchen.
I thought he'd come up with a pretty good protocol and I was looking forward to seeing his results. Well, a week went by and all four of the flowers showed no real change. My son was a little upset. He wondered what had gone wrong and wanted to start over rather than write up his conclusions. I took advantage of this teaching moment to explain that he needs to complete his observations and turn in the report. I then explained that this is probably how 99% of all science experiments end. Astronomers don't find new asteroids every time they look into the sky. Research doctors don't see measurable effects of new drugs every day. Even though Aaron's experiment did not produce the effect that he wanted, he still had learned something. As his experiment was designed, 5 days is not enough time to measure any difference in the effect of the chemicals being tested. Although mundane, this is important information that future researchers could use to improve their experiments. Aaron agreed and is currently writing up his conclusions.
Science sometimes falls victim to publication bias. Like Aaron, scientists are hesitant to publish studies that don't have dramatic results. Dramatic or not, the results are still science, and those results should be published. Personally, I think a study that says "Acupuncture does not have a measurable effect on pain under controlled tests" is just as valuable as a study that says "An aspirin a day will lower your chance of a heart attack". Both give me concrete, practical data that I can use to live a better life.
I probably took Aaron's report a little more personally than I should have. When I was in high school I worked with my father, a CDC microbiologist, on a science project. I hypothesized that military labs would be less accurate than civilian labs in testing for certain diseases. I took data from all over the world that had already been collected and just analyzed it in a way nobody had thought of before. My dad was really excited for me and even thought that I'd win the science fair. I got an "Honorable Mention". One of the judges said that I should have actually done some of the lab work myself for better marks. Never mind that I had a sample size of several thousand lab tests; she didn't think it was science because I used a computer rather than a test tube. Another said it would have been more impressive if I hadn't disproven my hypothesis. He actually suggested that I should have rewritten my initial hypothesis so it looked as if I had successfully predicted the result. Not only did I find this suggestion unethical, it is not what real science is all about. In the long run I didn't walk away feeling like my project was a failure. I learned that there is no statistical difference between military labs' and civilian labs' ability to detect certain diseases.
My dad eventually took my report and got it published in Morbidity and Mortality Weekly Report, a peer reviewed science journal. That meant more to me than the "Honorable Mention" from my science teachers.
I had to explain this to a Psychology Ph.D. candidate. She didn't understand why her advisor had recommended that she include certain aspects of our study in an article we were submitting for a conference. I asked her, "If you knew someone else had already tried what we did and failed, would you have designed the experiment any differently?" You could see her "aha" moment.