The Power of Negative Thinking

The Column, 01-08-2015
Volume 11, Issue 1

Incognito discusses how investigating negative data can lead you to an unexpected positive result.


This month I've been experimenting - I've been working with instruments and analytical methods to find a whole new way of doing something that will save time and resources as well as (hopefully) produce better analytical data. I don't get to do this very often, but it's always exciting when I do, and I realize how lucky I am to be in a position to undertake such "blue sky" research.



So, I was a week or so into the project, had carried out several expensive hardware re-engineering exercises, and had undertaken a lot of experimental work - that morning's data was in front of me. It was a defining experiment, one that would decide the feasibility of the project and reflect the quality of the decisions taken to that point. The results showed exactly the opposite of what we had hoped for or expected. The data demonstrated that pretty much everything done up until then had been a complete waste of time. It showed (in my mind at least) that I didn't have a clue about analytical science and should go home and reconsider my career options. It wasn't a good lunchtime.

I stared at the data for an hour (yes, I know, yet another luxury that few of us are afforded), unable to fathom what could have produced data that were the polar opposite of our expectations. I checked the data acquisition conditions, instrument settings, and configurations to make sure I hadn't made a mistake. With no real insight into the cause, I figured that configuring key instrument or hardware settings in exactly the opposite way (totally against intuition or scientific reasoning) might give us a better insight into the problem. I repeated the analyses and left the instrument to run whilst I did some "proper" work.

The data from this last, some may say desperate, experiment showed exactly what I had expected to find in the initial experiments. On further study, and with much consternation, it led me down a thought process that eventually brought me to the cause of the problem, a further re-engineering of the equipment, and some very successful experiments that I hope will result in the significant strides forward I described above.

This negative result caused me to think differently, against my perceived wisdom, against what I understood, against the "theory". The negative results could be explained and a perfectly reasonable "alternative" theory proposed, which has led to a successful result.

This is all very well, but when, during your routine work, did you last stop to really consider the cause of a negative result, an out-of-specification result, or an instrument failure? We are taught (I hope!) that negative results in science can sometimes teach us more than positive ones. Because I bothered to think about my negative result (admittedly because I had spent a lot of my employer's time and money on the work to that point), I was able to re-think the situation and come up with a solution.

Some may call it desperate tinkering - I know my old professor would have thought so! Of course, the trick here lies in knowing what is negative data and what is bad data. Negative data is the method failing to achieve a separation because the stationary phase chosen does not satisfactorily resolve the analytes under the conditions chosen. Bad data is putting the incorrect column on the system or making up the eluent at the incorrect pH. Negative data is the assay showing that the tablet contains an amount of active ingredient above the specification limit. Bad data is making up the test solution at the incorrect concentration.

The final piece of this particular jigsaw is to ensure our data are "within experimental error". When was the last time you estimated, or reported, that your data were within experimental error? When you give a high performance liquid chromatography (HPLC) or gas chromatography (GC) result to a colleague, what do they suppose is the error associated with that experiment? Do you know? Could you explain it to them? Do you only ever get asked about experimental error when the result is out of specification by a small amount, in the hope that the sample is fine and it's just a quirk of the analysis?
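As an aside, here is a minimal sketch of what "reporting experimental error" could look like in practice: a 95% confidence interval on the mean of replicate injections, calculated with Student's t. The replicate values, the library choice (SciPy), and the use of a simple t-interval are illustrative assumptions on my part, not anything prescribed in this column or by any particular pharmacopoeial method.

```python
# Sketch only: one way to attach an experimental error to an assay result,
# using a 95% confidence interval on the mean of replicate injections.
# The replicate values below are purely illustrative.
from statistics import mean, stdev
from scipy import stats  # assumes SciPy is available

replicates = [101.2, 99.8, 100.5, 100.9, 99.6]  # assay results, % label claim

n = len(replicates)
x_bar = mean(replicates)
s = stdev(replicates)                   # sample standard deviation
t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided 95% critical value
half_width = t_crit * s / n ** 0.5      # confidence interval half-width

print(f"Result: {x_bar:.1f} +/- {half_width:.1f} % label claim (95% CI, n={n})")
```

Quoted alongside the result, an interval like this at least lets a colleague judge whether a marginally out-of-specification value is genuinely negative data or simply within the noise of the measurement.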

Let us suppose that we have established that the data are negative and that the results are qualified by an informed estimate of error that confirms this negativity. What can we do with it? Learn from it, increase our personal experience, and progress as scientists. But is there a mechanism to deal with negative results so that your colleagues can also learn? Sure, the formulator goes away to make another batch of tablets, or the development chemist tries a different synthetic pathway - but what about your colleagues in the laboratory? Can they learn from your negative experience? Is there a mechanism for you to share the negative data to make your workplace more efficient?

I often come across (but rarely read, I have to admit) the Journal of Negative Results in Biomedicine, which is "an open-access, peer-reviewed, on-line journal that provides a platform for the publication and discussion of unexpected, controversial, provocative and/or negative results in the context of current tenets".1 Why don't we have one of these in analytical science?

I read with interest in The Economist last year2 that negative results account for 14% of all published scientific papers, down from 30% in 1990. I also read two interesting facts in the same article. Firstly, it is a rule of thumb among biotech venture capitalists that half of everything published in their application area cannot be replicated. Secondly, in 2012, Amgen were able to reproduce only 6 of 53 "landmark" findings in cancer research.3

I'll pose the question again in a wider sense - is it possible for us to share our negative results in a way that can provide a positive outcome for our local and global colleagues? Do we try hard to verify if we have generated a negative result or bad data? Do we measure, estimate, and report experimental error to verify if our data is negative or bad? Does our literature encourage the publication of negative findings to point the way to future success, or to close off paths which would otherwise be wastefully explored?

That's a lot of questions and I don't have answers to any of them, but I would welcome your thoughts and input. I know that if I hadn't acted on my negative result, I would not have explored alternative theories and options, and a project that ultimately proved successful (I hope!) would have been terminated. Sound familiar?

References

1. Journal of Negative Results in Biomedicine: http://www.jnrbm.com

2. "How Science Goes Wrong," The Economist, 19 October 2013: http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong

3. http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

 

