My mother always said that when life gives you lemons, make lemonade. Okay, she says a lot of corny stuff, my mother, but she's usually right. And clearly, a number of Sticky Wicket readers agree.

In my previous column (J. Cell Sci. 124, 2513-2514), I sketched out the dilemma of my colleague Vesicle Vera, who was in a quandary about how to write her paper. If you'll recall, she had two solid lines of evidence supporting her hypothesis, and a third line that wasn't as convincing: electron micrographs that supported her theory about half of the time, with the rest manifesting unusual structures that her model could not explain. Her boss was urging her to include the EM data, but only the ‘representative’ (read: supportive) images. Vera thought this was too misleading, but worried that showing such ambivalent data – which, after all, might reflect a technical artefact rather than a flaw in her model – might undermine her entire message.

When I polled you all for advice, I received a variety of thoughtful replies – with a surprising amount of consensus. All of you agreed that in science, it's always best to be as open as possible, so Vera's best course of action was to hold nothing back: the good, the bad and the ugly. This was, you thought, the most ethical way to go.

A reader I'll call Immunity Irene put it best:

“[To] claim one nice image is what you consistently get is fraudulent, and to pretend you never did the experiment at all is bordering on the decidedly iffy!”

Iffy indeed, Irene. So much for the morality of the issue: if Vera has to bite the bullet, how could she deal with the problematic data on a practical level?

Handily, one reader (let's call him EM Edward) offered step-by-step advice on how to proceed:

“Regarding your conundrum with the evidence lines, I would suggest the following (been there, done that).

  1. Be honest. You want to promote science, not to mislead deliberately; therefore…

  2. Show the nice images, but mention that these are the nice images, and maybe show an example of a bad one. You cannot say much about the fraction of material that shows the right thing versus the wrong thing, as EM is like describing the Earth by looking at a square metre.

  3. Propose a hypothesis for why not all your images are nice.

  4. De-emphasize the image evidence by leaning on the other lines of evidence.

  5. Conclude that there is some evidence to support your hypothesis, but mention that it is not completely black and white, and that further research may be needed to find out why it sometimes may not work.”

Oh, Edward. It sounds so straightforward when you put it like that – but doesn't it depend on how grumpy your referees turn out to be? One reader, calling himself Cantankerous in Cambridge, offered this fascinating bit of advice and anecdote on that very topic:

“I admit, I was once in a similar quandary myself. It was a really nice little story, with nice consistent data. Except for that one experiment, the results of which stood up and insisted on contradicting the rest of the evidence. So I know the overriding temptation to file it away in the lab book, put it in the drawer and never speak of it again. But do you know what? We didn't. Our conscience told us that this was downright dishonest, and against the very spirit of this great enterprise in which we are engaged. Or was it that we realised the referees might ask for the experiment, and we'd then look a bit silly (and less than credible) showing the contradictory result? In hindsight, who knows what the balance between the two arguments was – I'll let you decide (but please be kind).

As it turned out, the reviewers didn't pick up on the contradiction, seeming to accept our (possibly weak) explanation. Even better, a post-publication review of the work lauded us for our “honest reporting” of such spurious data. So it's all very straightforward, you see – honestly report your findings, draw the best-fitting conclusions from the data you have, and then let the rest of the field move on from there. Right?

Well, no. First, I can see that on this particular occasion, we were winners in the great referee lottery. Second, we published the paper in a journal without particularly glossy pages, and without a branded title. I suspect that we were rather lucky to get away with it.”

I tend to agree with Cantankerous here. Vera needs to come clean, but there might very well be negative consequences. The trade-off for honesty could be a rough ride in the publication derby. And he wasn't the only reader who could envisage dire consequences. This from Hypothesis Harold:

“[A] manuscript needs to retain a narrative and, if it is to find its way to the highest echelons of publishing (which could make a postdoctoral career), then I would imagine that clarity and ‘punch’ are the key factors. This must inevitably lead to folk leaving data out. More reason for looking at citations of a paper rather than the impact factor of the journal.”

But some readers, bless 'em, viewed Vera's frustrating data as lab lemonade: an unexpected outcome that could lead to more intriguing insights in the future. A reader I'll call Inositide Ian put it this way:

“These EM results could also be interpreted as something altogether more interesting, in that there are certain cellular conditions where these structures do exist and others where they do not. I think people are usually too quick to bury inexplicable data, but it is exactly these bits and pieces that show us the way to the next exciting revelation. Practically, I would include the data, showing that these structures exist but that their regulation is still not entirely understood. That makes for a great little paragraph for the discussion section, and possibly the start of the next project/paper. I believe articles should not only show what is completely figured out but also these little teasers of things to come. That is what makes it exciting.”

Now, tell us the truth, Ian – you're a beaker-half-full kind of guy, aren't you? But you're not alone: Hypothesis Harold agrees:

“While at the moment this is about the current hypothesis, these EM data could inform someone else's work in the future. Furthermore, these data might even have some greater significance for the underlying biology. Publishing them might also spare someone else from reaching the same conclusion and likewise leaving their EM data on the same experiment unpublished. Repetition is vital in science, but how can you know that you are indeed repeating an experiment if it was never published in the first place?”

Like Harold, Cantankerous in Cambridge also emphasizes that Vera's problem may actually be a problem for all of us:

“[O]f course, Vesicle Vera should include the data – the ‘representative’ images, and also the quantitation of the fact that actually, err, you only see this about half of the time. It doesn't sound like there is a good explanation for why this should be, but if it doesn't refute the rest of the story, why not put the data out there? Maybe the rest of the field can make more sense of it. Perhaps the experiment is telling us something that Vesicle Vera just doesn't have all the information to interpret yet. But maybe, somebody else does, or will. Isn't this the way science is supposed to work?”

Some of you used Vera's predicament to wax downright philosophical about the Meaning of Truth and the essence of the Scientific Method. Again, Cantankerous:

“Usually, to get papers published we all have to produce a battery of convincing, and above all entirely consistent results, with absolutely no frayed edges. The reason for this? That particularly fierce and relentless pit-bull terrier of modern biomedical science: the reviewers. They will snarl and tug away at these loose ends until the whole thing unravels, shaking their heads with a kind of frustrated rage that reduces your carefully crafted narrative into disordered shreds of random observations and disjointed technicalities. And more often than not, this tangle is accompanied by demands for even more data to cover every possible experimental angle that could support your conclusions.

Sadly, we all have to run this gauntlet, because the papers we publish are the only currency we have to pay for our escape from the torture of postdoctoral training into, well, the torture of an independent tenure-track position. And it seems to me that papers today have to be entirely self-contained and complete stories, detailing the full description and explanation of the phenomena we study. But how often does hindsight bear this out? How often do those beautiful and simple stories published in full colour on glossy pages turn out to be not quite as straightforward as they seem, when we start following them up in the lab? More often than not, I would say that history demotes all of our papers to a state of naivety – not wrong exactly, but certainly not painting quite the complete picture we thought they did at the time.

And do you know what? There's nothing wrong with that. As long as the experiments that we do are a reflection of the truth, a set of empirical facts that any other scientist can reproduce should they feel the need, then is it so bad if the interpretation turns out to be not quite what we made of it at the time? So I'm all for including the iffy data. Who knows what it will tell us down the line? Maybe nothing – but definitely nothing if we confine it to the lost land of the lab book. But then, this all depends on a referee seeing it that way. Which he/she probably won't, and so your paper will not come out in a shiny journal, and your career progression will take a big hit.

So, I certainly won't judge Vera too harshly if she decides to omit that loose end. But if she does decide to take the moral high ground, then good for her. And if she needs a fair and sympathetic referee, I'm more than happy to oblige. Just tell her to nominate Cantankerous in Cambridge.”

I'm loving the torture and dog-mangling metaphors there – images I'm sure all postdocs can relate to after a particularly long night in the lab. (Or is that just me?) I only wish I'd had all these wonderful thoughts when I was consoling Vera in the coffee room a few months back. But I am pleased to report that her story has a happy ending of sorts. She did manage to persuade her boss that they needed to show the contradictory micrographs and – drum-roll please – the referees took the news fairly well. Referee 3 – clearly an EM specialist – was not completely satisfied with the loose end, but suggested a tweak to the protocol that might resolve the ambiguity. Lo and behold, Vera tried the tweak, and it does indeed appear that the strange structures are an artefact. As we speak, she's busy tidying up the manuscript for revision, pretty confident that she's managed to address all of the referees' criticisms.

So, this time at least, honesty paid off in a big way. It's one small victory for Vera, and one giant victory for Science.

Beakers of lab lemonade all round, I'd say. (No, no, no – not that one: it's my running buffer!)