A crusty old professor in my previous department used to say that science was a dog competition: you might have raised a mutt, but with a few combs and ribbons you could still win Best in Show.

A paper, the wisdom goes, should be a carefully crafted narrative in which you present the truth framed in a logical, consistent and – above all – aesthetically pleasing manner. Of course, the crusty old prof wasn't advocating telling fibs, but he certainly encouraged his students and postdocs to present only the best images and to talk them up effusively in the results section.

I'm not convinced that even the snappiest prose could make my last western blot look like anything other than a piece of film trampled by a swarm of insects let loose in an ink factory. But while I agree that presentation is very important, I often feel uncomfortable with the lengths to which some researchers will go to massage their stories.

Take Vesicle Vera, for example. Vera has been grappling with a recalcitrant theory for over a year. She has carefully assembled two different lines of evidence that the biology is, indeed, playing out in the manner she hypothesizes. But a third line of evidence isn't anywhere near as solid. Over the months, she's spun out hundreds of electron micrographs, only about half of which display the sort of structures that would be consistent with her theory.

Her boss was pushing her to publish, but what was she to do with the iffy images?

Vera's instinct was to leave out the electron micrograph experiments altogether and come up with a different way of examining the morphology problem, perhaps one based on another sort of imaging technique. After all, the other two lines of evidence gave rock-solid results, so it was possible that the electron microscopy (EM) was failing to reveal the postulated structures more often simply for technical reasons. If she actually showed the duff pictures, it would undermine her hypothesis.

But Vera's boss was firmly convinced that, because some of the images looked good, the ones that didn't must have failed for purely technical reasons. In other words, there was no reason not to display a few nice ones as a representative figure.

‘Representative.’ Now there's an adjective that scientists have imbued with a meaning entirely different from anything the Oxford English Dictionary ever intended. When a researcher says an image is ‘representative’, she almost always means ‘the best one I could find after spending hours straining down the microscope and emitting nastier swearwords than a sailor on shore leave.’

Now, I see nothing wrong with displaying my prettiest blot for public consumption, provided that the overall message of the ones that didn't make the cut is consistent. I've even noticed a fad, in recent years, for embracing ugly blots in all their glory: spots, smudges, crooked lanes, the works. But images – or anything showing the results from an individual cell – are in a different category. Of course, no referee is going to expect a black-and-white result. But an author can come clean about the inevitable grey areas by quantifying the percentage of images that show the positive result, and then displaying a few nice pictures to illustrate the best-case scenario.

In Vera's case, of course, this was tricky: if the graph were to show that only half the images contained the expected structures, the referees would understandably question the conclusions of the experiment and, possibly, the entire theory. But if Vera were to follow her boss's advice and leave the graph out in favor of the best image, while at the same time downplaying in the results section the sporadic nature of the structures she was claiming, it would be dangerously misleading.

And what of the third alternative – pretending the entire problematic EM expedition had never happened in the first place?

As Golgi Gal pointed out to Vera yesterday in the coffee room, there are a million experiments that a scientist does not attempt. If Vera had never embarked on the EM in the first place, she'd never know that its results aren't entirely consistent. So what was the harm in just leaving it out of the paper altogether?

This is a concept I've been grappling with myself recently. We all joke about doing ‘one experiment too many.’ If you bend over backwards to prove a theory in multiple different ways, it's almost inevitable that you'll hit a snag. The problem, though, is that while an experiment you've never tried can't hurt you, you can't undo a discouraging result once you've generated it. To pretend it never happened is, in its own way, a little bit fraudulent.

But Golgi Gal says we have to be realistic: nothing works one hundred percent of the time, and researchers the world over think nothing of abandoning an experimental tack that just isn't bearing fruit. Provided all the other lines of evidence support your theory, you shouldn't feel the need to keep beating your head against the wall. And you certainly shouldn't feel obliged to parade the damning evidence of the failed attempt in your paper.

I confess that I'm still not certain of the best way forward. I'm sure Vera's boss is wrong to want to show only the pretty pictures and imply that they are universally applicable. But I can't decide: should I advise Vera to show the suboptimal results, or leave the experiment out and try something else?

Answers on an e-postcard, please (xgal@biologists.com).