This post arose in response to Stuart Ritchie’s response to a suggestion in an editorial “first published last year but currently getting some attention on Twitter” – that scientists should write their scientific papers as if they were telling a story, with a beginning, middle and end. Storytelling produces something entertaining, by definition – but it isn’t the same as producing knowledge. People build stories around what they already know, yet that knowledge, when it is first produced, isn’t and in fact can’t be reliably produced through acts of storytelling. This is Ritchie’s point, and it’s clearly true. Ash Jogalekar made a similar observation in a comment on Ritchie’s post on Twitter.
(This is different from saying scientific knowledge shouldn’t be associated with stories – or that it alone should be, a preference that the philosopher of science Robert P. Crease calls “scientific gaslighting”.)
Ritchie’s objection arises from a problematic recommendation in the 2021 editorial: that when writing their papers, scientists present the “take-home messages” first, then “select” the methods and results that produced those messages, and conclude with an introduction-discussion hybrid. To Ritchie, scientists writing their papers face little resistance, other than their own integrity, to keep them from cherry-picking from their data to support predetermined conclusions. This is perfectly reasonable, especially considering how the absence of such resistance manifested in science’s sensational replication crisis.
But are scientific papers congruent with science itself?
The 2021 editorial’s authors don’t do themselves any favours in their piece, writing:
“The scientific story has a beginning, a middle, and an end. These three components can, and should, map onto the typical IMRaD structure. However, as editors we see many manuscripts that follow the IMRaD structure but do not tell a good scientific story, even when the underlying data clearly can provide one. For example, many studies present the findings without any synthesis or an effort to place them into a wider context. This limits the reader’s ability to gain knowledge and understanding, hence reducing the papers’ impact.”
Encouraging scientists to do such things as build tension and release it with a punchline could be a recipe for disaster; the case of Brian Wansink in fact fits Ritchie’s concerns to a T. In the most common mode of scientific publishing today, narrative control is expected to lie beyond scientists – and, speaking as a science journalist, it lies with science journalists. Or at least: the opportunities to shape science-related narratives are available to us in large quantities.
A charitable interpretation of the editorial is that its authors would like scientists to take a step they believe to be marginal (“right there,” as they say) in terms of the papers’ narratives but which has extraordinary benefits – but I’m disinclined to be so charitable. Their words hew frustratingly, if unsurprisingly, close to the suggestion that scientists’ work isn’t properly represented in the public imagination. The most common complaints I’ve encountered are that science journalists don’t amplify the “right” points and that they dwell on otherwise trivial shortcomings. These criticisms generally disregard the socio-political context in which science operates and to which journalists are required to be attuned.
This said, and as Ritchie also admits, the scientific paper itself is not science – so why can’t it be repurposed towards ends that scientists are better off meeting, rather than one that’s widely misguided? Ritchie writes:
“Science isn’t a story – and it isn’t even a scientific paper. The mere act of squeezing a complex process into a few thousand words … is itself a distortion of reality. Every time scientists make a decision about “framing” or “emphasis” or “take-home messages”, they risk distorting reality even further, chipping away at the reliability of what they’re reporting. We all know that many science news articles and science books are over-simplified, poorly-framed, and dumbed-down. Why push scientific papers in the same direction?”
That is, are scientific papers the site of knowledge production? With the advent of preprints, research preregistration, and open-data and data-sharing protocols, many papers today are radically different from those of a decade or two ago. Especially online, and on the pages of more progressive journals like eLife, papers are accompanied by peer-reviewers’ comments, links to the raw data (code as well as multimedia), ways to contact the authors, a comments section, a ready-reference list of cited papers, and links to other articles that have linked to them. Papers deemed more notable by a journal’s editors are sometimes also published together with a commentary by an independent scientist on their implications for the relevant fields.
Scientific papers may have originated as, and for a long time remained, the ‘first expression’ of a research group’s labour to produce knowledge – and were thus perfectly subject to Ritchie’s concerns about transforming them to be more engaging. But today, given the opportunities available in some pockets of research assessment and publishing, they’re undeniably also sites of knowledge consumption – and in effect the ‘first expression’ of researchers’ attempts to communicate with other scientists as well as, in many cases, the public at large.
I think the 2021 editorial is targeting the ‘site of knowledge consumption’ identity of the contemporary scientific paper, and offers ways to engage its audience better. But if the point is to improve that engagement, why continue to work within, in Ritchie’s and the editorial’s words, a “journal-imposed word count” and structure?
A halfway point between the editorial’s recommendations and Ritchie’s objections (in his post, though more in line with his other view that we should do away with scientific papers altogether) is to publish the products of scientific labour by taking full advantage of what today’s information and communication technologies allow: no paper per se, but a concise description of the methods and findings, an explicitly labelled commentary by the researchers, the raw code, multimedia elements with tools to analyse them in real time, replication studies, and even honest (and therefore admirable) retraction reports where they’re warranted. The commentary can, in the words of the editorial, have “a beginning, a middle and an end”; and in this milieu, in the company of various other knowledge ‘blobs’, readers – including independent scientists – should be able to tell straightforwardly whether the narrative fits the raw data on offer.
All this said, I must add that what I have set out here is far from where reality is at the moment; in Ritchie’s words:
“Although those of us … who’ve been immersed in this stuff for years might think it’s a bit passé to keep going on about “HARKing” and “researcher degrees of freedom” and “p-hacking” and “publication bias” and “publish-or-perish” and all the rest, the word still hasn’t gotten out to many scientists. At best, they’re vaguely aware that these problems can ruin their research, but don’t take them anywhere near seriously enough.”
I don’t think scientific papers are co-identifiable with science itself – or at least they needn’t be. The latter is concerned with reliably producing knowledge of increasingly higher quality while the former explains what the researchers did, why, when and how. Their goals are different, and there’s no reason the faults of one should hold the other back. However, a research communication effort in which the modern research paper (an anachronism) has completely transitioned to being, among other things, a site of knowledge consumption is a long way off – but it helps to bear it in mind, to talk about it and to work towards it.