26 Oct 2012

Post-Publication Peer Review: Where are we at?

In recent times post-publication peer review (P3R) has come to mean many different things. If traditional peer review represents a 'filter then publish' approach, P3R comes in two flavours: 'publish then filter' (primary P3R), or 'filter then publish then filter', where peer review still takes place prior to publication (secondary P3R). In the traditional print journal space, we think of P3R as the correspondence and letters that appear after articles are published. Within the health sciences and other disciplines, it can be scaled up to the comprehensive rigour of systematic reviews and meta-analyses. At the opposite extreme, in the fluidity of the digital world it has come to mean comments, discussion and filtering through various social media channels, or more formalised structures such as Faculty of 1000.

Who are the reviewers?
This is where the concept of authoritative, objective review starts to become separated from what is essentially just uncoordinated dialogue and discussion (and in many cases, noise). And yet there is potentially great value in this conversation, just as there is in the private correspondence between two researchers. When research is published it represents the germination of an idea, one which will ultimately be shaped and refined over time by both the original author and the wider research community. The difference between P3R and pre-publication review, however, is that in theory anyone (in an open access environment, at any rate) can appoint themselves as a reviewer. What does 'reviewer' even mean anymore?

You can't argue with Science, right?
In this context, opening up the review process may spark concerns over quality control; after all, how do we know whose opinion we can trust? Of late there have been several high-profile examples of 'experts' doctoring data. This is why open data sharing and archiving are so important: they allow results to be independently reproduced and validated (or not, as the case may be). Indeed, tracking replicability as a method of post-publication evaluation is the premise behind the Reproducibility Initiative, launched recently by PLoS, figshare and the commercial enterprise Science Exchange. But not every discipline or subject lends itself so easily to the scientific rigour of a lab, not to mention the costs that would be involved if every study were to be re-tested. This is why it makes sense to transfer part of the burden of review to a wider post-publication network in an open way.

Looking at LIS in particular, evidence summaries (critical appraisals of recently published research) are a regular feature in Evidence Based Library and Information Practice. Kloda et al. (2011)* find that these summaries often reveal more weaknesses than strengths, and building a culture of open public review in this way may help to improve the overall quality of research within the profession. P3R can also add value in other ways, for example by assessing the implications for practice rather than evaluating purely theoretical or methodological issues.

But whilst the research community in general may be interested in the value of P3R, are individual researchers? Several examples illustrate that article commenting has generally failed to take hold, in spite of many publishers' efforts to drive this activity online. Indeed, relying on this model alone, it is likely that a significant proportion of papers would attract no comments, reviews or discussion at all. At the moment there is no real incentive for individuals to contribute their time, even if it would benefit society as a whole.

Over time, traditional pre-publication peer review has proven itself to be relatively effective - even if inefficient - at what it is designed to do. Right now P3R can still play a valuable complementary role by supporting continuous dialogue and appraisal in an open way. Going forward, the structure and format of P3R can also be refined to help address some of its shortcomings (transparency, the authority of reviewers, incentive effects), for example by adding structured metadata to reviews (sketched below) or awarding some form of CPD credit to reviewers. Ultimately, however, we must all share the responsibility to critically assess and evaluate what we read, and opening up access to this discussion - through whatever channel - can only be a good thing.
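
To make the structured-metadata idea a little more concrete, here is a minimal sketch in Python of what a machine-readable review record might look like. Every field name below is a hypothetical illustration rather than any existing schema or standard; the point is simply that declaring reviewer identity, conflicts of interest, the type of appraisal and any credit claimed up front speaks directly to the transparency, authority and incentive concerns raised above.

    # Minimal sketch of structured metadata attached to a post-publication review.
    # All field names are hypothetical illustrations, not an existing standard.
    review_record = {
        "target_doi": "10.1234/example.5678",        # article under review (placeholder DOI)
        "reviewer": {
            "name": "A. Reviewer",
            "orcid": "0000-0000-0000-0000",          # placeholder identifier, for authority
            "declared_conflicts": [],                # openness about competing interests
        },
        "review_type": "implications_for_practice",  # vs. e.g. "methodology" or "replication"
        "recommendation": "minor_concerns",
        "cpd_credit_claimed": True,                  # one possible incentive for contributing
        "date": "2012-10-26",
    }

Keeping the record flat and declarative like this would make reviews easy to aggregate, filter and credit across different platforms, whatever channel they arrive through.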

*Kloda, L.A., Koufogiannakis, D., & Mallan, K. (2011). Transferring evidence into practice: what evidence summaries of library and information studies research tell practitioners. Information Research.

2 comments:

  1. That's quite an interesting post. I suppose one can understand why individuals would be wary of critiquing someone else's published work, especially if that critique could be seen as criticism by a competitor of some sort.

    Positive reviews could likewise be seen as back-slapping or reciprocal, and similarly suspect.

    An interesting recent development from one field is the Digital Humanities Now web publication.
    This "showcases the scholarship and news of interest to the digital humanities community through a process of aggregation", essectially producing a feed of highlighted blog posts, articles and news.

    The best of these are then included in the more formal Journal of Digital Humanities. Another noteworthy aspect of the process is that volunteer 'editors at large' are recruited from the wider community to filter and approve different pieces.

    For more details: http://digitalhumanitiesnow.org/about/

  2. Thanks for the comment, Padraic - nice to see a comment on a blog post on post-publication review :)

    I completely agree about the difficulties in comments being directly attributable - perhaps there could be some anonymised format such as a registration no. for each commenter, though it would reduce the transparency and openness.

    The DHN example is interesting - no doubt we will see several new potential models emerge over the next while.
