Who are the reviewers?
[Image: "You can't argue with Science, right?" Photo: AJC1]
In this context, opening up the review process may spark concerns over quality control; after all, how do we know whose opinion we can trust? There have been several high-profile examples of late of 'experts' doctoring data. This is why open data sharing and archiving is so important: it allows results to be independently reproduced and validated (or not, as the case may be). Indeed, tracking replicability as a method of post-publication evaluation is the idea behind the Reproducibility Initiative, launched recently by PLoS, figshare and the commercial enterprise Science Exchange. But not every discipline or subject lends itself so easily to the scientific rigour of a lab, not to mention the costs involved if every study were to be re-tested. This is why it makes sense to transfer part of the burden of review to a wider post-publication network in an open way.
Looking at LIS in particular, evidence summaries (critical appraisals of recently published research) are a regular feature in Evidence Based Library and Information Practice. Kloda et al. (2011)* find that these summaries often reveal more weaknesses than strengths, and building a culture of open public review in this way may help to improve the overall quality of research within the profession. P3R can also add value in other ways, for example by assessing the implications for practice rather than evaluating purely theoretical or methodological issues.
But whilst the research community in general may be interested in the value of P3R, are individual researchers? Several examples illustrate that article commenting has generally failed to take hold in spite of many publishers' efforts to drive this activity online. Indeed, relying on this model alone, it is likely that a significant proportion of papers would attract no comments, reviews or discussion at all. At the moment there is no real incentive for individuals to contribute their time, even if doing so would benefit the research community as a whole.
Over time, traditional pre-publication peer review has proven itself to be relatively effective - even if inefficient - at what it is designed to do. Right now P3R can still play a valuable complementary role by supporting continuous dialogue and appraisal in an open way. Going forward, the structure and format of P3R can also be refined to help address some of its shortcomings (transparency; the authority of reviewers; incentive effects), for example by adding structured metadata to reviews (sketched below) or awarding some form of CPD credits to reviewers. Ultimately, however, we all must share the responsibility to critically assess and evaluate what we read, and opening up access to this discussion - through whatever channel - can only be a good thing.
*Kloda, L.A., Koufogiannakis, D., & Mallan, K. (2011). Transferring evidence into practice: what evidence summaries of library and information studies research tell practitioners. Information Research.
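Purely to illustrate the 'structured metadata' idea above, here is a minimal sketch of what machine-readable review metadata might look like. This is an invented, hypothetical schema; none of the field names or values are drawn from an existing standard:

```python
# A minimal, hypothetical record for one post-publication review.
# Every field name here is illustrative, not part of any real standard.
review_metadata = {
    "article_doi": "10.1234/example.5678",     # placeholder DOI
    "reviewer_orcid": "0000-0002-1825-0097",   # public researcher ID (example value)
    "aspects_assessed": ["methodology", "implications_for_practice"],
    "recommendation": "minor_concerns",        # e.g. sound / minor / major concerns
    "cpd_credit_claimed": True,                # ties reviews to reviewer incentives
    "date": "2012-09-01",
}
```

Records like this could let reviews be aggregated, filtered by the aspect assessed, and credited back to named reviewers, which speaks directly to the transparency and incentive problems noted above.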
That's quite an interesting post. I suppose one can understand why individuals would be wary of critiquing someone else's published work, especially if that critique could be seen as criticism by a competitor of some sort.
Positive reviews could likewise be seen as back-slapping or reciprocal and similarly suspect.
An interesting recent development from one field is the Digital Humanities Now web publication.
This "showcases the scholarship and news of interest to the digital humanities community through a process of aggregation", essectially producing a feed of highlighted blog posts, articles and news.
The best of these are then included in the more formal Journal of Digital Humanities. Another noteworthy aspect of the process is that volunteer 'editors at large' are recruited from the wider community to filter and approve different pieces.
For more details: http://digitalhumanitiesnow.org/about/
Thanks for the comment, Padraic - nice to see a comment on a blog post about post-publication review :)
I completely agree about the difficulties with comments being directly attributable - perhaps there could be some anonymised format, such as a registration number for each commenter (a rough sketch of how such an ID might be generated follows below), though it would reduce the transparency and openness.
The DHN example is interesting - no doubt we will see several new potential models emerge over the next while.
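Following up on the registration-number thought above: here is a very rough sketch of how a platform might derive a stable pseudonymous ID for each commenter. It is purely hypothetical - the secret, function name and email address are all invented - and assumes the platform holds a private site-wide key:

```python
import hashlib
import hmac

# Hypothetical sketch: the platform keeps SITE_SECRET private, so each
# reviewer gets a consistent public handle without revealing who they are,
# while the platform itself can still trace a handle back to a real account.
SITE_SECRET = b"replace-with-a-long-random-secret"

def pseudonymous_id(reviewer_email: str) -> str:
    digest = hmac.new(
        SITE_SECRET,
        reviewer_email.lower().encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return "reviewer-" + digest[:8]  # short, human-readable handle

print(pseudonymous_id("jane.researcher@example.org"))  # prints reviewer-<8 hex chars>
```

Whether the gain in candour would outweigh the loss of transparency is, of course, exactly the trade-off raised in the comment above.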