24 Feb 2012

What we talk about when we talk about ebooks


Books are easy to describe. They have a familiar structure and physical form, a sequence of pages, a constant order. Once printed, their content is finite and permanent. When someone says they are ‘writing a book’, they usually mean something with a beginning, a middle and an end. Ebooks, however, are not so easy to define.

Is an ebook simply an electronic version of a printed book? At one time this was probably true. Books were printed first and then made electronic as CD-ROMs or PDFs to offer added value, particularly in the case of technical books.

However, a shift away from this traditional workflow has gradually been occurring, towards a model where content no longer begins as a traditional book, but as an EPUB file marked up in XML. Replacing headings, titles and tables with tags yields a very different-looking ‘book’. In truth it is no longer a book as we know it, that is, a defined container.
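To make the contrast concrete, here is a minimal sketch (in Python, purely for illustration; the chapter text and the rendering function are invented) of how a heading and a paragraph become tagged markup of the kind found in an EPUB content document, which uses XHTML:

```python
# Hypothetical chapter content as structured data, rendered as the
# XHTML-style tagged markup used inside an EPUB content document.
chapter = {
    "title": "Chapter One",
    "paragraphs": ["Books are easy to describe."],
}

def to_xhtml(chapter):
    """Render a chapter dict as a minimal XHTML fragment."""
    lines = [f"<h1>{chapter['title']}</h1>"]
    lines += [f"<p>{p}</p>" for p in chapter["paragraphs"]]
    return "\n".join(lines)

print(to_xhtml(chapter))
```

The point is simply that once the content lives as tagged data rather than fixed pages, it can be reflowed, restyled or recombined at will.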

How do you cite a quote from a book with no page numbers, where instead there is an infinite pane of content? The reality is that fixed structures such as page numbers are no longer necessary, and this creates so many more possibilities. Content can be changed, augmented, annotated, refined, updated – in the same way that thoughts and ideas develop and evolve as a continuous, infinite process. Chapters can be added seamlessly, rather than requiring a new edition. Some people still like the permanence of printed books. They feel solid and reassuring. But such durability is only required in a minority of cases. In many other contexts fluid and dynamic content presents an opportunity rather than a threat: the opportunity for sharing and developing content and ideas, and making information a social experience which continues to change and grow rather than one which ends when the book closes.

There are certainly some forms, however, which still lend themselves to print and arguably always will: poetry; visually complex material; books where the physical experience and design are as much a part of the content as the information contained within them. These works will become even more valuable as pieces of art in both form and function, ultimately ensuring that printed books remain an intrinsic part of our cultural future.

21 Feb 2012

2012 Horizon Report

It’s 2012 now and the 2011 Horizon Report is yesterday’s news.

The new 2012 Horizon Report suggests six technologies with considerable potential for our focus areas of education and interpretation.

Time-to-Adoption Horizon: One Year or Less
  • Tablet computing (presents new opportunities to enhance learning experiences, e.g. for one-to-one learning, but also as a feature-rich tool for field and lab work, often replacing far more expensive and cumbersome devices and equipment. Tablets supplement smartphones and are not considered a replacement. Ronan posted a piece a while back about information literacy and the iPad)
  • Mobile apps (the ubiquitous presence of always-connected mobile devices means that higher education institutions are now designing apps tailored to educational and research needs across the curriculum. Check out Best of Mobile Higher EdWeb)
Time-to-Adoption Horizon: Two to Three Years
  • Game-based learning (research continues to demonstrate its effectiveness for learning, particularly with regard to its ability to foster collaboration and engage students deeply in the process of learning. An example is 3D Game Lab, a personalised learning platform that uses quest-based learning and game mechanics)
  • Learning analytics (refers to joining a variety of data-gathering tools and analytic techniques to study student engagement, performance, and progress in practice, with the goal of using what is learned to revise curricula, teaching, and assessment in real time. An example is SoLAR's Open Learning Analytics Course, a free online course that provides an introduction to learning analytics)
Time-to-Adoption Horizon: Four to Five Years
  • Gesture-based computing (enables students to learn by doing, i.e. by moving the control of computers from a mouse and keyboard to the motions of the body, facial expressions, and voice recognition via new input devices. An example is MudPad, which enables localised haptic feedback when interacting with a touchscreen device)
  • The Internet of Things (refers to a category of small devices or methods that enable an object to be assigned a unique identifier via IPv6. Smart objects are interconnected items in which the line between the physical object and digital information about it is blurred. An example is the use of RFID student cards for attendance tracking)

20 Feb 2012

Academic Librarians responsible for the probable "wanton destruction of printed books"?

I recently came across an interesting and provocative journal article (Storey, Colin (2011), "Bibliobabble? The surge towards a print-less e-library recasts academic librarians as “rare book engineers”", Library Management, Vol. 32 Iss. 1/2, pp. 73-84) that addresses the dangers of academic librarians destroying a huge amount of the printed word in colleges and universities around the world. Historically, the printed word has repeatedly come under attack from natural disasters (the burning of the library at Alexandria), military invasions (the Iraq National Library in 2003) and totalitarian regimes that organised public burnings of books they disapproved of (the Nazis in the 1930s and 40s). Somewhat provocatively, Storey maintains that it will be the supposed guardians of printed books and serials, academic librarians themselves, who will be responsible for the destruction of countless books as academic libraries move, seemingly inexorably, towards a print-free e-library.

Storey claims that the massive weeding programme currently underway to get rid of less-used volumes is being pushed by university administrators, many of whom have little interest in their libraries. Whole categories of books are being discarded; multiple editions and reprints of classic fiction may disappear completely. The author acknowledges that weeding has always been part of a librarian's job, and with good reason: editions were out of date and misleading; the subject matter was no longer relevant to teaching strategies; and print copies were worn out. Now, however, librarians are removing books because of the absence of currently perceived cultural importance. Storey believes that the sheer amount of weeding to be done means that thorough checking of the worth of individual titles will not take place:

As they have always done, academic librarians may discard such books based upon the absence of currently perceived cultural importance. Yet the sheer volume of material being discarded at this juncture militates against title-by-title, copy-by-copy micro-decisions on future importance and need. There is simply no time and few resources for such finesse.

Storey claims academic librarians may well be repudiating their roles as traditional conservationists as they dismantle collections acquired by their conservation-minded predecessors, all for the sake of short-term political and economic considerations within their institutions.

As we all know, weeding is a vital part of managing and maintaining a collection. Few people mind having their library's collection updated from vinyl to CD or VHS to DVD. However, the drive towards a digital library will mean difficult decisions have to be made about keeping certain editions of books. As a lover of the printed book, part of me shares the author's fear of the academic library going ever more digital and weeding out large parts of its printed collection.

However, there are a couple of points to take issue with. I believe few will mourn the loss of the printed serial. It is much easier for these to be maintained in databases for easy retrieval by students than to gather dust and take up space in a library. Also, the author gives very few practical examples of what librarians can actually do to successfully counteract (assuming they want to) university administrators' wishes that print collections be dramatically reduced. In an era of austerity and cutbacks for academics worldwide, it would take quite a brave librarian to launch a campaign that goes directly against their employer's wishes. Storey simply says that what they are doing runs against the ethos of their profession, without outlining a strategy to successfully counter the perceived threat. Storey also does not talk about the benefits of e-books to academic libraries. Space is at a premium in most libraries, and moving material online may free up space for newer, more relevant printed material to be purchased. Needless to say, if a book is available electronically, many students can access it at the same time instead of just a handful.

Overall, Storey makes a fine case for the preservation of existing print collections at academic libraries, but one wonders, as digital technology increasingly encroaches on reading habits, whether he is fighting a losing battle.

17 Feb 2012

The value of metadata

In recent days I have come across two different examples illustrating the importance of getting metadata 'right' in order to make digital content easy to find and discover.

More and more authors are now archiving their research in institutional repositories. In theory this 'green route' to open access makes research considerably more visible, but in practice this depends largely upon how Google indexes the content from a given IR. The authors of a paper recently published in Library Hi Tech (1) suggest that IRs typically have lower indexing ratios because they use the Dublin Core element set rather than the metadata schemas recommended by Google Scholar. Arlitsch & O'Brien argue that Dublin Core cannot adequately express bibliographic citation information, and found that after transforming the metadata of a subset of papers from the University of Utah's IR in line with GS recommendations, the indexing ratio increased to over 90%.
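As a rough illustration of the kind of transformation involved, the sketch below (the record and the mapping are invented for illustration; real crosswalks are considerably more involved) maps a few Dublin Core elements onto the Highwire Press-style `citation_*` meta tags that Google Scholar's inclusion guidelines recommend:

```python
# Hypothetical crosswalk from a Dublin Core record to Highwire
# Press-style <meta> tags of the kind Google Scholar indexes.
DC_TO_HIGHWIRE = {
    "dc.title": "citation_title",
    "dc.creator": "citation_author",   # repeatable: one tag per author
    "dc.date": "citation_publication_date",
}

def crosswalk(record):
    """Return HTML <meta> tags for each mapped Dublin Core field."""
    tags = []
    for dc_field, value in record.items():
        hw_name = DC_TO_HIGHWIRE.get(dc_field)
        if hw_name is None:
            continue  # unmapped fields are simply dropped in this sketch
        values = value if isinstance(value, list) else [value]
        for v in values:
            tags.append(f'<meta name="{hw_name}" content="{v}">')
    return tags

record = {
    "dc.title": "Invisible Institutional Repositories",
    "dc.creator": ["Arlitsch, Kenning", "O'Brien, Patrick Shawn"],
    "dc.date": "2012",
}
for tag in crosswalk(record):
    print(tag)
```

The key difference is that the `citation_*` names make the bibliographic role of each value explicit, whereas generic Dublin Core elements leave Google Scholar to guess.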

The value of metadata can be seen in a different context in a recent white paper from Nielsen (2). Nielsen is a commercial company selling a product - enhanced book data - so it obviously has vested interests, but the paper nonetheless offers some interesting data on the links between metadata and sales. Making sure content is easy to find is not just about research impact and citations; in the book industry it is also about sales. The evidence presented by Nielsen indicates that more is definitely more when it comes to the link between metadata and book sales. Books in the top 100,000 in 2011 which had complete BIC Basic elements, including a cover image (one of the most crucial elements), typically experienced higher average sales than those without. Furthermore, adding enhanced descriptive elements such as long and short descriptions, author biographies and reviews (available via Nielsen's Bookdata Enhanced service) was also associated with increased sales: publishers who subscribed to the company's enhanced metadata services experienced 11% growth in volume sales year on year, with 44 out of the 65 publishers in this category seeing positive growth in volume sales. This compares with an overall market decline of 2.7% (obviously correlation doesn't necessarily equal causation!).

So it is not just librarians who are interested in metadata, but ultimately publishing CEOs and their accountants as well.

(1) Kenning Arlitsch, Patrick Shawn O'Brien (2012), "Invisible Institutional Repositories: Addressing the Low Indexing Ratios of IRs in Google Scholar", Library Hi Tech, Vol. 30 Iss. 1

(2) White Paper: The Link Between Metadata and Sales, by Andre Breedt, Head of Publisher Account Management, and David Walter, Research and Development Analyst, Nielsen, 25 Jan 2012.


14 Feb 2012

Google 'improves' searching for health information

And I say 'improves' for obvious reasons...

Usually when you enter a list of symptoms into a Google search box, you end up with a hundred different diagnoses before you even get to the second page of results. However, people still seem to love doing exactly that; so much so, in fact, that Google has introduced a new algorithm to make it even easier to 'self-diagnose'. Now when you enter a typical symptom as your search keyword(s), a list of 'possibly' related health conditions will appear at the top of the page. You can read more about it on their official search blog, where they claim they are "improving health searches because your health matters."

What is Google doing? Giving users what they want? Or doing more harm than good by 'helping' people put a label on their symptoms before they have even got as far as a Wikipedia entry (let alone a PubMed Health plain language consumer summary)?

It is exactly this kind of thing that gives the idea of using Google to look up health information a bad name. There is no mention anywhere of the need to critically appraise the information you find, no caveats about checking whether the information comes from a reputable source or from a commercial pharma company pushing a product. Google can be a great tool for finding health information, but consumers often put so much trust in the search engine that they are likely to instantly attach their symptoms to one or more of the 'possibly' related conditions suggested by Google. Maybe Google should be taking a step back on this one instead of adding fuel to the cyberchondria fire.

11 Feb 2012

HSLG Annual Conference 2012

This year’s HSLG conference in Athlone offered delegates a unique opportunity to hear the key findings and recommendations of the recently launched SHELLI report delivered by the Loughborough University research team themselves. Dr Janet Harrison and Claire Creaser highlighted the three strategic areas which should be priorities for health science libraries: increasing visibility; identifying appropriate KPIs & systematically collecting evidence on the impact of libraries on healthcare and patient outcomes; and staff & service development. A workshop session then explored delegates’ views on the possible ways of achieving these recommendations in practice, including ways of encouraging clinicians to provide regular feedback on how they use the information we provide.

Sarah Glover and Jenny Kendrick from the National Institute for Health & Clinical Excellence offered a fascinating insight into the rigorous and complex process of carrying out systematic review searching for both interventional procedures and shorter clinical guidance. It typically takes five working days to devise and assess the most appropriate MEDLINE search strategy for each clinical question, and this is then replicated for other databases including EMBASE and the Cochrane Library. NICE also make use of the InterTASC ISSG Search Filters. For many in attendance, including myself, literature searching represents a significant component of our day-to-day workload, and I picked up some interesting tips, including the value of adding a narrative to search strategy documentation.

A series of lightning presentations by HSLG members encapsulated a flavour of the projects currently being undertaken in health science libraries across the country, including the successful introduction of a new bibliotherapy collection for staff in Tallaght Hospital Library (Jean McMahon) and a Pathways to Learning (PAL) initiative to extend library access for users in the Midlands (Michael Doheny, AIT). Anne Madden (SVUH) explained the concept of 'Teach Meets' as a way of sharing expertise in a more informal setting - particularly relevant for those finding it difficult to attend more formal CPD and training events. A couple of different Teach Meet groups have already got off the ground in Dublin and the Limerick/Shannon region. Nicola Fay offered her solution to managing the difficulties involved in purchasing journals across the HSE Midlands region, Niamh O’Sullivan from the IBTS showcased a novel way to promote library services through a children’s art competition, and I presented an overview of some useful free resources for producing online tutorials and e-learning content.

The conference concluded with Louise Farragher’s discussion of her emerging role as an embedded librarian in the Health Research Board, which underlined the importance of building relationships with researchers. Connecting people with information forms the backbone of any library’s strategy, however Louise’s experience served as a reminder that perhaps sometimes we need to focus less on the information and more on the people.

The conference also provided a useful opportunity to catch up on new products and resources. Clinical search tools which bridge the gap between summarised point-of-care products like UpToDate and traditional databases like MEDLINE appear to be the latest trend. Elsevier’s forthcoming ClinicalKey will allow users to search for and access the full text of all ‘clinically relevant’ Elsevier content (I wonder what their definition of ‘clinically relevant’ is?). Ovid’s recently launched OvidMD product looked particularly interesting. The interface allows users to search MEDLINE and a selection of Current Opinion journals, and the results are semantically ranked to ensure clinical content is prioritised above research-oriented material. Libraries that subscribe to UpToDate can also integrate it with the OvidMD interface to provide users with a single solution for both point-of-care queries and more detailed, in-depth information for clinical practice.

In short, another excellent conference organised by the hard-working HSLG!

6 Feb 2012

Print-on-Demand Book Machines

The print-on-demand Espresso Book Machine is now making inroads in libraries across America. The machine can print, collate, cover and bind a book in a few minutes. It has access to EspressNet, a database holding about four million titles that are in the public domain. Many of these are from Google Books, but publishers such as HarperCollins, Simon & Schuster and Macmillan have made some of their back catalogues available as well. Three public libraries have already installed the machines, as well as several bookshops and two academic libraries. The machine also gives aspiring writers a way to have a print run of their book produced, provided they are willing to pay the cost. The video below gives an idea of the Espresso Book Machine process:

There are some exciting potential advantages to the book machines: students being able to print off a college book that is available from their institution but not at their campus, without waiting for an inter-campus/library loan; students potentially being able to print off selected chapters from different books on required reading lists instead of photocopying from titles only available in print; and, for book lovers in general who may not wish to join the seemingly inexorable rush towards e-readers, a way of combining the best of the print and digital worlds.

Detractors may point to the large cost of acquiring a machine, and purists may wonder where libraries are heading: are we going from being a repository of knowledge to a digital vending-kiosk operation?

4 Feb 2012

Bad Science?

I came across this infographic via the excellent Communicate Science blog, and it certainly makes for interesting reading. A couple of claims (taken from a variety of sources, including a PLoS ONE systematic review) in particular stand out:

  • Misconduct rates are highest among clinical, medical & pharmacology researchers
  • In a sample of 281 clinical psychology papers, 15% contained an error which would have changed the conclusion

As a medical librarian, the former is obviously a cause for concern. Promoting critical appraisal is a key element in evidence-based practice, but how can you tell if the original data is falsified or fabricated? You can't of course, unless all raw data is made publicly available and archived in a transparent way (and even then, it may still be difficult). The Dryad consortium offers a hopeful model for the future.

Given that a number of the statistics are based on surveys and self-reporting, it is likely that, if anything, some figures understate the true extent of the problem. If one in three scientists admits to using questionable research practices, I wonder what the total really is, including those who don't admit to doing so when surveyed about such a sensitive topic?

If it is the pressure to publish the 'right results' in the 'right journals' to obtain funding which is in part fuelling this misconduct, then high impact journals like NEJM may be particularly vulnerable to researchers who massage data to further their career. In the world of medical research, recommendations and practice change rapidly. Last year a study in Archives of Internal Medicine examined the frequency of medical reversal by taking a sample of the articles published in the NEJM in 2009. The authors reviewed the conclusions of original research studies published during the year and found that:

  • 49% reported a new practice that was better than current practice and
  • 13% reversed a current practice

These figures underline the importance of new research in the field of medicine, and the very real impact it has on clinical practice. However, if, as 'Bad Science' indicates, a proportion of such studies (for clarity, I am referring to studies generally, rather than those in the NEJM sample) are likely to be predicated on false data or questionable research practices, how do we really know when practice should be changed? Where do you draw the line in investigating the rigour of what we already apparently 'know' to be true at the expense of innovation? These issues are complex enough without the threat of bad science hanging over the results as well.

Bad Science infographic created by Clinical Psychology