31 May 2012

UKSG Usage Statistics Training Seminar / DCU - 30th May 2012

This training seminar was a hands-on introduction to accessing, collating, utilising, presenting and marketing e-resource usage statistics. The focus was very much on COUNTER-compliant e-resource usage data.

COUNTER (Counting Online Usage of Networked Electronic Resources) is an international initiative serving librarians, publishers and intermediaries by setting standards that facilitate the recording and reporting of online usage statistics in a consistent, credible and compatible way. For a full list of COUNTER-Code-of-Practice-compliant vendors, refer to the register of vendors (journals and databases; e-books and reference works).

The COUNTER Code of Practice Release 4 is scheduled for December 2013.
For the full list of relevant Release 4 e-resource usage reports refer to Table 1 in the step-by-step guide for vendors.

To make the job easier, COUNTER reports can be harvested automatically via SUSHI (the Standardized Usage Statistics Harvesting Initiative protocol). For more information refer to niso.org. JISC offers a free, open-source beginners' programme with a web-based user interface to support the downloading and retrieval of COUNTER-compliant reports via SUSHI.
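As a rough illustration of what a SUSHI transaction involves, the sketch below builds the SOAP request envelope for a Release 4 JR1 (journal full-text) report in Python. The requestor/customer IDs and dates are placeholders and the message is simplified; consult the NISO SUSHI schemas for the authoritative format.

```python
# A sketch (not vendor-specific) of the SOAP envelope a SUSHI client sends to
# request a COUNTER JR1 report. Namespaces follow the NISO SUSHI schemas;
# the requestor/customer IDs and dates below are placeholder values.
from string import Template

ENVELOPE = Template("""\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:sus="http://www.niso.org/schemas/sushi"
               xmlns:cou="http://www.niso.org/schemas/sushi/counter">
  <soap:Body>
    <cou:ReportRequest Created="$created" ID="req-001">
      <sus:Requestor><sus:ID>$requestor</sus:ID></sus:Requestor>
      <sus:CustomerReference><sus:ID>$customer</sus:ID></sus:CustomerReference>
      <sus:ReportDefinition Name="JR1" Release="4">
        <sus:Filters>
          <sus:UsageDateRange>
            <sus:Begin>$begin</sus:Begin>
            <sus:End>$end</sus:End>
          </sus:UsageDateRange>
        </sus:Filters>
      </sus:ReportDefinition>
    </cou:ReportRequest>
  </soap:Body>
</soap:Envelope>
""")

def build_request(requestor, customer, begin, end, created="2012-05-30T00:00:00Z"):
    """Fill in the envelope; a client would POST this to the vendor's SUSHI endpoint."""
    return ENVELOPE.substitute(requestor=requestor, customer=customer,
                               begin=begin, end=end, created=created)

req = build_request("my-library-id", "my-customer-id", "2012-01-01", "2012-03-31")
```

Tools like JISC's client handle this plumbing for you; the point is simply that SUSHI is a structured request/response protocol rather than a manual download from each vendor's website.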

The morning session tackled the publisher (IOP Publishing) and library perspective (TCD) on the importance of e-resource usage data for information providers. Publishers use usage data to inform their editorial strategy, understand their user base (site design, measure effectiveness of technological change, understand discoverability and understand usage abuse), as well as their sales and marketing strategy.

Libraries employ quantitative e-resource usage data in order to:
  • evidence e-resource use
  • demonstrate value for money
  • help justify expenditure on a resource
  • benchmark
  • inform collection development decisions (which to cancel/retain)
  • influence subscription management processes
Key metrics include:
  • cost per use (total cost divided by total number of downloads)
  • use/cost per staff/student
  • journal collections performance
  • top performing titles
  • resource performance by type (e.g. e-journals, e-books, databases) and discipline
  • types of use (on campus vs. off campus)
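The cost-per-use metric above is simple arithmetic once the COUNTER download totals are in hand. A minimal sketch, using invented titles and figures purely for illustration:

```python
# Toy cost-per-use calculation; titles, costs and download counts are invented.
subscriptions = {
    # title: (annual cost in euro, total full-text downloads from a JR1 report)
    "Journal of Examples": (1200.00, 480),
    "Annals of Placeholders": (950.00, 38),
}

def cost_per_use(cost, uses):
    """Total cost divided by total downloads; None if the title saw no use."""
    return round(cost / uses, 2) if uses else None

report = {title: cost_per_use(cost, uses)
          for title, (cost, uses) in subscriptions.items()}
# A high cost per use flags a title as a candidate for cancellation review.
```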

A representative from JUSP (Journal Usage Statistics Portal) introduced their role and service scope. JUSP is a usage statistics service provider, i.e. a "one-stop shop" for libraries to view, download and analyse their usage reports from NESLi2.

The following challenges in collating and utilising e-resource statistics apply:
  • Meaningful analysis and collation of usage statistics is labour intensive
  • Merging/filtering COUNTER-compliant and non-compliant e-resource usage reports can be challenging (many vendors are still not COUNTER compliant; examples include Justis/Firstlaw, Financial Times, WARC etc.)
It must be noted that usage statistics do not say anything meaningful about teaching and learning.
Also, quantitative usage statistics should never be considered in isolation of qualitative feedback from users. In this sense, e-resource related management decision making is not an exact (number crunching) science.

After a good coffee-and-sandwich lunch, the practical element of the training kicked in. This was, from my perspective, very useful.

We were given a workbook (feel free to use it yourself) and a selection of dummy COUNTER reports to play with (go ahead and use those too).
Overall, the training represented an interesting peek into the world of retrieving and managing e-resource usage data.

29 May 2012

The New Professional’s Toolkit - Bethan Ruddock (Review)

What is initially most refreshing about The New Professional’s Toolkit, is that instead of starting off with traditional ‘librarian’ topics such as Collection Development or Reference Services, chapter one delves straight into project management. Indeed from a cursory glance down the table of contents (or even the title itself where there is a notable lack of any reference to libraries or information), this could be a book about any number of professions; there are chapters on teaching, training & communicating, measuring performance, marketing your services, using technologies, networking & promoting yourself, and managing budgets. Whilst the content of the toolkit is always applied to the library and information setting, it emanates from a best-practice management context – something that is often lacking in books aimed at new entrants to the profession.

Not that there are in fact many such books, which is largely the USP of Ruddock's book; it neatly bridges the gap between college texts and more specialised in-depth monographs. Consequently, the tone of the book is very much underpinned by a pragmatic ‘tips & tricks’ approach, yet not in a lightweight or superficial sense. The result is a text that is very easy to dip in and out of as a reference source, and to find further references, useful websites and other resources for use in practice. You won’t find lengthy narratives or detailed information here, but rather short paragraphs and bullet points – exactly what you want in a toolkit. The recurrent focus on self-study and informal learning in the workplace is particularly valuable. For example, Ruddock emphasises that much of the theory and principles of topics like project management, budgeting or marketing taught through formal training courses can also be learned by self-study using free online resources and websites, including those listed in the toolkit. It is a creative approach, and one that is particularly relevant in these times of rationalised training budgets.

The chapter on marketing services and engaging stakeholders is particularly well-focused. Here Ruddock discusses the advantages of pursuing a benefit-led approach: it’s no good telling people you have access to a new database, instead inform your users (and indeed more importantly your non-users!) how it will make their job easier or better, or how it will improve their research. Essentially, demonstrate to them how your services are directly relevant and of value to them as an individual. Ruddock also raises ideas such as the importance of timing your promotional activities, ‘engaging with the disengaged’ and using an embedded librarianship approach within an organisation to help build stronger relationships.

The chapter on technology is perhaps a little lightweight and formative for those seeking more advanced assistance however, and readers would likely be familiar with most if not all of the content from prior study. Indeed at just over two hundred pages in total, there is still significant scope for being a little more comprehensive in places, without it becoming a turgid task to plough through for the time-pressed reader. Furthermore, it was disappointing not to see any real advice in terms of writing annual reports or library policies, as many new professionals may have limited, if indeed any, experience in this regard.

The idea of a new librarian’s toolkit that is first and foremost about how to manage effectively rather than ‘how to do library stuff’ is timely and extremely welcome. The need to step outside the profession to borrow from and integrate best practice and ideas from other disciplines such as project management, marketing and IT is essential, even for new professionals, and Ruddock clearly recognises this. However, most importantly perhaps given its target audience, The New Professional’s Toolkit doesn’t drag you down a side-path of digressive background reading, but instead makes you want to actually get on with the job.

The New Professional's Toolkit by Bethan Ruddock is published by Facet (May 2012; 192pp; paperback; 978-1-85604-768-5; £44.95)

28 May 2012

What can be done about the impermanence of the web?

Until very recently we stored our pictures, documents and other treasured personal mementoes in photo albums, folders or, more recently, on hard drives and external portable drives. These days, many people are increasingly storing these items online using services such as Dropbox, Google Drive, Ubuntu One and many more. But what happens if you upload some of your personal files or writing to a website for free and that website no longer exists, or the company decides to delete it? Jason Scott and his Archive Team describe themselves as a "loose collective of rogue archivists, programmers, writers and loudmouths" who are "dedicated to saving our digital heritage".

They hear about failing websites and rush in to salvage what they can before user content is deleted and lost forever. They don't seek permission from the site owners and generally only try to save files that are publicly available. They use the analogy of a library closing down and burning all its stock to explain what they see as the reckless vandalism of internet operators:

"It's like going into the library business and deciding, 'This is not working for us anymore,' and burning down the library."
People tend to believe that anything they submit to the web will be kept safe by the various web operators. Vast amounts of data have been lost due to changes in ownership, attacks by hackers and abrupt shutdowns of services. GeoCities was closed down by Yahoo in 2009, resulting in the loss of 38 million homemade pages.

The Archive Team saved what it could salvage from GeoCities as well as Poetry.com, Flip.com and Friendster. Some people may be aware of the Wayback Machine, operated by the Internet Archive. This service allows users to see archived versions of web pages over time. Scott is an employee of the Internet Archive but operates the Archive Team independently.

Scott maintains that apparently silly or throwaway content found on sites like GeoCities can have unanticipated cultural value. GeoCities was for many people their first experience of uploading content to the web, and much of value may have been deleted when Yahoo pulled the plug on the service. Scott is deeply skeptical of sites like Facebook, Flickr and Google. Google has already withdrawn support for Google Labs, a site dedicated to experimental projects. He makes the salient point that free services simply cannot be relied upon to store your data in the cloud.

It makes sense for us not to be limited to one company for the data we store online. There are signs that companies are opening up their proprietary policies and making it easier for us to switch to competitors' products. Google Takeout allows users to export their files easily. Facebook also has a "Download a Copy" function for the photos and other content users have on the site.

Ultimately, it seems that we ourselves need to take responsibility for backing up our important data in a number of places. We simply cannot rely on free service providers on the web to save it for us. Just as information has died in the past (such as books going out of print), so it will continue to do so if users do not take steps to store it properly. Perhaps the Archive Team would be better served by educating internet users about the potential loss of files and what they can do to avoid it. Perhaps those external hard drives and photo albums weren't so old-fashioned after all.

27 May 2012

This business of tracking library usage…

…is challenging and time consuming. We all want to understand what our respective audiences are up to. Collecting realistic usage data should be easy and swift without taking up too much individual staff time.

The basics revolve around finding out about:
  • The nature and frequency of assistance we provide
  • Our busiest hours (hourly/daily/weekly/monthly)
  • How busy periods change over the semester
  • Reference staff training needs, based on patrons’ FAQs
  • Sources of reference questions (e.g. phone, in person, IM, email)
  • Effective staffing patterns
  • Collection development needs

Libraries want to access this information in a timely and efficient manner. The mechanism for collecting stats must be simple, and reports should be easy to compile from the raw data. Existing tools include Libstats and Desk Tracker.

You could also enter data into a spreadsheet from a form via Google Docs. The benefits of this approach include:
  • Simple data recording (minimise clicks via a grid of check-box options)
  • Instant stats compilation
  • Easy export to Excel
  • Quick customisation
  • Expandability (tracking variables)
  • Web accessibility
  • Built-in reference interaction time-stamp
  • Flexibility (enter more or less information)
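Once form responses accumulate in the spreadsheet, compiling the stats listed above can be as simple as exporting to CSV and counting. A minimal sketch, with assumed column names ("Timestamp", "Source", "Question type") that you would match to the fields on your own form:

```python
# Sketch: compiling desk statistics from form responses exported as CSV.
# The column names here are assumptions; match them to your own form fields.
import csv
import io
from collections import Counter
from datetime import datetime

sample = """Timestamp,Source,Question type
2012-05-21 10:05:00,In person,Directional
2012-05-21 10:40:00,Phone,Reference
2012-05-21 11:15:00,In person,Reference
"""

rows = list(csv.DictReader(io.StringIO(sample)))
by_source = Counter(r["Source"] for r in rows)   # phone vs. in person, etc.
by_hour = Counter(datetime.strptime(r["Timestamp"], "%Y-%m-%d %H:%M:%S").hour
                  for r in rows)                 # busiest hours of the day
```

The same tallies (busiest hours, question sources, question types) fall out of a few lines of counting once the recording side is kept simple.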

Sunshine Carter (University of Minnesota-Duluth) came up with using Google Docs as a straightforward solution to the challenge of effectively collecting library usage data. This approach requires no budget and also no assistance of IT staff. It took her about one hour to get the show off the ground (same here when I tried it).

For data collection, I simply created a communal access point via Blogger (by invitation only). A separate “Help” section explains to users the meaning and scope of each tracking variable.

Blogger Screenshot

The interesting bit is coming up with local tracking variables that mean something to your library. The idea is to keep this high-level and straightforward. The more complex the solution, the more time it will take reference staff to enter data.

Prior to implementation, think backwards. Here are some suggestions:
  • What questions about library use do we have?
  • What statistics would help answer those questions?
  • What data would get us those statistics?
  • Carefully customise data collection fields
  • Ensure common data collection/terminology/practice across all library sites/offices

25 May 2012

4th International Conference on Qualitative and Quantitative Methods in Libraries

Just back from a spell at the 4th International Conference on Qualitative and Quantitative Methods in Libraries (22nd - 25th May, Absolute Hotel) in Limerick, where I presented some of the findings of my PhD and chaired a session on technology in libraries.

The QQML conference offered a wide range of topics: digital libraries, information literacy, research methods, staff management, technology, library management, collection policies and historical aspects were just some of the headings. Attendees from over 50 countries enjoyed the sunny weather, listened to more than 250 papers and looked at about 20 posters.

Some of the sessions seemed to have been put together somewhat arbitrarily - not all the papers fitted into the slot. Also, I only found out that I was to chair a session when I read the final programme... Organisational issues aside, all presentations I listened to were of high quality and very interesting. Professional librarians, academics and PhD students shared their findings and introduced new projects running in their organisations.

Although I only attended for one day and the programme was quite packed I managed to talk to a number of colleagues from around the globe. This is always an enjoyable exercise.

The book of abstracts is available online.

Free webinars to promote open access publication across Ireland

NECOBELAC and its partners will deliver two free webinars in Open Access Publication next month:

Webinar 1, (8 June 2012, 11am – 12pm): “Open access publishing: policies, advocacy and best practices”

Webinar 2, (22 June 2012, 11am – 12 pm): “Repositories: management, policies, advocacy and best practices”

To register for Webinar 1, click here

The webinars will consist of a 30 min presentation followed by an opportunity for participants to post questions and comments to the chat room.

As I have been fortunate enough to be involved in NECOBELAC training events recently, I can recommend the above as an excellent (and free!) opportunity to learn more about open access publishing and archiving, and to discuss how to promote it effectively to the research community.

24 May 2012

Guest post: DigCurV report and digitisation needs

Guest post by Giada Gelli, Assistant Librarian, Cataloguing and Digitisation at National Gallery of Ireland

How many of you still consider the practice of digitisation to be a new, unexplored frontier? And how many institutions, especially small ones and particularly in Ireland, are still struggling with the basics when it comes to digitisation projects or strategies?

Digitisation, meaning the creation of digital surrogates of existing physical materials (I will not include here the other even less explored area of preservation of born digital information) is not a new practice in libraries, archives and cultural institutions in general.

Digitisation of collections has been around for a couple of decades; countless journal articles and handbooks have been written on the subject, and many workshops, seminars, conferences and symposia have been dedicated to it worldwide. Yet many institutions, small ones above all, seem to struggle with it still, while lack of knowledge and expertise on the matter seem to be constant barriers to the development of sound digitisation policies. But why is that? Why are the library, archive and cultural sectors not reaping the rewards of years of practice? DigCurV, the Digital Curator Vocational Education Europe Project, has just published a report on a survey conducted last year on training needs in digital curation and preservation. The results are as important as they are conspicuous. The main issues identified revolve around the usual suspects: lack of funding, (therefore) lack of training, (therefore) lack of expertise in-house and in the workforce at large.

‘In conclusion, the results of the research suggest a great demand for training in digital preservation and curation that arises from a severe lack of qualified staff in the field.’
On the one hand, it is true that anything to do with technology is so fast-paced and ever-evolving that there is a need for constant training and effective continuous professional development. It is also true that the area of digitisation is a very complex and particularly multi-faceted one. For example, take the skills that were identified in this report as key for digitisation staff:
‘According to the participants, the skills and competences required for digital preservation and curation cover a broad spectrum that ranges from technical expertise, IT knowledge and digital preservation-specific skills to social skills, management skills, and knowledge of the organisation, the subject domain as well as library, archival or information science.’
Nonetheless, why is it so hard to establish a clear set of national guidelines that could be followed by all libraries, archives and heritage institutions? Despite the different nature of each individual special collection, the different types of materials that can be processed during a digitisation project (books, prints, paintings, slides, photos, objects etc.) and of course the different budgets and kinds of equipment available, there should be a unique framework for digitisation at national level. This set of guidelines promoted by an advisory body (be it academic, heritage or sector-specific like the Library Association of Ireland or the Irish Society for Archives) could advise institutions on gold standards and best practices for digitisation projects of all kinds. I wonder if such a body already exists in Ireland? And if not, should it exist?

23 May 2012

23 Things for Professional Development

If there is one thing you should do for your professional development, it is to sign up for 23 Things for Professional Development!

23 things for CPD logo

This is a free easy-to-follow online course organised by professional librarians who will show you 23 tools ranging from how to set up a blog (!) to becoming a reflective practitioner. It runs for 23 weeks from 7th May to 8th October. You can sign up quickly - and you're ready to go!

I followed the course last year and really learned a lot. It's a convenient way of experimenting with new ideas. If you missed a week you can always go back to it. You don't need to be a techie either. If you have never used a resource before, e.g. Twitter, you can follow the step-by-step instructions. Further reading is encouraged, and the page gives additional links for each module.

The course offers great advice: "Considering your own brand" (thing 3), for example, is something that I haven't done so far. Dodgy photographs, go away!

The people behind this excellent resource have recently been rewarded for their efforts by becoming runner-up for The Credo Reference Digital Award for Information Literacy, presented at LILAC 2012.

Google Search Education

The importance of information literacy in the digital age is well known and now Google have officially recognised this. They have just launched Google Search Education. This site contains detailed lesson plans for educators to help their students become more proficient searchers. Detailed lesson plans are available by level and subject. Subjects include:

  1. Picking the Right Search Terms
  2. Understanding Search Results
  3. Narrowing a Search to Get the Best Results
  4. Searching for Evidence for Research Tasks
  5. Evaluating Credibility of Sources
As can be seen from the list of subjects, there is a hefty bias towards search strategy rather than evaluating the information found.

Google has hired search educators and librarians to create these lesson plans and this would make an excellent resource for librarians, particularly those working in a school. The following promotional video shows how librarians and search educators can move students from an overwhelming amount of data to a much more targeted search:

22 May 2012

TeachMeet – An exciting way to share ideas!

TeachMeets have their origins in the field of education, but Aidin O’Sullivan (Institute of Technology, Blanchardstown) and Greg Sheaf (Trinity College Dublin), on behalf of LirHEAnet, ran a fantastic session on Thursday 17th May 2012 at TCD.

The theme for the TeachMeet was “Innovative Training Methods”. Participants were not left disappointed as they were given some first-hand experiences and solutions to delivering training. 

Greg and Aidin kicked off the session with a speed training-ideas round, very similar to a speed-dating set-up, where participants get two minutes to talk to each other and then move on. It should also be noted that, as advertised, there was lots of cake for attendees! The sugar rush combined with the speed round got everyone very engaged for the start of the presentations. All of the presenters had excellent ideas; here are some of them, reviewed by me.

Una O’Connor from Athlone Institute of Technology was the first presenter, giving an interesting presentation on LibGuides and how she uses them to deliver user services.

Niamh O’Sullivan from the Irish Blood Transfusion Service talked about presentation techniques, including examples of slides gone wrong and helpful hints on how to improve slides for effective presentations. Niamh gives this talk to staff at the IBTS; this is a great idea for any LIS professional who supports staff research projects.

Susan Boyle, liaison librarian at UCD, presented a review of gaming techniques as tools for training. Susan's work in this area now forms a chapter in a recent publication, Information Literacy Beyond Library 2.0, edited by Peter Godwin and Jo Parker (Facet Publishing).

Prezi is cloud-based presentation software that opens up a new world between whiteboards and slides. The zoomable canvas makes it fun to explore and share ideas, and it is a nice alternative to PowerPoint. Áine Lynch (Institute of Technology, Blanchardstown) gave a fantastic demonstration, so much so that I couldn’t wait to get home to start playing around with it.

This was the first TeachMeet I have ever attended, and I will be on the lookout for others. I learned a lot that day and was re-energised by the collaborative spirit that librarians share. Hats off to Aidin and Greg for a great session!

18 May 2012

Old school social filtering: Harvard's Awesome Box

Harvard's Awesome Box is a really nice example of merging a more traditional side of the library with new technologies, and might offer an innovative solution for libraries that are not having much luck encouraging readers to recommend or review titles with apps like LibraryThing or OPAC plug-ins.

A simple box marked 'Awesome' is placed on the counter, just like a traditional returns box, where users can return books that they recommend. The titles are then scanned so other readers can see the most recently 'awesomed' books. It's a way that ensures that even those who aren't tuned in to the world of social media and networking can still connect socially with what others are reading and find useful. By placing the Awesome Box in the library itself, users interact with the library in a way that could help to build relationships and a sense of community, particularly in public libraries or school libraries. However, perhaps the Awesome Box may need to be rebranded a little for the Irish market.

More details on the technical aspects of the project are available in the Harvard Library Lab's Grant Proposal form.

17 May 2012

Public Library funding and E-Books

This survey-informed infographic (courtesy of onlineuniversities.com) makes for interesting reading. It provides a profile of American adults’ use of e-books. For example, 56% of the e-book reading public read for work or school, whilst the majority (80%) read for pleasure. When it comes to sharing a book with others, 69% prefer printed books to e-books. When “reading books for travelling or commuting”, 73% of readers favour the e-book format.  What’s most notable about the survey is that American e-book readers “are more likely to be younger than 50, have some college education and live in a household that earns more than $50,000”. It seems that the better-off and educated section of society enjoys the benefits of e-book services.

What’s that got to do with public libraries? Public library services expressly follow the principles of the UNESCO Public Library Manifesto, which states the following:
The public library, the local gateway to knowledge, provides a basic condition for lifelong learning, independent decision-making and cultural development of the individual and social groups. In order to fulfil this basic condition, the public library shall be free of charge
In other words, the public library operates on the simple premise of providing equal and unconditional public access to all information resources on offer.

If you look at the current budget situation of public libraries in Ireland, the picture is bleak to say the least. Across the thirty-two library authorities, local expenditure on library stock is set to decrease by a depressing 12.1% in 2012 compared to the previous year. Translation: a lot of materials that should be acquired in order to maintain an informed citizenry will not be acquired. In contrast to this development, the report states that the demand for public library services is steadily on the increase – catch-22.

The two charts below speak for themselves. The first one shows how small a fraction of overall public library funding is actually spent on stock acquisition (6% for 2012).

Source: Public Library Authorities BUDGETED EXPENDITURE for 2012

The picture is somewhat personalised by the second chart below tracking the negative trend of per-capita expenditure on stock acquisition over the past four years.

Source: Public Library Authorities BUDGETED EXPENDITURE for 2012
The picture in Donegal is particularly distressing. Expenditure per capita on stock has dropped from €3.10 in 2008 to €0.29 this year (see table 3.1 on p.15 in the report for a comparison to other county council expenditures). Certain branches there are literally facing funding collapse even though demand for services is ever increasing.
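For what it's worth, the scale of that fall is easy to verify from the two per-capita figures quoted:

```python
# Quick check on the scale of the Donegal drop, using the figures quoted above.
spend_2008, spend_2012 = 3.10, 0.29   # per-capita stock spend in euro
drop = (spend_2008 - spend_2012) / spend_2008
print(f"{drop:.0%}")  # prints 91%
```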

So what about the expenditure on e-books in public libraries? The report does not break down how much is spent on e-books (€7.6 million in 2012 for books, serials, multimedia and e-books). Sure, e-books are a great thing altogether, but I wonder whether it wouldn't be a good idea to re-assess the role of e-books within the context of heavy budgetary pressures (if ebook spending in Irish public libraries continues to grow in the future as has been seen in other countries). Access is conditional as users require Internet access. In 2007, 57% of all private households in Ireland had an Internet connection. This had increased to 78% by 2011. However, the Internet is primarily used by young adults (16-29). Only 21% of the age category 60-74 used the Internet in the last three months (see Information Society and Telecommunications in Households 2009-2011). The question is, what does the demand for e-books actually look like across all demographics of public library users in Ireland? To what extent is e-book expenditure justified under the current set of financial circumstances?

Joacim Hansson offers an interesting perspective on e-books in public libraries (SWEDEN Viewpoint: Considering e-books), albeit from a Swedish perspective. He argues that e-book publishers artificially position themselves as indispensable operators in the digital realm through aggressive commercial campaigns (I agree wholeheartedly, as I experienced this first-hand at an EBSCO roadshow recently). The creation of technological stress encourages public libraries to spend funds on this new technology, born out of the simple belief that their patrons somehow need it. In reality, such a need does not really exist at this point in time as the demand for e-books is still limited in Sweden.

Whatever about the rationale for introducing e-books in academic libraries (see various previous e-book related posts on this blog); it certainly seems, from a Swedish perspective at least and quite possibly from an Irish one too, that public libraries ought to consider carefully how to spend their allocated stock budgets. After all, “access to e-books isn’t a question of democracy. Reading is. Free loans in public libraries are. E-books aren’t” (Hansson, 2011).

In Ireland, funding for acquisitions ought to be protected and increased rather than cut. Regardless of  individuals' socioeconomic circumstances, it is essential that everyone has equal access to high-quality information resources via public libraries. This is especially important during times of economic turmoil when citizens require access to information more than ever.

Clearly, things will not improve any time soon on the funding side of things...

16 May 2012

Gold open access: getting over the line

Last week I attended the excellent NECOBELAC Dublin workshop on promoting scientific writing and open access publishing in Public Health. Thanks to the speakers, all in the IPH and NECOBELAC for their involvement and a great day (summary of the workshop). For librarians and indeed anyone who makes use of scientific research, the benefits of open access are obvious: seamless and costless access for the user; greater visibility and potential citation advantage for the researcher; more opportunities for data sharing and collaboration; and on a broader level, positive externalities and social benefits such as improved health literacy of citizens.

Green OA - a pragmatic solution?
Consequently it feels like we should be pushing an open door when it comes to promoting the OA routes to publication, but I feel the reality is still somewhat different. Green open access is an easy sell in theory, provided that submitting to repositories is a painless one-click process for authors (which it has become to an extent in some cases). For the 30% or so of publishers who do not allow self-archiving of post-prints, we need to ensure that authors are aware that they should keep a copy of the pre-print so it can still be archived. Self-archiving essentially needs to become a routine part of the scholarly publishing workflow.

When green OA works well (and this includes support for text- and data-mining) it's an effective and achievable way of opening up information. Indeed, some advocates argue that green OA is 'good enough' and that pure gold OA over-reaches beyond what is achievable in practice (in the short run at any rate). However, what about those publishers who don't permit self-archiving until after an embargo period, or who only support pre-print archiving, which is far from ideal? Or the confusion researchers face in knowing which version (pre-, post- or final) to upload to the IR, which may discourage them from self-archiving altogether? How do we solve the problem of extremely low levels of spontaneous self-archiving, and of low compliance rates even where self-archiving is mandated? What about researchers in smaller organisations who don't have access to an institutional repository for archiving their work? And what if publishers simply decide to stop allowing self-archiving because libraries are cancelling too many journal subscriptions, and their revenue drops to the point where the profits no longer justify it?

Gold OA - the ideal solution?
The ideal long-term situation is therefore arguably the gold open access route, supported by a sustainable 'author pays' (or, more accurately in practice, 'funder pays') model which reflects the real costs of publishing. But selling this model in the short term, and seeing it grow in practice, may not be so easy. Peter Binfield recently left PLoS ONE to start up Peerj.com, with the aim of driving down the costs of gold OA publishing for researchers. PLoS is a non-profit OA publisher charging relatively reasonable APCs of around $1,700 to cover its costs; Binfield now seems to be chasing a fee of around $100. However, I wonder whether even driving down APCs to the bare minimum will have any substantial effect on the decision to publish in OA journals. After all, are APCs the biggest barrier that gold OA faces?

Right now the exponential increase in the volume of scientific research, combined with the intense pressure to compete for scarce research funding and academic jobs, means that publishing in the ‘right’ journals is still a big factor for researchers. Indeed if researchers can now publish for $99 in gold route journals, no doubt this will lead to a much larger volume of research being published on a gold OA basis. Great news for the scientific community, society and libraries who won’t have to pay to provide access to such articles for their users. But if you are a researcher struggling to make your work stand out from the crowd and build your personal brand in order to enhance your reputation, you will probably still want to submit your manuscript to a more traditional, frontline, ‘prestigious’ journal – and then you are back to the green model in most cases. 

Getting over the line
A recent paper by Solomon & Björk (2012) found that the traditional factors of journal fit and perceived quality still greatly outweigh open access in authors' journal selection criteria. In fact, even among authors who had previously published in OA journals, only 60% judged OA as either "very important" or "important" when choosing where to publish their research. In this context, we may have a harder job than we think when selling the OA agenda to researchers.

Getting OA over the line requires cultural change at all levels of the research chain: funding agencies and authors explicitly recognising the costs of publishing OA; academic and research institutions actively acknowledging and endorsing the credibility of OA journals and repositories; and libraries working to increase awareness of the huge benefits of OA publishing among undergraduates, research students and staff. In the same way that traditional publishers spend a small fortune branding and promoting the value of publishing in their flagship titles, all stakeholders in the OA process need to invest similar time and energy in marketing and selling OA in order to compete with the commercial might of non-OA publishers. Not an easy task perhaps, but clearly a necessary and valuable one.

Solomon, D. J., & Björk, B.-C. (2012). Publication fees in open access publishing: Sources of funding and factors influencing choice of journal. Journal of the American Society for Information Science and Technology, 63(1), 98-107. doi: 10.1002/asi.21660

15 May 2012

Developing Your Professional Portfolio: Publishing Your Practice & Research

Today the ANLTC (Academic & National Library Training Co-operative) hosted a seminar for information professionals interested in developing their professional portfolios.

There was a range of speakers and attendees from many different libraries and types of LIS positions. Many of the speakers were previous attendees at the annual ANLTC writing seminars and gave insights into how the experience of attending gave them the confidence and knowledge required to start writing for publication. Cathal McCauley, University Librarian, NUI Maynooth, remarked in his presentation, "The Importance of Disseminating Your Research & Practice for Continuing Professional Development", that this was a very good example of effective CPD: participants had used the information gained in previous seminars to publish, and were now giving presentations on the experience.

Siobhán McCrystal from Stewart's Care Ltd. discussed her recent article, "Value Added: Case study of a joint-use Library", which appeared in the March issue of An Leabharlann, and in which she used the skills and tools she had learned from attending a previous ANLTC writing course and participating in the writing workshop organised by Helen Fallon.

Mark Tynan from UCD shared his writing experiences with the attendees. He encouraged everyone to consider writing about what they are involved in every day as a starting point for articles. Getting into the habit of writing (not just articles, but starting with tweets, blogs, book reviews, etc.) is a great way of keeping the momentum going. He also gave tips on developing poster sessions.

Marjory Sliney (Fingal Public Libraries) and I presented on the Associate and Fellowship levels of membership of the Library Association of Ireland, and how these can be achieved in part by writing for the profession.

Other topics covered by speakers included how to identify which journals and seminars to pitch your writing and research to, how to develop an abstract for submission, the pros and cons of writing with a collaborative partner, and how to develop productive writing practices.

As a past participant of this course, I found the information invaluable for getting started with writing for publication. The course has been organised every year for the past six years by Helen Fallon, Deputy Librarian, NUI Maynooth, and if you are interested in writing for publication to develop your professional portfolio I would highly recommend that you attend. For more information about ANLTC click here.

14 May 2012

Libfocus Journal Club - Improving the presentation of library data using FRBR and Linked data

The following post is the first of a new feature on libfocus: a virtual journal club. Please feel free to comment on the article linked below.

Improving the presentation of library data using FRBR and Linked data, by Anne-Lena Westrum, Asgeir Rekkavik and Kim Tallerås


When a library end-user searches the online catalogue for works by a particular author, he will typically get a long list that contains different translations and editions of all the books by that author, sorted by title or date of issue. As an attempt to make some order in this chaos, the Pode project has applied a method of automated FRBRizing based on the information contained in MARC records. The project has also experimented with RDF representation to demonstrate how an author’s complete production can be presented as a short and lucid list of unique works, which can easily be browsed by their different expressions and manifestations. Furthermore, by linking instances in the dataset to matching or corresponding instances in external sets, the presentation has been enriched with additional information about authors and works.

Full article available here

The authors used the building of a new public library as an opportunity to explore how their metadata could be used to improve the user experience. They took the work of one author and checked how easy it was to access his work through the catalogue. The central problem they found was that the existing catalogue did not distinguish between the author's different works, or between different versions of the same work. The authors' central hypothesis is that patrons generally only care about finding a title they are interested in, not a particular edition of that work.

They found that the existing MARC records were not fit for this purpose, so a tool was developed for automated FRBRisation of the existing records. Unfortunately, a huge clean-up of the MARC records had to take place before FRBRisation. When the new, enriched dataset was completed, the project developed a web application allowing an end user to browse this part of the collection by choosing from a much shorter list of works.

Their final conclusion is that while libraries may find it tempting to convert to modern principles of metadata quality (such as RDF), this may well be a tedious and long-winded undertaking. Their clean-up alone took 60 hours of labour, and they were only working on a very limited dataset. However, the authors did show that this process, however laborious, achieved a positive result for end users. To me it still seems a rather limited outcome for such an investment of hours. The key issue is that this was not a full library catalogue, only a tiny proportion of one; so the question is whether such an expenditure of time and effort is worthwhile to achieve modern metadata standards.
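For readers curious what the FRBRisation step does at its simplest, here is a minimal, hypothetical sketch (the field names and sample records are illustrative, not the Pode project's actual data or tool): edition-level records are clustered into works by a normalised author/title key.

```python
# Minimal sketch of FRBR-style grouping: cluster edition-level
# records under a single "work", keyed on normalised author + title.
# Field names and sample records are illustrative only.
from collections import defaultdict

def normalise(text):
    return " ".join(text.lower().split())

def frbrise(records):
    works = defaultdict(list)
    for rec in records:
        key = (normalise(rec["author"]), normalise(rec["uniform_title"]))
        works[key].append(rec)  # each record is one manifestation
    return works

catalogue = [
    {"author": "Henrik Ibsen", "uniform_title": "Et dukkehjem",
     "title": "A Doll's House", "year": 1992},
    {"author": "Henrik Ibsen", "uniform_title": "Et Dukkehjem",
     "title": "Et dukkehjem", "year": 1879},
]

works = frbrise(catalogue)
print(len(works))  # both editions collapse into a single work
```

A real implementation works against full MARC fields (and, as the article shows, requires substantial record clean-up first), but the principle is the same: the patron sees one work, and expands it into its expressions and manifestations only if they want to.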

11 May 2012

Is it really all about apps?

Somehow we have reached the point where downloadable apps are everywhere. There is now a very real expectation among mobile users that apps should be available as a matter of course from virtually all content and service providers, including databases, journals and news sources.

However, in spite of their popularity, downloadable apps are not necessarily the best solution for accessing content on the move. Firstly, they need to be continually updated; secondly, the number of different apps you need to install on your device quickly escalates into app overload. Arguably, many apps are popular and successful not because they are particularly effective or innovative, but because the websites and services in question are so poorly designed for mobile devices in the first place. But what if you could deliver an app-like experience through your web browser?

Responsive web design ensures that the layout of your site adapts to the viewing environment, so that it is also optimised for mobile viewing on tablets and smartphones. The FT recently showed that this alternative approach can work: it pulled its iPad and iPhone apps from Apple's iTunes store after launching a new version of its website written in HTML5. A browser-based HTML5 app means mobile users don't need to download device-specific apps, and delivers browsing that looks and feels the same across all platforms (Android, iOS and Windows). Publishers (the FT included) will no doubt prefer being in complete control of access to their content rather than depending on the fee structures and distribution platforms of Apple and Google.

This piece in Technology Review offers an honest insight into the magazine's recent failed experiment with apps:
"We sold 353 subscriptions through the iPad. We never discovered how to avoid the necessity of designing both landscape and portrait versions of the magazine for the app. We wasted $124,000 on outsourced software development. We fought amongst ourselves, and people left the company. There was untold expense of spirit. I hated every moment of our experiment with apps, because it tried to impose something closed, old, and printlike on something open, new, and digital."
Digital content typically works better with a web-like feel and structure that supports linking to and from other content, but apps often don't offer this experience for users. HTML5 does not necessarily mean that apps will disappear today or tomorrow, but as responsive design matures users will realise that apps are often a needlessly long-winded solution to a more fundamental problem.

10 May 2012

Join a committee and develop your transferable skills

Guest post by Margaret Irons

Becoming a member of the committee of the Academic & Special Libraries Section (A&SL) of the Library Association of Ireland (LAI) has been very beneficial for me, both professionally and personally.

If you are looking for a new challenge or a 'hands on' way of gaining professional experience and support, then I can highly recommend joining a committee and getting involved. Certainly it takes commitment and work, but the benefits make it all worthwhile and it is an invaluable way of developing your transferable skills. It is also a great addition to your CV.

I joined the committee in 2007, about six months after taking up my current role as Librarian at the School of Celtic Studies, DIAS. Joining gave me valuable access to a network of like-minded librarians. I also thought it was time to offer some of my own skills to a vibrant and rapidly growing Section. Working in a specialist research library is both challenging and fulfilling, but occasionally it can become isolating and introspective. So, in order to combat that, I decided to look outwards to gain extra knowledge, experience and perspective from colleagues in other academic and special libraries in Ireland.

The primary function of the Section is to provide a forum for discussion on issues relevant to academic and special librarians and information professionals. We carry out this remit through the provision of seminars, workshops, talks and visits. The task of the committee is to perform these functions on behalf of our members.

From 2009-2011, I took on an Officer role and became Secretary of the Section.
My duties as Secretary included (among other things):
  • working closely with the Chairperson and the Treasurer of the committee
  • maintaining communication with members of the Section via the A&SL Discussion list, website and social media
  • preparing an annual report of the Section's activities to be presented at the AGM and also a report to be included in the Annual Report of the LAI
  • maintaining an up-to-date membership list
The committee generally meets once a month. Our meetings take place after working hours and we conduct our communications mainly through email and online project management tools.

As a committee we work pretty hard and always aim to do our best for our members. However, we also have tons of fun and enjoyment, so it all balances out. I can now say that I know a myriad of experienced peers in this city, and nationwide, whom I can call on with any library-related query I can think of. I have also met many new people through our networking events and our Annual Seminar.

So, if you have even once thought of joining a committee that is of interest to you then I can highly recommend it. Although I work in a specialist research library with one library assistant, I feel like I have many colleagues and work as part of a vibrant library team.

Below I have outlined just some of the professional and personal benefits of being part of this committee.

Professional outcomes:
  • Team work: learning how to work effectively with people you don't actually work with. It is a voluntary committee; commitment is personal, and you get back what you put in.
  • Project management as part of a team: five annual seminars later and I feel like I could do it with my eyes closed (or could I!?)
  • Events/informal evenings: planning smaller events also takes time, from dealing with caterers to organising venues, speakers, attendees etc.
  • Officer/secretary role: being secretary offered an even more in-depth view of the structure and workings of the committee and highlighted the value of communication, both internally and externally.
  • Learning to work remotely with the team.
  • Learning to communicate effectively with our members.
  • Learning the importance of social media and online communications, and keeping up to date with developments in this area.
  • Sourcing and working with sponsors for our Annual Seminar.
  • Learning the importance of our Section's logo and brand, and the promotion of both.

Personal outcomes:
  • Networking: it is not as cringey as it sounds, or as difficult. Librarians are a friendly bunch, so don't be shy; come along next time (even our AGM is great for networking, I promise). Networking is extremely important and beneficial.
  • Knowledge transfer: just being around and working with some of the other committee members is amazing in itself. We work in such different types of libraries and come from many educational and working backgrounds. A dynamic and diverse bunch indeed.
  • Needless to say, I have made many new friends.

Getting involved in the LAI at A&SL committee level is a great way of learning and developing new skills, building networks and keeping up to date with current developments.

These are all my own personal and professional outcomes from the past five years on the committee; they may differ for everyone. So why not join a committee and work towards developing your transferable skills? I can certainly recommend it!

8 May 2012

Exploring Rich Interactive Narratives

Tell me a Story……

People have always told stories. Before the written word existed, storytelling was the primary means of passing on history, tradition and folklore. Paintings on cave walls were perhaps some of the earliest forms of illustration to accompany the storytelling experience; the pictures made the stories more interesting. Today we have many ways of experiencing stories. One such storytelling tool is the Rich Interactive Narrative (RIN), which allows stories to be built by bringing together traditional forms of storytelling with images, multimedia, text and narration to create an interactive experience.

The Decipher Project is involved in the discovery and exploration of cultural heritage through story and narration. Its major outputs are knowledge visualisations and displays for cultural institutions and museums. The project has an excellent Facebook page with almost daily updates on the project's progress and new applications.

Microsoft Research is also involved in the development of Rich Interactive Narratives. I was fortunate to be involved in this project during my internship for the Digital Humanities course at TCD, where I was given access to the development tool RIN Studio to create a narrative about a TCD student's experiences during the 1916 Rising. For more information about Microsoft RIN projects see here.

These narrative tools have the potential to give information professionals another way to present, display and share multiple media resources in order to tell an interesting story.

3 May 2012

Digitising legacy theses

Quite a few libraries hold theses in analogue, hardbound format. In many cases they are available to students for reference-only access in the library. Demand varies throughout the academic year, but tends to peak around the time when final-year students work on their projects. Half-empty shelves and frantic enquiries about the whereabouts of this or that thesis are typical tell-tale signs.

There are good reasons to get rid of hardcopies and replace them with digitised versions:
  • Create space on library shelves
  • Eliminate the risk of lost and possibly irreplaceable theses
  • Enable safe and perpetual access via an online archive
  • Facilitate multi-user access (fluctuating demand can be met)
  • Make theses easier to retrieve (full-text searchable, accessible via OPAC/online archive)
  • Reduce the risk of plagiarism (all works become transparent and traceable)
  • Reduce the shelving load (freeing up staff time)
  • Reduce the annual stock-take workload (less stock to count)

But there are risks too. Past students still own the copyright in their works, and chasing those students to get them to license their work for open distribution (via, say, a Creative Commons licence) is often impractical. For this reason, providing public, full-text open access to archived works can become an issue; one solution is to limit access to the campus IP range. From a conversion-process point of view, only theses robust enough to withstand the manual handling can feasibly be processed, so a residual set might still end up sitting on the shelves after all.

I'd like to emphasise that there is no single right or wrong workflow for realising a conversion project. Local conditions, such as funding, available equipment and time, play their part. Below is a brief outline of how I went about this job (launched recently and ongoing) with the tools available to me.

Available equipment:
  • Fujitsu fi-6140 multi-sheet, high-speed scanner
  • Adobe Acrobat Pro 9 (not the ideal choice for OCR as Adobe’s OCR correction function is dodgy at best; Abbyy is king)
Steps involved (high-level outline):
  1. Dismantle and prepare source document for scanning
  2. Scan at 300dpi (generic capture figure)
  3. Insert copyright disclaimer
  4. Adjust image size (crop image etc.)
  5. Retain an uncompressed/lossless PDF master file
  6. Optimise file size
  7. Convert to PDF/A-1b
  8. Publish item in repository
  9. Clean up the existing catalogue record held in the library management system and embed a full-text link pointing to the archived item
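The PDF/A conversion in step 7 need not be a manual Acrobat job; if Ghostscript is available it can be scripted. The sketch below is a hedged illustration, not the workflow actually used here: the file names are hypothetical, and full PDF/A-1b compliance typically also requires a PDFA_def.ps file and an ICC colour profile, omitted for brevity.

```python
# Sketch: convert a scanned master PDF to PDF/A with Ghostscript.
# File names are hypothetical; the command is only executed when
# dry_run=False, so it can be inspected first.
import subprocess

def pdfa_command(src, dst):
    return [
        "gs", "-dPDFA", "-dBATCH", "-dNOPAUSE",
        "-sDEVICE=pdfwrite",
        "-sColorConversionStrategy=UseDeviceIndependentColor",
        f"-sOutputFile={dst}",
        src,
    ]

def convert(src, dst, dry_run=True):
    cmd = pdfa_command(src, dst)
    if dry_run:
        print(" ".join(cmd))  # inspect the command before committing
    else:
        subprocess.run(cmd, check=True)

convert("thesis_master.pdf", "thesis_pdfa.pdf")
```

Wrapping this in a loop over a folder of masters turns steps 5 to 7 into a largely unattended batch job.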
A note on OCR...
One objective is to realise full-text indexing as well as possible. However, it is important to emphasise that OCR aims for find/search capability rather than a bona fide replica of the original source. OCR has little to no understanding of layout, format, or word/line/paragraph structure.

A note on PDF/A-1b...
The PDF/A standard defines provisions for the long-term archiving of electronic documents in PDF format across different platforms. All content in a PDF/A file must be self-contained, so that the file can be viewed or printed reliably over a long period of time. PDF/A-1b requires that all page content and resources necessary for displaying or printing the document are included in the PDF file; it does not require the page content to be structured. PDF/A-1b is therefore recommended whenever no structural information is present, as with scanned documents or PDFs created without structure information.

How long does it take to process a thesis from start to finish? Again, this depends on several variables, including the condition of the source documents, scanner speed, OCR speed, etc. I broke down and timed each step of the conversion process: with an average of about 55–80 pages per thesis, the job takes roughly 45 minutes per thesis (give or take).
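To put that figure in context, here is a quick back-of-the-envelope calculation; the backlog size and the half-day session are assumptions for illustration, not real figures from this project.

```python
# Rough throughput estimate: ~45 minutes per thesis, processed in
# one half-day (3.5 hour) session per working day. The backlog of
# 500 theses is an assumed example figure.
minutes_per_thesis = 45
backlog = 500
session_minutes = 3.5 * 60  # 210 minutes per half-day session

theses_per_session = session_minutes // minutes_per_thesis  # 4.0
working_days = backlog / theses_per_session
print(theses_per_session, working_days)  # 4 theses per day, 125 working days
```

In other words, even a modest backlog represents several months of part-time work, which is worth knowing before committing staff time to a conversion project.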

The bottom line, clearly, is that real effort is involved in shifting legacy theses from analogue to digital. However, the underlying benefits more than justify that effort.