24 May 2013

Report on UKSG 2013, Bournemouth, UK -- Part 2

Guest post by Anne Madden

Key themes:
UKSG 2013 was apparently the biggest yet. With 930 delegates, the organising committee are very restricted in the venues open to them. Bournemouth and the Bournemouth International Centre (BIC) proved acceptable and accessible and will be used again in 2016. In particular, the sponsor hall was well laid out and worked very well. Getting to a workshop did occasionally involve a bit of advanced navigation, though.

Part 1 of the report is available here

This (final) part covers the following themes:
  • Our clients under the microscope
  • The role of technology

Our clients under the microscope
I have to start this section with the outstanding presentation by Josh Harding, a self-confessed "paperless student". Josh is a postgraduate medical student; in his earlier student days he was the book-and-paper-bound stereotype. This time round, the tablet has set him free. He requires "lots of information, very quickly, very efficiently", and the iPad, in combination with a carefully selected set of apps, now meets all his information needs. Not only is it more practical, but by allowing searching and switching across all types of content, from notes to texts to lectures, it also adds enormous value in terms of speed and efficiency. He predicts it will be the norm among students within 18 months, and wondered whether we, the information providers, would be ready.

He proceeded to list the tools and resources he would use in a typical day. As they are very discipline- (and iPad-) specific, I won't list them here, but I think his presentation is recommended viewing not just for those in healthcare but for anyone providing services to students of any description.

Some generic apps include Notability, Inkling and GoodReader. He is a particular fan of Inkling, a "(relatively) smart" interactive textbook source which allows per-chapter downloads. As far as eBooks are concerned, in his view it's a case of "a lot done, more to do": smart should mean "adaptation based on learning analytics". His textbook of the future would record his progress, identify his weaknesses (as well as applaud his successes), compare his performance with that of his peers, and allow him to adapt the content to his needs (based on, for example, the amount of time he spends on the different chapters).

Notability is the handwriting app he uses, and into it he pulls his eBooks, lecture notes and other resource apps so that he can add his notes and links. These are then sent to the cloud in PDF format, where he can later access them using GoodReader. Obviously, connectivity is key to making all of this work, and cloud storage is central to it.

The librarian's role is to identify how to deliver these services to students, making them aware of what's available and how to access and use it effectively. He suggests that using "early adopter" students to advise their peers may be a way of achieving this. A major challenge is cost: in the same way as core paper textbooks and resources can be borrowed from the library, in the future the digital equivalents should be equally freely available. Other challenges include fragmentation (multiple platforms, multiple apps and variable quality) and PDFs whose DRM restricts their use with third-party apps.

This said, his wish list not surprisingly included:
  • Single platform, single source for textbooks which would be smart and interactive.
  • Institutional subscription to core apps; however, students are willing to pay a fee if they feel they are getting what they want, in the format they want, at a reasonable price.

The following speaker should have been Sian Bayne, discussing MOOCs, but as she couldn't make it, Ken Chad stepped into the breach and introduced the concept of "jobs to be done" (JTBD), the application of a commercial concept to the library world. Once you have carried out your user survey or needs analysis, you then focus on meeting the unmet needs of your clients.

You focus on the "sweet spot": ignore services that "competitors" can provide, stick within your capabilities, and wherever the two intersect with an identified unmet customer need, you have found your niche. Focus on ends, not means: "Not quarter-inch drills, quarter-inch holes" (Theodore Levitt). The jobs referred to in the title are the jobs your users need to get done.

It was at this point that we were treated to the Spice Girls’ “I’ll tell you what I want, what I really really want” with special emphasis on the phrases “make it fast” and “don’t go wasting my precious time”. Citing Clayton Christensen, he suggested that students don’t want textbooks, they want to pass exams, preferably without ever having to open a textbook – that is the “JTBD”.

To introduce the JTBD method to your library, you need to identify three things:
1. What is the problem/job?
2. Who should solve it?
3. In what circumstances/scenario?

In conjunction with your clients, you need to establish:
- Why this problem is important to them
- How they are currently handling it
- Why they chose that particular solution
- Their likes and dislikes of their chosen solution

Back at base, you then need to ask yourself:
- What do I already have that might work?
- In what circumstances will it be effective?
- What are its strengths and weaknesses?
- Why will my clients use this instead of what they are already doing?
- Does my solution address the full problem?

It may be that you need more context in order to fully understand the JTBD. It is not just about providing something to fill a gap: your solution must add value, or be seen by the client to add value.

The next three presentations related to recent user research studies, carried out by Lynn Silipigni Connaway of OCLC, Simon Inger of Renew Training, and Jo Alcock and Mark Brown of Birmingham City University, who gave a lightning presentation.

Connaway looked at the "digital student". Her findings endorse both the Josh Harding and the Clayton Christensen view; the alternative title of her presentation was "I don't think I have ever picked up a book out of the library to do any research – all I have used is my computer."

Students like Google ("reliable and fast") and Wikipedia. They believe the following:
- Libraries are quiet places with internet access. They contain many print books.
- “If two sources say the same thing, then it is most likely the truth”

Student nirvana would be a one-stop shop for everything they need to pass their exams. Is this what we deliver? Not according to this research. Students don't so much go to the library web page as stumble across it through their preferred search engine. They (and faculty) may land on a library page whose content is out of date and irrelevant, and not feel tempted to stay.

So, in the words of Ken Chad, what are the jobs to be done? Firstly, she suggests, digital students need digital librarians. The “Ask a Librarian” service should be more interactive and personalised. Customised help features should be triggered by scenarios that might otherwise make students turn away e.g. “no results” to their search.
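
As a thought experiment on what such a trigger might look like, here is a minimal sketch in Python; every name in it is hypothetical rather than any actual discovery product's API:

    from typing import Optional

    # Scenario-triggered help: intercept situations that might make a
    # student give up (e.g. zero results) and attach a tailored prompt.
    def help_for(query: str, result_count: int) -> Optional[str]:
        if result_count == 0:
            return (f'No results for "{query}". Try fewer or broader '
                    'keywords, or chat with a librarian now.')
        if result_count > 5000:
            return "Too many results? Limit by date or subject, or ask us how."
        return None  # no intervention needed

    print(help_for("epigenetics zebrafish pedagogy", 0))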

She used the term “the inside-out Library” – i.e. if students are happening on Library pages and resources through generic search engines, then the library must recognise linking as a key role. In an inspired example of this, faculty are now adding key citations to Wikipedia articles as they know this will increase the chances of students spotting them. In general, students feel guilty about using Wikipedia – even while citing and using the references from a Wikipedia article, they rarely mention Wikipedia as the source, and even more rarely will they read these citations. Their behaviour is “squirrelling” – storing to read later, but later rarely comes.

Another key factor is a lack of experience with databases and specialised search engines; the research showed that 62% of all researchers are self-taught. Mentor programmes are common in universities, and providing research training for mentors is one way of propagating awareness.

The lightning presentation by Alcock and Brown served up highlights from their recent survey on discovery and healthcare students. The full report is available to view here, and even based on their highlights it was obvious that their findings were very much in line with those of the other research presented:
  • Students get familiar with one particular database or search tool and use this for general searching
  • Their choice of search tool is usually scenario-specific
  • Google is used to “scope” a topic – to find buzz words which they then search in a peer-review database. This echoes findings from the NUIG breakout session on “simplifying the search experience”.
  • They showed a number of advanced searching skills, such as the ability to transfer search techniques across different databases, awareness of the different databases available, and knowing that they could refine their results, e.g. by limiting them by date.

Again on the topic of content discovery but on a global scale, Simon Inger gave an overview in 20 minutes of what is obviously a very substantial piece of research: “How readers discover content in scholarly journals”. “Readers” referred to researchers, students and librarians and all had their own network of routes to content.

These routes are of interest to both librarians and publishers – different readers will, he suggests, carry different value. While different sets of analytics are available to publishers and librarians, neither has the full picture. The current research attempts to address this and is a repeat of similar research carried out in 2005 and 2008 but on a far wider scale with 19,000 respondents.

To be fully comprehensive, they wanted representation from all geographical locations, sectors and professions. Given the number of responses, even a small percentage, such as the 1,700 responses (9%) from the medical sector or the 2,500 (13%) from students, still represents a very significant cohort.

The research covered preferences in search engines, discovery tools, apps and devices – all of which could be analysed by sector, geographical location, profession etc. Superficially, most of the findings come as no surprise:
• Different disciplines arrive at content in different ways.
• Different reader groups arrive at content through different routes.

A more in-depth analysis of the data provides more practical and translational results:
  • Social Sciences are more likely to link from a Library web page than any other sector
  • Linking from search results and from targeted email alerts is more popular in Medicine than other sectors
  • Social Sciences prefer journal aggregators (ProQuest/EBSCO) as a starting point in a search; however, their most popular starting point is an academic search engine such as Google Scholar.
  • Medical sciences are more likely to use an A&I database such as PubMed.
Changes since 2005:
  • Library web pages show a fairly significant increase in usage as a starting point in search
  • Journal alerts are declining in popularity as a means of discovering recent articles in a field
  • Bookmarks to journal home pages are on the up
  • The biggest supporters of Library web pages in “search” are Education Research and Humanities; physics and other hard sciences are least likely to use them
Library subscribed discovery resources:
  • In Life Sciences and Medicine, A&I resources are almost twice as likely to be used as a library web page or full-text aggregator; Humanities researchers are more likely to use full-text aggregators
Google in preference to Google Scholar by subject area:
  • Google Scholar most popular in Social Sciences, Psychology and Education Research with all other subjects preferring Google.
  • Physics and mathematics showed the strongest leaning of all towards Google over Google Scholar, the perception being that "Google Scholar only covers academic material".
Device used:
  • Desktops/laptops are still the most used across all sectors
  • Tablet and phone use is small but significant, and most popular in the medical sector, which supports Josh Harding's experience.

While the results are interesting, the underlying reasons for them are not clear. Do they match our own experiences? Is there a platform that can support all preferences?

"Simplifying the search experience", a breakout session by Monica Crump and Ronan Kennedy of NUIG, fits neatly here. Armed with a new library discovery service (Primo Central), they set out to provide their users with a library search interface second to none.

This was a very enlightening and engaging presentation; among the "sacred cows" to be sacrificed was the dedicated library group. They described how, at times, the group itself became the focus rather than the issue it had been set up to resolve. Secondly, not everyone understands library terminology: "print locations" translates to a student as "where I can find printers". Finally, selecting features for their pedagogical value is commendable but may just discourage usage.

Once the new interface was finalised to the satisfaction of the Group, it was launched and then followed up by a LibQual survey. They received mixed messages: “website is difficult to use” and “website is easy to use”. The LibQual survey was followed up by user observation studies to clarify the situation.

Some useful findings:
  • They ticked all the boxes that librarians like to tick. Unfortunately, these proved to be the wrong boxes.
  • Librarians aim for perfection; users want "good enough". Or, as Ken Chad would put it, can it just get the job done?
  • "You can please some of the people some of the time", etc. You will sometimes need to make a radical change and plan to manage any consequences.
  • Just because your system is feature-rich does not mean you should use every feature. Less can be more.
  • Library training makes for better searchers.
  • Channel the options rather than offering them all on one page: e.g. default to a single search box, with a "More search options" button below it.
  • To get maximum buy-in, choose when to make any major changes (not to coincide with the start of a new academic year).
  • As already mentioned in the Alcock/Brown presentation, Google is generally used to “set context” but once this is done, they move to the library-based resources.
  • Academics had a preference for “point and link” as opposed to locating through a search.
Having revamped the interface on the basis of this feedback, they have now repeated the LibQual survey; however, at the time of the conference the results were not yet available.

The final presentation in this section was a lightning talk by Eric Hunter on a current awareness service for his users at the RCSI. The service was developed to address both the need for relevant information at the point of need and the problem of cognitive overload. A number of different prototypes were tested, and a bulletin was created containing links to the content online.

The process is very labour-intensive, however, and given staff shortages it is not certain whether it is the best use of librarians' time. The service is currently under evaluation.

The role of technology
I must confess to a lack of familiarity with some of the acronyms, so some of "the science bit" may have gone over my head. I've provided links to the presentations wherever they exist and have listed the highlights below.

Anyone involved in library discovery technology should keep their eyes open for a joint UKSG/JISC study which attempts to identify knowledge gaps and best practice. Its title will be "Assessing the impact of Library discovery technology on content usage" and it is due out in September. According to Ed Pentz of CrossRef, UKSG acts as an "incubator" for projects; an example is the Usage Factor, which has now moved to COUNTER.

Liam Earney discussed the topic of knowledge bases. His main contention is that inaccuracies have crept into systems to the detriment of the information landscape. Some rather surprising examples: few publishers have a complete and accurate list of all their own publications, and ISSNs are sometimes missing from journals or inaccurately recorded. He made the catchy remark: "set your data free… but please tidy it up and make it presentable first"!
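
His ISSN point is one that a few lines of code can act on: the final character of an ISSN is a check digit, so many transcription errors can be caught automatically. A minimal sketch of the standard checksum (ISO 3297):

    # Weight the first seven digits 8 down to 2 and sum; the check digit
    # is (11 - sum mod 11) mod 11, with 10 written as "X".
    def issn_is_valid(issn: str) -> bool:
        digits = issn.replace("-", "").upper()
        if len(digits) != 8 or not digits[:7].isdigit():
            return False
        total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
        check = (11 - total % 11) % 11
        return digits[7] == ("X" if check == 10 else str(check))

    print(issn_is_valid("0378-5955"))  # True: check digit matches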

He spoke of two products which have been developed to capture the information required to manage e-resources: KB+ and GOKb, both available under open licence. Duplication of effort needs to be avoided; only minor points of differentiation exist between the four largest knowledge bases. The focus should instead be on open data (to ensure a single accurate version is in circulation), collaborative communities (again, to avoid duplication and encourage the sharing of data and metadata), enriched information (incorporating human awareness), and standards and best-practice guidelines, of which there are currently too many, too varied. Partners in the project include KBART, EDItEUR and PIE-J.
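
To make this concrete: KBART defines holdings lists as plain tab-separated files with standard column headings, which is much of what makes a shared, tidy knowledge base feasible. A minimal sketch of checking one for the gaps Earney described; the file name is made up, but the column names are from the KBART recommended practice:

    import csv

    # Read a KBART title list (tab-separated) and flag rows missing both
    # a print and an online identifier, one of the data-quality gaps
    # Earney described. "holdings.txt" is a hypothetical file.
    with open("holdings.txt", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            if not (row.get("print_identifier") or row.get("online_identifier")):
                print("Missing ISSN/ISBN:", row.get("publication_title"))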

The aim of this project, as in many of the others in their own way, is interoperability.

Next came the Journal Usage Statistics Portal (JUSP) and how it is used by the Open University. JUSP harvests COUNTER-compliant statistics via SUSHI. It also gives you options to "interrogate the content" in a number of ways, in order, for example, to assist in renewal decisions. The benefit of using one interface, as opposed to multiple publisher platforms, is obvious. Other uses include subscription negotiations based on price per use and assessing the value of "big deals"; it can also be used to indicate trends in usage. NESLi 1 & 2 publishers currently participate, while others have been contacted and asked to do so.
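
The "price per use" figure that feeds those renewal decisions is simple arithmetic once the COUNTER numbers are aggregated. A minimal sketch, with invented figures:

    # Cost per use = annual subscription price / COUNTER full-text downloads.
    # All figures below are invented for illustration.
    subscriptions = {
        "Journal A": {"price_gbp": 4200, "downloads": 12500},
        "Journal B": {"price_gbp": 3900, "downloads": 310},
    }

    for title, s in subscriptions.items():
        print(f"{title}: £{s['price_gbp'] / s['downloads']:.2f} per download")
    # Journal A: £0.34 per download
    # Journal B: £12.58 per download, a renegotiation candidate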

Also from the Open University was an account of how their ebook acquisition policy came about. They first establish the purpose of each text (research, core student text, etc.). For research, a single-user licence may be enough, but a course text might require a 300-concurrent-user licence, which is often not available through regular channels. This content can then end up on a separate platform.

The whole process has become very labour-intensive. They have ended up using five different ebook aggregators, and their biggest wish is for a one-stop shop for all ebook suppliers. Their likes include the ability to download to devices, printing rights, individual login options and appropriate formats. Dislikes were pretty much the opposite of these.

Mobile access was addressed by both the University of Surrey and the Taylor & Francis mobile team. T&F gave a short overview of the relatively new T&F Online platform, its features and benefits (content indexed in Google Scholar, an alerting service, etc.). Their web app is free and can be downloaded directly from their site. Key features include saving articles to a device, where they can later be read offline, and sharing via Facebook and Twitter. It is compatible with Android and BlackBerry as well as iPhone. A one-off "pairing" of the device is all that's required for access. The balance of the presentation related to how they promoted this new mobile service.

The University of Surrey gave a brief overview of how they integrated mobile technologies into the academic library. The aim: make their resources available on any device, anywhere. They decided not to go down the app route but to use what they already had as far as possible. Their initial focus was on staff, developing familiarity through hands-on sessions; feedback was very positive. iPads were then trialled, both to give on-the-go demonstrations to academic departments and for more interactive and practical roving user support.

They carried out a short focus group study to establish a list of technical changes required for their web page:
  • QR codes (these, however, were not hugely used)
  • mobile version of catalogue: positive feedback
  • mobile friendly sites for opening hours and study room bookings (functional changes, not just cosmetic ones)
  • making lists of what is available within subscribed resources, etc
The focus group also emphasised the need for the library to integrate with the institution, as students did not see it as separate from the institution.

In a very absorbing breakout session, Fred Guy and Adam Rusbridge of the University of Edinburgh discussed the long term availability of eJournals. Following a brief introduction on the archiving infrastructure scene, they introduced a panel of experts to address the questions of who is archiving our journals, how they are doing it, and what recent changes have occurred.

He mentioned a key resource: “The Keepers” (http://thekeepers.org/thekeepers/keepers.asp) – a registry of archives for eJournals. This registry is the result of a scoping study by Loughborough University and Rightscom Ltd. in 2008, “A scoping study for a registry of electronic journals that indicates where they are archived.”

Developments in this area can be tracked on the JARVIG (eJournals archiving interest group) website. Their action plan is being reviewed following a cost/benefit analysis in 2012. Funding for the project is provided by PEPRS and EDINA. The JARVIG role is to develop the infrastructure in partnership with stakeholders such as the ISSN registry (which, as was mentioned in an earlier presentation, is in need of tidying up). Another partnership is that between the libraries of Cornell and Columbia Universities (http://2cul.org/). "Trusted archives" include LOCKSS, CLOCKSS, the National Library of the Netherlands, the British Library, HathiTrust, the National Science Library and Portico.

Libraries should look at the preservation of their local collections; David Prosser and Lorraine Estelle both mentioned the JISC NESLi model licence, which includes archival rights. Licensed subscriptions provide current access to content but, as we have probably all experienced, once you cancel a subscription you frequently lose access even to the years you subscribed to. The NESLi licence (as mentioned in the earlier JUSP presentation) is being accepted by more publishers, and it is worth requesting its acceptance from all publishers when renewing.

Publishers must be persuaded to be included in the LOCKSS archive. Archives may be "light" or "dark": "light" archives permit not only access but also reproduction, lending and general circulation, while "dark" archives allow only local access and reproduction under certain conditions. Archived collections must be compatible with link resolvers and other search technology; content should be available continuously and at a cost that does not exceed that of print.

A side-effect of these online archives is the amount of shelf space they would clear; RLUK has set a target of clearing 100 km of shelf space.

For the future, a more joined-up approach is required. There is currently no single global solution, and there may never be one. Coverage has to be comprehensive; already, gaps are appearing.

Finally, a breakout session from Anita Wilcox of UCC on an open-source ERM system. The initial choice was based principally on budgetary constraints: when researching which serials management system to choose, CUFTS emerged as a viable solution.

The system was developed and is hosted by Simon Fraser University Library and partners, who provided ready assistance throughout the set-up period and after. For a small fee, they also provided assistance in the development of the interface. So far, it has equalled out-of-the-box solutions in almost all respects:
  • It uses CrossRef for linking
  • It provides a direct-to-article OpenURL link resolver (GODOT); see the sketch after this list
  • There is a licence tab which will identify licence-specific data (Athens, walk-ins etc)
  • User guides can be uploaded to the database links
  • The reSearcher suite journals portal and the CJDB database portal can be integrated with the library catalogue
  • There is a statistics portal where you can create your own SUSHI statistics
  • Ranking is allowed, so you can decide on the order of results display
  • It creates a full MARC record for all records in the system
  • It permits the uploading and integration of a local ILL form
  • The initial record table allows for input of item-specific data: licence, pricing model and subject information
  • A Provider record allows the tracing of source of individual items
  • Data can be imported in either PDF or doc format
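
To illustrate what the OpenURL resolver mentioned above actually consumes: an OpenURL is a standardised query string (NISO Z39.88) describing the item a reader wants. A minimal sketch of building one; the resolver base URL and citation details are illustrative:

    from urllib.parse import urlencode

    # Build an OpenURL 1.0 request in key/encoded-value (KEV) format for
    # a journal article. GODOT and other link resolvers accept requests
    # of this general shape; the base URL below is hypothetical.
    RESOLVER = "https://library.example.edu/resolver"

    citation = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.jtitle": "Serials",
        "rft.issn": "0953-0460",
        "rft.volume": "26",
        "rft.spage": "1",
        "rft.date": "2013",
    }

    print(f"{RESOLVER}?{urlencode(citation)}")
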
She finished by noting key pros and cons:
Pro: far greater control and rapid response
Con: no automated upgrades when publisher links are changed

The final presentations of the Conference were by Jason Scott and T. Scott Plutchak. I won’t even attempt to cover the presentation given by Jason Scott – you need to view this yourself – it’s worth it, he is a highly entertaining and irreverent presenter!

T. Scott Plutchak, a key player in the American OA movement and part of the Scholarly Publishing Roundtable, finished by echoing the words of Jill Emery: librarians tend to see publishers as adversaries in the battle to achieve democratic access to research. He suggested that both sides work from the same premise, the value of scholarly literature, and that it is time to dispense with misconceptions about each other and move forward by recognising the contribution of all parties.

His presentation can be viewed here.

And so ended UKSG 2013 – it was a truly enlightening experience and I feel very lucky to have been there. Sincere thanks to the Acquisitions Group of Ireland for making my trip possible.

2 comments:

  1. Anne, thanks for another fantastic post. Fantastic level of detail and sense of the conference. Excellent resource, and lots to think about, particularly the Simon Inger research which I must follow up.

  2. Thanks Michelle,
    it is a truly useful thing to write up a report on any event you attend. You analyse it while it is still fresh in your mind, get an objective view of it as you think about your audience, then embed it through the act of writing it down.
    That said, there are a lot of follow-ups that are still on my "to-do" list:)
