26 May 2015

... Librarians are a lot like DJs...

 

I don't tune in to night-time radio very often, but the other night I caught a listen of Dave Couse on Today FM. He is a DJ I like listening to, and I rebuke myself for not listening to him more often. I like his show because his set list is completely eclectic and random. I can never be sure what is coming up next, but invariably it is a track I like. He plays much that I am delighted to hear: current tracks I'm hearing for the first time, classic tracks that I love and enjoy listening to again, and some classic golden oldies I have never heard before.

That night he played one of the best tracks I have heard in a very long time. It was a track I don't remember having heard before: 'The Unguarded Moment' by 1980s Australian rock band The Church.

And I don't think I'm being facetious (though I may be overthinking it), but as the song faded out I had a lightbulb thought: DJs are actually like librarians. Or to be more precise, night-time DJs are a lot like librarians.

  • We both curate material.
  • We introduce people to the 'best' of what's out there in a particular field.
  • We try to make that field comprehensible and manageable to our listener / user.
  • We do the work of sorting through the noise so that our listeners / users don't have to.
  • DJs help people discover music. Librarians help people locate information.
  • We try to educate people - our goal ought to be "we provide the music or information and people learn." We provide the building blocks and they start from there.
  • We both help people navigate our particular sea - whether that be music or information.

I think back to the DJs I listened to as part of my teenage music education, DJs such as John Peel and Dave Fanning. I think back to how they sorted through all the dross - and it being the 80s, there was a lot of it - and pointed us towards the gems. I remember how they would advocate for particular artists and genres and say you really need to listen to these guys. They had access to all this music that they would listen to, make decisions on, and then serve up to us. We as fans would never have been able to do that. We could read our chosen fave of the music mag triad Melody Maker, NME or Sounds, but we didn't have a place to listen to the music the journalists wrote about. This is where Dave and John came in. They had access to the goods and they opened that access to us. On their advice I would spend my hard-earned money and buy the record. They made me the music fan I am today.

I think back to when I used my local library as a child and how the librarians would guide me towards particular books, genres and authors. And how this early grounding has guided my reading habits ever since. I think back to when I was an undergrad and postgrad and how college librarians would guide me through the info sphere. All helped make me the info literate / digitally literate person I am today.

And in today's streamed world of Spotify, Pandora, Deezer, Google Play Music and numerous other legal streaming options, we are drowning in music and in choice of music. We really need curated content. We need curators. And this is where DJs like our John Peels, Dave Fannings and Dave Couses come into play. They listen to the music. They decide what they like. They play it. We listen. And hopefully we learn, branch out and educate ourselves from there.

And in today's hyper-informational, multi-platformed world of subscription databases, catalogues, Google, repositories, social media sites et al., we are drowning in information. We really need curated content. We need librarians. We sort the information. We decide which sources are good and which are the best places to find information for particular needs. And we teach our users. And hopefully they learn, branch out and educate themselves from there.

So, yes, librarians and DJs are a lot alike...

19 May 2015

UKSG 2015: Experiences of the studentship bursary

Guest post by Saoirse Reynolds, Library Assistant, Maynooth University Library

I have been working in MU Library for 3 years now. My first post was as a postgraduate student, working with the Facilities Team – you can see a poster I presented at the Academic and Special Library Conference 2015 on MU Library e-prints. After getting my MA in Irish History from Maynooth University I took up a post as a Library Assistant, and shortly after that commenced the Masters in Information and Library Studies at Aberystwyth University by distance education. Earlier this year, I applied for a UKSG bursary to attend the annual conference in Glasgow and was thrilled to win one of the three student bursaries. This meant that in addition to having all the costs of attending the conference met, I also had a mentor at the conference.

Photo by Simon Williams: Bursary Winners receiving prizes from Sage & Springer Bursary Sponsors
Sunday evening, after orientating myself and unpacking, I met with my mentor Sarah Roughley, from the University of Liverpool. She gave me some tips and told me about her experiences of being a bursary winner the previous year. Later that evening I met the other winners; there were six of us in total, three students and three early career professionals. It was really great to meet everyone before the conference began, to hear where they were from and what they did. It made it a lot less daunting having other people in the same boat as me, knowing that we all had to get up on that stage together the following morning to receive our prize.

The first plenary session had a very interesting title: “The Four Straw Men of the Scholarpocalypse”. I studiously took notes on this and the second session; sometimes it was hard going, as I have only completed one module of my course so far and worked in a library for three years. However, I enjoyed the challenge, as it made me aware of the latest topics and issues in librarianship, and my new knowledge will hopefully help me in my studies and my career.

I liked being able to choose the breakout sessions and opted for topics which reflect my interests and are useful in my studies. “E-book Usage on a Global Scale” was the first breakout session I attended. It was interesting to hear different viewpoints and learn about usage patterns in different disciplines.

“Digital Preservation” was my choice for the second breakout session. The question of preserving digital content into the future wasn’t something I had thought about, but it is so necessary. It’s like having an ‘insurance policy’ on your resources, providing access if your archived content is lost or suddenly unavailable.

In the evening the prize winners were asked to help with the quiz in the Science Museum. We walked across the Clyde in the pouring rain to another beautiful building and found our stations, from where we helped with the quiz which ran over dinner. It was great interacting with the other delegates, even if it was hurrying them along with choosing a name for their table!

On Tuesday the first plenary I attended was “Innovation in non-fiction content” presented by Catherine Allen from Touchpress. This was a really exciting talk about apps in which she demonstrated different types, including a Disney animated app which is a traditional book with interactive elements. We were also told how the animations were created. The interactive book was really beautiful and captured my imagination. Interactive books are a great way of engaging people with reading and learning.

The breakout session “Screen vs. paper – what is the difference for reading and learning?” yielded an interesting discussion. Tests have been carried out comparing learning with paper to learning with screen. Initial results showed those learning from paper performed better in these tests than those using the screen. After a number of tests those who were using the screen got similar results to those with paper. This suggests that as people get used to screen they will be able to use it to learn effectively. This is true for me – as an undergraduate student I used to print out articles but now I find that I can learn just as well with a screen. It is all about practice and change in perception. Using the interactive features (highlighting, making notes, linking to videos etc.) can actually aid learning. I still see a preference from users for printed rather than e-books in my work here in MU, but that will change with developments in information literacy and digital literacy.

The conference dinner was in Merchant Square, which I thought was a great venue as it is a covered mall with numerous restaurants and a central area. We were greeted by fire dancers, bagpipes and a red carpet, which really added to the atmosphere. There was a choice of restaurants, and many of the delegates took part in the ceilidh and the disco afterwards.

On the third and final day the last session was “Using LinkedIn for job hunting, career development and professional networking.” I found this professional development talk very useful. Like so many others, I have a LinkedIn account but hadn’t been using it to its full potential. Now I can use it more effectively and I am busy updating my profile.

Overall I found the experience of attending UKSG really beneficial. I will certainly use what I have learned in my day-to-day work, and already I feel like I have a better grasp of the issues facing libraries. I'm also very pleased to have this award to enhance my CV; I know the library world is competitive, and anything I can do to enhance job applications in the future is a real plus. I'd like to do one of my written assignments for my Master's degree on e-books. I fully participated in all of the networking opportunities and enjoyed going around to the different stalls, chatting to the vendors, entering competitions and getting free stuff!

Thanks to UKSG for awarding me this terrific bursary and to MU Library for the encouragement and time to attend.

14 May 2015

Open access & research data management: Horizon 2020 and beyond (UCC, 14th-15th April 2015)

Guest post by Maura Flynn, Breeda Herlihy and Ronan Madden, all UCC Library

Speakers and Organisers day 1. Picture courtesy of Richard Bradfield

This two-day training event was held in UCC in April, and was jointly hosted by UCC Library, UCC Research Support Services, Teagasc and the Repository Network of Ireland (RNI). The first of its kind to be held in Ireland, the event introduced attendees to the concepts of open research and research data management within the context of Horizon 2020. With speakers from the U.K. and Ireland sharing best practice, the event was an invaluable learning experience, and timely in the context of Horizon 2020’s Open Data Pilot.

To stage the event, the project team was successful in securing funding from the FP7-funded FOSTER project. FOSTER (Facilitate Open Science Training for European Research) is a two-year EU-funded project which aims to promote and ‘foster’ open science in research, to optimise research visibility and impact, and to support the adoption of EU open access policies.

Research data management (RDM) generally refers to the processes of organising, structuring, storing, and preserving the data used or generated during a research project. Numerous factors are now influencing the drive for open data, but chief among them is the influence of funders seeking transparency and a demonstration of the wider impact of the research they are financing. In Horizon 2020 a limited pilot on open access to research data is being implemented, with participating projects required to develop a Data Management Plan (DMP). There is an expectation that this trend of research funding programmes requiring data management plans is set to continue, as has been happening in the U.K.

In addition to compliance, RDM benefits researchers and institutions through the potential for re-use of data, and the opportunity to demonstrate research excellence. Many institutions are taking a lead by establishing research data policies and seeking to coordinate cross-campus approaches to gathering and maintaining data. This often involves research support services, IT teams, libraries and researchers working together. However, RDM has been described by Cox et al. (2014) as a ‘wicked problem’, complex and difficult to define, requiring solutions that are flexible and pragmatic. RDM is still in the early stages at many Irish institutions, and this event offered an opportunity to learn from others and to draw on the expertise of those who are further down the road. It was a chance also for making connections within and across institutions in Ireland and the U.K.

David O'Connell opening Day 1. Picture courtesy of Richard Bradfield

Day 1: ‘Open research in H2020: how to increase your chances of success’

The first day was targeted at researchers and small and medium enterprises interested in developing Horizon 2020 proposals. David O’Connell, Director of Research Support Services, UCC, provided the opening remarks, mentioning that as former chief editor of ‘Nature Reviews Microbiology’ he has had a long interest in open access publishing, and now a strong interest in the application of open access to research data.

The project team were lucky to have support and guidance from Martin Donnelly of the Digital Curation Centre (DCC). The DCC is a UK-based, world-leading centre of expertise in digital information curation, providing expert advice to higher education. Martin played an invaluable advisory role in the run-up to the event. Although he was unable to attend for unavoidable reasons, he provided four recorded presentations for the event at short notice. Day 1 began with his first presentation: an overview of Open Science and Open Data in Horizon 2020. He started by providing a background to open access and RDM, looking back at open access in FP7, before looking at open science in Horizon 2020 and the specifics of the open data pilot.

Joe Doyle, Intellectual Property Manager, Enterprise Ireland, provided a background to how intellectual property relates to both innovation and collaboration, describing IP as a bridge between the creative and the commercial. Open access can generate greater collaboration, but it is important to acknowledge that what is free to access is not necessarily free to use without limits. Open access and patents can work hand-in-hand, as patents are about disclosing data. While they can’t be copied, much can be learned from previous innovations.

Jonathan Tedds, Senior Research Fellow, Department of Health Sciences, University of Leicester, spoke of RDM from the perspective of researchers, giving examples of projects he has been involved in and issues encountered. He originally became convinced of the benefits of data sharing through his work as an astronomer, when he would ‘stitch together’ data he had generated for re-use. He cited the Royal Society (2012) report ‘Science as Open Enterprise’, which suggests that publishing articles without making data available is a form of scientific malpractice, and he noted that the number of papers based upon reuse of archived observations now exceeds those based on the use described in the original proposal. However, researchers in many fields, especially those involved in smaller projects, need help to comply with funder requirements. He emphasised the iterative nature of research and data management planning, and the challenge of sustaining research software, not just the underlying data. The HALOGEN project was a good example of combining different kinds of data from different fields, achieved by creating a central scalable database infrastructure to support the project. The BRISSkit project involved developing software to link applications to create a data warehouse of anonymised (consented) patient data. It brings bedside patient data to university researchers to be used for new biomedical research.

Group shot. Picture courtesy of Richard Bradfield


In the afternoon, Martin Donnelly’s second presentation focussed on data management plans (DMPs), providing an overview of these and their benefits. He went on to outline various data related policies and requirements in Europe and elsewhere, plus the supports and resources that are available to those writing DMPs, including those provided by the DCC. He demonstrated the DMPonline tool, which was created by the DCC, and can be customised by institutions. It can be used by researchers at the point of application and throughout the research project, and can be used for sharing and co-writing plans.

Brian Clayton, Research Cloud Service Manager, UCC, spoke of RDM as a work-in-progress at UCC. He described current UCC research cloud paid services which include data storage and compute services. The service has expanded to offer elements of data management, and a draft RDM policy is currently awaiting University committee approval. The aspiration is that RDM services can be provided at zero-cost to the researcher. Many outstanding issues will need to be explored, particularly in regard to data sharing, metadata, and who will carry out the various roles within the University.

Peter Mooney, Environmental Research Scientist, Environmental Protection Agency, looked back at over a decade of RDM at the EPA. As far back as 2004, the EPA made a commitment to the researchers it was funding that it would preserve their data free of charge and be responsible for long-term management and infrastructure. The SAFER data archive was launched in 2006, linking data to papers and reports. Collaboration with researchers has been key to its success and development, and data reporting is now an essential element of the reporting process on EPA-funded projects. He outlined some of the lessons learned, suggesting that open data is often misunderstood by researchers, and that metadata is often a mystery, or seen as a burden. Modelling data correctly at the start of a project increases usability, and researchers would benefit from understanding the basics of relational databases; as an example, he cited over-reliance on Excel rather than databases. He also cautioned against long embargo periods, which only serve to make data lose relevance.

The final speaker of the day was Evelyn Flanagan, Data Manager at the UCC Clinical Research Facility, who spoke of her role as a data manager in clinical trials. She discussed how core principles of data management are a fundamental element of good clinical practice (GCP), before providing a thorough description of the ‘data sequence’ from protocol design right through to the report writing stage. She examined each stage of the process, including database design for case report forms (CRF), the importance of good metadata, data collection and data entry procedures. Like the previous speakers she stressed the value of DMPs at the early stages of a project, and how they underpin good practice at each stage of the data sequence.


John Fitzgerald opening Day 2. Picture courtesy of Richard Bradfield
Day 2: ‘Research data management – institutional needs, targets and training’

The second day of the event was aimed at institutional support staff who can provide support to researchers engaging with RDM. Many speakers came from the UK where the policies of the UK research funders (RCUK) require researchers to engage with RDM. In Ireland, the open data pilot in Horizon 2020 is the first signal that research performing institutions here will have to address RDM in the coming years.

John FitzGerald, University Librarian and Head of Information Services, UCC, gave the opening remarks, mentioning how RDM will ‘challenge us as professionals with broadly curatorial problems’ as we seek to ‘manage the ecosystem in which data exists’. The first invited speaker of the day, Martin Donnelly of the DCC, provided a clear overview of RDM for support staff. Although unable to attend the event in person, Martin provided a recorded presentation which was very well received by those present.

Stuart McDonald, Research Data Management Service Co-ordinator, University of Edinburgh, spoke about Edinburgh's comprehensive approach to RDM services, which began early in 2008 with a JISC-funded pilot project. There were some audible gasps when he outlined the resourcing and staffing of the RDM programme at Edinburgh, where £1.2 million has been allocated via internal funding. Aside from the resourcing, it was also illuminating to see how Edinburgh approaches data management before, during and after research. They are now investigating how to ensure that systems used for data management do not duplicate the effort required of researchers - something researchers will undoubtedly be happy to hear about.

David McElroy, Research Services Librarian at the University of East London, demonstrated how they have used Eprints, their existing institutional repository software, to create a new data repository, Data.uel. Publications archived in their open access publications repository are then linked to the underlying data archived in the data repository, which of course ensures traceability and reproducibility of research. It was really useful to see the repository development path taken, from decision making and planning, through functional and metadata specifications, right through to mock-ups and branding.

The third speaker, Jonathan Greer, highlighted how Queen's University Belfast is taking an ‘incremental approach’ to RDM services as it seeks to align the plans and policies of the institution with the practice of its researchers. He offered some consolation to those uninitiated in RDM services by relaying how challenging it can be to roll out a service in such a complex area.

In the afternoon, Gareth Cole, Research Data Manager at Loughborough University and formerly of the University of Exeter, outlined how both university libraries approached the delivery of training and support. This was very useful as it became clear throughout the day that there is no one size fits all approach to RDM services.

Julia Barrett, Research Services Manager, UCD Library, summarised how she has shaped their research services to facilitate effective data management and sharing in UCD. It was encouraging to see the potential for a range of services which the library can offer and Julia has categorised these into ‘Discover’; ‘Create / Analyse’; ‘Manage’ and ‘Disseminate / Publish’ services.

Louise Farragher, Information Specialist, Health Research Board, introduced the PASTEUR4OA project, which seeks to align open access policies across Europe. While an earlier question from the audience queried the effectiveness of having so many policies, Louise was quick to reinforce the message that policy is a good starting point for open access adoption.

Finally, Dermot Frost of Research IT Services at Trinity College Dublin gave an engaging account of his experiences of developing the technical infrastructure for the Digital Repository of Ireland (DRI). The DRI is a ‘green-field repository’, to be launched publicly in June 2015 at DPASSH, and is Ireland’s trusted repository for humanities and social sciences data. The DRI has had a large interdisciplinary project team, and Dermot stressed that while the language barrier (tech vs. non-tech) can be challenging, having different people on board was very useful for the exchange of ideas.

Q&A featuring Day 2 speakers. Picture courtesy of Richard Bradfield

Overall take-home messages

1. Challenging: developing RDM services can be challenging due to the complexity and variety of research data. However, it is possible to learn from established services at other institutions. All speakers were very open to sharing their own experiences, tools and resources for late adopters of RDM services. All highlighted the well-established services available in the UK e.g. Digital Curation Centre as well as various online tools and resources which can be reused.

2. Planning: it is essential to plan out a roadmap after first establishing an understanding of the needs of the stakeholders. Stuart McDonald discussed the Data Audit Framework used at Edinburgh to identify research data assets and their management before developing an RDM policy and service. Dermot Frost mentioned that a ‘repository needs data to justify its existence’, and so the DRI set up a stakeholder advisory group to ensure that depositors were involved from the planning stages.

3. Cross campus collaboration required: due to the complexity of RDM, the different types of stakeholders involved and emerging funder requirements, coordination across the institution is essential for an effective approach to service development.

4. Planning at the research project level: the importance of DMPs in the early stages of projects was emphasised by all Day 1 speakers. They ensure good data management practice at each stage of the research process.

References

Cox, A.M., Pinfield, S., & Smith, J. (2014). Moving a brick building: UK libraries coping with research data management as a ‘wicked’ problem. Journal of Librarianship and Information Science, 46(4), 299-316. doi:10.1177/0961000614533717

Royal Society. (2012). Science as Open Enterprise. Retrieved from https://royalsociety.org/policy/projects/science-public-enterprise/Report/

8 May 2015

Digital Preservation: Not Just Clouds & Unicorns (ANLTC – NLI 29th April – 1st May 2015 – Report)

Guest post by Elaine Harrington, Special Collections Librarian, UCC Library

I had previously attended a one-day seminar run by the DPC on Getting Started in Digital Preservation. This three-day course, run by the DPTP, is an intermediate course for practitioners of digital preservation. Over the course, Ed Pinsent, a digital archivist, and Steph Taylor, a senior consultant, both with the University of London Computer Centre (ULCC), showed us tools, methods and strategies for engaging with digital preservation. We viewed practical examples, examined case studies and challenging and complex objects, and participated in group exercises to better understand what digital preservation is.

Over the three days there were moments when I thought I was in a different universe where acronyms ruled (AIP, SIP, DIP, JHOVE, PLATO, SCAPE, SCOUT, METS, MODS, TDR), or on a Star Wars set (constant references to DROID), or looking at antique cars (the parallel situation of finding parts to replace wear and tear in cars or older technologies). By the end of the third day I was beginning to return to Earth.

The course was broken into modules, each of which lasted approximately 45 minutes. Although the course was intensive, there was plenty of opportunity to ask questions, and Ed and Steph included plenty of examples. At certain points, for example when we were discussing ‘migration’ as a method of digital preservation, we noted that ‘migration’ would also feature in file formats and as part of a ‘Migration Strategy Exercise’.

Due to the sheer volume of concepts and information covered over the three days, it is impossible to write about all the modules.

What is Digital Preservation?
According to the National Archives at Kew, a digital preservation policy is "the mandate for an archive to support the preservation of digital records through a structured and managed digital preservation strategy." In practical terms the following are needed for digital preservation (a minimal ingest sketch follows the list):
  • a database to manage preservation and store metadata
  • tools to perform ingest functions
  • a place to store digital objects
  • an access or delivery platform
  • rules, workflows, policies
  • an IT infrastructure
  • people and skills
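
As a very rough illustration of how a few of these pieces fit together, here is a minimal ingest sketch in Python. It is a sketch only, under invented assumptions: the directory names and metadata fields are hypothetical, and a real archive would use dedicated ingest and format-identification tools (such as DROID) rather than anything this simple.

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

STORE = Path("archive_store")   # hypothetical "place to store digital objects"
CATALOGUE = Path("catalogue")   # hypothetical stand-in for a preservation database

def ingest(source: Path) -> dict:
    """Copy an object into storage and record minimal preservation metadata."""
    STORE.mkdir(exist_ok=True)
    CATALOGUE.mkdir(exist_ok=True)
    checksum = hashlib.sha256(source.read_bytes()).hexdigest()
    shutil.copy2(source, STORE / source.name)       # store the digital object
    record = {                                      # minimal metadata record
        "original_name": source.name,
        "sha256": checksum,
        "ingested": datetime.now(timezone.utc).isoformat(),
    }
    (CATALOGUE / f"{source.name}.json").write_text(json.dumps(record, indent=2))
    return record
```

Everything else on the list, from access platforms to policies, people and skills, sits around steps like this one.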

OAIS Model
Ed and Steph used the OAIS Model and its terminology throughout the course to illuminate the digital preservation process.

Courtesy of University of London Computer Centre



Day 1
On the first day we examined what the OAIS Model is and some of the implications of using it. This was useful, as the model would be used in some of the group exercises over the three days and we would have the appropriate terminology. This section was followed by modules on methods of digital preservation, with an exercise; significant properties and the Performance Model; file formats: their structure and treatment; and metadata for practitioners. Significant properties varied depending on the file type: 16 significant properties for moving images compared to 6 for audio.

It was clear from the exercise on digital preservation methods that while we understood what was being said to us, it was another matter entirely to be given a method and to discuss its pros and cons. Approaches included migration, emulation and technology preservation. The group I was in was given the bit-level-only approach, which focuses on maintaining only the 1s and 0s of the code; a sketch of what this means in practice follows the photo below.

Courtesy of Elaine Harrington
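
To make the bit-level-only approach concrete, here is a minimal sketch, assuming Python's standard library. The whole strategy rests on recording a checksum (a 'fixity value') when the object is ingested and periodically recomputing it, so that any change to the 1s and 0s is detected; the function names here are my own, not from the course.

```python
import hashlib
from pathlib import Path

def fixity(path: Path) -> str:
    """Checksum of the raw bitstream, read in chunks to cope with large files."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit(path: Path, recorded_checksum: str) -> bool:
    """A mismatch means the bits have silently changed (corruption, bit rot)
    and the object should be restored from a replica."""
    return fixity(path) == recorded_checksum
```

Note what this approach does not do: it keeps the bits intact but says nothing about whether any future software will still be able to render them.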


It was a little bit worrying when Ed said that someone (not on the course!) thought a way to preserve technology was to dip a laptop in Perspex and then chip it off in 20 years! If the computer specs were known and 3D printers still exist in 20 years’ time perhaps it would be possible to 3D print any parts that would be needed to fix a physical piece of technology.

Real-world examples were used to explain each module. For example, DIOSCURI was used to show how emulation works: the National Library & Archives of the Netherlands use DIOSCURI to run old operating systems such as DOS and old software such as WordPerfect 5.1.

Courtesy of University of London Computer Centre


Ed and Steph also mentioned Atari systems and Pac-Man. The Centre for Computer History in Cambridge was established to tell the story of the Information Age through exploring the historical, social and cultural impact of developments in personal computing.

Courtesy of the Centre for Computer History, Cambridge.


Day 2
On the second day we covered XML for digital preservation; tools for ingest; how to do migration including an exercise; METS; PREMIS and an exercise; making a business case and an exercise; assessment, audit and TDR.

XML is Extensible Markup Language. Like HTML, XML uses tags, but whereas HTML describes presentation, XML describes content. There may be:
  • an XML Schema, which specifies the elements and tags you will use. In a digital preservation plan the schema being used must be declared.
  • an XML Stylesheet, which displays the underlying XML and renders the text in a useful way for readers.
  • an XML Document, which is the document you are authoring and which describes the object.

XML is a preservable text format in that it is open, documented, and not tied to a vendor or platform. It is good both for storing and for conveying metadata. There are different types of metadata (descriptive, technical, rights, structural and preservation), and XML can be used to describe them all. The Library of Congress uses XML to represent its metadata records in MARC, MODS and METS. XML can enclose a digital object and be used to build an AIP (see OAIS Model). XML allows for interoperability.
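
As a small illustration of XML carrying descriptive metadata, the sketch below uses Python's standard library to build a record loosely modelled on MODS. The namespace URI is the Library of Congress MODS namespace, but the record content is invented for illustration and is not a complete or validated MODS document.

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"   # Library of Congress MODS namespace
ET.register_namespace("mods", MODS_NS)

# A tiny, hypothetical descriptive record; a real AIP would declare and
# validate against the actual schema in use.
record = ET.Element(f"{{{MODS_NS}}}mods")
title_info = ET.SubElement(record, f"{{{MODS_NS}}}titleInfo")
ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = "Sample digitised painting"
ET.SubElement(record, f"{{{MODS_NS}}}typeOfResource").text = "still image"

print(ET.tostring(record, encoding="unicode"))
```

Because the result is plain, self-describing text, the same record remains readable regardless of vendor or platform, which is precisely why XML suits preservation metadata.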

Courtesy of University of London Computer Centre


XML & Migration
It is not enough to have digital preservation for the file alone; the metadata needs preserving too. The metadata may be stored separately from the object, held within a database, or embedded in the files requiring preservation. Metadata can describe the source file format and, when migrating, the new target file format, for example Word moving to PDF. Migration exercises are like Fight Club: there will always be losses. When migrating we have to decide what could be lost, what is an acceptable loss, what should not be lost (such as significant properties), and what choices need to be made so that only acceptable losses happen. Ed and Steph suggested doing very detailed use cases before migration.
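
Here is a minimal sketch of that 'acceptable losses' decision in code, assuming Python. The property extractor is a hypothetical stand-in for a real characterisation tool such as JHOVE, and the property names are invented.

```python
from pathlib import Path

def significant_properties(path: Path) -> dict:
    """Hypothetical placeholder: in practice a characterisation tool such as
    JHOVE would report the format-specific properties for this file."""
    return {"page_count": 12, "embedded_fonts": True, "colour_depth": 24}

def migration_losses(source: Path, target: Path, acceptable: set) -> dict:
    """Compare significant properties before and after migration, splitting
    the differences into acceptable and unacceptable losses."""
    before = significant_properties(source)
    after = significant_properties(target)
    lost = {name for name, value in before.items() if after.get(name) != value}
    return {
        "acceptable": lost & acceptable,
        "unacceptable": lost - acceptable,   # these should block sign-off
    }
```

Anything in the 'unacceptable' set means going back to the plan: either the migration route changes, or the list of acceptable losses is consciously revised.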

Day 3
On the third day we looked at a metadata exercise; email preservation; social media: communicating with the user community; social media: user community and engagement; understanding legal issues for preservation and access; preservation of databases; and managed storage.

Metadata Exercise
On paper we were shown a painting and its museum cataloguing record. The painting had been digitised and metadata was present, but there were gaps in it; we had to identify the gaps and determine what preservation metadata was also required. This exercise highlighted that, no matter the source of the metadata, some metadata will not be present.

Courtesy of Elaine Harrington


Cloud Storage
Cloud storage providers should meet ISO standards and should care about auditing standards. Discussion during an exercise showed that institutions that use cloud storage should, at the very least, limit their holdings to within the EU. If material is held in the cloud and moved to America, then it is subject to different copyright laws and different data protection rules; copyright law has not yet caught up with digital content. Indeed, if a project for digital preservation has EU funding then the storage, cloud or otherwise, may need to remain in the EU. Cloud companies don't say how long objects will be stored for, and considering how fast technology changes (who remembers VHS or Betamax?), will the objects require a new digital preservation storage facility within a very short span of time? Of equal concern was cost: it may require little money to put an object into cloud storage, but it could take a long time and much more money to extract it again. If an object is requested, will it pass through multiple countries before reaching its destination? This may happen, as cloud providers move data on a regular basis. We were advised to always read the fine print!

Conclusion
There is much more to digital preservation than placing objects in cloud storage and all the processes and details are real and not imaginary like unicorns. A good deal of discussion is required no matter which method of digital preservation is chosen, no matter which method of storage for digital preservation is chosen and no matter which tools are used during the process. It was clear that we should all be engaging in digital preservation and that we should be engaging right now.

Thanks to Ed Pinsent and Steph Taylor who shared their experiences and expertise so freely. The slides are available through Creative Commons and ULCC. Thanks also to ANLTC and NLI for organising and hosting the event.

7 May 2015

21st Century Libraries

I recently received a scholarship to attend the Massachusetts Library Association annual conference. In my application, I was asked to imagine that I was looking through a powerful telescope at a distant planet inhabited by an advanced alien race and describe their libraries. Here’s part of what I wrote:

I see a great diversity of libraries. Tiny libraries serving communities of less than a thousand and giant libraries serving nations of millions. Virtual libraries serving billions. I see libraries that are silent and libraries that are loud, and both hum with activity. I see libraries that are housed in treasured landmarks, their buildings telling as many stories as the materials they contain. And other libraries, in plain, inexpensive buildings whose patrons come daily, working to re-write their own stories.

Some libraries have laboratories where patrons experiment and build and break and fix and learn by doing rather than just by reading. Other libraries do not. Some libraries have books. Other libraries do not.

The libraries I see – including the library tucked into the corner of a subway station and the library taking up ten city blocks – are diverse because they match the needs of their communities, and their communities are diverse.

I intended this piece as a mild criticism of the idea most commonly expressed via the term “21st century library.” I have two objections to this term. The first - which I did not discuss in my application - has to do with precision of language: all libraries today are 21st century libraries, and have been for 15 years. When people use the term “21st century library,” they mean something more than “a library that exists between the years 2000 and 2099.” I think they mean a modern library, or a successful library, or a top library. We need a new term. Give some suggestions in the comment section.

My second objection has to do with the meaning behind “21st century library.” It seems to me that use of this term suggests a homogeneous understanding of libraries. Some examples even read as if the author is suggesting that there is a single 21st century library:

"In the age of e-books and online content, what's the role of the 21st century library?" (Source - emphasis mine)

"[T]he 21st century library is competing with numerous web-based resources." (Source - emphasis mine)

“[This book] provides an up-to-date picture of what the public library is today… the library [has] reinvented and repositioned itself [since the latter half of the 20th century].” (Source - emphasis mine)

Now certainly, the authors of these examples do not think that there is literally only one library (although, with interlibrary loan, participating libraries do function, in a sense, as a single global library - but that's another blog post).

But I think it’s fair to suggest that when we talk about the 21st century library (or, less objectionably, about 21st century libraries) we are implying that there are certain traits that successful libraries have: flexibility, simplicity, patron-centeredness, collaborativeness, technological sophistication, etc.

I disagree with this implication. There are “21st century libraries” that are struggling. And there are “20th century libraries” (perhaps even “19th century libraries”) that are successful.

I have a single criterion for successful libraries: a successful library must meet the needs of its community. That’s it. For some libraries, that may mean maker stations, online courses, and circulating ukuleles. For others, it may mean a silent room filled with books. And that’s okay. Our world, like the alien world I explored in my application, is diverse, and so are its libraries.