22 Dec 2013

Popup Heritage

Guest post by Roy Murray (@trimroy)

The thing I like about being an information professional is that you never know what you might be working on. You never know what connections are being made. I was thinking about it earlier this month as I sat in a room brainstorming with an archaeologist and an archivist about the best way to mount an exhibition on Snow White and the Seven Dwarfs. I was also thinking about it this weekend as I watched my Twitter feed during the Solstice at Newgrange. How are these things connected?

I first came across the term Popup while working in theatre. I did a lot of work with site-specific companies, so the idea of transforming spaces is something that I am familiar with. Later on, I came across the work of Nina Simon and found her ideas about community museums very refreshing. When IWTN announced that they were holding a workshop about popups to bridge the gap between enthusiasm and information I signed up. 

The keynote speaker was Pat Cooke from the UCD School of Cultural Policy, who spoke about the history of cultural heritage in Ireland. At the heart of it, he suggested, is our awareness that time is speeding up and we want to hold things in our gaze for a while. He used museums as an example of institutions which are designed for temporary exhibitions. While the concept of popup museums was questioned later on by Ciara Farrell from Creative Limerick, it did open up discussion about reclaiming the word heritage from the Irish Tourist Board and what Brian Crowley from the Pearse Museum calls "museumification".

Central to Pat's talk was the idea that cultural heritage is created by communities, not institutions, and this contrast ran throughout the conference. Popups tend to exhibit user-generated content rather than more authoritative information. As a personal example, Pat used his childhood memory of a man in Kilkenny who spent all his time following behind ploughing tractors with his head down. Years later, he discovered that this man was an amateur archaeologist who specialised in finding new sites for the National Museum. When it comes to information, Pat suggested, the wisdom of the community can often give a more authentic view than that of experts. Popup exhibitions always raise issues about professional curation vs crowdsourcing that spark debate amongst the GLAM community. On the one hand you have the likes of the Folklore Commission, who generate thousands of objective interviews, while on the other you have StoryCorps, who use relatives as interviewers to create emotionally charged stories.

This got me thinking about the builders of Newgrange during the Solstice weekend. Were they an elite group of thought leaders with a very precise agenda, or were they a happy band of communal earth-lovers following their dreams?

With the importance of physical space to exhibitions, design was always going to be an important element of this conference. Barry Sheehan from DIT explained that good design goes unnoticed, and is often heavily dependent on a man with a van. In their case, they build their own portable display units which are adaptable to different spaces. Like any sort of blank canvas, layout is important, and this became apparent in the previously mentioned interactive workshop created by Dominique Bouchard from the Hunt Museum. She used the analogy of chapters in a book to explain how design helps audiences follow the flow of an exhibition. From the examples in Dominique's workshop, Popup brings a completely different tone to exhibitions: while a lot of Irish heritage is grim and often stark, popup is more playful. There were lots of questions about words (16pt sans-serif typeface, and not a lot of it). All of the speakers stressed the importance of not writing for your peers, while Pat Cooke remarked upon the use of photography to capture the heart of communities.

What does design tell us about Newgrange? It has no settlement features, so it would qualify as a derelict space, although it is a bit early for retail. The external design is not a perfect circle because it is flattened at the entrance. Is this to allow crowds to gather? The internal layout is certainly not crowd friendly, so access is clearly an issue. Art pieces are positioned at specific points, although some of them appear to be facing the wrong way. Lighting is VERY exact and the space has been re-used. Soil samples taken from the mound indicate that the landscape here was not farmed extensively when it was built. It was a place that people passed through without staying.

One of the areas we explored at the conference that is relevant to Popup exhibitions is storytelling. Popups tend to create multiple narratives, usually around objects. Dominique spoke about the use of narrative layers to access collections. Brian Crowley had already suggested telling stories by theme, chronology or from a modular perspective. Unlike other speakers, Dominique suggested that we start with the object first, rather than design. While she argued that people do not generally like exhibitions about "stuff", she remarked that they do like stories about people told through "stuff". The Tenement Experience was mentioned by quite a few speakers as a model in this regard: it had low-level interpretation and different access points for different users. The core idea, according to Dominique, is to use objects to tell stories about the past and present. The strength of Popup is that it has a history of telling "other" stories that are often neglected by institutions who want to tell the grand narratives. Popup caters to niche stories, as seen in the likes of MONA and the Museum of Queer History. This ties in with what Pat Cooke was saying about immigration in Ireland and how it adds a different layer to the story. We can sometimes have too much pride in our own heritage, he suggested - so much so that we see it through one lens. This theme was taken up by Community Development Officer Denise Feeney, who stressed community buy-in on popup projects and explained how to go about accessing resources to enable that.

What stories does Newgrange tell us? Why did its builders deliberately source their stones from so far away? How does the artwork reach out to other communities along the Western Seaboard?

Engagement was another area stressed by speakers, most notably Bairbre-Ann Hawkins from the Butler Gallery, who spoke about her early experiences in a cultural space and how unmoved she was by it. This helped inform the gallery's policy on extended engagement. Jenny Siung from the Chester Beatty Library explained the importance of their new programme for teenagers. While Brian Crowley was blunt about overdosing on art exhibitions and how it drives you to the coffee shop, David Teenan from the Source Arts Festival argued that youth engagement was essential for creating programme content for young people. Again, the stress here was on being niche and eclectic. As Brian remarked, "We don't eat off the tourist menu when we go on holidays. Why would we expect tourists coming here to do the same with our culture?" From another perspective, Ciara from Creative Limerick spoke about the practicalities of artists engaging with business owners (who puts out the rubbish, turns out the lights, pays the bills?). Marie McMahon from South Tipp Museum shared her experience from a recent community heritage exhibition which involved the ICA and fishermen along the River Suir. A speaker from the Abhainn Rí Festival in Callan spoke about their concept of using the whole village as a temporary participatory space.

Similarly, Newgrange has engaged different groups over time and been used for different purposes. Along with its astronomical function, we also find mortuary activity. There is evidence of feasting there from the Bronze Age, and deposits which suggest it was a pilgrimage site after the Iron Age. Today it attracts many different groups - archaeologists, new-agers, artists, teenagers and local farmers - all of them bringing their own interpretations and looking at different aspects of it.

Popup clearly covers a lot of areas and borrows from many disciplines. For an information professional, it is the perfect place to try out skills. Popups can be constrained by size (e.g. yard sales) or by time (Granby Park), and they can range from the traditional (a library art exhibition) to the more outlandish (Burning Man). One speaker referred to popup museums as "a little bit of old stuff, followed by a coffee". Yet could they even incorporate monuments millennia old? They all ask you the same question that Brian Crowley asked: what 5 objects would represent the story of your life?

Adair, B., Filene, B. and Koloski, L. (2011) Letting Go? Sharing Historical Authority in a User-Generated World. Pew Center for Arts & Heritage.
Simon, N. (2010) The Participatory Museum. Museum 2.0.

Image by gpoo: http://www.flickr.com/photos/gpoo/2412677150/

20 Dec 2013

Repository Network Ireland (RNI) - TeachMeet Event Summary / 25th Oct. 2013

Guest post by Sarah Kelly, Dublin Business School

The newly formed Repository Network Ireland (RNI) – created by Aoife Lawton (HSE), Máire Caffrey (Teagasc) and Stephanie Ronan (Irish Marine Institute) – was born of the founders’ experiences of getting their repositories harvested by the national institutional repository, RIAN. They found that there was a gap in knowledge and support for smaller institutional repositories that could benefit from the creation of a new networking forum where repository managers, librarians and information professionals could meet to share information. With this in mind, RNI extended an open invitation to their first ‘TeachMeet’ in the Long Room Hub (Trinity College Dublin) on the 25th of October.

After a welcome from the Director of the Long Room Hub, Dr. Jurgen Barkhoff, and a brief introduction by Máire Caffrey, the 7-minute presentations got underway.

The DRI is an interactive national repository for contemporary, historical, social and cultural data in Ireland. It provides a central hub for data from several Irish institutions, in order to link and preserve the data and make it more accessible. The DRI can also be used as an educational resource for students and the general public, supporting Open Access (to at least the metadata). Not only can the DRI preserve data; it can also create policies and guidelines for best practice in the field of digitisation.

A major concern for the DRI is sustainability, particularly as the organisation relies on government funding through the HEA. Currently, funding has been secured until 2019, but a contingency plan is in progress for when funding ends. The DRI built the repository from the ground up through networking and community engagement, with a series of qualitative interviews with national institutions to gauge their approaches to digitisation. Reports and publications are available here.

Partnerships have been built internationally as well as nationally, with involvement in DARIAH, Europeana, Decipher, ALLEA and many others. The DRI and INSIGHT will be hosting the third plenary of the Research Data Alliance in Dublin in March 2014, which is a major international research data event. For more information on the resources the DRI provide, please see their website.

UCC’s Institutional Repository CORA was set up by the Library to house and showcase research undertaken in UCC. Previously, theses were submitted manually in bound print format and processed by the library, with reference access only. It was decided that the online submission of e-theses through CORA would provide greater visibility, access and impact. The pilot project for online submission of theses began in 2009, and has progressed to online-only submission in September 2013.

The project hit an unexpected delay in 2012, when the Academic Council rejected the mandate with a recommendation that an opt-out process be incorporated. This may have been due to concerns about copyright and publication from certain disciplines in the university. In any case, building an opt-out process involved considerable customisation and the creation of two separate workflows.

For the opt-in process, the workflow runs: the student registers with CORA – submits to supervisor – work is graded – sent to Graduate Studies Office – approved – sent to Library – loaded up to CORA. The student uploads the abstract and thesis. Metadata is imported from CORA to the library catalogue. (CORA uses DSpace as a platform, which uses Dublin Core for its metadata; the library system uses MARC records.)

In the opt-out system, only the abstract and metadata are submitted to the library, and the supervisor step is skipped.

So far, the system has been successful, and allows greater access to theses for students and researchers at UCC. Full text items on CORA are harvested by RIAN.

In this presentation, Aoife Lawton gave some helpful advice on setting up a repository in the unique form of a baking recipe!

The HSE’s repository LENUS was established to store health-related reports, research and publications in order to provide a centralised knowledge base for medical researchers and clinical practitioners. Ideally, to set up a good repository, you should have a team consisting of a repository manager, at least two qualified librarians on the project team, and a clinical champion to launch and promote the repository. Remember the SPARC method – Scholarly, Perpetual, Institutionally defined, Interoperable and Open Access. The project manager should establish a clear vision and mission statement, strategic plan and content criteria policy. The content should be organised to mimic that of RIAN’s, to make the harvesting procedure smoother. Marketing is also very important – being Web 2.0 enabled (e.g. having a Twitter account and using other social media) can help to promote the repository. Tools such as Google Analytics are very useful to measure the progress of the repository. Finally, check that the repository is serving its users well by conducting surveys or focus groups on occasion.

Joseph Greene’s presentation focussed mainly on how to join RIAN, and started with two simple statements: get your repository harvestable, then get RIAN to harvest you!

To get your repository harvestable, it is advisable to:
  1. Use a recognisable software package such as DSpace, Eprints, Fedora or Digital Commons
  2. Plan your collections – think about how they will be organised. Look at how RIAN is currently structured to get an idea of how you should organise your own repository to match up.
  3. Plan metadata fields and consistently apply them.
  4. Plan what data goes in, and how it is entered e.g. keywords
  5. Remember that RIAN only takes full text and scholarly material
  6. Create a list of RIAN fields and pair them with your own (the mapping can be literal or programmatic). Excel can be useful for this purpose. Written mapping instructions can be translated into code fairly easily.
  7. Build an OAI-PMH crosswalk called rian_dc. You can use basic Java for DSpace or Perl for EPrints.
  8. Once you have completed the crosswalk to make your fields match RIAN’s format, Enovation Services (the company hosting RIAN’s site) can create an institution page on RIAN and perform a test harvest to see if it worked. If everything is in order, a full harvest will follow.
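The pairing of fields in step 6 can be sketched as a simple lookup table in code. Below is a minimal Python sketch of the idea; the field names on both sides are purely illustrative examples, not RIAN's actual field specification:

```python
# Hypothetical field crosswalk: local Dublin Core-style field names
# mapped to illustrative target names. Real RIAN mappings would come
# from the official field list mentioned in step 6.
FIELD_MAP = {
    "dc.contributor.author": "target.author",
    "dc.title": "target.title",
    "dc.date.issued": "target.date",
    "dc.identifier.uri": "target.link",
}

def crosswalk(record):
    """Translate a local metadata record into the target field names,
    silently dropping any fields that have no mapping."""
    return {FIELD_MAP[k]: v for k, v in record.items() if k in FIELD_MAP}

local_record = {
    "dc.title": "Sample thesis",
    "dc.contributor.author": "Murphy, A.",
    "dc.internal.note": "not harvested",  # unmapped, so dropped
}
print(crosswalk(local_record))
```

In a real OAI-PMH crosswalk (step 7) the same mapping logic would live inside the repository platform's crosswalk plugin rather than a standalone script.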
To get RIAN to harvest you, you will need to contact the chair of the Irish Universities Association Librarians’ Group (currently John Cox of NUIG). Much-needed help is at hand for new RIAN joiners from Joseph Greene (UCD), Sinead Keogh (UL) and Fran Callaghan (DCU). You can also consult Repositories Support Ireland at http://www.resupie.ie/moodle and join the mailing list for updates.

The Social Work department at CUH Temple Street was badly in need of a centralised, easy to access and easy to search repository to store research material and references. Unfortunately, they had no money to finance this, so the challenge was to find a way to set up a repository without any budget. Jane Burns volunteered to be the project manager, and recruited four Library and Information Studies volunteers to assist her. Work commenced on scoping and organising the material; not an easy task given the paper-based nature of the work. Classification schemes and file naming conventions were established. After some trial and error (often due to the old infrastructure of Temple Street), Zotero was identified as a suitable free resource. It had to be completely stand-alone due to privacy concerns, so there are strict limits on access to the Zotero system. Of necessity, it is limited to in-house access and sharing rather than full OA.

Despite having no budget, the team were able to create a fully functional in-house repository, with supporting documentation.

Yvonne Desmond of DIT encouraged us to ‘think outside the box’ and look at repositories as a way to engage students and faculty. The ARROW@DIT repository hosts a huge variety of material, including journals such as the Irish Journal of Religious Tourism and Pilgrimage and the Irish Journal of Academic Practice to name but two. Given that DIT has a strong culinary arts school, the Gastronomy Archive is a popular resource. DIT will be hosting the Dublin Gastronomy Symposium in 2014. Students are encouraged to publish their material on the repository as a step-up in their academic career, as well as being a means of preserving their work.

The focal point of the ARROW homepage is the Discipline Wheel – a bright and accessible icon designed to allow easy browsing of the collections on the repository. The wheel is intuitively designed, but is also accompanied by a YouTube tutorial explaining how to navigate it. A massive 613 disciplines are covered, divided into 10 main categories.

Gary Cullen represented the newly formed Connacht-Ulster Alliance, a strategic alliance between the Institutes of Technology in Letterkenny, Galway/Mayo and Sligo. Their ultimate goal is to be re-designated as a Technological University. A central repository will be part of this alliance, and Gary was hoping that the RNI TeachMeet would give him an opportunity to ask questions and seek advice on how to set up a repository from scratch.

St. Patrick’s College mainly concentrates on Education and the Humanities, so the material on the repository will consist mainly of publications and theses, and specialised projects in Art, Music, and the Irish Language (one example being a collection of photos of children’s school projects). In the initial stages of planning, a working group was set up with members of the Library, IT and Research. This group reports to a management committee within St. Patrick’s College. A staff survey was conducted to explore opinions on IRs. As was the case in UCC, staff voiced concerns about copyright and the impact on publication. There was a view that publishing on the repository could negatively impact small or societal publications. Quality control was also flagged as a potential issue. The survey highlighted varying levels of engagement and knowledge on the subject of repositories and OA.

After much consideration, St. Patrick’s College have opted for Discovery Garden, created by the University of Prince Edward Island's Robertson Library specifically for hosting institutional repositories. It’s an Islandora system: built on Fedora with a Drupal front end, open-source and cloud-based.

When the repository is up and running, St. Patrick’s College are hoping to build in an e-submission policy for theses. To build the collection, there will be a retrospective call for staff publications. Existing staff publications must be collated and publishers contacted. This in itself is a time-consuming and difficult task, especially given the fact that not all Irish publications are on SHERPA/ROMEO and can be difficult to track down.

Niamh Brennan (TCD) spoke about how the material on your Institutional Repository can become visible not just nationally, but internationally. Institutions should make themselves aware of European collaborations such as OpenAIRE, which holds EU-funded papers. No Irish institutions are on it as yet, but this will change soon. There is a procedure for making works OpenAIRE compliant – contact Niamh for more information. Another example of a European collaboration is PEER (Publishing and the Ecology of European Research). Trinity College’s TARA is a partner repository. Major academic publishing companies such as Elsevier, Springer, Wiley and Taylor & Francis have provided content. Interestingly, they found that it didn’t detract from their business but rather added to it.

Web analytics tools such as Google Analytics can be very useful for monitoring traffic, and the results can be surprising (the example given was the variety of countries accessing the book “Sin, Sheep and Scotsmen” by W.E. Vaughan!). Analytics can be a powerful marketing tool to show lecturers how their work is reaching new and varied audiences. The impact of a repository can be far-reaching, and it can have societal, economic and cultural influences.

Stephanie Ronan of the IMI offered some handy advice on joining RIAN based on recent experience.
- Team up with others with similar interests and goals : a buddy system provides much needed help and support
- Factor in a considerable amount of time to organise a meeting with the RIAN Steering Group
- Prepare for lots of tests and checks, particularly if developers are inexperienced
- Accumulate a budget for RIAN and OpenAIRE
- Run in-house if possible
- Note that the subject field isn’t mandatory in DSpace but is for RIAN. Take note of RIAN’s other mandatory fields.

The TeachMeet finished with an open discussion in groups about the aims and goals of the RNI:

- To combine expertise and support each other
- Build contacts
- Feed into National Policy
- Share information
- Create best practice
- Develop technical skills
- Provide practical support
- Increase marketing of IRs

In terms of technical skills, it might be useful to set up workshops or webinars on topics such as:
- Copyright
- Open Access Publishing
- Web Analytics / Google Analytics
- Ensuring repository fields match up to those on RIAN : Crosswalks.

The groups also discussed which type of forum might be suitable for the RNI – it would need to be private, but also allow conversations. Possibilities include the RNI wiki, a LinkedIn group or a WordPress discussion forum. As regards the formality of the group, it was agreed that it should be formal enough to have assigned roles, but not so formal as to be a society. Further meetings will take place in 2014, and not necessarily in Dublin, given the range of institutions present.

[Presentation slides available on RNI wiki]

16 Dec 2013

Essential reading for academic librarians: How Freshmen Conduct Course Research Once They Enter College

That's this report (not my blog post! :)). Last week saw the release of another excellent Project Information Literacy report, "Learning the Ropes: How Freshmen Conduct Course Research Once They Enter College", which provides insight into how first-year college students manage the transition to a complex and unfamiliar information environment. Although based on interviews and survey data from the US, many of the findings will likely be relevant across academic settings elsewhere.

Some of the challenges highlighted in the report which can contribute to a difficult transition for new students include:

The difference in scale between high school and college libraries (which may be up to a factor of twenty, depending on the type of resource). From an Irish perspective, where there are in fact very few second-level school libraries, I imagine this difference is even more pronounced. Indeed, in some cases students' only knowledge of a library prior to entering university may be from their experience of using public libraries (if even that).

The most difficult research tasks related to online searching, with three quarters of the sample reporting difficulty selecting keywords, and over half finding themselves overburdened with large volumes of irrelevant information. Identifying and selecting potential sources was the third most frequent difficulty experienced by students. Devising database search strategies that are comprehensive yet reasonably precise is not easy - ask anyone who has ever undertaken a systematic review. But we need to remember that first-year undergraduates are not writing a systematic review; they are finding their way around the landscape and learning as they go. They don't need every single paper, they need a few important and relevant ones, which (most importantly) they can evaluate, analyse, critique, synthesise and use with their own ideas and arguments.

It is not just finding information that is a problem however, with over 40% expressing difficulty in making sense of, and using, the information they had found. If we target our instruction solely at retrieving and extracting information, we may be showing our users where the door is, but still not giving them the key. After reading the report, I believe it points to a need to simplify a lot of what we offer to users. I think well-designed discovery tools and Google Scholar can work extremely well for transitioning undergraduates. They simplify the process of retrieval for students (and yes, they oversimplify it as well) in a way that looks and feels familiar, freeing up significant time for developing skills for evaluating, using and managing information. I know some librarians feel that encouraging students to use these tools somehow 'lessens' the value of library databases like JSTOR or Web of Science, but in reality it is simply exposing our subscription content in a new way.

For many undergraduates, the alternative to using discovery platforms and Google Scholar is not embracing half a dozen specialist databases and boolean logic, but rather switching off from library resources altogether.

13 Dec 2013

Some thoughts on Academia.edu, Elsevier and TDNs

It's difficult not to post something about the recent Elsevier/Academia.edu take-down notice situation, even if it has already been heavily blogged about and discussed. It has now emerged that it is not just Academia.edu in Elsevier's field of vision, but also personal websites, such as WordPress blogs. For those looking for an overview of some of the issues I would recommend Scholarly Kitchen's discussion, and there is a good round-up of blog posts and news articles on ScienceBlogs.

Image: Sony Records
As much as I don't like saying it, I think Elsevier are correct in what they are doing. If an author signs a copyright agreement that prohibits uploading a publisher's final version, he or she should uphold it in my opinion. The real problem is that copyright agreements are typically very unfair on authors in the first instance, and it is this that needs to be changed. From my experience there can be a lot of confusion over what level of sharing is allowed by authors, with some believing they are free to share 'their' articles where they like, as it is 'their' work. This, however, does not make breaching such legal agreements and contracts 'right'.  Hopefully, the biggest effect of the Elsevier TDNs is that authors will now understand how restrictive publishing contracts are and the rights they are giving up, and henceforth renegotiate as far as possible and maximise what they can do to disseminate their research through legal means, such as institutional repositories.

In truth, I believe the Academia.edu TDN situation is a great news story for IRs and, ultimately, sustainable open access. Whilst sites like ResearchGate and Academia.edu may provide a platform for authors to share their work, and their simplicity may be appealing, there is no guarantee that these platforms will exist in a year's time, never mind five or ten years' time. More importantly, open access is not the raison d'être of these research networks, but merely a convenient by-product which has garnered them goodwill from users. Researchers also need to remember that by using such sites they are willingly providing their data and analytics to a for-profit third party to resell. In reality, these sites may actually be holding back open access by reducing green repository deposits and providing a sticking-plaster solution to accessibility that has just about stopped the issue from boiling over. Until now.

8 Dec 2013

Google/OpenRefine for metadata cleanup and linked data

Last Friday, I attended a half-day workshop at the RIA, which provided a bird's-eye introduction to GoogleRefine. The morning session kicked off with a 45-minute recap on the history, complexity and limitations of databases. The rest of the time was spent playing with OpenRefine.

GoogleRefine (now OpenRefine) is a standalone, open-source desktop application for data cleanup and transformation. It presents data as a flat table but behaves more like a relational database. It’s a hugely powerful tool and requires some legwork and practice to fully exploit its potential.

An immediate and very practical use of the software is cleaning up messy metadata effectively. Say you have an exported text file of semi-structured data: you can edit it using transformations, facets and clustering to re-structure the data.
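To illustrate how clustering catches inconsistent entries, here is a short Python sketch of a fingerprint-style keying function, loosely modelled on OpenRefine's fingerprint clustering method (trim, lowercase, strip punctuation, then sort and de-duplicate tokens). It is a simplification for illustration, not OpenRefine's exact algorithm:

```python
import re
from collections import defaultdict

def fingerprint(value):
    # Simplified fingerprint key: trim, lowercase, drop punctuation,
    # then sort and de-duplicate the remaining tokens.
    tokens = re.sub(r"[^\w\s]", "", value.strip().lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster(values):
    # Group values sharing a fingerprint; only groups containing more
    # than one distinct spelling are candidate clusters to review.
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [vs for vs in groups.values() if len(set(vs)) > 1]

messy = ["Trinity College Dublin", "trinity college, Dublin.", "Dublin Business School"]
print(cluster(messy))  # the two Trinity variants collapse into one cluster
```

In OpenRefine itself this happens interactively via Edit cells → Cluster and edit, where you review each suggested cluster before merging.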

Screenshot of “categories” for sample data-set

GoogleRefine can also be used to convert data values to other formats and to extend datasets by calling web services, for example geocoding addresses to geographic coordinates.
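The workshop didn't name a particular geocoding service, but as a hypothetical illustration, OpenRefine's "Add column by fetching URLs" feature could be pointed at a public geocoder such as OpenStreetMap's Nominatim by building one request URL per row, along these lines:

```python
from urllib.parse import urlencode

# Illustrative only: builds the kind of per-row request URL that
# OpenRefine's "Add column by fetching URLs" feature would fetch.
# Nominatim is one real public geocoder; any service with a URL-based
# API would work the same way.
def nominatim_url(address):
    query = urlencode({"q": address, "format": "json", "limit": 1})
    return "https://nominatim.openstreetmap.org/search?" + query

print(nominatim_url("Royal Irish Academy, Dawson Street, Dublin"))
```

The JSON response for each row can then be parsed with a GREL expression inside OpenRefine to extract the latitude and longitude into new columns.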

Check out http://collection.cooperhewitt.org/people/18060335/ as a good example for linked metadata (person search).

OpenRefine (Project homepage)
Getting started with OpenRefine
Using OpenRefine: a manual

29 Nov 2013

Three librarian webinars in December

...and one in early January. Below is this year's final set of interesting CPD webinars. Topics covered include collection management, reader services, web design and cataloguing.

Weeding: The Good, the Bad, and the Musty
Wednesday, 11th December,  4pm – 5pm GMT (Provided by Nicolet Federated Library System)
Weeding is not a dirty word. Change your attitude about weeding and take charge of your collection. Learn the essential steps to make your collection more useful, comfortable, and attractive for your users.

Extreme Customer Service, Every Time
Thursday, 12th December, 6pm - 7pm GMT (Provided by WebJunction)
Commitment to great customer service goes beyond “service with a smile.” It is a commitment to truly engage and communicate with patrons and to find ways to extend the experience above and beyond their expectations. Building on the success of the Darien Library, whose reputation is known internationally for providing “extreme customer service”, presenter Gretchen Caserotti will provide you with practical and actionable ideas that can help your library, whether small or large, commit to excellent customer service.

Web Trends 2013-2014: Where to Invest Your Pixels
Tuesday, 17th December, 6pm - 6.45pm GMT (Provided by Systems Alliance)
In the last few years we have seen progressive - and monumental - changes in technology that have begun to transform the way websites are designed. Responsive design, sticky navigation and web typography are no longer just buzzwords but important elements considered in every design project. But separating the fads from the trends and identifying those trends worth investing in can certainly be challenging. This webinar takes a look at trends in web design that have emerged in recent years as well as those anticipated for 2014. Trends examined by Systems Alliance include:
- Responsive design
- Web typography
- Minimalist and flat design
- Parallax scrolling
- Infographics and data-driven graphics
- Sticky navigation
- One-page websites

Transitioning from MARC to BIBFRAME: The Environment and the Format
Wednesday, 8th January, 5pm - 6pm GMT (Provided by UW-Madison School of Library and Information Studies)
MARC is dead! How many times have you heard that during your career in libraries?

This time, though, it might really be for real. In 2011, the Library of Congress (LoC) announced that it was transitioning away from the MARC format for bibliographic data.

In the summer of 2012, LoC hired Zepheira to investigate the possibilities of linked data as the carrier for library descriptive information. In November of 2012, Zepheira and LoC published a report, Bibliographic Framework as a Web of Data: Linked Data Model and Supporting Services, often referred to as the BIBFRAME Primer.

You can follow BIBFRAME developments at BIBFRAME.org, but to help you get started, UW-Madison SLIS Continuing Education Services is providing two free Webinars, taught by Kevin Ford of the LoC MARC Standards Office and BIBFRAME Initiative.

The first 100 attendees to sign up will be able to watch the webinar live; those who sign up later will receive emails with links to the recordings.
Posted on Friday, November 29, 2013 | Categories:

26 Nov 2013

Information literacy, graduate attributes and employability

As an academic librarian, I think one of the key reports in recent times is Project Information Literacy's How College Graduates Solve Information Problems Once They Join the Workplace. A large proportion of our IL instruction focuses on 'library' resources such as subscription databases and journals, and aims at achieving learning outcomes that help students succeed during their time at university.

The bigger picture, of course, is that once our students graduate, the information landscape of industry, policy-making and the professional world often bears very little resemblance to 'the library'. After 3 or 4 years of using JSTOR, Web of Science or Scopus, we expect them to be able to go out into the world and know where to find the information they need to help them in their jobs. In healthcare (where PubMed is freely available) or in large financial companies that may have subscription resources, the transition to information-seeking in professional life may be more manageable. However, retail managers trying to analyse market trends or an engineer in a small practice looking for information on measuring environmental impact may find themselves trying to climb a glass wall with no familiar footholds for leverage.

As instructors we help our students to recognise that being able to find, evaluate and manage information effectively is a key transferable skill, but I wonder whether we emphasise this enough. If we highlight the long-term value of being information literate (and in particular the advantage of possessing a set of competencies and attributes desired by employers), students may also be more willing to invest time in the process of learning and developing their skills. Perhaps we need to think about how we can package our typical IL objectives and outcomes in a way that is directly linked to the world of work.

The aforementioned report from PIL (2012, p.9) highlights the three baseline competencies required by employers at the recruitment stage. These same competencies are of course inherent in probably every third level information skills curriculum across the globe. However, do we emphasise the point enough that these are also key skills required by employers? Do we really sell how transferable many of the skills needed in using information in everyday life really are?

What Do Employers Expect from College Hires?
1. To know how and where to find information online, without much guidance
2. To use a search strategy that goes beyond Google and finding an answer
on the first page of results
3. To articulate a “best solution” and conclusion from all that was found
(How College Graduates Solve Information Problems Once They Join the Workplace, 2012, p. 9)

The Report also includes the findings from the 2011 NACE Survey, in which employers rated the importance of candidate skills/qualities on a scale of 1 (not important) to 5 (extremely important).

1. Ability to work in a team structure 4.60
2. Ability to verbally communicate with people inside and outside the organization 4.59
3. Ability to make decisions and solve problems 4.49
4. Ability to obtain and process information 4.46
5. Ability to plan, organize and prioritize work 4.45
6. Ability to analyze quantitative data 4.23
7. Possession of technical knowledge related to the job 4.23
8. Proficiency with computer software programs 4.04
9. Ability to create and/or edit written reports 3.65
10. Ability to persuade or influence others 3.51
(How College Graduates Solve Information Problems Once They Join the Workplace, 2012, p.9)

Again, our instruction can often hit several of these to some extent at least, through collaborative learning, promoting and supporting digital literacies, helping students develop skills for managing and organising information, fostering critical thinking skills and developing effective processes for solving research problems.

Given the importance of context and relevance in teaching IL, I still feel that it is essential to try and reach students when they are most receptive and faced with a real information need or problem, such as completing an assignment or essay. Targeting instruction which can hit this specific gap at the right time will no doubt be more effective than pitching it as a set of vague or bigger picture attributes desired by employers. However at the same time, I think it could be useful for us to explore how we can anchor our IL instruction to employability and the 'real world' a little more.

18 Nov 2013

Germany’s green road: a 2012 census overview of German open access repositories

Back in 2012, Paul Vierkant and his team of three at the Information Management Department at the Berlin School of Library and Information Science conducted the most thorough snapshot survey of German open access repositories to date.

For practical purposes, Vierkant et al. developed their own definition of what a digital open access repository denotes within the context of their census project: "The Census [...] definition of Open Access Repository includes repositories that are institutional, cross-institutional or disciplinary providing (in the majority of cases) full-text open access scientific publications together with descriptive metadata through a GUI (with search/browse functionality). The repositories are registered with a functioning and harvestable base URL in at least one of the following registries: ROAR, OpenDOAR, OAI, DINI and BASE" (Vierkant, 2013).

Repositories that host digital collections, open access journal aggregators and research data were not included in this snapshot survey as they are difficult to compare due to their significant differences in character, scope and content. The survey took place on 14th February 2012.

Repository sizes and amounts of content
Within the context of the above definition, 141 open access repositories were operational in Germany at the time of the survey.

Size ranges of and software used for open access repositories in Germany (Source: Vierkant/D-Lib Magazine)

In total, 704,121 items are accessible through open access. The largest group of repositories (57 of the 141) contains ≤ 1,000 items each. Baden-Württemberg hosts 28 repositories, followed by Nordrhein-Westfalen (the most populated Bundesland) with 27 and Bayern with 22.

The top five largest open access repositories are 1) elib Publikationen des DLR (46,136 items), 2) EconStor (45,268 items), 3) German Medical Science (41,753 items), 4) PUB – Universität Bielefeld (32,695 items), 5) ePIC – AWI (29,480 items).

It is interesting to see that one third of Germany’s open access repositories avail of hosting services: the smaller the repository, the more likely it is that they are hosted off site.

Value-added services
German repositories offer the following basic services, though none is universally present:
- Bibliographic export: 56%
- Usage statistics: 24%
- Checksum provision: 36%
- RSS feeds: 48%
- Social bookmarking services: 45%
- Social media sharing buttons (such as Facebook, Twitter or AddThis): 11%

The report only speculates as to why German repositories do or do not offer value-added services.

Repository software and metadata formats
Germany is considered to be “OPUS-country”: 77 out of 141 repositories use OPUS (open source).

National distribution of repository software in Germany (Source: Vierkant/D-Lib Magazine)

Only 9 repositories run DSpace, even though it is the most widely used open source repository platform internationally. Simple Dublin Core is supported by 99% of all repository instances.
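Simple Dublin Core is the baseline format that repositories expose for metadata harvesting. As a rough illustration, here is a minimal Python sketch of such a record; the element names come from the standard Dublin Core 1.1 element set and the OAI-PMH oai_dc container, while the sample values are invented:

```python
# Build a minimal Simple Dublin Core (oai_dc) record.
# Namespace URIs are the standard DCMI / OAI-PMH ones;
# the field values below are purely illustrative.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
OAI_DC_NS = "http://www.openarchives.org/OAI/2.0/oai_dc/"

ET.register_namespace("dc", DC_NS)
ET.register_namespace("oai_dc", OAI_DC_NS)

def make_dc_record(fields):
    """Build an oai_dc:dc element from a dict of Dublin Core fields."""
    record = ET.Element(f"{{{OAI_DC_NS}}}dc")
    for name, value in fields.items():
        el = ET.SubElement(record, f"{{{DC_NS}}}{name}")
        el.text = value
    return record

record = make_dc_record({
    "title": "A sample repository item",
    "creator": "Mustermann, Erika",
    "date": "2012-02-14",
    "type": "Text",
})
xml = ET.tostring(record, encoding="unicode")
print(xml)
```

The appeal of Simple Dublin Core is exactly this flatness: fifteen optional, repeatable elements, which is why nearly every repository platform can support it.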

The full results of the survey can be accessed here.

As an aside, UKSG will be running a 45-minute webinar on 20th November: Managing Open Access in the Library (it offers a thorough introduction to open access and explains how open access advocacy and other related procedures can be integrated into libraries).

8 Nov 2013

Four librarian webinars in November

Below are four interesting, free webinars taking place this month. The topics cover MS PowerPoint 2010, website redesign challenges, services for vision-impaired patrons, and where to find essential graphic novels for the adult reader.

Getting the Most out of Microsoft PowerPoint 2010
Wednesday, 13th November,  6pm – 7.30pm Dublin
This webinar covers working with Microsoft PowerPoint to present and deliver information to your audience. You will learn about new features and time-savers to help in your day-to-day work. This session is good for beginner, intermediate, or advanced users – everyone will learn something!

Over Budget and Out of Time: Common Pitfalls of a Website Redesign Project and How to Avoid Them
Tuesday, 19th November, 6:00 PM - 6:45 PM GMT
The task of keeping a website relevant, engaging and on-trend with modern technology can be a daunting, costly, and stressful process.  Even website redesign projects with the most detailed project plans and requirements often fail due to a number of common mistakes. Don’t let your website redesign be next – avoid these pitfalls and ensure you start off on the road to success.

In this webinar, we will examine typical reasons why website redesign projects go over budget, out of scope, and launch beyond the initial timeline. We’ll discuss each pitfall in detail and how to avoid it. Pitfalls include:
- Unrealistic budget
- Too many stakeholders
- Surprises
- The curse of content entry
- Lack of internal resources to support the digital strategy

Itasca Community Library's Vision Center: Library technology for patrons who are blind or have low vision
Wednesday, 20th November, 7pm – 8pm GMT
Does your library provide assistive technology for patrons who are blind or have low vision? Or is this a goal for the near future? Join this session to learn from the Itasca Community Library (IL), which created a Vision Center in the library providing special technology, services, materials and equipment for patrons with vision impairments.

Graphic Novels for Adult Readers: Recommending the Best
Wednesday, 20th November, 5pm – 6pm GMT
Are you wondering how to recommend graphic novels to adult leisure readers? Are you uncomfortable talking with adults who want to discuss graphic novels because you’re not the “staff expert”? Do you know where to find essential graphic novel titles that should be included in most library collections? 

Even though graphic novels continue to become more visible in library collections, adults often don’t consider reading in this format. Staff providing reader’s advisory may also feel at a loss when attempting to include graphic novels as suggestions.

This hour-long webinar will help staff broaden their skills by adding graphic novels to their recommendations. It will show how to locate satisfying and often little-known graphic novels that respond to both the subject interest and personal appeal factors in readers who have little experience with the format. Ideas for encouraging experienced comics readers to move to graphic novels will also be discussed.

Collection development staff will learn sources for graphic novels that are essential to most collections for adult leisure readers.

At the end of this one-hour webinar, participants will:
- Know the history and key elements that make up the graphic novel.
- Be able to discuss the literary appeal graphic novels will have for adult readers.
- Recognise literary genres in graphic novel form.
- Have techniques for recommending graphic novels to adults based on their other reading interests.

This webinar is aimed at those who work with adults and with materials published for the adult reader market. It will be of interest to readers’ advisers and collection developers at any type of library serving adults.

4 Nov 2013

Research, Evaluation and Audit by M.J. Grant, B. Sen & H. Spring (eds.) (Review)

Sometimes it surprises me just how many books on research design and methods have been written without the practitioner in mind. Whilst I understand that the majority of texts may be principally aimed at the academic sector, research and evaluation are also fundamental skills in most workplace contexts. LIS is one such case, and indeed a sector that has seen the philosophy of evidence based practice grow steadily over the past ten or fifteen years to a point where assessment, metrics and evaluation are now cornerstones of service design and delivery.

Research, Evaluation and Audit brings together many of the key names who have been involved in both the emergence and development of EBLIP, in a book that is written and packaged firmly with the practitioner in mind. From the outset, it is evident that the editors understand the real challenges when it comes to librarians and practitioners undertaking research. It is not just a simple matter of learning how to carry out research from a technical perspective, and which methods are appropriate and why; it starts at a much more fundamental level. Indeed in some cases, the concept of evidence based practice requires developing a new mindset - one that continuously questions, seeks and appraises rather than relying on experience, habits and traditions.

The contributors to this book clearly recognise and acknowledge the complexity of this challenge. The editors have skilfully managed to curate and incorporate the broader issues involved in adopting an evidence based approach, including the need to develop a curious and analytical mindset; cultivating the habit of asking the right questions; practical aspects like writing a project plan to give clarity and keep things on track; ethics and best practice. It's refreshing that nearly 100 pages pass before research methods are discussed in any detail, and the fact that the second chapter is dedicated to the broader issue of building confidence is indicative of the book's holistic focus. This breadth, however, means that the second part of the book, which deals with methods and data analysis, may be too introductory and brief for some. It really serves as an overview, and provides a jumping-off point for researchers, who can consult the recommended further reading for more specific information on methods or techniques. A chapter on research tools is a welcome and unusual addition, and provides some useful links and applications for current awareness, reference management and surveys.

Peppered throughout the book, illustrative case studies of 'real' research demonstrate just how intertwined research and evaluation are with service delivery. They serve as a reality-check for those who may claim their job is to 'support research, not to undertake it'. Research should not be viewed as disconnected and separate from our day jobs; it is about finding answers to the key questions that affect our services. Finding the best quality evidence helps us to do our jobs better, as well as to ascertain and demonstrate impact in an era when the need to communicate our value is greater than ever.

To me this book is not so much a one-stop-shop for those undertaking research in LIS; instead its greatest value lies in how it gently steers the reader through the research terrain, highlighting both the pitfalls and best routes to take, and giving them the context and insight to navigate and reach their own destination. Indeed it is likely that once the reader gets involved in any kind of project, this will be just one of several research texts that they reach for. However, it might ultimately end up being the most essential, by being the one that started them on their journey in the first place.

Research, Evaluation and Audit: Key steps in demonstrating your value by M.J. Grant, B. Sen & H. Spring (eds.) is published by Facet, October 2013.

1 Nov 2013

Getting Started in Digital Preservation - DPC Workshop, Dublin, 1st Nov. - Review

Today I attended a one-day introductory workshop delivered by the Digital Preservation Coalition, which covered the nuts and bolts of digital preservation. It became quite clear from the outset that attendees' ideas, needs and priorities on this tricky topic differed. These ranged from arguing the business case to secure funding for the preservation of particular materials to identifying the general requirements for a successful digital preservation project and the particulars of approaching risk analysis.

Essentially, this workshop delivered a high-level walk-through of how to accomplish a successful digital preservation project that delivers on the needs of all stakeholders involved. It also provided us with a substantial list of information resources that can make this happen.

The first presentation introduced us to the idea of getting started in digital preservation activities. Much digital information requires long-term preservation as it represents value to particular users in the present, whilst at the same time creating information contexts and opportunities for users of the future. Preservation means migration (adjusting file formats to ensure accessibility), emulation (intervening in an operating system to ensure that legacy software continues to read information), hardware preservation (maintaining the physical computing environment to read information), and research and development of new preservation and access solutions.

Various challenges arise when engaging with digital preservation activities, which consequently require careful planning and execution. See the presentation "Getting Started in Digital Preservation: what do I need to know?" for details on those challenges, approaches and related support tools.

The next presentation introduced us to identifying appropriate file formats for preservation. This included a practical showcase using PRONOM (a public file format registry) and DROID (a tool that identifies formats against it). I found this quite useful, as it highlighted the challenges inherent in the changing of file formats over time (i.e. identifying 'robust' formats versus 'proliferating' formats versus 'conformant' data containers).
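DROID works by matching files against the format signatures ("magic numbers") recorded in the PRONOM registry. The toy Python sketch below illustrates the general technique only; it hard-codes two well-known signatures and is in no way a substitute for the thousands of entries a real tool consults:

```python
# Toy signature-based file format identification.
# The two magic numbers below are the well-known PDF and PNG
# signatures; everything else about this sketch is illustrative.
SIGNATURES = {
    b"%PDF-": "Portable Document Format (PDF)",
    b"\x89PNG\r\n\x1a\n": "Portable Network Graphics (PNG)",
}

def identify(data: bytes) -> str:
    """Match the leading bytes of a file against known signatures."""
    for magic, name in SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown format"

print(identify(b"%PDF-1.4 ..."))  # → Portable Document Format (PDF)
```

Signature matching is what makes format identification reliable even when a file's extension is missing or wrong, which is exactly the situation digital preservation workflows have to assume.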

We were then shown how to implement a workflow for digital preservation. Six basic steps apply:
1) Know what you have
2) Prioritise the risks
3) Plan what to do about them
4) Test the plan
5) Implement the plan
6) Check the plan has worked

Practical planning tools include PLATO, DMPonline and the OAIS reference model. This presentation is particularly valuable if you are new to the field of digital preservation, as it introduces a step-by-step protocol for executing digital preservation projects.

The reality is that successful digital preservation projects rely on considerable planning, manpower and expertise (appropriate funding is a separate question altogether). This workshop helped me to better understand what's ultimately required, including which information resources to consult to make sure that projects get properly off the ground.

24 Oct 2013

Playing it safe on the Web

Last night I tuned into a WSL webinar, which covered various aspects around staying safe and sound on the Web (how to browse dangerously whilst staying safe).

The following areas were covered:

Passwords (how to create a good password, password managers)
Viruses and malware (what they are and do)
How to stay Virus free and how to get rid of them
Privacy (how to maintain privacy on social networks)

I thought the session was quite useful, not just for my own benefit but also for situations where students ask questions about safe Web browsing. The 1-hour session is recorded here if you would like to check it out for yourself.


Below is a handy list of resources that will help you stay safe on the Web.

Random Word Generator
The Diceware Passphrase Home Page
How secure is my password?
The last password you will have to remember
Open source, light-weight and easy-to-use password manager
Microsoft Security Essentials (MSE)
AVG AntiVirus (free)
AVAST AntiVirus (free)
COMODO AntiVirus (free)
Malwarebytes (free)
Google / Secure your passwords
Google / Keep your device clean
HTTPS Everywhere (Firefox and Chrome extension that encrypts your communications with many major websites, making your browsing more secure)
Anonymity, Privacy, and Security Online
VPN (virtual private network)
Why You Should Start Using a VPN (and How to Choose the Best One for Your Needs)
Android Device Manager
Apple iCloud Find my iPhone
Remove Ask Toolbar and Ask.com Search (Uninstall Guide)
Flashblock (a content-filtering Firefox extension)
NoScript Firefox extension (provides extra protection for Firefox, Seamonkey and other mozilla-based browsers: this free, open source add-on allows JavaScript, Java, Flash and other plugins to be executed only by trusted web sites of your choice (e.g. your online bank))
Gizmos Freeware Reviews
Portable Apps (software solution allowing you to take your favourite software with you)
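As a small illustration of the Diceware approach linked above: a passphrase is built by picking words uniformly at random from a large list using a cryptographically secure source. The Python sketch below uses a tiny stand-in word list; a real passphrase should draw from the full 7,776-word Diceware list:

```python
# Toy Diceware-style passphrase generator. The word list is a
# stand-in for illustration only; use the real Diceware list
# (7,776 words) for genuine security.
import secrets

WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "pebble", "quartz", "meadow", "tundra", "violet", "walnut"]

def passphrase(n_words=5, wordlist=WORDS, sep=" "):
    """Join n_words picked uniformly at random with a CSPRNG."""
    return sep.join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())
```

The point of the scheme is that strength comes from the number of words and the size of the list, not from obscure character substitutions, so the result stays memorable.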

21 Oct 2013

Life is lossy: a preliminary review of the current Coursera MOOC on metadata by Dr. Jeffrey Pomerantz

Guest post by Giada Gelli, LIS Professional

Who, by now, hasn’t heard of MOOCs (Massive Open Online Courses)? I certainly had heard of this relatively new phenomenon, but had never given it too much weight. After all, I was not in college nor working in the academic field during what can be considered the explosion of the open online education revolution. Wikipedia tells me that MOOCs were born in the form we know them today in 2008, but 2012 seems to have been their absolute year of fame with even the New York Times dubbing it the ‘Year of the MOOC’.

At the end of last month, as I found myself facing another spell of unemployment with nothing lined up in terms of work (thank you, recession), the sudden realisation of a lot of newly-gained free time prompted me to have a look at what MOOCs offerings were out there. I had a look at one of the major players in the field, Coursera, and I was struck by the breadth and depth of options of their free online courses. There are obviously other MOOC providers out there, but my search that day ended there.

In fact, it didn’t take me very long to fall in love with one of the Coursera offerings, a module on metadata taught by Dr. Jeffrey Pomerantz of the School of Information and Library Science at the University of North Carolina at Chapel Hill (quite a mouthful as Dr. Pomerantz puts it!). I had come across the University of North Carolina in the past few years for some of their work done in the field of digitisation and digital curation, so somehow I felt connected to them – the quirks of the web. Also, after working with metadata for a while, I was eager to go back to basics and brush up on all of the theory and conceptual models I might have forgotten while working on specific datasets and schemas, in order to get a fresher look at the subject and maybe even get to do some XML coding – happy days!

So, without thinking too much about it, I set out to enroll on the Metadata: Organising and Discovering Information course (#MetadataMOOC on Twitter). Unfortunately for me, the course had already started 3 weeks before. Nonetheless, the nature of the MOOC model allowed me to enroll at such a late stage without any real impairment to my learning, except for the fact that I would not be able to submit previous homework on time and get formal accreditation for it. But I didn’t mind this aspect, as I felt my main goal was to learn, not to gain points. The signing-up process struck me as the easiest ever, similar to those online products we revere so much for their user-friendliness. In a minute I had created my Coursera account and was subscribed to an awesome-looking course.

I was immediately launched into the deep belly of a well-structured academic module. The video lessons are neatly organised into units stacked in an orderly hierarchical tree, with only the ones completed so far visible on the page, allowing for a bit of mystery around the lessons still to come. The lessons’ titles were simply music to my ears: thesauri, Dublin Core, LCSH, HTML, DTD, CDWA Lite etc. In addition, some interesting links were grouped on the left of the workbench: announcements, downloads, homework, syllabus and even more appealing ones such as discussion forums. There was even a Mapping the Metadata MOOC link, where pins had been dropped on a Google Map to show the location of all course participants – thousands from what I could see, and literally from all over the world. This was getting more exciting by the minute.

After the smooth, user-friendly experience of the signing-up process I went straight to the introductory video of the first unit of lessons. Even though my Mac uses HTML5 and a message popped up on the page saying that I should switch to Flash, the video launched without glitches and the course began. Dr. Pomerantz started with a welcome and a warning: this was going to be a course for total beginners, perfect for those with very little knowledge about metadata. At first that struck me as perhaps not very appropriate for me. However looking at the syllabus the course seemed to be so comprehensive, and judging by the lessons’ titles very detailed, that I was not put off by it and decided to plough ahead. The academic flavour of the MOOC experience was now beginning to tickle my taste buds. It was like being back in university, but without having to pay fees, how bizarre!

The introduction was quickly followed by a couple of brief lessons engineered around the idea of finding examples of metadata use in our daily lives: phone metadata and the NSA metadata-harvesting scandal (metadata *is* data), and the new tagging feature of the latest Apple operating system, OS X Mavericks – all tangible examples of metadata in action around us.

The lessons went on to explain what metadata is (data about data anyone?), with some very true statements such as:
“…this course is about something that's difficult and maybe impossible to pin down. In Information Science we study all of these weird subjective phenomena - no gravity, or electromagnetism, no well-behaved physical phenomena for us here in Information Science.”
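That slipperiness aside, the working idea is concrete enough: metadata is the descriptive data that makes an otherwise opaque object findable. A small invented sketch of that principle in Python:

```python
# Invented illustration of "data about data": the objects
# themselves could be opaque bytes; it is the descriptive
# metadata that makes them discoverable.
items = [
    {"id": 1, "title": "Solstice at Newgrange",
     "subject": ["archaeology", "solstice"]},
    {"id": 2, "title": "Ploughing match, Kilkenny",
     "subject": ["agriculture"]},
]

def search(records, term):
    """Return ids of items whose metadata mentions the term."""
    term = term.lower()
    return [r["id"] for r in records
            if term in r["title"].lower()
            or term in (s.lower() for s in r["subject"])]

print(search(items, "solstice"))  # → [1]
```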

The course then continued on to explore Dublin Core - there’s an entire unit dedicated to it - plus a whole host of metadata schemas and controlled vocabularies used to produce metadata for digital objects of all sorts.

This is how much of the course I have explored so far, and I have to say I am very impressed by it. The homework in the form of multiple-choice questions peppered with more practical exercises is very meaningful and the lessons are enriched by links, recommended readings and interviews with inspiring practitioners in the field of metadata and digital preservation.

In particular, I must mention a very good interview with historian and archivist, or rather free-range archivist (yeah!), Jason Scott (@textfiles) of the brilliant Archive Team and the even more brilliant archive.org. In a very insightful conversation with Pomerantz, Scott pointed out that through initiatives such as the Internet Archive computer history has changed forever. It is thanks to the dedication of various teams of professionals, but also of many amateur (read: nerd) collectors of all times out there, that an extensive library of videos, music, books, magazines and even software is now freely accessible to everyone on the internet.

Such efforts are so important if we wish to preserve the current and recent history of computing technologies, while securing future access to the metadata that is being constantly produced. Faced with such a wide-scoped digital preservation task, we can only try our best to save as much as we can from oblivion, obsolescence and data corruption, without falling into the illusion that we can archive and preserve everything that has ever been created in the digital world. At the end of the day, in Scott’s words, ‘life is lossy, it is not a lossless protocol’. Preserve we can, but preserving everything will remain mere utopia.

Screenshot taken from Dr Jeffrey Pomerantz's MOOC
Further reading about this library MOOC:

Metadata MOOC in the news by Dr. Jeffrey Pomerantz on his blog
A tale of two MOOCs by Steven Chang on Infoseer
MOOCs and Libraries by Abby Clobridge on Against the Grain
