2 Feb 2014

Student assessment of information literacy instruction - the challenges

Seth Godin's recent post on Measuring nothing (with great accuracy) reminded me of the challenges involved in measuring student learning. In terms of my own teaching practice, student assessment of teaching (SAT) and student feedback is something I have been exploring for a while now, so much so that I recently raised the question on Twitter, as I was interested in hearing what others are doing:

[Embedded tweet]

From the responses, Google Docs and SurveyMonkey are both popular collection tools (does anyone still use paper, I wonder?), with some supplementing this routine post-session data with periodic focus groups or face-to-face feedback sessions to tease out deeper issues. The feedback sought typically covers a mixture of elements: ratings of how useful the session was, the timing of sessions, and suggestions for future topics or workshops. Capturing student contact details also provides a good opportunity for follow-up.

I would be interested to hear what others do with the data they collect: how they analyse and present it, what they use it for, and what they actually find it useful for. In my own experience, data is sometimes collected as a matter of process or habit, filed away in neat tables in annual reports and other documents (i.e. technically used), but perhaps not fully utilised to support or inform change. How can we use feedback and assessment measures to shape our strategy, build relationships with academic staff, and guide our teaching practice? The first step is to look at the kind of feedback and information we gather: are we collecting the right information?
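On the analysis side, even a small script can turn a pile of exported responses into something easier to act on than a static table in a report. The sketch below is just one possible approach, assuming responses are exported as a CSV file (both Google Forms and SurveyMonkey support this) and using made-up column names ('usefulness' and 'suggestions') that would need adjusting to match your own form:

```python
# A minimal sketch of summarising post-session feedback exported as CSV
# (e.g. from Google Forms or SurveyMonkey). Column names are hypothetical.
import csv
from collections import Counter

def summarise_feedback(path):
    ratings = Counter()
    suggestions = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # 'usefulness' is assumed to be a 1-5 rating; skip blank answers
            value = (row.get("usefulness") or "").strip()
            if value.isdigit():
                ratings[int(value)] += 1
            # 'suggestions' is assumed to hold free-text ideas for future topics
            text = (row.get("suggestions") or "").strip()
            if text:
                suggestions.append(text)

    total = sum(ratings.values())
    if total:
        mean = sum(score * n for score, n in ratings.items()) / total
        print(f"{total} responses, mean usefulness {mean:.1f}/5")
        for score in sorted(ratings):
            print(f"  {score}: {'#' * ratings[score]}")
    print(f"{len(suggestions)} suggestions for future topics")

if __name__ == "__main__":
    summarise_feedback("session_feedback.csv")
```

Nothing sophisticated, but a rough distribution and a pile of free-text suggestions in one place is often enough to start a conversation with academic staff, rather than leaving the numbers buried in an annual report.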

As many of the sessions I deliver are 'one shot', I feel the scope for running pre- and post-tests is limited (though possible). If the post-test is set immediately after the session, questions are likely to be answered largely from recall. There is also arguably too small a window of time for any 'real' learning to have taken place, given that learning (hopefully!) happens over a sustained period after the session in a self-directed way. If the test is administered at a later point, other confounding factors are likely to be introduced, which may preclude drawing any meaningful inferences about relationships or causation.

To a large extent, I think quantitative measures and tests like Project SAILS are too narrow to fully capture a continuum of competencies like information literacy. Being able to remember the name of a database or the order of elements in a standard citation does not tell us much about whether a student genuinely understands how to find, evaluate or use information effectively. At best, it is an isolated snapshot of a moment in time, which again can be influenced by other variables.

The literature (Gross & Latham have a recent study on this) shows that students' self-assessments and views of their own abilities are often very different from reality. Consequently, asking students to rate their confidence or ability after a session is likely to overstate actual levels in many cases. Student satisfaction measures will tell us little, if anything, about learning, and may be biased by students' perceptions of an instructor's personality, the complexity or difficulty of the material covered during the session, or other factors.

At the moment, I find formative assessment techniques, such as the one-minute paper and muddiest point activities, helpful in informing my future lesson plans and instructional approach (in fact, a recent paper by Gewirtz largely mirrors the structure of my current student feedback method). I use this kind of feedback to gain insight into the concepts that students engage well with, and the areas that remain challenging and may need a different approach. Ultimately, I think a blend of measures is probably the most effective way to capture something as complex, covert, and difficult to measure as learning. Nearly all feedback can be useful to some extent, and how we actually use the data we collect matters just as much as how we gather it. If anyone finds the 'right' answer, let me know :)

2 comments:

  1. Boo, my previous comment just got eaten. :-( I just wanted to chime in and say that I struggle with this too and also feel the same way about things like self-assessments and standardized tests. I actually have a column coming out in American Libraries soon on this topic. I've stopped obsessing over finding a great way to measure student learning in an infolit session (which, as you mentioned, mostly measures recall) and focus on things that really inform my teaching. I currently use three methods: 1) pre-assignments that act as formative assessments and help me to tailor my session to the needs of the group, 2) minute papers to inform future instruction, and 3) rubric assessments of student research papers. #3 is really the only way to authentically measure whether or not students were able to successfully internalize and use the skills they were taught in the infolit session, but it's also really time-consuming to do, and it's difficult-to-impossible to isolate the librarian's contribution. I find it extremely enlightening to read and assess student work (something we librarians too often don't have access to), and it has also really informed changes in my teaching, as well as some tutorials a colleague and I built this summer after a big Freshman research paper review.

  2. Thanks for sharing, Meredith - some great ideas there. I totally agree about using rubrics to assess student papers, and RAILS (http://railsontrack.info/rubrics.aspx) is something I'm really interested in - I guess it may be difficult to scale up with a large volume of papers, though. Looking forward to the column in AL!
