Wondering how librarians are currently gathering student feedback on #infolit instruction? Online/print? & what do you ask for feedback on?
— Mish (@mishdalton) January 15, 2014
From the responses, Google Docs and SurveyMonkey are both popular collection tools (does anyone still use paper, I wonder?), with some supplementing this routine post-session data with periodic focus groups or face-to-face feedback sessions to tease out deeper issues. The feedback sought typically mixes several elements: ranking the usefulness of the sessions, the timing of sessions, and suggestions for future topics/workshops. Capturing student contact details also provides a good opportunity for follow-up.
I would be interested to hear what others do with the data they collect: how they analyse and present it, what they use it for, and what they actually find it useful for. From my own experience, data is sometimes collected as a matter of process or habit, filed away in neat tables in annual reports and other documents (i.e. technically used), but perhaps not fully utilised to support or inform change. How can we use feedback and assessment measures to shape our strategy, build relationships with academic staff, and guide our teaching practice? The first step requires looking at the kind of feedback and information we gather: are we collecting the right information?
As many of the sessions I deliver are 'one shot', I feel the scope for running pre- and post-tests is limited (though possible). If the post-test is set immediately after the session, questions are likely to be answered largely from recall. There is also arguably too small a window for any 'real' learning to have taken place, given that learning (hopefully!) continues over a sustained period after the session in a self-directed way. If the test is administered at a later point, other confounding factors will likely be introduced, which may preclude drawing any meaningful inferences about relationships or causation.
To a large extent, I think quantitative measures and tests like Project SAILS are too narrow to fully capture a continuum of competencies like information literacy. Being able to remember the name of a database or the order of elements in a standard citation does not really tell us whether a student understands how to find, evaluate or use information effectively. At best, it is an isolated snapshot of a moment in time, one that may itself be influenced by other variables.
The literature (Gross &amp; Latham have a recent study on this) shows that students' self-assessments and views of their abilities often differ markedly from reality. Consequently, asking students to rate their confidence or ability after a session is likely to overstate actual levels in many cases. Student satisfaction measures will tell us little, if anything, about learning, and may be biased by students' perceptions of an instructor's personality, the complexity or difficulty of the material covered, or other factors.
At the moment, I find formative assessment techniques, such as the one-minute paper and muddiest point activities, helpful for informing my future lesson plans and instructional approach (in fact, a recent paper by Gewirtz largely mirrors the structure of my current student feedback method). I use this kind of feedback to gain insight into the concepts students engage well with, and the areas that remain challenging and may need a different approach. Ultimately, I think a blend of measures is probably the most effective way to capture something as complex, covert, and difficult to measure as learning. Nearly all feedback can be useful to some extent, and how we actually use the data we collect is just as important as what we collect. If anyone finds the 'right' answer, let me know :)