Back in 2012, Thomas Reinsfelder suggested that students who attend a one-on-one consultation with a research librarian after writing a first draft show a greater improvement in the quality of sources used in the final paper than students who do not meet with a librarian.
This statement is based on the premise that personal research consultations that include the analysis and progressive monitoring of citations can be quite effective in improving the qualitative outcome of continuous assessments produced by undergraduate students.
The rationale here is straightforward. Individual meetings allow for greater attention to detail, consideration of specific academic needs, and the ability to address students’ assignment-focused concerns. Librarians have a unique learning opportunity here, as they can find out how students select and use information sources. They can also monitor and review the progression of individual works, i.e. from initial assignment draft to final product. The citation analysis element focuses on the quality and appropriateness of sources cited – rather than type and/or format – within the context of a student’s early draft paper and the final pre-submission version.
It is difficult to quantitatively measure the effectiveness and impact of research consultations enjoyed by individual students. For example, Donegan (cited in Reinsfelder, 2012) compared test results and identified little difference in the research skills displayed by students who received individual research support vis-à-vis those who merely availed of group-based instruction. Others (Gale and Evans; Williamson, Blocker, and Gray, cited in Reinsfelder, 2012) note that individual research consultations were perceived as a positive and effective library service.
Measuring citation quality
It is important to identify adequate criteria and a solid process for rating the quality of an assignment bibliography presented by a student. Through the ‘objective’ measurement of a draft paper, one hopes to improve the quality and appropriateness of information sources used in the final version. Common measures include:
1] quantity of sources,
2] format or type of source (digital/analogue; grey literature, journals, monographs),
3] currency,
4] variety,
5] relevance to the topic under discussion,
6] authority/legitimacy/quality of information used (e.g. basing the judgement of quality on the reputation of the source, i.e. publisher and author; scholarly vs. popular),
7] consistency in citation formatting.
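To make the rubric concrete, criteria like those above could be encoded as a simple scoring function. Here is a minimal sketch in Python, assuming a hypothetical point scheme for three of the criteria (peer review as a proxy for authority, publication year for currency, topical fit for relevance) – the weights and thresholds are illustrative only and are not drawn from Reinsfelder’s study:

```python
# Hypothetical rubric for scoring cited sources.
# The criteria mirror the list above; the point values are illustrative only.

def score_source(source: dict) -> int:
    """Return a 0-4 quality score for one cited source."""
    score = 0
    if source.get("peer_reviewed"):    # authority: scholarly vs. popular
        score += 2
    if source.get("year", 0) >= 2008:  # currency: recent enough for the topic
        score += 1
    if source.get("on_topic"):         # relevance to the assignment topic
        score += 1
    return score

def score_bibliography(sources: list) -> float:
    """Average per-source score, a crude proxy for bibliography quality."""
    if not sources:
        return 0.0
    return sum(score_source(s) for s in sources) / len(sources)

# Invented draft vs. final bibliographies for one student.
draft = [
    {"peer_reviewed": False, "year": 1999, "on_topic": True},
    {"peer_reviewed": False, "year": 2011, "on_topic": False},
]
final = [
    {"peer_reviewed": True, "year": 2010, "on_topic": True},
    {"peer_reviewed": True, "year": 2011, "on_topic": True},
]

print(score_bibliography(draft))  # 1.0
print(score_bibliography(final))  # 4.0
```

A real rubric would of course need all seven criteria and rater guidelines, but even a toy version like this makes the draft-to-final comparison mechanical and repeatable.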
Rating scale reliability
It is important to consider that the development of any type of assessment scale requires considerable time and effort. When applied in the field, ‘objective’ rating scale measurements are coloured by individuals’ (subjective) assessments. This, in turn, can lead to incongruous outcomes when the scale is tested at scale for validity and reliability. Consequently, continuous fine-tuning and re-assessment of scale variables is required to ensure successful application (see e.g. the efforts behind the READ Scale and Project SAILS).
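One standard reliability check for such a scale is inter-rater agreement: have two raters score the same set of bibliographies and compare. A minimal sketch using Cohen’s kappa, which corrects raw agreement for chance – the ratings below are invented for illustration, and this is not the specific procedure used by the READ Scale or Project SAILS:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: inter-rater agreement corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal score distribution.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    pe = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical raters scoring the same ten bibliographies on a 1-3 scale.
a = [1, 2, 2, 3, 1, 2, 3, 3, 2, 1]
b = [1, 2, 1, 3, 1, 2, 3, 2, 2, 1]
print(round(cohens_kappa(a, b), 2))  # 0.7
```

Values near 1.0 indicate the scale descriptors are tight enough for raters to apply consistently; low values suggest the scale variables need the kind of fine-tuning described above.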
The good news is that, in this rating scale scenario, one-on-one consultations with a librarian during the paper-writing process did result in higher-quality sources being used in the final paper than in the draft paper. The comparison of relevant test results, through the application of the above rating scale, showed a statistically significant difference.
Crucially, no statistically significant difference was noted in the quality of sources used in the draft and final papers of students in the control group, who did not enjoy a consultation (one might even use the horrible phrase "an intervention...") with a librarian.
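The draft-versus-final comparison behind claims like these is typically a paired test on per-student scores. A minimal sketch of a paired t-statistic, computed from scratch with the standard library – the rubric scores below are invented for illustration and are not data from the study:

```python
import math
from statistics import mean, stdev

def paired_t(draft_scores, final_scores):
    """Paired t-statistic for per-student differences (final - draft)."""
    diffs = [f - d for d, f in zip(draft_scores, final_scores)]
    n = len(diffs)
    # t = mean difference / standard error of the differences.
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Invented rubric scores for five students: draft paper vs. final paper.
draft = [1.0, 2.0, 1.5, 2.5, 1.0]
final = [3.0, 3.5, 2.5, 4.0, 2.0]

print(round(paired_t(draft, final), 2))  # 7.48
```

The resulting statistic would then be checked against the t-distribution with n-1 degrees of freedom; a consultation group showing a large t while the control group’s t stays near zero is exactly the pattern described above.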
Faculty who supported this project noted that students who enjoyed a one-on-one research consultation with a librarian significantly improved the ultimate outcome of their work. In particular, students developed a better understanding of the different types of sources out there and their appropriate use within the academic research/writing context. Another interesting perception was that students were encouraged to develop new ways of thinking about research assignments.
The bottom line is that strategically partnering with the library for assistance and expertise is a no-brainer approach to aiding faculty and student success. The use of rating scales (in this case for rating the quality of sources used by students) can be helpful in measuring and evidencing the success of such partnerships.
However, the establishment of such a service necessitates that expectations are carefully managed and calibrated. Research consultations are time- and energy-consuming for library staff. It is also important to state that not every student on campus can avail of such a focused service, due to obvious staffing constraints.
Source and further reading:
Reinsfelder, T. (2012). Citation Analysis as a Tool to Measure the Impact of Individual Research Consultations. College & Research Libraries, 73(3), 263-277.
Mackey, T. P., & Jacobson, T. (2010). Collaborative information literacy assessments: Strategies for evaluating teaching and learning. New York: Neal-Schuman Publishers, Inc.