EDU 800 Week 11 Annotated Bibliography

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., & Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412-433. doi:10.1111/j.1083-6101.2007.00331.x

This study investigates the impact of peer feedback as an instructional strategy to increase the quality of students’ online postings. In addition, the authors investigated the impact of feedback by comparing the quality of students’ postings, based on Bloom’s taxonomy, from pre-course to post-course. While peer feedback has been demonstrated to support students’ learning in traditional classrooms, little is known about its efficacy in online discussions. To address this gap, the authors examined students’ perceptions of the value of giving and receiving peer feedback, specifically related to the quality of discussion postings in an online course.

Results suggest that the quality of students’ postings was maintained through the use of peer feedback despite students’ preferences for instructor feedback. Students noted that peer feedback could be valuable and, more importantly, described how giving peer feedback not only reinforced their learning but enabled them to achieve a higher level of understanding. Effective discussions progress beyond descriptive content to include both reflection and critical thinking.

The article posits that discussions in online environments are supported by the socio-cognitive learning perspective, specifically Vygotsky’s (1978) Zone of Proximal Development. Embedded in this perspective is the idea that understanding results from personal interactions in social contexts. However, the authors note that neither interaction nor discussion alone is enough to guarantee that students will reach the critical level of learning desired.

The article provides a comprehensive introduction that identifies notable literature emphasizing the importance of interaction and reinforces that most online discussions consist of sharing and comparing information, with little evidence of critical analysis or higher-order thinking. The introduction also outlines the role of feedback in instruction, identifying seven essential functions that feedback performs (Nicol & Macfarlane-Dick, 2006). The role of feedback in online environments is addressed as well, and the authors discuss the advantages and challenges of using peer feedback.

The authors emphasize that limited research has examined the role or impact of feedback in online learning environments in which learners construct their knowledge based on prior experiences and peer interactions. Moreover, very few, if any, studies have examined the impact of using peer feedback to shape the quality of discourse in an online course. To address this gap, the authors pose three research questions in their mixed-method exploratory study. First, they attempt to determine the impact of peer feedback in an online environment and its ability to maintain or improve the quality of discussion postings. Second, they identify students’ perceptions of the value of receiving peer feedback and how it compares to receiving instructor feedback. Lastly, they identify students’ perceptions of the value of giving peer feedback.

From a presentation perspective, the authors describe their mixed methodology by providing a methodological overview, the role of the researchers, participants, contextual procedures, and data collection. The authors used a case study framework to conduct an in-depth study regarding the use of peer feedback in an online environment that was situated within a semester-long, graduate-level course. Using both descriptive and evaluative approaches, they examined participants’ perceptions of the value of the peer feedback process and evaluated the impact of the process on the quality of students’ postings.

The research team included two faculty members and seven graduate students (one female, six male). They collaboratively identified the specific research focus and created the data collection instruments (e.g., surveys, interview protocols) and interview analysis codes. The participants included 15 graduate students (10 female, 5 male) enrolled in an online technology integration course during the spring semester of 2005. In a typical week, students were expected to post at least one response to the discussion question (DQ) and one response to another student’s post.

For this study, feedback was defined as 1) a numerical score (0–2) based on Bloom’s taxonomy and 2) descriptive comments supporting the assigned score and relating specifically to the quality of the post. A scoring rubric, adapted from Ertmer and Stepich (2004), provided the instructor, students, and researchers with a concrete tool for determining the quality of thinking embedded within online postings. Quantitative and qualitative data were collected through participant interviews, scored ratings of students’ weekly discussion postings, and responses to entry and exit survey questionnaires.

Survey results captured students’ overall perceptions of giving and receiving feedback, while interviews provided insights into individual perceptions and personal experiences with the feedback process. Changes in posting scores over the semester were used to answer the research question regarding the impact of peer feedback on the quality of students’ postings. The researchers used Non-numerical Unstructured Data Indexing Searching and Theorizing (NUD*IST) qualitative analysis software to help identify recurring themes and patterns across the interview data. Lastly, the authors describe validity and reliability considerations within their methodology, which adds credibility to the study.

In summarizing the study’s results, although participants’ perceptions of the importance of feedback in an online course significantly increased from the beginning to the end of the semester, students continued to believe that instructor feedback was more important than peer feedback. Despite no quantitative improvement in the quality of students’ postings during the peer feedback process, interview data suggested that participants valued the peer feedback process and benefited from both giving and receiving peer feedback.

In my experience as a graduate student, this is one of the most comprehensive studies on feedback I have reviewed. The article also connects its findings to best practices and lessons learned. For practical purposes, the article prompted me to focus on ensuring that my institution’s online courses meet specific criteria; the quality of discourse in an online course could be evaluated against relevance, depth, clarity, engagement, diversity, and respect. Additionally, I have found that, regardless of age, most students do not openly provide peer feedback. From a social and communication perspective, however, structured engagements help build students’ confidence. Reinforcing this task at opportune moments and at a moderate frequency gives students experience, allowing them to reflect on and improve future feedback deliveries.
