
Summaries
In this study, we aim to examine how students’ awareness of the feedback
provider’s identity might influence their evaluation of feedback content, particularly
in the context of algorithm aversion and preference for human expertise.
With these goals in mind, the study seeks to address the following research questions:
- First, can students distinguish between AI-generated and human-created feedback (simplified Turing Test), and what factors influence their ability to make this distinction (RQ1)?
- Second, how do students’ perceptions of the same feedback content change after revealing the feedback provider’s identity (RQ2)?
- And third, do students hold a negative bias towards AI as a feedback provider (RQ3)?
From Tanya Nazaretsky, Paola Mejia-Domenzain, Vinitra Swamy, Jibril Frej, Tanja Käser in the text AI or Human? Evaluating Student Feedback Perceptions in Higher Education (2024)
To summarize, our study analyzing 457 student responses in actual
learning contexts gave us a detailed and accurate understanding of student
responses to human and AI-generated feedback, a depth that synthetic scenarios
might not achieve. We found that students’ feedback evaluations were influenced
by their knowledge of the feedback provider’s identity. Students tended to rate
human feedback slightly higher after being informed about the provider, whereas
AI-generated feedback was rated lower, especially regarding Genuineness where
the decrease was significant. Furthermore, the results of the Turing Test had
a notable correlation with feedback perception. Students who failed the Turing
Test rated AI-generated feedback higher than human feedback, while those who
passed the test preferred human-generated feedback. A significant finding of the
study was the influence of feedback provider identity on the perceived credibility
of the feedback. Humans as feedback providers were consistently rated as more
credible compared to AI. This underscores the prevailing preference for human
feedback in educational settings and highlights the complexities of integrating
AI tools into educational environments.
From Tanya Nazaretsky, Paola Mejia-Domenzain, Vinitra Swamy, Jibril Frej, Tanja Käser in the text AI or Human? Evaluating Student Feedback Perceptions in Higher Education (2024)
Feedback plays a crucial role in learning by helping individuals understand and improve their performance. Yet, providing timely, personalized feedback in higher education presents a challenge due to the large and diverse student population, often resulting in delayed and generic feedback. Recent advances in generative Artificial Intelligence (AI) offer a solution for delivering timely and scalable feedback. However, little is known about students’ perceptions of AI feedback. In this paper, we investigate how the identity of the feedback provider affects students’ perception, focusing on the comparison between AI-generated and human-created feedback. Our approach involves students evaluating feedback in authentic educational settings both before and after disclosing the feedback provider’s identity, aiming to assess the influence of this knowledge on their perception. Our study with 457 students across diverse academic programs and levels reveals that students’ ability to differentiate between AI and human feedback depends on the task at hand. Disclosing the identity of the feedback provider affects students’ preferences, leading to a greater preference for human-created feedback and a decreased evaluation of AI-generated feedback. Moreover, students who failed to identify the feedback provider correctly tended to rate AI feedback higher, whereas those who succeeded preferred human feedback. These tendencies are similar across academic levels, genders, and fields of study. Our results highlight the complexity of integrating AI into educational feedback systems and underline the importance of considering student perceptions in AI-generated feedback adoption in higher education.
From Tanya Nazaretsky, Paola Mejia-Domenzain, Vinitra Swamy, Jibril Frej, Tanja Käser in the text AI or Human? Evaluating Student Feedback Perceptions in Higher Education (2024)
This conference paper mentions ...
People: Berkeley J. Dietvorst, John Hattie, Duri Long, Brian Magerko, Cade Massey, Joseph P. Simmons
Terms: algorithm aversion, education (Bildung)
This conference paper presumably does not mention ...
Terms not mentioned: Generative Pretrained Transformer 3 (GPT-3), GMLS & Schule, Textgeneratoren-Verbot
Beat and this conference paper
Beat added this conference paper to Biblionetz only within the last six months. Beat owns no physical copy, but a digital one. A digital version is available on the internet. So far, only a few objects in Biblionetz cite this work. Beat has also mentioned this conference paper in blog postings.