Experiences from Chatbot Conversations in Automated Programming Assessment Systems: User Personas and Feedback Reliability
Introduction: This study examines chatbot interactions within automated programming assessment systems to uncover distinct user personas and to evaluate the reliability of ChatGPT-generated feedback.
Methodology:
- Data Extraction: Collect conversation metadata, linguistic features, sentiment cues, and engagement patterns.
- User Persona Analysis: Identify and classify behavioral profiles based on interaction styles.
- Performance Comparison: Evaluate student metrics before and after the integration of automated feedback.
- Synthesis: Combine findings to assess the educational impact and reliability of AI-based feedback.
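As a concrete illustration of the persona-analysis step, the extracted interaction features could feed a simple classifier that maps each student's conversation profile to a coarse behavioral persona. The sketch below is a minimal, rule-based example; the feature names, thresholds, and persona labels are illustrative assumptions, not results or definitions from the study:

```python
from dataclasses import dataclass

@dataclass
class ConversationProfile:
    """Hypothetical per-student aggregates extracted from chat logs."""
    messages_sent: int     # engagement volume
    avg_msg_length: float  # linguistic verbosity (characters per message)
    question_ratio: float  # fraction of messages phrased as questions

def classify_persona(p: ConversationProfile) -> str:
    """Assign a coarse behavioral profile from interaction features.

    Thresholds are illustrative placeholders; a real analysis would
    derive profiles from the data (e.g. via clustering).
    """
    if p.messages_sent < 3:
        return "minimal user"
    if p.question_ratio > 0.6:
        return "inquisitive user"
    if p.avg_msg_length > 120:
        return "verbose explainer"
    return "pragmatic user"

# Example with made-up feature values
profile = ConversationProfile(messages_sent=12, avg_msg_length=40.0, question_ratio=0.7)
print(classify_persona(profile))
```

In practice, such hand-set rules would only serve as a baseline; the study's persona identification would more plausibly come from clustering the same feature vectors.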