Profile


PhD Researcher
University of Innsbruck
Department of Computer Science

ICT Building Room 3M02
Technikerstraße 21a
A-6020 Innsbruck
Austria - Europe

Email: Firstname.Secondname [AT_NOSPAM] uibk.ac.at

Research Interests

Available Theses Topics

I provide guidance on topics suitable for seminar papers (SE), bachelor's (BSc) theses, and master's (MSc) theses, primarily within the disciplines of computer science (CS), software engineering (SE), and information systems (IS). I mentor students from the University of Innsbruck and various other European institutions. Below is a list of potential thesis topics; should any of them interest you, please reach out.

In the realm of automated programming assessments, unit tests play a pivotal role in determining the correctness and efficiency of submitted code. While these tests can indicate where the code fails, they often fall short in providing qualitative feedback on why a particular piece of code failed or how to rectify the mistake. By integrating the capabilities of OpenAI's GPT API, this system seeks to bridge that feedback gap. The AI-driven system will examine failing code in conjunction with the associated unit-test failure, attempting to ascertain the underlying mistake. The feedback will then be enriched with detailed insights, offering suggestions or pinpointing errors, and so help the programmer understand and correct the mistake. This approach promises not only to make feedback more informative but also to support the learning process, ensuring that students do not merely identify their errors but truly understand them, fostering deeper learning and better coding practices. In summary, the thesis will address a significant limitation of traditional automated programming assessments by leveraging the advanced capabilities of a large language model, offering a more holistic, informative, and educational feedback system for programmers.
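A system of this kind could, for instance, assemble the failing submission and the unit-test output into a single prompt for the model. The sketch below illustrates this in Python; the function names and prompt wording are illustrative assumptions, not part of any existing system, and the call via the official `openai` client is only outlined.

```python
# Minimal sketch: combine a failing submission with its unit-test output
# into one prompt for an LLM. Function names and prompt wording are
# illustrative assumptions, not an existing system's API.

def build_feedback_prompt(code: str, test_failure: str) -> str:
    """Assemble student code and the unit-test failure into one prompt."""
    return (
        "A student's code failed an automated assessment.\n\n"
        "Submitted code:\n" + code + "\n\n"
        "Unit-test failure:\n" + test_failure + "\n\n"
        "Explain the likely mistake and suggest how to fix it, "
        "without writing the full solution."
    )

def request_feedback(client, code: str, test_failure: str) -> str:
    """Send the prompt to a chat model (assumes the `openai` client library)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "user",
             "content": build_feedback_prompt(code, test_failure)},
        ],
    )
    return response.choices[0].message.content
```

A real deployment would add error handling, rate limiting, and safeguards against the model simply revealing the solution.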
This study sets out to analyze the impact of ChatGPT on student performance. By comparing student results before and after the introduction of ChatGPT, the research aims to determine whether the feedback and guidance provided by ChatGPT positively or negatively affect students' understanding and problem-solving capabilities in programming tasks.
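One way to operationalise such a before/after comparison is a two-sample test on assessment scores from the two cohorts. The sketch below computes Welch's t-statistic using only the Python standard library; the cohort scores are invented purely for illustration.

```python
# Sketch: Welch's t-statistic for comparing two independent cohorts
# (scores before vs. after a tool's introduction). The data below are
# invented for illustration only.
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    standard_error = math.sqrt(va / na + vb / nb)
    return (mean(sample_a) - mean(sample_b)) / standard_error

# Hypothetical exam scores (percent) before and after ChatGPT's introduction.
before = [62, 70, 55, 68, 74, 60]
after = [71, 78, 66, 80, 69, 75]
t = welch_t(after, before)  # positive t: the "after" cohort scored higher
```

A full analysis would of course also report degrees of freedom and a p-value, and control for confounds such as cohort composition and task difficulty.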
The review aims to present a comprehensive picture of the current state of research at the intersection of AI and education. It will shed light on the real-world impacts of integrating models like ChatGPT into APAS, providing educators, researchers, and developers with a clearer understanding of the benefits, challenges, and future possibilities.
This research delves into the multifaceted effects of Automated Programming Assessment Systems (APAS) on both educators and students within computer science education. It seeks to explore the benefits, challenges, and unintended consequences that emerge from automating the process of evaluating programming tasks, shedding light on how APAS transforms the pedagogical landscape.
This research aims to delve into the capabilities of large language models (LLMs) like OpenAI's GPT series in aiding the creation of exercises for Automated Programming Assessment Systems (APAS). By examining how LLMs can generate, modify, or enrich programming problems, the study aspires to unlock novel methodologies to keep APAS content diverse, relevant, and aligned with evolving educational needs.
This research focuses on evaluating the reliability and validity of feedback provided by ChatGPT when used in programming contexts. By examining the consistency, accuracy, and relevance of ChatGPT's responses to coding queries and problems, the study intends to determine the system's efficacy and trustworthiness as an automated feedback tool in educational programming environments.

Publications