References

Alexander B, Adams Becker S, Cummins M, Hall Giesinger C. Digital literacy in higher education, part II: an NMC Horizon project strategic brief. 2017. https://tinyurl.com/37a8w8xh

College of Paramedics. Paramedic curriculum. 2024. https://tinyurl.com/mrxdwxyv

Cope B, Kalantzis M. The things you do to know: an introduction to the pedagogy of multiliteracies. In: Cope B, Kalantzis M (eds). A pedagogy of multiliteracies: learning by design. London: Palgrave Macmillan; 2015. https://doi.org/10.1057/9781137539724_1

Cotton D, Cotton P, Shipway JR. Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Charlottesville (VA): Centre for Open Science; 2023. http://dx.doi.org/10.35542/osf.io/mrz8h

Department for Education. Generative artificial intelligence in education call for evidence. 2023. https://tinyurl.com/5dxy5y3t

Information Commissioner's Office. Artificial intelligence (AI) and data protection. 2023. https://tinyurl.com/2f2j85cn

Koh KH. Authentic assessment. Oxford: Oxford Research Encyclopedia of Education; 2017. https://doi.org/10.1093/acrefore/9780190264093.013.22

Kumar R, Eaton SE, Mindzak M, Morrison R. Academic integrity and artificial intelligence: an overview. In: Eaton SE (ed). Second handbook of academic integrity. Singapore: Springer Nature; 2023. https://doi.org/10.1007/978-3-031-39989-3_153

Integrating large language models into higher education: guidelines for effective implementation. 2023. https://doi.org/10.3390/cmsf2023008065

McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ. 2018; 46:(2)165-193 https://doi.org/10.1080/00933104.2017.1416320

National Centre for Computing Education. Summary of envisioning AI for K–12: what should every child know about AI?. 2024. https://tinyurl.com/2ssnxawm

Perkins M Academic integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. J University Teach Learn Pract. 2023; 20:(2) https://tinyurl.com/4jr35e3w

Russell Group. Russell Group principles on the use of generative AI tools in education. 2023. https://tinyurl.com/yc6ab8tt

Sparrow J. Digital fluency: preparing students to create big, bold problems. 2018. https://tinyurl.com/4nw7u53m

University College London. Using generative AI (GenAI) in learning and teaching. 2023. https://tinyurl.com/985uudc8

University and College Union. Workload survey data report. 2022. https://tinyurl.com/5dynbyp6

Innovation and integrity: AI in paramedic education

02 August 2024
Volume 16 · Issue 8

The College of Paramedics (2024) recently released the updated version of its paramedic curriculum, which aligns with several key regulatory and professional documents to ensure high-quality paramedic education in the UK and compliance with essential standards. The curriculum outlines the learning outcomes a student paramedic must achieve in order to register and to practise safely and effectively.

Learning outcomes in paramedic curricula are generally assessed through a variety of methods to ensure that students have gained the requisite knowledge, skills and attitudes to become proficient paramedics. These methods may include written assignments, practical demonstrations, simulations, clinical placements and multiple-choice examinations, among others. By employing diverse assessment strategies, educators can comprehensively evaluate students' progress and readiness to enter the paramedic profession.

However, the increasing sophistication and accessibility of artificial intelligence (AI) technologies pose significant challenges to the authenticity and integrity of certain assessments in paramedic curricula.

An authentic assessment effectively evaluates a student's true intellectual abilities and depth of understanding by requiring them to complete tasks that showcase higher-order thinking skills and the ability to solve complex problems (Koh, 2017). In written assessments, the process of researching, organising and critically thinking through the topic is just as important as the essay itself: an effective essay demonstrates the student's ability to engage with the subject matter, analyse information and communicate clearly, which is precisely what authentic assessment seeks to capture.

However, the rise of large language models (LLMs) poses significant challenges to the authenticity of written assessments. LLMs, such as GPT, can generate highly coherent and contextually relevant text, making it difficult to distinguish between human-written and AI-generated essays (Cotton et al, 2023; Perkins, 2023). This means that students could potentially use LLMs to complete their essay assignments, bypassing the crucial process of researching, organising, and critically engaging with the subject matter.

The use of LLMs in essay assessments undermines the purpose of authentic assessment, as it becomes challenging to evaluate a student's true intellectual abilities and depth of understanding when the task can be completed by an AI system (Perkins, 2023). This raises concerns about the validity and reliability of written assessments in the face of advanced language models (Kumar et al, 2023).

This paradigm shift in how written assessments can be produced has introduced a new dimension to the assessment process. It is no longer sufficient to evaluate only the final outcome; the process itself must also be taken into consideration. By focusing on the process as well as the outcome, educators can better assess a student's genuine understanding and critical thinking skills—even in the presence of advanced language models. Different techniques are being developed to help educators assess this process. It is crucial during this development phase, however, that there is transparency between educators and students regarding the use of LLMs in essay writing. Open communication and clear guidelines are essential for maintaining academic integrity and ensuring that students receive the support they need to develop their knowledge.
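One illustrative way of evidencing the process as well as the outcome is to ask students to submit successive drafts and review how the text evolved between them. The sketch below is hypothetical—the draft strings and the idea of comparing submissions are illustrative, not an established marking tool—and uses only Python's standard `difflib`:

```python
import difflib

# Hypothetical successive drafts by the same student; in practice these
# would come from a version history or a submission system.
draft_1 = "Paramedics assess patients. They give treatment."
draft_2 = ("Paramedics systematically assess patients using a primary survey, "
           "then deliver treatment guided by clinical guidelines.")

# A unified diff makes the revision visible to the marker, offering some
# evidence of iterative engagement with the topic.
diff_lines = list(difflib.unified_diff(
    draft_1.splitlines(),
    draft_2.splitlines(),
    fromfile="draft_1",
    tofile="draft_2",
    lineterm="",
))
print("\n".join(diff_lines))

# A crude similarity ratio between drafts; an abrupt wholesale rewrite
# might prompt a supportive conversation, not an accusation.
ratio = difflib.SequenceMatcher(None, draft_1, draft_2).ratio()
print(f"similarity: {ratio:.2f}")
```

Such a comparison cannot prove authorship, but it shifts attention from the final artefact to the trajectory that produced it, which is the point of process-focused assessment.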

Educators should engage in proactive discussions with students about the potential benefits and drawbacks of using LLMs in the writing process (Licht, 2023). This includes addressing concerns about plagiarism, the importance of original thought, and the role of AI as a tool rather than a replacement for human creativity and critical thinking. Offering lectures, workshops, or tutorials on how to effectively use LLMs as a writing aid while maintaining originality and critical thinking skills is a critical starting place. The use of AI in essay writing requires educators to be well-informed about how these tools can and should be used to demonstrate students' abilities accurately. This, in turn, creates further issues that need to be addressed.

The first is that some LLMs sit behind a paywall, meaning that access to their full capabilities requires a paid subscription. This could disadvantage students who cannot afford the paid version, creating a digital divide in the classroom. Unless the educational institution provides equal access for all students, this disparity in access to advanced AI tools could significantly affect the fairness and equity of the learning environment. If some students can generate higher-quality content with less effort because they are able to pay for premium features, the playing field becomes uneven and the integrity of the assessment process is undermined.

The second issue is the need to teach students foundational skills for developing critical thought, with an emphasis on the ability to prompt and critique (Cope and Kalantzis, 2015; Alexander et al, 2017).

In the context of using AI tools for essay writing, students must be taught how to effectively prompt the AI system to generate content that aligns with their intended purpose (Sparrow, 2018). This requires a deep understanding of the subject matter, as well as the ability to craft clear, specific, and well-structured prompts. By learning how to prompt the AI effectively, students can better leverage these tools to support their writing process and enhance their understanding of the topic.
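What a 'clear, specific and well-structured' prompt looks like can be made concrete. The sketch below is purely illustrative—the prompt wording and the heuristic checklist are assumptions, not tied to any particular LLM product or vendor API—but it shows the elements (task, audience, constraints) students can be taught to include:

```python
# Illustrative contrast between a vague prompt and a structured one.
# No specific LLM product or API is assumed here.

vague_prompt = "Write about sepsis."

structured_prompt = """You are assisting a first-year student paramedic.
Task: outline the pre-hospital recognition of sepsis in adults.
Audience: student paramedics preparing a 500-word reflective summary.
Constraints:
- Structure the answer as three short sections: signs, red flags, actions.
- Flag any point that should be checked against current UK guidelines.
- Do not invent statistics or references."""

def prompt_quality_checklist(prompt: str) -> dict:
    """Crude heuristic: does the prompt state a task, an audience and
    constraints? These are the elements students are taught to include
    when prompting an AI tool."""
    lowered = prompt.lower()
    return {
        "states_task": "task:" in lowered,
        "states_audience": "audience:" in lowered,
        "states_constraints": "constraints:" in lowered,
    }

print(prompt_quality_checklist(vague_prompt))        # all False
print(prompt_quality_checklist(structured_prompt))   # all True
```

A checklist like this is no substitute for subject knowledge—crafting the constraints in the structured prompt already requires understanding what matters about the topic—but it gives students a reviewable scaffold for their prompting practice.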

Equally important is the skill of critiquing the AI-generated content (McGrew et al, 2018). Students must be able to critically evaluate the output provided by the AI tool, assessing its relevance, accuracy, and coherence. They should be taught to identify potential biases, inconsistencies, or irrelevant information in the generated content, and to make necessary revisions or adaptations to ensure that the final essay meets the desired standards.

To foster these skills, educators should:

  • Provide explicit instruction and guidance on how to create effective prompts for AI tools, including examples of well-structured and poorly structured prompts
  • Encourage students to critically analyse AI-generated content and discuss their findings with peers and teachers
  • Incorporate exercises and assignments that require students to revise and refine AI-generated content to better align with the intended message and purpose
  • Engage students in discussions about the limitations and potential biases of AI tools and the importance of human judgment in the writing process (Alexander et al, 2017).

By emphasising the development of these foundational skills, educators can help students to become more critical and reflective thinkers, who are better equipped to navigate the challenges and opportunities presented by AI-assisted learning.

In most universities, there is an ongoing debate about the emergence of LLMs. The divide typically falls between ‘AI-optimists’, who view LLMs as a significant opportunity to enhance student learning and performance, and ‘AI-sceptics’, who worry that students might not acquire essential knowledge, potentially leading to post-graduation challenges (Licht, 2023).

The UK has made significant strides towards establishing itself as a global leader in AI. A key component of this strategy involves measures to support and educate people about AI, starting from a young age. Initiatives such as the collaboration with the National Centre for Computing Education (2024) aim to engage children with AI, fostering early interest and proficiency in this critical field.

Educational institutions across the UK are increasingly incorporating AI into their curricula. Schools that use International Baccalaureate qualifications and members of the Russell Group of universities are notable examples. These institutions actively encourage the use of AI tools among pupils and students, though the extent of integration varies. For instance, University College London (UCL) (2023) has outlined specific guidelines for using generative AI in learning and teaching, underscoring a structured approach to integrating these technologies in educational settings. Similarly, the Russell Group has developed principles on the use of generative AI tools in education, highlighting best practices and ethical considerations (Russell Group, 2023).

The integration of AI in education is still in its early stages, posing challenges for policymakers, universities, and students due to limited evidence of its benefits. Current research indicates that AI has the potential to improve education through various means. These include administrative tasks such as student record-keeping and progress tracking, enhancing accessibility and inclusion by addressing diverse student needs, and supporting teaching and learning through personalised approaches (Department for Education, 2023).

However, continuous professional development (CPD) for educators on the latest AI technologies and their regulatory requirements is essential. Ongoing training would need to be provided so that educators remain up to date with best practice in AI and data protection. Institutions are required to maintain detailed documentation of their data processing activities, including the use of AI (Information Commissioner's Office, 2023). This documentation should include data protection impact assessments, records of consent, and evidence of compliance with General Data Protection Regulation (GDPR) principles. Institutions must be able to demonstrate accountability and compliance to regulatory bodies upon request.

The University and College Union (2022) survey highlights significant workload challenges in higher education and further education, with staff often working multiple unpaid days each week. AI could offer a potential solution by automating administrative tasks, thus reducing the burden on educators and allowing them to focus more on teaching and professional development. However, the successful integration of AI poses its own challenges. The current lack of capacity and resources within these institutions might hinder the effective implementation of AI technologies. This could further strain staff, who would need to manage the transition to AI systems while continuing to cope with their already unmanageable workloads.

Educators must begin embracing AI to prepare paramedic students for a future where understanding the benefits and risks of AI is essential. Teachers are now in a position similar to the Red Queen in Lewis Carroll's ‘Through the Looking-Glass’, needing to run as fast as they can simply to keep pace with a rapidly evolving field. Adopting AI is crucial for keeping up with technological advancements and ensuring that students are well-equipped for the future.