References

Abe D, Inaji M, Hase T et al. A prehospital triage system to detect traumatic intracranial hemorrhage using machine learning algorithms. JAMA Netw Open. 2022; 5:(6) https://doi.org/10.1001/jamanetworkopen.2022.16393

Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021; 8:(2)e188-e194 https://doi.org/10.7861/fhj.2021-0095

Baştanlar Y, Özuysal M. Introduction to machine learning. In: Yousef M, Allmer J (eds). Totowa (NJ): Humana Press; 2014

Chew KS, van Merrienboer JJG, Durning SJ. Perception of the usability and implementation of a metacognitive mnemonic to check cognitive errors in clinical setting. BMC Med Educ. 2019; 19:(1) https://doi.org/10.1186/s12909-018-1451-4

Cotton D, Cotton P, Shipway JR. Chatting and cheating: ensuring academic integrity in the era of ChatGPT. 2023; https://doi.org/10.35542/osf.io/mrz8h

Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019; 6:(2)94-98 https://doi.org/10.7861/futurehosp.6-2-94

Dehouche N. Plagiarism in the age of massive generative pre-trained transformers (GPT-3). Ethics Sci Environ Polit. 2021; 21:17-23 https://doi.org/10.3354/esep00195

Feretzakis G, Karlis G, Loupelis E et al. Using machine learning techniques to predict hospital admission at the emergency department. J Crit Care Med. 2022; 8:(2)107-116 https://doi.org/10.2478/jccm-2022-0003

FitzGerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017; 18:(1) https://doi.org/10.1186/s12910-017-0179-8

Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Bohr A, Memerzadeh K. London: Elsevier; 2020 https://doi.org/10.1016/B978-0-12-818438-7.00012-5

González-Gonzalo C, Thee EF, Klaver CC et al. Trustworthy AI: closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res. 2022; 90:(101034) https://doi.org/10.1016/j.preteyeres.2021.101034

Green J, Ewings S, Wortham R, Walsh B. New ‘nature of call’ telephone screening tool, employed prior to NHS Pathways triage, can accurately identify those later treated for out-of-hospital cardiac arrest: analysis of sensitivity and specificity using routine ambulance service data. Emerg Med J. 2019a; 36:(1) https://doi.org/10.1136/emermed-2019-999.12

Green J, Ewings S, Wortham R, Walsh B. The NHS pathways (NHSP) medical call triage system and new ‘nature of call’ telephone screening tool, employed prior to NHSP, can accurately identify high acuity patients: analysis of sensitivity and specificity using routine ambulance service data. Emerg Med J. 2019b; 36:(1)e5-e6 https://doi.org/10.1136/emermed-2019-999.13

Haleem A, Javaid M, Qadri MA, Suman R. Understanding the role of digital technologies in education: a review. Sustainable Operations and Comput. 2022; 3:275-285 https://doi.org/10.1016/j.susoc.2022.05.004

Hammond MEH, Stehlik J, Drakos SG, Kfoury AG. Bias in medicine: lessons learned and mitigation strategies. JACC Basic Transl Sci. 2021; 6:(1)78-85 https://doi.org/10.1016/j.jacbts.2020.07.012

Hannun AY, Rajpurkar P, Haghpanahi M et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature Med. 2019; 25:(1)65-69 https://doi.org/10.1038/s41591-018-0268-3

Hayashi Y, Shimada T, Hattori N et al. A prehospital diagnostic algorithm for strokes using machine learning: a prospective observational study. Sci Rep. 2021; 11:(20519)

Itoe Mote NJ, Karadas G. The impact of automation and knowledge workers on employees' outcomes: Mediating role of knowledge transfer. Sustainability. 2022; 14:(3) https://doi.org/10.3390/su14031377

Kang D, Cho K, Kwon O et al. Artificial intelligence algorithm to predict the need for critical care in prehospital emergency medical services. Scand J Trauma Resusc Emerg Med. 2020; 28:(1) https://doi.org/10.1186/s13049-020-0713-4

Kirubarajan A, Taher A, Khan S, Masood S. Artificial intelligence in emergency medicine: A scoping review. J Am Coll Emerg Physicians Open. 2020; 1:(6)1691-1702 https://doi.org/10.1002/emp2.12277

Korteling JE, van de Boer-Visschedijk GC, Blankendaal RAM, Boonekamp RC, Eikelboom AR. Human-versus artificial intelligence. Front Artif Intell. 2021; https://doi.org/10.3389/frai.2021.622364

Kruse O. The origins of writing in the disciplines. Written Comm. 2006; 23:(3)331-352 https://doi.org/10.1177/0741088306289259

Masic I. Medical decision making - an overview. Acta Inform Med. 2022; 30:(3)230-235 https://doi.org/10.5455/aim.2022.30.230-235

Mencl F, Wilber S, Frey J, Zalewski J, Maiers JF, Bhalla MC. Paramedic ability to recognize ST-segment elevation myocardial infarction on prehospital electrocardiograms. Prehosp Emerg Care. 2013; 17:(2)203-210 https://doi.org/10.3109/10903127.2012.755585

Miles J, Turner J, Jacques R, Williams J, Mason S. Using machine-learning risk prediction models to triage the acuity of undifferentiated patients entering the emergency care system: a systematic review. Diagnost Prognost Res. 2020; 4:(1)1-12 https://doi.org/10.1186/s41512-020-00084-1

NHS England. The ambulance response programme review. 2018. https://tinyurl.com/yc2ccbzr (accessed 27 April 2023)

NHS England. Delivery plan for recovering urgent and emergency care services. 2023. https://tinyurl.com/yvrbu9jk (accessed 27 April 2023)

Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: A call for open science. Patterns. 2021; 2:(10) https://doi.org/10.1016/j.patter.2021.100347

Owoc ML, Sawicka A, Weichbroth P. Artificial intelligence technologies in education: benefits, challenges and strategies of implementation. IFIP Adv Info Comm Technol. 2021; 37-58 https://doi.org/10.1007/978-3-030-85001-2_4

Phillips JS. Paramedics' perceptions and experiences of NHS 111 in the south west of England. J Para Pract. 2020; 12:(6)227-234 https://doi.org/10.12968/jpar.2020.12.6.227

Poon AIF, Sung JJY. Opening the black box of AI-Medicine. J Gastroenterol Hepatol. 2021; 36:(3)581-584 https://doi.org/10.1111/jgh.15384

Rivenbark JG, Ichou M. Discrimination in healthcare as a barrier to care: experiences of socially disadvantaged populations in France from a nationally representative survey. BMC Public Health. 2020; 20:(1) https://doi.org/10.1186/s12889-019-8124-z

Rogers PL. Barriers to adopting emerging technologies in Education. J Educ Comput Res. 2000; 22:(4)455-472 https://doi.org/10.2190/4uje-b6vw-a30n-mce5

Sharples M. Automated essay writing: an AIED opinion. Int J Artif Intell Educ. 2022; 32:1119-1126 https://doi.org/10.1007/s40593-022-00300-7

Siebelt M, Das D, Van Den Moosdijk A, Warren T, Van Der Putten P, Van Der Weegen W. Machine learning algorithms trained with pre-hospital acquired history-taking data can accurately differentiate diagnoses in patients with hip complaints. Acta Orthopaedica. 2021; 92:(3)254-257 https://doi.org/10.1080/17453674.2021.1884408

Spangler D, Hermansson T, Smekal D, Blomberg H. A validation of machine learning-based risk scores in the prehospital setting. PLoS One. 2019; 14:(12) https://doi.org/10.1371/journal.pone.0226518

Takeda M, Oami T, Hayashi Y et al. Prehospital diagnostic algorithm for acute coronary syndrome using machine learning: a prospective observational study. Sci Rep. 2022; 12:(14593) https://doi.org/10.1038/s41598-022-18650-6

Tamminen J, Kallonen A, Hoppu S, Kalliomäki J. Comparison of prehospital national early warning score and machine learning methods for predicting mortality. Resuscitation. 2019; 142:(s1) https://doi.org/10.1016/J.RESUSCITATION.2019.06.045

Tay SW, Ryan P, Ryan CA. Systems 1 and 2 thinking processes and cognitive reflection testing in medical students. Can Med Educ J. 2016; 7:(2)e97-e103 https://doi.org/10.36834/CMEJ.36777

Turner J et al. Ambulance Response Programme: Evaluation of Phase 1 and Phase 2. Final Report. 2017. https://tinyurl.com/msyhjv33 (accessed 27 April 2023)

Vyas DA, Eisenstein LG, Jones DS. Hidden in plain sight - reconsidering the use of race correction in clinical algorithms. N Engl J Med. 2020; 383:(9)874-882 https://doi.org/10.1056/NEJMms2004740

Wadden JJ. Defining the undefinable: the black box problem in healthcare artificial intelligence. J Med Ethics. 2021; 48:(10)764-768 https://doi.org/10.1136/medethics-2021-107529

Williamson B. The hidden architecture of higher education: building a big data infrastructure for the ‘smarter university’. Int J Educ Technol Higher Educ. 2018; 15:(1) https://doi.org/10.1186/s41239-018-0094-1

The age of artificial intelligence

02 May 2023
Volume 15 · Issue 5

Artificial intelligence (AI) is gradually integrating into various sectors such as finance, transportation, energy and education. Although AI is in its infancy in healthcare, it is already being used in many ways, including medical imaging, chatbots, diagnosis, treatment and telephone triage in the ambulance setting. The introduction of AI has given rise to ethical concerns, particularly about how data are gathered and used (Gerke et al, 2020).

The key attributes of AI are its ability to analyse and compare vast datasets and predict likely outcomes, hence its integration into patient triage and assessment systems. For AI systems to be genuinely impartial and autonomous, the datasets they draw on must be equally impartial and representative of the population they serve (González-Gonzalo et al, 2022). Norori et al (2021) highlight both statistical and social bias within healthcare datasets.

Addressing bias within AI data

Statistical bias occurs when the distribution of a dataset does not accurately represent the distribution of the population, which causes algorithms to produce inaccurate outputs, typically at the expense of lower socioeconomic groups. Medicine has long been recognised as susceptible to biases that can be challenging to detect and measure, with numerous reports of discrimination against vulnerable groups (FitzGerald and Hurst, 2017; Rivenbark and Ichou, 2020); these are ultimately the data that AI draws from. Vyas et al (2020) suggest that healthcare algorithms often offer no rationale for why racial and ethnic differences might exist, and that the data gathered may be outdated; both further perpetuate the risk of algorithmic bias.
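A simple illustration of statistical bias is a training dataset whose demographic mix diverges from the population it is meant to represent. The sketch below is purely hypothetical: the group names and proportions are invented for illustration, not drawn from any real healthcare dataset.

```python
# Hypothetical check for statistical bias: compare the demographic mix of a
# training dataset against known population proportions. All figures are
# illustrative only.
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
dataset_counts = {"group_a": 8200, "group_b": 1500, "group_c": 300}

total = sum(dataset_counts.values())
for group, pop_share in population.items():
    sample_share = dataset_counts[group] / total
    # A large gap flags under- or over-representation before training begins
    gap = sample_share - pop_share
    print(f"{group}: dataset {sample_share:.2%} vs population {pop_share:.2%} "
          f"(gap {gap:+.2%})")
```

Here, group_c makes up 15% of the population but only 3% of the dataset, so any model trained on it would see comparatively few examples from that group; a check like this is the point at which such a skew can be caught and corrected.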

When problems arise in healthcare, transparency and accountability play a critical role in maintaining a reliable system. Kang et al (2020) conducted a study forecasting the needs of patients in an accident and emergency (A&E) department using AI. The study revealed a poor understanding of how the AI system reached its decisions and processed its input variables, which raised concerns around validity, reliability and accountability should an issue arise.

Adopting open AI systems would enable the data to be peer reviewed. AI in healthcare is often described as a ‘black box’ because it generates outputs without any inherent mechanism for interrogating the underlying decision-making process (Wadden, 2021). This opacity is a distinguishing feature of AI systems that interpret large datasets through complex algorithms, and it presents significant challenges to the validation, regulation and ethical oversight of AI in healthcare (Poon and Sung, 2021).

AI for ambulance telephone triage

Triage is typically considered a qualitative process through which the acuity of a person's presentation is assessed. In 2017, ambulance services across the UK adopted the Ambulance Response Programme (ARP) framework (Turner et al, 2017; NHS England, 2018). While the framework achieves high sensitivity in detecting out-of-hospital cardiac arrest (Green et al, 2019a) and other high-acuity presentations, it has been argued that this is achieved through over-triage of low-acuity presentations (Green et al, 2019b). Similar problems have been identified with NHS 111 triage for calls identified as suitable for an ambulance response (Phillips, 2020). This disconnect between triage and actual clinical outcomes has the potential to result in significant operational demand and financial burden.

This is where AI comes in. AI tools used by prehospital clinicians have already been shown to be accurate in detecting traumatic intracranial haemorrhage (Abe et al, 2022), stroke (Hayashi et al, 2021) and acute coronary syndrome (Takeda et al, 2022). In addition, models exist to differentiate hip complaints (Siebelt et al, 2021) and even to predict mortality (Spangler et al, 2019; Tamminen et al, 2019). Further, triage models have been developed to assess patients presenting to emergency departments (Miles et al, 2020; Feretzakis et al, 2022).

Artificial intelligence is in its infancy in the context of paramedicine but is already being used for telephone triage

Machine learning, a subtype of AI, involves the algorithmic generation of models with sample data to predict specified attributes (Baştanlar and Özuysal, 2014). A supervised learning approach could create a predictive model to associate transcribed emergency telephone triage with patient outcomes. By configuring the training process, the model could predict the likelihood of the need for a patient to be conveyed to the hospital, referred to another service, or even for resuscitation. A general aggregate acuity score could be generated by incorporating a range of outcomes.
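To make the idea concrete, a minimal supervised learner could associate words in call transcripts with a conveyance outcome. The transcripts, labels and simple perceptron below are entirely illustrative; a real triage model would be trained on far larger datasets with far more sophisticated methods.

```python
# Minimal sketch of supervised learning on transcribed triage calls, assuming
# each transcript is labelled with a binary outcome (1 = conveyed to hospital).
# All data and the model itself are illustrative, not a real system.
from collections import defaultdict

training_data = [
    ("crushing chest pain and short of breath", 1),
    ("collapsed not breathing", 1),
    ("minor cut to finger bleeding controlled", 0),
    ("feeling dizzy after standing up quickly", 0),
    ("severe chest pain radiating to arm", 1),
    ("small graze on knee after fall", 0),
]

def features(text):
    # Bag-of-words representation of a transcript
    return set(text.lower().split())

# Train a simple perceptron: on each misprediction, nudge the weights of the
# words in that transcript towards the observed outcome
weights = defaultdict(float)
for _ in range(10):  # a few passes over the training data
    for text, label in training_data:
        score = sum(weights[w] for w in features(text))
        predicted = 1 if score > 0 else 0
        if predicted != label:
            for w in features(text):
                weights[w] += 1.0 if label == 1 else -1.0

def predict_conveyance(text):
    """Return 1 if the model predicts conveyance to hospital, else 0."""
    return 1 if sum(weights[w] for w in features(text)) > 0 else 0

print(predict_conveyance("chest pain and breathless"))  # prints 1
```

Swapping the binary label for referral, resuscitation or other outcomes, or combining several such outputs, is how the aggregate acuity score described above could in principle be assembled.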

ARP was developed to address the growing and changing needs of urgent and emergency care (UEC) with an aim of reducing operational inefficiencies (Turner et al, 2017). With hospital bed occupancy resulting in significant emergency department congestion, the national UEC strategy has shifted to increasing the amount of clinical assessment at the triage stage to support referrals to other services (NHS England, 2023). Now is the time to introduce a validated, AI-driven decision support tool as a key enabler of this strategy.

AI in patient-facing care

Prehospital clinicians are required to make complex care decisions in the absence of definitive tests, based primarily on a patient's subjective history, physical assessment and observations (Masic, 2022). This process is susceptible to cognitive bias, which hinders critical thinking and prevents due consideration of the available information (Hammond et al, 2021). Cognitive biases are closely associated with System 1 thinking, a process based on intuition and experience (Tay et al, 2016). Metacognition, or conscious awareness of one's own thoughts, can partially mitigate this effect (Chew et al, 2019); however, our brains' cognitive capacity is finite and our acquired knowledge degrades over time (Korteling et al, 2021). In contrast, AI algorithms can be trained on vast datasets to recognise patterns and make correlations that would be challenging or impossible for humans to find (Kang et al, 2020).

A retrospective cohort study by Kang et al (2020) demonstrated that AI was able to identify critically unwell patients from the details routinely collected by prehospital clinicians. The algorithm also demonstrated superior sensitivity and specificity compared with established triage tools such as the National Early Warning Score. Prehospital electrocardiogram (ECG) interpretation is another area where AI may prove beneficial, with Mencl et al (2013) finding that paramedics frequently overlook acute ECG changes. AI may be well placed to address this problem, with Hannun et al (2019) demonstrating a deep neural network that outperformed numerous cardiologists in ECG rhythm strip interpretation. Similarly, a study in Taiwan showed that AI could reliably analyse 12-lead ECGs for the presence of ST-segment elevation myocardial infarction (STEMI), reducing the time from first contact to reperfusion therapy in that system (Chen et al, 2022).
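Sensitivity and specificity, the metrics on which these triage tools are compared, reduce to two simple ratios over a confusion matrix. The counts below are invented for illustration and are not taken from any of the studies cited.

```python
# Sensitivity and specificity from confusion-matrix counts. The figures used
# in the example call are hypothetical, not study data.
def sensitivity(tp, fn):
    # Proportion of genuinely critical patients the tool correctly flags
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of non-critical patients the tool correctly leaves unflagged
    return tn / (tn + fp)

# Hypothetical triage results: 90 critical patients flagged, 10 missed,
# 850 non-critical patients correctly passed over, 50 false alarms
print(f"Sensitivity: {sensitivity(90, 10):.2f}")   # prints Sensitivity: 0.90
print(f"Specificity: {specificity(850, 50):.2f}")  # prints Specificity: 0.94
```

The tension described earlier in the ARP discussion is visible in these two numbers: pushing sensitivity towards 1.0 by flagging more calls tends to drive specificity down, which is precisely the over-triage of low-acuity presentations reported by Green et al (2019b).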

AI has shown the potential to improve patient outcomes through advancements in diagnostic accuracy. As research continues, the number of applications for AI in frontline healthcare will increase (Kirubarajan et al, 2020). The delegation of cognitively demanding tasks, such as statistical analysis, fact memorisation or clinical reasoning, to AI may ultimately lead to a reduction in specialties, compelling clinicians to adopt a broader scope of practice (Davenport and Kalakota, 2019). Conversely, integrating AI in healthcare may liberate clinicians to engage more deeply with the distinctly human facets of healthcare, such as empathy and compassion (Bajwa et al, 2021).

AI in paramedic education

Digital technologies can be a powerful tool for improving educational outcomes if used responsibly and effectively (Haleem et al, 2022). They provide access to information, resources, and learning materials, enabling teachers to assess student performance more accurately and provide tailored instruction (Rogers, 2000). Additionally, technologies used in learning and research can provide a platform for critical thinking, problem-solving, and reflection (Owoc et al, 2021).

Higher educational institutions must invest in the infrastructure needed to use and integrate advanced AI technologies (Williamson, 2018), and modify their pedagogical approach to address the implications of AI platforms for paramedic education, employment and the workforce. In the 21st century, the automation of knowledge work has become one of the most significant drivers of competitiveness for firms (Itoe Mote and Karadas, 2022). To successfully implement an AI strategy, it is important to establish a comprehensive plan that outlines the goals and the means to measure and manage them.

One of the main adaptations will be how students are assessed. Essays have been an essential part of academic assessment since the 19th century, with European universities using them to assess students' understanding and ability to construct arguments (Kruse, 2006). AI essay-writing systems can be both beneficial and detrimental for students. On the one hand, these systems can help students to become better writers and learn the fundamentals of good essay writing (Cotton et al, 2023). On the other hand, these systems can be misused by students to cheat on their assignments (Dehouche, 2021). These systems must be used responsibly and with appropriate oversight to ensure they are not abused.

AI essay generators have become increasingly difficult to ignore as a means of completing student assignments. Rather than an escalating contest in which plagiarisers and assessors attempt to outdo each other with ever more advanced software, Sharples (2022) suggests a different approach: invigilated exams and contextualised written assignments that AI cannot generate. This approach could allow teachers to accommodate students who use AI technology while still ensuring that students learn the material. Moreover, it could provide a reliable mechanism for ensuring the quality of assessment, as it removes the potential for plagiarism and AI manipulation.

AI has the potential to provide academic institutions with improved efficiency, cost savings, enhanced decision-making, and new insights into research and teaching. AI can also help to facilitate collaboration and knowledge sharing, allowing for a more comprehensive understanding of academic topics. Ultimately, AI has the potential to revolutionise the way prehospital care is conducted and create a more effective and efficient system.