The rise of technologies such as ChatGPT has thrust artificial intelligence into the spotlight throughout 2023, and health care is no exception.
"With the increasing availability of health-care data and the rapid progress in analytic techniques, whether machine learning, logic-based or statistical, AI tools could transform the health sector," the World Health Organization said when it launched a set of regulatory recommendations in October.
As we move into 2024, here are some key AI developments and cautions that will be top of mind for Canadian experts in the new year and beyond.
PERSONALIZED PATIENT CARE
One of the most exciting potential developments in health-care AI is harnessing the ability of a computer model to process and interpret "multi-modal" data about a patient, said Roxana Sultan, chief data officer and vice-president of health at the Toronto-based Vector Institute, which is dedicated to AI research.
Right now, AI models can make a diagnosis based on one or two pieces of information, such as an X-ray, Sultan said. That's achieved by training the model on "tons and tons of X-ray images" so it learns to recognize certain diagnoses.
"That is fantastic. But that is (only) one source of information," Sultan said.
In the "near future," she said, machine learning will advance so that AI can take a "much more comprehensive look at patient health."
In addition to a patient's X-ray, for example, AI would be able to process other data, including doctors' notes, lab results, the medications the patient is taking and genetic information.
That ability will not only play a critical role in diagnosing a patient, but also in coming up with a more personalized treatment plan, Sultan said.
"When you have models that understand the complex interplay between a person's genetics and a person's medications and all the different diagnostic tests that you run on that patient, you pull those together into a picture that allows you to not only understand what's happening in the moment, but also to kind of plan ahead that, if I applied this treatment … what is the more likely outcome for this particular person?"
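Neither Sultan nor the article describes a particular system, but a rough sketch of the "multi-modal" idea might look like the Python below: synthetic stand-ins for imaging features, lab results, medications and genetics are fused into one feature matrix, a single model is trained on them, and that model is then asked the "what if I applied this treatment?" question by flipping the treatment column. Every dataset, feature name and number here is an illustrative assumption, not a real clinical model.

```python
# Illustrative sketch only: synthetic data standing in for real, consented patient records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical "modalities" for each patient, already turned into numbers:
xray_embedding = rng.normal(size=(n, 16))      # features from an imaging model
lab_results = rng.normal(size=(n, 5))          # e.g. blood panel values
medications = rng.integers(0, 2, (n, 8))       # one flag per drug class
genetics = rng.integers(0, 2, (n, 10))         # presence of selected variants
treatment = rng.integers(0, 2, (n, 1))         # candidate treatment given (0/1)

# Fuse all modalities into one feature matrix; treatment is the last column.
X = np.hstack([xray_embedding, lab_results, medications, genetics, treatment])
# Synthetic outcome loosely tied to a few features, just so the example runs end to end.
y = (xray_embedding[:, 0] + lab_results[:, 1] + treatment[:, 0]
     + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC on held-out patients:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# "What if" query: the same patient, with and without the candidate treatment.
patient = X_test[0].copy()
for t in (0, 1):
    patient[-1] = t
    print(f"treatment={t}: predicted chance of good outcome =",
          model.predict_proba(patient.reshape(1, -1))[0, 1])
```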
Russ Greiner, who holds a fellowship with the Alberta Machine Intelligence Institute, agreed.
"The standard medical practice used to be one size fits all," said Greiner, who is also a professor of computing science at the University of Alberta.
"Now you realize that there's huge differences amongst individuals: different genes, different metabolites, different lifestyle factors, all of which are influential (on health)," he said.
Machine learning means computers can analyze hundreds or thousands of characteristics about a patient, more than a human clinician could possibly process, and find patterns "that allow us to figure out that for this characteristic of patients, you get treatment A, not treatment B," Greiner said.
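Greiner's "treatment A, not treatment B" pattern-finding can be sketched as a toy example, again built entirely on assumptions: a hidden rule decides which synthetic patients respond better to treatment A, and a shallow decision tree trained on dozens of characteristics recovers a human-readable version of that rule.

```python
# Illustrative sketch: learn which (synthetic) patient characteristics favour treatment A over B.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n, n_features = 2000, 50   # many characteristics per patient, as Greiner describes
X = rng.normal(size=(n, n_features))

# Hidden (made-up) rule: patients with high characteristic 3 and low characteristic 7 do better on A.
responds_better_to_a = (X[:, 3] > 0.5) & (X[:, 7] < 0.0)
best_treatment = np.where(responds_better_to_a, "A", "B")

# A shallow tree recovers a readable approximation of that pattern.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, best_treatment)
print(export_text(tree, feature_names=[f"characteristic_{i}" for i in range(n_features)]))
```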
CLINICAL TRIALS
AI's ability to go through enormous amounts of data will also save "tens of thousands, probably hundreds of thousands, of human hours" for researchers analyzing the results of clinical trials, said Sue Paish, CEO of DIGITAL, one of five federally funded "global innovation clusters" across the country.
"AI basically can evaluate billions of pieces of data in a fraction of a second," said Paish, who is based in Vancouver.
That means that new medications could be evaluated for safety and efficacy much faster, she said.
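The article doesn't describe the software trial teams actually use, but the time savings Paish points to come largely from vectorized analysis. The hypothetical Python sketch below summarizes efficacy and safety for a synthetic 100,000-row trial in a single pass; the column names and effect sizes are invented for illustration.

```python
# Illustrative sketch: summarising a synthetic trial's outcomes in one vectorised pass.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 100_000  # rows of synthetic trial data; real pipelines handle far more

trial = pd.DataFrame({
    "arm": rng.choice(["drug", "placebo"], size=n),
    "symptom_score_change": rng.normal(loc=-1.0, scale=2.0, size=n),
    "adverse_event": rng.random(n) < 0.03,
})
# Pretend the drug works slightly better, so the summary has something to show.
trial.loc[trial["arm"] == "drug", "symptom_score_change"] -= 0.4

summary = trial.groupby("arm").agg(
    participants=("arm", "size"),
    mean_improvement=("symptom_score_change", "mean"),
    adverse_event_rate=("adverse_event", "mean"),
)
print(summary)
```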
IMPROVING QUALITY OF DATA
Whether AI is being used for clinical care or for health research, the results it generates can only be as good as the data it's fed, experts agree.
"Garbage in, garbage out," said Greiner.
"If I train on faulty data, the best I can do is to build a model as good as that data, which is problematic."
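Greiner's "garbage in, garbage out" point is usually operationalized as data validation before any training. A minimal, made-up example of the kinds of checks involved, assuming a small table of hypothetical patient records:

```python
# Illustrative sketch: basic checks before training, since a model can only be as good as its data.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 2, 3, 4],
    "age": [34, 51, 51, 130, None],        # 130 is implausible, None is missing
    "systolic_bp": [118, 145, 145, 92, 260],  # 260 is implausible
})

problems = {
    "duplicate rows": int(records.duplicated().sum()),
    "missing values": int(records.isna().sum().sum()),
    "implausible ages": int(((records["age"] < 0) | (records["age"] > 120)).sum()),
    "implausible blood pressures": int(((records["systolic_bp"] < 50) | (records["systolic_bp"] > 250)).sum()),
}
for check, count in problems.items():
    print(f"{check}: {count}")
```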
One of the priority areas is to make sure AI is getting data from reliable sources, rather than just indiscriminately taking publicly available information, said Sultan.
ChatGPT, for example, is a technology designed to "essentially scrape the internet," she said.
"The problem with that … is first and foremost, it's not always reliable and true," Sultan said.
"And second of all, it is riddled with biases and problematic perspectives that get reinforced when you train something that can't make those judgments. It just reads it all, absorbs it and spits it back out for you."
One way to improve the quality of the medical analyses AI generates, Sultan said, is to train it on medical textbooks rather than the internet.
"I think the ChatGPTs of the world will seem very caveman, like very rudimentary (in the future)," she said.
Researchers are also developing AI algorithms to find bias in health information, including racial or gender discrimination, Sultan said.
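The article doesn't name the algorithms researchers are developing, but one common, simple form of bias check is a disaggregated audit: compare a model's error rate across demographic groups and flag large gaps. The Python below runs such an audit on deliberately skewed synthetic predictions; every column name and threshold is an assumption for illustration only.

```python
# Illustrative sketch: audit a model's error rate across demographic groups on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 10_000
results = pd.DataFrame({
    "group": rng.choice(["group_a", "group_b"], size=n),
    "true_label": rng.integers(0, 2, size=n),
})
# Synthetic predictions made deliberately worse for one group, so the audit flags something.
noise = np.where(results["group"] == "group_b", 0.25, 0.10)
flip = rng.random(n) < noise
results["predicted"] = np.where(flip, 1 - results["true_label"], results["true_label"])

error_rates = (results["predicted"] != results["true_label"]).groupby(results["group"]).mean()
print(error_rates)
print("largest gap between groups:", float(error_rates.max() - error_rates.min()))
```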
PATIENT SELF-MANAGEMENT
Another key area where AI will grow is in developing technologies that help patients manage their own health, experts agree.
For example, wearable AI has already been developed to help patients with heart failure self-monitor, Sultan said.
AI has also been used "quite effectively" in remote areas of Canada to manage some patients' wounds when they weren't able to access care during the pandemic, said Paish.
The AI technology attaches to a patient鈥檚 cellphone, takes a 3D image of a wound and assesses whether it鈥檚 infected or healing well.
That information is then sent to a doctor or nurse, who can advise the patient remotely on how to care for the wound.
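The article doesn't identify the product or its underlying model, so the Python below sketches only the shape of the workflow Paish describes: a stand-in heuristic takes the place of a trained model, and the report packaged for the clinician uses an entirely hypothetical format.

```python
# Illustrative sketch of the workflow only: a stand-in "model" scores a wound image,
# and the result is packaged for a remote clinician to review. No real device or model is used.
import numpy as np

def assess_wound(image: np.ndarray) -> dict:
    """Pretend model: mean red-channel intensity stands in for an infection score."""
    redness = float(image[:, :, 0].mean()) / 255.0
    return {"infection_risk": round(redness, 2), "flag_for_clinician": redness > 0.6}

# A synthetic 3-channel image standing in for the phone's wound photo.
photo = np.random.default_rng(4).integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

report = assess_wound(photo)
message = {"patient_id": "example-123", **report}  # what gets sent to the doctor or nurse
print(message)
```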
"I think we're going to see more and more examples of where AI is actually supporting patient health by reducing the need for a human being to take all the steps in assessment and delivery of health-care services," Paish said.
That will take pressure off overburdened doctors, nurses and hospitals and allow them to provide in-person care when it's most needed, she said.
ETHICS AND REGULATION
"One of the big flashing yellow lights in the application of AI is making sure that there is very thorough and thoughtful evaluations of how AI is being trained," said Paish.
"Public policy is going to be extremely important."
Dr. Theresa Tam, Canada's chief public health officer, said it's critical to develop regulations and safeguards that address ethical issues such as patient privacy.
"I think this is a really opportune time to, you know, more systematically look at … what governance we have to put in place in order to responsibly use AI," Tam said in a recent interview.
Ensuring data is managed in a way that protects privacy must be "interwoven" with AI development, Sultan said, noting that other legal and ethical ramifications are "uncharted territory."
"We're all trying to figure out what makes the most sense. So issues like consent, issues like data ownership and data custodianship, those are all going to shift in terms of the paradigm that we've looked at them through in the past," she said.
Nicole Ireland, The Canadian Press