Recently, the American Medical Association (AMA) voted to adopt a proposal that would “more clearly define and control the use of Artificial Intelligence (AI) tools such as Generative Pre-trained Transformers (GPT) in general medical practice.” The need for this has been driven by the explosion of public and professional exposure to ChatGPT, which reached more than 100 million monthly users within two months of its public launch in November 2022, setting a world record as the fastest-growing web application. There have been other significant AI platform launches from tech giants like Facebook, Microsoft, and Google, but for now ChatGPT is by far the dominant one, so we’ll use it as shorthand for the kind of tool that lies at the heart of the AMA’s current concerns regarding the use of AI in healthcare.
Explain what ChatGPT does in ten words or less.
We asked ChatGPT that very question. Its answer is “ChatGPT generates human-like text responses based on given prompts or questions.” Please note the lack of words like accurate, truthful, reliable, dependable, and authentic.
The concise history of ChatGPT as a tool goes back only to 2018, when GPT-1 was built with a parameter count (the number of internal values, or weights, that the model adjusts as it learns) of 117 million. Over the next few years, subsequent releases greatly expanded the parameter counts: GPT-2 had 1.5 billion, and GPT-3 has 175 billion. The latest versions, 3.5 and 4, have been estimated to use more than one trillion.
ChatGPT uses that vast pool of fragments of information to answer questions, suggest actions, and write term papers and blog posts. (For the record, all our blogs are written by real people, like me, Henry, nice to meet you).
AI is man-made software that learns from experience, which, by the way, is also what we humans do. The power of AI lies in its ability to quickly search for patterns in enormous pools of existing data and find a set of suitable matches. It then runs a feedback loop that takes its responses and learns from the outcomes (machine learning) to improve its future searches. It’s like a powerful multiple-choice quiz: in the learning phase, people ask questions and then mark an answer as right or wrong, that feedback is fed into the process, and the software “learns” how the question should have been answered.
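To make the guess-and-feedback idea concrete, here is a deliberately tiny Python sketch. It is not how ChatGPT is actually trained; the candidate answers and the scoring rule are invented purely for illustration. The program guesses an answer, a person marks it helpful or not, and the scores are nudged so future guesses lean toward the answers people approved of.

```python
import random

# Made-up candidate answers with equal starting scores (illustration only).
candidate_answers = {
    "Take the medicine with food.": 1.0,
    "Take the medicine on an empty stomach.": 1.0,
}

def guess():
    # Pick an answer with probability proportional to its current score.
    answers = list(candidate_answers)
    weights = [candidate_answers[a] for a in answers]
    return random.choices(answers, weights=weights, k=1)[0]

def learn(answer, was_helpful):
    # The feedback loop: reward answers people accepted, penalize the rest.
    candidate_answers[answer] *= 1.5 if was_helpful else 0.5

# Simulate twenty rounds of question-and-feedback.
for _ in range(20):
    answer = guess()
    learn(answer, was_helpful=(answer == "Take the medicine with food."))

# After training, the highest-scoring answer is the one feedback favored.
print(max(candidate_answers, key=candidate_answers.get))
```

Notice what the sketch does not contain: any check that the favored answer is medically correct. It only learns which answer got positive feedback.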
One of the simplest uses of AI is in building a chatbot (which is what ChatGPT is), which automatically responds to user questions. A chatbot uses a predefined set of rules learned in training to recognize keywords in the question to identify what kind of help is needed. The chatbot then generates an appropriate response, and using machine learning, it improves over time as it reads the questioner’s responses, interacts with new customers, and receives more data.
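A rule-based chatbot of the kind described above can be sketched in a few lines. The keywords and canned replies below are invented for illustration; a real customer-support bot would have far more rules and would refine them from interaction data over time.

```python
# Toy keyword-matching chatbot: scan the question for known keywords
# and return the predefined reply, or hand off to a human.
RULES = {
    "refill": "You can request a refill from your order history page.",
    "shipping": "Orders are usually dispatched within two business days.",
    "dosage": "Please check the leaflet or ask your doctor or pharmacist.",
}

def chatbot_reply(question: str) -> str:
    text = question.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't understand. A human agent will be with you shortly."

print(chatbot_reply("How long does shipping take?"))
```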
What could possibly go wrong with the use of AI in healthcare?
Putting it bluntly, the AMA summarized its concerns by voting to “help protect patients against false or misleading medical information from artificial intelligence (AI) tools.” The fundamental problem that lies at the root of all fears about AI tools is that they are, at heart, guessing mechanisms. Asked a question, such a tool tries to match a “best fit” answer from the pool of data it holds, but that data has not been verified for either content or relevance.
Even more troubling is that there is no direct or implied liability to guarantee a correct answer. An article published by Monash University in Australia in March 2023 puts it succinctly in its title: “So sue me: Who should be held liable when AI makes mistakes?” As the article goes on to say, “Can AI make mistakes? Yes, it makes mistakes.” AI can be more prone to mistakes than humans because it relies on data that is often incomplete or inaccurate.
Liability in business and liability in medicine are two very different things. People and companies lose money on poor AI-based decisions all the time, and they can usually recover. Making a decision where lives are at stake is a different matter, which is why extreme caution is being taken before AI is allowed to play a significant role in diagnosis and treatment.
There are two areas of concern about the use of AI in healthcare that the AMA policy addresses. The first is the professional use of AI tools in patient treatment. In this regard, public tools like ChatGPT have minimal usability; the real concern is whether AI will deliver diagnostic and treatment platforms that start to make decisions that could affect people’s lives. As the AMA’s proposal says, “Unless a GPT’s medical advice is filtered through healthcare providers, it can endanger patients with inaccurate and misleading information.”
Some AI models can already pass licensing examination practice tests with high scores. Essentially, what they can do is statistically match symptoms with causes and produce the most likely diagnosis (a toy sketch of that kind of matching follows this paragraph). However, a diagnosis is only complete once every alternative has been explored, no matter how remote the possibility. Unexplained symptoms must be followed up just as rigorously as those for which AI can find a likely cause. That’s why doctors learn anatomy, physiology, biology, and psychology. Doctors learn to read patients’ facial expressions, body posture, eye movements, voice, and hundreds of other clues that are not “yes/no” responses to questions or readings from diagnostic tools. Doctors (at least the good ones) see their patients as whole people, not as data pools. AI can’t take a human view of a patient – it only sees data.
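Here is a minimal sketch of “statistically matching symptoms with causes.” The reference table and symptoms are made up, and real diagnostic models are vastly more sophisticated, but the same blind spot applies: a symptom that isn’t in the data simply doesn’t register.

```python
from collections import Counter

# Made-up reference table of conditions and their typical symptoms.
REFERENCE = {
    "common cold": {"cough", "runny nose", "sore throat"},
    "seasonal allergy": {"runny nose", "itchy eyes", "sneezing"},
    "influenza": {"fever", "cough", "body aches"},
}

def most_likely(symptoms: set) -> str:
    # Score each condition by how many of the patient's symptoms it shares,
    # then return the single best match - nothing else is considered.
    overlap = Counter({cond: len(symptoms & signs) for cond, signs in REFERENCE.items()})
    return overlap.most_common(1)[0][0]

print(most_likely({"cough", "fever"}))  # prints "influenza"
```

A symptom not listed in the table contributes nothing to any score, which is exactly the point made above: the model cannot pursue what its data doesn’t cover.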
The second problem, related to public large language models (LLMs) like ChatGPT, is that people can be fooled into believing that an answer they got from ChatGPT is truthful and reliable. As we said at the beginning, AI takes guesses and learns from its mistakes. In some cases, those “mistakes” are presented as plausible answers; the result can be wrong and misleading, but the questioner has no way of knowing until things start to go wrong. Self-medication, failure to seek proper medical care, self-diagnosis, and many other serious consequences can flow from public reliance on AI answers.
Follow the WHO and AMA
In May 2023, the World Health Organization issued a paper that called for “caution to be exercised in using artificial intelligence (AI) generated large language model tools (LLMs) to protect and promote human well-being, human safety, and autonomy, and preserve public health.” WHO goes on to stipulate its principles for ethics and governance of AI for health, which are:
- protect autonomy
- promote human well-being, human safety, and the public interest
- ensure transparency, explainability, and intelligibility
- foster responsibility and accountability
- ensure inclusiveness and equity
- promote AI that is responsive and sustainable.
Where does IsraelPharm fit in with AI concerns?
IsraelPharm uses technology to help us deliver a seamless experience to our customers. We invest in automating the systems that help us provide customer support. While we appreciate the gifts of tech, we are also aware of the power of person-to-person support. Using medication is a very personal process in which trust in the source of information, advice, and delivery is vital. IsraelPharm injects human interaction into every level of our processes, from creating our website with product information, through writing informative blogs, to accepting and vetting orders, dispatching them, and responding to comments and queries.