There is a reason Artificial Intelligence (AI) is the next big thing. It holds the promise of doing so many things that it will make the computer revolution of the '60s look like a mere ripple in the pond. It is probably not an overstatement to say that AI will change everything, including your medical care.
AI has already shown great promise in medical applications. In detecting early lung cancer on chest X-rays, for example, it outperforms its human radiology counterparts: it is more accurate, it is faster, and it does not get tired or distracted. Although the research is not yet complete, it can probably replicate this success in many other forms of radiological interpretation.
It can compare a patient’s symptoms, lab results and imaging with those of hundreds of thousands of other patients and suggest diagnoses. It can monitor prescriptions and warn of drug interactions. It can analyze patient charts and identify those at greatest risk for sepsis or other hospital disasters. It can control pumps and devices that deliver medications, such as insulin to diabetics.
It is already changing life for some doctors, who are using an AI program to record and organize their patient charting. According to those doctors, the program saves them charting time that can now be spent with the patient, has improved communication with their patients, and has produced good chart notes for inclusion in the patient's electronic medical record.
Of course there are warts. There are always warts. Users must always follow the old maxim, "Trust but verify." AI can sound so authoritative that it easily appears accurate when it is anything but. It has been known to misinterpret medical terms, add medications to the chart that the patient is not taking, and misapply algorithms. Doctors must be careful to check AI's work.
AI programs must be trained. This involves exposing them to data on thousands or hundreds of thousands of patients, and it highlights a problem noted at the dawn of the computer age: garbage in, garbage out. An AI program is only as good as the quality and quantity of the data it was trained on. Sometimes the data is poorer than we would hope, or there is too little of it to train the program adequately. Some AI programs, for example, have been found to exhibit racial bias, reflecting biases in the data they were given to learn from.
Some creators of AI programs may take shortcuts and skimp on training, with predictable results. This will be a significant problem for the foreseeable future, until the bad programs are weeded out and the reliable creators are identified. The federal government is attempting to address the issue by requiring sellers of AI programs to disclose how their programs were trained and tested.
One question that is already being discussed is the effect of AI on medical malpractice litigation. Many malpractice cases arise because of errors in diagnosis. If a doctor using an AI program reaches the wrong diagnosis because the program misled her and suit is brought, who should be liable? Should it be the doctor? Should it be the developer of the program? Should it be the distributor of the program? Should it be all of them? How much of a defense should it be for a doctor to say that an AI program misled her? All of these are interesting questions to which there are as yet no answers.
As patients, there is little we can do to protect ourselves from doctors or hospital personnel who are too trusting of the accuracy of a given AI program. We will not be interacting with the AI program directly; the doctor or hospital staff will. But it is our health that will be harmed when the results of a rogue program are blindly accepted. Cross your fingers and keep them that way.