ARTIFICIAL INTELLIGENCE MISGAUGED: WHO IS TO BE INCULPATED?


by Prachi Tripathi and Zubia Rehan    2 September 2021

INTRODUCTION

Keeping in mind the future of medicine, which will you plump for: droid or human?

In the not-so-distant future, the question of whether medicine's conundrums will yield to a more coherent and less fallible Artificial Intelligence (hereinafter referred to as 'AI') is no small quandary. AI is being hailed as the spearhead approach in the field of medical science, and the whole world is eager to witness its integration into the healthcare system. Many technical virtuosos and healthcare experts take the view that AI can diagnose and treat a patient with greater precision and perfection than other medical professionals. Yet, although indefatigable efforts have been made to make AI an efficient contributor to the medical field, one question looms: what if it paves the way for tortious acts of medical negligence? This question must be addressed to dispel the haze of uncertainty that AI has created in this field.

TESTS TO DETERMINE THE LIABILITY UNDER TORTIOUS ACTS OF MEDICAL NEGLIGENCE

The question which now needs illumination is: who will bear responsibility for acts of medical wrongdoing committed by AI? Is it the clinician, the AI programmer, or the droid itself? This debate mirrors the concerns around the growing importance of AI in the coming future, and this piece examines how far the existing legal regime can answer it.

MEDICAL DERELICTION: ROAD BACK TO SOLITUDE

Medical dereliction is a subtype of the tort of negligence, whereby a medical practitioner who owes a duty of care to the patient breaches that duty by failing to act as a reasonably prudent medico and, in consequence, causes injury to the patient. A provider-patient relationship is a prerequisite to a medical dereliction action. If that bond exists, the next scrutiny is whether the patient voluntarily consented to the particular treatment and whether the medical care was properly discharged. AI adds a layer of complexity to medical malpractice cases, with little existing law available to guide evaluation and judgment.

In a medical negligence claim involving a treating physician who consults another physician, the plaintiff can establish liability against the consulting physician only if a provider-patient relationship existed between them. Without that bond, the consulting physician owes no legal duty to the plaintiff. AI could be considered analogous to a consulting physician. In the landmark case of Hill v. Kokosky, the treating physician telephoned two additional doctors for a consultation. The consulting physicians never met the patient or viewed her medical records; instead, they provided the treating physician with informal medical advice. According to the court, the consulting physicians never formed a physician-patient connection with the plaintiff and so owed her no obligation. A similar decision was reached in Ranier v. Grossman. If courts recognize AI as a consulting physician, it will owe no legal obligation to a patient, making medical negligence claims impossible. In addition, AI's involvement in a medical negligence case might change the standard of care to which clinicians must adhere. Even if courts decide that medical malpractice claims involving AI are permissible, it is unclear whether machine learning algorithms are capable of performing negligent acts. Some experts contend that because AI runs on a programmed system of algorithms, assessing the risks and outcomes of its acts may never amount to a breach of duty, as its findings are inherently reasonable by design. But if, as the saying goes, mistakes are a fact of life and it is the response to the error that counts, on what basis can we say that AI is not prone to commit any fallacies? Who will be held accountable for the medical wrongdoings of AI?

VICARIOUS LIABILITY: AN IMPUTATION AGAINST THE ‘AI’

Vicarious liability refers to that tortious principle whereby one person can be held lawfully liable for the acts of another, exhibited most commonly in the master-servant relationship. It rests on the maxims "qui facit per alium facit per se", meaning "he who acts through another does the act himself", and "respondeat superior", meaning "let the principal be held liable". Healthcare organizations can be held vicariously liable for the acts of their workers, including doctors or medicos, who commit clinical dereliction. By the same logic, these institutions might likewise be held vicariously liable for the acts of their machines, such as AI. In Clark v. Southview Hospital & Family Health Center, a young woman died owing to negligent emergency medical care for an asthma attack. The Supreme Court of Ohio held that the hospital would be accountable for the medico's negligence even though he was an independent contractor. In Clark, the court found that because the hospital had represented to the public that the doctors worked for the hospital, the hospital would be vicariously liable under the doctrine of apparent agency. Just as health care systems may be held vicariously liable for a physician's negligence, courts could likewise hold healthcare centres vicariously liable for injuries caused by their artificial intelligence systems. Whether AI would be viewed as a physician or a machine by the courts is an unsettled but fundamental question. True vicarious liability will arise only if AI is deemed analogous to an employee. The wrangle which remains is: can AI be treated as analogous to an employee, or will it be considered a machine with no human consciousness and, thus, no liability? This question must be answered to resolve the contention arising from the uncertain nature of AI's liability in a medical setup.

PRODUCT LIABILITY: POTENTIAL PERIL IN THE MEDICAL FIELD

Product liability describes the obligations attending the distribution of products: a manufacturer or retailer can be made liable for making or selling an unreasonably dangerous product. In Marcus v. Forest Laboratories, aggrieved parties filed a putative class action against Forest Laboratories, the manufacturer of the name-brand drug Lexapro, contending that the label exaggerated the product's efficacy in treating major depressive disorder in adolescents. Federal law impliedly pre-empted the plaintiffs' claims on the ground that the Federal Food, Drug, and Cosmetic Act barred Forest from unilaterally changing its FDA-approved label, and the defendants' liability was accordingly assessed under the rule of product liability. Under current product liability laws, the producers and developers of the AI automation and algorithms now being used in medicine may conceivably be at risk if the technology goes off centre. Liability could rest on the product liability principle that if the product, here the AI, causes injury, that harm is implied evidence of some defect within the algorithms or the technology. The rationale for imposing risk on the maker rests on the rule that the manufacturer should be held responsible for any mischief or harm brought about by its innovation. Yet applying a product liability theory to AI is convoluted. The developer or manufacturer of AI technology cannot reasonably foresee how the technology will behave once it is deployed in a genuine clinical setting. It would therefore be excessive to cast blame and recrimination on someone whose work was far removed from the actual operation of the technology in a medical setting. And because many organizations and individuals, such as designers, engineers, and developers, work in unison to create AI systems and set machine parameters, it is especially difficult to level accusations at only one individual. The smouldering question which arises is: will AI programmers be held liable for malpractice committed by machines or droids in the medical field solely on the principle of product liability, or is it time to weigh other factors as well when assigning culpability?

INFORMED CONSENT LIABILITY: A REQUISITE FOR INVASIVE MEDICAL PROCEDURES

While the debate is centred on AI's impact on traditional medical negligence, AI may also influence informed consent claims. An informed consent action is grounded in patient autonomy and is considered a species of medical negligence. In explaining the ambit of informed consent liability, courts have consistently held that doctors are legally bound to advise patients of any material information that might affect them, such as the risks, benefits, and alternatives to a proposed course of treatment. In Hurley v. Kirk, a case involving a laparoscopic hysterectomy, i.e., an invasive surgery to remove the uterus, the Supreme Court of Oklahoma held that the doctrine of informed consent requires a physician to obtain the patient's consent before delegating significant portions of a surgery the physician was engaged to perform, thereby subjecting the patient to a heightened risk of injury; the same reasoning would arguably extend to surgery performed with AI. At a basic level, there is an impediment to defining what exactly 'informed' entails in the context of a recommendation whose inner workings no one fully understands. Surveying the panorama of informed consent cases, Glenn Cohen identifies three types of cases that may be relevant to medical AI: "provider experience", where a physician fails to disclose the use of AI during the informed consent process in an effort to conceal inexperience; "substitute physicians", where the patient may be ignorant of the role of AI in a particular part of the treatment; and "pecuniary conflicts of interest", where a physician fails to reveal a personal pecuniary interest in the AI used in the patient's treatment. However, the question that arises is: is it the duty of physicians to warn patients under an "unavoidably unsafe products" approach, or can it be said that, because AI is bound by set programs and algorithms to make no error, the physician bears no informed consent liability?

CAN ‘AI’ BE DELINEATED AS A ‘PERSON’?

When the whole world was backed against the wall by the question of who bears the onus for medical sloppiness, some scholars asserted that machines capable of untrammelled ingenuity and of formulating their own propositions are more sensibly contemplated as persons than as mere machinery, and hence should bear the encumbrances themselves. In Quoine Pte Ltd v. B2C2 Ltd, the Singapore International Commercial Court set out significant guidance on how to apply the law of mistake in circumstances where legally binding contracts were performed by an automated contracting system without human intervention, and so the issue arises: can AI be treated as a person? The liability of machines cannot be recognized for now, as the prevailing interpretation of a legal person does not bring droids within its ambit. The question which therefore simmers is: can AI itself be held accountable merely on the basis of its apparent autonomy or liveliness, which is predominantly the outcome of the set of algorithms used in a particular droid?

FUTURE PROSPECTS ON THE QUESTION OF MEDICAL LIABILITY 

This entire wrangle can be resolved only through an appropriate legal route, which in the contemporary situation is utterly absent. The exigency, then, is to create the best of both worlds through the enactment of legislation. It is propounded that, to see which way the wind is blowing and to perceive a solution, the legislature should be invigorated to take charge of the situation and introduce legislation that can address the persisting unsettled question of liability. "Common enterprise liability" is a further proposition for resolving the dilemma: various scholars have suggested that everyone engaged in the enterprise should bear the accountability, which is favourable because the encumbrance or onus is apportioned among all involved. The next suggestion in line is to "modify the standard of care". This solution would customize the regulation of the standard of care, stipulating that medicos or physicians undertake the assessment of black-box algorithms; the obligation of the medico would extend not only to accessing and evaluating the algorithms but to validating them as well. With these recommendations or advisory guidelines, the smouldering question which has left the world in a tight corner can be settled. Representatives around the world should join hands and leave no stone unturned in addressing this question, which is of prime significance for the entire legal arena.

CONCLUSION

The world is witnessing an incessant storm of technological expansion, and AI is the most acclaimed evolution of this expanding regime. Droids substituting for medicos pose a challenge: who will bear the encumbrances in a case of medical dereliction? As expounded above, the question remains unresolved under the persisting legal framework. The exigency of the time demands a pertinent legal regime that can cut the mustard. The intercession of AI in the clinical arena is not something unknown to the world; it is a continuing advancement that, in the not-so-distant future, will take hold across the globe. So the paramount question of liability should be elucidated before it leads to a cataclysmic situation.