There is a moment in the hacker movie “The Net” that no longer feels like fiction. Dennis Miller’s Dr. Alan Champion lies in a hospital bed, dependent on machines that are presumed to be neutral, clinical, and trustworthy. Somewhere outside the room, unseen and unaccountable, the system is altered. No alarms. No confrontation. Just a change in inputs, and the outputs follow. The patient dies. That is no longer a cinematic abstraction. It is the emerging architecture of modern healthcare.
The first real-world manifestation of that architecture has already arrived. In Utah, regulators authorized artificial intelligence systems to independently renew prescriptions across a broad class of medications. Patients interact with a chatbot or digital interface and provide structured information; in many cases, no physician ever directly evaluates the patient in real time. The system determines whether the refill is warranted in light of the patient’s history, diagnosis, and current status, and it checks for interactions with the patient’s other medications, whether prescribed or taken over the counter. It also weighs whether a different drug might give a “better” result. But there’s the rub: a “better” result for whom?
This is not merely clinical decision support. It is delegated decision-making.
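To see what delegation means in practice, consider a minimal sketch of what such a refill pipeline might look like. Everything here is hypothetical: the interaction table, the confidence threshold, and the function names are invented for illustration and do not describe any actual vendor’s system.

```python
# Hypothetical sketch of an AI-driven refill pipeline. All names, rules,
# and thresholds are invented; no real vendor's system is described here.

from dataclasses import dataclass, field

# Toy interaction table; a real system would query a drug-interaction database.
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}),
    frozenset({"lisinopril", "spironolactone"}),
}

@dataclass
class RefillRequest:
    drug: str
    diagnosis: str
    other_drugs: list = field(default_factory=list)  # prescribed and over-the-counter

def model_decision(req: RefillRequest) -> tuple:
    """Stand-in for the opaque model: it emits a verdict and a confidence
    score, but no human-reviewable reasoning."""
    for other in req.other_drugs:
        if frozenset({req.drug, other}) in INTERACTIONS:
            return "escalate", 0.99
    return "approve", 0.97

def process(req: RefillRequest) -> str:
    verdict, confidence = model_decision(req)
    if verdict == "approve" and confidence > 0.90:
        # The physician is asked to co-sign; the substantive evaluation
        # has already happened inside model_decision().
        return "refill issued, pending physician co-signature"
    return "routed to human review"

print(process(RefillRequest("warfarin", "atrial fibrillation", ["ibuprofen"])))
# -> routed to human review
print(process(RefillRequest("lisinopril", "hypertension", ["atorvastatin"])))
# -> refill issued, pending physician co-signature
```

The structure, not the particular rules, is the point: the physician’s signature arrives after the machine has already decided.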
The justification is straightforward. Healthcare systems are overloaded. Physicians are constrained. Routine prescription refills (maintenance medications for chronic conditions) are seen as low-risk, rules-based determinations that can be automated. The stakes, however, are not trivial: in the US alone, the FDA’s adverse event reporting system found 14,723 deaths linked specifically to drug interactions out of roughly 167,000 reported interaction cases. Still, AI promises faster turnaround, reduced administrative burden, and expanded access, particularly in underserved areas.
The model is gaining traction. Startups and health systems are moving quickly to deploy AI-driven prescribing workflows, including for more complex categories such as psychiatric medications. The machine is not just assisting the physician. It is increasingly standing in for one.
The law, however, has not kept pace with this shift, and the fault lines are already visible.
Start with regulation. Under the Federal Food, Drug, and Cosmetic Act, software intended to diagnose or treat disease may qualify as a “medical device.” 21 U.S.C. § 321(h). The U.S. Food and Drug Administration has attempted to distinguish between software that merely supports clinical decisions and software that drives them. The former may avoid regulation; the latter increasingly falls within the FDA’s jurisdiction as Software as a Medical Device.
AI refill systems strain that distinction. Where the system’s recommendation is effectively determinative—and where its reasoning cannot be independently reviewed—it begins to look less like a tool and more like a regulated device.
Licensure presents a parallel issue. State medical practice acts generally require that only licensed professionals may prescribe medications. See, e.g., Md. Code Ann., Health Occ. § 14-101 et seq. The Utah model sidesteps this by maintaining the formal position that the physician remains responsible, even where the AI performs the substantive evaluation. While the doctor “writes” the prescription, he or she does so in name only. The AI program writes it, and the doctor rubber-stamps it. Indeed, that is the whole point; that is how the “efficiency” is generated. The legal fiction is preserved; the operational reality is transformed.
Liability follows the same pattern. When harm occurs, malpractice law will continue to focus on the physician’s duty to exercise independent medical judgment. But is a wrongly written, prescribed, or refilled scrip a medical malpractice issue, a product liability issue (with the AI software as the product), or a matter governed by the disclaimers in the software license agreement? AI vendors may face product liability claims. Health systems may be liable for negligent implementation. Insurers, whose formularies and utilization controls may shape algorithmic outputs, may be drawn into the litigation.
Insurance and reimbursement mechanisms further complicate the landscape. Payers can embed cost-containment strategies directly into AI-driven workflows, effectively steering prescribing behavior through algorithmic recommendations. This raises potential issues under the federal Anti-Kickback Statute and related doctrines where financial incentives intersect with clinical decision-making. See 42 U.S.C. § 1320a-7b(b).
In short, the legal framework assumes human judgment, transparency, and accountability. AI systems challenge all three. But the most significant problem is not regulatory classification or liability allocation. It is epistemological.
AI systems operate as black boxes. Their outputs are the product of complex models trained on vast datasets, incorporating variables and weightings that are neither visible nor explainable in any meaningful sense. The physician sees the recommendation. The patient experiences the outcome. Neither sees the reasoning.
That opacity would be concerning even in a neutral system. It becomes critical in a system shaped by competing economic incentives. The business model for many AI vendors is transactional—fees per refill, per authorization, per processed decision. That creates an incentive to maximize throughput. Insurers, seeking to reduce costs, can influence the system to favor lower-cost alternatives. Pharmaceutical companies, seeking to increase revenue, exert influence in the opposite direction through the data and clinical frameworks that inform the model.
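A toy example shows how easily such steering can hide inside an apparently neutral ranking. The drugs, scores, and weights below are invented; the point is that a single undisclosed parameter, set by whoever deploys the system, can flip the recommendation without any visible change to the clinical inputs.

```python
# Hypothetical illustration of economic steering. Drugs, scores, and
# weights are invented; only the mechanism matters.

CANDIDATES = {
    # drug: (clinical_fit_score, monthly_cost_usd)
    "brand_x":   (0.92, 480.0),
    "generic_y": (0.88,  12.0),
}

def recommend(cost_weight: float) -> str:
    """Rank candidates by clinical fit minus a cost penalty. The
    cost_weight is set by the deployer and never shown to the
    prescriber or the patient."""
    def score(item):
        _, (fit, cost) = item
        return fit - cost_weight * (cost / 100.0)
    return max(CANDIDATES.items(), key=score)[0]

print(recommend(cost_weight=0.00))  # -> brand_x   (pure clinical ranking)
print(recommend(cost_weight=0.02))  # -> generic_y (payer-tilted ranking)
```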
These forces do not announce themselves. They are embedded. The result is a system that presents recommendations as objective and evidence-based while reflecting underlying biases that are neither disclosed nor readily detectable. The physician is positioned as the ultimate decision-maker but is increasingly reliant on outputs that cannot be independently validated. We are left to assume that refill decisions are based on the “best medicine” and are not skewed by the economics, or at least that they should not be. That may not be the case.
Today, the risk is not limited to malicious interference. It includes systemic bias, economic steering, and model opacity—all operating within systems that are increasingly entrusted with clinical decisions.
There is no Hippocratic Oath for machine learning. No duty of loyalty. No inherent obligation to prioritize patient welfare over cost, efficiency, or optimization metrics. The system does what it is designed to do.
The robot will see you now. The question is whether anyone—physician, patient, regulator, or court—will be able to see what the robot is actually doing.