Pharma5: Five Challenges of Implementing AI
- By BSTQ Staff
Excitement over artificial intelligence (AI) in healthcare is understandable and even warranted. Faster diagnostics. Clerical assistance. Improved accessibility. Advanced drug discovery and development. AI’s capacity to help clinicians deliver improved care is unprecedented.
But AI is not without challenges. While the industry can, and even should, move forward with cautious optimism, it is important to understand AI’s limitations and liabilities. Here are five important elements of AI to consider:
1. Bias
AI algorithms are trained on large datasets, but those datasets often reflect information gleaned from a narrow demographic source. As such, bias may be built into the AI system, which may lead to disparate care. AI algorithms may fail to recognize patterns in populations underrepresented in their training data, whether racial, ethnic or gender minorities.
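To make that risk concrete, consider the minimal sketch below. It uses invented synthetic data and a scikit-learn-style workflow, not any real clinical model: a classifier trained on data dominated by one group can look accurate overall while quietly failing the underrepresented group, which is exactly why per-group audits matter.

```python
# Minimal sketch (hypothetical data, not a real clinical model): a classifier
# trained mostly on one demographic group can score well on that group while
# underperforming on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Synthetic patients: features map to an outcome via group-specific weights."""
    X = rng.normal(size=(n, 3))
    y = (X @ weights + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# Group A dominates the training data; group B's feature/outcome relationship
# differs, so its patterns are underrepresented during training.
w_a, w_b = np.array([1.5, -0.5, 0.2]), np.array([-0.5, 1.5, -1.0])
Xa, ya = make_group(950, w_a)   # 95% of training data
Xb, yb = make_group(50, w_b)    #  5% of training data

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Audit per-group accuracy on fresh samples; the disparity is the bias.
Xa_t, ya_t = make_group(1000, w_a)
Xb_t, yb_t = make_group(1000, w_b)
print("group A accuracy:", model.score(Xa_t, ya_t))
print("group B accuracy:", model.score(Xb_t, yb_t))
```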
Notably, the Department of Health and Human Services Office for Civil Rights has finalized a rule creating new liability for physicians using AI that results in discriminatory harm to patients. This could include, for example, AI that utilizes algorithms with race adjustments or returns otherwise biased results to physicians and patients. The final rule prohibits discrimination by clinical algorithms and requires physicians, hospitals, health systems and others to make “reasonable efforts” to both identify algorithmic discrimination and mitigate resulting harms.1
2. Data Privacy and Cybersecurity
At the heart of AI learning is data collection, and in healthcare, that data comes straight from patients’ protected health information (PHI). According to the American Medical Association’s (AMA’s) Principles for Augmented Intelligence Development, Deployment and Use, “AI systems require large datasets to train and operate effectively, increasing the risk of large-scale data breaches. Additionally, the complexity of AI algorithms can make them opaque and vulnerable to manipulation, such as adversarial attacks that can lead to misdiagnoses or inappropriate treatment recommendations.”1
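As a rough illustration of the kind of adversarial attack the AMA principles describe, the sketch below uses a toy linear model and synthetic numbers, nothing drawn from a real clinical system, to show how a small, deliberate nudge to input data can shift a model’s output:

```python
# Minimal sketch (numpy-only, synthetic) of an adversarial attack: a small,
# deliberate perturbation of the inputs shifts a toy model's prediction.
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=5), 0.0  # pretend these are trained model weights

def risk_score(x):
    """Sigmoid output of the toy model: predicted probability of disease."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A synthetic "high-risk patient": features aligned with the model's weights.
x = np.sign(w) * np.abs(rng.normal(size=5))
print(f"risk before attack: {risk_score(x):.3f}")

# FGSM-style step: move each feature slightly against the gradient of the
# score (for a linear model that gradient is proportional to w), masking risk.
eps = 0.4
x_adv = x - eps * np.sign(w)
print(f"risk after attack:  {risk_score(x_adv):.3f}")
# In real, high-dimensional models the same effect needs far smaller changes.
```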
If a system is breached, patients’ PHI is exposed, which can disrupt care, distort treatment plans and, ultimately, erode trust. When patients lose confidence in the healthcare system, they are less likely to follow treatment plans or seek medical attention going forward.
3. Accountability
When it comes to AI decision-making in healthcare, who is held responsible if something goes wrong? Who is liable for patient harm? Are providers responsible for diagnostic errors or incorrect treatment plans, or are developers on the hook? How can safety assurances be implemented to protect against such harm?
Answers to these questions are still unclear, but one thing is certain: Official protocols and human oversight and review of AI output are essential to preserve both provider autonomy and patient safety when using AI solutions.2
4. Cost
Developing and implementing AI solutions in healthcare is expensive, with prices ranging anywhere from tens of thousands of dollars to a million dollars, depending on the size and scope of the solution. There’s the cost of the AI solution itself, and then there’s the cost of maintenance. Upkeep involves software engineers, data scientists and project managers; hardware and software maintenance; security measures, audits and legal counsel; and training, all of which add to the expense. These costs may be within reach of large healthcare conglomerates, but smaller medical facilities may not have the financial resources for extensive AI solutions.3
5. Transparency
How does AI make decisions? It’s a question we’re still trying to answer. We know AI draws on datasets to inform its decisions, but we don’t always know which datasets those are. Users can see data going in and decisions coming out, but not the internal workings: how or why the system reaches its conclusions. This is so-called “black box” AI, and it is why transparency matters.4
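Black boxes can still be probed. The sketch below implements permutation importance, a standard model-agnostic transparency technique: shuffle one input feature at a time and measure how much the model’s accuracy drops. The dataset and classifier are invented for illustration and are not specific to any clinical product.

```python
# Minimal sketch of one common probe into a "black box": permutation
# importance. Shuffling a feature the model relies on craters its accuracy;
# shuffling an irrelevant feature changes almost nothing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's signal
    print(f"feature {j}: accuracy drop {baseline - model.score(X_perm, y):.3f}")
```

Probes like this reveal which inputs a model actually depends on, even when its internal workings stay opaque.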
Simply put, “Transparency in AI is important because it provides a clear explanation for why things happen with AI,” said Candace Marshall, vice president of product marketing, AI and automation at Zendesk. Marshall emphasizes three levels of AI transparency:5
- Algorithmic transparency: Explains the logic and processes the AI system uses, including how it processes data, how it reaches decisions and which factors influence those decisions.
- Interaction transparency: Focuses on the interaction between the AI system and users, making communication between the two clear and comprehensible.
- Social transparency: Emphasizes the ethical, legal and societal concerns surrounding AI and the regulations and standards needed to oversee its use.
References
1. American Medical Association. Augmented Intelligence Development, Deployment, and Use in Health Care. AMA, November 2024. Accessed at www.ama-assn.org/system/files/ama-ai-principles.pdf.
2. Nouis SCE, Uren V, Jariwala S. Evaluating Accountability, Transparency, and Bias in AI-Assisted Healthcare Decision-Making: A Qualitative Study of Healthcare Professionals’ Perspectives in the UK. BMC Medical Ethics, 2025;26(89). Accessed at bmcmedethics.biomedcentral.com/articles/10.1186/s12910-025-01243-z.
3. Alkhaldi N. Assessing the Cost of Implementing AI in Healthcare. ITRex, June 18, 2025. Accessed at itrexgroup.com/blog/assessing-the-costs-of-implementing-ai-in-healthcare.
4. Kosinski M. What Is Black Box Artificial Intelligence? IBM. Accessed at www.ibm.com/think/topics/black-box-ai.
5. Marshall C. What Is AI Transparency? A Comprehensive Guide. Zendesk, updated Aug. 7, 2025. Accessed at www.zendesk.com/blog/ai-transparency.