AI, Politics, and Healthcare: Unveiling the Hidden Bias in PDMPs
A use case of algorithmic bias in healthcare, and how you can make a difference.
Unmasking Algorithm Bias: A Deep Dive into Prescription Drug Monitoring Programs
Welcome back, dear readers, from our brief hiatus, an interlude that was more of a strategic retreat than an unplanned lull.
For those of you who follow me, the AiBossLady, on Twitter (@AiAndPolitics), you're likely aware that we purposely opted out of sending out last Thursday's newsletter. The reason? A small event you might have heard of - the indictment of one Donald Trump. Just as the proverbial storm broke, inboxes and notifications across the globe found themselves under siege, a virtual avalanche of updates, opinions, and breaking news alerts.
In the midst of such a digital deluge, we decided that discretion was indeed the better part of valor. After all, why would we want our painstakingly crafted newsletter to be relegated to the depths of an already overloaded inbox or, worse still, lost in the swirling vortex of notifications? Especially when this edition, a grand finale of sorts, represents the culmination of the myriad insights and analyses we've been sharing over the past week.
So, with a knowing wink and a nudge, we say: Let's dive right in...
Policy Pulse
Here are 3 quick bytes of info to keep you up to speed on the latest news, spotlighting algorithms and their impact:
In response to the rise of AI, U.S. senators have introduced two bipartisan bills, one mandating transparency and appeals processes for AI decisions in government interactions, and the other proposing an Office of Global Competition Analysis to maintain U.S. competitiveness in AI and strategic technologies.
The Justice Department has unsealed a 37-count federal indictment against former President Donald Trump, alleging he retained and mishandled classified documents after leaving office, with charges ranging from violation of the Espionage Act to obstruction of justice.
Yesterday (Monday, June 12) was the deadline for public comments in the Federal Register concerning the algorithm discussed in the use case below; click here to read the submitted comments and gain insight into other viewpoints on this database.
The Algorithmic Bias in the Prescription Drug Monitoring Program

"Nothing in all the world is more dangerous than sincere ignorance and conscientious stupidity."
MLK Jr. never spoke explicitly about algorithmic bias, but quotes from his era warned against ignoring any issue that causes injustice - algorithmic bias included - and emphasized the importance of knowledge and understanding in avoiding its pitfalls.
The quote below likely originated between the 1950s and 1970s, as computers were gaining prominence. Though it does not address algorithmic bias specifically, it reflects early concerns about technological advancement and the risk of biases being amplified by machines, and it highlights the danger of losing sight of human values in the face of rapid technological progress.
"It has become appallingly obvious that our technology has exceeded our humanity."
I wonder what they would have to say about where we are today!
AI Foundations - Algorithms in Medicine
Areas within Medicine Driven by Algorithms
Medical Imaging Analysis
Drug Discovery and Development
Electronic Health Records (EHR) Analysis
Precision Medicine and Personalized Treatment
Remote Patient Monitoring
Disease Diagnosis and Prognosis
Medical Research and Clinical Trials
Healthcare Resource Optimization
Health Monitoring and Wearable Devices
Fraud Detection in Healthcare Claims
Patient Safety and Adverse Event Detection
Genomic Analysis and Precision Oncology
Medical Robotics and Surgical Assistance
Artificial Intelligence (AI) in Medicine - AI in medicine refers to the application of machine learning algorithms and computational models to analyze medical data, make diagnoses, assist in treatment decisions, and improve overall healthcare delivery.
Example 1: AI algorithms can be used to analyze medical images, such as X-rays, CT scans, or MRIs, to help detect abnormalities and assist radiologists in making accurate diagnoses.
Example 2: Natural Language Processing (NLP) techniques can be applied to analyze unstructured medical text, such as clinical notes or research articles, to extract valuable insights and support evidence-based decision-making for healthcare providers.
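For readers who like to see ideas in code, here's a minimal sketch of the NLP example, assuming a simple keyword-matching approach. The notes, the toy lexicon, and the extract_medications helper are all invented for illustration; real clinical NLP systems use curated vocabularies (such as RxNorm) and trained language models rather than pattern matching.

```python
# Illustrative sketch only: pulling medication mentions out of unstructured
# clinical notes with simple pattern matching. The notes and lexicon below
# are invented; production systems use trained models and real vocabularies.
import re

NOTES = [
    "Pt reports improved pain control on gabapentin 300 mg TID.",
    "Discontinued lisinopril due to cough; started losartan 50 mg daily.",
]

# Toy stand-in for a real drug vocabulary (e.g., RxNorm).
DRUG_LEXICON = {"gabapentin", "lisinopril", "losartan"}

def extract_medications(note):
    """Return the lexicon drugs mentioned in a free-text note."""
    tokens = re.findall(r"[a-z]+", note.lower())
    return [t for t in tokens if t in DRUG_LEXICON]

for note in NOTES:
    print(extract_medications(note))
# ['gabapentin']
# ['lisinopril', 'losartan']
```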
Machine Learning (ML) in Medicine - Machine learning is a subset of AI that enables computer systems to learn and improve from experience without being explicitly programmed. In medicine, ML algorithms can learn patterns and relationships from medical data to make predictions or assist in clinical decision-making.
Example 1: ML algorithms can be trained on large datasets of patient records to predict disease outcomes, identify individuals at risk for certain conditions, or personalize treatment plans based on individual characteristics.
Example 2: ML algorithms can be used to develop predictive models for patient monitoring, allowing healthcare providers to anticipate deteriorating conditions and intervene earlier, leading to improved patient outcomes.
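To ground the ML examples, here's a minimal sketch that trains a risk-prediction model on synthetic patient records. Every feature, coefficient, and outcome below is fabricated for illustration; a real clinical model would be built and validated on actual patient data under rigorous oversight.

```python
# Minimal sketch: a model learns outcome patterns from (synthetic) records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Invented features: age, systolic blood pressure, BMI.
X = np.column_stack([
    rng.normal(55, 12, n),   # age
    rng.normal(130, 15, n),  # systolic BP
    rng.normal(28, 5, n),    # BMI
])
# Invented outcome loosely tied to age and blood pressure, plus noise.
logits = 0.04 * (X[:, 0] - 55) + 0.03 * (X[:, 1] - 130) + rng.normal(0, 1, n)
y = (logits > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
# The key point: the model learns whatever patterns the historical records
# contain, including any biases baked into them.
```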
Algorithm Use Case - PDMP and Bias
Enough teasing. As promised, it is time to share the critical use case that illustrates how biased algorithms can cause harm to individuals. Enter the PDMP by NarxCare: PDMP = Prescription Drug Monitoring Program; NarxCare = the software platform responsible for managing this massive database for individual states, the Veterans Administration, and others. Prior to joining OPAC, a large portion of my efforts as an activist was dedicated to addressing this and related issues, and I trust my passion for resolving healthcare disparities will shine through in my writing. While the bias in this case is likely unintentional, it underscores how easily well-intentioned initiatives can inadvertently lead to adverse outcomes.
Prescription Drug Monitoring Programs (PDMPs) like NarxCare are tools designed to aid in the prevention of prescription drug misuse and overdose. They work by collecting, analyzing, and reporting data on the prescribing and dispensing of controlled substances, with the aim of supporting public health initiatives and clinical decision-making. However, while these tools can be beneficial, it's crucial to understand that they are not infallible, and their use has already been shown to lead to unintended consequences due to inherent biases in the algorithms that drive them. In her comprehensive analysis, 'Dosing Discrimination: Regulating PDMP Risk Scores', Jennifer D. Oliva, a professor at UC Law, San Francisco, digs deep into the potential biases and discriminatory practices in Prescription Drug Monitoring Program (PDMP) predictive surveillance platforms.
The risk scores generated by tools like NarxCare are driven by an algorithm whose design and misuse can introduce bias and lead to harmful outcomes. Let's unpack how this can unfold:
Data Bias: An algorithm is only as good as the data it's trained on. If the training data is biased, the algorithm will be biased too. For instance, if the data disproportionately represents certain groups of people over others, the algorithm will be more accurate for those groups and less accurate for everyone else. In the case of NarxCare, the algorithm uses data from state registries and potentially other sources like medical claims data, electronic health records, EMS data, and criminal justice data. We know that some of those data sources contain biases; those biases are therefore reflected in the algorithm's output (see the code sketch after this list).
Lack of Contextual Understanding: Algorithms don't understand context the way humans do. In the case of Kathryn, from an article by the brilliant Maia Szalavitz in Wired, the algorithm didn't understand that the multiple prescriptions under her name were for her pets, not for her. This lack of contextual understanding can lead to false positives and false negatives, which can have serious consequences for patients.
Opaque Scoring Mechanism: The scoring mechanism used by these algorithms is proprietary and not transparent. Healthcare providers and even law enforcement are expected to rely on this data to make decisions about a patient's care or potential charges against a prescribing healthcare provider, yet they have no clear understanding of how the scores are calculated. This lack of transparency makes it difficult to identify and correct biases in the algorithm and has led to tragic outcomes for both patients and prescribers.
Overreliance on Algorithmic Output: There's a risk that healthcare providers will over-rely on the algorithm's output when making decisions about patient care, leading to decisions that are not in the patient's best interest. For instance, a high risk score might lead a provider to deny necessary treatment, even if the high score is due to a bias in the algorithm. This happens more often than anyone wants to admit: many prescribers now acknowledge that they decide if and what to prescribe based on their risk of law enforcement misusing the PDMP to bring criminal charges, rather than on what they know is medically correct for the patient.
Lack of Redress Mechanisms: If a patient is adversely affected by a biased algorithm, there is no clear mechanism for them to challenge the decision or have it reviewed, which can exacerbate the harm caused by algorithmic bias. And the harm extends beyond patients: careers have been ruined when prescribers are investigated, publicly raided, or charged based on reports generated by the database. Even when they are later found to have done nothing wrong, they have often already lost their medical licenses, their reputations are ruined, and their patients have been forced to suffer needlessly.
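To make the first two failure modes above concrete, here's a toy simulation in Python. Everything in it is invented: the groups, the prescription counts, and the flagging threshold are hypothetical, and the naive count-based score bears no relation to NarxCare's actual (proprietary) scoring logic. It simply shows how mis-attributed records, like the pet prescriptions in Kathryn's case, can make a score flag one group far more often than another even when true behavior is identical.

```python
# Toy simulation of data bias plus missing context. Group B's records
# include pet prescriptions mis-attributed to the owner (as in Kathryn's
# case), so identical people receive very different "risk" flags.
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Both groups have the same true prescription behavior.
own_scripts = rng.poisson(2, size=2 * n)
group = np.array(["A"] * n + ["B"] * n)

# Group B additionally has pet prescriptions recorded under their names.
pet_scripts = np.where(group == "B", rng.poisson(3, size=2 * n), 0)
recorded_scripts = own_scripts + pet_scripts

THRESHOLD = 5  # hypothetical cutoff above which a patient is "flagged"
flagged = recorded_scripts > THRESHOLD

for g in ("A", "B"):
    print(f"group {g}: flagged {flagged[group == g].mean():.1%} of patients")
```

Run it and group B is flagged many times more often than group A, not because its members behave differently, but because of what the database records about them. Echoing the opacity problem, a clinician looking only at the flag has no way to see why.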
While algorithms can be powerful tools in healthcare, it's crucial to be aware of their limitations and potential for bias. It's important to use these tools as part of a holistic approach to patient care, and to continually review and refine the algorithms to ensure they are as fair and accurate as possible.
A challenge for you:
In the Policy Pulse section, we discussed the introduction of new bills in Congress concerning algorithm transparency. This is a step in the right direction, but there is still more work to be done. I challenge you to advocate for greater transparency in how tools such as the PDMP operate. This could involve supporting the new legislation and pushing for additional laws that require companies like NarxCare to disclose how their algorithms calculate risk scores when those scores hold the potential to cause harm or perpetuate existing biases.
Additionally, I urge you to engage with healthcare providers in your community. Listen to their perspectives and insights on this and the other ways AI and algorithms are being threaded into medicine by policy at every level. By understanding their experiences, you can better advocate for policies that ensure these tools are used responsibly and effectively.
Now, let's shift gears and inject some levity into our conversation before we dive into our joke and comics segment. Laughter is, after all, the best medicine: it has a way of refreshing our minds and opening up new pathways of thought. So, sit back, relax, and get ready for a smile-inducing moment!
Artificial Intelligence Illustrated
Before we share today's cartoon, here's today's joke, written by STAR, the custom AI system built by and for us here at OPAC:
What do politics and algorithms have in common?
Often, they are both run by algorithms and hidden agendas.
Building on the humor and thought-provoking nature of today's joke, let's transition to a cartoon about regulatory agencies and their increasing interference in the doctor-patient relationship. This humorous - yet unfortunately true - depiction captures the challenges and bureaucratic hurdles that today's healthcare providers encounter in delivering patient-centered care. The cartoon serves as a lighthearted reminder of the complexities inherent in the healthcare system, and of the need for more transparency about how and why various regulatory bodies keep intervening in the name of public safety and saving money.

That's a wrap for today's edition, dear readers. We've navigated through some heavy topics, but hopefully, we've also sparked some thought and even a chuckle or two. Remember, knowledge is power, and together we can make a difference.
Stay tuned for tomorrow's newsletter, when we get back to our regularly scheduled programming, diving into the fascinating intersection of AI and politics.