EnBloc AI™ Research Brief
Title: Comparing Bias Reduction Strategies to Improve Healthcare AI Fairness
Author: Neeley Jo Minor | Founder, EnBloc Co™
🧠 Introduction
Healthcare AI has immense potential to advance diagnosis, treatment, and access. However, algorithmic bias can lead to harmful disparities—especially for women, racial minorities, and individuals with chronic illness. EnBloc AI™ is committed to identifying and integrating effective bias reduction strategies to ensure fair, accountable AI.
🗞️ The Neeley Daily: What Went Wrong—Day After Day
“I felt like I was dying. Week after week, I begged for help, but the doctors wouldn’t listen. When I finally found out what was wrong—it was buried in my records the whole time. And still… they said it wasn’t urgent.”
This is the lived reality of too many patients—especially women—who face dismissal, delayed diagnoses, and medical gaslighting. EnBloc AI™ is designed to bring these stories forward, uncover what’s hidden in the documentation, and transform ignored suffering into truth-telling power.
If you’ve ever been told “it’s all in your head,” you’re not alone.
Our tools help you decode your records, fix the notes, and reclaim your health story.
📝 Health Note
Keeping track of your medical records is not just about paperwork—it’s about protection, pattern recognition, and power.
Here’s what to review regularly:
- SOAP Notes (Subjective, Objective, Assessment, Plan)
- Specialist summaries
- Lab result trends (especially inflammation and immune markers)
- Missed or altered symptoms in visit notes
- Referrals and follow-up timelines
EnBloc AI™ helps you:
- Spot hidden patterns in autoimmune flares
- Correct false or missing notes
- Compare symptom clusters with known triggers
- Track your health history visually
Remember: Your story matters. Your notes matter more.
🩺 Why Medical Notes Matter™
Your medical notes shape your future care. Every word, omission, and miscode can:
- Delay diagnosis
- Deny proper treatment
- Undermine insurance or legal action
- Affect how every new provider sees you
EnBloc AI™ exists to make sure your notes tell the truth about what you’ve lived through.
We believe:
- 📌 If it’s not documented, it didn’t happen.
- 🧠 Bias in records leads to bias in care.
- 📣 Fixing your notes can save your life—and someone else’s.
#MedicalNotesMatter is more than a hashtag. It’s a movement. Join us.
📉 Strategy Comparison Table
Strategy Type | Description & Methods | Strengths | Weaknesses | Impact on Fairness & Performance |
---|---|---|---|---|
Pre-processing | Modifies training data (e.g., resampling, reweighting) | Easy to apply before training; addresses data imbalance | May slightly lower accuracy | Improves fairness; useful when retraining model is difficult |
In-processing | Adds fairness constraints or training methods (e.g., adversarial learning) | Directly optimizes fairness; can tailor per subgroup | Requires more resources and group data | Strongest fairness gains; small performance tradeoffs |
Post-processing | Adjusts predictions or thresholds after model training | Works on deployed models; minimal disruption | Limited by pre-existing model bias | Helpful for tuning; less effective on deep-rooted bias |
Human-in-the-loop | Human reviews and feedback at key AI development points | Adds nuance, context, and ethical oversight | Time and resource intensive | Enhances trust and captures subtle bias |
Transparency | Model explainability and decision traceability | Builds trust; allows for bias identification | Doesn’t fix bias alone; explanations may be unreliable | Supports fairness through auditability and accountability |
Stakeholder Input | Involves patients, advocates, and clinicians throughout design and deployment | Ensures real-world relevance and equitable goals | Slower development; resource heavy | Critical for ethical, inclusive AI |
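The pre-processing row above (resampling, reweighting) can be illustrated with a short sketch. This is a generic inverse-frequency reweighting example under stated assumptions, not EnBloc code; the `reweight` helper and the toy data are hypothetical.

```python
from collections import Counter

def reweight(groups, labels):
    """Inverse-frequency weights so each (group, label) cell
    contributes equal total mass to the training loss."""
    counts = Counter(zip(groups, labels))
    n, k = len(groups), len(counts)
    # weight = n / (k * cell_count): rarer cells get larger weights
    return [n / (k * counts[(g, y)]) for g, y in zip(groups, labels)]

# Toy data: group B and the (A, 0) outcome are underrepresented
groups = ["A", "A", "A", "B"]
labels = [1, 1, 0, 1]
weights = reweight(groups, labels)  # rarer (A, 0) outweighs common (A, 1)
```

These weights would then be passed to any learner that accepts per-sample weights (e.g. a `sample_weight` argument), which is why reweighting works even when the model itself cannot be changed.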
🔍 Key Research Takeaways
- In-processing shows the greatest ability to equalize outcomes across subgroups.
- Pre-processing is ideal when model access is limited, but risks performance tradeoffs.
- Post-processing is fast but less effective alone.
- Human and stakeholder engagement is critical for ethical AI, especially in healthcare.
✅ Practical Recommendations
- Combine pre-, in-, and post-processing strategies for layered fairness.
- Integrate clinicians, patients, and advocates from design to deployment.
- Maintain auditable logs and explainability tools.
- Regularly audit and retrain models to correct emerging bias.
📈 Real-World Examples
Strategy Type | Application Example | Outcome |
---|---|---|
Pre-processing | Reweighting EHR data by race | Improved fairness with slight accuracy loss |
In-processing | Stratified cardiac imaging training | Equalized performance across groups |
Post-processing | Calibrated thresholds on risk scores | Some fairness improvement, limited scope |
Human oversight | Clinician review of flagged predictions | Detected edge cases, increased trust |
Stakeholder input | Patient advocate involvement in model rollout | Higher patient alignment + trust |
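The post-processing row above (calibrated thresholds on risk scores) can be sketched as choosing a per-group score cutoff that meets a target true-positive rate, in the spirit of equal opportunity. The function and toy data below are illustrative assumptions, not a real implementation:

```python
def group_thresholds(scores, labels, groups, target_tpr=0.8):
    """Per group, pick the lowest score cutoff whose true-positive
    rate is at least target_tpr (a simple equal-opportunity tweak)."""
    thresholds = {}
    for g in set(groups):
        # scores of the true positives in this group, ascending
        pos = sorted(s for s, y, gg in zip(scores, labels, groups)
                     if gg == g and y == 1)
        if not pos:
            continue  # no positives observed for this group
        # cut below the top target_tpr fraction of positive scores
        idx = int((1 - target_tpr) * len(pos))
        thresholds[g] = pos[idx]
    return thresholds

# Toy risk scores: group B's positives score lower, so it gets a lower cutoff
scores = [0.9, 0.7, 0.4, 0.8, 0.6, 0.3]
labels = [1,   1,   1,   1,   1,   0]
groups = ["A", "A", "A", "B", "B", "B"]
cuts = group_thresholds(scores, labels, groups)  # {"A": 0.4, "B": 0.6}
```

Because it only moves decision cutoffs, this kind of adjustment cannot repair a model whose scores are themselves biased, which is exactly the limitation the table notes.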
💜 Conclusion
Improving fairness in healthcare AI is not about choosing one perfect method—it’s about combining technical solutions with human-centered oversight. The EnBloc AI™ approach incorporates layered fairness strategies and community collaboration to bridge healthcare bias and empower better outcomes for all.
Visit: EnBloc.org Contact: support@enbloc.org #BridgeTheBias #FairAI #HealthcareEquity #MedicalNotesMatter
Medical Notes Matter: Using AI to Bridge Healthcare Bias
Discover how EnBloc AI™ combines AI, storytelling, and patient power to fight medical bias and transform health records into advocacy tools.
Published: May 15, 2023 | Reading time: 6 min
When Sophia Neeley received her medical records after a spinal cord injury, she discovered something alarming: her symptoms were repeatedly downplayed, her pain dismissed as “anxiety,” and critical information about her ASIA impairment scale was missing entirely.
This wasn’t just poor documentation—it was a pattern that could affect her treatment, insurance coverage, and long-term care. Unfortunately, Sophia’s experience isn’t unique.
The Hidden Crisis in Medical Documentation
Medical notes serve as the official record of a patient’s health journey, but they’re often riddled with biases that disproportionately affect women, people of color, and those with disabilities. These biases can manifest as:
- Minimizing symptoms and pain levels
- Attributing physical symptoms to psychological causes
- Omitting critical diagnostic information
- Using stigmatizing language that affects future care
“Your medical record isn’t just documentation—it’s the story that follows you through every healthcare interaction for the rest of your life. Make sure it’s accurate.”
Introducing EnBloc AI™: Your Medical Advocate
EnBloc AI™ is a groundbreaking tool designed to help patients identify, understand, and address potential biases in their medical records using advanced natural language processing.
How EnBloc AI™ Works:
- Analyzes your medical notes for language patterns associated with bias
- Identifies missing information critical to your diagnosis and treatment
- Suggests evidence-based corrections you can request from your provider
- Generates a professional amendment letter you can submit
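As a rough illustration of the first step, analyzing notes for language patterns, here is a keyword-based sketch. The phrase list and `flag_bias_language` helper are hypothetical, and a production tool would rely on a trained NLP model rather than a fixed regex list:

```python
import re

# Illustrative phrase list only -- not EnBloc AI's actual lexicon
PSYCHOLOGIZING_PATTERNS = [
    r"appears anxious",
    r"all in (her|his|their) head",
    r"non-?compliant",
    r"drug.?seeking",
]

def flag_bias_language(note):
    """Return the patterns from the list that match a clinical note."""
    return [p for p in PSYCHOLOGIZING_PATTERNS
            if re.search(p, note, flags=re.IGNORECASE)]

hits = flag_bias_language(
    "Patient appears anxious about pain; non-compliant with PT."
)  # matches the "appears anxious" and "non-?compliant" patterns
```

Each hit would then anchor a suggested rewrite (e.g. replacing a stigmatizing label with neutral, contextual wording), feeding the amendment letter described above.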
The Impact of Biased Documentation
Biased Documentation | Potential Consequences | EnBloc AI™ Solution |
---|---|---|
“Patient appears anxious about pain” | Pain undertreated, psychological focus | Flag psychologizing language, suggest objective pain documentation |
Missing ASIA impairment scale | Incomplete treatment, insurance denial | Identify missing clinical assessments, generate amendment request |
“Non-compliant with treatment” | Stigma affecting future care | Suggest neutral alternatives that capture context |
Real Stories, Real Impact
Sophia’s Story
After discovering critical omissions in her spinal cord injury documentation, Sophia used EnBloc AI™ to generate an amendment request that included:
- Proper documentation of her ASIA impairment scale
- Correction of dismissed symptoms later confirmed by imaging
- Removal of speculative psychological attributions
Result: Complete revision of her medical record, approval of previously denied treatments, and a formal apology from her healthcare provider.
Taking Control of Your Medical Narrative
Your medical record is your story—and you have both the right and responsibility to ensure it’s accurate. Here’s how to get started:
1. Request: your complete medical records from all providers
2. Review: use EnBloc AI™ to analyze your records for bias and omissions
3. Respond: submit amendment requests using our templates
Ready to Take Action?
Join thousands of patients who are transforming their healthcare experience by ensuring their medical records accurately reflect their reality.
Join The Neeley Daily Community
Get updates, tools, and real stories that drive change in healthcare.
Subscribe
#EnBlocAI #MedicalNotesMatter #PatientAdvocacy #ASIAAwareness
Sophia Neeley
Patient advocate, healthcare equity researcher, and founder of EnBloc AI™. Sophia’s work focuses on using technology to empower patients and address systemic bias in healthcare.
Related Articles
The Gender Pain Gap: How Women’s Pain Is Undertreated
Exploring the research behind gender disparities in pain management…
Understanding Your Rights to Medical Record Amendments
A step-by-step guide to requesting changes to your health records…
AI in Healthcare: Promise and Pitfalls
How artificial intelligence is transforming patient care and advocacy…
💥 Ready to decode your records?
© 2023 The Neeley Daily. All rights reserved.
📣 My Breast Implant Imaging Was Ignored — But the Pictures Tell the Truth
By Neeley Jo Minor | #MedicalNotesMatter Series
When I first got breast implants, I never expected that one day I’d have to decode my own medical records just to be heard.
But that’s exactly where I found myself—alone, exhausted, and dismissed. Doctors told me my pain was “in my head,” even while I struggled with symptoms like fatigue, brain fog, chest pain, and burning nerve sensations.
So I started looking for answers the only place I knew they might still exist: the images.
🖼️ What My Imaging Showed
After years of symptoms, I reviewed two major scans:
1️⃣ CT/MRI Scan – April 2021
Finding: A clear “subcapsular line sign”—a well-known indicator of intracapsular silicone rupture.
What It Means: Silicone had leaked inside the scar capsule surrounding the implant. This is a documented rupture, even if it’s not “emergency level” yet.
2️⃣ Mammogram – May 2021
Finding: Distorted implant shape, shadowing, and unclear breast tissue density.
What It Means: The scan couldn’t read the breast volume or tissue properly—because the implant blocked visibility. The system literally couldn’t calculate it.
And yet… no one called me.
No follow-up.
No plan.
🚩 The Problem with Being “Too Complicated”
Because I had implants, my scans became “technically difficult.”
Because I had symptoms “that didn’t fit one diagnosis,” I was told I was anxious.
Because I questioned it, I was seen as noncompliant.
This is medical gaslighting. And it’s far too common for women with implants.
💬 Why This Matters for You
If you have breast implants and feel something is wrong, trust yourself. Your body knows.
Start asking:
❓ What did my last scan actually show?
❓ Was my breast density readable?
❓ Is there a report of “shell collapse,” “subcapsular line,” or “distortion”?
You may be able to spot red flags before your doctor ever mentions them.
🔎 The Real Cost of Being Ignored
A ruptured implant isn’t just cosmetic. It can:
- Trigger chronic inflammation
- Aggravate mast cell activation
- Mimic autoimmune diseases
- Cause neurological issues
- Be hard to remove once the silicone leaks further
This isn’t just about looks. This is about life.
💡 Takeaway: You Deserve to Be Taken Seriously
Your symptoms are real. Your imaging matters.
And your voice—especially when backed by evidence—is powerful.
That’s why I started sharing my own scans. Not to scare anyone. Not to shame doctors.
But to help someone else realize:
“If no one else is listening—your images still are.”
🧠 Related Posts to Read Next:
🩻 What the Subcapsular Line Sign Means (And Why Doctors Miss It)
📸 Breast Density, Implants, and Missed Diagnoses
🧾 How to Write an Addendum Request for Your Medical Record
💌 Ready to Advocate?
Download my Medical Notes Matter Checklist or use my free templates to review your own records.
Follow @EnblocAI or @SimplyNeeley for tips, templates, and real truth from real women.
📣 Because we are not too complicated.
We’re just tired of being ignored.