Why Blackbox AI is a nonstarter in Healthcare

Robert Connely, Global Industry Market Leader, Healthcare, Pegasystems

And what “glass box” governance really means for value-based care

Healthcare does not have a data or analytics problem; it has an execution problem. For years, payers and providers have invested millions in technologies to analyze data, predict risk, flag at-risk patients, process claims, and respond to disputes. Yet turning these powerful insights into consistent, timely, and compliant actions remains the industry's greatest challenge. Today, this work falls on people who manage contracts and processes.

The promise of AI, especially generative and agentic models, is to close this gap. These technologies can do more than analyze data: they can reason through situations, draft communications, and initiate the very workflows that improve outcomes. Rather than replacing people, AI takes on routine tasks, freeing people to focus on the critical, high-value engagements that require a human touch.

But this power comes with risk. ChatGPT, Claude, Gemini, and other LLMs operate like a "black box." We can see the input (the prompt) and the output (the summary, decision, or action), but the internal process, how the AI reasoned, is hidden from view, making it impossible to audit or explain. That is fundamentally incompatible with highly regulated systems like healthcare. If AI is a black box, it's a nonstarter.

In a recent conversation with leaders from Humana, CVS, Oscar Health, and AmeriHealth Caritas, we discussed the future of value-based care. One theme surfaced immediately: The technology choices we make right now will either humanize healthcare or further mechanize it.

"We are at a crossroads where we can use technology to either further mechanize medicine or to finally rehumanize it."


Harnessing generative and agentic AI in this way could address these problems and free people to do more. But in a regulated environment like healthcare, every action requires accountability and a clear, auditable trail. If an AI system influences a decision, we must be able to answer:

  • Why did it act?
  • What specific data did it use?
  • What specific policy or rule did it follow?

A black box, by its very nature, cannot provide these answers.

The "glass box" approach: Case management as the governance framework

A safer way to deploy generative and agentic AI is within a "glass box." Rather than exposing the free-form prompts normally associated with AI chatbots, this approach embeds AI in a workflow case, making every decision and action visible, auditable, and governable. Within this framework, AI is not a free-thinking, autonomous entity. Instead, it is deployed as a series of specific, well-defined "skills" that are called at precise moments in the workflow to assist people, not replace them.

This approach creates a glass box by default:

  1. Traceability is built-in: Every AI action is just a step in the case. When an AI skill (Predict Readmission Risk) is invoked, the case automatically records the inputs it received, the score it produced, and the next step the workflow took as a result. The entire lineage of the decision is captured in an immutable audit trail.
  2. Constraints are the default: AI skills operate within the rigid "guardrails" of the case workflow. An agentic AI can't "improvise" a new step in a regulated prior authorization process. It can only execute the specific task it was assigned at that specific stage, such as Summarize Attached Clinical Notes. Its actions are constrained by the pre-defined, compliant business process.
  3. Escalations are explicit: The case workflow defines the escalation paths. A business rule can be set that says, "IF the AI-predicted Readmission Risk is GREATER THAN 80%, THEN create a high-priority assignment for a human Care Manager." The decision to involve a human is not left to the AI's discretion; it's an explicit, governable rule within the glass box.
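The three properties above can be sketched in a few lines of code. This is a minimal illustration, not a real case-management platform: all names (`Case`, `run_skill`, `escalate_if`, `predict_readmission_risk`) are hypothetical, and the risk model is a stand-in. The point is structural: the skill runs only as a recorded step inside the case, and escalation to a human is an explicit rule, not the AI's choice.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditEntry:
    step: str      # which workflow step ran
    inputs: dict   # exact inputs the step received
    output: object # what it produced

@dataclass
class Case:
    """Hypothetical workflow case: every action flows through it."""
    case_id: str
    audit_trail: list = field(default_factory=list)
    assignments: list = field(default_factory=list)

    def run_skill(self, name: str, skill: Callable, inputs: dict):
        # Traceability: the skill can only run as a case step, and the
        # case records its inputs and output in the audit trail.
        output = skill(**inputs)
        self.audit_trail.append(AuditEntry(name, inputs, output))
        return output

    def escalate_if(self, condition: bool, assignment: str):
        # Explicit escalation: routing work to a human is a governable
        # business rule in the workflow, not the AI's discretion.
        if condition:
            self.assignments.append(assignment)
            self.audit_trail.append(
                AuditEntry("escalation", {"rule": assignment}, True))

def predict_readmission_risk(age: int, prior_admissions: int) -> float:
    """Stand-in for a predictive AI skill; returns a score in [0, 1]."""
    return min(1.0, 0.1 * prior_admissions + 0.005 * age)

case = Case("CASE-001")
risk = case.run_skill("Predict Readmission Risk",
                      predict_readmission_risk,
                      {"age": 72, "prior_admissions": 6})
case.escalate_if(risk > 0.80, "High-priority assignment: Care Manager")

print(f"risk={risk:.2f}, assignments={case.assignments}")
```

Because the skill is invoked only through `run_skill`, it cannot improvise a new step, and the full lineage of the decision, inputs, score, and the rule that triggered human review, is available for audit.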

From insight to outcomes

Value-based care requires consistent execution, supported by trusted tools. Today, people must manage the relationships and activities that make a VBC arrangement successful. AI offers great promise, but its black-box nature makes it inherently untrustworthy for core operations. Still, the value of using AI is enormous.

A better model is one that uses a glass-box approach, built on a foundation of case management. By operating AI within a workflow case structure, we enable a framework to harness the incredible power of AI while limiting its unpredictability, lack of governance, and propensity for hallucinations.

In healthcare, AI doesn't need to be mysterious. It needs to be accountable. Glass-box governance isn't a nice-to-have. It's the minimum bar for scaling AI responsibly and getting work reliably done.

Ready to close the execution gap? Dig deeper with the Agentic AI for Value-Based Care ebook or build a blueprint to see what this looks like for your workflows.

Tags

Industry: Healthcare
Topic: Intelligent automation
Product area: Platform
Challenge: Operational excellence
Challenge: Customer engagement
Challenge: Customer service

About the author

Robert Connely, Global Industry Market Leader, Healthcare, Pegasystems: A successful healthcare technology innovator and entrepreneur, Robert brings 30+ years of experience to his role at Pega. He believes that applying AI decisioning and automation to the complex processes between providers, payers, and people is essential to realizing value from current digital investments, and to using them to improve member experience, quality of care, and operational efficiency while reducing healthcare costs.
