28. Artificial Intelligence (AI) Governance and Use Policy

1. Purpose

This policy establishes governance, security, privacy, and compliance requirements for the development, deployment, and use of Artificial Intelligence (AI) systems at Luma Health.

The purpose of this policy is to:

  • Ensure AI systems at Luma Health are developed and used responsibly and securely
  • Protect customer and patient privacy and maintain HIPAA compliance
  • Prevent unauthorized use of Protected Health Information (PHI) in AI systems
  • Establish safeguards for use of third-party AI models and services
  • Reduce risks associated with AI including data leakage, hallucinations, bias, and misuse

2. Scope

This policy applies to:

  • All employees, contractors, and third-party developers
  • All internally developed AI-enabled products
  • All third-party AI tools, models, APIs, or platforms used in product development or operations

This policy covers:

  • AI features embedded in Luma Health products
  • Internal AI tools used by employees
  • Use of third-party models including but not limited to LLM APIs and hosted AI platforms

3. Definitions

Artificial Intelligence (AI)

Software systems capable of generating outputs such as predictions, recommendations, classifications, or generated content based on input data.

Third-Party AI Models

AI models developed and hosted by external vendors and accessed through APIs or services.

Protected Health Information (PHI)

Individually identifiable health information as defined by the Health Insurance Portability and Accountability Act (HIPAA).

AI System

Any software feature or product component that incorporates machine learning, generative AI, or automated decision-making.

4. Policy Statement

Luma Health develops AI-enabled products using third-party models while maintaining strict privacy protections.

Key principles include:

  • Customer data and PHI are never used to train AI models
  • AI systems must be designed with privacy, security, and transparency by default
  • All AI use must comply with HIPAA, internal security policies, and vendor risk management requirements
  • AI outputs must not be relied upon for medical or clinical decision-making unless specifically validated and approved

5. Prohibited Uses

The following activities are strictly prohibited:

  1. Using customer data or PHI to train AI models unless explicitly approved by Legal, Security, and Compliance and covered under appropriate agreements.

  2. Sending PHI to external AI services unless:
    • The vendor has an executed Business Associate Agreement (BAA)
    • The data flow has been reviewed by Information Security.
  3. Using AI tools that:
    • Retain input data for model training
    • Do not meet company security requirements
    • Have not passed vendor risk review.
  4. Using AI to:
    • Generate clinical diagnoses
    • Replace professional medical judgment
    • Automate medical decisions without regulatory approval.

6. AI Data Handling Requirements

6.1 Training Data Restrictions

The organization does not use customer data or PHI for AI model training.

Training datasets must only include:

  • Public datasets
  • Synthetic datasets
  • Licensed datasets
  • Internal non-sensitive data

Training data must be reviewed to ensure it does not contain:

  • PHI
  • Personally identifiable information (PII)
  • Customer proprietary data

6.2 Input Data Controls

Where AI features accept user input:

  • Systems must implement safeguards to prevent entry of PHI when not required
  • Clear warnings must be displayed when inputs may be processed by AI services
  • Data minimization must be applied.

6.3 Data Retention

AI input and output data must follow existing:

  • Data retention policies
  • Logging and monitoring policies
  • Privacy requirements

Third-party vendors must contractually confirm that:

  • Inputs are not retained for training
  • Data is not shared with other customers

7. Third-Party AI Vendor Requirements

All external AI vendors must undergo Vendor Risk Management review prior to use.

Vendors must demonstrate:

  • Compliance with industry security standards (SOC 2, ISO 27001, or equivalent)
  • Encryption in transit and at rest
  • Data isolation between tenants
  • No training on customer inputs unless contractually approved
  • HIPAA compliance if PHI is processed

If PHI is processed, the vendor must sign a Business Associate Agreement (BAA).

8. Security Controls for AI Systems

AI systems must implement appropriate security controls including:

Access Controls

  • Role-based access to AI development environments
  • Authentication and authorization controls for AI APIs

Logging and Monitoring

  • Logging of AI system interactions
  • Monitoring for abnormal or malicious inputs

Output Controls

  • Guardrails to prevent generation of harmful or inappropriate content
  • Validation of outputs used in product functionality

Prompt Injection Protection

AI systems must implement safeguards against:

  • Prompt injection attacks
  • Data exfiltration attempts
  • Model manipulation through malicious inputs

9. AI Risk Assessment

All AI systems must undergo an AI Risk Assessment before production release.

The assessment must evaluate:

  • Privacy risks
  • Security vulnerabilities
  • Model misuse risks
  • Bias and fairness concerns
  • Data exposure risks
  • Impact on healthcare workflows

Risk assessments must be documented and approved by Information Security and/or Product Management.

10. Transparency and User Disclosure

Luma Health Products that incorporate AI functionality must:

  • Clearly disclose when AI is used
  • Provide appropriate disclaimers
  • Avoid representing AI outputs as medical advice

Users must be informed when:

  • AI-generated content is displayed
  • AI assists with automated tasks or recommendations

11. Human Oversight

AI systems must maintain human oversight mechanisms where appropriate.

Controls include:

  • Human review for high-risk outputs
  • Escalation procedures for incorrect AI outputs
  • Ability to disable AI features if issues are identified

12. Testing and Validation

AI systems must undergo testing prior to release including:

  • Functional testing
  • Security testing
  • Prompt injection testing
  • Data leakage testing
  • Accuracy and reliability testing

Tested can either be manual, automated or a combination of both.

Testing results must be documented, either within an appropriate ClickUp ticket, GitHub PR or Slack thread.

13. Employee Use of AI Tools

Luma Health employees may only use AI tools that have been approved by Information Security.

Luma Health employees must not input the following into public AI tools:

  • PHI
  • Customer confidential information
  • Security-sensitive information

Violations may result in disciplinary action.

14. Incident Response

AI-related security or privacy incidents must be reported to Information Security immediately.

Examples include:

  • Exposure of sensitive data through AI outputs
  • Unauthorized use of AI services *Vendor AI breaches affecting company systems

Incidents will be handled under Luma Health’s Incident Response Plan.

15. Governance and Oversight

AI governance is overseen by:

  • Information Security and Compliance
  • Legal
  • Engineering and product leadership

Responsibilities include:

  • AI risk assessments
  • Vendor reviews
  • Policy enforcement
  • AI usage monitoring

16. Policy Enforcement

Violations of this policy may result in:

  • Revocation of system access
  • Disciplinary action
  • Contract termination (for vendors or contractors)

17. Policy Review

This policy will be reviewed annually or upon significant changes in:

  • AI technologies
  • Regulatory requirements
  • Organizational risk posture.