
Artificial Intelligence Policy

 

Policy Area

IT Policy Library

Approved Date

October 17, 2024

Approved By

Senior Director Information Security & Technology 

Effective Date

October 17, 2024

Current Version

1.0


I. Overview

Artificial Intelligence governance is a part of an organization’s corporate governance.  It focuses on Information Systems compliance with requirements and on related risk management, helping ensure that the organization properly manages its Artificial Intelligence related risks, projects, service delivery, and compliance requirements.

II. Purpose

Artificial Intelligence governance manages the application of technology to business needs and ensures alignment with the enterprise architecture and compliance requirements.

III. Scope

This policy applies to all employees, contractors, and other personnel acting on Bravida Medical’s behalf.

IV. Policy 

  1. Overview

Artificial Intelligence helps an organization make smarter, faster, and more informed business decisions.  It helps provide enhanced productivity, increased efficiency, and an improved competitive edge.


This Artificial Intelligence Policy exists to:

  • Align the use of Artificial Intelligence with the organization’s business goals and objectives
  • Enable high-quality enterprise Artificial Intelligence planning and management 
  • Define the roles and responsibilities necessary to create and sustain a comprehensive Artificial Intelligence governance framework
  • Enable new strategic capabilities that allow Bravida Medical to operate efficiently and effectively
  • Identify and manage Artificial Intelligence risk and protect Bravida Medical’s Information Systems
  • Help ensure that Artificial Intelligence applications are trustworthy and operate in an accurate, reliable, safe, and non-discriminatory manner

  2. Artificial Intelligence Committee

An Artificial Intelligence Committee (Committee) shall consist of Bravida Medical’s Executive Management and Department Heads.  The Senior Director of Information Security & IT shall oversee the Artificial Intelligence Committee and provide guidance to the other members of the Committee on appropriate artificial intelligence requirements, controls, and related impacts.


  3. Environmental, Social, and Governance Considerations

When implementing and managing Artificial Intelligence solutions, the Committee shall consider environmental, social, and governance impacts.  For more information, see the ESG Policy.


  4. Artificial Intelligence Bill of Rights

Artificial Intelligence applications shall meet the requirements specified in the White House Blueprint for an AI Bill of Rights - Making Automated Systems Work for the American People.  Users of Artificial Intelligence applications shall:

  • Be protected from unsafe and ineffective systems
  • Not face discrimination by algorithms; systems shall be used and designed in an equitable way
  • Be protected from abusive data practices via built-in protections, and have agency over how their data is used
  • Know that an automated system is being used and understand how and why it contributes to outcomes that impact the user
  • Be able to opt-out, where appropriate, and have access to a person who can quickly consider and remedy problems the user encounters

  5. Artificial Intelligence Risk Management Framework

Artificial Intelligence applications shall meet the requirements specified in the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (AI RMF 1.0). 

  • Framing risks.  
    • AI risk management shall offer a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts. Addressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems.
  • Audience.  
    • Identifying and managing AI risks and potential impacts, both positive and negative, requires a broad set of perspectives and actors across the AI lifecycle. Ideally, AI actors will represent a diversity of experience, expertise, and backgrounds and comprise demographically and disciplinarily diverse teams.  The AI RMF shall be used by AI actors across the AI lifecycle and dimensions.
  • AI Risks and Trustworthiness.  
    • For AI systems to be trustworthy, they often need to be responsive to a multiplicity of criteria that are of value to interested parties.  Approaches that enhance AI trustworthiness can reduce negative AI risks.  Characteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.
  • Effectiveness of the AI Risk Management Framework.  
    • Periodically evaluate whether the AI Risk Management Framework has improved the ability to manage AI risks, including but not limited to the policies, processes, practices, implementation plans, indicators, measurements, and expected outcomes.

  6. Approved Artificial Intelligence Applications

The Committee shall prepare and maintain a list of Approved Artificial Intelligence Applications.  Only approved Artificial Intelligence applications shall be used by Staff.


Any member of the Committee may submit a request for an Artificial Intelligence application to be added or removed from the list of Approved Artificial Intelligence Applications.  The Committee shall determine if such request is approved or denied.  When determining if an application shall be approved or denied, the Committee shall consider Artificial Intelligence Application Controls.
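The approval workflow above amounts to checking each tool against the Committee’s allowlist before Staff may use it. A minimal sketch of such a check is below; the application names and the lowercase-matching convention are illustrative assumptions, not part of the policy.

```python
# Hypothetical sketch of enforcing the Approved Artificial Intelligence
# Applications list. The entries below are invented examples, not actual
# Committee-approved tools.

APPROVED_AI_APPLICATIONS = {
    "example-chat-assistant",   # hypothetical entry
    "example-code-completion",  # hypothetical entry
}

def is_approved(application_name: str) -> bool:
    """Return True only if the application is on the Committee's approved list."""
    # Normalize to lowercase so "Example-Chat-Assistant" matches the list entry.
    return application_name.strip().lower() in APPROVED_AI_APPLICATIONS

print(is_approved("Example-Chat-Assistant"))  # an approved tool
print(is_approved("unvetted-ai-tool"))        # not on the list
```

In practice the list itself would live in a managed configuration source maintained by the Committee, not in code.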


  7. Artificial Intelligence Application Controls

The Committee shall ensure that proper Artificial intelligence related controls and procedures are implemented and maintained.  Such controls shall include, but not be limited to, the following.

  • Audits.  
    • The security of the Artificial Intelligence application shall be reviewed and evaluated by an independent source on an annual basis and after an important change or upgrade to the application.  For more information see the Audit Policy.
  • Fit for purpose.  
    • Requirements shall state how the Artificial Intelligence application works, inputs needed, outputs, integrations with other systems, and user interfaces.
  • Input controls.  
    • It is our policy to control and restrict Staff input into Artificial Intelligence applications.
    • We require anonymization of content to avoid disclosing confidential information or giving away Intellectual Property (“IP”) data or data rights.
    • An employee may not, under any circumstances, share any confidential data with an AI platform unless a member of the AI Committee has explicitly authorized that platform as safe for that purpose and confirmed that the data will not be used for modeling or training, or disclosed in any other fashion.  This includes AI features built into commonly used platforms such as Adobe and Microsoft Teams.
    • We restrict or prohibit the use of generative Artificial Intelligence applications by Staff in connection with works in which we want to have enforceable rights.  For example, code generated using AI must not be put into production by itself; it must be human-authored enough to be considered our intellectual property, and the generated code must be audited for compliance and security gaps before being used in a material way.
  • Privacy.  
    • Ensure that privacy-related controls are sufficient and appropriate.
    • Document lawful reasons for processing data and prepare a data protection impact assessment.
    • Ensure the ability to comply with individual rights requests (for more information see the Privacy Policy).
  • Service level.  
    • A service level formally identifies the services to be delivered, including important metrics such as guaranteed uptime and response times, and point of contact for Staff who need assistance.
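The input controls above can be supported by automated screening of prompts before they reach an AI application. The sketch below is a hypothetical illustration: the patterns, the `redact()` helper, and the `[REDACTED]` placeholder are assumptions for the example, not requirements of this policy, and real deployments would use a vetted data-loss-prevention tool rather than a handful of regular expressions.

```python
import re

# Hypothetical input-control sketch: screen Staff prompts for common
# confidential markers and redact them before submission to an AI application.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # SSN-shaped numbers
    re.compile(r"(?i)\bconfidential\b"),      # explicit confidentiality markings
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    """Replace any text matching a confidential pattern with a placeholder."""
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@example.com about the Confidential report"))
# -> Contact [REDACTED] about the [REDACTED] report
```

Pattern-based redaction is a backstop, not a substitute for the Committee authorization required above: it cannot recognize all Confidential Information, so Staff remain responsible for what they submit.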

  8. Artificial Intelligence Staff Controls
  • Access control.  
    • Staff shall only access the Artificial Intelligence Application from locations approved by the Committee.  For more information see the Access Control Policy and Password Policy.
  • Confidential Information.  
    • Staff shall not upload or share any Confidential Information without prior approval from the Committee.  For more information see the Network Security Policy, Security Policy, Security Privacy Controls NIST 800-53 Policy, Systems and Communications Protection Policy, and Third Party Service Providers Policy.
  • Security policies.  
    • Staff shall not share passwords/phrases or other Sensitive Information with third parties.  
    • Staff shall follow our organization’s security policies and best practices.  
    • These include, but are not limited to: 
      • Acceptable Use Policy, Approved Application Policy, Audit Policy, Backup Policy, Business Secrets Policy, Certification and Accreditation Policy, Change Management Policy, Cloud Service Provider Policy, Data Classification Policy, Data Retention Policy, Disposal Policy, Encryption Policy, Ethics Policy, Logging Policy, Password Policy, Patch Management Policy, Physical Access Policy, Protecting CUI NIST 800-171 Policy, Remote Access Policy, Reporting Violations Policy, Risk Management Policy, Third Party Service Providers Policy, Securing Sensitive Information Policy, Security Controls Review Policy, and  Software Development Policy.
  • Training.  
    • Ensure Staff receive proper training on how to use the application in a manner that minimizes risk.

V. Enforcement  

Any Staff found to have violated this policy may be subject to disciplinary action, up to and including termination.

VI. Distribution

This policy is to be distributed to all employees.



Policy History


Version

Date

Description

Approved By

1.0

10/16/2024

Initial policy release

Matt Geary



References:

COBIT EDM01.01, EDM01.03, EDM02.03, EDM03.02, EDM03.07, APO01.03, MEA03.01

GDPR Article 24, 28, 37

HIPAA 164.308(a)(2), 164.308(b)(4), 164.314(a)(2)(i)

ISO 27001:2022 5, 9, A.5

NIST SP 800-37 3.1, 3.7

NIST SP 800-53 All XX-1 controls, PM-24, SI-5

NIST Cybersecurity Framework ID.GV-1-4, DE.CM-1-6, DE.DP-2

PCI A3.1.2