Bridges Curriculum Generative Artificial Intelligence Usage Policy

Purpose

This policy provides guidance for the ethical and secure use of generative artificial intelligence (AI) tools in undergraduate medical education.

Overview

Generative AI refers to a category of artificial intelligence (AI) algorithms that generate new outputs, including text, images, sounds, animation, 3D models, and other types of data, typically through a chatbot (prompt) interface. Generative AI enables users to create content quickly and easily from a variety of inputs, with growing benefits to education, clinical care, research, and advocacy. Many generative AI tools have been trained on massive amounts of text data and can produce human-like text and responses; these are referred to as large language models (LLMs).

At UCSF, in collaboration with our Privacy, Procurement, and Legal teams, we now offer a generative AI tool, Versa Chat, hosted in UCSF’s Microsoft Azure cloud environment using the Azure OpenAI service. For more information, please see: ChatGPT/Large Language Models (LLM)/Artificial Intelligence (AI).

As the use of generative AI tools continues to expand, students, GME trainees, staff, and faculty must understand how to use these tools safely and appropriately.

Related LCME Standards

5.9         Information Technology Resources/Staff
7.4         Critical Judgment/Problem-Solving Skills
7.7         Medical Ethics

Principles

The following principles guide the policy and recommendations for AI usage in the Bridges Curriculum. To employ AI safely and effectively in medical education and practice, users must: 

  1. understand and adhere to UCSF and UC system-wide policies and expectations on AI use, 
  2. have sufficient proficiency in the subject area to verify the accuracy of the output, 
  3. understand and acknowledge potential biases and limitations of information produced by generative AI tools, and 
  4. follow processes for documentation and appropriate attribution of AI use. 

Policy

  1. Use of any AI tools must comply with the policies and procedures outlined by the applicable campus, department, and health system or clinical setting. 
  2. The Versa generative AI tool (UCSF’s platform) is approved for use by students, faculty, and staff, including for patient care activities at UCSF Health sites, because it is HIPAA compliant and approved for UCSF data, including patient information. While use of Versa is allowed at all sites, use with patient data is currently permitted only at UCSF Health sites.
  3. Use of any AI tools with sensitive UCSF data (including identified or de-identified patient information; personnel, confidential, faculty-owned/copyrighted, and other proprietary materials; and otherwise sensitive UCSF data) is prohibited on commercial platforms (e.g., ChatGPT, Bard, Bing, Llama).
  4. The use of AI tools is prohibited for activities in which students are evaluated as representing their own knowledge or skills, unless the faculty explicitly grants permission. Students are responsible for verifying with the faculty whether the use of generative AI tools is permissible. Violations of this policy will be considered academic misconduct, underscoring the seriousness of adherence to these guidelines.
  5. The student is responsible for reviewing all AI-generated text for accuracy and appropriateness.

Accountable Dean or Director: Associate Dean for Assessment, Improvement, and Accreditation

Approval and Governing Body: 12/10/2024, Committee on Curriculum and Educational Policy