Ethics in Artificial Intelligence

With generative artificial intelligence tools being integrated into every aspect of life, dialogue about ethical concerns in this realm is ever-increasing. Regulation and governance are difficult to implement because they are typically enacted in response to ethical situations after they arise, and it is nearly impossible to anticipate every concerning outcome of the array of AI tools.

A group of UT Southwestern faculty, staff, and students convened earlier this year to kick-start conversations around ethical issues in AI. Below is the report that stemmed from this initial workshop.

Discussion Forum (May 14, 2024) 

Session Report 

Artificial Intelligence (AI) is becoming integrated into all aspects of our work, bringing potentially huge transformations in every field of medicine. AI is rapidly evolving, and the guardrails and guidelines for its ethical development and use are only starting to be defined. We convened a forum to bring together voices from across UTSW to examine the dynamic AI landscape, build institution-wide working knowledge of ethical challenges and requirements for AI design and use, and form a network of colleagues engaged in planning a future with responsible and accountable AI. 

Our session started with an introduction round followed by explanations of what AI technology is and defining ethics. Key points are summarized below: 

  • Not everything labeled AI is necessarily AI. Generative AI builds upon machine learning algorithms and simple data-association networks by generating complex content based on prior experiences and data. For example, targeted ads on websites are simple data-association algorithms that capture user metadata to push similar content, whereas ChatGPT draws on historical data to generate new content in response to a task or prompt. 

  • Ethics was formally defined as the use of words and reasoning to work out right and wrong. Ethical analysis is retrospective and can vary across different contexts. The experiences of the past – with all the biases of the past – shape the experience of AI. 

Key questions highlighted for discussion included: 

  • In the context of biomedical research, what should be considered when developing AI models used to perform research, and what should be ethically governed when AI is employed to generate meaningful insights from big biomedical data? 

  • In the context of medical education, what should be taught about ethics and how should the curriculum be reframed? 

  • In the context of patient care, how should the ethical use of AI tools be regulated and governed to prevent undue monetization of private information? 

The main discussion centered on the handling of data and privacy. Could generative AI models be built to ignore personal data points through retrieval-augmented systems? A key ethical concern revolves around data collection and data processing streams; should we then use only aggregate data? Just as society requires educational credentials of people, should we institute certification for AI?  

The discussion group's consensus was that human oversight is, and will always be, necessary for AI, since its experience is secondhand or thirdhand. An institution-wide AI governance committee, or even a dedicated department, could be established. Such entities would be responsible for ensuring version control of AI models, providing institutional credentials to launched AI models, and empowering innovative AI design with transparency and accountability. All attending members agreed that a longer second session is needed to discuss at length the ramifications of AI design and integration across research, clinical care, and education at UT Southwestern. It would be helpful to include executive leaders and institutional legal representatives.