Digital Seminar

Accountable Innovation: AI and Mental Healthcare


Speaker: Henry Xiao, PhD
Duration: 2 Hours
Copyright: 12 Nov, 2025
Product Code: POS150425
Media Type: Digital Seminar
Access: Never expires.


Description

This is a 2-hour, CE-eligible presentation by Henry Xiao, PhD, a licensed clinical psychologist and the assistant director of operations for Penn State’s Counseling and Psychological Services.

Generative AI is continuously evolving and remains a hot topic across many sectors. At the same time, there is uncertainty and well-warranted concern about its ethical implementation. For many, the idea of using GenAI in personal or professional settings remains uncomfortable and perplexing. With that in mind, this presentation will focus on understanding GenAI, especially “chatbots” (large language models such as ChatGPT, Gemini, and Claude), why they are so popular with the public (demonstrated through use cases), and the myriad ways AI may be affecting us all.

We will explore how GenAI works, discuss major ethical issues, and demonstrate cases that you can independently test immediately. This talk will be particularly helpful for anyone interested in exploring personal or professional use of these tools while staying informed about their limitations. Even if you have never used AI, or remain highly skeptical of it, this talk will increase your understanding and awareness of how AI is already shaping the world around us.

This presentation will feature unsponsored example prompt demonstrations using free versions of multiple GenAI tools to highlight active personal and professional uses of the technology.

CPD

Disclosure of Program Co-Sponsorship

This program was developed through the joint providership of PESI, Inc. and Center for Collegiate Mental Health (CCMH).

For a record of activity attendance and completion, please log into your account on pesi.com. For additional record questions or concerns, please contact our customer service team at www.pesi.com/info or 1-800-844-8260.


Planning Committee Disclosure - No relevant relationships

All members of the PESI, Inc. planning committee have provided disclosures of financial relationships with ineligible organizations and any relevant non-financial relationships prior to planning content for this activity. None of the committee members had relevant financial relationships with ineligible companies or other potentially biasing relationships to disclose to learners.  For speaker disclosures, please see the faculty biography.




This online program is worth 2.0 hours CPD.



Handouts

Speaker



Henry Xiao, PhD, is the assistant director of operations at Penn State’s Counseling and Psychological Services (CAPS), where he merges clinical expertise with technological innovation to enhance mental healthcare for college students. Henry is a licensed clinical psychologist and has conducted extensive research on psychotherapy process and outcomes. For over a decade, he has been integrating data-driven decision-making into college counseling systems, through his graduate training at Penn State University and his work with the Center for Collegiate Mental Health (CCMH). In his current role at CAPS, he oversees the implementation of technology and data security, works closely with the University’s IT team to plan and implement technology changes, and maintains a strong connection to CCMH, now serving as a research advisor, ensuring that CAPS’ mental health services remain both effective and up to date. His passion for technology spans both professional and personal interests.

 

Speaker Disclosures:
Financial: Henry Xiao has an employment relationship with Pennsylvania State University. He receives a speaking honorarium from PESI, Inc. He has no relevant financial relationships with ineligible organizations.
Non-financial: Henry Xiao is an ad hoc reviewer with Cogent Mental Health and Psychotherapy Research.

 


Objectives

  1. Identify key ethical issues and considerations of personal and professional usage of AI.
  2. Summarize how AI works and is currently being used in the mental health field.
     

Outline

Rationale: Why this talk and what to expect

  • Disclaimer: the speaker is not an AI engineer or programmer, nor are they paid/sponsored by any AI company
  • All demos in the presentation were created using free versions of the cited websites
  • Speaker’s identities informing the talk (the Pope is also exploring AI, indicating international/global importance)
  • Zoom questions to assess audience familiarity with AI; roughly 1-2 min each, with results shared live
  • Discussion of some informed beliefs about AI that the speaker holds, based on research and professional/personal exploration (to set context for the talk)
  • Learning Objectives 

Utilization statistics: Who is using AI, and why is it important?
Additional note about sources: each use of a tool is cited and dated, and the bibliography contains a mix of empirical research and more up-to-date articles/sources.

  Surveys suggest a majority of people worldwide now use AI weekly

  • True usage likely differs, given the fast-changing nature of the field
  • Younger individuals use with higher frequency
  • Example of ChatGPT use and speed of adoption
  • Who is using AI: international geopolitics and the importance of AI

AI Primer: What is AI, and how does it work?

  • Introduce and speak about AI -> Generative AI -> Large Language Models
  • Large Language Models introduced as a main focus and public phenomenon
  • How do LLMs work? Emphasis on pattern prediction without true “understanding” of truth vs fiction
  • Baby and Generative AI analogy: broadly speaking, neither truly “understands” why they do what they do, but they still react with accuracy
  • Examples of commonly used LLMs; identifying ChatGPT, Claude, Gemini, DeepSeek, and Perplexity 

Ethics and Considerations: What are the current concerns with AI across sectors?

  • DeepSeek January 2025 release: demonstrating “chain of thought” and unveiling the “black box” in an accessible way
  • DeepSeek Tiananmen Square censorship, and the power of the owning company to determine alignment with values/safety
  • DeepSeek release as a good example of multiple ethical considerations, including geopolitics and economics, censorship and guardrails, and mass public use
  • Agenda for categories of Ethical Considerations: there is much to talk about; the hope is to generate some thought around these areas, as there is not enough time to fully explore each
  • GIGO and Bias
    • LLM/GenAI output is wholly dependent on the data it is trained on
    • Example of the impact on non-English speakers, and the example of clocks stuck at 10:10
  • Privacy, Responsibility, and Data Security
    • Beyond geopolitical value, identity theft as a very strong reason to not put sensitive data into a public LLM
    • Health Care Records (HIPAA level data) particularly still valuable
    • Ethics Boards growing to accommodate responsible use of AI, American Psychological Association as an example
    • International organizations (United Nations conference)
    • Red Queen Hypothesis: the technology rapidly advances, so “staying still” is not a viable option
  • Ecological and Energy Costs
    • Electrical upkeep, high training and inference costs
    • Part of the advance is also decreasing cost by increasing efficiency, but still unequal distribution of cost
    • Human rights and ethics: RLHF labor in Kenya
  • Humanity: Critical Thinking
    • Higher ed and education in general- LLM impacts
    • Example of how complexly an LLM can answer a question
    • LLMs as making “knowledge” more accessible without necessarily requiring “wisdom”
    • Microsoft self-report study on LLM use and critical thinking
    • Small example of everyday impact; Google’s AI Overview
  • Humanity: Social Connection
    • Example of a voice conversation with AI
    • Study of public forums (e.g., Reddit, Quora) and how users turn to LLMs for companionship
    • More in Mental Healthcare section
  • Humanity: Creativity/Artistry
    • Music example (if time)
    • Video example
    • Haiku: personally written, but now impossible to prove
    • Discussion on what constitutes “art” as defined as a human quality
    • Summary of research findings related to human preference    
  • Mental Healthcare
    • Current uses: no free LLMs offer HIPAA-level security; you’d need to pay for proper security
    • Chatbots for therapy: pros and cons, including studies and anecdotal Reddit comments
    • Parasocial use, including the lawsuit against Character.AI and Google
    • CCMH data- increasing social anxiety, anxiety, and the role of avoidance
    • Discussion of Harlow’s monkey studies and how AI might serve as the “cloth mother” subjectively for individuals in difficult situations
    • How do we continue to discuss distress tolerance?

Examples:

  • Broad categories of use, as defined by Microsoft study
  • Example of personal use, “everyday” prompts
  • Example of personal/professional use: how do I use an LLM?
  • Example of professional use, generation of a client worksheet
  • Example: language comparison of ChatGPT and Claude; importance of personal exploration and model alignment
  • Additional Suggestions to explore AI

Time for Questions
 

Target Audience

  • Addiction Counsellors
  • Certified Case Managers
  • Counsellors
  • Educators
  • Marriage and Family Therapists
  • Nurses
  • Physicians
  • Psychologists
  • Social Workers
  • Art Therapists

Reviews

Satisfaction Guarantee
Your satisfaction is our goal and our guarantee. Concerns should be addressed to info@pesi.co.uk or call 01235847393.
