Chain of Risks Evaluation (CORE): a structured framework for safer large language models in public mental health (email for draft)
Date:
Large language models (LLMs) have been widely adopted owing to their remarkable ability to understand and generate natural language. However, they also raise important public mental health concerns, including inequity, stigma, dependence, medical risks, and security threats. This Personal View offers a novel perspective grounded in the actor-network framework, clarifying the technical architectures, linguistic dynamics, and psychological effects that underlie human-LLM interactions. Building on this theoretical grounding, we identify four types of risks of increasing difficulty to identify and mitigate: universal, context-specific, user-specific, and user-context-specific risks. Correspondingly, we propose the chain of risks evaluation (CORE), a structured framework for assessing and mitigating the risks associated with LLMs in public mental health. By treating the development of responsible LLMs as a continuum extending from technical to public efforts, we summarize technical approaches and potential contributions from psychiatrists to evaluate and regulate risks in human-LLM interactions. We call for crucial efforts from psychiatrists, including collaboration with LLM developers, empirical studies, development of guidelines for LLMs, and public education.