Published: 2025

Safety analysis in the era of large language models: A case study of STPA using ChatGPT

CATEGORIES

RISK-BASED PROCESS SAFETY ELEMENTS

Research Summary

Applies ChatGPT to Systems-Theoretic Process Analysis (STPA) and experimentally measures how interaction style, input semantic complexity, and prompt engineering affect the quality of the hazards, unsafe control actions (UCAs), and safety constraints produced. The authors report that STPA-specific prompt engineering yields statistically significant improvements and more pertinent results than domain-agnostic prompt guidelines, while fully automated use remains unreliable. The work is highly relevant to Process Safety Management: STPA is a structured hazard-identification method, and the findings generalize to other structured analyses (HAZOP, What-If, LOPA). In practice, this means using domain-specific prompt templates, enforcing an explicit output structure, and keeping human oversight in the loop. The paper provides evidence-based guidance on when and how prompt engineering improves safety-analysis quality and where to place human-in-the-loop controls.
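To make the "domain-specific template with explicit output structure" recommendation concrete, here is a minimal illustrative sketch in Python. It is not the paper's actual prompt: the section names, labels, and wording below are assumptions chosen to show the general shape of an STPA-specific prompt, as opposed to a generic "find the hazards" request.

```python
# Illustrative sketch only; the template text is an assumption, not the
# prompt used in the paper. It demonstrates two of the paper's themes:
# STPA-specific framing and an explicitly structured output format.

STPA_TEMPLATE = """You are performing STPA (Systems-Theoretic Process Analysis).

System description:
{system_description}

Control structure:
{control_structure}

Answer in exactly this structure:
1. Losses: one per line as "L-<n>: <description>".
2. Hazards: one per line as "H-<n>: <system state> [linked losses]".
3. Unsafe Control Actions: for each control action, consider the four UCA
   types (not provided; provided; wrong timing or order; stopped too soon
   or applied too long) and list "UCA-<n>: <description>".
4. Safety Constraints: one per line as "SC-<n>: <constraint>" derived
   from each UCA.
"""


def build_stpa_prompt(system_description: str, control_structure: str) -> str:
    """Fill the STPA-specific template. Per the paper's findings, the model's
    output should still be reviewed by a human analyst before it enters any
    safety documentation."""
    return STPA_TEMPLATE.format(
        system_description=system_description.strip(),
        control_structure=control_structure.strip(),
    )
```

The fixed section headings and ID formats (L-n, H-n, UCA-n, SC-n) give the model an unambiguous target structure and make the response easy to parse and audit, which is where a human-in-the-loop review would attach.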

AUTHORS

Yi Qi; Xingyu Zhao; Siddartha Khastgir; Xiaowei Huang

CITATIONS

Y. Qi, X. Zhao, S. Khastgir, and X. Huang, "Safety analysis in the era of large language models: A case study of STPA using ChatGPT," Machine Learning with Applications, vol. 19, p. 100622, Mar. 2025, doi: 10.1016/j.mlwa.2025.100622.