Published: 2022

Large Language Models are Zero-Shot Reasoners

CATEGORIES

RISK-BASED PROCESS SAFETY ELEMENTS

Research Summary

Shows that large language models can perform multi‑step reasoning in a zero‑shot setting by adding a simple instruction such as “Let’s think step by step,” often eliciting chain‑of‑thought intermediate reasoning and large accuracy gains on reasoning benchmarks. For PSM, many tasks require ordered logic (e.g., deviation → causes → consequences → safeguards → recommendations in HAZOP; or change description → affected equipment/docs → hazards → actions in MOC). The paper is a foundational prompt‑engineering result: a small, reusable prompt pattern can materially improve completeness and logical consistency, making it easier to generate structured, auditable analyses that a safety reviewer can follow.
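The paper's method is a two-stage prompt pipeline: first append a reasoning trigger ("Let's think step by step.") to elicit intermediate steps, then feed the model's reasoning back with an answer-extraction cue. A minimal sketch of that pattern, where `call_model` is a hypothetical stand-in for any LLM completion API (not a specific library):

```python
# Two-stage zero-shot chain-of-thought prompting, per Kojima et al. (2022).
# `call_model` is an assumed callable: prompt string in, completion string out.

REASONING_TRIGGER = "Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer is"

def build_reasoning_prompt(question: str) -> str:
    """Stage 1: append the trigger phrase to elicit intermediate reasoning."""
    return f"Q: {question}\nA: {REASONING_TRIGGER}"

def build_extraction_prompt(question: str, reasoning: str) -> str:
    """Stage 2: append the model's own reasoning, then cue the final answer."""
    return f"{build_reasoning_prompt(question)} {reasoning}\n{ANSWER_TRIGGER}"

def zero_shot_cot(question: str, call_model) -> str:
    """Run both stages and return the extracted final answer."""
    reasoning = call_model(build_reasoning_prompt(question))
    return call_model(build_extraction_prompt(question, reasoning))
```

For a PSM workflow, the same two-stage structure could be applied to an ordered HAZOP prompt (deviation, causes, consequences, safeguards, recommendations), with the reasoning stage producing the auditable intermediate analysis.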

AUTHORS

Takeshi Kojima; Shixiang Shane Gu; Machel Reid; Yutaka Matsuo; Yusuke Iwasawa

CITATIONS

T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, "Large Language Models are Zero-Shot Reasoners," arXiv:2205.11916, May 2022.