Published: 2022

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

CATEGORIES

RISK-BASED PROCESS SAFETY ELEMENTS

Research Summary

Proposes “least‑to‑most prompting,” in which the model first decomposes a hard problem into a sequence of simpler subquestions and then solves them in order, with each answer appended to the context so later steps can build on earlier ones. The method generalizes to problems harder than the exemplars shown in the prompt, where standard chain‑of‑thought prompting tends to fall short. This maps closely onto PSM workflows that already rely on structured decomposition (PHA/HAZOP worksheets, LOPA steps, MOC checklists, cause–consequence chains). As a prompt‑template strategy, it discourages “jumping to conclusions” and supports repeatable, stepwise safety reasoning, especially when an analysis spans multiple units, safeguards, or documentation sources.
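The two-stage flow described above (decompose first, then solve sequentially with earlier answers in context) can be sketched as a small prompt-assembly helper. This is a minimal illustration, not the paper's implementation: the `ask` callable stands in for any LLM API, and the prompt wording is an assumption.

```python
from typing import Callable

def least_to_most(question: str, ask: Callable[[str], str]) -> str:
    """Two-stage least-to-most prompting: decompose, then solve in order."""
    # Stage 1: ask the model to break the problem into simpler subquestions.
    decomposition = ask(
        "Break this problem into a numbered list of simpler subquestions, "
        f"one per line:\n{question}"
    )
    subquestions = [line.strip() for line in decomposition.splitlines() if line.strip()]

    # Stage 2: solve each subquestion in order, appending earlier Q/A pairs
    # to the context so later steps can build on earlier answers.
    context = f"Problem: {question}\n"
    answer = ""
    for sub in subquestions:
        answer = ask(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"
    # The answer to the final subquestion resolves the original problem.
    return answer
```

In a PSM setting, `ask` would wrap a real model call, and the decomposition prompt could instead follow a fixed worksheet structure (e.g., HAZOP guideword order) so the stepwise trace is auditable.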

AUTHORS

Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Claire Cui; Olivier Bousquet; Quoc V. Le; Ed H. Chi

CITATIONS

X. Wang et al., "Self-Consistency Improves Chain of Thought Reasoning in Language Models," arXiv:2203.11171, Mar. 2022.