Artificial intelligence (AI) is rapidly being adopted across environmental health and safety (EHS) programs. In this context, AI refers to systems that perform one or more cognitive functions: perceiving conditions (such as computer vision and sensor systems), recognizing patterns, predicting outcomes, generating language, or recommending actions. From camera systems that flag unsafe behaviors to algorithms that forecast injury rates and generative tools that summarize regulations, AI is often marketed as a neutral, tireless safety professional that never looks away.
In practice, however, AI systems inherit the assumptions, blind spots, and data limitations of the humans who design and deploy them. When these systems fail, the consequences are not abstract. They are physical injuries, missed hazards, flawed compliance decisions, and false assurances of safety.
A widely reported example occurred in November 2023 at an agricultural distribution center in South Korea, where a worker inspecting a sensor on a robotic lifting system was fatally crushed when the machine’s vision software failed to distinguish him from the boxes it was programmed to handle. This was not a simple mechanical malfunction. The sensors worked and the code executed, but the system lacked the contextual understanding to differentiate between “product” and “person.” [1]
This incident highlights a critical vulnerability in modern automation: algorithms do not see the world; they classify patterns based on prior training. When classification fails, the outcome can be catastrophic.
The Illusion of Objectivity in AI Safety Systems
One of the most persistent myths surrounding AI in safety is that it is objective. Algorithms do not evaluate hazards the way an experienced industrial hygienist or safety professional does. They identify statistical patterns in historical data. If that data is incomplete, biased, or unrepresentative of real-world conditions, the system will confidently produce incorrect conclusions.
AI-based camera systems designed to detect missing personal protective equipment (PPE) or unsafe proximity to equipment may perform well in controlled environments. However, glare, dust, shadows, unconventional PPE, or atypical body positioning can cause both missed detections and false alarms. When alerts are frequently incorrect, workers develop automation-driven alert fatigue – a new risk pathway created not by human complacency, but by technological overconfidence.
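The arithmetic behind alert fatigue is worth making explicit. In the sketch below, every figure is an assumption chosen for illustration, not vendor performance data: even a detector that is 95% sensitive and 95% specific produces mostly false alarms when true violations are rare.

```python
# Illustrative base-rate calculation; all figures are assumed.
sensitivity = 0.95          # detector flags 95% of true PPE violations
false_positive_rate = 0.05  # and also flags 5% of fully compliant frames
violation_rate = 0.01       # true violations occur in 1% of observed frames

frames = 100_000
true_violations = frames * violation_rate       # 1,000
compliant = frames - true_violations            # 99,000

true_alerts = true_violations * sensitivity     # 950
false_alerts = compliant * false_positive_rate  # 4,950

precision = true_alerts / (true_alerts + false_alerts)
print(f"{true_alerts + false_alerts:,.0f} alerts, {precision:.0%} genuine")
# -> 5,900 alerts, 16% genuine: five of every six alerts are false,
#    which is the statistical root of alert fatigue.
```

When the base rate of real violations is low, precision collapses regardless of how impressive the headline accuracy figures look, and workers learn to tune the system out.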
AI does not understand context. A worker entering a restricted area during routine operations may represent unsafe behavior. The same action during an emergency repair may be necessary and lifesaving. Algorithms enforce rules based on pattern recognition, not situational judgment.
Predictive Analytics and Reporting Bias
Predictive safety platforms claim to forecast injuries by analyzing incident reports, near misses, and behavioral observations. In theory, this allows earlier intervention. In practice, these systems often amplify existing reporting biases.
If near misses go unreported or workers are discouraged from documenting minor incidents, the model interprets the absence of data as the absence of risk. High-risk tasks may appear statistically “safe” simply because events were normalized and never recorded. Conversely, more visible or recently scrutinized work groups may appear disproportionately risky.
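A minimal sketch, with assumed counts and reporting rates, shows how differential reporting can invert the apparent risk ranking:

```python
# Assumed figures: two work groups with different reporting cultures.
true_near_misses = {"maintenance": 200, "warehouse": 80}
reporting_rate = {"maintenance": 0.15,  # events normalized, rarely logged
                  "warehouse": 0.90}    # recently scrutinized, well logged

recorded = {g: true_near_misses[g] * reporting_rate[g] for g in true_near_misses}
print(recorded)  # -> {'maintenance': 30.0, 'warehouse': 72.0}
# A model trained on recorded events "sees" maintenance as the safer
# group, despite it having 2.5x the true near-miss count.
```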
Early adopters of predictive tools have discovered that models frequently prioritize frequency over severity. Numerous minor ergonomic complaints may outweigh a single low-frequency but catastrophic process safety hazard. Human safety professionals recognize that low-probability, high-consequence risks demand disproportionate attention. Many algorithms do not.
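The gap is easy to demonstrate with hypothetical hazards; the frequencies and consequence scores below are assumed for illustration only:

```python
# (annual event frequency, assumed consequence score:
#  1 = first aid, 10_000 = catastrophic process safety event)
hazards = {
    "ergonomic strain reports": (120, 1),
    "forklift near misses": (15, 20),
    "vessel rupture scenario": (0.05, 10_000),
}

by_frequency = sorted(hazards, key=lambda h: hazards[h][0], reverse=True)
by_risk = sorted(hazards, key=lambda h: hazards[h][0] * hazards[h][1],
                 reverse=True)

print("By frequency:", by_frequency)        # ergonomic first, rupture last
print("By expected consequence:", by_risk)  # rupture first
# A frequency-weighted model buries the catastrophic scenario at the
# bottom of its priority list.
```

Even the consequence-weighted ranking understates low-probability catastrophic risk, which is why experienced professionals deliberately weight such scenarios beyond their raw expected value.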
Generative AI in Safety Practice: A New Risk Frontier
For many safety professionals, exposure to AI now occurs primarily through generative large language models (LLMs). These tools are increasingly used to summarize standards, draft safety procedures, generate training materials, and interpret regulatory requirements.
While efficient, generative AI introduces distinct failure modes: LLMs may produce confident but incorrect regulatory interpretations, fabricate citations, oversimplify complex compliance obligations, or omit critical exceptions. Because their language is fluent and authoritative in tone, errors may go undetected unless carefully reviewed by a qualified professional.
In safety and compliance contexts, an inaccurate summary of an Occupational Safety and Health Administration (OSHA) requirement or a misinterpreted exposure limit is not a minor inconvenience; it may influence policy decisions, documentation, or enforcement posture. Generative AI can assist with drafting and information synthesis, but it cannot replace regulatory expertise, professional judgment, or legal accountability.
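One practical control is to treat every regulatory citation in generated text as unverified until a person checks it against the source. The sketch below is an assumed workflow, not a compliance tool; the pattern covers only U.S. CFR-style citations such as 29 CFR 1910.134.

```python
import re

# Matches U.S. CFR citations such as "29 CFR 1910.134".
CFR_PATTERN = re.compile(r"\b\d{1,2} CFR \d{3,4}(?:\.\d+)?\b")

def citations_for_review(llm_text: str) -> list[str]:
    """Extract CFR-style citations so a qualified professional can verify
    each one against the actual regulation. Extraction is not validation:
    a well-formed citation can still be fabricated or misapplied."""
    return sorted(set(CFR_PATTERN.findall(llm_text)))

draft = ("Fit testing is covered by 29 CFR 1910.134; permit-required "
         "confined spaces by 29 CFR 1910.146.")
for c in citations_for_review(draft):
    print("VERIFY AGAINST SOURCE:", c)
```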
Automation Does Not Eliminate Risk – It Redistributes It
Automation in robotics, material handling, and industrial systems has improved efficiency and reduced certain physical exposures. In narrow, repetitive, well-controlled tasks, AI systems may outperform humans in speed and consistency.
However, automation does not remove risk; it redistributes it. Mechanical risks may decrease while systemic, classification, oversight, and governance risks increase. When organizations treat AI as a replacement for hazard analysis rather than as an input to it, failures become inevitable.
AI is trained on historical data. Safety, by definition, is concerned with preventing novel and future harm. Algorithms codify past patterns; they do not anticipate unprecedented failure modes.
A Realistic Role for AI in Health and Safety
AI can be valuable when treated as an assistant rather than an authority. It can surface weak signals, identify trends across large datasets, reduce administrative burden, and support documentation efficiency.
The most effective safety programs treat AI outputs as prompts for professional evaluation, not final determinations. When an algorithm flags risk, a competent safety professional investigates. When it reports no risk, that absence is questioned rather than accepted blindly. AI should augment the foundational principles of anticipation, recognition, evaluation, and control – not override them.
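Expressed as a hypothetical triage rule (the names and structure below are assumptions, not a product design), the principle is simply that no AI output closes an issue on its own:

```python
from dataclasses import dataclass

@dataclass
class AIFinding:
    source: str        # e.g. "ppe_camera", "predictive_model"
    flagged_risk: bool
    detail: str

def triage(finding: AIFinding) -> str:
    """Every output routes to a person: flags trigger investigation,
    and 'no risk' outputs are sampled and audited, never accepted
    as proof of safety."""
    if finding.flagged_risk:
        return f"INVESTIGATE ({finding.source}): {finding.detail}"
    return f"AUDIT SAMPLE ({finding.source}): confirm the silence is real"

print(triage(AIFinding("ppe_camera", True, "possible missing hard hat, Bay 3")))
print(triage(AIFinding("predictive_model", False, "no elevated risk this week")))
```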
Technology Does Not Eliminate Responsibility
Health and safety failures involving AI are rarely failures of technology alone. They are failures of governance, expectation management, and professional oversight. No algorithm can be held accountable during an OSHA inspection, deposition, or incident investigation. Responsibility remains with employers and safety professionals.
As AI becomes more embedded in EHS systems, the need for human expertise becomes more critical – not less. The central question is not whether AI can improve safety. It is whether organizations understand its limits before those limits result in harm.
HETI: Professional Judgment in an AI-Driven Safety Landscape
HETI’s team of certified industrial hygienists and experienced environmental health & safety professionals helps organizations critically evaluate and responsibly integrate AI-based tools into comprehensive EHS programs. Our services include assessing automated system limitations, validating AI-generated safety data through field verification, identifying gaps in hazard recognition, and ensuring that professional oversight remains central to risk management decisions.
By combining technical expertise, regulatory knowledge, and real-world observation, HETI helps clients use artificial intelligence to enhance safety performance without compromising worker protection.
Reference
[1] The Guardian, “Industrial robot crushes man to death in South Korean distribution center,” citing Yonhap News Agency, November 8, 2023. Retrieved January 20, 2026. https://www.theguardian.com/technology/2023/nov/08/south-korean-man-killed-by-industrial-robot-in-distribution-centre
To find out more about HETI’s industrial hygiene and safety services, please contact us.
Daniel Farcas, PhD, CIH, CSP, CHMM
Senior Industrial Hygienist
