Algorithmic Management Started in Control Rooms — Long Before AI

When the NSW Government recently introduced amendments to work health and safety legislation to address risks associated with digital work systems, much of the commentary focused on modern technologies such as artificial intelligence, automated workforce management platforms, and algorithm‑driven labour scheduling tools.

These technologies are increasingly used to allocate work, monitor performance, and influence worker behaviour. Regulators have recognised that such systems can create health and safety risks, particularly psychosocial hazards such as stress, cognitive overload, and excessive performance pressure.

But there is an important question that has received far less attention.

What if many organisations have already been using digital systems that allocate and shape work for decades?

Operational control rooms across industries such as electricity, water, transport, and energy have long relied on sophisticated digital systems to monitor assets and guide operational decisions. These systems often include SCADA alarm management platforms, decision‑support tools, automated prioritisation logic, and rule‑based operational workflows.

In other words, many control room environments were practising a form of algorithmic work management long before the current conversation about artificial intelligence began.

The new NSW legislation does not suddenly make these systems unsafe. However, it may prompt an uncomfortable but important question for organisations that operate them.

Have these digital systems ever been evaluated as part of the workplace safety system, or have they only ever been treated as engineering tools?

Digital Systems Already Direct Operator Behaviour

In most operational environments, digital systems play a central role in shaping how work is performed.

Consider the role of a typical SCADA alarm system. The system is designed to monitor thousands of process points and notify operators when abnormal conditions occur. It prioritises events, suppresses others, and presents information in a way intended to guide operator attention.

In practice, this means the alarm system is constantly determining what demands attention and what can be ignored.

During steady operations this may not seem significant. But during disturbances or abnormal events, the alarm system effectively dictates the operator’s workload and decision priorities.

Similarly, many control rooms use decision‑support tools that analyse network conditions, calculate operational limits, or recommend control actions. These tools may not automatically execute decisions, but they can strongly influence how operators respond to system conditions.

Rule‑based automation also plays a role. Alarm suppression logic, automated sequences, and procedural prompts can structure the workflow of control room staff. These mechanisms influence what tasks operators perform and how they respond to emerging situations.
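The suppression and prioritisation logic described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the tag names, the four-level priority scheme, and the parent/child suppression rule are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alarm:
    tag: str                      # process point identifier
    priority: int                 # 1 = critical ... 4 = low (hypothetical scheme)
    parent: Optional[str] = None  # upstream alarm that explains this one

def triage(alarms: list[Alarm], active: set[str]) -> list[Alarm]:
    """Suppress consequential alarms whose explaining parent is already
    active, then sort the remainder so critical events surface first."""
    visible = [a for a in alarms if a.parent not in active]
    return sorted(visible, key=lambda a: a.priority)

# A feed-pump trip explains (and so suppresses) the resulting low-flow alarm.
active = {"FEED_PUMP_TRIP"}
incoming = [
    Alarm("TANK_LVL_LO", priority=3),
    Alarm("FEED_FLOW_LO", priority=2, parent="FEED_PUMP_TRIP"),
    Alarm("FEED_PUMP_TRIP", priority=1),
]
queue = triage(incoming, active)
```

Even a toy rule set like this decides what the operator sees first and what they never see at all, which is precisely the sense in which these systems allocate attention and work.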

Seen through a different lens, these systems are not just monitoring assets. They are shaping how humans interact with complex infrastructure.

In many ways, they already function as digital systems that organise and allocate work.

When Digital Systems Create Workplace Hazards

The recent WHS legislative changes recognise that digital systems can create workplace risks, particularly psychosocial hazards.

Control room environments already provide clear examples of how this can occur.

Alarm floods are perhaps the best‑known issue. When large numbers of alarms are generated during a disturbance, operators can be presented with dozens or even hundreds of alerts in a short period of time. Each alarm competes for attention and may require interpretation or action.

Poor alarm prioritisation can make the situation worse. If critical alarms are buried within large volumes of lower‑priority notifications, operators must expend significant cognitive effort simply to determine what matters most.

Decision‑support systems can also contribute to workload pressure. When multiple advisory tools produce recommendations simultaneously, operators may need to reconcile conflicting guidance while responding to evolving system conditions.

The result can be intense cognitive load, time pressure, and stress.

These challenges have long been recognised within engineering disciplines such as alarm management and human factors design. Standards such as EEMUA 191 and IEC 62682 have been developed to guide the design and governance of alarm systems in order to prevent these kinds of problems.
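As an illustration of the kind of monitoring this guidance describes, the sketch below computes the peak alarm count in a rolling ten-minute window. The threshold of more than ten alarms in ten minutes is the widely cited EEMUA 191 flood benchmark; the timestamps, data, and function names are assumptions made for the example.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
FLOOD_THRESHOLD = 10  # alarms per 10 minutes: widely cited EEMUA 191 flood benchmark

def peak_window_count(stamps: list[datetime], window: timedelta = WINDOW) -> int:
    """Highest number of alarms annunciated within any rolling window."""
    stamps = sorted(stamps)
    best = start = 0
    for i, t in enumerate(stamps):
        while t - stamps[start] > window:
            start += 1  # slide the window forward past stale alarms
        best = max(best, i - start + 1)
    return best

def flooded(stamps: list[datetime]) -> bool:
    return peak_window_count(stamps) > FLOOD_THRESHOLD

# Hypothetical disturbance: 30 alarms arriving 20 seconds apart.
t0 = datetime(2024, 5, 1, 3, 0)
burst = [t0 + timedelta(seconds=20 * k) for k in range(30)]
```

A metric like this is routinely produced for engineering reporting; the open question is whether the same numbers are ever read as evidence about operator workload.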

However, these standards are typically framed as engineering or operational best practice rather than workplace safety requirements.

The regulatory lens may now be shifting.

The Governance Gap Many Organisations May Have Missed

Most organisations that operate complex control systems already have processes for managing technology risks.

Engineering teams rationalise alarms, modify system logic, and improve performance over time. Change management processes govern modifications to control system software. Incident reviews examine operational events.

These activities are essential and often well established.

But they are usually conducted from an operational reliability perspective rather than a workplace safety perspective.

The distinction matters.

If a digital system influences operator workload, decision pressure, or stress levels, it may now fall within the broader category of workplace systems that must be assessed for health and safety impacts.

This raises several uncomfortable but important questions.

  • Has the organisation ever formally assessed whether its alarm system behaviour creates cognitive overload?

  • Are alarm floods treated purely as operational performance issues, or are they recognised as potential workplace hazards?

  • Have decision‑support tools been evaluated to understand how they influence operator workload and stress during abnormal events?

If a regulator or workplace investigator asked how the organisation ensures that digital operational tools do not create unsafe working conditions, would there be a clear answer?

Many organisations may discover that the governance surrounding these systems focuses heavily on engineering performance but provides limited documentation demonstrating that operator wellbeing risks have been assessed.

Why Legacy Systems Are Not Automatically Exempt

One of the reasons this issue may have been overlooked is that many control room systems were implemented long before psychosocial hazards became a major regulatory focus.

SCADA platforms and alarm systems deployed in the late 1990s or early 2000s were primarily designed with reliability, visibility, and operational effectiveness in mind. Human factors considerations were often secondary.

Over time, alarm rationalisation projects and system improvements may have addressed many technical shortcomings. However, these efforts may not have been framed explicitly within a workplace safety context.

In other words, the technology may be mature, but expectations around workplace governance have evolved.

The new legislative focus on digital work systems does not necessarily mean that existing systems are deficient. But it does encourage organisations to re‑examine long‑standing assumptions about how these systems interact with human operators.

A Scenario Worth Considering

Imagine a significant operational disturbance in a power system control room.

Within minutes, the alarm system generates a large number of alerts. Operators must quickly determine which alarms represent real threats to system stability. At the same time, several decision‑support tools begin producing advisory outputs related to system constraints and contingency management.

The operator must interpret these inputs while coordinating responses and maintaining situational awareness.

The pressure is intense.

Now imagine that an investigation later examines the event from a workplace safety perspective. Investigators may ask questions such as:

  • Was the alarm system designed to prevent cognitive overload during disturbances?

  • Were alarm rates monitored and managed according to recognised human factors guidance?

  • Did the organisation assess how digital decision‑support tools influence operator stress and decision pressure?

  • Could the organisation demonstrate that it had considered these risks so far as reasonably practicable?

These are not purely theoretical questions. They represent the kind of inquiry that may arise when regulators examine how digital systems shape workplace conditions.

An Opportunity to Re‑examine Digital Operational Systems

For most organisations, the emergence of digital work system regulation should not be seen as a threat.

Instead, it presents an opportunity to strengthen governance around the digital tools that operators rely on every day.

Many control room environments already apply elements of good practice through alarm rationalisation programs, performance monitoring, and operational review processes. The challenge is ensuring that these practices are recognised and documented within a broader workplace safety framework.

This may involve incorporating human factors analysis into system reviews, monitoring alarm system performance metrics more systematically, and explicitly considering how digital tools influence operator workload and decision making.
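One simple way to make such monitoring systematic is a "bad actor" report: ranking alarm tags by annunciation count over a review period, since a small number of chattering tags often dominates the total load. The sketch below is a minimal illustration under that assumption; the tag names and figures are invented.

```python
from collections import Counter

def bad_actor_report(tags: list[str], top_n: int = 10) -> list[tuple[str, int, float]]:
    """Rank alarm tags by annunciation count over a review period and
    report each tag's share of the total alarm load as a percentage."""
    counts = Counter(tags)
    total = len(tags)
    return [(tag, n, round(100 * n / total, 1))
            for tag, n in counts.most_common(top_n)]

# Hypothetical week of alarm records: two chattering tags dominate the load.
week = (["COMPR_VIB_HI"] * 40 + ["TANK_LVL_LO"] * 25
        + ["PUMP_2_TRIP"] * 5 + ["LINE_4_OC"] * 2)
report = bad_actor_report(week, top_n=3)
```

Reviewing a report like this alongside a human factors assessment turns an existing engineering practice into documented evidence that workload drivers are being identified and managed.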

None of this requires abandoning existing operational practices. It simply means recognising that digital operational systems are part of the overall work environment experienced by control room staff.

The recent NSW legislative changes serve as a reminder that workplace safety is not only about physical equipment and procedures. It also includes the digital systems that shape how people interact with complex infrastructure.

And in many control rooms, those digital systems have been quietly managing work for decades.