Minimising the Risks of Major Industrial Accidents

The UK Government has recognised large industrial accidents as a major risk to national infrastructure. Serious accidents (and near misses) still regularly occur worldwide in all ‘high-hazard’ industries. They have the potential for major loss of life, environmental damage, and a massive impact in terms of lost production, company value, and reputation.

About the research

At first sight, major ‘events’ such as the Columbia Shuttle disaster, the Texas City oil refinery accident and the collapse of a pedestrian walkway at the Port of Ramsgate may appear to have little in common. They occurred in different industrial settings, involved very different engineering failures, and happened in different operational contexts. However, analysis of the findings from the investigations that took place following these disasters reveals significant similarities in the organisational and cultural precursors.

Various well-established and widely used tools are available to address engineering and human performance failures. However, the recurrence of these organisational and cultural failures in events spanning many decades suggests that current preventative approaches are not sufficient. New approaches and associated tools are required to minimise the organisational and cultural precursors leading to events.

This is being addressed by multidisciplinary research at the Safety Systems Research Centre at the University of Bristol, working closely with Dr Andrew Weyman of the Psychology Department at the University of Bath, so that both systems engineering and social science perspectives are brought to bear. Industry and regulators have supported the research and remain involved.

Twelve major events have been studied based on published investigation reports, including several from the petrochemical and nuclear industries, and some from transport and major civil engineering projects. This has allowed common organisational and cultural findings to be brought together based on actual experience, and should enable this important and sometimes neglected area to be addressed more systematically and knowledge to be shared more readily between industries.

Identifying vulnerabilities

The recurring factors identified have been grouped under ten headings, or ‘themes’, observed across the events studied:

1. Leadership issues
2. Operational attitudes and behaviours (operational ‘culture’)
3. Safety management systems
4. Impact of the business environment (often commercial and budgetary requirements)
5. Oversight and scrutiny
6. Competence and training (at all levels)
7. Risk assessment and management (at all levels)
8. Organisational learning
9. Communication failures
10. Supply chain (management of contractors)

Shortfalls in safety leadership have been a precursor to nearly all of the events studied. If excellence in safety is to be achieved, it is vital that leaders at all levels ‘set the tone at the top’ and reinforce this through their own actions and visible commitment. In many of the events, commercial pressures (such as the need to complete a project to a very tight schedule or to carry out major organisational change) have led to shortcuts being taken and to other deficiencies.

Good safety performance requires a well-understood management system that makes clear who is accountable for what, and in which the workforce follows clear and respected procedures. In some of the events studied, a culture of ‘casual compliance’ had developed over time without management awareness or action. Even where procedures are followed, however, there is a danger that errors have built up in them, often as a consequence of changes elsewhere in the plant or process. This calls for continuous vigilance and a good understanding of the wider system.

The build-up of problems such as these commonly reflects a shortfall in safety culture within the organisation. This is often characterised by a failure to adopt a questioning attitude, breakdowns in communication, and a failure to learn from events both within and outside the organisation. Such a shortfall has also led to failures to assess risks in plant or system design and construction, as well as in operations. These shortcomings have frequently been exacerbated by reductions in training and competence, often as a result of budgetary pressures.

In some of the events studied, contractors have been used to carry out major tasks. Contracts have not always been formulated in such a way as to encourage the reporting of emerging issues and the client-contractor interface has been poorly managed.

Finally, providing strong oversight and scrutiny in the organisation should allow the shortfalls discussed above to be addressed before major problems develop. In many cases, this has been absent or weak, or findings have not been addressed or fully implemented.

These ten themes are being followed up by developing statements of good practice against which organisations can compare their own ‘expectations’ for performance in each area. The intent is that these expectations are put into operational practice at all levels, from the boardroom to the workplace. To check that this occurs, ‘penetrating’ questions are being developed to enable organisations to explore whether ‘reality aligns with expectation’. Use of these statements and questions will help organisations to identify their vulnerabilities.

Events studied

1.  Port of Ramsgate walkway collapse (UK, September 1994)
2.  Heathrow Express NATM tunnel collapse during construction (UK, October 1994)
3.  Longford gas plant explosion (Australia, September 1998)
4.  Tokai-Mura criticality accident (Japan, September 1999)
5.  Hatfield railway accident (UK, October 2000)
6.  Davis-Besse nuclear reactor pressure vessel corrosion event (USA, February 2002)
7.  Loss of the Columbia Shuttle (USA, February 2003)
8.  Paks nuclear plant fuel cleaning event (Hungary, April 2003)
9.  Texas City oil refinery explosion (USA, March 2005)
10. Loss of containment incident at the THORP reprocessing plant, Sellafield (UK, April 2005)
11. Nimrod XV230 air crash (Afghanistan, September 2006)
12. Buncefield oil storage depot explosion (UK, December 2005)

Fire following the explosion at the Texas City oil refinery, 2005. Courtesy of the US Chemical Safety and Hazard Investigation Board

Addressing vulnerabilities

New systematic approaches are also needed to address these vulnerabilities. However, the analysis shows that such approaches will need to model complex processes. Contributory factors exist within a complex network of causes, and thus what appears to be a straightforward change (such as improving a procedure) can lead to unintended consequences.

A simple example illustrates this. A technique known as Causal Loop Modelling is used to depict the interactions between causal factors and the potential consequences of intended changes. It can explain why consequences can be subtle or hidden, show why time lags can be important, and expose unforeseen long-term trends.

The figure below shows how the approach can be used to analyse the possible consequences of actions to improve learning by increasing the number of ‘events’ being reported.

The arrows represent causality. An ‘S’ means a similar change is caused (i.e. an increase causes an increase and a decrease causes a decrease); an ‘O’ means an opposite change is caused. The right-hand loop shows that more reporting leads to more investigations and more corrective actions. Unless carefully controlled, prioritised and resourced, this may lead to a significant increase in workload, and as workload rises, the number of visible improvements and completed actions may fall because people cannot cope. This can lead to disillusion and cynicism among the workforce, and to a decrease in efforts to report events. Thus a worthwhile initiative can leave the organisation worse off than before it was launched, unless actions are taken at the outset to mitigate these unwanted consequences.
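To make the loop structure concrete, the short Python sketch below represents the reporting example as a set of signed causal links and runs a crude time-stepped iteration of the loop. The node names, link polarities, rates and the size of the ‘reporting boost’ are illustrative assumptions made for this briefing, not values or a model taken from the research; the sketch simply shows how a single ‘O’ link makes the chain a balancing loop, and how, with a delay, a push for more reporting can be eroded as workload builds and morale falls.

# Illustrative sketch only: node names, polarities and parameters are assumed
# for demonstration and are not taken from the study described above.

LINKS = [
    # (cause, effect, polarity): 'S' = change in the same direction, 'O' = opposite
    ("reporting", "investigations", "S"),
    ("investigations", "corrective_actions", "S"),
    ("corrective_actions", "workload", "S"),
    ("workload", "visible_improvements", "O"),   # overload delays visible completion
    ("visible_improvements", "workforce_morale", "S"),
    ("workforce_morale", "reporting", "S"),      # disillusion suppresses reporting
]

def loop_polarity(links):
    # A closed loop with an even number of 'O' links is reinforcing; odd is balancing.
    return "balancing" if sum(p == "O" for _, _, p in links) % 2 else "reinforcing"

def simulate(steps=20, reporting_boost=1.0, capacity=5.0):
    # Crude time-stepped iteration of the loop, starting from a nominal level of 1.0
    # for every variable and applying a sustained push for more reporting.
    state = {"reporting": 1.0, "investigations": 1.0, "corrective_actions": 1.0,
             "workload": 1.0, "visible_improvements": 1.0, "workforce_morale": 1.0}
    history = []
    for _ in range(steps):
        nxt = dict(state)
        nxt["reporting"] = state["workforce_morale"] * (1.0 + reporting_boost)
        nxt["investigations"] = state["reporting"]
        nxt["corrective_actions"] = state["investigations"]
        nxt["workload"] = state["corrective_actions"]
        # More workload relative to capacity means fewer visibly completed improvements.
        nxt["visible_improvements"] = max(0.0, 1.0 - state["workload"] / capacity)
        # Morale adjusts gradually towards the level of visible improvement.
        nxt["workforce_morale"] = 0.5 * state["workforce_morale"] + 0.5 * nxt["visible_improvements"]
        state = nxt
        history.append(round(state["reporting"], 2))
    return history

print("Loop polarity:", loop_polarity(LINKS))   # one 'O' link, so the loop is balancing
print("Reporting over time:", simulate())

Running the sketch prints ‘balancing’ for the loop polarity and a reporting trace that spikes at first and then falls back part of the way (with some oscillation) as the extra workload feeds through, with a lag, into fewer visible improvements and lower morale. It is only a toy illustration of the time-lag effect; the real dynamics discussed above are considerably richer.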

Diagnosing current vulnerabilities based on the ‘library’ of findings obtained from the events studied, combined with new approaches to plugging vulnerabilities, can help to make organisations more resilient to major events in all ‘high-hazard’ industries.


‘The importance of investigating and understanding the organisational causes of accidents cannot be overstated’

(Page 459, ‘Inquiry into the loss of RAF Nimrod XV230 - a failure of leadership, culture and priorities’, Charles Haddon-Cave QC, HMSO, 2009)

Policy implications

• Organisations should move towards a more holistic approach to risk management policy, one which takes account of behavioural and psychological responses to change.

• To achieve this, it is necessary to use a ‘whole system’ approach that engages staff at all levels, recognising their different approaches, aspirations and levels of motivation, and that anticipates their different reactions to proposed initiatives, thereby minimising unexpected ‘knock-on’ effects.

• Regulatory bodies will benefit from developing a greater appreciation of these organisational and cultural factors, and will then be better able to scrutinise new proposals and to hold duty holders to account when failings occur.

• In investigating ‘events’ there is sometimes a tendency to identify an immediate ‘cause’ (e.g. people were not sufficiently competent) without asking ‘why?’. By exposing the mechanisms of underlying organisational and cultural factors, our research can provide a basis for deeper, more effective investigations.

Finally, the approach we are developing may apply more widely - for example, when failures occur in areas such as the financial sector or in health and social care.

Further information

• Taylor, R. H., van Wijk, L. G. A., May, J. H. M. and Carhart, N. J. (2016) A Study of the Precursors Leading to ‘Organisational’ Accidents in Complex Industrial Settings, Process Safety and Environmental Protection (IChemE), Vol. 93, 50-67.

• Taylor, R. H., May, J., Weyman, A. and Carhart, N. J. (2017) Understanding Organisational and Cultural Precursors to Events, Forensic Engineering (ICE), 170(3), 1-10.

Contact the Researchers

Dr Richard (Dick) Taylor, Visiting Professor, Safety Systems Research Centre, Department of Civil Engineering, University of Bristol, Queens Building, University Walk, BS8 1TR
email: Richard.Taylor@bristol.ac.uk

Dr John May, Reader, Director of the Safety Systems Research Centre, South West Nuclear Hub, Department of Civil Engineering, University of Bristol, Queens Building, University Walk, BS8 1TR
email: J.May@bristol.ac.uk

Dr Neil Carhart, Lecturer in Infrastructure Systems, Department of Civil Engineering, University of Bristol, Queens Building, University Walk, BS8 1TR
email: Neil.Carhart@bristol.ac.uk

Richard Voke, Visiting Research Fellow, University of Bristol; Partner, Ashfords LLP
email: R.Voke@ashfords.co.uk

Authors

Dr Richard Taylor, Visiting Professor, Dr John May, Reader, Dr Neil Carhart, Lecturer, Richard Voke, Visiting Research Fellow
