How To Review Process Safety Through Engineering Design

Louise Smyth

Chris Fowler analyses the latest data on process safety in engineering design

If you ask any responsible company what its priorities are, safety will always be at the top. You would therefore expect that reviewing major projects from a process safety perspective would yield little in the way of significant findings.

I expect a sceptical response to this last statement, as most engineers’ experience is different. There can be differences between data and people’s feelings or perceptions, so I will provide some data from a series of projects, along with some thoughts on why mistakes occur and how they can be reduced.

ABB has undertaken 22 projects over a five-year period, reviewing the process safety in the design of new builds, major modification projects at FEED and Detailed Design, and existing process plants. Each project looked at various aspects of the design, from philosophies through relief and blowdown design and high-pressure/low-pressure (HP/LP) interfaces to ESD, LOPA and HAZOPs. Although each of these studies found and addressed the concerns identified at the individual project level, an interesting question was raised: why were errors being made, and how could they be reduced?

The first step, as with any improvement project, is to collect and analyse the available data. Within the verification projects, each finding was categorised according to its seriousness, using codes 1, 2, 3 and A. This categorisation allowed remedial work to be prioritised at project stage gates and defined the tolerance for the next project stage. The codes varied depending on the phase of the project, but can generally be summarised as:

•  Code 1: Design is demonstrably unsafe/major design flaw or significant documentation omission identified. The flaw carries significant cost/schedule impact and should be rectified and re-verified before progressing.

•  Code 2: Design is unsafe or potentially unsafe, or there are critical gaps in the safety documentation. Action needs to be taken as soon as possible.

•  Code 3: Design is safe, but errors are apparent in the design or there are non-safety-critical gaps in the documentation. Errors to be addressed.

•  Code A: Design is acceptable.
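The severity codes above lend themselves to a simple stage-gate filter. The sketch below is illustrative only (the `Finding` structure and the example findings are invented, not ABB's actual tooling); it assumes Code 1 and Code 2 findings both block progression to the next project stage, as the code definitions imply:

```python
from dataclasses import dataclass

# Severity codes in descending order of seriousness, as summarised above.
SEVERITY_ORDER = ["1", "2", "3", "A"]

@dataclass
class Finding:
    area: str         # e.g. "Relief and Blowdown", "Layers of Protection"
    code: str         # "1", "2", "3" or "A"
    description: str

def gate_blockers(findings):
    """Return findings that should be rectified before the next stage gate.

    In this sketch, Code 1 (rectify and re-verify) and Code 2 (action as
    soon as possible) both block the gate; Codes 3 and A do not.
    """
    blockers = [f for f in findings if f.code in ("1", "2")]
    # Most severe first, so remedial work can be prioritised.
    return sorted(blockers, key=lambda f: SEVERITY_ORDER.index(f.code))

# Invented example findings for illustration.
findings = [
    Finding("Relief and Blowdown", "3", "Minor datasheet discrepancy"),
    Finding("Layers of Protection", "2", "SIL target not substantiated"),
    Finding("Relief and Blowdown", "1", "Relief device undersized"),
    Finding("HAZOP", "A", "Study report acceptable"),
]

blockers = gate_blockers(findings)
print([(f.code, f.description) for f in blockers])
```

Sorting by severity mirrors how the codes were used in practice: prioritising remedial work at each stage gate.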

These coded findings from each review were analysed and classified into different categories to identify the common errors/omissions for each of the areas reviewed; i.e. data/document omission, discrepancies, failure to comply with standards, etc. In addition, these common themes have been further grouped into categories:

•  Administrative (A): errors made due to inadequate QA procedures, e.g. document or data omission.

•  Judgement (J): errors made due to engineering decisions that are not clearly defined within standards, e.g. uncertainty in project scope/boundaries.

•  Competency (C): errors made even where there was a well-defined methodology/standard to follow, e.g. errors during relief device sizing.

It was recognised that the categories can overlap, but the purpose was to give a broad idea of where and why errors occur, and to gather suggestions on how the design process can be improved.
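The classification step described above amounts to tallying findings by review area and cause category. A minimal sketch, using the Administrative/Judgement/Competency labels from the text; the individual classified findings are invented examples, not the actual project data:

```python
from collections import Counter

# Each (area, category) pair is an invented example of a classified finding.
# Categories: A = Administrative, J = Judgement, C = Competency.
classified_findings = [
    ("Relief and Blowdown", "A"),    # data omission -> Administrative
    ("Relief and Blowdown", "A"),
    ("Relief and Blowdown", "C"),    # sizing error -> Competency
    ("Layers of Protection", "C"),
    ("Layers of Protection", "J"),   # unclear scope -> Judgement
]

# Tally per (area, cause) to surface common themes in each area reviewed,
# and per cause overall to see where and why errors are occurring.
by_area_cause = Counter(classified_findings)
by_cause = Counter(cat for _, cat in classified_findings)

for (area, cat), n in sorted(by_area_cause.items()):
    print(f"{area:22s} {cat}: {n}")
print("Totals by cause:", dict(by_cause))
```

Even this crude tally reproduces the shape of the analysis in the article: one area dominated by Administrative omissions, another by Competency-related technical errors.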

The purpose of this article is not to review all 1,552 findings, but to draw out conclusions and propose a better way forward: in engineering terms, how to use the equation rather than derive it. However, to understand the conclusions, some of the data is needed to provide context.

The areas generating the most severe findings were Relief and Blowdown and Layers of Protection, and they showed very different pictures. Relief and Blowdown was dominated by data omissions, where the design could not be substantiated, followed by technical observations, where definable errors could be identified. Layers of Protection was almost a mirror image, with technical observations dominating and data omissions following.

On the face of it, these findings don’t seem too bad. Combining these two areas, 53% of all the findings were simply data omissions, which is an administrative issue. Yet 25% of the findings (371) were Code 1, the most severe; it really does depend on what data is missing!

A more detailed analysis of the data has been presented to various groups of operators, and the first response is normally: ‘this data puts numbers to our general feeling on where projects don’t perform as they should, but it has always been’. What isn’t said, but is implied, is an addition to the last part of the sentence: but it has always been, and always will be.

So, returning to the question of why errors were being made and how they can be reduced, the main conclusions identified from the analysis were:

•  Flaws in the quality (QA/QC) of the documentation/data included within the safety design documentation. This can be improved by implementing a series of internal audits led by the project management teams.

•  Omission of data and documents across all areas, showing a failure to comply with company standards. This was thought to be related to the unwieldy nature of the standards, which could be improved by revising them into a smaller standard supplemented by a guidance note on how to apply it.

•  Although there is a good corporate HAZOP standard, the use of contractor standards has led to poor-quality studies and reports. This can easily be rectified by the project management teams implementing the company HAZOP standard. The HAZOP process can be further improved through a pre-HAZOP design audit, the output of which can be included in the safety design dossier.

•  Technical issues relating to the competence of the SIL determination teams have led to a number of errors in the application of SIL/LOPA. Again, this can be rectified by the project management teams undertaking a design audit comparable to a Stage 1 functional safety assessment. This design audit could also be used to ensure the competency of the SIL team leader and members.

What became clear from this analysis, and from experience on other projects, is the need for a robust and competent project team on the client’s side as well as the contractor’s. This, I believe, is an issue in the industry. Operators have become leaner and meaner, resulting in smaller client project teams that may not have the breadth of competencies or the resources, especially time, to fully understand and constructively challenge the design contractor. This may require support from an independent third party to provide these skills.

Another conclusion that arose from this analysis is the need to provide clear guidance and simple standards. How many company standards are thick, wordy documents that mix requirements with detailed explanation? Providing examples of ‘what good looks like’ sets the standard right at the start of the design process. This may explain why design contractors tend to use their own standards rather than the company’s: it simplifies the process for them, but it doesn’t always provide the operator with what they require.

For every problem there is a simple solution, and it is most certainly wrong. Projects, like many other things in life, are complex problems, and to believe there is one simple solution is foolish. However, improvements can be made: providing clarity of requirements and robust, constructive challenge is one area that can be improved and will realise better results.

Chris Fowler is process manager at ABB
