
Beyond Installation: Challenges in Passive Fire Protection Compliance

This article explores the challenges of ensuring that passive fire protection systems in buildings meet safety standards amid installation variations, and the need for reliable assessments.

When it comes to the technical conformity of passive fire protection products or systems installed in buildings, someone ultimately must accept that systems, as installed, are fit for purpose. Those who may have a stake in that conformity decision include the code authority, building control, the fire service, the main developer, the main contractor and the end occupier/building owner. Specialist contractors may also accept evidence that an installation conforms to the required standards. All these stakeholders are seeking confidence that the product or system installed will perform as intended in a fire situation.

The supporting evidence needs to confirm that the required performance is the likely outcome in a fire. This is not a guarantee but a strong indicator: the installed product or system has performed in a representative fire test, and the performance in that test demonstrates conformity or nonconformity with test standards, local codes and regulations. For example, if a pipe penetration seal needs to provide 60 minutes of fire protection in the building, the test evidence should show that the product met that criterion in a test.

Diagram of the stakeholders’ stake in a conformity decision

But what if the evidence used to support the required assurances doesn’t exactly match what is actually installed? This does not refer to incorrect installations, but to installations that vary from the specimen tested in the standard (specified) laboratory tests.

As-built construction details often vary from the ideal tested/certified construction. This commonly occurs with firestopping. For example (a simple parameter-comparison sketch follows this list):

  • The aperture size differs from what was tested.
  • The number of cables exceeds what was tested.
  • The cable size is larger or smaller than those tested.
  • The position of dampers varies from what was tested.
  • The particular configuration of firestopping multiple services through the same aperture was not tested on the test rig.
  • The penetration angle differed from what was tested; most are tested at a 90° angle.
  • A substrate that doesn’t match the one the penetration was tested through; standard substrates are used in testing to represent common wall or floor constructions.
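
To make “variation from the tested specimen” concrete, here is a minimal sketch of a parameter comparison between an as-built penetration seal and the envelope recorded in a test report. The field names, values and pass/fail rules are illustrative assumptions for this article, not drawn from any real test standard; an actual conformity decision requires expert judgment or further testing.

```python
from dataclasses import dataclass

@dataclass
class TestedConfiguration:
    """Parameter envelope from a laboratory test report (illustrative fields)."""
    max_aperture_mm: int
    max_cable_count: int
    max_cable_diameter_mm: float
    penetration_angle_deg: int
    substrate: str

@dataclass
class AsBuiltInstallation:
    """Parameters recorded for the installation on site."""
    aperture_mm: int
    cable_count: int
    cable_diameter_mm: float
    penetration_angle_deg: int
    substrate: str

def find_variations(tested: TestedConfiguration, built: AsBuiltInstallation) -> list[str]:
    """List the parameters where the as-built detail falls outside the tested envelope."""
    variations = []
    if built.aperture_mm > tested.max_aperture_mm:
        variations.append("aperture size exceeds what was tested")
    if built.cable_count > tested.max_cable_count:
        variations.append("number of cables exceeds what was tested")
    if built.cable_diameter_mm > tested.max_cable_diameter_mm:
        variations.append("cable size larger than those tested")
    if built.penetration_angle_deg != tested.penetration_angle_deg:
        variations.append("penetration angle differs from what was tested")
    if built.substrate != tested.substrate:
        variations.append("substrate does not match the tested substrate")
    return variations

# Any non-empty result means the primary test evidence alone cannot support
# acceptance: an engineering judgment or further testing would be needed.
tested = TestedConfiguration(150, 12, 25.0, 90, "flexible wall")
built = AsBuiltInstallation(180, 12, 25.0, 90, "flexible wall")
print(find_variations(tested, built))  # ['aperture size exceeds what was tested']
```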

Discrepancies in evidence can lead to last-minute requests for approval of the on-site installation. Any such approval needs an expert judgment or an urgent test to give the stakeholder confidence that the variation on site will not affect the product’s or system’s performance; that is, that variations such as those listed above would achieve the same required performance level in a test situation as the configuration actually tested, thus providing an assurance of performance.

However, urgent testing of the specific variation is often not practical, given the very full test schedules of U.K. fire testing laboratories. A six-month wait for a test slot is not uncommon, and a delay of that length on a live construction site is neither practical nor wanted. Still, if other methods of assuring the product or system variation are not possible, this may be the only option, aside from replacing the installation with one that does have evidence of conformity, if a tested assembly can be identified.

Addressing fire protection failures on site

During construction, issues with variations to fire protection may go unnoticed and therefore may never be identified by anyone. There are many examples of fire protection issues only being identified after the building becomes occupied. Unfortunately, sometimes issues are only discovered after a fire occurs, when failure becomes obvious.

Sometimes issues may have been identified before occupation, but someone ignored the problem in the hope that it would not be noticed by building control or the client and would not cause a problem in the event of a future fire.

The issue may also have been identified and judged acceptable by the specialist installer. There is a potential conflict of interest in the installer making that judgment: they installed the system without sufficient evidence to support the installation in the first instance, something that should not happen on site.

Ideally, the issue is discovered and further assurance is requested. A desktop evaluation (an engineering judgment, or EJ) of the proposed system can be sought from the manufacturer or an independent specialist. When EJs are required, they are most often provided by the product/system manufacturer, who knows the products very well through the testing and development required for their materials.

Quality and impartiality concerns of manufacturer assessments

However, manufacturers offering assessments on their own products may introduce issues, including:

  • Questions about the impartiality, qualifications, experience and competence of the person (often a junior technician or engineer) conducting assessments and writing reports.
  • A lack of independent oversight of the assessment quality management process.
  • A lack of investigation into the assessment methodology used.
  • A lack of confirmation of peer review or of the technical accuracy of assessments.

Those who need to rely on manufacturer assessments as evidence to accept and approve firestop installations on site may be concerned about the risks incurred by acceptance without adequate assurance: should the firestop fail in a future fire situation, who is culpable for the acceptance and on what basis? To protect themselves in the event of a failure and subsequent investigation, stakeholders with responsibility for accepting firestopping (including building control, fire service, building safety regulators, insurers, warranty providers, independent inspectors, main contractors, specialist contractors and developers) must have evidence of robust due diligence.

When it comes to acceptance of evidence that firestopping installations will meet the required performance criteria, stakeholders often ask: Why can’t manufacturers test everything?

Stakeholders often prefer primary test evidence, but manufacturers cannot test every variation. However, they do test the most likely variations that would be specified and installed in an ideal situation. Given that buildings are not always designed in an ideal way, firestopping may be installed with variations from what was tested due to factors such as errors in construction or building design features.

There isn’t a definitive number of firestopping-related assessments needed in the construction sector, but globally, it could easily be in the tens of thousands, based on conversations with major product manufacturers. Testing every conceivable variation would be prohibitive for manufacturers, given that the permutations of how products/systems get installed are potentially limitless.

The sheer number of assessments needed means there isn’t a credible independent alternative with sufficient capacity to complete this work. Engineers from independent testing laboratories and specialty engineering firms offer assessments in support of testing, but their capacity can be limited, and waiting for a report may be impractical for a construction project.

Firestop details are often repeated throughout a building on many floors or in multiple locations. If those details don’t have sufficient supporting evidence, the whole project could be delayed while waiting for a fire test to prove performance.

Therefore, manufacturer assessments are absolutely needed, and unless there is a significant move toward designing buildings around assured firestopping rather than the other way around, they are here for the foreseeable future.

Increasing stakeholder confidence with independent oversight

Independent third-party oversight of a manufacturer’s assessment activities establishes requirements against which the manufacturer is externally audited. A quality assurance program provides an additional layer of scrutiny of the technical accuracy of assessments, giving stakeholders more confidence in accepting assessments from the manufacturer as evidence of conformity.

The UL Solutions Technical Evaluation Developer Program (TEDP) is an independent scheme designed to improve the content and quality of manufacturers’ engineering decisions for firestopping products, providing an added level of confidence for those who need to rely on manufacturer assessments.

The scheme supports the due diligence process for stakeholders by setting quality benchmarks that the manufacturer must meet and that are applied to the production of assessments. Specifically, this program establishes a documented management system aligned with accepted industry methodologies for technical assessment writing and audited by UL Solutions.

The manufacturer must have a documented quality manual that addresses all aspects of the process of writing assessments, from initial enquiry to delivery, and outlines how the process is strictly monitored and controlled. The manual must specifically cover how the company proves that a consistent and technically accurate judgment is issued, mapping out the step-by-step checks, measures and monitoring the company undertakes to avoid decisions that would not stand up to external expert scrutiny. This recorded set of standard operating procedures must be internally audited and reviewed, and must be independently audited by UL Solutions to demonstrate that the company adheres to all quality policies and procedures. Industry best practice guides, which underpin the technical methodology of writing assessments, must be used. These guides outline the key accepted principles of how to assess passive fire protection products and systems for firestopping in a structured and rational way.

Demonstrating assessor competence, skills, knowledge, experience and behaviors

UL Solutions confirms that the competences defined in industry best practice guidance documents are applied within the quality management systems the company operates, and it independently tests the competence of the individuals involved in the preparation and review of the company’s assessments. UL Solutions will also check the methodology for training and internal verification of assessors and confirm that assessment requests are allocated to assessors who have proven competence to complete them. In addition, technical assessors must demonstrate continued professional development, which is monitored under the company’s quality management system and audited by UL Solutions.

Independent audits conducted by UL Solutions

Testing of engineered systems is based on the total volume of assessments issued by the manufacturer in a calendar year. UL Solutions chooses the systems to be tested based on the type of evaluation conducted, the installation and the type of product included in the technical evaluation. This sample testing focuses on the manufacturer demonstrating that sound fire engineering principles are applied to all assessments, as fire testing could potentially be used to confirm that any judgment they make is sound.
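
As a rough illustration of how volume-based sample selection might work, the sketch below stratifies a year’s issued assessments by evaluation type and draws a proportional sample, guaranteeing at least one system per category. The 2% rate, the grouping key and the selection logic are hypothetical assumptions for illustration, not the published UL Solutions sampling rules.

```python
import random
from collections import defaultdict

def select_audit_sample(assessments: list[dict], rate: float = 0.02,
                        seed: int = 1) -> list[dict]:
    """Draw a stratified sample of issued assessments for audit fire testing.

    Assessments are grouped by evaluation type so every category is
    represented; at least one system per stratum is always selected.
    """
    rng = random.Random(seed)
    strata: dict[str, list[dict]] = defaultdict(list)
    for assessment in assessments:
        strata[assessment["evaluation_type"]].append(assessment)

    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * rate))  # sample size scales with volume
        sample.extend(rng.sample(group, k))
    return sample

# Example: a year's issued assessments, tagged by evaluation type.
issued = [{"id": i, "evaluation_type": t}
          for i, t in enumerate(["penetration seal"] * 120 + ["damper"] * 30)]
print(len(select_audit_sample(issued)))  # 3 (2 penetration seals + 1 damper)
```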

Manufacturers who participate in this program are expected to be ethical in their use of technical evaluations. They cannot use technical assessments to avoid reasonable testing requirements. If a variation on site becomes a common installation, manufacturers should test that common use and pursue certification of the variation instead of assessing it again and again.

The manufacturer’s organization will be issued a certificate to acknowledge active enrollment in the UL Solutions Technical Evaluation Developer Program. The manufacturer’s name and contact information will be published in UL Product iQ® as a participant in the UL Solutions Technical Evaluation Developer Program. This enables stakeholders to confirm that the company is still in the scheme and that assessments completed fall under the requirements of the scheme.

UL Solutions engages with companies that wish to enroll in the program and holds them accountable by auditing and testing to stringent quality measures. Nonconformances with those measures instigate additional surveillance requirements and can increase audit testing, depending on the severity of the audit findings. This shows that companies are held accountable for managing the quality of the assessments they produce. The scheme supports the high level of due diligence stakeholders need when presenting assessments as conformity evidence. When faced with an assessment from a UL Solutions-listed manufacturer, stakeholders can have confidence in their acceptance of the installation on site.

For more details on the scheme requirements, visit our Technical Evaluation Developer Program page or contact your sales representative.
