Working with functional safety systems in the automotive industry often reveals implementation issues caused by misinterpretations of the standard. The most significant impact occurs during the analysis phase, and it affects the entire project. For instance, and this is not unique to ISO 26262, challenges arise when assigning responsibilities to the different project stakeholders. Misunderstanding these roles can lead to losing sight of the objectives, or to solving problems that are not pertinent to one’s role. These errors appear at every level: engineers expected to become experts on the standard after a single reading, governance teams promoting overly complex processes for activities where they are not relevant, or even OEMs misunderstanding terms and assigning responsibilities incorrectly.
The purpose of this article is to clarify a critical point:
- The risks considered in this standard pertain exclusively to human safety: the vehicle’s occupants, other drivers, and other road users.
- Safety risks and safety goals must therefore be assessed at the vehicle level; project-specific technical or economic risks are not within this scope.
- Only the OEM (the vehicle manufacturer) is responsible for defining those risks. If we work on a component within a multi-tier supply chain, we should receive high-level Functional Safety requirements from the outset.
Where Does the Problem Begin?
This issue likely stems from ISO 26262-1:2018, clause 3.84, which defines an “item” as “a system or combination of systems, to which ISO 26262 is applied, that implements a function or part of a function at the vehicle level.”
Thus, the Item Definition (ID) is a high-level description of vehicle functions. These functions are implemented through a chain of components (systems, elements, etc., as per ISO 26262 terminology). The ID is a high-level abstraction (e.g., “the braking system”).
However, the term “item” can be misleading, as everyday language suggests something tangible and countable. Is the system a solenoid valve? Is the item, then, the solenoid valve?
If the ID is focused on our component rather than on the vehicle level, our risk analysis will yield component-specific safety requirements, such as “opening precision must be within three degrees.” These become requirements we must implement. But are they necessary?
Consequences of This Misinterpretation
Firstly, the purpose is lost. Misconceptions arise, such as “my component must be ASIL D because its failure can cause fatalities.” However, the risks referenced in the standard do not stem from the failure of a single component but from the failure of a vehicle-level function; a sequence of failures must occur to produce that effect. A component is classified as ASIL D only if the OEM, which controls the safety chain, designates it as such.
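To make the vehicle-level nature of this classification concrete, here is a minimal sketch (purely illustrative) of the hazard classification scheme of ISO 26262-3: the ASIL follows from the Severity, Exposure, and Controllability assigned to a hazardous event at the vehicle level, not from a component considered in isolation. The function name and the example classification are mine, not the standard’s.

```python
# Minimal sketch of ISO 26262-3 hazard classification (illustrative only).
# The ASIL follows from the Severity (S), Exposure (E) and Controllability (C)
# assigned to a hazardous event at vehicle level, not from a component in isolation.

def asil_from_sec(s: int, e: int, c: int) -> str:
    """Return the ASIL for a hazardous event classified S<s>, E<e>, C<c>."""
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("invalid S/E/C classification")
    # The standard's classification table is equivalent to summing the indices:
    # 10 -> D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM.
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(s + e + c, "QM")

# Example: loss of battery cooling judged at vehicle level as life-threatening (S3),
# in a frequent driving situation (E4) and hard for the driver to control (C3).
print(asil_from_sec(3, 4, 3))  # -> ASIL D
```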
Consider a solenoid valve controlling a battery’s coolant circuit. If the valve blocks, coolant flow is restricted, the temperature rises until the lithium cells react, and a fire ensues. But this scenario requires more than a single component failure; additional conditions include (a rough numerical sketch follows the list):
- Failure of the solenoid valve and a gradual temperature increase,
- Failure to detect the temperature rise,
- Detection without triggering load disconnection, or
- Load disconnection failure.
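A rough, back-of-the-envelope sketch shows why the hazardous event depends on the whole chain rather than on the valve alone: assuming independent failures, the valve blockage must combine with the failure of at least one downstream barrier. The probabilities below are invented purely for illustration and say nothing about any real design.

```python
# Illustrative only: the hazardous event requires the valve to fail AND at least
# one downstream barrier to fail as well. Assuming independent failures and the
# invented probabilities below, a rough fault-tree style estimate is:

p_valve_blocks        = 1e-4   # solenoid valve blocks, coolant flow drops
p_rise_not_detected   = 1e-3   # temperature rise is not detected
p_disconnect_not_sent = 1e-3   # rise detected, but load disconnection not triggered
p_disconnect_fails    = 1e-4   # disconnection triggered, but it fails

# OR gate over the downstream barriers (rare-event approximation), AND with the valve.
p_barriers_fail   = p_rise_not_detected + p_disconnect_not_sent + p_disconnect_fails
p_hazardous_event = p_valve_blocks * p_barriers_fail

print(f"P(hazardous event) ~ {p_hazardous_event:.1e}")  # ~2.1e-07 with these numbers
```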
This sequence of safeguards is designed by the OEM. If we ignore that, we design against assumed requirements: we build what we think is needed, not what is actually required.
Furthermore, understanding ASIL Decomposition (ISO 26262-9:2018), which splits a high-integrity safety requirement across redundant, sufficiently independent elements, is critical. For instance, the OEM may decide that the solenoid valve requires only ASIL B because of redundancy in the circuit. Decomposition can be applied at both the vehicle and the component level (ASIL tailoring), which makes the process difficult to trace from outside. This chain of safety is more explicit in railway standards, such as those from CENELEC.
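For reference, here is a small sketch of the decomposition schemes permitted by ISO 26262-9 (the notation B(D) means an element developed to ASIL B requirements, with D recording the ASIL before decomposition). The helper function is hypothetical and only checks a proposed split against those schemes; it does not address the independence requirements that a real decomposition must also satisfy.

```python
# Decomposition schemes permitted by ISO 26262-9 (illustrative sketch).
# Notation: "B(D)" = element developed to ASIL B requirements, D being the ASIL
# of the requirement before decomposition. Sufficient independence between the
# decomposed elements is also required and is not checked here.

DECOMPOSITION_SCHEMES = {
    "D": [("D", "QM"), ("C", "A"), ("B", "B")],
    "C": [("C", "QM"), ("B", "A")],
    "B": [("B", "QM"), ("A", "A")],
    "A": [("A", "QM")],
}

def is_permitted_decomposition(original: str, part_a: str, part_b: str) -> bool:
    """Hypothetical helper: does (part_a, part_b) match a permitted scheme?"""
    requested = sorted((part_a, part_b))
    return any(sorted(scheme) == requested for scheme in DECOMPOSITION_SCHEMES[original])

# The coolant-valve example: reducing the valve branch to ASIL B is consistent
# with a B(D) + B(D) decomposition, provided the redundant branch is also ASIL B.
print(is_permitted_decomposition("D", "B", "B"))  # True
print(is_permitted_decomposition("D", "B", "A"))  # False: not a permitted scheme
```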
The top-down nature of the risk analysis is also often overlooked. Formally, the risk analysis dictates the architecture (via the Functional Safety Concept and the Technical Safety Concept), not the reverse. When safety is used to justify pre-existing design decisions, inconsistencies appear between requirements and architecture, and ultimately in audit findings.
Justifying a design artifact as a risk mitigation measure requires the risk to have been identified first. We design redundancies as countermeasures when we can show that not doing so poses an unacceptable risk to people. However, some risks are never considered because “our system prevents it.” This approach is not only formally incorrect; it may also omit risks that could violate safety goals. The ISO 26262 hardware metrics require identifying single-point faults, residual faults, and the combinations of faults that remain latent. Leaving risks out of the documentation because they were preemptively designed out is an error; we must be able to demonstrate that the mitigated risks have actually been addressed.
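A minimal sketch of the ISO 26262-5 hardware architectural metrics makes the point concrete: the Single-Point Fault Metric (SPFM) and Latent Fault Metric (LFM) cannot even be computed unless single-point, residual, and latent faults have been identified and documented. The failure rates below are invented for illustration.

```python
# Sketch of the ISO 26262-5 hardware architectural metrics, to show why every
# relevant fault class has to be identified and documented, even those already
# "designed out". Failure rates (lambda, in FIT) are invented for the example.

def spfm(lambda_total, lambda_spf, lambda_rf):
    """Single-Point Fault Metric: fraction of the safety-related failure rate
    that is neither a single-point nor a residual fault."""
    return 1.0 - (lambda_spf + lambda_rf) / lambda_total

def lfm(lambda_total, lambda_spf, lambda_rf, lambda_mpf_latent):
    """Latent Fault Metric: fraction of the remaining failure rate (after
    single-point and residual faults) that is not a latent multiple-point fault."""
    return 1.0 - lambda_mpf_latent / (lambda_total - lambda_spf - lambda_rf)

# Invented figures for a safety-related hardware part (FIT = failures per 1e9 h):
total, spf, rf, latent = 100.0, 0.5, 1.5, 3.0
print(f"SPFM = {spfm(total, spf, rf):.1%}")          # 98.0% (ASIL D target: >= 99%)
print(f"LFM  = {lfm(total, spf, rf, latent):.1%}")   # 96.9% (ASIL D target: >= 90%)
```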
In an ideal scenario, the OEM would meticulously control the safety project. Unfortunately, physical or technical limitations often compromise that control and create uncertainty. When the OEM encounters situations caused by a tier supplier’s misunderstanding, trust erodes and significant costs follow.
Responsibility
At the outset of a client-provider relationship under this standard, a document (the Development Interface Agreement, DIA) specifies responsibilities, detailing who does what, how, and when deliverables are exchanged, typically through a RACI matrix. Nevertheless, this document is often not properly followed or fully understood, whether through lack of clarity or lack of mutual confidence.
So, does a tier supplier perform no risk analysis at all under this standard? It does. Risk analysis is needed at several design stages, safety included. A Failure Mode and Effects Analysis (FMEA) is, in essence, a risk analysis. Likewise, a component developed as a Safety Element out of Context (SEooC) is designed against assumed critical failures propagated through its interfaces, and those assumptions must align with the OEM’s analysis.
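As a small illustration of why an FMEA is a risk analysis, here is the classic Risk Priority Number arithmetic. The failure modes and ratings below are invented, and note that the newer AIAG-VDA FMEA handbook replaces the RPN with Action Priority tables.

```python
# Classic FMEA arithmetic (illustrative only): each failure mode gets Severity,
# Occurrence and Detection ratings on a 1-10 scale, and the Risk Priority Number
# RPN = S * O * D is used to rank what to work on first. Ratings are invented.

failure_modes = [
    # (failure mode,                          S,  O, D)
    ("valve stuck closed, no coolant flow",   9,  3, 4),
    ("valve stuck open, over-cooling",        4,  3, 4),
    ("position feedback drifts",              6,  5, 7),
]

for mode, s, o, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {s * o * d:3d}  {mode}")
```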
If a component implements a function that is intrinsically safety-related (e.g., a windshield wiper), its failures may directly threaten people, which does warrant a risk analysis and an Item Definition involving that component.
Lastly, a tier supplier can indeed conduct a full-vehicle Hazard Analysis and Risk Assessment (HARA) at the OEM’s request. The responsibility for that HARA, however, remains with the OEM.
Conclusion
Limited time and insufficient expertise worsen the ambiguity found across different standards. For example, in the Threat Analysis and Risk Assessment (TARA) of ISO 21434, threats must be considered at the vehicle level: if our work is restricted to a Battery Management System, we are not responsible for the vehicle-level TARA. However, the standard is not clear on this point. Railway standards, such as 50701:2023, define the zones of use more precisely.
Such situations occur frequently, particularly in companies new to this field, which tend to produce unnecessary and incoherent work. For many of them, it is essential to consolidate knowledge before layering further complex standards on top of these regulations. The rapid evolution of the automotive model (Functional Safety, SOTIF, Cybersecurity), together with the growing demands of autonomous control, challenges companies’ ability to consolidate their safety management knowledge. As an external development consultant, I see the same issues recur with growing complexity. Ultimately, companies will need to adapt to these scenarios quickly.