Earlier today, I quoted the longtime aviation writer J. Mac McClellan on the one-in-a-billion risk factor to which modern aircraft design is held. Someone familiar with such standards writes in:
I'm a system safety engineer for a small-ish system supplier, so I'm pretty familiar with the 10^-9 standard. There are a number of issues with probabilistic risk assessment, but I think the history of the 1 in a billion standard is pretty interesting. This is an excerpt from the proposed rule change to FAA regulations regarding system design, referred to as the ARSENAL draft of 25.1309. [Excerpt begins:]
“The British Civil Airworthiness Requirements (BCAR) were the first to establish acceptable quantitative probability values for transport airplane systems. The primary objective in establishing these guidelines was to ensure that the proliferation of critical systems would not increase the probability of a serious accident. Historical evidence at the time indicated that the probability of a serious accident due to operational and airframe-related causes was approximately one (accident) per one million hours of flight. Further, about 10 percent of the total accidents were attributed to failure conditions caused by the airplane’s systems. Consequently, it was determined that the probability of a serious accident from all such failure conditions should not be greater than one per 10 million flight hours, or “1 x 10^-7 per flight hour,” for a newly designed airplane. Commensurately greater acceptable probabilities were established for less severe outcomes.
“The difficulty with the 1 x 10^-7 per flight hour probability of a serious accident, as stipulated by the BCAR guideline, was that all the systems on the airplane must be collectively analyzed numerically before it was possible to determine whether the target had been met. For this reason, the (somewhat arbitrary) assumption was made that there would be no more than 100 failure conditions contributing to a catastrophe within any given transport category airplane type design. It apparently was also assumed that, by regulating the frequency of less severe outcomes:
“ * only 'catastrophic failure conditions' would significantly contribute to the probability of catastrophe, and
“ * all contributing failure conditions could be foreseen.
“Therefore, the targeted allowable average probability per flight hour of 1 x 10^-7 was apportioned equally among 100 catastrophic failure conditions, resulting in an allocation of not greater than 1 x 10^-9 to each. The upper limit for the average probability per flight hour for catastrophic failure conditions became the familiar “1 x 10^-9.” Failure conditions having less severe effects could be relatively more likely to occur." [Excerpt ends.]
They basically worked backwards from the existing accident rate, made a few assumptions about contributions from complex systems, and got us this number. There are a few questionable assumptions, such as the number of catastrophic failure conditions. Thankfully, safety assessment now involves more than estimating probabilities, including human factors and common-cause analyses. But it does point out that the standard was arbitrary to begin with, so changes in public perception may eventually change the standard.
One minor correction on Mr. McClellan's note. The 10^-9 standard is referred to as "extremely improbable" rather than just "improbable," and it is in terms of average probability per flight hour, not per average flight. See section 7c(1) of the ARSENAL draft of 25.1309 under "Probability Ranges":
(1) Probability Ranges.
(i) Probable Failure Conditions are those having an Average Probability Per Flight Hour greater than of the order of 1 x 10^-5.
(ii) Remote Failure Conditions are those having an Average Probability Per Flight Hour of the order of 1 x 10^-5 or less, but greater than of the order of 1 x 10^-7.
(iii) Extremely Remote Failure Conditions are those having an Average Probability Per Flight Hour of the order of 1 x 10^-7 or less, but greater than of the order of 1 x 10^-9.
(iv) Extremely Improbable Failure Conditions are those having an Average Probability Per Flight Hour of the order of 1 x 10^-9 or less.
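To make that taxonomy concrete, here is a minimal sketch, in Python, of how these bands sort a failure condition by its average probability per flight hour. The function name and the hard cutoffs are my own illustration; the draft's actual language, "of the order of," is fuzzier than a strict threshold.

```python
# Illustrative only: classify a failure condition into the ARSENAL draft's
# probability bands. The hard cutoffs stand in for the draft's fuzzier
# "of the order of" language; the function name is hypothetical.

def classify_failure_condition(avg_prob_per_flight_hour: float) -> str:
    if avg_prob_per_flight_hour > 1e-5:
        return "Probable"
    elif avg_prob_per_flight_hour > 1e-7:
        return "Remote"
    elif avg_prob_per_flight_hour > 1e-9:
        return "Extremely Remote"
    else:
        return "Extremely Improbable"

print(classify_failure_condition(3e-8))  # -> "Extremely Remote"
print(classify_failure_condition(1e-9))  # -> "Extremely Improbable"
```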
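And for readers who want the back-of-the-envelope spelled out, the derivation in the BCAR excerpt above reduces to two lines of arithmetic:

\[
1 \times 10^{-6}\ \tfrac{\text{accidents}}{\text{flight hour}} \times 10\% \ \text{systems-caused} = 1 \times 10^{-7}\ \tfrac{\text{accidents}}{\text{flight hour}}
\]
\[
\frac{1 \times 10^{-7}}{100\ \text{assumed catastrophic failure conditions}} = 1 \times 10^{-9}\ \text{per condition, per flight hour.}
\]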
I sent this to Mac McClellan, and he replies as shown below:
Yes, the standard evolved over time and has some interesting twists. For example, passengers can be seated in a turbine engine rotor burst zone and would presumably be killed by a burst. Rotor burst energy is now treated as infinite, and debris is assumed to pass through anything or anybody in the zone. However, no required crew can be located in the burst zone. The 10^-9 standard doesn't necessarily apply to a passenger or passengers staying alive, but to the airplane and its ability to reach a runway.
Also, much of the historic data floating around is just that: historic. Rules change constantly, and there was a big change after the DC-10 crash in Iowa [the 1989 United 232 accident at Sioux City], where the center engine exploded. Douglas had installed triple hydraulic lines to the tail control surfaces, but the lines were routed close together. The engine burst took out all three systems. Up to that time, triplex was enough. After that, triplex was good enough only when you could demonstrate that no single foreseeable event would take out all three.
The 777 was certified under pretty current rules, as was its FBW [fly-by-wire] system, which does meet the 10^-9 standard through triple redundancy and several levels of computer participation. The final level is direct law, where the cockpit controls command direct movement of a surface with no enhancement or protection for speed or CG [Center of Gravity] or other considerations.
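To give a flavor of what "triple redundancy" means in practice, here is a minimal sketch, in Python, of the mid-value-select voting commonly used in triplex channels. It illustrates the general technique, not Boeing's actual implementation; the function names and the miscompare threshold are hypothetical.

```python
# Illustrative sketch of triplex mid-value-select voting. Names and the
# threshold are hypothetical, not drawn from any actual 777 design.

def mid_value_select(a: float, b: float, c: float) -> float:
    """Return the median of three redundant channel outputs.

    A single failed channel (stuck, hardover, or drifting) is outvoted,
    because the median always lies between the two healthy values.
    """
    return sorted((a, b, c))[1]

def detect_miscompare(a: float, b: float, c: float,
                      threshold: float = 0.5) -> list[int]:
    """Flag any channel disagreeing with the voted value by more than
    a (hypothetical) threshold, so it can be taken offline."""
    voted = mid_value_select(a, b, c)
    return [i for i, x in enumerate((a, b, c)) if abs(x - voted) > threshold]

# Example: channel 2 has failed hardover. The voter still tracks the
# healthy channels, and the miscompare monitor identifies the bad one.
print(mid_value_select(10.1, 10.0, 55.0))   # -> 10.1
print(detect_miscompare(10.1, 10.0, 55.0))  # -> [2]
```

The design point is that the median of three signals always lies between the two healthy ones, so any single failed channel, however wild its output, is outvoted.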
This is way more technical detail than most people will want to follow. But so much of this story, which continues to command interest, turns on precise technical details; and for those who are interested in the safety and redundant-design criteria of modern aircraft, this will be instructive.