11-3.2 Preselection of a Basic System Configuration

The most effective reliability efforts are those expended during the preliminary design phase, while the system configuration is being determined. During this period reliability considerations can influence design decisions and ensure that the chosen configuration is one that can be developed to meet the reliability requirements. The process is an iterative one, with the designer continually looping back and trying different system configurations until all constraints are satisfied. At this time the principal effort is being expended at the drawing board; errors of judgment may be corrected with an eraser instead of a jackhammer. Reliability considerations are not the only constraints imposed on the design. Other constraints include capability, size, shape, weight, cost, schedule, and customer preference, all of which must be adequately satisfied. Reliability analysis provides a disciplined framework within which the interplay of these constraints can be viewed with sharper perspective. Frequently the result is not only a more reliable system but also a better system as judged by all other applicable criteria.

In the preselection of a basic system configuration, the designer should (1) define success for the system, (2) establish adequate reliability goals, (3) propose alternate designs, and (4) evaluate the reliability potential for each design. These four tasks are discussed in the following sections.

(a) Defining “Success” for the System. The designer must know exactly what his system is expected to do. This may sound trite, but many a design has been impaired because the designer either did not fully know or perhaps had lost track of the real reason for having the system. The definition of success should include the environmental constraints in force and the length of time or the number of cycles the system is expected to endure. There may be two or more valid success definitions for the same system, each requiring a separate analysis.

For example, assume an instrumentation system associated with a set of isolation valves. There may be two definitions of success imposed, one for safety reasons and the other for operational or economic reasons:

Success #1: Given that the pipeline downstream of the isolation valve is broken (complete severance), the instrumentation system shall detect the resultant leak within 10 sec and signal the isolation valve to close.

Success #2: Given that the pipeline downstream of the isolation valve is not broken, the instrumentation shall not signal the isolation valve to close.

Operating conditions: The cable and detector environment is 120°F and 50% relative humidity prior to the break and 212°F and 100% relative humidity following the break. The instrumentation system is tested every 3 months and calibrated annually.

In every case the boundaries of the system under consideration must be explicitly defined. In the above example, it is intended that the valve itself be excluded. For this reason a transition point from the instrumentation system to the valve must be chosen so that every component or potential point of failure is certain to be included within one system or the other, but never both.

(b) Establishing Goals. The designer must have some measure of achievement for the reliability of his system. One simple and effective goal that has been used on critical systems for many years is the so-called “single-failure” criterion, namely, that the system shall fulfill its success definition in the event of failure of a single active component. Basically, this criterion has served the nuclear industry rather well in spite of some limitations. One limitation is that it is not readily adjustable to match the whole range of consequences of system failure. If the single-failure criterion were universally applied, a high-level warning instrument on a waste-water sump would need to be just as reliable as the reactor protection system, even though the consequences of failure are vastly different. In addition, the single-failure criterion does not adequately protect against multiple independent failures that are more probable than should be allowed. Despite its shortcomings, the single-failure criterion for all active components should be imposed as the minimum goal on all reactor instrumentation systems where safety and the potential for economic loss are important considerations.
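The single-failure criterion lends itself to a direct mechanical check: fail each active component in turn and verify that the system can still meet its success definition. The sketch below illustrates the idea for a hypothetical two-channel trip system; the component names and structure function are assumptions made purely for illustration and are not drawn from the text.

```python
# Minimal sketch of a single-failure-criterion check (illustrative only).
# The system description is hypothetical: two redundant trip channels,
# either of which can actuate the isolation valve.

from itertools import combinations

ACTIVE_COMPONENTS = ["detector_A", "amplifier_A", "detector_B", "amplifier_B"]

def system_succeeds(failed):
    """Assumed structure function: the system succeeds if at least one
    complete channel (detector plus amplifier) remains operable."""
    channel_a_ok = "detector_A" not in failed and "amplifier_A" not in failed
    channel_b_ok = "detector_B" not in failed and "amplifier_B" not in failed
    return channel_a_ok or channel_b_ok

def meets_single_failure_criterion():
    """Return True if no single active-component failure defeats the system."""
    return all(system_succeeds({c}) for c in ACTIVE_COMPONENTS)

if __name__ == "__main__":
    print("Single-failure criterion met:", meets_single_failure_criterion())
    # Extending the same loop to pairs shows why the criterion by itself
    # says nothing about multiple independent failures:
    for pair in combinations(ACTIVE_COMPONENTS, 2):
        if not system_succeeds(set(pair)):
            print("Defeated by double failure:", pair)
```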

The techniques of reliability analysis, properly applied, yield a numerical measure of the expected system reliability or availability. For this reason a numerical goal serves an especially useful purpose. Although such numerical goals are commonplace in the aerospace industry, they are only recently coming into use in the nuclear industry. A numerical goal can be established in any one of a number of ways.

1. Risk acceptance. Ideally the goal should be a function of the highest risk the public will accept in return for the benefits derived from nuclear power. Risk is defined as the product of the probability of failure and the consequences of that failure. The consequences may be measured on any convenient scale, such as dollars, curies of ¹³¹I, and injuries. Unfortunately this concept is not very far advanced and not universally accepted. However, an examination of some of its precepts does yield some insight into the relative reliability required for various systems (a small worked example follows this list).

2. Grandfather systems. Even though the nuclear industry is relatively young, there are some instrumentation systems that have gained wide acceptance for a given application and enjoy a reputation for being adequately reliable. A reliability analysis of one or more of these systems will yield a numerical result that should prove useful in establishing a realistic numerical goal for new systems.

3. Industry standard goals. Industry committees concerned with the safety of nuclear plants (see Chap. 12) are beginning to address themselves to the matter of goals. On the international scene, a goal of 10⁻⁵ probability of failure has been proposed for the reactor-protection-system scram function. The IEEE Nuclear Science Group Technical Committee on Standards (see Vol. 2, Chap. 14) has considered goals, but it is currently recommending that each designer set a goal to meet the particular need.⁴
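As a simple illustration of the risk-acceptance idea, suppose two hypothetical systems are compared on a common consequence scale. The numbers below are invented solely to show the arithmetic; they are not drawn from the text.

```python
# Illustrative risk comparison (all numbers hypothetical).
# Risk is taken, as in the text, as probability of failure times consequence.

systems = {
    # name: (probability of failure per year, consequence in dollars)
    "waste-water sump high-level alarm": (1e-2, 1e4),
    "reactor protection (scram) function": (1e-5, 1e9),
}

for name, (p_fail, consequence) in systems.items():
    risk = p_fail * consequence  # expected loss, dollars per year
    print(f"{name}: risk = {risk:.1e} dollars/year")

# If both systems are to present comparable risk, the allowable failure
# probability must scale inversely with the consequence of failure.
```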

(c) Proposing Alternate Designs. Before any attempt is made at a detailed design, a wide range of design alternates should be blocked out for evaluation. The instrumentation system must be considered as an integral and essential part of the overall functional system. In other words, instrumentation systems perform an essential service for the functional systems in the plant; instrumentation does not exist for its own sake.

For example, assume that the functional system is an emergency cooling loop. The engineer designing the func­tional system and the instrumentation engineer must work together to propose alternates, such as the following

1. One loop, two 100% capacity pumps per loop.

2. One loop, three 50% capacity pumps per loop.

3. Two loops, one 100% capacity pump per loop.

4. Two loops, one 100% capacity pump per loop with a crosstie.

5. Two loops, one 50% capacity pump per loop plus one 50% capacity pump shared by both loops.

The list should be made as inclusive as possible so that no worthy configuration is omitted. All proposed alternate designs should pass the capability test before being evaluated for reliability or availability. Obviously the probability for system success can vary widely, depending on the system configuration. The instrumentation systems to start and stop the pumps and open and close valves are very different for the relatively few configurations cited.
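To make the point concrete, the sketch below compares the success probability of three of the configurations above, assuming a purely illustrative per-pump-train failure probability and treating failures as independent. The numbers, and the neglect of loop piping, valves, and common-cause failures, are simplifying assumptions made only for this illustration.

```python
# Illustrative comparison of three emergency-cooling configurations.
# q is the assumed probability that one pump train fails to run on
# demand (a made-up value; real data would come from the consistent
# failure-data set called for in the evaluation discipline below).

q = 0.05  # assumed failure probability per pump train, per demand

# Configuration 1: one loop, two 100% pumps -- success if either pump runs
# (loop piping and valves neglected for simplicity).
p_success_config1 = 1.0 - q * q

# Configuration 2: one loop, three 50% pumps -- success requires at least
# two of the three pumps.
p_success_config2 = (1 - q) ** 3 + 3 * q * (1 - q) ** 2

# Configuration 3: two loops, one 100% pump per loop -- success if either
# loop (here reduced to its pump) operates.
p_success_config3 = 1.0 - q * q

print(f"Config 1 (2 x 100% pumps):        {p_success_config1:.4f}")
print(f"Config 2 (3 x 50% pumps, 2-of-3): {p_success_config2:.4f}")
print(f"Config 3 (2 loops, 1 pump each):  {p_success_config3:.4f}")
```

In this reduced model configurations 1 and 3 look identical; the differences emerge only when the loop piping, valves, and their instrumentation are added as blocks, which is why the choice of block structure in the evaluation that follows matters.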

(d) Evaluating the Reliability Potential for Each Design. It is not practical to perform a detailed design on each proposed system before making a selection that is based on, among other considerations, a detailed reliability analysis. Therefore it is particularly important that the proposed designs be carefully screened to eliminate those which do not have the potential for development into a system with adequate reliability.

The foregoing may be accomplished by adhering to the following discipline (an illustrative sketch combining several of these steps follows the list):

1. Construct a simple reliability model for each proposed design. The blocks from which the models are constructed should encompass as much of the system’s equipment as is reasonably possible. For example, a block called “pump” could effectively include the pump, its driving motor, coupling, and circuit breaker.

2. Use a consistent set of failure data throughout the comparative evaluations. Where failure-rate data have been reported, use them as a base, but do not hesitate to adjust them upward or downward to reflect best judgment, duty factors, or environmental conditions. Where failure-rate data do not exist, choose a value that reflects the best judgment of knowledgeable people in the field, but use the same assumption consistently throughout the evaluation.

3. Reflect the expected operating conditions. If in one design certain components are exposed to environmental conditions more severe than normal, that design should be properly penalized by adjusting the failure rates upward by an appropriate K factor to reflect the higher level of imposed stress.

4. Allow each system proper credit for its compatibility with testing. In general, the unreliability of a component increases almost linearly with the interval between thorough tests (see Fig. 11.4). Therefore a component that is physically inaccessible for test except during a refueling shutdown should be penalized in comparison to one that is readily accessible and frequently tested.

5. Solve the models for a numerical index of reliability. If the model is really kept at its simplest level, the probabilistic solution should not be difficult. If the solution is difficult, concentrate first on simplifying the model to get an approximate solution rather than straining at the mathematics for these preliminary design evaluations.

6. Conduct sensitivity studies to identify the dominant components contributing to the unreliability of each system. This may be done by making a significant change in the assumed failure rate of a particular component and noting the change in the overall probability of system failure. Figure 11.2 shows a plot on a log-log scale of component unreliability vs. system unreliability. The reference or expected failure probability for component 1 is indicated by an arrow. Note that the arrow falls on the flat portion of the curve, indicating that this particular component does not contribute significantly to system unreliability. The reference value for component 2 is on the steep part of the curve, indicating that a change in failure rate here will have a dominant effect on system unreliability. Good safety design dictates that the overall system have a sufficiently low value of unreliability. Good economic design dictates that, in general, the least expensive components should not be dominant contributors to unreliability.

7. Redesign the proposed systems, as appropriate, to minimize the areas of apparent weakness revealed by the sensitivity analysis. This becomes an iterative process, but this is where the big payoff comes, in being able to bring quickly into focus the systems with the greatest potential for detailed design consideration.

8. Reexamine the final proposals to be sure that they satisfy all other operational and physical constraints that may be appropriate.

9. Select the one or two exploratory designs that show the greatest potential for maturing through a detailed design process and for adequately satisfying all constraints, including reliability.
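The sketch below pulls steps 1 through 6 together for a deliberately small hypothetical model: two instrument channels in a one-out-of-two arrangement, in series with a shared power supply. The block structure, base failure rates, K factors, and test intervals are all invented for illustration, and the test-interval term uses the common approximation that the average unavailability of a periodically tested component is about λT/2.

```python
# Minimal sketch of the preliminary evaluation discipline (steps 1 to 6).
# Every numerical value here is an assumption chosen for illustration.

# Step 1: a simple block model -- two instrument channels (one-out-of-two)
#         in series with a shared power supply.
# Step 2: a consistent set of assumed base failure rates (per hour).
# Step 3: a K factor penalizing blocks in a severe environment.
# Step 4: credit for the test interval T (hours), using q ~ lambda * T / 2.

blocks = {
    # name: (base failure rate per hr, K factor, test interval in hr)
    "channel_A":    (1.0e-5, 1.0, 2190.0),   # tested quarterly
    "channel_B":    (1.0e-5, 2.0, 2190.0),   # same channel, harsher location
    "power_supply": (3.0e-6, 1.0, 8760.0),   # tested only at refueling
}

def block_unavailability(rate, k, interval):
    """Average unavailability of a periodically tested block (q ~ K*lambda*T/2)."""
    return min(1.0, k * rate * interval / 2.0)

def system_unavailability(blks):
    """Step 5: solve the model. The channels are redundant (both must fail);
    the power supply is in series (its failure alone fails the system)."""
    q = {name: block_unavailability(*params) for name, params in blks.items()}
    q_channels = q["channel_A"] * q["channel_B"]
    q_supply = q["power_supply"]
    return 1.0 - (1.0 - q_channels) * (1.0 - q_supply)

base = system_unavailability(blocks)
print(f"System unavailability (base case): {base:.2e}")

# Step 6: crude sensitivity study -- scale each block's failure rate by 10
# and observe the change in system unavailability.
for name, (rate, k, interval) in blocks.items():
    perturbed = dict(blocks)
    perturbed[name] = (rate * 10.0, k, interval)
    print(f"  {name} rate x10 -> {system_unavailability(perturbed):.2e}")
```

In this contrived example the shared power supply, tested only at refueling, dominates the result; surfacing exactly that kind of weakness before detailed design begins is the purpose of steps 6 and 7.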

Fig. 11.2—Sensitivity study of system failure vs. component failure.

11-3.3 Detailed System Design for Reliability

The proposed design or designs selected from the evaluations of reliability potential are subjected to detailed design. Components of known quality are selected and applied well within their rating for the expected environment. If a new component of unknown quality is to be applied, it may be appropriate to subject it to test, particularly if the assumed best-judgment failure rate indicates (through the sensitivity study) that the component has a dominant influence on reliability.

In every possible way the designer must endeavor to emulate the model and to be certain that the boundary assumptions are satisfied. If the model assumes that the failure of one component or group of components is statistically independent of other failures, the designer should try to ensure that this is true. For example, if two channels of instrumentation are assumed to be independent, they should be so located that only a highly improbable event could disable both. The routing of signal cables and the location of power sources must be carefully considered. Historically, localized overheating and fire have been the two most common single-event failures that can cause other failures to be interdependent. Careful judgment is required to develop a design that gives a reasonable assurance that a fire can be controlled without transgressing the independence of channels.

The designer should also recognize that interdependence can creep in by inconspicuous routes. If the required level of reliability is such that redundancy is necessary, it may be appropriate to make the redundant channel different just to increase the likelihood that an unknown deficiency or inadequacy in one channel will not be repeated in the other. Frequently this can be accomplished by functional diversity; for example, one channel can monitor temperature and another pressure, either signal containing the desired information. Where functional diversity is not possible, equipment diversity may be used to good advantage. For example, pressure can be monitored by two sets of equipment that operate on two entirely different principles. Functional diversity is to be preferred because it almost automatically includes equipment diversity.

If the model assumes that the component failure rates are constant, the designer should be sure that the maintenance and replacement practices will not allow worn-out components to remain in the system. If the model assumes that some of the components are to be tested while the plant is in operation, the designer should be sure that adequate testing facilities are provided. If the system or portions of the system perform functions in addition to a safety-related task, the designer must ensure that these additional functions do not interfere with the model of the safety-related function.

Simple straightforward systems are easy to understand, easy to model, and tend to have high reliability. If a system is allowed to develop without the benefit of a model to emulate, the system can become complex and interwoven in such a way that modeling is extremely difficult and, intuitively, the whole system is suspect. A safe rule, then, is never design a system that cannot be reduced to a tractable reliability model.

Of course, the instrumentation system must still meet all its normal objectives of performance. The reliability discipline is simply superimposed on the usual detail design procedure. The concepts of reliable system design are not difficult, nor are the associated mathematical relations. For this reason it is preferable that a designer with a good reputation for instrumentation design take on the disciplines of reliability engineering rather than interpose a reliability engineer as a series element in the design chain. A reliability engineer serves the highest purpose when used as a consultant and when he and the designer approach a problem with open minds and an honest desire to understand the system.