EXAMPLES OF REACTIVITY-CONTROL SYSTEMS

Grounding of Neutron-Monitoring System

The neutron-monitoring system is a vital part of both the reactor control system and the plant protection system. The system must have proper grounding and shielding. Some factors involved in determining the proper grounding and shielding methods are:

1. The length of the signal conductors between the detectors and amplifiers.

2. Methods used for internal grounding of detectors and amplifiers.

3. Methods used for grounding the electrical distribution systems of the building.

Several types of neutron sensors are widely used by the nuclear industry (see Chaps. 2, 3, and 4). Some variation exists in the manufacture of nuclear-instrument circuits in regard to the method of providing internal signal grounding. Some nuclear instruments have the signal ground on the chassis, whereas others have the signal ground insulated from the chassis ground.

The following discussion concerns the grounding systems currently accepted by the nuclear industry in operating power plants.

(a) Grounding of Signal and Control Cables. The following must be kept in mind when considering the methods to be used for grounding neutron-monitoring systems:

1. There is always some potential difference between two points on the earth’s surface.

2. Because a cable is connected to a ground bus, it is not necessarily a good ground.

3. Ground connections are not always noise-free.

Since it is virtually impossible to eliminate noise and induced current in ground connections, the proper procedure to follow in wiring practice consists of routing the inevitable currents around the equipment in such a manner that the signal input is not affected.

Figure 10.17 shows a typical sensor-preamplifier-count-rate-meter combination as often installed, along with some of the sources of error and interference due to potential differences generated by the system at various points. Figure 10.18 shows the same system with different grounding connections to eliminate ground loops in sensitive circuitry. If the ground loops shown in Fig. 10.17 are not removed, a “battery” voltage composed of noise is impressed on the opposite ends of the cable shield, thus effectively causing a current in the shield that may add to or modulate the desired signal. Ground loops are eliminated by isolating the system from ground except at the console. This is only one of several noise-rejection grounding techniques available. Others are electric differential-input techniques and the use of balanced lines [see Sec. 10-5.6(b)].
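The severity of such a ground loop can be estimated with a short calculation. The sketch below is illustrative only; the ground potential difference and resistances are assumed values, not figures from the text.

```python
# Rough estimate of ground-loop interference in a cable shield.
# All numeric values are illustrative assumptions, not measured data.

def shield_noise_voltage(v_ground_diff, r_shield, r_rest_of_loop):
    """Return (loop current, voltage dropped along the shield).

    The potential difference between the two earth points acts as a
    'battery' driving a current around the shield/ground loop; the IR
    drop along the shield appears in series with the signal.
    """
    i_loop = v_ground_diff / (r_shield + r_rest_of_loop)
    return i_loop, i_loop * r_shield

# Assume 100 mV between building grounds, a 0.05-ohm shield, and
# 0.15 ohm for the rest of the loop (conduit, ground straps).
i, v_err = shield_noise_voltage(0.100, 0.05, 0.15)
print(f"loop current = {i*1e3:.0f} mA, error voltage = {v_err*1e3:.0f} mV")
# → loop current = 500 mA, error voltage = 25 mV
```

Even a few tens of millivolts in series with a low-level detector signal can rival genuine neutron pulses, which is why Fig. 10.18 breaks the loop by grounding at a single point rather than trying to lower the loop resistance.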

(b) Grounding and Shielding Practice.[26] We made a survey of 10 major nuclear power plants in the United States, including 4 of the largest operating plants, according to MW(e) output in service as of January 1969. The purpose was to collect and compile data on methods used in these plants for grounding and shielding the neutron-monitoring systems. Each had experienced noise problems in the neutron-monitoring channels.

Figures 10.19 to 10.21 illustrate the grounding and shielding methods used by the nuclear power plants surveyed. Numerous modifications were made on the equipment after the plants were constructed. Such modifications as adding line filters, radio-frequency (r-f) grounds, and π filters were made to the equipment to suppress noise and interference. In a majority of the plants surveyed, a single-point ground system was used, either by grounding the system at the nuclear-instrument cabinet or at the neutron detector. Other nuclear power plants provided grounding at both the neutron detector and the amplifier cabinet.

Figure 10.19 illustrates a grounding method in which the neutron-monitoring-system ground is made at the amplifier cabinet. The pulse amplifiers, counting circuits, and other associated circuits are grounded to the building ground. Generally, instruments in the cabinet are grounded to the building ground by connecting all instruments to a bus bar in the cabinet. The grounding bus bar is connected to the building ground through a grounding cable.

The entire neutron-monitoring system from the pulse amplifiers to the neutron sensor is insulated and floated above ground. This method prevents circulating ground currents from causing noise and distortion in the electrical signal being transmitted to the control room.

Figure 10.20 illustrates a method where the single-system ground is made at the sensor or at the preamplifier. The signal-cable shielding and the ground side of the signals in the amplifier cabinet are all insulated and floated above ground.

Figure 10.21 illustrates a method where multiple-system grounds are used. Grounding takes place at both the amplifier cabinet and the preamplifier. The neutron-detector cable may be grounded at the reactor or ungrounded the full distance to the preamplifier.


Fig. 10.19—Single-point ground system, ground at cabinet.

 


Fig. 10.20—Single-point ground system, ground at preamplifier.

 

Fig. 10.21—Multiple-point ground system (building ground).

(c) Engineering Data Sheets, Grounding and Shielding.

Engineering data sheets, included as an appendix to this chapter, explain the methods used for grounding the start-up channels at 10 major operating nuclear power plants in the United States. Included with the system descriptions are comments on operating problems and modifications made to the equipment to prevent noise and interference. These data sheets should be used by both the design and construction engineer in selecting the best method of grounding an instrumentation system. Information is included on how to avoid pitfalls that have been encountered in nuclear power plants and, more important, how to avoid having to make extensive modifications after the plants have been constructed.

Grounding and shielding problems in nuclear plants are often associated with the neutron-monitoring start-up channels. A number of mechanisms in a nuclear plant generate r-f and low-frequency noise, which finds its way into the neutron-monitoring start-up channels. This noise generates signals that must be cut off by a higher discriminator voltage setting, thereby materially reducing channel sensitivity. If rate amplifiers and reactor trip circuits are used with the start-up channels, inadvertent scrams result when noise increases. As the reactor power is increased and current-measuring channels take over from the pulse count-rate channels, the effect of noise is much less important.
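The sensitivity penalty described above can be sketched numerically. A minimal simulation, assuming invented Gaussian amplitude populations for neutron and noise pulses (the specific means and spreads are illustrative, not from the text):

```python
import random

random.seed(1)

# Invented amplitude populations (volts): neutron pulses are larger on
# average than noise pulses, but the two distributions overlap.
neutron_pulses = [random.gauss(1.0, 0.3) for _ in range(1000)]
noise_pulses = [random.gauss(0.3, 0.15) for _ in range(5000)]

def counts_above(threshold):
    """Count neutron and noise pulses passing a discriminator threshold."""
    n = sum(a > threshold for a in neutron_pulses)
    noise = sum(a > threshold for a in noise_pulses)
    return n, noise

for thr in (0.2, 0.5, 0.8):
    n, noise = counts_above(thr)
    print(f"threshold {thr:.1f} V: {n}/1000 neutron pulses kept, "
          f"{noise} noise counts passed")
```

Raising the threshold sharply reduces the noise counts but also discards a growing fraction of genuine neutron pulses, which is the loss of channel sensitivity the text describes.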

The following conclusions can be drawn from the information contained in the engineering data sheets:

1. A single-point grounding system, grounded to the building ground at the amplifier cabinet with the signal-cable shields and neutron sensors floated above ground, appears to be most widely used by the nuclear industry. The entire system is grounded at one point.

2. The use of triaxial cable in place of coaxial cable is becoming standard throughout the nuclear industry, thus improving the suppression and rejection of noise and interference in the neutron-monitoring channels.

3. In the use of coaxial and triaxial cable, it appears that the required care is not used in the installation of cable connectors and terminations during the construction phase of the plant. Rework of cables and connectors is often necessary after plant construction has been completed.

4. Eliminating noise at the source is a task that is quite frequently done by operating and maintenance staffs at nuclear plants. This task sometimes involves days of tedious work to isolate and eliminate the source of noise. Faulty a-c and d-c machinery, relays, switches, motor starters, etc., are sources of noise.

5. Techniques, such as the use of line filters, r-f filters, and other electronic means, are being used by some nuclear plants to reject and suppress noise in the neutron-monitoring channels.

(d) Noise Filter Design. There are times when a rapid “fix” is needed on the signal input or power input of a piece of instrumentation to eliminate high-frequency noise present on the line. Although the use of commercial radio-frequency interference (RFI) and line filters is recommended, there are times when a simple 18-dB/octave, low-pass π or T filter can solve a noise problem. Figure 10.22 shows design details for networks of this type.

Fig. 10.22—Typical π and T networks. R = load resistance (ohms); L = R/πfc (henries); C = 1/πfcR (farads); fc = frequency at cutoff (hertz).

When a filter of this type is installed, it should be placed in its own metal box where possible. If not possible, at least the input and output leads should be separated. Such a filter can be used either as a low-level-signal filter or as a high-level power-line filter if the components are suitably rated. These filters may be placed in power input lines to any piece of equipment in a neutron-monitoring system and may also be used in signal input leads to equipment as long as the desired input-signal frequency is below fc. These filters are not used in pulse-amplifier inputs.
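The component relations given in the caption of Fig. 10.22 (L = R/πfc, C = 1/πfcR) can be evaluated with a few lines; the 50-ohm load and 100-kHz cutoff below are example values only.

```python
import math

def lowpass_lc(r_load, f_cutoff):
    """Component values for the constant-k low-pass sections of Fig. 10.22.

    L = R / (pi * fc)   henries
    C = 1 / (pi * fc * R)   farads
    """
    L = r_load / (math.pi * f_cutoff)
    C = 1.0 / (math.pi * f_cutoff * r_load)
    return L, C

# Example: 50-ohm load, 100-kHz cutoff
L, C = lowpass_lc(50.0, 100e3)
print(f"L = {L*1e6:.1f} uH, C = {C*1e9:.1f} nF")
# → L = 159.2 uH, C = 63.7 nF
```

In the π network the full C is split into two C/2 shunt legs, and in the T network the full L is split into two L/2 series legs, as the figure indicates.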

Nuclear Power Plant E

Type of neutron detector: BF3
Type of signal cable: RG-59/U coaxial

Location of pulse preamplifier: Instrument pit at edge of loading-face shield
Location of pulse amplifier: Control room
Distance between pulse amplifier and preamplifier: 200 ft
Distance between neutron detector and preamplifier: 75 ft

Method of grounding: Single-point grounding system. The detector and the preamplifier are isolated from ground. The coaxial-cable shields and all external chassis are grounded to the pulse-amplifier chassis. The pulse amplifier is grounded to the nuclear instrument panel.

Operating problems and modifications: When placed in operation, the system was plagued by noise bursts, high-level background noise, cable ringing, and electromagnetic pickup so large in magnitude that neutron pulses were not distinguished by the system. Many multiple grounds in the system were found and eliminated. The system was converted to a single-grounded system by insulating the detector and preamplifier from ground and tying the system to ground at the pulse amplifier. The system was improved greatly; however, some disturbances were still present in the system. These disturbances, although not large enough to prevent operation, were annoying, and it was felt that the use of triaxial cable in the system would eliminate the disturbances altogether.

Production Inspection and Test

(a) Inspection and Test Instructions. Machined Parts. Inspection instructions for machined parts are normally simple enough to be included as part of the production planning. When special instructions are required, a separate inspection instruction may be written and referenced on the planning sheet.

Subassemblies. Often it is necessary to ensure that subassemblies are correct before progressing to the next assembly step, particularly where the next assembly step will obscure visual access or where the next assembly operation is expensive and would be wasted should previous operations prove to be faulty. The quality engineer must assess the type of inspection or test that is needed after every production operation. He may have to design special test equipment (black boxes) or inspection fixtures, such as “go, no-go” gages. Again the complexity of the instruction determines whether it can be a part of the production planning or if a special instruction is required.

Test instructions for modules or boards containing active circuit elements should be derived from engineering test specifications. Modules or boards with only passive elements may be inspected visually and tested later as part of a complete instrument. Functional tests are normally performed on each active-element-containing module or board at ambient conditions. The functional tests may include the following:

1. Zero, balance, and calibration adjustments to speci­fication.

2. Load regulation to specified tolerances.

3. Linearity, pulse width, waveform, and dynamic range to specification.

4. Trip-circuit accuracy, hysteresis, load, and range to specification.

Electronic Assemblies. Typically nuclear electronic assemblies include rack-mounted chassis-type equipment, such as power supplies, source-range monitors (count-rate meters), intermediate-range monitors (log N current amplifiers), and power-range monitors (flux amplifiers), as well as wide-range monitors (picoammeters) with their associated logic and trip circuits. Test instructions for this equipment should be derived from engineering test specifications. For special-purpose instrumentation, test instructions may be derived from customer specifications as well as from engineering test specifications. Standard tests that may be included in the test procedures include the following:

1. Mechanical zeroing of all meters.

2. Power-supply input and output voltage, ripple, and line and load regulation checks to specified tolerances.

3. Zero, balance, and calibration adjustments to specification.

4. Rise time, linearity, pulse width, waveform, and dynamic range checks to specification.

5. Overall response time to specification with simulated maximum-input cable capacitance.

6. Trip-circuit accuracy, hysteresis, and range to specification.

7. Calibration checks to specified tolerances at all outputs.

8. A full-load run for 24 hr at maximum specified ambient temperature, followed by an operational recheck.

Nuclear Sensors and Peripheral Equipment. Test instructions for sensors should be derived from engineering test specifications and may include:

1. Gamma and neutron sensitivity checks.

2. Gamma compensation check.

3. Insulation-resistance and high-voltage (hi-pot) checks.

4. Mass-spectrometer leak test.

5. Cable-resistance test at room temperature and at maximum specified temperature.

6. Pressure test at room temperature.

7. Dye-penetrant and radiographic tests as specified.

8. Continuity checks.

Systems Test Instructions. Systems test instructions may be derived from customer specifications or test specifications provided by Engineering. Standard system tests normally include the following:

1. Point-to-point wire check of all interconnections per connections diagram.

2. Tests at 1000 volts above normal control voltage and/or insulation-resistance tests at 500 volts on all power and control wiring. (Instruments, including meters and recorders, are disconnected and/or shorted during test.)

3. Functional electrical tests with interconnections to simulate field wiring, which may include:

a. Simulated (including cable capacitance) current or pulse signals to all neutron- and gamma-monitoring channels through all ranges or decades.

b. Externally connected loads of specified impedance.

c. Operation as system elements of all switches, meters, relays, recorders, lamps, logic circuits, and other panel-mounted devices.

d. Recording of specified data, such as accuracy, response times, trip points, logic operation, stability, and repeatability.

4. Functional checks on each panel-mounted process instrument, which may include the following:

a. Rough accuracy checks at or near 10 and 90% of scale using pneumatic or electrical signals to simulate process variables.

b. Interlocking, alarm, and trip contacts operation at panel terminal-board points.

Final Inspection of Systems. To assure the quality of the completed system, the inspector should use a checklist. A typical checklist is given in Appendix B to this chapter.

(b) Test and Inspection Equipment. General. Test equipment may differ for factory or field use. Factory test operations involving high production rates require automated or semiautomated devices. Care must be taken that data obtained by such devices can be verified by equipment available to the field. Factory test equipment used for development, design, and low-production-rate work should be selected from commercially available items. Equipment used for production prototype final design and tests should have specifications that can be duplicated in all significant characteristics by factory and field equipment.

Field test equipment, in addition to duplicating factory equipment characteristics, should be selected, insofar as possible, from vendors who have nationwide service capabilities (or worldwide in the case of systems sold overseas). Some required field test equipment may not be commercially available, in which case instrument-system vendors must supply special portable test equipment using normal design and production methods plus issuance of complete operation and maintenance instructions.

System manufacturers must issue formal listings of all test equipment required, including either catalog numbers or essential characteristics. If system manufacturers are responsible for field check-out, they should insist on the right to review customer-purchased test equipment for compatibility. Obviously factory training programs should use the same listed test equipment.

High-Production-Rate Test Equipment. Although a hard-and-fast rule cannot be made, a continuing production rate of 500 relatively complex circuit boards of one type per year usually justifies automated testing and the associated investment in design and equipment. A subsystem of interconnected assemblies comprising many circuit boards or modules of several types might justify automated tests at subsystem production rates of five per year.

Automated equipment is usually designed and built by the instrument-system manufacturer. There are also test-equipment vendors specializing in custom design and/or building of such devices. The cost of such equipment must be weighed against expected product design life. It should be recognized that the more highly automated the device, the less adaptable it is, in general, to changes in production design.

Semiautomated test equipment usually consists of specially designed devices that interface conventional signal sources, the production item under test, and conventional output readouts. The interface device accepts a circuit board, a module, or an interconnected assembly directly by mating connectors. It may also contain signal conditioners, voltage, load, or other parameter-changing elements that can, when a few switches are operated and external input signals are varied, test the production item.

Although no rigid distinction exists between semiautomated and fully automated devices, the latter replace the manual switching and variation of input signals with electromechanical or electronic switching and stepping circuits. Manual output logging is usually replaced by a digital tape printer. The item under test is simply plugged in, a start button is pushed, and the test is automatically completed with data printed out. Large interconnected assemblies can be connected by mating connectors or terminal fanning strips. One type of automated test device used for circuit-board and module testing continuously compares production items with known good standard boards or modules on a go, no-go basis and alarms when defects are found. Some testers may even localize the failed circuit element or region.
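The comparison tester just described can be sketched in a few lines; the test-point names, reference values, and 5% tolerance below are hypothetical, chosen only to illustrate the go, no-go logic.

```python
# Sketch of a go, no-go comparison test: the unit under test is compared
# point by point against readings from a known-good ("golden") board.
# Test-point names, reference values, and tolerance are hypothetical.

GOLDEN = {"TP1_volts": 12.0, "TP2_volts": 5.0, "TP3_ms_rise": 0.8}

def compare(unit_readings, tolerance=0.05):
    """Return a list of (test point, fractional deviation) outside tolerance."""
    failures = []
    for tp, ref in GOLDEN.items():
        dev = abs(unit_readings[tp] - ref) / ref
        if dev > tolerance:
            failures.append((tp, dev))
    return failures

board = {"TP1_volts": 12.1, "TP2_volts": 4.4, "TP3_ms_rise": 0.82}
bad = compare(board)
print("NO-GO:" if bad else "GO", bad)
```

A hardware comparator does the same thing continuously and raises an alarm on the first out-of-tolerance point; the dictionary of failures corresponds to the tester's ability to localize the failed region.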

Examples of nuclear-instrument circuit boards and modules lending themselves to semiautomated or fully automated testing are d-c amplifiers, trip circuits, voltage regulators, power supplies, and generally items used with uncommon signal-conditioning boards or modules in assemblies comprising an instrument entity, such as a count-rate meter, a log N amplifier, or a mean-square-voltage neutron monitor. Base-mounted multichannel in-core power-range monitor subsystems, flux-mapping systems, and control-rod-position information systems may contain “card files” of identical circuit boards, which, along with their systems, can profitably be tested with automated equipment.

The degree of customization required for this type of test equipment precludes any detailed description here.

Field and Factory (Nonautomated) Test Equipment. Field and factory test equipment should be selected, wherever possible, from commercially available items, preferably with availability as extensive as the market to be served by the instrument manufacturer. In development work, where the circuits are not directly used in production equipment, relatively more sophisticated items can be justified, and a greater variety of items can be used than would be appropriate in final design and quality-control work. Development extras include sampling oscilloscopes, spectrum analyzers, multichannel pulse-height analyzers, harmonic wave analyzers, noise generators, double-pulse generators, and similar equipment. High accuracy is desirable but not mandatory.

Test equipment used in design, quality-control, and field work need not be as sophisticated as that used in development, but it must yield accurate and reproducible results. National Bureau of Standards traceable devices to generate or measure alternating and direct voltage and current, time and frequency, resistance, inductance, capacitance, pressure, and temperature must be available, either owned or leased from a local calibration service, and used for calibration of design, quality-control, and development test equipment. Test equipment used in the field at locations where local commercial calibration service is not available must be supplemented with minimal portable standards, such as current sources, which can be periodically sent out for calibration and recertified.

As noted earlier, devices used for factory design, factory test, and field test should duplicate each other in all significant characteristics. This is especially true of equipment used for pulse or complex wave-form generation and analysis. As an example, if the designers use a 50-MHz response oscilloscope in the factory to obtain wave-form or response-time data (to be included in the instruction manual) on a fast preamplifier driven by 10-nanosecond rise-time input pulses and a 25-MHz oscilloscope in the field, the test pulses will display different wave forms and lead to unnecessary troubleshooting. Interface devices, such as terminating elements, must also be specified in detail by designers so that results can be duplicated. Considerable frustration can be experienced in reactor instrument-system check-outs because of nonduplication and failure to properly interface.
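The oscilloscope mismatch in the example above can be quantified with two common rules of thumb (assumptions not stated in the text): a scope's 10-90% rise time is roughly 0.35 divided by its bandwidth, and the rise times of cascaded single-pole stages add approximately in quadrature.

```python
import math

def scope_rise_time(bandwidth_hz):
    """Approximate 10-90% rise time of a scope: t_r ~ 0.35 / BW (rule of thumb)."""
    return 0.35 / bandwidth_hz

def displayed_rise_time(signal_tr, scope_tr):
    """Rise times of independent single-pole stages add roughly in quadrature."""
    return math.hypot(signal_tr, scope_tr)

signal_tr = 10e-9  # 10-nsec rise-time input pulse from the text
for bw in (50e6, 25e6):
    tr_disp = displayed_rise_time(signal_tr, scope_rise_time(bw))
    print(f"{bw/1e6:.0f}-MHz scope: displayed rise time {tr_disp*1e9:.1f} nsec")
# → 50-MHz scope: displayed rise time 12.2 nsec
# → 25-MHz scope: displayed rise time 17.2 nsec
```

The same 10-nsec pulse thus appears roughly 40% slower on the 25-MHz field scope than on the 50-MHz factory scope, which is exactly the kind of wave-form discrepancy that triggers unnecessary troubleshooting.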

Special Test Equipment. Special test equipment denotes equipment that is required for factory or field test of production equipment but is not commercially available. Although this would include the automated test equipment previously discussed, here it shall be assumed to be portable equipment required for factory or field check-out of instruments or systems. Examples are test fixtures used to simulate control-rod-position indicators or multiple current inputs or to generate squaring-circuit calibration signals. Test devices should also be provided for modules or circuit boards used in systems that do not permit bypassing during operation.

The nuclear instrument manufacturer is responsible for reviewing the testability of his product and offering, as manufactured and documented items, all special test equipment needed in the field to calibrate the product and demonstrate its operability. This includes jumpers for interconnecting items under test with test devices and power supplies as well as special input or output load simulators.

Specific Test Equipment for Nuclear and Process Sensors and Instruments. Nuclear sensors and channels, such as logarithmic and linear count-rate meters and logarithmic and linear current amplifiers, are used at most reactors. Process sensors and instruments for measuring temperature, pressure, level, and flow are also used. Mean-square-voltage monitors and power-averaging instruments for both in-core and out-of-core applications are gaining popularity.

The lists of equipment given here are intended to be typical rather than complete. Except where noted, required test equipment is commercially available, usually from several vendors. Overseas applications must specify proper line voltage and frequency. Some available devices may combine listed functions.

Table 11.1 lists the equipment required for testing and inspecting nuclear sensors. Table 11.2 is a similar list for eight basic nuclear instruments: log or linear count-rate meters, log N amplifiers, period meters, linear mean-square-voltage monitors, linear switched direct-current (often called d-c wide-range) monitors, single-range d-c (power-range) monitors, power-averaging instruments, and process and area radiation monitors. Table 11.3 lists general-purpose equipment used in troubleshooting, instrument power-supply voltage setting, and calibration of other test devices.

Because of piping requirements, process primary sensors and associated transmitters are frequently calibrated or tested in place. This requires portable test devices. Control-room indicators, controllers, signal conditioners, recorders, and similar items lend themselves more readily to instrument-shop test or calibration. A clean shop air supply must be provided for pneumatically operated instruments. Interfaces, corresponding to dummy loads for nuclear instruments, are usually far more difficult to simulate for process

Table 11.1—Equipment Required for Testing and Inspecting Nuclear Sensors

1. High-resistance meter (to measure cable or detector insulation or leakage resistance). Range, 1 x 10^6 to 1 x 10^14 ohms full scale, switched; accuracy, ±20% of reading above 10% of full scale; d-c test voltages, 10, 50, 250, and 1000 volts with 10^-5 amp limit.

2. Low-current or voltage meter (to measure background sensor current or cable-to-detector polarization current or voltage). Range (d-c), 10^-8 to 10^-13 amp, switched, or (d-c) 1 to 100 mV, switched; input resistance, 10^4 to 10^11 ohms ±5%, depending on voltage or current range.

3. Current-limited power supply (to check detector element breakdown voltage). Range, 0 to 1500 volts d-c, limited to 100 μA (or use a limiting resistor and microammeter to calculate resistor voltage drop).

4. Test sources:

a. Gamma sources to cover energy and intensity ranges specified for the system. Sources of 137Cs or 60Co are adequate if the energy response of the system is documented. Intensity must be high enough to activate the highest range (or decade) response with acceptable geometry.

b. Neutron sources to provide low-range response on start-up and wide-range instruments. For some intermediate- and all single-range power-range monitors, the reactor is the test source, and initial activation and operation to full-scale ranges must be observed during start-up.

instruments. Static check-out of components does not always predict dynamic operation in a system with control valves and piping. Table 11.4 lists test equipment for temperature monitors, and Table 11.5 lists equipment for testing pressure and differential-pressure (flow or level) monitors. As in the listing for nuclear sensors and instruments, the listed devices are typical for the types of the instruments or sensor combinations noted. For most overseas applications, equivalent metric scales must be specified.

(a) Special Environmental Test Requirements. The original design of a nuclear instrument must be qualified in the anticipated nuclear-reactor environment, and the instrument itself may have to be subjected to that environment during production testing. An example of this is the ASME Boiler and Pressure Vessel Code testing of in-core detectors. All in-core instrumentation must be subjected to minimum ASME Code requirements and certified by the authorized code examiner. Obviously radiation-detection instruments must be exposed to nuclear radiations to determine whether they are working properly; nevertheless, there are practical limits of absorbed dose which should not be exceeded. Instruments that are to be subjected to high temperature should be tested at a sufficiently high temperature to ensure that they will have adequate insulation resistance when used in a nuclear reactor. The ability of the instrument to withstand low-frequency vibration, such as might be encountered during seismic disturbances, would normally be proved out during qualification testing; however, for certain sensitive equipment a vibration test may be included as a production test.

Table 11.2—Equipment Required for Testing Nuclear Instruments

A. Count-Rate Meters, Log or Linear, Including Preamplifiers

1. Pulse generator

Internal repetition frequency: 5 Hz to 1 MHz, continuous, with rough calibration only.

Externally synchronized repetition frequency: d-c to 2 MHz; output pulse duration, 100 nsec to 100 msec, continuous.

Output pulse rise time: <10 nsec.

Output pulse amplitude: 0 to 30 volts peak into 1000-ohm load, continuous, negative, and positive.

2. Electronic counter (to accurately set pulse-repetition frequency)

Range: 5 Hz to 11 MHz.

Accuracy: 1 part in 10^6 per year (cumulative), ±1 count, with minimum 10-mV rms input. Internal standard: 100 kHz or 1 MHz.

Gate time: 0.1 to 10 sec.

3. Step attenuator (coaxial) (to match pulse generator to preamplifier)

Range: 0 to 120 dB in 10-dB steps.

Impedance: 50 ohms nominal. Power dissipation: 0.5 watt average.

4. Standard capacitor (coaxial) (to generate simulated detector charge pulses)

Value: 1, 10, or 100 pF, depending on charge range required; accuracy, ±0.1%.

5. Oscilloscope (dual-trace)

Bandwidth: d-c to 50 MHz above 20 mV per vertical division; d-c to 40 MHz, 5 to 20 mV per division.

Time base: 0.1 μsec to 1 sec per horizontal division, calibrated; deflection factor, 5 mV to 10 volts per division, calibrated; accessories, 1:1 and 10:1 probes.

Delay to permit viewing leading edge of triggering wave form.

B. Log N Amplifiers

1. Current source (d-c) (self-contained or calibrated resistance box)

Range: 5 x 10^-3 to 1 x 10^-13 amp.

Accuracy: ±1% at 5 x 10^-3 to 1 x 10^-6 amp; ±2% at 1 x 10^-6 to 1 x 10^-9 amp; ±3% at 1 x 10^-9 to 1 x 10^-11 amp; ±4% at 1 x 10^-11 to 1 x 10^-12 amp; and ±5% at 1 x 10^-12 to 1 x 10^-13 amp (accuracy may be attained with aid of voltage and temperature corrections).

Source impedance: at least a factor of 1000 greater than input impedance of device under test at test current level.

2. Voltage source (for resistance box, if used)

Range: as required for above current range.

Accuracy: as required for above overall accuracies.

3. Oscilloscope (to observe response times, spurious signals, etc.). See item 5 under section A of this table.

C. Period Meters (Without Self-Contained Ramp Generator)

1. Function generator (triangular wave form)

Range: 0.01 Hz to 1 kHz, switched by decade.

Accuracy: ±1 division for 92 dial divisions (9 to 101).

Linearity: less than 1% over full range.

Output: 0 to 10 volts peak to peak into 600 ohms.

2. 10-turn potentiometer with calibrated dial (to reduce generator output)

Resistance: 600 ohms.

Linearity: <1%.

Accuracy: ±1%.

3. Oscilloscope (see item 5 under section A of this table)

Frequency response: ±1% at 50 Hz to 1 MHz; ±5% at 10 to 50 Hz and 1 to 10 MHz.

Input impedance: 1 megohm shunted by 50 pF (maximum).

4. Oscilloscope (see item 5 under section A of this table).

D. Linear Mean-Square-Voltage Monitors (Including Preamplifier)

1. Test oscillator (sine wave)

Range: 10 Hz to 10 MHz, decade switched.

Accuracy: ±3% of dial reading.

Output: 0.3 mV to 1 volt rms into 600 ohms.

Attenuator: 70 dB in 10-dB steps over output range.

2. Test attenuator (to interface oscillator output with monitor or preamplifier)

Special test device to provide complementary output steps that are proportional to the square root of monitor range steps. For example, if the monitor is switched to a 1-decade less-sensitive range, the attenuator must supply a signal greater by √10, or 3.16.

3. True rms voltmeter

Range: 1 mV to 10 volts full scale, √10 range-switch factor.

E. Linear Switched Direct-Current (Wide-Range) Monitors

See current and voltage source for log N amplifiers (items 1 and 2 under section B of this table) except that calibrator accuracy should be a factor of 4 better than specified monitor accuracy, range for range. This implies that some potentiometric or standard ramp or capacitor calibrator checking system, not commercially available as a single device, would be required for accuracy below about 10⁻⁸ amp.

F. Single-Range Direct-Current (Power-Range) Monitors

1. See current and voltage source for log N amplifiers (items 1 and 2 under section B of this table) except that calibrator accuracy should be a factor of 4 better than specified monitor accuracy. This can usually be accomplished with precision resistors and d-c current and/or voltage standards available to 0.1% or better accuracy over the current ranges involved (usually 10⁻⁶ to 5 × 10⁻³ amp).

2. Oscilloscope (see item 5 under section A of this table).

G. Power-Averaging Instruments

Test fixture (to simulate multiple sensors or flux amplifiers)

A special test device to supply constant currents or voltages to the number of channels averaged. The device must have both single-channel and averaged-output self-checking ability with an accuracy a factor of 4 better than the averaging instrument or must possess interfacing switches to permit external current or voltage monitoring using test devices of that accuracy.

Oscilloscope (see item 5 under section A of this table).

H. Process and Area Radiation Monitors (Typically Gamma Monitoring)

In principle and in electronic test-equipment requirements, these instruments are either identical to sections A, B, and E of this table or are combinations of them (many area monitors are log count-rate meters at lower radiation levels and become log current amplifiers at higher levels). In addition to the test devices described, radioactive sources to cover specified energy responses and intensities are required (see item 4a of Table 11.1).

Table 11.3—General-Purpose Equipment Used in Calibrating Test Devices, Troubleshooting, Etc.

(Some or all of the items listed below, in addition to those listed in Tables 11.1 and 11.2, are widely used for troubleshooting, for setting instrument power-supply voltage, for calibrating test equipment, etc.)

1. D-c volt-ohm ammeter (typical electronic type)

Voltage range 1 mV to 1000 volts, √10 range switch factor.
Input resistance 10 megohms minimum.
Current range 1 µA to 1 amp, √10 range factor.
Ohmmeter range 1 ohm to 100 megohms, center scale.
Accuracy ±1% of full scale, all voltage ranges; ±2% of full scale, all current ranges; and ±5% at center scale, all resistance ranges.

2. Multimeter

Voltage range 1 to 5000 volts a-c or d-c at 20,000 ohms/volt d-c and 5000 ohms/volt a-c.
Current range 50 µA to 10 amps d-c.
Ohmmeter range 1 ohm to 10 megohms.

Accuracy ±5% of full-scale voltage and current, all ranges, and ±10% of center-scale reading, all ohmmeter ranges.

3. D-c voltage standard

Null voltmeter and standard voltage source; range 1 to 1000 volts full scale, decade switched, 20-mA capability as source.
Accuracy ±0.02% of setting or reading.

4. A-c differential voltmeter

Range 1 to 1000 volts full scale, decade or √10 range factor.
Accuracy ±0.15% of full scale at power-line frequency.

(This may be used in conjunction with a regulated a-c voltage source for meter calibration.)

5. D-c power supplies

Selected to match ranges of instruments’ internal power supplies and also power sources if d-c powered. Supplies should have current-limiting capabilities, ±0.1% line or load regulation, and 1 mV or 0.1% of setting maximum rms noise (whichever is greater)

6. Sinusoidal voltage regulator

To supply standard line voltage and specified line frequency at ±0.2% line or load regulation and 3% maximum harmonic distortion. A 1-kW minimum rating is recommended.

7. Cables, connectors, and adapters

Test cables with connectors to mate any instrument used with any test device, including inter-type and tee adapters, as required.

8. Dummy loads

Test loads to simulate system input and output impedances.

Table 11.4—Test Equipment for Temperature Monitoring [30] [31] [32] * [33]

Table 11.5—Test Equipment for Pressure and Differential-Pressure (Flow or Level) Monitoring

1. Pressure-vacuum variator (bellows for generating pressure to 30 psig or vacuum to −20 in. Hg). Effective volume, 12.5 in.³

2. Dual-range deadweight tester with pump (oil): 0 to 600 psi (5-lb increments) and 0 to 3000 psi (25-lb increments); accuracy, 0.1% at increments.

3. Test gage (one per range): range, −30 to 15 psi, 0 to 30 psi, 0 to 60 psi, 0 to 100 psi, 0 to 300 psi, 0 to 600 psi, 0 to 1000 psi, 0 to 1500 psi, and 0 to 2000 psi; accuracy, ±0.25% of full scale or, with movable tabs, ±0.1% of tab point when calibrated with deadweight tester.

4. Dial manometers (one per range) 0 to 30 in. Hg and 0 to 60 in. Hg, accuracy, ±0.1% of full scale.

5. Slope tube manometer 0.5 to 2.0 in. (Hg or water).

6. Portable test pump (water) 0 to 5000 psi, with test gages 0 to 160 psi, 0 to 600 psi, and 0 to 5000 psi, accuracy, ±0.25% of test gage full scale.

7. Water-weight gage 20 to 3000 psi in 0.2-psi increments; accuracy, ±0.1% at increment.

8. Differential pressure indicator 0 to 200 in. water, 0.5-in. divisions, accuracy, ±0.5% of full scale.

(d) In-Process Inspection. The variety of instruments and associated equipment involved in nuclear-reactor operation is so great that in this chapter it is only feasible to survey briefly the various in-process inspection and test techniques that are available and useful to the industry.

Nuclear Radiation Sensors. In-process control of coated electrodes, whether uranium or boron (see Chap. 3), demands careful analytical techniques during the coating process. Proof-testing with dummy ion chambers is an excellent technique after the coating process has been completed, but, once the process has been qualified, it need only be performed on a sample basis. Two of the most important tests performed on ion chambers are the mass-spectrometer helium leak test (after practically every welding or brazing operation) and the high-voltage insulation-resistance test (after practically every important assembly operation).

Electronic Control and Monitoring Equipment. In-process inspection and test procedures vary according to the manufacturing techniques used. Basically, visual examination is required after every major operation or series of minor operations (operations include hand soldering, wire wrapping, etc.). A typical subassembly, such as a printed-wire board, has all components mounted on it by one or several operators. It is then checked as a first piece by an inspector and run through a solder machine. The board is again inspected and then subjected to a performance test, after which (if all is well) the remainder of the production lot may be run. Tests from this point on may be either on a sampling basis or 100%, depending on such factors as complexity, quantity, and adequacy of succeeding tests on the next-level assembly.

A common technique for verifying whether or not a board is correct is to use the first-piece sample as an inspection aid against which succeeding boards are checked.

The tester can be set up on the same principle: a known good board and the board under test are electronically compared by subjecting both boards to identical signals and automatically comparing outputs across a bridge circuit. Necessary adjustments can then be made by tuning for a null.

A variety of automatic and semiautomatic testing devices are available or can be designed to meet specific objectives. Such test devices can range from a simple black box to an elaborate computer arrangement that analyzes and prints out data automatically. Each situation has to be evaluated and analyzed by a competent test engineer. Computers can be effectively adapted to testing some of the more complex logic systems that are necessary in nuclear-reactor control systems.

Peripheral Equipment. Testing of peripheral equipment often presents a great challenge to the test-equipment designer because the equipment usually combines mechanical and electrical capabilities and often creates serious space problems (e.g., a drive-mechanism test in conjunction with a traversing in-core probe).

In-process inspection and test of penetration seals is extremely critical since Boiler Code requirements must be met. Records are important for such tests as leak tests under pressure. High-potential and insulation-resistance testing offer real challenges because of the safety aspect and the sheer number of combinations and permutations on seals with multiple penetrations.

(e) Serialization and Control. Control of equipment by serialization is usually a function of production control. A common technique is to maintain a notebook of consecutive serial numbers for each major subassembly. This would normally be a component that could be provided as a spare part; it would have a functional specification that could be tested and might require a data sheet completed by quality-control test.

The serial numbers can be assigned as a block when the work order is initiated. Serial numbers can be physically affixed to subassemblies in any number of ways, such as by wired-on tags, etching, screwed-on plates, and silk screening.

Test data are normally filed first by drawing number and then by serial number. Copies of data from all indentured parts are usually filed together with the top assembly in the customer project file.

Serialization of larger discrete assemblies, such as ion chambers and source-range monitors, is accomplished in the same manner as above except that it may be important to date code. One way to do this without revealing the actual date (if this is desirable) is to establish a date code, such as A to L for the months January to December and A to Z for the years 1961 to 1986, and use the date code as a prefix or suffix to the regular serial number. Thus a serial number DF87432 would indicate the 87,432nd unit of a particular drawing number, shipped in April 1966.
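The date-code scheme just described can be sketched as a small decoder; the letter assignments (A to L for January to December, A to Z for 1961 to 1986) and the DF87432 example follow the text, while the function itself is a hypothetical illustration:

```python
def decode_serial(serial: str):
    """Decode a date-coded serial number of the form <month><year><unit>.
    Example from the text: 'DF87432' -> D = 4th month (April),
    F = 6th year starting from 1961 (1966), 87432 = unit number."""
    month_letter, year_letter, unit = serial[0], serial[1], serial[2:]
    month = ord(month_letter) - ord('A') + 1     # A..L -> 1..12
    year = 1961 + (ord(year_letter) - ord('A'))  # A..Z -> 1961..1986
    return month, year, int(unit)

print(decode_serial("DF87432"))  # (4, 1966, 87432)
```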

(f) Final Inspection and Test Requirements. Radiation-Detection Devices. Final inspection and test of radiation-detection devices is accomplished in about as many ways as there are different types of detectors. However, some general principles apply. For example, even though the best possible test for any item is to test it in its actual operating environment, this is usually impractical and often undesirable, particularly where the test causes the item to become radioactive. Therefore it is often necessary to devise substitute tests that test only certain characteristics over a limited portion of the total operating range.

Since many nuclear radiation-detection devices, such as in-core sensors, cannot be tested in their intended operating environments, the importance of in-process tests and inspections cannot be overemphasized. For example, when a uranium coating has been checked and found to be correct through in-process testing, when the various parts and components have been found to be dimensionally correct, when the final fill-gas purity and pressure have been determined to be correct, and when the continuity of the center conductor is proven, about the only item left to verify is leaktightness of the final seal. Since a high-temperature insulation-resistance test normally reveals problems of this nature, it may be this final test that gives a high degree of confidence that the unit is functionally operable.

Electronic Control and Monitoring Equipment. Final tests of electronic chassis, assemblies, and systems should simulate actual operating conditions as closely as possible. Special test equipment must be devised to load individual units with simulated inputs, e.g., pulses and ramp currents. As noted in Sec. 11.5(a), a variety of functional electrical tests can and should be performed whenever practicable. The checklist of Appendix B is also useful for inspection of systems.

Peripheral Equipment. Since final testing of peripheral equipment is usually only an extension of the in-process tests that have already been performed, it is not necessary to repeat, only to mention, that there is no substitute for a complete and thorough checklist when performing final inspection.

(g) Disposition of Rejected Units. Rejected units, regardless of whether they are small fabricated parts, large subassemblies, or completed instruments, must be properly labeled and physically separated from accepted units and the uninspected portion of the lot. The label must be distinctive and must include the drawing number of the unit, the serial number of the unit, the reason for rejection, and the specification limits of the rejected parameter. There should also be space on the label for an inspection stamp and date.

Most manufacturing organizations use a Material Review Board (MRB), normally made up of representatives from Design Engineering, Manufacturing Engineering, and Quality- or Process-Control Engineering, to review rejected material. The objective of the material review is to determine the best disposition of the rejected material. An appropriate disposition may be to (1) scrap, (2) rework to drawing, (3) rework to MRB instructions, or (4) accept as is. A decision by the MRB should be unanimous. Obviously any decision to rework or accept must be made only after careful consideration. Sometimes special studies have to be made to determine the possible effects of accepting out-of-spec material. The personnel comprising the MRB must be experienced and knowledgeable since the board must take into account all viewpoints (safety, quality, cost, and schedule).

(h) Reporting and Disposition of Error Correction. Any well-run quality assurance organization has a built-in feedback loop whereby errors or defects are reported and assimilated into a report that automatically exposes problem areas where attention is required. Many subtle but expensive problems are never brought to light simply because the reporting system only reports significant problems requiring immediate action or, worse yet, there is no reporting system and problems are attacked only when they threaten shipments.

A system to report defects, based on such items as total scrap and rework expense or impact on shippable sales so that reasonable priorities may be established automatically, is an excellent technique for applying the “Pareto” principle, i.e., 10% of the problems account for 90% of the excess cost. By implementing such a system, the process-control engineers or the quality-control engineers can apply their efforts where they will pay off most for the company.
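The prioritization just described amounts to ranking defect categories by cost and reading off the cumulative share; a minimal sketch, with defect categories and dollar figures invented purely for illustration:

```python
# Hypothetical scrap-and-rework costs per defect category (illustrative).
costs = {"cold solder joints": 9000, "wrong component": 400,
         "bent pins": 300, "missing hardware": 200, "mislabeling": 100}

total = sum(costs.values())
running = 0.0
# Rank categories by cost; the top few dominate the excess cost,
# which is where process- and quality-control effort pays off most.
for category, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
    running += cost
    print(f"{category:20s} {cost:6d}  cumulative {100 * running / total:5.1f}%")
```

With these invented numbers the top category alone carries 90% of the excess cost, the Pareto pattern the text describes.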

Process-Monitoring Instrumentation

Process-monitoring instrumentation refers to those systems outside the neutron-monitoring group which indicate, record, and control all operational systems within a nuclear-reactor facility. A channel includes the primary sensor, the interconnecting conductors, the measuring circuit, and displays. The primary sensors are located at a point in the process system. The measuring-circuit instrumentation (amplifier, recorder, power supply, etc.) is located in the main control room or in an auxiliary control area.

Grounding and shielding of instrumentation systems accomplish two major purposes: (1) proper operation of the instrumentation by reducing or eliminating erroneous signals and (2) personnel safety with respect to hazardous operating parameters.

(a) Personnel Safety. Equipment grounding provides safety for personnel who might come in contact with the equipment. As far as instrumentation is concerned, two main hazards exist for which protection must be provided: electrical shock and heat. All external surfaces should be at ground potential and at a temperature less than 120°F. Where this is not practical for the component itself, protection for personnel safety should be provided by completely surrounding the instrument or installation. The National Electrical Code gives rules for grounding that have wide commercial application. These same rules, with more details added by individual manufacturers or by special applications, can be used to ensure proper personnel protection for most types of installations and equipment.

(b) System Problems (Involving Erroneous Signals). In instrument systems, good stable grounds are needed to provide a measurement reference, a solid base for the rejection of common-mode signals, and effective shielding for low-level circuits.

A stable reference for measurement can be provided positively in only one way: by referencing all measurements to a single-point ground. However, this is not possible except through the use of a floating system (i.e., all system components completely isolated). A floating system also provides a firm base for the rejection of common-mode signals. Compared to single-point grounding, the instrumentation reference ground is inferior; however, it is more generally used. The instrumentation reference ground uses a grid or bus that is maintained, as nearly as possible, at a consistent fixed electrical potential. Circuits and systems using this type of ground-bus system must be grounded at one point only. Grounding the system at more than one point will create ground loops, which will cause erroneous signals to be introduced into the equipment, either directly through one of the signal leads or induced through the shielding.

Since in most instances the primary element is located some distance from the measuring circuit, there most certainly will be a difference in ground potential or reference. Also, owing to the distances involved, sizeable differences in conductor impedance to ground could exist. These conditions cause two of the main problems in low-level measuring circuits: ground loops and common-mode signals. These two problems can be dealt with either by eliminating the conditions that cause the problem or by rejecting the erroneous signals that are produced as a result.

Some elimination remedies follow. These remedies are usually difficult to implement [see Fig. 10.23(a)]:

1. Interrupt the continuity of the ground loop while preserving the path for the sensor signal [i.e., increase the ground impedance (Zg), ideally to infinity].

2. Reduce the resistance of the ground conductor (Rg) to zero. [This effectively shorts the total ground potential (VAB).]

3. Break the ground-loop current path (Ig) by floating the system (i.e., isolate the sensor or the amplifier and power supply, grounding at a single point only). Note: If the amplifier is used to feed signals to a recorder, analog-to-digital (A/D) converter, display, or other data-handling device, the path of the ground-loop current may be reestablished through these devices.
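The effect of these remedies can be seen in a simple lumped model: the ground-potential difference (VAB) drives a loop current through the ground-conductor resistance (Rg) and the loop impedance (Zg), and the drop across Rg appears in series with the signal. The numeric values below are illustrative only:

```python
def ground_loop_error(v_ab: float, r_g: float, z_g: float) -> float:
    """Error voltage injected in series with the signal by a ground loop.
    v_ab: ground-potential difference between the two ground points (V)
    r_g:  resistance of the ground conductor (ohms)
    z_g:  impedance of the rest of the ground-loop path (ohms)"""
    i_g = v_ab / (r_g + z_g)  # ground-loop current
    return i_g * r_g          # drop across the ground conductor

v_ab = 0.5  # 0.5 V between the two "ground" points (illustrative)
print(ground_loop_error(v_ab, 1.0, 1.0))   # untreated loop: 0.25 V of error
print(ground_loop_error(v_ab, 1.0, 1e9))   # remedy 1: Zg driven toward infinity
print(ground_loop_error(v_ab, 1e-6, 1.0))  # remedy 2: Rg driven toward zero
```

Either remedy drives the injected error toward zero, which is exactly what items 1 and 2 above prescribe.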

Generally, some method of rejecting the erroneous signals is more practical than the elimination remedies discussed above. One of these rejection remedies is to interrupt both the signal and ground-loop currents and then transmit the signal while blocking the ground-loop current. Two common methods to achieve this are:

1. A transformer and a modulator-demodulator can be used [see Fig. 10.23(b)]. Some of the advantages of this method are (1) a wide voltage range of both signal to amplifier ground and signal to common mode can be handled and (2) error rejection is independent of closed-loop gain. Some of the disadvantages are (1) the bandwidth is limited by the modulating frequency, (2) output errors are induced by intermodulation, and (3) error-reducing feedback cannot be used without reestablishing the ground-loop path.

2. A switched capacitor can be used [see Fig. 10.23(c)]. This method has the same advantages and disadvantages as the transformer method, with the added disadvantage of poor frequency response.

A popular method of curing ground-loop problems is to use a differential amplifier. Identical fractions of the ground-loop voltage are applied to the inverting and noninverting inputs of the differential amplifier. This causes the ground-loop voltages to be seen as a common-mode voltage, and, as such, they are rejected (to a degree depending on the particular amplifier used). This method is also directly applicable to differential transducer signals that are imposed on relatively high levels of common-mode voltage. Here the differential amplifier allows extraction of these low-level signals from the high-level common-mode voltages.
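The degree of rejection is conventionally quoted as a common-mode rejection ratio (CMRR) in decibels; a minimal sketch of the residual error, with illustrative numbers (the 100-dB figure is an assumption, not from the text):

```python
def common_mode_error(v_cm: float, cmrr_db: float) -> float:
    """Equivalent differential input error produced by a common-mode
    voltage v_cm on an amplifier with the given CMRR in decibels."""
    return v_cm / (10 ** (cmrr_db / 20))

# A 1-V ground-loop voltage seen as common mode by an amplifier with
# 100 dB CMRR appears as only 10 microvolts of differential error.
print(common_mode_error(1.0, 100))  # 1e-05
```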

Although differential amplifiers in general have extremely good common-mode voltage rejection, they are not perfect. Certain system parameters affect the level of these common-mode signals and can be manipulated to help reduce the problem. Ideally, the common-mode signals can be eliminated by either of two methods [Fig. 10.23(a)]: reducing the conductor resistances of both the signal and ground (Rs and Rg) to zero or increasing the impedances to ground of both the signal and ground conductors (Zsg and Zgg) to infinity. Practically, this same result can be attained by making the ground and signal conductor resistances (Rg and Rs) equal and the signal and ground impedances to ground (Zsg and Zgg) equal. This can be accomplished by observing the following rules:

1. Use a “balanced line” between the sensor and the amplifier (i.e., equal resistance and impedance in both the signal and ground conductors between the sensor and amplifier and between the conductors and ground). This can be done most easily by using a shielded twisted pair of conductors for transmission of the signal from the sensor to the amplifier.

2. Keep signal cables as short as possible.

3. Use a source with a center tap if possible.
Both the signal source and the cable impedance have a shunting effect on the input signal to the amplifier. Having equal impedances at each end (sensor and amplifier) is ideal for maximum power transfer; however, in transferring voltage signals, this is detrimental owing to line losses caused by system impedance. Therefore, if low-impedance sensors and high-input-impedance amplifiers are selected, the signal current can be kept at a minimum, thus minimizing system error due to voltage drop. For systems where higher sensor impedance is required, correspondingly higher amplifier input impedance should be used. Good practice dictates that the input impedance of the amplifier should be at least 10 times the output impedance of the sensor.
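The loading effect described above is just a voltage divider between the sensor output impedance and the amplifier input impedance; a minimal sketch with illustrative impedance values:

```python
def loading_fraction(z_source: float, z_input: float) -> float:
    """Fraction of the sensor's open-circuit voltage that appears at the
    amplifier input, treating the connection as a voltage divider."""
    return z_input / (z_source + z_input)

# The 10:1 rule of the text: amplifier input impedance at least
# 10 times the sensor output impedance.
print(loading_fraction(1_000, 10_000))     # about 0.909 (roughly 9% loss)
print(loading_fraction(1_000, 1_000_000))  # about 0.999 (negligible loss)
```

The 10:1 ratio bounds the divider loss at about 9%, a fixed, calibratable attenuation; raising the input impedance further makes the loss negligible.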

The following general rules should be observed in the installation of low-level signal systems:

1. Avoid ground loops.

2. Provide a stable signal ground and a good signal — shield ground.

3. Ground the signal circuit and the signal shield at one common point.

4. Never use a signal-cable shield as a signal conductor.

5. Ensure that the minimum signal interconnection is a uniformly twisted pair of wires with all return current paths confined to the same signal cable.

(c) Primary Elements (Sensors and Transducers). Observance of the following basic rules with respect to primary elements will eliminate or alleviate many of the problems associated with grounding and shielding of low-level-signal transmission systems:

1. Use low-signal-source-impedance devices whenever possible. This not only reduces system noise but also minimizes the shunting effect at the input of the measuring circuit.

2. Use a center tap on the sensor output whenever possible. This permits the signal-cable shield to be firmly fixed and operated at a minimum potential with respect to the signal pair, thus providing the most effective shielding.

3. Use special configurations, such as the noninductive strain gage, to reduce or eliminate interference problems from electromagnetic fields, magnetic fields, and other types of induced noise.

4. Ensure proper isolation from all mounting hardware for isolated sensors.

(d) Interconnection. Observance of the following basic rules with respect to interconnection will eliminate or alleviate many of the problems associated with grounding and shielding of low-level-signal transmission systems:

1. Use a “balanced line” between the sensor and the amplifier (i.e., equal resistance and impedance in both the signal and ground conductors between the sensor and amplifier and between the conductors and ground). Use a twisted, shielded pair.

2. Keep signal cables as short as possible.

3. Never use splices in signal leads.

4. When using multipin connectors: (1) use adjacent pins for signal pairs, (2) carry the shield through pins adjacent to signal pins, and (3) use spare pins as a shield around the signal pair by grounding them together and then to the signal shield.

5. Separate low-level-signal cables and power cables by the maximum practical distance and cross them, where necessary, at right angles.

6. Isolate signal cables with conductive conduits and wireways.

7. Ensure that spare shielded conductors in signal cables are single-end grounded, with the shields grounded at the opposite end.

(e) Measuring Circuit. Observance of the following basic rules with respect to the measuring circuit will eliminate or alleviate many of the problems associated with grounding and shielding of low-level-signal transmission systems:

1. Ensure that the measuring circuit has (1) a high common-mode signal-rejection ratio, (2) high input impedance, (3) good d-c stability, and (4) wide bandwidth.

2. In terminating the signal cable to the measuring device, use twisted leads exposed for as short a distance as possible from the shielded cable.

10-5.7 Radiation-Monitoring Instrumentation

Most radiation-monitoring equipment requires the use of remotely located detectors connected to the monitor and control section by a multiconductor cable. Generally, these systems use a common ground for signal reference, power, and chassis. A separate conductor, as well as the shield for the signal leads, should be used between the control unit and the detector unit. The manufacturer’s recommendations for grounding and shielding should be followed explicitly, and care should be taken to ensure that the mounting and assembly of components is proper. Most of the problems and their solutions discussed earlier in this chapter are directly applicable to radiation-monitoring instrumentation.

Nuclear Power Plant F

Type of neutron detector: BF3
Type of signal cable: Triaxial
Location of pulse preamplifier: In amplifier cabinet at control room
Location of pulse amplifier: In amplifier cabinet at control room
Distance between pulse amplifier and preamplifier: 2 ft
Distance between neutron detector and preamplifier: Approximately 140 ft

Method of grounding: A single-point grounding system is used. System ground is made at the amplifier cabinet. The signal cable and neutron detector are insulated and floated above ground. The two shields in the triaxial cable are connected together to one electrode of the neutron detector. The two shields are grounded at the amplifier cabinet. The signal grounds in the preamplifier and the amplifier are grounded at the amplifier cabinet. The amplifier cabinet is grounded to the building ground through grounding cable and buses.

Test- and Inspection-Equipment Engineering

Test equipment and measurement systems are generally sufficiently complex to require a special engineering group to provide measurement hardware. The requirement for test equipment is set by the quality-control engineer. The test- and inspection-equipment engineers, who are basically design oriented, take the quality-control engineer’s requirements and provide the hardware that will be used by the process-control engineer in implementing the quality-control engineer’s plans. The most comprehensive measurement expertise in an organization usually resides in the test- and inspection-equipment engineering group. Calibration to ensure correct functioning of test equipment is often a responsibility of this group as well.

Test equipment may be defined as that equipment used to measure or generate any of the various units of measure. This distinguishes between equipment that is part of the manufacturing process (and thus the responsibility of the manufacturing organization) and the test and inspection equipment. It is convenient to consider two categories of test equipment: commercial equipment (equipment that is commercially available and may be purchased from a vendor) and special equipment (equipment that is designed, usually by the test- and inspection-equipment engineering group, to solve a particular measurement problem). Since both categories involve intimate knowledge of measurement techniques (although no design skills are required in specifying commercial test equipment), both categories of test equipment are usually made the responsibility of the test- and inspection-equipment engineering group. The two categories may be further broken into mechanical and electrical equipment.

(a) Commercial Test Equipment. The availability of suitable commercial equipment should always be explored before the decision is made to design special equipment. The cost of the commercial instrument will almost without exception be less than the cost of designing a piece of special equipment. In addition to lower cost, there are a number of other advantages to using commercial equipment. Commercial equipment is generally flexible, having been designed for the broadest possible market. This flexibility reduces the chance of obsolescence when a measurement requirement changes or a new process must be measured. An additional, and often major, advantage is that, when a particular commercial instrument breaks down, a substitute is available within the plant or from the vendor.

There are many pitfalls to be avoided in purchasing commercial test equipment, and it is wise to let the responsibility rest with a group that specializes in measurement problems and test equipment. The newest manufacturer with equipment having the latest innovations is not necessarily the best choice of vendor. Unfortunately, new companies frequently drop a product line or go out of business, leaving the purchaser with an unsolvable maintenance and parts problem. The purchaser must also be wary of “specmanship,” where the manufacturer carefully selects his specification wording to make his product appear better than his competitor’s. An example of this might be a digital voltmeter that operates within its temperature specification on a hot day but does not meet its accuracy specification unless it is operating at the low end of its line-voltage specification and (in addition) unless it has been calibrated and adjusted during the preceding 48 hr. The final choice of manufacturer and model should be based on:

1. The ability of the equipment to perform the required task and any reasonable variations of the task.

2. The stability of the manufacturer and the proven reliability of his products.

3. The compatibility of the equipment with existing test equipment (in terms of interfacing with other equipment and maintenance and calibration).

4. Price and delivery. Price should be considered after the first three considerations since problems with any of the first three usually result in losses far exceeding any price differential in competitive products.

(b) Special Test Equipment. Justification. The decision to design and build special test equipment is usually made when it has been determined that there is no commercial test equipment to perform the particular measurement. Less frequently, the decision is made when commercial equipment is so general purpose and the particular measurement is so specialized that it is cheaper to design and build a special instrument than to purchase the general-purpose equipment. The most common pitfall in this latter situation is finding that, after the special test equipment has been built, a design change in the equipment being tested results in total and irreversible obsolescence of the special test equipment. Even where there is no choice but to design specialized test equipment, obsolescence through design change of the assembly under test presents a real hazard. The solution is to design all special test equipment to be as flexible as practical.

Design and Building. Special test equipment is produced in small quantities, typically only one of a kind being built. The labor costs for design and construction are the major items of expense, and material costs are not significant. This must be kept in mind. For instance, other considerations being equal, it is not profitable to devote 2 hr of design effort to avoid the use of an SCR that is $10 more expensive than another SCR. Similarly, incorporating available designs as elements in the special-equipment design can save time and money. Duplication of circuits in the unit under test is often necessary to ensure compatibility between the test equipment and the tested unit. Although the designer has greater freedom in some respects, he has some constraints that are more stringent than those imposed on the product design engineer. Accuracy of the equipment being used to measure or to generate quantities normally must be 10 times greater than the tolerance of the device under test, i.e., the test equipment itself can contribute no more than 10% of the error that is allowed for the unit under test. There are occasions when even 10% test-equipment error cannot be tolerated. There are other occasions when state-of-the-art measurements are made and a 10:1 accuracy ratio cannot be achieved. Although

special test equipment involves construction of a single item or, at most, a limited quantity, workmanship cannot be compromised, and rugged construction is generally a must. Engineering breadboards and prototypes cannot simply be stuffed into a box and shipped out for use. Inadequate mechanical ruggedness, as well as poor solder connections, in breadboards and prototypes is certain to create problems in normal use. One technique normally associated with production in quantity can be applied to advantage in the production of special test equipment, namely, the use of printed-circuit boards. These provide ruggedness and also have the short direct paths between discrete components needed in high-frequency and pulse circuits. Printed-circuit-board construction, even on a single-unit basis, can be as economical as a terminal-board layout. Definite cost savings can be realized if two or more units are built. Moreover, the printed-circuit board ensures similar performance of the units.
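The 10:1 accuracy-ratio rule discussed above can be sketched in a few lines. This is an illustrative calculation only; the function name and the example tolerance are assumptions, not part of the handbook's procedure.

```python
# Sketch of the 10:1 accuracy-ratio rule: the test equipment should
# contribute no more than 10% of the error allowed for the unit under
# test.  Numbers below are illustrative.

def required_test_equipment_tolerance(unit_tolerance, ratio=10.0):
    """Maximum allowable test-equipment error for a given
    unit-under-test tolerance, using an accuracy ratio (default 10:1)."""
    return unit_tolerance / ratio

unit_tolerance_volts = 1.0  # tolerance of the device under test
meter_error = required_test_equipment_tolerance(unit_tolerance_volts)
print(meter_error)          # 0.1 V: the meter may contribute 10% at most
```

When a 10:1 ratio cannot be achieved, as the text notes for state-of-the-art measurements, the ratio argument would simply be relaxed and the added uncertainty accounted for explicitly.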

The approach in designing special test equipment is similar to that in designing a commercial product. Initially the design goals are established in close cooperation with the quality-control engineer. Design alternatives are studied, and the best is selected for the particular measurement problem. In this choice a number of factors must be considered: the complexity or difficulty of the measurement, working environment, skill level of the operator, expected useful life of the test equipment, cost per measurement over the useful life, operator convenience or human engineering, ease of maintenance and calibration, cost of maintenance and calibration, accuracy, safety, and initial cost. Once the approach has been established, the design engineer develops the detailed design, including breadboards of critical circuits if the problem is electrical. Construction is best accomplished under the direction of the design engineer by people familiar with the peculiarities of test-equipment construction. The completed equipment or first item is evaluated by the design engineer and, with more-complex design problems, further evaluated by the quality-control engineer. After his acceptance the equipment is passed through the Calibration Laboratory for formal entry into the calibration cycle. Finally, the equipment is released to the process-control engineer for application.

Documentation. The documentation problem on special equipment is as complex as that for a new product to be marketed commercially. The first documentation takes place in the engineering notebook or similar document, where various design approaches are explored and details supporting the design logic are recorded. When problems arise as the result of measurements made by test equipment, whether special or commercial, the first area questioned is the adequacy of the test equipment. This is rightfully so, and it is only sound engineering practice to have the validity of the design well documented. In many cases the normal documentation associated with construction and calibration of test equipment is not adequate for this purpose. Documentation of the design details, with one exception, must be complete and under formal control. Because of the unique nature of test equipment and the importance of decisions that are based on its performance, the designer must clearly express all the details of his design to the group constructing the equipment and the group calibrating and maintaining the equipment. The exception to this would be the assembly details that are normally associated with a new product. Since special test-equipment production usually consists of only a few items and since they are constructed under the direct supervision of the test-equipment engineer, those assembly details not essential to the accuracy of the equipment need not be documented. Calibration instructions and specifications must be provided for the Calibration Laboratory. The equipment must not only perform correctly when initially released but must also continue to perform within specification, be periodically recalibrated, and be successfully repaired when necessary, all without the direct and continuing direction of the test-equipment engineer. When more-complex test-equipment designs are involved, the quality-control engineer's test or inspection instructions may not be adequate.
Then a detailed instruction manual, similar to the manual for any commercial piece of test equipment, must also be provided by the test-equipment design engineer. This manual would provide the same type of information customarily put into instruction manuals for commercial equipment, such as oscilloscopes, frequency counters, and spectrum analyzers.

Follow-up and Feedback. Performance and continuing evaluation of special equipment is monitored in two ways. The maintenance record maintained by the Calibration Laboratory provides an accurate record of long-term performance. The process-control engineer provides rapid feedback of any critical problems. During the first months of use, he must be particularly alert to spot problems not readily apparent in an engineering evaluation. One common weakness of test equipment that is not always found in an engineering evaluation is its response to certain modes of failure in the unit under test. Under some conditions destruction of the test equipment is possible, and protection must be designed into it. The long-term performance information from the Calibration Laboratory maintenance record can be used to revise calibration recall intervals, i.e., to base them on actual performance. In addition, this record may also indicate more subtle reliability problems.

(c) The Calibration Laboratory. Both commercial and special test and inspection equipment must perform as intended by the original design engineer so that the test technician will feel confident of the results he obtains. The Calibration Laboratory ensures this performance through its various activities and provides the basis for his confidence.

The fundamental responsibility of the Calibration Laboratory is to ensure that the units of measure at a particular facility have the same dimensions as those defined and maintained at the National Bureau of Standards. There are four basic areas of responsibility:

1. Traceability of all units of measure to the National Bureau of Standards (NBS). Traceability implies both derivation of the local unit from the national unit and a known accuracy relation between the local and national units.

2. Maintenance, both preventive and corrective, must be performed to ensure continued performance of equipment within its specifications.

3. Documentation must be initially validated and there­after maintained to ensure that the equipment can be calibrated and restored, if necessary, to the required level of performance.

4. Recall control must be established to ensure that the equipment can be relied on throughout its life to be within its specifications with a reasonable degree of confidence.

Organization and Staffing. The Calibration Laboratory can take many forms, from a single person who performs all the basic functions to individual departments for each of the various tasks. In a medium- or large-scale operation, the first logical division of effort should be between the maintenance function and the standards function. Meaningful results in standards work can be obtained only when meticulous care is taken in making measurements by someone who is not only familiar with the techniques involved but also takes great pride in his work. The maintenance function also requires special talents, but not to the same degree as standards work. In either case the individual must live by the rule that the quality of work, not the quantity, is of prime importance. Where there are a number of people on the Calibration Laboratory staff, there must be technically competent leadership. In small organizations this is provided by the supervisor or manager. In large organizations engineers who are specialists in the areas of electrical and mechanical measurements provide the leadership.

New Equipment Control. All new equipment, commercial or special, should pass through the Calibration Laboratory before it is released to the end user. An initial calibration is performed to ensure that the equipment is indeed within its specification. At the same time the documentation is checked to make certain that it is adequate for future calibration and maintenance. The performance and repair record, discussed later in this section, is initiated at this time, and the equipment is placed on a recall schedule. These steps are taken to establish a firm base for all future control of the equipment.

Calibration Standards. The traceability of any unit of measure, including a known accuracy relation to the unit as defined by the National Bureau of Standards, is the fundamental responsibility of the Calibration Laboratory. The number of intermediate steps between the end user and the NBS depends on the accuracy requirements. Traceability does not imply direct comparison of the local standards against the NBS standard. A valid traceable standard can exist even though the measurement has been passed from NBS through many intermediate laboratories, provided the accuracy degeneration is correctly defined.

In each calibration laboratory some instrument or piece of equipment represents the most accurate repository at that local level for a particular unit of measure. This then becomes the primary standard for that unit in that laboratory. Several echelons of measure may still exist between this local primary standard and the end user of test equipment. In some cases the local primary standard must be used directly to calibrate test equipment; more often it is used to calibrate other standards called "secondary standards" or "working standards." Depending on the accuracy required of the local primary standard, it may be calibrated either by an independent laboratory or directly by the NBS. Every measurement made degrades the transferred value of the standard involved in that measurement to some extent. This is the disadvantage of using intermediate laboratories in establishing calibration standards. In very accurate measurements this degeneration often cannot be tolerated. Despite this basic disadvantage, not all local primary standards should, or even can, be calibrated directly by the NBS. Calibration at NBS is usually much more expensive than traceable calibration through an independent laboratory, and it usually involves delays that may not be tolerable where alternate equipment is not available. In addition, NBS will not work with the less-accurate standards that are often all that is required for a particular local primary standard.
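The accuracy degeneration through a chain of calibration transfers can be sketched numerically. This is a hypothetical illustration: the step uncertainties are invented, and the root-sum-square combination is one common convention, not a rule stated in the text.

```python
import math

# Hypothetical sketch of accuracy degradation through a calibration
# chain.  Each transfer from NBS down to the working standard adds its
# own uncertainty; here the steps are combined root-sum-square.

def chain_uncertainty(step_uncertainties):
    """Combined uncertainty after several calibration transfers
    (root-sum-square of the individual step uncertainties)."""
    return math.sqrt(sum(u * u for u in step_uncertainties))

# Illustrative numbers: NBS transfer, independent laboratory, local
# primary standard, working standard (all in parts per million).
steps_ppm = [2.0, 5.0, 10.0, 25.0]
print(round(chain_uncertainty(steps_ppm), 1))
```

The result is dominated by the last, least-accurate transfer, which is why the text notes that very accurate measurements may not tolerate intermediate laboratories.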

One word of caution on standards of any type: no device, either mechanical or electrical, remains indefinitely stable, regardless of the ultimate source of the instability. For this reason all test equipment is placed on a calibration recall cycle. Standards, too, are subject to instability. The shiny new standard is not nearly as valuable as the old standard that has a proven record of stability. The standards of any calibration laboratory increase in value with time as their history of stability is documented.

Use of Outside Laboratories. The integrity of traceable calibration is not jeopardized in the least by using outside laboratories for calibration of either the local primary standards or the user's test equipment. In the case of standards calibration, the first consideration must be that the required level of accuracy for that standard can be achieved by a laboratory other than the NBS. Each intermediate step between the user and the ultimate reference at NBS degrades the transferred value of the unit. The particular outside laboratory chosen to perform the calibration must be selected with care, especially if a local primary standard is to be calibrated. There are a number of obvious things to look for: good equipment, certificates establishing traceability, neat and well-organized laboratory areas, and available and well-used reference material. These are all items that may easily be checked. More difficult to determine is the level of competence of the personnel. Regardless of the quality of the physical facilities, reliable results in precision measurements cannot be obtained without highly qualified personnel.

The choice of an outside laboratory for routine calibration of standard test equipment is not as critical, although the selection should still be made with care. The reason for using an outside laboratory instead of NBS for calibration of local primary standards is that the turnaround time is shorter than at NBS. On the other hand, the decision to use an outside laboratory for calibrating test equipment is based on economics. If the number of calibrations per year of a particular type is limited, use of an outside laboratory that already has appropriate standards and trained personnel is more practical. When the cost of performing your own calibration is being compared with the cost of having an outside laboratory perform the work, the cost of calibrating local primary standards must also be included. It is not at all uncommon for the price of a single NBS calibration to exceed the cost of the equipment being calibrated.

Frequency of Calibration. A frequency of calibration is established to give the equipment user reasonable assurance that the equipment will remain within its specifications between calibrations. The frequency depends on the type of test equipment and its application. Because the severity of operating conditions can vary so widely, calibration intervals for a class of instruments are usually based on their history of performance. This ensures an optimum recall interval for the particular conditions of environment, use, and abuse. Initially, intervals are established by reference to handbooks and manuals that describe normal recall intervals for typical applications. Too long a calibration interval is undesirable because it diminishes the probability of the instrument's remaining within its specifications throughout the interval. The cost of repairs per call-in usually increases in this circumstance. On the other hand, too frequent call-in has disadvantages. Calibration costs per year on the instrument are higher than they should be. In addition, the instrument is out of service more frequently than is necessary, resulting in inconvenience to the user and the need for backup equipment that would not otherwise be required.
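A performance-based recall interval of the kind described above can be sketched as a simple adjustment rule. The thresholds and scaling factors below are assumptions for illustration; actual laboratories would set these from their own history cards.

```python
# Hypothetical sketch of performance-based recall-interval adjustment:
# if nearly all instruments of a class are found within specification
# at recall, lengthen the interval; if too many arrive out of
# tolerance, shorten it.  Thresholds and factors are assumed values.

def adjust_interval(interval_days, n_checked, n_in_tolerance,
                    lengthen_above=0.95, shorten_below=0.85):
    rate = n_in_tolerance / n_checked
    if rate >= lengthen_above:
        return int(interval_days * 1.25)   # performing well: recall less often
    if rate < shorten_below:
        return int(interval_days * 0.75)   # drifting: recall more often
    return interval_days

print(adjust_interval(180, 40, 39))   # 97.5% in tolerance -> 225 days
print(adjust_interval(180, 40, 30))   # 75% in tolerance -> 135 days
```

This captures the trade-off in the text: too long an interval raises the risk of out-of-specification use, while too short an interval raises calibration cost and downtime.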

The significance of calibration stickers should be considered. Too frequently a current calibration sticker is considered proof that the instrument is functioning correctly and, conversely, an expired sticker proof that the instrument is no longer within specifications. Barring errors, the sticker is only proof that the instrument was within specifications at the time of calibration. It also signifies that, if the instrument is within the calibration interval, there is a reasonable probability that it is still within specifications. If the calibration interval has expired, there is a diminished probability that the instrument remains within its specifications. With both electrical and mechanical test equipment, there is always the chance of a subtle failure that is not readily detectable. There is no substitute for intelligent use of test equipment by a user who is alert to subtle irregularities.

Call-in Techniques. It is often more difficult to break loose a piece of equipment from the user for a trip to the Calibration Laboratory than it is to repair and calibrate the instrument. Two direct approaches are generally used. When the instrument is calibrated, a sticker is put on it which gives, among other information, the date when the instrument is due for recalibration. The user himself can thus see when the equipment must be returned for recalibration, and he can schedule his use of the equipment to allow turn-in before the calibration has expired. The Calibration Laboratory keeps a record of the date the instrument is due for recalibration. Each month lists of equipment due for recalibration are circulated to alert the users so they can plan around the temporary loss of the equipment. Delinquent lists are also published by the Calibration Laboratory when equipment is not turned in as required. The attitude of the supervisor or foreman in the area using the equipment can be helpful in ensuring the timely turn-in of the equipment. Another call-in technique that can be used in some special situations is the running-time meter. Where equipment degeneration is based on running time rather than elapsed time since previous calibration, the running-time meter can be used.

Performance and Repair Records. The Calibration Laboratory must generate at least two records: the calibration sticker and the equipment-history card. The calibration sticker is placed on the instrument after calibration and contains, as a minimum, the date calibrated, the date due for recalibration, and the identification of the person who performed the calibration. Information such as equipment-use limitations and equipment accuracy may also be desirable. The equipment-history card contains all the basic data and history of the instrument. In small- to medium-size operations, this record is normally a maintenance or history record that is maintained on a one-card-per-instrument basis. In large organizations the information may be entered into a computer. The basic information that must be maintained in this record includes the description of the equipment, the identifying serial number, the name of the person who performed the last calibration, the date of the last calibration, the date recalibration is due, and the condition the equipment was in when received for repair or calibration. Besides providing the basic information for calibration recall, the history information is useful in troubleshooting, and the information on equipment condition is used to establish realistic calibration intervals.

Obsolescence Determination. Most equipment eventually reaches the point where it becomes more economical to replace it than to continue it in service. There is no problem in determining that new measurement requirements have exceeded the capabilities of existing instrumentation. More difficult to determine is when equipment should be removed from service owing to lack of use or the high cost of repair. The history card is the basic tool for making this determination. Data on the equipment condition when received and the extent of repairs necessary at each calibration can be used to decide when it is cheaper to invest in a new instrument with low maintenance cost. When this judgment is being made, maintenance costs associated with equipment abuse should be excluded since, in most cases, these costs would continue even with new equipment. Disposal of equipment for lack of use is highly dependent on an organization's specific situation. The advantages of removing the equipment from an organization's capital assets should be considered, as well as the cost of storage, equipment deterioration, and the risk of equipment obsolescence with prolonged storage.
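The repair-or-replace judgment from the history card can be sketched as a simple cost comparison. All dollar figures and the planning horizon are invented for illustration; note that abuse-related costs are excluded, as the text recommends.

```python
# Hypothetical sketch of the repair-or-replace judgment based on the
# equipment-history card.  Abuse-related repair costs are excluded
# because they would continue even with new equipment.

def cheaper_to_replace(annual_repair_cost, abuse_related_cost,
                       replacement_cost, new_equipment_annual_maintenance,
                       planning_horizon_years=5):
    keep_cost = (annual_repair_cost - abuse_related_cost) * planning_horizon_years
    replace_cost = (replacement_cost
                    + new_equipment_annual_maintenance * planning_horizon_years)
    return replace_cost < keep_cost

# Old meter: $400/yr repairs, $100 of which is abuse; a new meter costs
# $1000 and needs about $50/yr of maintenance.
print(cheaper_to_replace(400, 100, 1000, 50))   # True: replacement is cheaper
```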

Grounding, Shielding, and Connection of Computers and Digital Data-Holding Systems[27]

The same general principles of isolation of signal leads and shielding of sensitive circuits apply to both analog and digital systems. There are, however, distinct differences between the techniques of interference suppression in digital and analog systems.

Because of the higher signal levels and the high-frequency signals used in digital devices, standing waves, stray inductances and capacitances, RFI, and line propagation delays may become problems if not carefully taken into consideration. Many digital control signals have rise times in the nanosecond range, so there are frequency components in the hundreds of megahertz. Connecting cables must be treated as transmission lines to pass this information from circuit to circuit.

The following points should be considered to provide suitable interconnections between units of high-speed digital systems:

1. Cables carrying frequency components above 100 kHz should be terminated properly, i.e., with both ends having an impedance match between the cable and the circuit input or output. Ordinary wire has a characteristic impedance of about 150 ohms at frequencies where standing waves become a problem, so this is a good value to choose as a terminating resistor when an approximate first choice is called for. If coaxial cable is used, the terminating resistor should match the cable impedance. The output of the equipment must be able to supply the current to drive the characteristic impedance of the cable under continuous load conditions if long-term constant d-c signal levels are expected [Fig. 10.24(a)].

2. Another method of terminating a data cable is shown in Fig. 10.24(b). A Zener diode is used to clamp the reflected signal to ground and limit signal excursions above the desired input signal. This circuit is useful in the control of standing waves where not enough continuous power is available to supply a low terminating impedance and a series of high-speed single-polarity pulses is to be sent along a line. Figure 10.24(c) outlines the same technique except that the grounding diode is returned to a −0.7-volt line to ensure that the reflected wave on the line is clamped to a value approaching zero.

[Fig. 10.24 — Data-system terminations: (a) cable termination with terminating resistor; (b) cable termination with grounded diode; (c) cable termination with separate return line.]

3. The usefulness of a digital approach to instrumentation derives partly from the fact that a signal is either off or on, but all modern digital systems have varying degrees of sensitivity to noise pulses in the circuit. "Noise immunity" is usually defined as the minimum voltage difference between the highest 0-voltage level and the lowest 1-voltage level; it typically ranges between 1 and 10 volts. Because of this, millivolt or microvolt interference levels are of no consequence, and many of the elaborate shielding and guarding techniques described in this chapter are not needed for digital systems. Inputs to pulse-sensitive devices, such as some flip-flops, should nevertheless be protected by shielding or other means against spikes on the input lines; the rapid rise time of an induced noise spike may cause spurious triggering even though the spike is less than the specified d-c noise-immunity level.[28]
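The termination rules in item 1 can be illustrated with the standard transmission-line reflection coefficient, Γ = (ZL − Z0)/(ZL + Z0). This is textbook transmission-line theory applied to the 150-ohm figure quoted above; the function itself is an illustrative sketch, not part of the handbook.

```python
# Why matched termination matters: the voltage reflection coefficient
# of a line of characteristic impedance Z0 terminated in a load ZL is
# (ZL - Z0)/(ZL + Z0).  A matched load reflects nothing; an open end
# reflects the full pulse.  150 ohms is the text's rule-of-thumb
# impedance for ordinary wire.

def reflection_coefficient(z_load, z0=150.0):
    if z_load == float("inf"):          # open-circuit (unterminated) line
        return 1.0
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(150.0))          # matched: 0.0, no reflection
print(reflection_coefficient(float("inf")))   # open end: 1.0, full reflection
print(reflection_coefficient(50.0))           # 50-ohm load on 150-ohm line: -0.5
```

A nonzero coefficient means part of each pulse edge travels back down the line, which is exactly the standing-wave behavior the terminating resistor or clamping diode of Fig. 10.24 is meant to suppress.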

(a) Data-System Power and Ground Connection. In general, a small computer or other digital system may be connected to both power and ground by following standard electrical-code procedures. It is important, however, that the various cabinets of the system be connected together by a low-resistance bus, such as No. 4 AWG copper wire, and that the system be tied to a good earth ground. The steel frame of the building in which the computer is housed is a good ground; if this is not convenient, a large water pipe is also a good ground. The ground wire should be run along with the signal cables when it is used to interconnect different portions of the system. A good a-c ground for a signal return is important. As an example, the resistance of a bond strap 0.002 to 0.003 in. thick, 1 in. wide, and 1 to 5 in. long would be negligible at direct current but would be about 0.1 ohm at 10 MHz and 15 ohms at 1000 MHz. This relatively large change in impedance with frequency indicates there is no substitute for a short grounding connection with a large cross section and low self-inductance.
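The frequency dependence of a ground strap's impedance can be sketched by treating the strap as a small series inductance with reactance X = 2πfL. The 1.6-nH value below is an assumed effective inductance chosen so the trend resembles the figures quoted above; it is not a measured property of the strap described in the text.

```python
import math

# Rough sketch of why a ground strap's impedance climbs with frequency.
# Treating the strap as a small series inductance L, its reactance is
# X = 2*pi*f*L.  The 1.6-nH inductance is an assumed illustrative value.

def strap_reactance_ohms(freq_hz, inductance_h=1.6e-9):
    return 2.0 * math.pi * freq_hz * inductance_h

for f in (10e6, 100e6, 1000e6):
    print(f"{f / 1e6:6.0f} MHz: {strap_reactance_ohms(f):6.2f} ohms")
```

Reactance grows in direct proportion to frequency (skin effect and standing waves add further to the measured impedance), which is why only a short, wide, low-inductance connection makes a good high-frequency ground.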

The quality of the ground required depends on the type of system, since a self-contained system (one having no remote transducers or A/D converters) is more immune to noise than, for example, a system accepting thermocouple signals directly. For this reason all the interference-reduction ideas mentioned in this chapter should be applied to any system that has many small-signal circuits.

Nuclear Power Plant G

Type of neutron detector: BF3 and fission counters

Type of signal cable: Coaxial between detector and preamplifier; triaxial between preamplifier and amplifier

Location of pulse preamplifier: Top of neutron-detector instrument well

Location of pulse amplifier: In amplifier cabinet at control room

Distance between pulse amplifier and preamplifier: 185 ft

Distance between neutron detector and preamplifier: 25 ft

Method of grounding: A single-point grounding system is used. System ground is made at the amplifier cabinet. The signal cabling, preamplifier, and neutron detector are insulated and floated above ground, being grounded at the amplifier cabinet. The two shields in the triaxial cable are connected. Coaxial fittings are used for both triaxial and coaxial cables. The two shields are grounded at the amplifier cabinet. The signal ground in the amplifier is grounded at the amplifier cabinet. The amplifier cabinet is grounded to building ground through grounding cable and buses.

Operating problems and modifications: During the initial installation and preoperational testing, it was discovered that the start-up channels were subject to transient noise problems from several systems throughout the reactor plant. These noise problems were corrected at the source by replacing faulty relays and switches and by providing better shielding for fluorescent lighting. After several years of operation, an unusual noise problem showed up at this power reactor site. The reactor was shut down in preparation for refueling. Just prior to the start of the second fuel-loading program, r-f noise showed up on all the signal buses coming out of the containment sphere. The noise was of such magnitude that all operations were suspended. After several days of attempting to isolate the source of r-f noise, the breakers at the 2180-kv substation were opened and closed in a planned operation. The noise all but disappeared. By readjusting the discriminator threshold bias control on the pulse amplifier, the start-up channels were brought within safe operating condition again. A technical explanation has not been given as to what caused the severe r-f noise problem.

Quality Control at the Reactor Site

(a) Verification of Condition on Receipt. All nuclear instruments, associated panels, sensors, wire, and coaxial cable should be inspected by Receiving Inspection for signs of damage on receipt at a reactor site. Receiving documents should then be checked to verify that the requirements of the purchase requisition have been fulfilled. The purchase requisition will contain specifications, or refer to specifications, that are applicable to the purchased item and will also state whether vendor quality-assurance inspection and certification were required before shipment.

(b) Quality Checks During Installation.[34] During installation, quality checks of nuclear instrumentation should be made as follows:

1. Check and ensure that coaxial connectors are installed on cables per the manufacturer's specifications. Cleanliness is very important during connector installation to maintain a high insulation resistance.

2. Check and ensure that noncoaxial connectors are installed per the manufacturer's instructions. Items to watch are wire size, insulation removal, crimping, and pin-insertion tools.

3. Insulation resistance of coaxial cables should be measured after the coaxial connectors have been installed. A rule of thumb is that the numerical value of the insulation resistance (in ohms) should be 10 or more times the reciprocal of the lowest signal current (in amperes) that the cable will carry. Where coaxial cables are used to carry a-c signals (pulses, for example), insulation-resistance values of 10¹⁰ ohms are more than adequate and are easy to achieve.

4. High-voltage (hi-pot) tests of coaxial cables should be performed.

5. Check routing of field cables to ensure that there are no friction points where excessive wear could occur on cable or wire insulation.

6. Perform construction tests. These are functional tests performed with the equipment energized to verify that all field wiring is correctly installed. Any method for checking field wires, such as manual operation of relays and use of jumpers, is acceptable. However, care must be taken to identify and tag equipment, circuits, and systems that are to be energized and to isolate, as necessary, circuits that should not be energized.
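The insulation-resistance rule of thumb in item 3 is a one-line calculation: R ≥ 10/I, with R in ohms and I the smallest signal current in amperes. The sketch below simply encodes that rule; the 1-nA example current is an illustrative assumption.

```python
# Rule of thumb from item 3 above: the insulation resistance (ohms)
# should be at least 10 times the reciprocal of the lowest signal
# current (amperes) the cable will carry, i.e., R >= 10 / I.

def minimum_insulation_resistance(min_signal_amperes):
    return 10.0 / min_signal_amperes

# A d-c channel whose smallest signal is 1 nA needs at least 1e10 ohms:
print(minimum_insulation_resistance(1e-9))   # 1e10
```

For nanoampere-level d-c signals this gives the 10¹⁰-ohm figure quoted in the text, which is why that value is more than adequate for a-c (pulse) cables.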

Before installation, equipment (and even cables) may be assigned a quality-assurance number that can be used later as a quick guide to the applicable certification documents.

(c) Preoperational Check-Out Procedures. Preoperational tests are functional operating tests that are performed before a system (e.g., neutron-monitoring, control-rod-drive, or reactor protection system) and its associated instrumentation are put into operation, where the system is actually monitoring a process or performing a safety function. The purpose of these tests is to verify that instruments and systems function as designed and as specified in the applicable technical specifications.

Inputs to nuclear instruments that receive inputs from sensors when in actual operation may be simulated with a pulse generator, sine-wave generator, or current source as required. Trip points can be set, and the resultant functions initiated by trips (e.g., scram, rod block, and annunciation) can be checked with simulated inputs.

An acceptable preoperational check-out procedure must be detailed enough to check out every component, circuit, wire, and coaxial cable in the system covered by the procedure. It must also cover the check-out of any mechanical equipment associated with a system, e.g., in-core sensor retracting drives used with source- and intermediate-range instruments.

Field tests must also be made on in-core neutron sensors before they are put into service. The in-core sensors fall into three groups (see Chap. 3): pulse-counting type for source-range coverage, mean-square-voltage type for intermediate range, and direct-current type for power range. Field tests of neutron sensors are as follows:

1. Insulation-resistance tests to verify that no damage has occurred to the insulation and seals. In general, the insulation resistance should be greater than 10¹⁰ ohms.

2. Voltage-breakdown tests to verify that the filling gas has not escaped owing to a cracked or broken seal. Current in this test should be limited to approximately 10 μA to avoid possible damage to the insulating material.

Source tests of “dunking” chambers (fission chamber or proportional counter) that are to be used during fuel loading should be made after the fuel-loading source is placed in the reactor core. Curves of background count vs. discriminator setting must be made before the loading source is placed in the core. After the loading source has been placed in the core, but before fuel loading, discriminator curves and voltage-plateau curves should be run to determine the optimum discriminator set point and chamber operating voltage for each source-range channel. After the discriminator and voltage settings have been determined and set, a final check should be made to verify that the chambers are indeed seeing the neutron source. This final check can be made by raising and lowering the dunking chambers above and below the level of the source and verifying that the count-rate readout of the source-range channels decreases and increases accordingly. In addition, neutron pulses can be distinguished from background and gamma events by monitoring the source-range instrument input signal with an oscilloscope.
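Selecting the chamber operating voltage from a voltage-plateau curve amounts to finding the flattest region of the count-rate-vs-voltage data. The following sketch picks the point of smallest fractional slope; the sample data and the slope criterion are illustrative assumptions, not a prescribed procedure.

```python
def plateau_voltage(voltages, count_rates):
    """Return the voltage where the fractional change in count rate
    per volt is smallest, i.e., the middle of the plateau."""
    best_v, best_slope = None, float("inf")
    for i in range(1, len(voltages) - 1):
        dv = voltages[i + 1] - voltages[i - 1]
        dc = abs(count_rates[i + 1] - count_rates[i - 1])
        slope = dc / (count_rates[i] * dv)  # fractional change per volt
        if slope < best_slope:
            best_v, best_slope = voltages[i], slope
    return best_v

# Illustrative plateau data: counts rise, flatten, then climb again.
volts = [600, 700, 800, 900, 1000, 1100, 1200]
counts = [100, 900, 1000, 1010, 1020, 1500, 4000]
operating_voltage = plateau_voltage(volts, counts)  # flattest point
```

The optimum discriminator set point would be chosen the same way from the counts-vs-discriminator curve, in the region where background is rejected but the neutron count rate is still flat.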

Source-range in-core fission chambers should be source tested as the source-range instruments are changed from dunking-chamber inputs to the permanent in-core chambers. This changeover and the required tests are made after fuel loading has been completed and the large start-up sources have been placed in the core. The same tests that were performed with the dunking chambers should be repeated. If the source-range fission chambers are retractable, positive verification that they are seeing neutrons can be made by retracting the chambers and noting the decrease in count rate.

The field instrument engineer is responsible for verifying that all neutron-monitoring instruments have been calibrated before fuel loading, that preoperational tests on neutron-monitoring systems have been completed satisfactorily, and that documentation exists for verification of all tests and results.

(d) Field Feedback Reporting and Analysis. The field engineer must feed back information relative to the performance of instruments and systems for which he is responsible. The information should be included in reports to the home office.

Reports of equipment failure are particularly important. To help those who must evaluate the failure, the failure report should include:

1. Catalog and serial number of failed part

2. Description of the failed part

3. Mode of failure.

4. Operating status at time of failure.

5. Effect on system or subsystem, if known.

6. Date of failure and approximate total operating time before failure.

7. Corrective action taken.
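The seven report items above can be carried as a simple record, with a completeness check that tolerates an unknown system effect (item 5). The field names below are illustrative, not a standard reporting format.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class FailureReport:
    """One record per equipment failure, mirroring items 1-7 above."""
    catalog_and_serial_number: str
    description: str
    failure_mode: str
    operating_status_at_failure: str
    effect_on_system: Optional[str]  # item 5 is "if known"
    date_and_operating_time: str
    corrective_action: str

def is_complete(report):
    """All required items filled in; effect on system may be unknown."""
    return all(getattr(report, f.name) for f in fields(report)
               if f.name != "effect_on_system")
```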

Field engineers’ reports should be distributed to the responsible engineering groups for information and/or evaluation. For instance, if repeated failure of a particular component is observed at one or more field locations, a redesign of circuits or systems may be warranted. In some systems, depending on the effect of a component failure in that system, a single failure could make redesign mandatory to prevent recurrence.

In situations where corrective action is initiated by a field engineer and the action involves a redesign or a deviation from approved drawings, change information should be sent to the home office immediately for Engineering approval and drawing changes before the system is put back into operation (where it is performing its intended function). Approval by telephone may be adequate in some cases when followed up in writing.

Changes initiated by Engineering and performed by the field engineer should be reported to the home office as being completed once the change has been made and the instrument or system has been retested.

Analysis of feedback from the field and determination of corrective action is the responsibility of the appropriate component of Engineering. Field Engineering is responsible for carrying out the corrective action and documenting changes.

11-2.8 Summary

A total quality system that embodies design control, materials control, process control, and product control must be implemented to attain the reliability necessary for achieving design goals relative to the appropriate level of safe and trouble-free life while still maintaining competitive costs.

The requirements of the Atomic Energy Commission and the customer fix the minimum quality standards that must be incorporated into the design of nuclear instrumentation systems. The Quality Assurance organization must ensure that these standards are upheld throughout the procurement and manufacturing cycles by establishing appropriate controls at critical points, such as receiving inspection of raw materials, parts, and subassemblies; in-process inspection and subassembly testing; final systems test; and shipping inspection. Judicious selection of the points in the manufacturing cycle where tests or inspections are to be performed, as well as the selection of the correct type of quality information equipment and the generation of inspection and test instructions, is the job of the quality-control engineer.

The life cycle of a particular product can be thought of in terms of distinct phases: the preproduction phase, which includes design and procurement; the production phase, which includes manufacturing, testing, and packaging; and the postproduction phase, which includes shipping, customer installation and acceptance testing, and service life (particularly during the warranty period). A total quality-assurance program will ensure, with a high degree of confidence, that appropriate measures are implemented during each phase by all personnel involved with the product, from sales to customer installation and servicing. Therefore quality assurance should not be thought of as the inspection and testing operation that screens the good product from the bad; instead, it must be thought of as a company-wide program to ensure customer satisfaction with minimum cost to the company.

The quality-assurance program for a company involved in the design and manufacture of nuclear instrumentation must contain all the elements of a good total quality-assurance program. Criteria and standards promulgated by the AEC and ASME, such as Quality Assurance Criteria for Nuclear Power Plants (Appendix B to 10 CFR 50), Quality Assurance Program Requirements (RDT F 2-2T), Quality Assurance (NA4000 from Sec. III of the ASME Boiler and Pressure Vessel Code), and Quality Assurance Program Requirements for Nuclear Power Plants (ANSI-N45.2), all describe quality-assurance programs that, if properly implemented, will provide an excellent QA program for anyone in the nuclear industry.

Nevertheless, it must be noted and emphasized that there is no substitute for a high degree of technical competence in the personnel involved in implementing such a system. For example, a competent quality-control engi­neer in this industry needs to be versatile not only in the quality-control and statistical field but also in the fields of electronics, nondestructive testing, and nuclear technology—four very specialized fields.

10-5. RFI and Electromagnetic Shielding

An enclosure that has high conductivity and completely surrounds a piece of equipment forms an excellent shield against RFI radiation, provided it is grounded. A useful rule of thumb, however, is that below 2 to 3 MHz interference from one component to the next in a system is primarily electromagnetic (i.e., the coupling is by magnetic and electric fields), whereas above this frequency radiated (r-f) energy is the primary carrier of interference. This means that low-frequency shielding should be composed mostly of ferromagnetic materials to eliminate magnetic fields, and high-frequency shielding should have high conductivity since magnetic fields are not significant. Often, if magnetic materials, such as steel, are used at high frequencies, their ohmic resistance (compared to materials such as silver) causes potential differences, and the resulting electric fields are set up in shields around sensitive circuits, thereby nullifying some of the effectiveness of the shields. In the gray area (a few megahertz) composite shielding, such as copper-coated steel, is often used. These shields should be used wherever the adjacent equipment or components are sensitive to interference (e.g., coaxial shields applied on input leads and shields placed over any gaps in the case if either enclosed or external circuits are sensitive to r-f radiation).
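The rule of thumb above can be summarized in a small selection sketch. The 2- and 3-MHz band edges and the material examples follow the text, but the function itself is illustrative, not a design formula.

```python
def suggest_shield(frequency_hz):
    """Suggest a shield type for a given interference frequency,
    per the rule of thumb: magnetic coupling dominates below ~2 MHz,
    radiated r-f energy dominates above ~3 MHz."""
    if frequency_hz < 2.0e6:
        return "ferromagnetic (e.g., steel), to attenuate magnetic fields"
    if frequency_hz <= 3.0e6:
        return "composite (e.g., copper-coated steel), for the gray area"
    return "high-conductivity (e.g., copper or silver), for r-f shielding"
```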

Though shielding is effective in eliminating interference, additional suppression is sometimes added by filtering inputs, outputs, and power connections to eliminate noise riding in on those lines. This can be accomplished by placing feed-through capacitors or other filters on lines entering the shielded enclosure where the affected equipment is located and by using diode-capacitor isolation networks on power-supply leads where they enter the instrument circuit-board area within each individual piece of equipment. This ensures decoupling of equipment.

The power source for instrumentation equipment should be free from spikes, jitter, and poor regulation. Without proper filtering and regulation, any transients that occur will couple through power transformers in equipment (through interwinding capacitance) and cause difficulty in sensitive circuits. This problem is usually eliminated by a power-line filter, which may be no more than a 0.02-μF, 600-volt capacitor connected from each side of the line to ground.
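A quick calculation shows why such a capacitor works as a filter: its reactance, X_C = 1/(2πfC), is high at the 60-Hz line frequency (so it barely loads the line) but low at transient frequencies (so spikes are shunted to ground). The 1-MHz spike frequency below is an illustrative assumption.

```python
import math

C = 0.02e-6  # the 0.02-uF line-to-ground capacitor, in farads

def reactance_ohms(frequency_hz, capacitance_f=C):
    """Capacitive reactance X_C = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * frequency_hz * capacitance_f)

x_line = reactance_ohms(60.0)     # ~133 kilohms at line frequency
x_spike = reactance_ohms(1.0e6)   # ~8 ohms for a 1-MHz transient
```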

In summary, interference can, and usually does, enter equipment wherever there is an unprotected entryway. It is up to the equipment installer to make sure that no signals other than those desired by the circuit designer enter the equipment. To accomplish this task, he must be aware of the many techniques available for suppression of interference.