Frequency Response Testing in Nuclear Reactors

Mathematical Description of System Dynamics

Differential equations are used in the modeling of the dynamics of physical systems. If space dependence is not included in the equations that describe a process, then ordinary differential equations are used, and the model is said to be a lumped-parameter model. If the space dependence is included in the equations, then partial differential equations are required, and the model is said to be a distributed-parameter model. Of course a distributed process may be handled by using a number of interconnected lumped models for portions of the system. This is similar to a finite difference approximation for solving differential equations and is usually called the nodal approach.

Another important distinction between mathematical models for a system is whether they are linear or nonlinear. A linear model is one in which the dependent variables and their derivatives appear only to the first power and not as factors in products of dependent variables. A model that violates this is a nonlinear model.

We consider the question of linearity versus nonlinearity early in this book because the testing methods presented in subsequent sections require linearity of the processes being studied. The user must appreciate the implications of this requirement in regard to restrictions on test conditions and in regard to limitations on interpretation of the results.

Example 2.1.1. The following differential equations are linear:

(1) dx/dt = 2x + 4

(2) d²x/dt² = 3(dx/dt) − 2x + 6t

(3) dx/dt = 3tx

The first two are constant-coefficient linear equations and the third is a variable-coefficient linear equation. The following equations are nonlinear:

(1) dx/dt = x + (1/x)

(2) dx/dt = x² + 2

(3) (dx/dt)² = 2x

(4) dx/dt = x + eˣ ■

Simple, well-defined techniques are available for analyzing linear systems. On the other hand, it is very difficult to analyze nonlinear systems. This motivates the attempt to approximate nonlinear models with linear models that are valid over some range of operation for the system being modeled. This process, called linearization, proceeds as follows:

1. Represent each dependent variable as an equilibrium value plus a deviation from equilibrium: x = x₀ + δx.

2. Substitute this form for each term in the equations.

3. If a function of a dependent variable occurs, write the function as a power series in δx.

4. Eliminate all terms that contain products of deviations from equilibrium. This is justified if the model is used only for "small" perturbations. Terms with products of small quantities are smaller than terms with these small quantities raised to the first power.

5. Identify combinations of terms that define the equilibrium condition. Since the time derivatives are all zero at equilibrium, these combinations of terms are identically zero. The remaining terms make up the linearized model.

An example is useful for illustrating this procedure.

Example 2.1.2. Linearize the following coupled set of differential equations.

$dx_1/dt = 3x_1 + x_1 x_2, \qquad dx_2/dt = x_2^2 + \exp(x_1)$

Step 1. Let

$x_1 = x_{10} + \delta x_1$ and $x_2 = x_{20} + \delta x_2$

Step 2.

$d(\delta x_1)/dt = 3(x_{10} + \delta x_1) + (x_{10} + \delta x_1)(x_{20} + \delta x_2)$

$d(\delta x_2)/dt = (x_{20} + \delta x_2)^2 + \exp(x_{10} + \delta x_1)$

or

$d(\delta x_1)/dt = 3x_{10} + x_{10}x_{20} + 3\,\delta x_1 + x_{10}\,\delta x_2 + x_{20}\,\delta x_1 + \delta x_1\,\delta x_2$

$d(\delta x_2)/dt = x_{20}^2 + 2x_{20}\,\delta x_2 + (\delta x_2)^2 + (\exp x_{10})(\exp \delta x_1)$

Step 3. Substitute

$\exp(\delta x_1) = 1 + \delta x_1 + (\delta x_1)^2/2! + \cdots$

to obtain

$d(\delta x_1)/dt = 3x_{10} + x_{10}x_{20} + 3\,\delta x_1 + x_{10}\,\delta x_2 + x_{20}\,\delta x_1 + \delta x_1\,\delta x_2$

$d(\delta x_2)/dt = x_{20}^2 + 2x_{20}\,\delta x_2 + (\delta x_2)^2 + (\exp x_{10})[1 + \delta x_1 + (\delta x_1)^2/2! + \cdots]$

Step 4. Eliminate all terms containing products of deviations to obtain

$d(\delta x_1)/dt = 3x_{10} + x_{10}x_{20} + 3\,\delta x_1 + x_{10}\,\delta x_2 + x_{20}\,\delta x_1$

$d(\delta x_2)/dt = x_{20}^2 + (\exp x_{10}) + 2x_{20}\,\delta x_2 + (\exp x_{10})\,\delta x_1$

Step 5. The combinations of terms that define equilibrium may be found by setting the derivatives equal to zero in the original equations.

$(dx_1/dt)_0 = 3x_{10} + x_{10}x_{20} = 0, \qquad (dx_2/dt)_0 = x_{20}^2 + \exp x_{10} = 0$

The final linearized equations are

$d(\delta x_1)/dt = (3 + x_{20})\,\delta x_1 + x_{10}\,\delta x_2, \qquad d(\delta x_2)/dt = (\exp x_{10})\,\delta x_1 + 2x_{20}\,\delta x_2$

The important things to note are that systematic procedures are available for developing linearized models for nonlinear systems, but that the validity of these linearized models is assured only for some “small” region around an equilibrium point. ■
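The procedure can also be checked mechanically: the coefficients of the linearized model are the entries of the Jacobian matrix of the right-hand sides, evaluated at the equilibrium point. A minimal sketch in Python with sympy (our construction, not from the original text), applied to Example 2.1.2:

```python
import sympy as sp

x1, x2, x10, x20 = sp.symbols('x1 x2 x10 x20')

# Right-hand sides of Example 2.1.2
f = sp.Matrix([3*x1 + x1*x2,           # dx1/dt
               x2**2 + sp.exp(x1)])    # dx2/dt

# Coefficients of the linearized model = Jacobian at equilibrium
J = f.jacobian([x1, x2]).subs({x1: x10, x2: x20})
print(J)   # Matrix([[x20 + 3, x10], [exp(x10), 2*x20]])
```

The entries reproduce the coefficients in the final linearized equations of the example.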

It is also often convenient to write linear equations with the dependent variables expressed as deviations from equilibrium. In this case, the equations for δx are identical with the equations for x, but the initial conditions may be different. If the transient starts from an equilibrium point, the initial values for the δx variables are all zero.

Example 2.1.3. Express the following coupled set of differential equations in terms of deviations from equilibrium:

$dx_1/dt = 3x_1 + 2x_2, \qquad dx_2/dt = x_1 + 4x_2$

Because of the linearity of the equations, we may write

$d(\delta x_1)/dt = 3\,\delta x_1 + 2\,\delta x_2, \qquad d(\delta x_2)/dt = \delta x_1 + 4\,\delta x_2$

with all initial conditions equal to zero. ■

An important property of linear systems is the property of superposition. This property may be stated as follows:

If the output of a system is $y_i$ when the input is $I_i$, then the output y when N inputs are applied simultaneously is

$y = \sum_{i=1}^{N} y_i$  (2.1.1)

Spectral Windows

Spectral windows have an established role in the analysis of random noise data (11-13). They are obtained by averaging power spectrum results at several (usually three) adjacent frequencies in order to improve over the (sin x)/x filtering that is inherent in Fourier analysis (see Section 2.11). A common spectral window is the Hanning window, which is obtained as follows:

$\phi'(\omega_i) = \tfrac{1}{4}\phi(\omega_{i-1}) + \tfrac{1}{2}\phi(\omega_i) + \tfrac{1}{4}\phi(\omega_{i+1})$  (4.10.1)

where $\phi'(\omega_i)$ is the modified power spectrum estimate at frequency $\omega_i$, and $\phi(\omega_i)$ the original power spectrum estimate at frequency $\omega_i$. The effective filtering inherent in the original estimate and in the modified estimate is shown in Fig. 4.15. It is evident that the use of a Hanning window decreases the contribution from frequencies far from the analysis frequency, but increases the contribution of nearby frequencies.

[Fig. 4.15. Hanning and Hamming spectral windows. Abscissa: (harmonic number) − (analysis harmonic number).]

Another common spectral window is the Hamming window. It is given by

$\phi'(\omega_i) = 0.27\phi(\omega_{i-1}) + 0.46\phi(\omega_i) + 0.27\phi(\omega_{i+1})$  (4.10.2)

The filtering obtained with the Hamming window also appears in Fig. 4.15. We see that the Hamming window has a larger central lobe than the Hanning window, but smaller side lobes.

The advantages of spectral windows in analyzing random noise data are obvious. The noise has a continuous spectrum, and the analysis must pick out the information in the desired frequency range and eliminate information from other frequencies.

The situation with periodic data is quite different. For the case of a perfect, noise-free periodic signal, Fourier analysis at a harmonic frequency automatically places the other harmonics at null points in the filter, and spectral windows are totally unnecessary. However, spectral windows may be helpful in nonideal situations where background noise is a problem or where analysis of a nonintegral number of periods is unavoidable (see Section 4.8). In some cases, narrow-band noise from sources such as 60-Hz pickup or from mechanical vibrations can be a problem. These signals can have a large amplitude relative to the periodic test signals. It may be advantageous to reduce the side lobes of the effective filter in Fourier analysis to reduce the effect of narrow-band noise. Spectral windows can help when a nonintegral number of periods is unavoidable because the resulting filtering effect weights frequencies rather evenly near the analysis frequency and reduces the weights of frequencies far from the analysis frequency.

Spectral windows may be used in tests using multiple periods of a periodic signal. For example, let us assume that two periods of the input are used. If the period is T, then the data record length is 2T, and the harmonics based on the length of the data record are at integer multiples of 1/(2T). We can average the Fourier coefficients using formulas similar to the Hanning or Hamming formulas:

$F'\{\omega_k\} = 0.5F\{\omega_{k-1}\} + F\{\omega_k\} + 0.5F\{\omega_{k+1}\}$  (4.10.3)

[Fig. 4.16. Hanning and Hamming windows for two periods of periodic data. Abscissa: (harmonic number) − (analysis harmonic number); marks denote the location of harmonics.]

or

$F'\{\omega_k\} = 0.587F\{\omega_{k-1}\} + F\{\omega_k\} + 0.587F\{\omega_{k+1}\}$  (4.10.4)

The resulting filtering effect is shown in Fig. 4.16. Since the Fourier coefficients at $\omega_{k-1}$ and $\omega_{k+1}$ are zero, $F'\{\omega_k\}$ is identical to $F\{\omega_k\}$ if there is no noise.

This type of averaging to achieve better filtering can be extended if more than two periods are analyzed. For example, if three periods are used, then the following five-frequency averaging procedure minimizes the contribution from the side lobes of the filter.

$F'\{\omega_k\} = 0.095F\{\omega_{k-2}\} + 0.595F\{\omega_{k-1}\} + F\{\omega_k\} + 0.595F\{\omega_{k+1}\} + 0.095F\{\omega_{k+2}\}$  (4.10.5)

The filtering associated with this procedure is shown in Fig. 4.17. Clearly, it does a better job than Hanning or Hamming in suppressing side lobes of the filter. The price paid for this is a broadening of the central lobe, but this should not be a serious problem in most applications with periodic signals.

[Fig. 4.17. Five-frequency spectral windows. Abscissa: (harmonic number) − (analysis harmonic number).]

A practical procedure for using the five-frequency window is:

1. Select at least three periods of data for analysis.

2. Fourier analyze at a harmonic that is nonzero (a harmonic based on the period of the signal T and also on the total record length).

3. Analyze at two harmonics on each side of the nonzero harmonic. Since these are harmonics based on the total record length, the Fourier analysis is allowable, but since these frequencies are not harmonics of the original signal, the Fourier coefficients should be zero (except for the effect of noise).

4. Average the calculated Fourier coefficients as follows:

$F'\{\omega_k\} = 0.095F\{\omega_{k-2}\} + 0.595F\{\omega_{k-1}\} + F\{\omega_k\} + 0.595F\{\omega_{k+1}\} + 0.095F\{\omega_{k+2}\}$  (4.10.6)

This is done independently for input and output signals, and the frequency response is obtained by forming the ratio of the modified output transform to the modified input transform.
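A minimal numpy sketch of this procedure (function and argument names are ours; it assumes the record holds an integer number of whole periods, at least three, so that the k ± 2 neighbors exist):

```python
import numpy as np

def five_freq_coefficient(x, n_periods, k_signal):
    """Five-frequency-windowed Fourier coefficient, Eq. (4.10.6).
    x holds n_periods (>= 3) whole periods of the signal; k_signal is a
    harmonic number based on the signal period T, so the corresponding
    harmonic of the total record length is k = n_periods * k_signal."""
    F = np.fft.fft(x) / len(x)          # record-length harmonics
    k = n_periods * k_signal            # the nonzero harmonic (step 2)
    # Steps 3-4: the k +/- 1 and k +/- 2 record-length harmonics are zero
    # for a noise-free signal; averaging them in suppresses side lobes.
    return (0.095 * (F[k - 2] + F[k + 2])
            + 0.595 * (F[k - 1] + F[k + 1])
            + F[k])

# Frequency response at harmonic k_signal: ratio of the windowed output
# coefficient to the windowed input coefficient.
# G = five_freq_coefficient(out, 3, k) / five_freq_coefficient(inp, 3, k)
```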

Spectral windows can also be applied to the data before it is Fourier analyzed. In this case, each data point is multiplied by a factor before Fourier transformation. For example, the functions in the table can be multiplied by the data records to give the desired spectral windows, where $C_1$, $C_2$, and $C_3$ are constants that determine the area under the filter. They are immaterial in frequency response tests where identical windows are applied to input and output signals.

Spectral window    Function to be multiplied by data record
Hanning            $C_1[1 - \cos(2\pi t/T)]$
Hamming            $C_2[1 - 0.8519\cos(2\pi t/T)]$
Five-frequency     $C_3[1 - 1.19\cos(2\pi t/T) + 0.19\cos(4\pi t/T)]$

It may appear that it is simpler to apply the windows to the time-domain data. This may not be true because it requires N multiplications for N data points, and each of the factors for the window function must be calculated or stored.

It may be noted in the time-domain representations of the spectral windows that they force the function to have the same value at the start of a period (t = 0) and at the end of a period (t = T). This may be used to eliminate some of the problem with drift.
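For illustration, a sketch of the time-domain alternative using the tabulated functions (our code; the constants C are set to 1 because they cancel when identical windows are applied to input and output):

```python
import numpy as np

def apply_window(x, kind="hanning"):
    """Multiply a data record by one of the tabulated time-domain windows
    (constants C1, C2, C3 taken as 1). Note each window has the same value
    at t = 0 and t = T, which helps with drift."""
    u = np.arange(len(x)) / len(x)                 # t/T across the record
    if kind == "hanning":
        w = 1.0 - np.cos(2.0 * np.pi * u)
    elif kind == "hamming":
        w = 1.0 - 0.8519 * np.cos(2.0 * np.pi * u)
    elif kind == "five-frequency":
        w = 1.0 - 1.19 * np.cos(2.0 * np.pi * u) + 0.19 * np.cos(4.0 * np.pi * u)
    else:
        raise ValueError(kind)
    return x * w
```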

4.11. Correlation Functions

The cross-correlation function between a periodic input I(t) and the resulting output O(t) is

$C_{12}(\tau) = \frac{1}{T}\int_{-T/2}^{T/2} I(t)\,O(t + \tau)\,dt$  (4.11.1)

In Section 2.7, it was shown that for periodic signals

$F\{C_{12}(\tau)\}_{\omega_k} = C_{-k}\,D_k$  (4.11.2)

If $C_k$ is the Fourier transform of the input, and $D_k$ is the Fourier transform of the output, then

$D_k = G(j\omega_k)\,C_k$  (4.11.3)

where $G(j\omega_k)$ is the system frequency response at frequency $\omega_k$, and

$F\{C_{12}(\tau)\}_{\omega_k} = C_k C_{-k}\,G(j\omega_k)$  (4.11.4)

Then

$C_{12}(\tau) = F^{-1}\{C_k C_{-k}\,G(j\omega_k)\}$  (4.11.5)

We may use the convolution theorem to interpret this. The result is

$C_{12}(\tau) = \int_0^\infty h(p)\,C_{11}(\tau - p)\,dp$  (4.11.6)

where $h(p) = F^{-1}\{G(j\omega)\}$, and $C_{11}(p)$ is the autocorrelation function of the input (the inverse Fourier transform of the input power spectrum). If the input signal had an autocorrelation function that was a delta function, then the result would be

$C_{12}(\tau) = h(\tau)$  (4.11.7)

This indicates that the impulse response could be obtained by determining the cross-correlation function between input and output for a test that used an input signal whose autocorrelation function was a delta function. Such a signal would have a power spectrum that was constant over all frequencies.

No periodic signal can have this property, but several useful signals have autocorrelation functions that approximate delta functions and power spectra that are quite flat over a broad frequency range. For example, the PRBS (see Fig. 3.3), the PRTS (see Fig. 3.9) and the n sequence (see Fig. 3.8) all have autocorrelation functions with a series of spikes. These spikes become sharper as the number of bits in the sequence increases, giving a better approximation to a delta function. For these signals,

$C_{12}(\tau) \cong h(\tau)$  (4.11.8)

The approximation is due to the departure of the autocorrelation function from a true delta function.

It is possible to obtain the response of the system to any input using the convolution integral if the impulse response is known. A response of particular interest is the step response. This may be obtained by integrating the impulse response. This approach may be preferred over a simple test involving a step input because greater accuracy is possible. This is because an impulse response (and the resulting step response) is obtained from analysis of multiple periods of data in a way that discriminates against noise errors. There is no way to achieve a comparable enhancement of the desired step response in a direct step response test.
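A sketch of this route (our code, not from the original text): estimate $C_{12}$ over whole periods by the FFT, read it as the impulse response per Eq. (4.11.8), and integrate for the step response. The normalization by the autocorrelation spike area is an assumption that holds for a PRBS-like input sampled once per bit:

```python
import numpy as np

def impulse_and_step_response(inp, out, dt):
    """Cross-correlate input and output over an integer number of periods,
    treat C12(tau) as the impulse response h(tau) per Eq. (4.11.8), and
    integrate h to obtain the step response. Assumes the input
    autocorrelation approximates a delta function whose spike has area
    ~ mean(inp**2) * dt (true for a long PRBS sampled once per bit)."""
    n = len(inp)
    Ik = np.fft.fft(inp)
    Ok = np.fft.fft(out)
    c12 = np.fft.ifft(np.conj(Ik) * Ok).real / n   # C12(tau), Eq. (4.11.1)
    h = c12 / (np.mean(inp**2) * dt)               # impulse response estimate
    step = np.cumsum(h) * dt                       # step response by integration
    return h, step
```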

A great deal of work has been devoted to impulse response measurements using input signals having impulse-like autocorrelation functions. Techniques have been developed (14) that permit a correction of the results to account for the deviation of the autocorrelation function from true delta-function behavior. Since the emphasis in this book is on frequency response measurements, these techniques will not be described here. Also, the reader should note that there is no need for corrections in the frequency domain.

Other Reactor Types†

† See the literature (3-52).

Previous sections contain discussions of all of the reactor types with current or near-term importance as central-station power plants. Tests have also been performed on research reactors, isotope production reactors, nuclear rocket engines, and prototypes of potential advanced types of central station power reactors. Many of these reactors are one-of-a-kind, and testing experiences with them generally are interesting more for what they tell about testing procedures than for what they tell about the dynamics of these reactors.

A great deal of useful experience has been obtained in these tests. This experience has had an influence on the development of the technology whose status is described in other portions of this book, but a description of the individual tests does not seem warranted. Instead, a rather complete bibliography is included to serve the interested reader.

8.4. Conclusions

A great deal of theoretical and practical experience with frequency response testing in nuclear reactors has been accumulated. This experience has demonstrated that methods for planning, performing, and interpreting these tests are suitably developed for routine use in power reactors.


[5] S. M. Shinners, Control System Design. Wiley, New York, 1964.

Nonlinear Effects

The usual purpose of a frequency response test is to measure the linear dynamic response of the system. The tester usually wishes to make the input and output signals as large as possible in order to maximize the signal-to-noise ratio. However, the allowable maximum signal amplitude is limited by two considerations: the limits imposed by operational restrictions (maximum temperature, pressure, etc.) and the possible influence of nonlinear effects. In most nuclear reactor applications, the first restriction will dominate, but the nonlinear contamination problem may influence some tests. This section outlines current knowledge on nonlinear effects and how to minimize their influence.

It has been shown (6) that the output of a wide class of nonlinear systems may be given by the Volterra functional expansion:

$\delta O(t) = \int_0^\infty h_1(\tau)\,\delta I(t - \tau)\,d\tau + \int_0^\infty\!\!\int_0^\infty h_2(\tau_1, \tau_2)\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,d\tau_1\,d\tau_2 + \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty h_3(\tau_1, \tau_2, \tau_3)\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,\delta I(t - \tau_3)\,d\tau_1\,d\tau_2\,d\tau_3 + \cdots$  (2.13.1)

The first term on the right is the linear part ($h_1$ is the impulse response), and all other terms represent nonlinear effects. The kernels $h_1, h_2, h_3, \ldots$ constitute a complete representation of the system dynamics.

Equation (2.13.1) may be Fourier transformed to determine the influence of nonlinearities in a frequency response test. The Fourier transform of the first nonlinear term is

$\frac{1}{T}\int_0^T e^{-j\omega t}\int_0^\infty\!\!\int_0^\infty h_2(\tau_1, \tau_2)\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,d\tau_1\,d\tau_2\,dt$  (2.13.2)

Interchange the order of integration to give

$\frac{1}{T}\int_0^\infty\!\!\int_0^\infty h_2(\tau_1, \tau_2)\left[\int_0^T e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt\right]d\tau_1\,d\tau_2$  (2.13.3)
The term within the square brackets may be written as follows:

$\int_0^T e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt = \int_0^{T/2} e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt + \int_{T/2}^{T} e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt$  (2.13.4)

For an antisymmetric signal,

$\delta I(t) = -\delta I(t - T/2)$  (2.13.5)

Then we may write

$\int_{T/2}^{T} e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt = \int_{T/2}^{T} e^{-j\omega t}\,\delta I[t - \tau_1 - (T/2)]\,\delta I[t - \tau_2 - (T/2)]\,dt$  (2.13.6)

or, letting p = t − T/2,

$\int_{T/2}^{T} e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt = e^{-j\omega T/2}\int_0^{T/2} e^{-j\omega p}\,\delta I(p - \tau_1)\,\delta I(p - \tau_2)\,dp$  (2.13.7)

This may be substituted into Eq. (2.13.4) to give

$\int_0^{T} e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt = (1 + e^{-j\omega T/2})\int_0^{T/2} e^{-j\omega t}\,\delta I(t - \tau_1)\,\delta I(t - \tau_2)\,dt$  (2.13.8)

The factor $1 + e^{-j\omega T/2}$ may be written

$1 + \cos(\omega T/2) - j\sin(\omega T/2)$

Since $\omega T/2 = (2k\pi/T)(T/2) = k\pi$ and k is odd for an antisymmetric signal, we obtain

$1 + e^{-j\omega T/2} = 1 + \cos k\pi - j\sin k\pi = 0$

Therefore, the first nonlinear term is identically zero if the input signal is antisymmetric. A similar analysis shows that all nonlinear terms containing even-numbered kernels are identically zero and that the terms containing odd-numbered kernels are not identically zero. Thus, half of the nonlinear terms can be eliminated by using an antisymmetric signal for frequency response testing. This will result in decreased nonlinear contamination unless the selected antisymmetric signal causes the remaining (odd-numbered) nonlinear terms to increase enough to overcome the reduction in nonlinear effects caused by elimination of the even-numbered terms. Limited practical experience (5) gave results in which the use of antisymmetric signals gave reduced nonlinear contamination, indicating that the elimination of the even-numbered terms was sufficient in that case.
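The even-kernel cancellation is easy to see numerically. In this sketch (our construction) a signal is made antisymmetric by negating its first half period; squaring it mimics a second-order (even-kernel) term, and the squared signal has no content at the odd harmonics, where the antisymmetric test signal carries all of its power:

```python
import numpy as np

rng = np.random.default_rng(1)
half = rng.normal(size=64)                 # arbitrary half-period waveform
x = np.concatenate([half, -half])          # antisymmetric: x(t) = -x(t - T/2)

X = np.fft.rfft(x)                         # spectrum of the test signal
X2 = np.fft.rfft(x * x)                    # stand-in for an even-kernel term
odd = np.arange(1, 65, 2)                  # odd harmonics of 1/T

print(np.abs(X[odd]).max())                # O(1): the signal power is here
print(np.abs(X2[odd]).max())               # ~1e-14: even-kernel output is not
```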

Construction of Empirical Models

This is the lowest level of system identification. The idea is to develop a transfer function that gives frequency response results that are a good approximation to the measured frequency response. Usually, there is little use made of theoretical models in constructing these transfer functions. One simply uses the experimentally measured gains, phases, and break frequencies to formulate a model using the principles outlined in Section 2.4. For example, suppose that the measured system frequency response is as shown in Fig. 6.3. The break frequencies are

Upward breaks (zeros): ω = 0
Downward breaks (poles): ω = 0.1, ω = 10

The zero-frequency gain is zero. Thus we may construct the following model for the system:

$G = s/[(s + 0.1)(s + 10)]$

Of course, there are systematic computer techniques (generally based on minimizing a squared error) for doing this too.
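As an illustration (our script, using the break frequencies read off above), evaluating G(jω) confirms that the constructed model reproduces the expected Bode behavior:

```python
import numpy as np

def G(s):
    # Empirical model: a zero at s = 0, poles at 0.1 and 10 rad/sec
    return s / ((s + 0.1) * (s + 10.0))

for w in (0.001, 0.01, 0.1, 1.0, 10.0, 100.0):
    g = G(1j * w)
    print(f"w = {w:8.3f} rad/sec   gain = {20*np.log10(abs(g)):7.2f} dB   "
          f"phase = {np.degrees(np.angle(g)):6.1f} deg")

# The gain rises 20 dB/decade below 0.1 rad/sec (the upward break at w = 0),
# is flat between the poles at 0.1 and 10, and falls off above 10 rad/sec.
```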

Models constructed in this way may be used to optimize control systems and to predict the system response to various perturbations. Also, if the poles or zeros are simply related to physical quantities, this procedure may be used to identify them.

In nuclear reactor applications the objective is often to determine the feedback frequency response, since the uncertainty in the zero-power frequency response is usually much smaller than the uncertainty in the feedback frequency response. The results from a frequency response test (power/reactivity) on a power reactor can be analyzed to yield the feedback frequency response. From Eq. (6.1.2), we see that the feedback transfer function H may be obtained as follows:

$H = (1/G) - (1/G_0)$  (6.2.1)

where $G_0$ is the zero-power transfer function.

This approach has been used for several reactors to determine H, and transfer functions have been fitted to the measured H to use for representing feedback effects in a system dynamic model.
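With the measured and zero-power frequency responses tabulated at the same test frequencies, Eq. (6.2.1) is a pointwise complex computation; a sketch (array and function names are ours):

```python
import numpy as np

def feedback_frequency_response(G_measured, G_zero_power):
    """Eq. (6.2.1) applied pointwise: H(jw) = 1/G(jw) - 1/G0(jw),
    for complex frequency-response values at the test frequencies."""
    G = np.asarray(G_measured, dtype=complex)
    G0 = np.asarray(G_zero_power, dtype=complex)
    return 1.0 / G - 1.0 / G0

# H = feedback_frequency_response(G_test, G0_zero_power)
# A transfer function fitted to H then represents the feedback effects.
```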

Effects of Signal Imperfections

All of the characteristics of the signals described in previous sections of this chapter are based on signals that switch instantaneously from one level to another. Actual input hardware will give signals that require a finite time to move from one level to another. The actual transition will usually be in the form of a ramp, an exponential, or a staircase as shown in Fig. 3.19. The ramp or exponential transition will apply for inputs in which the drive is continuous, such as a motor-driven, rack-and-pinion control-rod drive. The staircase transition will apply for discontinuous drives such as a magnetic jack or a lead screw with a stepping motor (see Chapter 6).

The finite transition time will cause the power spectrum of the system input to differ from the ideal spectrum. Since the actual input will usually be measured in a test, the actual spectrum will be obtained in the data analysis and the finite transition time will be accounted for. However, the effect of finite transition time should influence the planning of a test. This planning includes the selection of an input signal that contains sufficient power in selected frequencies to give accurate results. Finite transition times will cause a reduction in available signal power at all of the frequencies. The selection of a signal with adequate power in measurement frequencies should be based on the power spectrum that will actually be achieved, rather than the ideal spectrum.

[Fig. 3.19. Ramp, exponential, and staircase transitions.]

The loss of signal power due to finite transition times has been analyzed for the PRBS or the n sequence with ramp and staircase transitions. The results are shown below.

(1) Ramp Transition. Figure 3.20 shows a pulse with an instantaneous transition and one with a ramp transition that takes τ seconds. The pulse is a member of the total pulse chain that starts at time P. The pulse is L seconds long. The Fourier transform of the finite-transition pulse can be obtained as follows:

$F'(j\omega) = \int f'(t)\,e^{-j\omega t}\,dt$  (3.9.1)

where $F'(j\omega)$ is the Fourier transform of the finite-transition pulse $f'(t)$. The result is

$F'(j\omega) = F(j\omega) + \left[\frac{1 - e^{-j\omega\tau}}{j\omega\tau} - 1\right]F(j\omega)$  (3.9.2)

where $F(j\omega)$ is the transform of the instantaneous-transition pulse.

The second term is the error due to the finite transition. Each pulse in the pulse chain will have such an error, and the total error in the Fourier transform of the pulse chain is the sum of the errors at each transition. The fractional reduction in the power spectrum due to ramp transitions of the PRBS and n-sequence signals appears as the dashed lines in Fig. 3.21. The fractional loss is defined as follows:

Fractional loss = (P − P′)/P

where P is the power spectrum of the perfect signal, and P′ the power spectrum of the finite-transition-time signal. The loss is given as a function of the normalized harmonic number k/Z, where k is the harmonic number, and Z the number of bits in the sequence.

(2) Staircase Transition. Figure 3.22 shows a pulse with an instantaneous transition and one with a staircase transition. The staircase is characterized by the number of steps N, the step duration τ (expressed as a fraction of a bit duration), and the rise time Nτ. The Fourier transform of this pulse can be written as a sum over the N steps (Eq. (3.9.3)); carrying out the summation gives the transform of the staircase-transition pulse (Eq. (3.9.4)).
The solid lines in Fig. 3.21 show fractional reductions in the power spectrum due to staircase transitions for PRBS and n sequences. As an example of how to use this information, let us consider the loss in magnitude and in power for the 56th harmonic (the half-power harmonic) of a 127-bit PRBS. In this case, k/Z = 0.44. If the transition is a ramp with a rise time equal to 0.6 of the bit time, then Fig. 3.21 indicates that the fractional loss in power is 0.21.

[Fig. 3.22. Instantaneous and staircase transitions.]

If the transition had been a staircase with two steps, the fractional power loss would have been 0.16. Also, any staircase transition with more than two steps would have a fractional power loss that lies between the value for the ramp transition and the value for the two-step staircase transition. Thus the solid and dashed lines in Fig. 3.21 give the range on the fractional energy loss for staircase transitions.
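Under the trapezoidal-pulse reading of Eq. (3.9.2), in which ramp transitions multiply the ideal spectrum by sin(ωτ/2)/(ωτ/2), the fractional power loss follows in a few lines (our sketch, under that stated assumption); it reproduces the 0.21 value quoted above for the 127-bit PRBS:

```python
import numpy as np

def ramp_fractional_power_loss(k, Z, rise_fraction):
    """(P - P')/P at harmonic k of a Z-bit binary sequence whose level
    transitions are ramps lasting rise_fraction of a bit time.
    Assumes the trapezoidal-pulse model: the ideal spectrum is multiplied
    by sin(x)/x with x = pi * (k/Z) * rise_fraction."""
    x = np.pi * (k / Z) * rise_fraction
    return 1.0 - (np.sin(x) / x) ** 2

print(ramp_fractional_power_loss(56, 127, 0.6))   # ~0.21 (half-power harmonic)
```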

Another imperfection encountered in practical measurements is that the input hardware may cause the transition from one input level to another to have a shape that depends on the direction of travel. It has been found that this nonreversible transition can cause a pair of small spikes in the input signal autocorrelation function (18,23). This gives a ripple in the power spectrum of the signal. Neither of these effects is significant in determining the success of a measurement. However, this behavior might create concern until the cause is understood.

3.3. Summary

This chapter has presented all of the test signals of current practical importance. The MFBS signal is capable of providing the most accurate results in the shortest time. The input waveform can be implemented as easily as that of any other periodic, binary signal. The only disadvantage is the need to generate the signal off-line, load it into a signal storage device, and play it into the system from this device. The PRBS or n sequence may be obtained from an easily constructed device that contains the logical elements needed to form the sequence. Since the PRTS offers no real benefits over other signals and is more difficult to use because of the three input levels, it will probably see little use in the future.

The nonperiodic pulses and steps suffer from signal-to-noise ratio problems, but should be widely used for preliminary tests and for rough checks between more accurate tests.

Effects of signal imperfections have been determined to allow realistic estimates of the spectral characteristics in pretest planning.

Rack and Pinion

The rack and pinion CRDM requires a mechanical or electrical penetration of the pressure boundary. A motor is used to turn a pinion gear that engages the teeth in a movable rack. Unlike the magnetic jack and roller nut CRDMs, the rack and pinion CRDM allows continuous adjustment of rod position. The maximum rod speed is restricted to low enough values to avoid undesirable transients due to inadvertent continued rod withdrawals. A typical rod speed is 0.4 in./sec.

Laplace Transforms*

Laplace transforms have a key role in dynamic system analysis and in dynamic testing. The Laplace transform is defined as follows:

$F(s) = L\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt$  (2.2.1)

where f(t) is some function of t, F(s) the Laplace transform of f(t), s a parameter [it is not necessary to specify a value for s, but there must be some value of s that makes the integral in Eq. (2.2.1) converge], and L the Laplace transform operator. This simple definition permits the development of a table of Laplace transforms of functions and operators. The process of determining an f(t) whose Laplace transform is F(s) is called inversion of the Laplace transform.

* See Aseltine (1).

Example 2.2.1. Determine the Laplace transform of $e^{at}$.

$L\{e^{at}\} = \int_0^\infty e^{at}e^{-st}\,dt = \left[\frac{e^{(a-s)t}}{a - s}\right]_0^\infty = \frac{1}{s - a}$

Example 2.2.2. Determine the Laplace transform of df/dt.

$L\{df/dt\} = \int_0^\infty \frac{df}{dt}\,e^{-st}\,dt$

Integrate by parts to obtain

$L\{df/dt\} = \left[f(t)\,e^{-st}\right]_0^\infty + s\int_0^\infty f(t)\,e^{-st}\,dt = sF(s) - f(0)$

A table of Laplace transforms appears in Table 2.1.

Laplace transforms are useful for solving differential equations. The procedure for linear, ordinary differential equations is:

1. Laplace transform all terms in the differential equation. This gives an algebraic equation.

2. Solve the algebraic equation for the Laplace transform of the desired solution.

3. Obtain the solution by inverting the expression for the Laplace transform of the solution. This is done using the table or by using a general inversion theorem (described below).

Example 2.2.3. Solve the following differential equation using Laplace transforms:

dx/dt = R − ax

where R and a are constants.

Step 1.

$sX(s) - x(0) = R/s - aX(s)$

Step 2.

$X(s) = \frac{R}{s(s + a)} + \frac{x(0)}{s + a}$

Step 3.

$x(t) = (R/a)(1 - e^{-at}) + x(0)\,e^{-at}$

TABLE 2.1. Laplace Transform Pairs

f(t)  |  F(s)
df/dt  |  sF(s) − f(0)
d²f/dt²  |  s²F(s) − s f(0) − ḟ(0)
dⁿf/dtⁿ  |  sⁿF(s) − sⁿ⁻¹f(0) − ⋯ − f⁽ⁿ⁻¹⁾(0)
∫₀ᵗ f(x) dx  |  F(s)/s
f(t − T)u(t − T)  |  e⁻ᵀˢF(s) for T > 0 (u is the unit step function)
e⁻ᵃᵗf(t)  |  F(s + a)
δ₊(t)  |  1 (δ₊ is the impulse or delta function)
u(t)  |  1/s (u is the unit step function)
t  |  1/s²
t²  |  2/s³
tⁿ  |  n!/sⁿ⁺¹
e⁻ᵃᵗ  |  1/(s + a)
te⁻ᵃᵗ  |  1/(s + a)²
tⁿe⁻ᵃᵗ  |  n!/(s + a)ⁿ⁺¹
sin βt  |  β/(s² + β²)
cos βt  |  s/(s² + β²)
sinh βt  |  β/(s² − β²)
cosh βt  |  s/(s² − β²)
e⁻ᵃᵗ sin βt  |  β/[(s + a)² + β²]
e⁻ᵃᵗ cos βt  |  (s + a)/[(s + a)² + β²]
(1/(b − a))(e⁻ᵃᵗ − e⁻ᵇᵗ)  |  1/[(s + a)(s + b)]

In general, the Laplace transform of the solution for a lumped-parameter system will be a ratio of polynomials in s.

$X(s) = \frac{s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}$  (2.2.2)

These polynomials may be written in factored form to give

$X(s) = \frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)}$  (2.2.3)

The values $z_1, z_2, \ldots, z_m$ are called the zeros of X(s). The values $p_1, p_2, \ldots, p_n$ are called the poles of X(s). The Laplace transform of the solution of a system of ordinary differential equations with equations of various orders will have $N_p$ poles, where

$N_p = \sum_{\text{all equations}} O_i$  (2.2.4)

where $O_i$ is the order of the ith equation in the set. For example, if all the equations are first order, then the number of poles is equal to the number of equations. It is possible for several poles to have the same value. Poles that appear once are called simple poles. Poles that appear more than once are called multiple poles.

The inversion of Laplace transforms may be handled by a general inversion method based on the residue theorem. The residue theorem gives the following:

$L^{-1}\{F(s)\} = \sum_{i=1}^{I} R_i$  (2.2.5)

where I is the number of distinct poles, and $R_i$ the residue of the ith pole. The residue is given by

$R_i = \lim_{s \to p_i} \frac{1}{(n-1)!}\,\frac{d^{n-1}}{ds^{n-1}}\left[(s - p_i)^n F(s)\,e^{st}\right]$  (2.2.6)

where n is the number of times the pole $p_i$ is repeated. For a simple pole, this simplifies to

$R_i = \left[(s - p_i)F(s)\,e^{st}\right]_{s = p_i}$  (2.2.7)

Example 2.2.4. Invert the following:

$F(s) = \frac{s + 1}{(s + 2)(s + 3)}$

The residues are

$R_1 = \left[\frac{-2 + 1}{-2 + 3}\right]e^{-2t} = -e^{-2t}, \qquad R_2 = \left[\frac{-3 + 1}{-3 + 2}\right]e^{-3t} = 2e^{-3t}$

Thus

$L^{-1}\{F(s)\} = -e^{-2t} + 2e^{-3t}$
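The residue calculation is easy to cross-check with a computer algebra system; a minimal sympy sketch (our code) for Example 2.2.4:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

F = (s + 1) / ((s + 2) * (s + 3))
f = sp.inverse_laplace_transform(F, s, t)
print(f)   # 2*exp(-3*t) - exp(-2*t), matching the residue result
           # (any Heaviside(t) factor equals 1 for t > 0)
```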

Indirect Analysis

It is possible to implement the method of Section 2.7 for the evaluation of the frequency response on a digital computer. This would involve the numerical evaluation of the input-output cross-correlation function and the input autocorrelation function using

$C_{ij}(\Delta p) = \frac{1}{N}\sum_{p=1}^{N} x_i(p)\,x_j(p + \Delta p)$  (4.12.1)

where $C_{ij}$ is the autocorrelation function if i = j, or the cross-correlation function if i ≠ j, and Δp the number of sampling intervals to give the desired lag. Both correlation functions are then Fourier transformed, and the ratio of the Fourier transform of the cross-correlation function to the Fourier transform of the input autocorrelation gives the system frequency response.

It is informative to consider the use of the indirect method for analyzing n periods of a periodic signal to get the power spectrum. The usual procedure is to compute the correlation function for enough different lags Δp to give correlation-function results for a span of one period. Thus the maximum lag to be calculated is equal to the period. The maximum number of terms in the calculation at any lag is (n − 1)S, where n is the number of periods and S the number of samples per period. The reason the factor is (n − 1) instead of n is that each term in the series must use two values of the function, and one of them is lagged as much as a period. In order to obtain the same sampling interval (and the same Nyquist frequency) in the correlation function, we must calculate the correlation function for a total of S lags. Then the total number of multiplications required to form the correlation function is (n − 1)S².

If we add on the number of multiplications required for the Fourier analysis, then the total number of multiplications is

$(n - 1)S^2 + SF$ using the DFT of Section 4.4

$(n - 1)S^2 + 2S\log_2 S$ using the FFT

where F is the number of frequencies to be analyzed.

This can be compared with direct Fourier analysis of the signal. The number of multiplications required would be nSF using the DFT of Section 4.4 and $2Sn\log_2 Sn$ using the FFT. The ratio of the analysis time for the indirect method to that for direct Fourier analysis is

$\frac{\text{indirect}}{\text{direct}} = \frac{(n - 1)S^2 + SF}{nSF} \cong \frac{S}{F}$ using the DFT of Section 4.4

$\frac{\text{indirect}}{\text{direct}} = \frac{(n - 1)S^2 + 2S\log_2 S}{2Sn\log_2 Sn}$ using the FFT

We observe that direct Fourier analysis based on the DFT of Section 4.4 is faster than the indirect method as long as the number of frequencies to be analyzed is fewer than S. The direct method based on the FFT is much faster than the indirect method for all practical values of n and S.

A roundabout way of evaluating the correlation functions has been developed that exploits the great speed of the FFT. This involves the calculation of the power spectra of input and output signals and their cross-power spectrum using Fourier coefficients obtained from the FFT. The correlation functions are then obtained by performing inverse Fourier transforms of the power spectra using the FFT. This is readily accomplished, and the time required is much less than a straightforward calculation of the correlation functions.
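A sketch of that roundabout route for sampled periodic records (our code): the correlation function is the inverse FFT of the corresponding power or cross-power spectrum:

```python
import numpy as np

def periodic_correlation(x, y):
    """Correlation function via the FFT: form the (cross-) power spectrum
    from the Fourier coefficients, then inverse transform. Gives the
    autocorrelation when x is y, the cross-correlation otherwise."""
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    return np.fft.ifft(np.conj(X) * Y).real / len(x)

# C11 = periodic_correlation(inp, inp)   # input autocorrelation
# C12 = periodic_correlation(inp, out)   # input-output cross-correlation
```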

Summary

In this chapter, the role of Laplace transforms in the analysis of dynamic systems has been outlined. They are used in solving differential equations and in formulating transfer functions. The frequency response may be obtained simply by substituting jω for s in the transfer function. The frequency response is also experimentally observable, giving a convenient link between theory and experiment.

In Section 2.4, we began a discussion of mathematical analyses that will be used in frequency response test data analysis. The key operation is Fourier analysis. The Fourier coefficients are related to the power spectrum of the signal by Parseval's relation. The power spectrum is important for assessing signal strength and as an intermediate result in some data-analysis methods. Correlation functions are related to the Fourier coefficients of a signal by Wiener's theorem.

Analysis procedures for obtaining frequency response results from nonsinusoidal signals were described in Section 2.9. We found that the frequency response is given by the Fourier transform of the output signal divided by the Fourier transform of the input signal. This is the most important result of Chapter 2.

Coherence functions are used to assess the influence of background noise on the test results. We examined the process of Fourier analysis and found that it is equivalent to a band-pass filtering of the signal. The filter has a shape given by the sampling function (sin x)/x.

We considered the bandwidth of a binary pulse chain so that the range of harmonics with suitably large amplitudes could be estimated. We considered nonlinear contamination and found that antisymmetric periodic signals discriminate against nonlinear effects.

A number of topics that may appear somewhat unrelated appear in this chapter. However, the reader will find that they all enter into questions of test-signal characterization and selection, data analysis, and data interpretation that arise in later chapters.