This level of system identification involves the correlation of experimental results with results from a detailed theoretical model. Such a model is generally used prior to system operation for studying system stability, for studying performance during normal maneuvers such as load changes, and for designing control systems. Also, this model used for studies of normal system operations usually includes the same modeling assumptions and system parameters used in the accident-studies model. These models have a major influence on system design and operating policies. Clearly, experimental verification of the model is in the best interest of safe and efficient system operation.
Differences between experimental results and theoretical predictions may be due to errors in the model, errors in the parameters in a correct model, or both. The identification of parameters in a correct model can be approached in a systematic way, as described in the next section. The adjustment of theoretical models is not so systematic, but a good understanding of the underlying processes that are responsible for the calculated behavior can aid in this. For example, Fig. 6.4 shows theoretical and experimental phase shift results for the molten-salt reactor experiment (53). The
Fig. 6.4. Phase shift for molten-salt reactor experiment.
predicted bump in the neighborhood of 0.3 rad/sec did not appear in the experimental results. Prior theoretical work had shown that this bump was due to pure time delays associated with external-loop circulation. This knowledge permitted an adjustment of the model by allowing for more mixing in the external-flow circuits.
A number of methods are available for analyzing frequency response test data. The procedures discussed in this chapter should be sufficient to supply a suitable analysis method for most situations. Methods based on analog equipment will be considered first, then methods based on digital computing equipment will be considered.
4.1. Fourier Analysis—Analog Computer Methods
An analog computer circuit for the Fourier analysis algorithm of Section 2.9 is shown in Fig. 4.1. This method is sound in principle, but problems often arise because of inaccuracies in analog multiplication.
Fig. 4.1. Analog computer circuit for Fourier analysis by the method of Section 2.9.
An alternate analog computer method that eliminates the need for analog multipliers has been developed (1-3). Consider the circuit diagram shown in Fig. 4.2. The transfer functions that relate O₁ and O₂ to I are
G₁(s) = O₁/I = s/(s² + ω²)    (4.1.1)

G₂(s) = O₂/I = −ω/(s² + ω²)    (4.1.2)
The impulse responses corresponding to these transfer functions are
h₁(t) = cos ωt    (4.1.3)

h₂(t) = −sin ωt    (4.1.4)
where hi{t) is the impulse response of 0, (response of Oj if an impulse were input to the circuit of Fig. 4.2), and h2(t) the impulse response of 02. These impulse responses may be used to give the responses of the system to arbitrary inputs by means of the convolution integral:
O₁(t) = ∫₀^t cos ω(t − τ) I(τ) dτ    (4.1.5)

O₂(t) = −∫₀^t sin ω(t − τ) I(τ) dτ    (4.1.6)
The following trigonometric identities may be used to cast these results into an alternate form:
sin ω(t − τ) = sin ωt cos ωτ − cos ωt sin ωτ    (4.1.7)

cos ω(t − τ) = cos ωt cos ωτ + sin ωt sin ωτ    (4.1.8)
The results are
O₁(t) = cos ωt ∫₀^t I(τ) cos ωτ dτ + sin ωt ∫₀^t I(τ) sin ωτ dτ

O₂(t) = −sin ωt ∫₀^t I(τ) cos ωτ dτ + cos ωt ∫₀^t I(τ) sin ωτ dτ
If the signal is periodic, ω = 2kπ/T, and we obtain cos ωnT = 1, sin ωnT = 0
where n is the number of cycles of data analyzed, and T the period. Thus, if the integration is stopped at a time equal to an integer multiple of the period of the signal, the results are
O₁(nT) = ∫₀^nT I(t) cos ωt dt

O₂(nT) = ∫₀^nT I(t) sin ωt dt

Since

(1/nT) ∫₀^nT I(t)e^(−jωt) dt = (1/nT) ∫₀^nT I(t) cos ωt dt − (j/nT) ∫₀^nT I(t) sin ωt dt

we obtain

(1/nT) ∫₀^nT I(t)e^(−jωt) dt = (1/nT)O₁(nT) − (j/nT)O₂(nT)    (4.1.13)
This method has also been implemented (2) using a digital simulation of the analog circuit of Fig. 4.2.
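The quadrature method lends itself to a quick numerical check. The sketch below (an illustration, not the implementation of Ref. 2) integrates I(t) cos ωt and I(t) sin ωt over an integer number of periods and forms (1/nT)[O₁(nT) − jO₂(nT)]; for a pure sine of amplitude b at the analysis frequency, this should return the Fourier coefficient −jb/2.

```python
import numpy as np

def quadrature_fourier(I, omega, n_periods, samples_per_period=5000):
    """Form (1/nT)[O1(nT) - j*O2(nT)] from the two quadrature
    integrals, stopping at an integer number of periods."""
    T = 2.0 * np.pi / omega
    N = n_periods * samples_per_period
    dt = n_periods * T / N
    t = np.arange(N) * dt
    y = I(t)
    O1 = np.sum(y * np.cos(omega * t)) * dt   # integral of I(t)cos(wt)
    O2 = np.sum(y * np.sin(omega * t)) * dt   # integral of I(t)sin(wt)
    return (O1 - 1j * O2) / (n_periods * T)

b, omega = 2.0, 1.5
c = quadrature_fourier(lambda t: b * np.sin(omega * t), omega, n_periods=4)
print(c)   # the fundamental Fourier coefficient of b*sin(wt) is -j*b/2
```

Stopping the integration at an exact multiple of the period is what makes the cross terms vanish, as in Eq. (4.1.13).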
The locking piston CRDM is a hydraulically driven CRDM used by the General Electric Company in its boiling-water reactors. These CRDMs are mounted beneath the reactor core and use primary system water as the operating fluid. These CRDMs move in stepwise fashion between fixed stops. The distance between stops is 6 in. The nominal rod speed is 3 in./sec or less. This means that it takes at least 2 sec to accomplish the unlatching, latching, and rod moving associated with a step. Also, the CRDM is designed so that the rod must be raised slightly to disengage a latch before it can be lowered. The reactivity per step is quite adequate for dynamic testing, but the maximum frequency is less than 0.25 Hz. Furthermore, the small insert before a withdraw step gives a reactivity change that may be difficult to account for in the input-signal spectrum, since there is normally no continuous indication of rod position. The position indication consists of
magnetically operated switches that are actuated as the rod passes certain points (usually at latching points and halfway between latching points). These observations indicate that this CRDM is not very well suited for frequency response testing.
(a) Analytical Development
The transfer function is defined as the Laplace transform of the deviation of a linear system output from its equilibrium value divided by the Laplace transform of the deviation of an input from its value at equilibrium. It is usually written as

G(s) = δO(s)/δI(s)    (2.3.1)

where δO(s) and δI(s) are the transforms of the output and input deviations.
Example 2.3.1. Determine the transfer function δO(s)/δI(s) for the following lumped-parameter equation:
dO/dt = −3O + 4I
Since this is a linear system, the equation for the deviation from equilibrium (involving δO and δI) is exactly the same as the equation for absolute values of the variables (involving O and I). Thus we may write
dδO/dt = −3δO + 4δI
Now Laplace transform:
s δO(s) − δO(0) = −3 δO(s) + 4 δI(s)
The value of δO(0) is zero because the initial state is the equilibrium state. The transfer function is
δO(s)/δI(s) = 4/(s + 3)
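The result of Example 2.3.1 can be spot-checked by simulation. The following sketch (a forward-Euler integration; step size and horizon are arbitrary choices, not from the text) drives dO/dt = −3O + 4I with a unit step in I and compares against the step response implied by 4/(s + 3), namely (4/3)(1 − e^(−3t)).

```python
import numpy as np

# Forward-Euler simulation of dO/dt = -3*O + 4*I for a unit step in I,
# compared with the analytic step response of G(s) = 4/(s + 3).
dt, t_end = 1e-4, 3.0
t = np.arange(0.0, t_end, dt)
O = np.zeros_like(t)
for k in range(1, len(t)):
    O[k] = O[k - 1] + dt * (-3.0 * O[k - 1] + 4.0 * 1.0)

analytic = (4.0 / 3.0) * (1.0 - np.exp(-3.0 * t))
err = float(np.max(np.abs(O - analytic)))
print(err)   # only the small Euler discretization error remains
```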
Transfer functions may also be developed for distributed-parameter systems.
Example 2.3.2. Develop a transfer function that relates internal temperatures to a change in the surface temperature applied simultaneously at each surface of a one-dimensional slab.
The equation (in terms of deviation from equilibrium) is
α ∂²δT/∂z² = ∂δT/∂t
where α is a constant, T the temperature, and z the position. Laplace transform to obtain
∂²δT(s)/∂z² = (s/α) δT(s)
The solution of this ordinary differential equation is
δT(s) = A exp[(s/α)^(1/2) z] + B exp[−(s/α)^(1/2) z]
Now use appropriate boundary conditions:
(1) δT(L, s) = δθ(s)

(θ is the slab surface temperature, and L the half thickness of the slab.)

(2) ∂δT/∂z (0, s) = 0
The result is
δT(z, s)/δθ(s) = cosh[(s/α)^(1/2) z] / cosh[(s/α)^(1/2) L]
In general, the transfer function for a lumped-parameter system will be a ratio of polynomials in s with a finite number of poles. The transfer function for a distributed-parameter system will be a transcendental function of s with an infinite number of poles.
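The distributed-parameter result can still be evaluated numerically along s = jω. A minimal sketch (unit half-thickness and unit α are assumed illustrative values):

```python
import cmath

def slab_tf(z, L, alpha, omega):
    """dT(z,s)/dtheta(s) = cosh[(s/alpha)^(1/2) z] / cosh[(s/alpha)^(1/2) L]
    evaluated at s = j*omega."""
    q = cmath.sqrt(1j * omega / alpha)
    return cmath.cosh(q * z) / cmath.cosh(q * L)

# Centerline (z = 0) response: gain falls and lag grows with frequency.
g_low = slab_tf(0.0, L=1.0, alpha=1.0, omega=0.01)
g_high = slab_tf(0.0, L=1.0, alpha=1.0, omega=10.0)
print(abs(g_low), cmath.phase(g_low))
print(abs(g_high), cmath.phase(g_high))
```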
(b) Stability
The stability of a linear system is determined by the values of the poles of the transfer function. If all of the poles have negative real parts, the system is stable. If any pole has a positive real part, the system is unstable. If a complex conjugate pair of poles lies on the imaginary axis, the system will have an undamped oscillatory response, and the system is on the stability-instability boundary. System stability can be determined by a direct calculation of the poles or by use of stability criteria (2) that provide an indication of stability versus instability without a calculation of all the poles. Furthermore, some of these criteria also provide a measure of relative stability so that the stability margin may be evaluated. A description of the various stability criteria and relative stability measures is beyond the scope of this book. However, we are interested in relative stability measures that can be determined experimentally. The Nyquist stability criterion provides a suitable relative stability measure and is described in Section 2.4.
(c) Block Diagrams
Transfer functions provide input-output relations that may be defined for each part of a system and combined to give complete input-output relations for the whole system. This involves the use of block diagrams and
their combination using block-diagram algebra. An input-output relation is indicated by a block and input and output lines as shown in Fig. 2.1.
Total system models are obtained by combining subsystems, which may be arranged in series, parallel, or feedback configurations as shown in Fig. 2.2.
Fig. 2.2. Block diagram combinations.
The overall system transfer function G for a series arrangement is obtained by multiplying each of the serially connected transfer functions:
G = G₁G₂G₃ ···    (2.3.2)
The overall transfer function for a parallel arrangement is obtained by adding each of the parallel transfer functions:
G = G₁ + G₂ + G₃ + ···    (2.3.3)
The overall transfer function for the feedback arrangement shown in Fig. 2.2 is
G = G₁/(1 + G₁H)    (2.3.4)
Note that the sign of the feedback term at the summing junction is negative, indicating that the feedback is subtracted at the summing junction. This is the most common form, but the sign of the feedback signal may also be positive, giving
G = G₁/(1 − G₁H)    (2.3.5)
For feedback systems, a distinction is made between the closed-loop transfer function and the open-loop transfer function. For the feedback system of Fig. 2.2, the closed-loop transfer function is the overall input-output relation,
G = G₁/(1 + G₁H)    (2.3.6)
The open-loop transfer function is defined as G₁H. It gives the input-output relation that would describe a system obtained by breaking the feedback loop at some point, inserting an input downstream of the break, and measuring the output upstream of the break. For example, Fig. 2.3 shows an open-loop system. The transfer function G₁H gives the input-output
Fig. 2.3. Feedback combination for the open-loop transfer function.
relation −b/c. Since the open-loop transfer function includes all of the subsystem transfer functions that occur in the closed-loop transfer function, it is possible to obtain information about closed-loop system performance by studying the open-loop transfer function. In particular, the Nyquist stability criterion described in the next section uses the open-loop transfer function to determine the stability of the closed-loop system.
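These combination rules translate directly into code. In the sketch below, G₁ and H are simple illustrative blocks (assumed here, not taken from the text); the closed-loop response is G₁/(1 + G₁H) and the open-loop response is G₁H, each evaluated along s = jω.

```python
# Closed-loop and open-loop responses for the Fig. 2.2 feedback
# arrangement. G1 and H are illustrative stand-in blocks.
def G1(s):
    return 10.0 / (s + 1.0)     # assumed forward-path block

def H(s):
    return 0.5                  # assumed constant feedback gain

def open_loop(s):
    return G1(s) * H(s)         # G1*H

def closed_loop(s):
    return G1(s) / (1.0 + G1(s) * H(s))

for w in (0.0, 1.0, 10.0):
    s = 1j * w
    print(w, abs(open_loop(s)), abs(closed_loop(s)))
```

At zero frequency the negative feedback reduces the gain from 10 to 10/6, the familiar desensitizing effect of a feedback loop.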
The analyst must decide whether to use on-line analysis equipment to process the data at the test site or to record the data for subsequent analysis
on a large, off-site digital computer. Convenience, cost, capabilities of available manpower, and the delay that can be tolerated will dictate the choice.
If a large digital computer is to be used, then a multichannel FM tape recorder may be used to record the input and output signals. The analog tape is subsequently played into an analog-to-digital converter and a digital magnetic tape is written. Another choice is to use a digital data acquisition system that digitizes the data at the time of recording and stores the data in digital form. The digital data (obtained by either procedure) are then used along with an analysis program (probably an FFT program) on the large digital computer. This method rates low on convenience. There is a long delay between the completion of the test and the availability of the results. Also, this method requires considerable data handling and the operation of several different devices. The cost depends on whether the necessary equipment is already available or whether it would have to be purchased or rented for the tests. The manpower capabilities will vary greatly from one organization to another.
The other possibility for data analysis is to build or purchase an on-line analyzer. Analog analyzers have been used, but most on-line analyzers now use small digital computers. The analyzer consists of the minicomputer with appropriate software, a multiplexer, an analog-to-digital converter, and some sort of output device. The cost of the equipment ranges from $25,000 to $75,000, depending on the number of special options.
Most available analyzers that employ a minicomputer use the FFT algorithm. It is certain that the FFT algorithm is the fastest procedure available, but the price paid for this speed is that the sampling rate and signal characteristics must give 2ⁿ data points per period and the computer must have enough storage to hold at least a block of data at a time. An analyzer based on the procedure of Section 4.4 would have no restriction on the number of data points and the storage requirements would be very small. Also, the programming for the algorithm of Section 4.4 is much simpler than the programming for the FFT. (However, this is often not a problem because FFT programs are available for most computers.)
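The contrast can be illustrated with a minimal sketch (an illustration of the general idea, not the exact algorithm of Section 4.4): a single-frequency Fourier sum accumulated sample by sample needs only two running totals, so it places no power-of-two restriction on the record length and never holds a block of data in memory.

```python
import math

def running_fourier(samples, dt, omega):
    """Single-frequency Fourier sums accumulated sample by sample.
    Only two running totals are stored: no 2^n restriction on the
    number of points, no data block to hold."""
    re_sum = im_sum = 0.0
    t = 0.0
    for x in samples:
        re_sum += x * math.cos(omega * t) * dt
        im_sum += x * math.sin(omega * t) * dt
        t += dt
    return re_sum, im_sum

omega, dt, n = 2.0, 0.001, 31416        # n is not a power of two
samples = (math.sin(omega * k * dt) for k in range(n))
re, im = running_fourier(samples, dt, omega)
print(re, im)   # im ≈ half the record length for a unit sine at omega
```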
In this chapter, the important signals for use in frequency response tests are considered. As described in the previous chapter, Fourier analysis can be used to obtain frequency response information from nonsinusoidal tests that use periodic or nonperiodic input disturbances. We restrict our attention to those currently known test signals that are practical to input with standard reactor hardware and that have power spectra that can give results over frequency ranges of interest. Discrete-level inputs (those that have only distinct values—usually two or three) are available that have the desired characteristics. In this chapter we consider the following discrete-level, periodic signals: pseudo-random binary sequence, n sequence, pseudo-random ternary sequence, multifrequency binary sequence, and square wave. The nonperiodic signals considered are the pulse and the step.
In a frequency response test on a nuclear reactor, one of these signals could be used as an input to the system. The input and selected output signals would then be analyzed using one of the methods described in Chapter 4 to give the frequency responses. The information in this chapter can aid the experimenter in selecting the test signal that is best suited for his application.
Parameter estimation is the determination of coefficients in a theoretical model by interpretation of experimental results. In general, there are two classes of parameter estimation:
Class 1. Parameters are determined that are required only to cause theoretical results predicted by the model to agree with experimental results. This class of parameter estimation is related to the problem of constructing an empirical model, but in this case the problem consists of assigning coefficients in a model with a predetermined structure. Also, coefficients that are adequately known can be fixed.
Class 2. Parameters are determined that accomplish the objectives of class-1 parameter identification; but, furthermore, the parameters must be “physical.”
The reason for this distinction is that there may be many sets of coefficients that give equally good agreement between theory and experiment, but there is only one set that would agree with the coefficients that would be determined individually in independent, perfect measurements. Clearly a method that will solve class-2 problems will also solve class-1 problems.
One form of parameter estimation uses the experimental response (either frequency response or time response) at N selected points. The parameter estimation procedure finds the set of coefficients that minimizes the error between theory and experiment. The error is defined as follows:
E = Σₖ₌₁ᴺ [(Ye(k) − Yc(k))/Ye(k)]²

where Ye(k) is the experimental response at observation point k, and Yc(k) is the calculated response at observation point k. This form of error function is used because a zero value for E requires that the error be zero at every observation point (no cancellation of terms).
The minimization of the error may be accomplished by the methods of automatic optimization. A number of procedures are available, but the most common one is the method of steepest descent. Experience indicates that the method works nicely for class-1 problems. That is, it is not difficult
to find some set of parameters that gives a small error between theory and experiment. Class-2 problems are more difficult. Here the problem is one of uniqueness. The method is required to find the unique set of parameters that are the true “physical” parameters.
The steepest-descent method is essentially a systematic search in the multidimensional space in which the error function is related to the system parameters. The change in the error due to a change in parameters may be expressed using a Taylor series:
E = E₀ + Σᵢ₌₁ᴹ (∂E/∂xᵢ) Δxᵢ + ···    (6.2.3)
If the higher-order terms are ignored, then the relation between the error function and the system parameters is given by
E − E₀ = Σᵢ₌₁ᴹ (∂E/∂xᵢ) Δxᵢ    (6.2.4)
This is valid for some region around the point where E₀ and ∂E/∂xᵢ are evaluated. The greatest reduction in the error function for a given change in the parameters is obtained by making the parameter changes according to
Δxᵢ ∝ (−∂E/∂xᵢ)    (6.2.5)
That is, each parameter should be changed in proportion to the negative of the sensitivity of the error to that parameter. In the steepest-descent optimization, this gives a direction of change of the parameters that will reduce the error. This vector is explored by calculating the error at selected points along the vector. When the best point is found, a new direction is determined, and the process is repeated.
The sensitivities are found as follows:
∂E/∂xᵢ = −2 Σₖ₌₁ᴺ [(Ye(k) − Yc(k))/Ye(k)²] ∂Yc(k)/∂xᵢ    (6.2.6)
Of course, the experimental response does not depend on the assumed values for the parameters xᵢ. The evaluation of the error sensitivity by Eq. (6.2.6) depends on the evaluation of the response sensitivity ∂Yc(k)/∂xᵢ. This may be done using the theoretical model and generalized techniques for evaluating sensitivity functions (54).
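A compact sketch of the whole loop follows: a hypothetical two-parameter model stands in for the system, the fractional-error function E = Σ[(Ye − Yc)/Ye]² is evaluated, the sensitivities are estimated by finite differences, and each parameter is stepped against its own sensitivity (steepest descent). The model, starting guess, and step size are all illustrative assumptions, not taken from the text.

```python
import numpy as np

def model_response(params, ks):
    # Hypothetical calculated response Yc(k; x): a simple linear model
    # standing in for a full system model (illustrative only).
    a, b = params
    return a + b * ks

def error(params, Ye, ks):
    # E = sum over observation points of [(Ye - Yc)/Ye]^2
    Yc = model_response(params, ks)
    return float(np.sum(((Ye - Yc) / Ye) ** 2))

ks = np.arange(1.0, 8.0)                       # observation points k
Ye = model_response(np.array([2.0, 0.3]), ks)  # "experimental" data

x = np.array([0.5, 1.0])                       # initial guess
step, eps = 0.02, 1e-6
for _ in range(5000):
    grad = np.zeros_like(x)
    for i in range(len(x)):                    # finite-difference dE/dx_i
        xp = x.copy()
        xp[i] += eps
        grad[i] = (error(xp, Ye, ks) - error(x, Ye, ks)) / eps
    x = x - step * grad                        # move against the sensitivities
print(x)   # approaches the generating values [2.0, 0.3]
```

With a convex error surface the search recovers the generating parameters; real class-2 problems are harder for the uniqueness reasons discussed above.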
Criteria for uniqueness have been developed for the parameter identification problem. Buckner (40) studied this problem and considered the uniqueness of the identification of the coefficients in a linear state-variable model (a model consisting of coupled, first-order, linear differential equations). Sufficient criteria were derived for identifying some of the coefficients in a
system with n equations. It is necessary for all of the remaining coefficients to be correct. For the case of a measurement involving only a single input and a single output, it was found that:
1. Any (2n − 1) coefficients can be determined uniquely if the initial guesses on these adjustable coefficients are close to the correct values.
2. Any two diagonal coefficients and (n — 1) off-diagonal coefficients from any single row or column can be determined uniquely, regardless of the error in the initial guesses on these adjustable coefficients.
For the case of a test in which m different outputs that result from a single input are analyzed to give m separate frequency responses, it was found that:
1. Any m(n — 1) + n coefficients can be determined uniquely if the initial guesses are close to the correct values.
2. Any two diagonal coefficients and (n — 1) off-diagonal coefficients from any single row or column can be determined uniquely, regardless of the error in the initial guesses on the adjustable coefficients. This is the same result as for the test that uses a single output.
Other authors (28, 33) have studied the uniqueness problem for identifying the coefficients in a single nth-order differential equation. In this case, there are n coefficients to identify instead of up to n2 in a state-variable model. The general form of the equation is
dⁿx/dtⁿ + aₙ₋₁ dⁿ⁻¹x/dtⁿ⁻¹ + ··· + a₂ d²x/dt² + a₁ dx/dt + a₀x = f
Buckner’s uniqueness criterion may be used if we convert this equation into an equivalent state-variable form:
dx/dt = Ax + f
where
    ⎡   0     1     0     0   ···    0    ⎤
    ⎢   0     0     1     0   ···    0    ⎥
A = ⎢   0     0     0     1   ···    0    ⎥
    ⎢   ⋮                       ⋱     ⋮    ⎥
    ⎣ −a₀   −a₁   −a₂   −a₃   ···  −aₙ₋₁  ⎦
Since all of the adjustable coefficients are on a single row, all of them can be determined uniquely in a single test.
Another problem is the question of local optima versus global optima. Regardless of satisfaction of uniqueness criteria, the optimization procedure might find a local minimum rather than the absolute minimum. This is illustrated in Fig. 6.5 for a one-parameter problem. In general, there is no absolute test for global optimality. The only recourse is to repeat the search from several different starting points. If all the searches converge to the same optimum, then confidence is gained in the global optimality of the results.
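This behavior is easy to reproduce on a contrived one-parameter error function (illustrative only, not from the text): gradient searches started from several points split between the local and the global minimum, which is exactly what the multi-start strategy is meant to expose.

```python
import math

def error_fn(x):
    # Contrived one-parameter error surface: global minimum near x = 1,
    # local minimum near x = 4.7 produced by the Gaussian bump.
    return (x - 1.0) ** 2 + 8.0 * math.exp(-4.0 * (x - 4.0) ** 2)

def descend(x, step=1e-3, iters=20000, eps=1e-7):
    # Plain gradient descent with a forward-difference gradient.
    for _ in range(iters):
        g = (error_fn(x + eps) - error_fn(x)) / eps
        x -= step * g
    return x

# Repeating the search from several starting points exposes the local
# optimum: some starts converge to it rather than to the global minimum.
for x0 in (-2.0, 0.5, 3.0, 5.0):
    xm = descend(x0)
    print(x0, "->", round(xm, 3), " E =", round(error_fn(xm), 3))
```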
Fig. 6.5. Sketch of error function versus a system coefficient.
Simple examples can illustrate the parameter estimation procedure (40). A model with known parameters was used to calculate the frequency response. This was used as the “experimental” response. The parameters were then changed and were used for a reference model. Then a steepest-descent parameter identification procedure was used to adjust the parameters in the reference model to improve agreement between “experimental” results and results from the reference model. The model used to give the “experimental” response was
dz₁/dt = −z₁ − 2z₂ + f,    dz₂/dt = 0.4z₁ − 0.7z₂
The initial reference model for case 1 was
dz₁/dt = −1.25z₁ − 2z₂ + f,    dz₂/dt = 0.65z₁ − 0.5z₂
Figure 6.6 shows the “experimental” frequency response for δz₁/δf and the frequency response obtained from the initial reference model. In the parameter estimation calculation for case 1, the coefficient of z₂ in the first
equation was not varied. The other three coefficients were adjusted to minimize the error. According to the uniqueness criteria, unique results should be obtained if the optimization converges to the global minimum. Figure 6.7 shows the current model parameters at each stage in the optimization. We observe that the final parameter estimates agree with the known true values.
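The frequency responses in this example can be reproduced by evaluating the first component of (jωI − A)⁻¹B for each coefficient matrix; the sketch below assumes the forcing f enters only the first state equation, as in the model above.

```python
import numpy as np

def freq_response(A, omega):
    """dz1/df for dz/dt = A z + [1, 0]^T f: the first component of
    (jw*I - A)^{-1} B evaluated along the imaginary axis."""
    B = np.array([1.0, 0.0])
    return np.linalg.solve(1j * omega * np.eye(2) - A, B)[0]

A_true = np.array([[-1.0, -2.0], [0.4, -0.7]])    # "experimental" model
A_init = np.array([[-1.25, -2.0], [0.65, -0.5]])  # initial reference model

for omega in (0.01, 0.1, 1.0, 10.0):
    g_e = freq_response(A_true, omega)
    g_c = freq_response(A_init, omega)
    print(omega, abs(g_e), abs(g_c))
```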
Another example can demonstrate the uniqueness problem. In case 2, the model used for calculating the “experimental” model was the same as before. The initial reference model for case 2 was
dz₁/dt = −1.03z₁ − 1.8z₂ + f,    dz₂/dt = 0.45z₁ − 0.7z₂
The coefficient of z₂ in the second equation was held constant and the other coefficients were adjusted in the optimization. This indicates that the uniqueness criteria were not satisfied. Figure 6.8 shows frequency response results for δz₁/δf for the two cases. Figure 6.9 shows the current parameter
estimates at each stage in the optimization. Clearly, the estimated values are incorrect. However, the model based on these parameters gives results that agree very well with the “experimental” results, as shown in Fig. 6.10. This type of problem with uniqueness suggests a cautious approach to practical parameter identification, but it does not mean that parameter identification cannot be very helpful if used properly.
Several workers have used parameter estimation techniques to determine important parameters in reactor systems. Cummins (36, 41) included this type of analysis in his work on the Dragon reactor. He used an optimization procedure to find the minimum squared difference between a theoretical step response and a measured step response. The minimum was found by a systematic adjustment of selected parameters in the theoretical model.
Obeid and Lapsley (42) determined the reactivity coefficients and heat-transfer coefficients in a swimming-pool reactor at the University of
Fig. 6.10. Amplitude for case 2.
Virginia. They obtained the zero-power frequency response and the closed-loop frequency response at power (0.9 MW). They then obtained the feedback frequency response from Eq. (6.2.2). In their fitting of the theoretical model to the experimental results, they took advantage of the fact that the three major temperature feedbacks (fuel, coolant, reflector) had significantly different relative importances in different frequency ranges.
Band-pass filters have been used extensively in noise analysis work to obtain power spectra. The approach may be extended for use in frequency response measurements. This may be desirable in some cases where it is possible to use existing equipment or where the filtering inherent in Fourier analysis is inadequate to remove the effect of troublesome background noise. The basic setup consists of two band-pass filters, a multiplier, and an integrator. The arrangement is shown in Fig. 4.3. The theoretical development for this method may be found in the work of Kerlin and Ball (4).
Fig. 4.3. Band-pass filter analysis circuit.
The frequency analysis requires the use of this setup for the three different pairs of signals shown in Table 4.1. The frequency response is obtained as follows:
Re{G(jω)} = B/A    (4.2.1)

Im{G(jω)} = −ωC/A    (4.2.2)
This may be repeated for a number of settings of the band-pass filter center frequency ω to obtain the complete system frequency response.
TABLE 4.1 Analysis Circuit Outputs for Analysis at Frequency ω
ᵃ These results are exact only for a filter with an infinitesimal bandwidth. The results are approximate for practical filters with finite bandwidths. ᵇ This is obtained by sending the input through an analog integrator prior to input into the analysis circuit.
In this type of CRDM, control rods are fastened to cables that are connected to motor-driven drums located above the core. These CRDMs are used in high-temperature gas-cooled reactors supplied by Gulf General Atomic. Their design has two cables (and associated rods) connected to each drive. The rod speed is about 1 in./sec. This gives a reactivity change of approximately 1 cent/sec. The rod position measurement is obtained from a potentiometer connected to the drum.
The responses of a system to any of a number of input disturbances are useful for studying the dynamic behavior of the system. Typical choices might be a pulse, a step, or a ramp. However, the frequency response, which involves the response of a selected system output due to a sinusoidal input, is particularly useful. As will be shown below, the output is a sinusoid with the same frequency as the input, but shifted by some phase angle ψ. The ratio of the amplitude of the output to the amplitude of the input and the phase
angle completely specify the frequency response. A typical pair of waveforms appears in Fig. 2.4.
(a) Relation between Frequency Response and Transfer Function
We may now determine the relation between the frequency response and the system transfer function. (Many authors use the terms interchangeably. Here the term transfer function refers to a mathematical quantity, a ratio of Laplace transforms. The term frequency response refers to a physically observable quantity.) For an input δI(t) = b sin ωt, the Laplace transform (see Table 2.1) is bω/(s² + ω²). The Laplace transform of the output is obtained using Eq. (2.3.1).
δO(s) = G(s) δI(s) = G(s)bω/(s² + ω²)    (2.4.1)
We can use the method of residues to determine the output:

δO(t) = b[G(jω)e^(jωt) − G(−jω)e^(−jωt)]/2j + Σᵢ (residue terms in e^(sᵢt))    (2.4.2)

where sᵢ is a pole of G(s) and j = √−1. (This development assumes simple poles, but the result is the same for systems with multiple poles.) For a stable system, all the sᵢ have negative real parts. Therefore, after a sufficient time, all the terms containing e^(sᵢt) will have vanished, and only the first two terms in Eq. (2.4.2) will remain:

δO(t) = b[G(jω)e^(jωt) − G(−jω)e^(−jωt)]/2j    (2.4.3)
The terms G(jω) and G(−jω) are complex quantities. A complex quantity (α + jβ) may always be written as a magnitude and a phase, |G(jω)|e^(jψ). This is demonstrated in Fig. 2.5. This shows that G(−jω) = |G(jω)|e^(−jψ).
Fig. 2.5. Complex plane representation of G(jω).
Thus Eq. (2.4.3) may be written
δO(t) = b|G(jω)| [e^(j(ωt + ψ)) − e^(−j(ωt + ψ))]/2j    (2.4.4)
We use Euler’s formulas (e^(jx) = cos x + j sin x, e^(−jx) = cos x − j sin x) to give
δO(t) = b|G(jω)| sin(ωt + ψ)    (2.4.5)
This verifies the earlier assertion that the output for a sinusoidal input is a sine wave with the same frequency as the input, but shifted by a phase angle ψ. The amplitude of the output is |G(jω)| times the amplitude of the input.
This development has shown that the theoretical frequency response is obtained simply by substituting jco for s in the transfer function and carrying out the complex arithmetic. One of the main reasons that frequency response analysis and testing is commonly used is that such a simple link between the theoretical transfer function and the experimental frequency response exists.
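This link is easy to verify numerically. Using the first-order system of Example 2.3.1 as a convenient stand-in (the drive frequency and amplitude below are arbitrary assumptions), the sketch drives the system with a sinusoid and checks that, once transients decay, the output equals b|G(jω)| sin(ωt + ψ) with G(jω) found by substituting s = jω into 4/(s + 3).

```python
import cmath
import numpy as np

# Predicted steady-state response from G(jw) with G(s) = 4/(s + 3).
b, w = 1.0, 2.0
G = 4.0 / (1j * w + 3.0)
gain, psi = abs(G), cmath.phase(G)

# Forward-Euler simulation of dO/dt = -3*O + 4*I with I = b*sin(w*t).
dt = 1e-4
t = np.arange(0.0, 20.0, dt)
O = np.zeros_like(t)
for k in range(1, len(t)):
    O[k] = O[k - 1] + dt * (-3.0 * O[k - 1] + 4.0 * b * np.sin(w * t[k - 1]))

predicted = b * gain * np.sin(w * t + psi)
err = float(np.max(np.abs(O[len(t) // 2:] - predicted[len(t) // 2:])))
print(gain, psi, err)   # err is small once transients have died out
```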
(b) Frequency Response Plots
The result of a frequency response calculation is a complex number, which may be represented as a magnitude and a phase:
G(jω) = α(ω) + jβ(ω) = |G(jω)|e^(jψ(ω))    (2.4.6)
where
|G(jω)| = {[α(ω)]² + [β(ω)]²}^(1/2)    (2.4.7)
and
ψ(ω) = tan⁻¹[β(ω)/α(ω)]    (2.4.8)
The most common way to plot the frequency response is to show separate plots of amplitude and phase as a function of frequency. This is called a Bode plot. A log-log plot is used for the amplitude curve and a semilog plot is used for the phase curve. These are demonstrated in Fig. 2.6 for the transfer function 1/(s + 1). It is also common to define the magnitude in terms of the decibel (dB), given by
|G(jω)| (in decibels) = 20 log₁₀|G(jω)| (in absolute units)    (2.4.9)
A few values are shown in tabular form to illustrate the relation between absolute gain and gain in decibels.
Absolute gain    Gain in dB
0.1              20 log 0.1 = −20
1                20 log 1 = 0
2                20 log 2 = 6.02
10               20 log 10 = 20
Another common plotting procedure presents the frequency response on a single polar plot. In the complex plane, the frequency response at some frequency is given by a vector as shown in Fig. 2.7. At some other frequency, the vector will have a different orientation and a different length. A curve traced out by the tip of the vector as the frequency changes is a complete description of the frequency response. A polar plot for the transfer function 1/(s + 1) is shown in Fig. 2.8.
A third method of graphical presentation of frequency response data is by a plot of gain versus phase. Such a plot is called a Nichols plot. The gain may be expressed in absolute units or in decibels. A Nichols plot for the transfer function 1/(s + 1) is shown in Fig. 2.9.
Fig. 2.6. (a) Amplitude for transfer function 1/(s + 1); (b) phase shift for transfer function 1/(s + 1).
Fig. 2.7. Complex plane plot of G(jω) at a single frequency ω.
Fig. 2.8. Polar plot for transfer function 1/(s + 1).
Of course the Bode plot, the polar plot, and the Nichols plot are only different methods for presenting the same data. The purpose for which the data are used determines the most appropriate plotting method.
The Bode plot is convenient for furnishing certain information about system dynamics. The Bode plot for the system transfer function is inspected for “break frequencies” and resonance peaks.
Let us first consider the break frequency. Take the transfer function G(s) = 1/(s + a). The real and imaginary parts of the frequency response are given by

Re[G(jω)] = a/(a² + ω²),    Im[G(jω)] = −ω/(a² + ω²)
The amplitude and phase are then

|G(jω)| = [1/(a² + ω²)]^(1/2),    ψ(ω) = tan⁻¹(−ω/a)
It is informative to examine the asymptotic values of |G(jω)| and ψ(ω) for very small frequencies and for very large frequencies. For very small frequencies,

|G(jω)| ≅ (1/a²)^(1/2) = 1/a,    ψ ≅ tan⁻¹(0) = 0
For very large frequencies,
|G(jω)| ≅ (1/ω²)^(1/2) = 1/ω,    ψ ≅ tan⁻¹(−∞) = −90°
This shows that the amplitude has a constant value of 1/a at low frequencies and varies as 1/ω at high frequencies. The phase has a constant value of 0 at low frequency and −90° at high frequency. These low-frequency and high-frequency approximations are shown in Fig. 2.10 along with the exact curves. Note that the two amplitude curves intersect at ω = a. This frequency is called the break frequency.
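A quick numerical check of these asymptotes (the value of a is an arbitrary assumption):

```python
import math

# Asymptotes of G(s) = 1/(s + a): gain -> 1/a for w << a, 1/w for w >> a,
# with the two asymptotes crossing at the break frequency w = a.
a = 5.0

def gain(w):
    return 1.0 / math.sqrt(a * a + w * w)

print(gain(0.01 * a), 1.0 / a)              # low-frequency asymptote
print(gain(100.0 * a), 1.0 / (100.0 * a))   # high-frequency asymptote
print(gain(a), (1.0 / a) / math.sqrt(2.0))  # at the break: 3 dB below 1/a
```

At the break frequency itself the exact gain is 1/√2 of the low-frequency asymptote, i.e., 3 dB down, which is why the exact curve dips below the intersecting asymptotes in Fig. 2.10.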
A similar analysis may be made for transfer functions of the form G(s) = s + a. The Bode plot for this transfer function appears in Fig. 2.11. The break frequency is again at ω = a, but in this case the amplitude rises after the break. Also, the phase goes to +90° at high frequencies.
In general, the amplitude will break downward at frequencies corresponding to real poles and upward at frequencies corresponding to real zeros.
Lumped systems in which all poles and zeros have negative real parts are called minimum-phase systems. An exact relation between the gain curve and the associated phase curve for minimum-phase systems is provided by Bode’s first theorem (2). This theorem gives the phase at all frequencies if the gain at all frequencies is known. This is useful for obtaining detailed
phase information from gain results or vice versa, but our main interest is in methods for making rough approximations.
It is sometimes useful to determine the asymptotic gain and the associated asymptotic phase for a system. The asymptotic gain and phase are the gain and phase that would occur if the poles and zeros were widely separated. For a minimum-phase system with real poles and zeros, the results are very simple:
1. The asymptotic slope of the gain for a transfer function with negative real poles and zeros is given by
Sω = Nω − Dω (2.4.10)
where Sω is the logarithmic slope of the gain curve at frequency ω (in decades of change in gain per decade increase in ω), Nω the number of zeros with numerical values less than ω, and Dω the number of poles with numerical values less than ω.
2. The asymptotic phase shift is given by
φω = Sω × 90° (2.4.11)
These properties may be used to construct approximate Bode plots for specified transfer functions or to construct approximate transfer functions for specified Bode plots. This is best illustrated by an example.
Fig. 2.12. Components of G(s) = (s + 1)/[(s + 0.1)(s + 10)].
Example 2.4.1. Consider the transfer function

G(s) = (s + 1)/[(s + 0.1)(s + 10)]
Each term contributes to the amplitude as shown in Fig. 2.12. The resulting amplitude is shown in Fig. 2.13a. Because of the relation between phase angles and slopes on amplitude plots, the approximate phase is as shown in Fig. 2.13b. Of course the phase curve will not display sharp changes, but will gradually change as shown in Fig. 2.13b.
Fig. 2.13. (a) Asymptotic amplitude approximation for (s + 1)/[(s + 0.1)(s + 10)] and exact results; (b) asymptotic phase approximation for (s + 1)/[(s + 0.1)(s + 10)] and exact results.
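The counting rules of Eqs. (2.4.10) and (2.4.11) are simple enough to state in a few lines of code. This sketch (our own illustration; the function name is hypothetical) applies them to the transfer function of Example 2.4.1:

```python
def asymptotic_slope_and_phase(omega, zeros, poles):
    """Eqs. (2.4.10)-(2.4.11): S_w = N_w - D_w and phi_w = S_w * 90 deg,
    for a minimum-phase system with negative real poles and zeros,
    given here by their positive break values."""
    n = sum(1 for z in zeros if z < omega)  # zeros with values less than omega
    d = sum(1 for p in poles if p < omega)  # poles with values less than omega
    s = n - d
    return s, s * 90.0

# G(s) = (s + 1)/[(s + 0.1)(s + 10)] from Example 2.4.1:
zeros, poles = [1.0], [0.1, 10.0]
for w in (0.01, 0.3, 3.0, 100.0):
    print(w, asymptotic_slope_and_phase(w, zeros, poles))
```

The output reproduces the staircase of Fig. 2.13: slope 0 and phase 0° below 0.1, slope −1 and phase −90° between 0.1 and 1, slope 0 again between 1 and 10, and slope −1 above 10.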
The situation is somewhat different for the common case of complex poles or zeros. Figure 2.14 shows the gain and phase for the following transfer function:
G(s) = 1/(s² + 2ζs + 1)

This transfer function has complex poles for ζ < 1. We observe that Eqs. (2.4.10) and (2.4.11) are not valid in the region of the peak in the gain. In general, systems with complex poles can have resonance peaks, and Eqs. (2.4.10) and (2.4.11) are not valid near the peaks. However, these relations are valid at frequencies well away from the peak, as demonstrated in Fig. 2.14.
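The size of the resonance peak and the return to asymptotic behavior away from it can both be seen numerically. This sketch (our own illustration) evaluates the exact gain of 1/(s² + 2ζs + 1) for a lightly damped case:

```python
import math

def gain(omega, zeta):
    """Exact |G(j*omega)| for G(s) = 1/(s^2 + 2*zeta*s + 1)."""
    return 1.0 / math.hypot(1.0 - omega * omega, 2.0 * zeta * omega)

zeta = 0.1
# Near omega = 1 the gain peaks at about 1/(2*zeta), far above the
# asymptotic prediction, so Eqs. (2.4.10)-(2.4.11) fail there:
print(gain(1.0, zeta))     # 5.0 for zeta = 0.1
# Well above the peak the gain falls off as 1/omega^2, i.e. with the
# slope of -2 that the asymptotic rules give for a double pole:
print(gain(100.0, zeta))   # approximately 1.0e-4
```

For smaller ζ the peak grows without bound, which is why the asymptotic counting rules must be supplemented near complex poles.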
(c) Stability Analysis
The Nyquist stability criterion uses a frequency response obtained from the open-loop transfer function. The criterion for negative feedback systems whose closed-loop transfer function is given by Eq. (2.3.4) is:
The closed loop system whose open loop transfer function is G₁H(s) is stable if and only if

R = P

where R is the number of clockwise encirclements of the (−1, j0) point by the locus G₁H(jω) as ω varies from −∞ to +∞, and P the number of poles of G₁H with positive real parts.
For most nuclear reactor applications, the transfer functions G₁ and H have no poles with positive real parts. For this case, the Nyquist criterion may be stated:
The closed loop system whose open loop transfer function has no poles with positive real parts is stable if and only if
R = 0
Typical Nyquist plots for a stable system and an unstable system appear in Fig. 2.15. This figure also illustrates the procedure for connecting the 0⁻ point (the point approached as ω approaches zero from the negative side) and the 0⁺ point (the point approached as ω approaches zero from the positive side). The general procedure is to close the locus with an infinite-radius clockwise trajectory from the 0⁻ point to the 0⁺ point if the 0⁻ and 0⁺ points do not coincide. Clearly, the proximity of the locus to (−1, j0) obtained for a stable system is a measure of the stability margin. Two measures are used to assess the stability margin:
Phase margin—the angle between the negative real axis and the line passing from the origin through the point where the Nyquist locus has a magnitude of unity.
Gain margin—the factor by which G₁H would have to be multiplied to cause the intercept of the negative real axis to pass through (−1, j0).
Fig. 2.15. Nyquist plots for a stable system and an unstable system. Arrows indicate increasing frequency.
These concepts are demonstrated in Fig. 2.16. Clearly, these measures are meaningless for loci that do not cross the negative real axis or that have very complicated shapes near the origin.
The phase margin is the more common of the two stability measures for Nyquist plots. A rule of thumb is that a phase margin of at least 20° is desirable. The phase margin may be obtained by calculation or by experiment.
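Both margins follow directly from the gain and phase of the open-loop frequency response. This sketch (our own illustration, using a hypothetical third-order open loop K/[s(s + 1)(s + 2)], not a transfer function from the text) locates the two crossover frequencies and reads off the margins:

```python
import math

def open_loop(w, K=1.0):
    """Gain and phase (deg) of a hypothetical open loop G1*H = K/[s(s+1)(s+2)]."""
    mag = K / (w * math.hypot(w, 1.0) * math.hypot(w, 2.0))
    phase = -90.0 - math.degrees(math.atan(w)) - math.degrees(math.atan(w / 2.0))
    return mag, phase

def crossover(f, target, lo=1e-3, hi=1e3):
    """Bisect (in log frequency) for where the decreasing function f equals target."""
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return mid

# Phase margin: 180 deg plus the phase at the gain crossover, where |G1*H| = 1.
w_gc = crossover(lambda w: open_loop(w)[0], 1.0)
phase_margin = 180.0 + open_loop(w_gc)[1]

# Gain margin: 1/|G1*H| at the phase crossover, where the phase is -180 deg.
w_pc = crossover(lambda w: open_loop(w)[1], -180.0)
gain_margin = 1.0 / open_loop(w_pc)[0]

print(round(phase_margin, 1), round(gain_margin, 2))
```

For this example the phase crossover falls at ω = √2, where the gain is exactly 1/6, so the gain margin is 6; multiplying K by 6 would push the locus through (−1, j0), the stability boundary of the Nyquist criterion.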
Fig. 2.16. Portion of a Nyquist plot near the origin, where Ψ is the phase margin and 1/a the gain margin.