Category Archives: NUCLEAR POWER PLANTS

Parameter tuning

In controller design, the main difficulty is how to quickly minimize the upper bound of the objective function so that the control actions force the process to track a specified trajectory as closely as possible.

There is no rigorous method for selecting the optimal control horizon (Nu) and prediction horizon (Ny).

The model horizon T is selected so that TΔt exceeds the open-loop settling time.

The ranges of the weighting factors W1 and W2 can be very wide; what matters is their relative magnitudes. The following procedure for tuning the weighting factors is proposed:

— Select a value for W1 and assign it to all local controllers. Determine W2 independently for each local controller so as to minimize the objective function for that subsystem.

— Identify the largest W2 and assign it to all subsystems.

— Examine the system's closed-loop dynamic performance. Reduce the value of W2 gradually until the desired dynamic performance is obtained.
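The three-step procedure above can be sketched in code. The cost model below is a hypothetical stand-in (an assumed tracking/effort trade-off per subsystem), not the chapter's actual objective function; it only illustrates the search structure.

```python
# Sketch of the proposed two-stage weighting-factor tuning procedure.
# objective() is an assumed local cost, not taken from the chapter.

def objective(w1, w2, subsystem):
    """Assumed local cost: a larger W2 damps control effort but degrades
    tracking; (err, effort) summarize the subsystem's response."""
    err, effort = subsystem
    return w1 * (err / (1.0 + w2)) ** 2 + w2 * effort ** 2

def tune_weights(subsystems, w1=1.0, w2_grid=(0.1, 0.5, 1.0, 5.0, 10.0)):
    # Step 1: fix a common W1; find the best W2 independently per
    # local controller by minimizing its own objective.
    best_w2 = [min(w2_grid, key=lambda w2: objective(w1, w2, sub))
               for sub in subsystems]
    # Step 2: assign the largest W2 to all subsystems.
    w2_common = max(best_w2)
    # Step 3: reduce W2 gradually; in practice the loop stops once the
    # simulated closed-loop response becomes acceptable.
    schedule = [w2_common * 0.8 ** k for k in range(5)]
    return w2_common, schedule

# Two hypothetical subsystems given as (tracking error, control effort):
w2_common, schedule = tune_weights([(2.0, 0.5), (0.5, 2.0)])
```

In this toy example the first subsystem prefers a large W2 and the second a small one, so the conservative common starting value is the larger of the two, which step 3 then relaxes.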

Wear work-rates

In fretting wear, work-rate is defined as the rate of energy dissipation when a tube is in contact with its support. Energy is dissipated through friction as the tube moves around in contact with its supports. A force (the contact force between tube and support) multiplied by a displacement (as the tube slides) gives the work, or dissipated energy, required to move the tube (Taylor et al., 1998; Au-Yang, 1998). The normal work-rate Wn for different tube and tube-support-plate material combinations and different geometries is defined as (Au-Yang, 1998):

Wn = (1/T) ∫ Fn dS    (35)

where T is the total time, Fn is the normal contact force, and S is the sliding distance.
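Equation (35) can be evaluated directly from measured time series. The sketch below is a minimal discrete approximation; the force and sliding-increment records are hypothetical illustrations, not data from the chapter.

```python
def normal_work_rate(forces_N, sliding_increments_m, total_time_s):
    """Discrete approximation of Eq. (35): Wn = (1/T) * integral of Fn dS.
    forces_N[i] is the normal contact force acting over the i-th sliding
    increment; each product Fn * dS is the work dissipated by friction."""
    if total_time_s <= 0:
        raise ValueError("total time must be positive")
    work_J = sum(f * ds for f, ds in zip(forces_N, sliding_increments_m))
    return work_J / total_time_s  # work-rate in watts (J/s)

# Hypothetical record: 1 N contact force over 100 sliding steps of 1 mm
# each, within a 10 s observation window.
rate_W = normal_work_rate([1.0] * 100, [1e-3] * 100, 10.0)  # 0.01 W
```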

Au-Yang (1998) has assessed the cumulative tube wall wear after 5, 10, and 15 effective full power years of operation of a typical commercial nuclear steam generator, using different wear models.

The EPRI data reproduced from Hofmann & Schettler (1989) in Figure 14 show the wear volume against normal work-rate for the combination of an Inconel 600 tube and a carbon steel tube support plate, a condition that applies to many commercial nuclear steam generators. Figure 15 shows the tube wall thickness loss against volumetric wear for different support conditions (Hofmann & Schettler, 1989).


Fig. 14. Volumetric wear rate versus normal work-rate for different material combinations (Hofmann & Schettler, 1989).

 


Fig. 15. Tube wall thickness loss versus volumetric wear for different support conditions (Hofmann & Schettler, 1989).

 

Payen et al. (1995) carried out a predictive analysis of the vibration of loosely supported tubes induced by cross-flow turbulence, using non-linear computations of the tube dynamics. They analyzed the gap effect and concluded that the wear work-rate decreases as the gap value increases at low velocities. Peterka (1995) carried out a numerical simulation of tube impact motion with generally assumed oblique impacts. Charpentier and Payen (2000) predicted the wear work-rate and thickness loss in tube bundles under cross-flow by a probabilistic approach. They used Archard's law and a wear correlation depending on the contact geometry, and concluded that the most sensitive parameters affecting the wear work-rate are the coefficient of friction, the radial gap and the spectral level of the turbulent forces.

Paidoussis & Li (1992) and Chen et al. (1995) have studied the chaotic dynamics of heat exchanger tubes impacting on generally loose baffle plates, using an analytical model that involves delay differential equations. They developed a Lyapunov exponent technique for delay differential equations and showed that chaotic motions do occur. They performed the analysis by finding periodic solutions and determining their stability and bifurcations with the Poincaré map technique. A Hopf bifurcation is defined as the loss of stable equilibrium and the onset of amplified oscillation (Paidoussis & Li, 1992).


Fig. 16. The bifurcation diagram (Paidoussis & Li, 1992, Chen et al., 1995)

A typical bifurcation diagram for the symmetric cubic model with P/D = 1.5 is given in Figure 16, showing the dimensionless mid-point displacement amplitude in terms of the dimensionless fluid velocity, where UH denotes the critical U for the Hopf bifurcation, UD the first post-Hopf bifurcation, and UCH the onset of chaos. Total wear work-rates against pitch velocity and mass flux have been given by Taylor et al. (1995) and Khushnood et al. (2003).

Researchers

Salient fretting-wear features

(Rubiolo & Young, 2009)

• The evaluation of turbulence excitation is very challenging.

• Identification of key wear factors that can be correlated to assembly operating conditions.

• Functional dependence of wear damage against identified factors.

• Grid cell clearance size and turbulence forces as key risk factors for PWR fuel assemblies.

• Grid misalignment and cell tilts are less important.

• Minimization of wear risk through modification in core loading.

(Jong & Jung, 2008)

• Fretting wear in helical coil tube steam generators.

• Thermal-hydraulic prediction through FEM.

• Emphasis on the effects of number of supports, coil diameter and helix pitch on free vibration modes.

• Design guidelines for designers and regulatory reviewers.

(Attia, 2006a)

• Investigation of fretting wear of Zr-2.5% Nb alloy.

• Experimental setup includes special design fretting wear tribometers.

• Fretting wear is initially dominated by adhesion and abrasion and then delamination and surface fatigue.

• Volumetric wear loss decreased with number of cycles.

(Attia, 2006b)

• Fitness for service and life management against fretting fatigue.

• Examples of fretting problems encountered in nuclear power plants.

• Methodology to determine root cause.

• Non-linearity of the problem and risk management.

• Critical role of long-term experimental validation under realistic conditions, and of qualifying non-destructive testing for in-situ measurement of fretting damage.

(Rubiolo, 2006)

• Probabilistic method for fretting wear prediction in fuel rods.

• Non-linear vibration model VITRAN (Vibration Transient Analysis).

• Numerical calculations of grid work and wear rates.

• Monte Carlo method applied to transient simulations (due to the large variability of fuel assembly parameters).

• Design preference of fuel rods.

(Kim et al., 2006)

• A way toward efficiently restraining wear.

• Increase in contact area through two different contours of the spacer grid springs.

• Consideration of contact forces, slip displacement and wear scars on rods to explore mechanical damage phenomenon.

• Concludes that the contact shape affects the length, width and volumetric features of the wear.

• A new parameter "equivalent depth" is introduced to represent wear severity.

Table 7. Salient features of some recent research on fretting-wear damage in tube bundles.

A generalized procedure for analyzing the fretting-wear process and its self-induced changes in the properties of the system, and a flow chart for fretting fatigue damage prediction based on fracture mechanics principles, are presented in Figures 17 and 18, respectively.


Fig. 17. System approach to the fretting wear process and its self-induced changes in the system properties (Attia, 2006a).


Fig. 18. Flow chart for the prediction of fretting fatigue damage, using fracture mechanics principles (Attia, 2006a).

Project related environmental impact assessment (EIA)

EIA is tied to the selected technology and location. The environmental assessment is specific, concrete, and deep. Its endpoint is to determine clearly the environmental changes in terms of their scope, intensity and tolerability. Risks are assessed quantitatively, and very specific indicators of environmental quality may be applied.

Integration of strategic planning and environmental evaluation

Figure 3 provides a synthesis of the desired integration between strategic planning and tiered environmental evaluations. A brief overview of present issues and their possible resolution at different planning stages is also given. One should not overlook the importance of a loop from the fifth planning step (plan implementation; licensing) back to step 2a, informing all planning steps about successes and issues in the plan implementation. This loop actually acts as a special form of historical monitoring of the plan implementation.

Comparative evaluation approach and its indicators

Multi-objective analysis (MOA) is aimed at facilitating comprehensive and consistent consideration, comparison and trade-offs of economic (financial), supply security, social, health and environmental attributes of selected alternative energy options or systems (could also be technologies for electricity production). These technologies are usually classified as thermal and non-thermal, or renewable and non-renewable, and include nuclear, coal, natural gas, biomass, hydro, PV-Photo Voltaic, and wind systems. MOA is expected to assist in the systematic evaluation of options according to multiple objectives/criteria which are different and which may not be measured on an interval (or even ordinal) scale. It should be understood that MOA is not primarily a method that can be used to derive impacts, but rather a method that places different types of impact on a comparable basis and facilitates comparisons between impacts originally estimated and expressed in different units (IAEA, 2000).

The main objectives of MOA are:

• to provide quantitative information where it is difficult to quantify the impacts directly;

• to display risk-benefit trade-offs that exist between different impact indicators;

• to facilitate comparisons and trade-offs;

• to facilitate understanding of the ‘values’ that need to be placed on different attributes.

The impact of each option under consideration should be represented using the units of measurement appropriate for each indicator or attribute. For example, impact indicators could be:

• The proportion of the area utilized (e.g. as a measure of the land-use impacts associated with each option, referring to shares of existing and planned land use);

• Health determinants affected/changed due to implementation of the alternative energy option.

Table 2 indicates a set of aggregated indicators; these need to be developed further into measurable (possibly quantifiable) sub-indicators, so as to enable clear, verifiable, reproducible, and transparent evaluation. The Eurelectric RESAP (Renewables Action Plan; Eurelectric, 2011) shows how this can be done in a comprehensive and transparent manner: the WG Environmental Management and Economics of the Eurelectric RESAP was tasked with evaluating, on the basis of existing literature (296 selected worldwide studies), the sustainability of renewable energy sources (RES) and other technologies over their whole life cycle (IPCC, 2011). The quantitative indicators applied in the comparative evaluation included, e.g., carbon footprint, health impacts, water use, land use, biodiversity, raw materials, and energy payback. Whatever the approach to selecting the indicators, caution should be exercised to ensure that the sub-indicators are chosen on the basis of:

• Relevance: indicators should reflect the overall objectives of the study;

• Directionality: indicators must be defined in a manner that ensures that their magnitude can be assessed and interpreted. This can be accomplished by specifying indicator measurement in terms of maximizing or minimizing, increasing or maintaining, etc.;

• Measurability: it should be possible to measure quantitatively or estimate directional impacts of each alternative on each indicator, in the unit of measurement that is appropriate for the indicator. Directionality and measurability together determine interpretability, i. e. they permit an interpretation of impacts as being good/bad or better/worse on each indicator;

• Manageability: in order to make assessments comprehensible and to facilitate effective comparison, the number of sub-indicators should not be too large.

Once the impact analyses have been consolidated, all the data should be expressed in a common metric, or ‘standardized’, so that the indicators can be compared and assessed. For example, impact indicators can be presented on an interval scale (e. g. from 0 to 1). The scale would indicate the relative effect of each fuel chain option being considered, on the basis of the relative magnitude of the impact indicator.

The process can be standardized as follows (adapted from Canter & Hill, 1979 and combined with IAEA, 2000):

Main (aggregated) indicators:

• Cost/Value

• Supply Reliability

• Economic/Technological Advancement

• Risk/Uncertainty Management

• Environmental and Health Impacts

• Welfare of local and regional communities

Goals/objectives as a basis for specification of sub-indicators and development of the evaluation criteria:

• Development of competitive (least-cost) electricity production; the energy payback ratio.

• Development of an electricity system expansion plan that minimises greenhouse gas emissions.

• Enhancement of the welfare of local communities; growth of social capital across the region.

• Protection and improvement of the health of all residents and workers (good access to health care, reduced health inequalities, affordability of safe and quality nutrition, availability of recreation zones/infrastructure, nursing/work/social inclusion for elderly people, clean and healthy environment, safe urban areas, etc.).

• Changes/improvements in regional and local employment; improvement of economic benefit to the community (reducing disparities in income; access to jobs, housing, and services between areas within the region and between segments of the population; access to better and effective education; energy efficiency; etc.).

• Maintenance of high and stable levels of economic growth (good accessibility to business within the region, stronger linkages between firms and the development of specialisms within the area, local strengths and economic value locally, emergence of new and high-technology sectors and innovations, etc.).

• Effective protection of the environment (maintenance and enhancement of the quality and distinctiveness of the landscape; making towns more attractive places to live in; maintenance and improvement of the quality of air, ground and river water; reduced contribution to climate change (greenhouse gases); moving up through the waste management hierarchy; prudent use of resources, i.e. reducing consumption of undeveloped land, natural resources and greenfield sites, reducing the need to travel, applying reasonable long-term land-use planning considering open space, improving resource efficiency; etc.).

Note on sustainable development: Sustainable development does not mean having less economic growth. On the contrary, a healthy economy is better able to generate the resources for environmental improvement and protection, as well as social welfare. It also does not mean that every aspect of the present environment should be preserved at all cost (extremism, fundamentalism). What it requires is that decisions throughout society are taken with proper regard to their environmental impact and implications for wide social interests. Sustainable development does mean taking responsibility for policies and actions. Decisions by the government or the public must be based on the best possible scientific information and analysis of risk, and a responsible attitude towards community welfare. When there is uncertainty and the consequences of a decision are potentially serious, precautionary decisions are desirable (see Hansson, 2011 for further discussion on applying the precautionary principle). Particular care must be taken where effects may be irreversible. Cost implications should be communicated clearly to the people responsible.

Table 2. A list of main indicators to be applied in comparative multi-objective assessment

a. For each indicator, the analyst should identify the best value (e. g. highest contribution to employment) and the worst value (least contribution to employment) from the alternatives under consideration.

b. Then, the impact scale should be arranged on a horizontal axis from the best value (at the origin on the scale) to the worst value (at the extreme of the scale). The scale will depend on the units of measurement used in the impact assessment for each indicator.

c. Then, the standardized values of the impact indicators should be represented on the vertical axis, the same for all indicators and ranging from 0 to 1.

d. Finally, an indicator value of 1 should be assigned to the best option and 0 to the worst. The other options are then located, according to their impact values, on the line joining the best and the worst.
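Steps a through d above amount to a linear min-max rescaling of each indicator onto a 0-1 interval scale. A minimal sketch, with hypothetical indicator values for illustration:

```python
def standardize(values, higher_is_better=True):
    """Map raw impact-indicator values onto a 0-1 interval scale:
    1 for the best option, 0 for the worst, linear in between."""
    best = max(values) if higher_is_better else min(values)
    worst = min(values) if higher_is_better else max(values)
    if best == worst:
        return [1.0] * len(values)  # all options tie on this indicator
    return [(v - worst) / (best - worst) for v in values]

# Hypothetical contribution to employment (jobs) of three options:
jobs = standardize([1200, 300, 750])                   # [1.0, 0.0, 0.5]
# Hypothetical land use (ha), where a smaller value is better:
land = standardize([40, 10, 25], higher_is_better=False)
```

The `higher_is_better` flag handles directionality: for indicators to be minimized, the smallest raw value is the "best" end of the scale.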

Once the impact data are standardized, the following three methods could be used for the aggregation of results (IAEA, 2000; Kontic et al., 2006):

• Weighting; weight should be assigned to each indicator on the basis of its relative importance, for instance in a comparison of human health, global environmental impacts and land occupation (land-use impacts). Sensitivity analysis of the weighting should be performed in terms of investigating the difference in final comparative assessment results due to assignment of different weight values to a particular indicator (at least three justified variations should be considered); the final amalgamation method can be weight summation.

• Aggregation rules; based on standardization of the indicators' values and a tree structure of the whole set of indicators, where the root of the tree represents the ultimate aggregated value; pairs or sets of multiple indicators should be aggregated and evaluated by means of the "if-then" approach. In this way the aggregation rules are developed as an alternative to weighting. A final score is derived by comparing the aggregated values at the tree root for the treated alternatives. This approach is described in detail in Bohanec (2003), while an example of a decision tree specifying evaluation indicators is presented in Figure 4.

• Trade-offs; the final product of the analysis should be presented as a description of trade-offs in either tabular or graphical form. Goal programming can employ the amalgamation method which ranks the alternatives on the basis of the deviation from a goal or target that analysts (decision makers) would like to see achieved: the less the deviation, the closer to the goal, and thus the higher the alternative is ranked.
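The weighting method, including the recommended sensitivity check, can be sketched as follows. The option names, indicators and scores are hypothetical illustrations, not results from any study cited above.

```python
# Minimal weight-summation amalgamation with a weight-sensitivity check.

def weighted_score(scores, weights):
    """Weight summation over standardized (0-1) indicator scores."""
    return sum(w * s for w, s in zip(weights, scores))

def rank_options(options, weights):
    """Return option names ranked by aggregated score, best first."""
    return sorted(options,
                  key=lambda name: -weighted_score(options[name], weights))

# Hypothetical standardized scores on (cost, health, land use):
options = {
    "nuclear": [0.7, 0.8, 1.0],
    "gas":     [1.0, 0.4, 0.6],
    "wind":    [0.5, 1.0, 0.2],
}
base = rank_options(options, [1 / 3, 1 / 3, 1 / 3])  # equal importance
# Sensitivity analysis: a justified variation emphasizing cost can
# change the ranking, which is why several variations are required.
alt = rank_options(options, [0.8, 0.1, 0.1])
```

With equal weights the first option ranks best, while the cost-dominated weighting reverses the top two, illustrating why the final assessment should report results under at least three justified weight variations.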

The analysts’ view on the three methods and results achieved should be a part of the conclusions.

The liquid scintillation counting

Beta-emitting radionuclides are normally measured by a gas ionization detector or by liquid scintillation counting (LSC). In LSC the scintillation takes place in a solution; the cocktails contain two basic components, the solvent and the scintillator(s). This allows close contact between the isotope atoms and the scintillator, which is an advantage in measuring low-energy electron emitters due to the absence of attenuation. The solvent must act as an efficient collector of energy, and it must conduct that energy to the scintillator molecules instead of dissipating it by some other mechanism (National Diagnostics, 2004). Liquid scintillation cocktails absorb the energy emitted by radioisotopes and re-emit it as flashes of light. A β particle passing through a scintillation cocktail leaves a trail of energized solvent molecules. These excited solvent molecules transfer their energy to scintillator molecules, which give off light. With LSC the short path length of soft β emissions is not an obstacle to detection. LSC can thus be used for the measurement of both high- and low-energy emitters.

A pulse height spectrum is a representation of the kinetic energy associated with the decay of a particular isotope. When an isotope decays it liberates an electron (beta particle) and a neutrino, which share the decay energy between them. As a result, the emitted beta particles have a continuous distribution of energies from 0 to the maximum decay energy (Emax). The amount of light given off is proportional to the energy of the beta particle. Beta decay therefore shows a continuous energy distribution, and beta particle spectrometry becomes an analytical technique in which it is difficult to identify individual contributions in the beta spectrum. The determination of various beta emitters such as 3H, 14C, 63Ni, 55Fe and 90Sr requires chemical separation of the individual radionuclides from the matrix and from the other radionuclides before counting.

The isotope 63Ni is an artificial radionuclide. It is a pure β emitter with a half-life of 100 years. The maximum energy of the emitted β-radiation is 67 keV. No γ radiation is observed. Except for 59Ni, with a half-life of 7.6 × 10^4 years, all other nickel radionuclides have very short half-lives, ranging between 18 seconds and 54.6 hours; therefore they do not disturb a measurement of 63Ni. Moreover, LSC has a high counting efficiency for 63Ni, about 70%, i.e. the ratio cpm/dpm (counts per minute to disintegrations per minute, expressed as a percentage), in other words the percentage of emission events that produce a detectable pulse of photons. This makes the technique widely used for the determination of 63Ni.
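The counting-efficiency relation cpm/dpm described above is the basis for recovering the true decay rate from a measured count rate. A minimal sketch (the measured count rate is a hypothetical value):

```python
def dpm_from_cpm(cpm, efficiency):
    """Convert measured counts per minute to disintegrations per minute.
    efficiency = cpm/dpm, the fraction of decays producing a detectable
    photon pulse; about 0.70 is typical for 63Ni in LSC (per the text)."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    return cpm / efficiency

# Hypothetical measurement of a 63Ni sample:
dpm = dpm_from_cpm(1400.0, 0.70)   # 2000 dpm
activity_bq = dpm / 60.0           # 1 Bq = 1 disintegration per second
```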

LOCA in Zone 1

In the case of a break in Zone 1, a huge amount of the coolant used for reactor core cooling is discharged through the break. Consequently, the cooling of the reactor core is severely degraded or terminated altogether in the affected group of fuel channels. The GDH check valves prevent coolant from leaking out of the core in the opposite direction. Depending on the location of the break, cooling can be lost in the FCs connected to one group distribution header (in the case of a GDH break) or in all channels of one RCS loop (in the case of an MCP pressure or suction header break). The emergency protection (reactor shutdown) is activated within the first seconds due to the pressure increase in the reinforced leaktight compartments. Cooling of the reactor core is restored after the activation of the ECCS. After 2-3 seconds the short-term subsystem of the ECCS (two trains of hydro-accumulators and one train from the main feedwater pumps) is activated. This subsystem starts to supply water into the GDHs downstream of the check valves and is designed to cool down the reactor within the first 10 minutes. Later the long-term subsystem of the ECCS, which consists of 6 ECCS pumps and 6 auxiliary feedwater pumps, is activated. The long-term subsystem of the ECCS supplies water to both loops of the RCS. The fuel cladding and fuel channel wall temperatures start to decrease after the ECCS activation.

In the case of a large LOCA in Zone 1, the pressure upstream of the fuel channels decreases very fast. To prevent reversal of the coolant flow in the channels, check valves are installed in each GDH. The failure of some of these valves affects the cooling conditions of the channels connected to the affected GDH and may change the consequences of the accident. For example, in the case of an MCP pressure header break, the channels connected to a GDH whose check valve failed to close are cooled by the reverse coolant flow from the drum separators (Figure 12). At the beginning of the accident these FCs are cooled by the saturated water flow, but later (after the DSs empty) only by saturated steam. Due to the worsened cooling conditions, the fuel cladding temperatures in the channels connected to the GDH with the failed-to-close check valve rise higher than in the other channels of the affected RCS loop.

A feature of LOCA-type accidents in Zone 1 is the heat-up of the fuel in the affected RCS loop during the first seconds of the accident. It should be noted that the first fuel cladding temperature increase occurs only at the very beginning of the accident and lasts a very short time: no more than 30 seconds. In the case of an MCP pressure header break with additional failures of the short-term subsystem of the ECCS, a short-term violation of the acceptance criterion for fuel rod cladding of 700 °C is observed in a considerable group of channels. This increase of temperatures is related to stagnation of the coolant flow after the GDH check valves close. The stagnation is terminated by the start of the ECCS water supply. If the loss of the preferred AC power of the unit does not occur simultaneously, the stagnation is terminated 10 s after the start of the water supply from the ECCS pumps. Exceeding the acceptance criterion for fuel rod cladding of 700 °C is probable in FCs whose initial power is higher than 2.5 MW (see Figure 13). There are 370 such FCs in the affected RCS loop (see Figure 14). If the loss of preferred AC power takes place simultaneously, the stagnation is prolonged (the ECCS pumps start only after the start of the diesel generators). In this case the acceptance criterion for fuel rod cladding is violated in FCs with an initial power higher than 2.0 MW. There are 670 such FCs in the affected RCS loop (see Figure 14). However, the peak temperature of the FC walls remains well below the acceptance criterion for the FC wall (650 °C).

A detailed analysis was performed to evaluate the possibility of failure of those fuel rods whose cladding temperature exceeds 700 °C. This analysis was performed using the RELAP/SCDAPSIM model presented in Figure 7. The calculated pressure inside the fuel rods remains below the pressure outside them (Figure 15). Thus, ballooning of the fuel rod claddings does not occur in the fuel channels with an initial power of less than 3.4 MW. The detailed analysis allows the surplus conservatism in the analysis to be removed (Figure 14).

Another fuel cladding temperature increase starts approximately 200 seconds after the beginning of the accident and is caused by the decrease of the reversed coolant flow, which in turn is due to the pressure decrease in the DSs of the affected RCS loop (Figure 12). A considerable temperature increase is possible only if the operator does not intervene. The operator has the possibility to reduce the coolant discharge through the break by closing the maintenance valves. These actions lead to a water level increase in the affected DS and improve the cooling conditions of the fuel channels. The fuel channels of the intact RCS loop are reliably cooled with water supplied by the MCPs and the ECCS long-term cooling subsystem.


Fig. 13. Break of the MCP pressure header with short-term ECCS failure. Temperature of fuel cladding in FCs of the affected RCS loop


Fig. 14. Break of the MCP pressure header with short-term ECCS failure. Estimation of the number of failed fuel channels (conservative estimation versus detailed analysis using the RELAP/SCDAPSIM code)

For the case of a medium LOCA in Zone 1, a detailed analysis is not carried out, as the consequences of this event are covered by the medium LOCA in Zone 2. A short-term increase of the temperatures of the fuel rod cladding and FC walls is not observed at the initial stage of the accident for a GDH break upstream of the check valve (medium LOCA in RCS Zone 1), but is traced for a break downstream of the check valve (LOCA in Zone 2).

Fig. 15. Break of the MCP pressure header with short-term ECCS failure. Pressure inside and outside the fuel rod (element) in the fuel channel with an initial power of 3.4 MW

The Gap Measurement Technology and Advanced RVI Installation Method for Construction Period Reduction of a PWR

Do-Young Ko

Central Research Institute, Korea Hydro & Nuclear Power Co., Ltd.

Republic of Korea

1. Introduction

A nuclear power plant takes approximately 52-58 months from the first concrete pour until the completion of the performance test, and the construction period is one of the most important factors in making a company competitive in international nuclear energy markets. Many research groups throughout the world have therefore studied ways to shorten the construction period of nuclear power plants to 50 months, and many advanced construction methods exist to decrease the construction period of new nuclear power plants. This chapter is concerned with the modularization of reactor vessel internals (RVI), one of the most effective methods to reduce the construction period of nuclear power plants (Ko et al., 2009; Ko & Lee, 2010; Ko, 2011; Korea Hydro & Nuclear Power Co., Ltd., 2009).

Test of containment sump strainer

In the tests of the containment sump strainer, solid debris such as fiber and particulate is put into the test system, and a liquid-solid two-phase flow comes into being. Three main tests are carried out to measure the pressure drop of the debris bed: the clean screen test, the thin bed test and the full fiber/particulate load test. If a vortex forms on the water surface, air may be drawn into the pump and cavitation erosion may then occur, so vortex formation is carefully observed in these tests.

The clean screen test is performed without debris and is shown in Fig. 15.

The thin bed test is conducted to determine the amount of fiber fines necessary to completely cover the strainer. When full coverage of the strainer screen is visually observed, the screen is photographed. The post-test photograph of the thin bed test is shown in Fig. 16; it can be seen that the strainer is completely covered by fibrous debris and a thin bed has formed.

Fig. 15. Photograph of clean screen test

Fig. 16. Post-test photograph of thin bed test

The full fiber/particulate load test is performed to determine the head loss associated with the maximum fibrous and particulate debris load. In this test the full debris load is put into the test tank, and the strainer is covered by more debris than in the thin bed test. The post-test photograph of the full fiber/particulate load test is shown in Fig. 17.

As safety equipment, the containment sump strainers filter debris out of the recycling water and provide the filtered water to the emergency core cooling system (ECCS) and the containment spray system (CSS). In order to keep the ECCS and CSS pumps operating normally, the containment sump strainers must guarantee a sufficient NPSH (net positive suction head). The pressure loss due to accident-generated debris accumulated on the sump screens is therefore one of the most important parameters of the containment sump strainers. A liquid-solid two-phase flow will appear when the accident-generated debris is flushed into the recycling water. The NPSH of the ECCS and CSS pumps is directly affected by this two-phase flow, whose characteristics are therefore important for researchers to investigate.

Fig. 17. Post-test photograph of full fiber/particulate load test

In the tests of the containment sump strainer, solid debris such as fiber and particulate is put into the liquid, and a liquid-solid two-phase flow comes into being, which differs from the flow patterns in tube-bundle channels. The liquid-solid two-phase flow and the debris bed covering the containment sump strainer are carefully observed and recorded in the tests.

In the recirculation phase, the ECCS and CSS would take the water in the containment as the pump source once the water in PTR001BA has been used up. The debris generated by a LOCA or HELB would be transported to the containment ground floor, at an elevation of -3.5 m, and a fraction of the debris would accumulate on the sump screen, which could induce a pressure loss and might lead to failure of the ECCS and CSS pumps. The fraction of debris transported to the sump strainers is analyzed by numerical simulation.

The authors take the Daya Bay PWR as an example to establish a 3-D computational model, with the purpose of studying the debris types and amounts transported to the sump strainers. According to the actual dimensions of the containment sump strainer in the Daya Bay nuclear power station, a 3-D CAD model is established, as shown in Fig. 18. The altitude of the CAD model ranges from -3.5 m to 0 m.

This CAD model is then imported into Gambit to generate the computational grid, using the Cooper meshing scheme. In geometrically complex and critical regions the grid is refined to resolve the important features; for the main part of the model, a 5×5×5 cm mesh spacing is used in the x-y-z directions. The total cell count of the model is 7,166,332, as shown in Fig. 19.

In the CFD model, the water temperature in the containment is set to 120 °C and the pressure to 1.99 bar. Under these conditions the water is subcooled, with a density of 943 kg/m³ and a dynamic viscosity of 2.32×10⁻⁴ Pa·s.
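With these properties, a quick Reynolds-number check shows why a turbulence model is needed at all. The velocity and length scale below are illustrative assumptions (of the order of the sump flow and strainer dimensions); only the density and viscosity come from the text.

```python
# Flow-regime check using the stated water properties
# (rho = 943 kg/m^3, mu = 2.32e-4 Pa.s at 120 C, 1.99 bar).
# Velocity and length scale are illustrative assumptions.

def reynolds(velocity, length, density=943.0, viscosity=2.32e-4):
    """Reynolds number Re = rho * U * L / mu."""
    return density * velocity * length / viscosity

Re = reynolds(velocity=0.5, length=0.1)  # 0.5 m/s over a 0.1 m scale
print(f"Re ~ {Re:.0f}")
```

Even for these modest scales the result is well above the laminar-turbulent transition, which supports the choice of the k-ε model in the calculation that follows.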

According to the principles of mass and momentum conservation, the continuity equation and the momentum equation are established.

Fig. 18. 3-D CAD model of containment

where ρ is the liquid density (kg/m³), u is the flow velocity (m/s), t is time (s), μ is the dynamic viscosity (Pa·s), and S is the source term.
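The governing equations themselves did not survive reproduction here; in their standard form, consistent with the symbol list above (with p denoting pressure), they read:

```latex
% Continuity and momentum equations, reconstructed in standard form
% (the original expressions were lost in extraction):
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\qquad
\frac{\partial (\rho \mathbf{u})}{\partial t}
  + \nabla \cdot (\rho \mathbf{u}\,\mathbf{u})
  = -\nabla p + \nabla \cdot \left( \mu \,\nabla \mathbf{u} \right) + S
```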

In the CFD calculation, the general transport equations are established using the k-ε turbulence model.

where μt is the turbulent viscosity coefficient, Gk is the turbulence kinetic energy generated by the mean velocity gradients, and σk and σε are the turbulent Prandtl numbers of the k equation and the ε equation.
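The k-ε transport equations referenced here were also lost in reproduction; in their standard form, using the symbols defined above together with the usual model constants C1ε and C2ε, they read:

```latex
% Standard k-epsilon transport equations, reconstructed
% (the original expressions were lost in extraction):
\frac{\partial (\rho k)}{\partial t} + \nabla \cdot (\rho k \mathbf{u})
  = \nabla \cdot \left[ \left( \mu + \frac{\mu_t}{\sigma_k} \right) \nabla k \right]
  + G_k - \rho \varepsilon
\qquad
\frac{\partial (\rho \varepsilon)}{\partial t} + \nabla \cdot (\rho \varepsilon \mathbf{u})
  = \nabla \cdot \left[ \left( \mu + \frac{\mu_t}{\sigma_\varepsilon} \right) \nabla \varepsilon \right]
  + C_{1\varepsilon} \frac{\varepsilon}{k} G_k
  - C_{2\varepsilon} \rho \frac{\varepsilon^2}{k}
```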

The velocity field of the water flow in the containment is shown in Fig. 20. Where the water velocity is high enough, the flow would tumble sunken debris along the ground floor or lift it over a curb.

The experimental and numerical results obtained above provide a necessary basis for analyzing the performance of the containment sump strainer and for designing new types of containment sump strainers.

Fatigue monitoring systems

During the early operation of NPPs in the 1970s and 1980s, local loads occurred at various locations and caused fatigue cracks. These were due either to loading conditions not considered in the design phase (e.g. temperature stratification) or to insufficient manufacturing quality (e.g. welded joints). These problems were the starting signal for the development of fatigue monitoring systems. FAMOS, for instance, was developed by the then Siemens KWU at the end of the 1980s and installed in German NPPs. At that time it was a very advanced data logging system: henceforth it was possible to measure the local loading effects, and the fatigue relevance of those effects was analyzed by simple assessment methods. These experiences led to a better understanding of the underlying loading phenomena. The fatigue assessments motivated the retrofitting of components or the modification of the operating mode; for instance, the feedwater sparger of the steam generator was subsequently redesigned so that the stresses from cyclically occurring stratification transients were minimized. Nevertheless, the data logging technology of that time still had limits with respect to logging frequency, recording and storage. A logging interval of 10 s (0.1 Hz) was the upper limit (nowadays 1 s, i.e. 1 Hz, is usual). Furthermore, the capacitive effect of the applied measurement sections on their transient behavior was underestimated; nowadays this effect is appropriately accounted for by correction factors specific to each measurement section.
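The significance of the logging interval can be illustrated with a small numerical experiment: a short thermal transient sampled at the historical 10 s interval versus the 1 s interval usual today. The transient shape and numbers below are entirely made up for illustration, not FAMOS data.

```python
# Why the logging interval matters for fatigue monitoring: a short
# thermal spike sampled at 10 s (historical limit) vs 1 s (usual today).
# The transient shape is a hypothetical example, not measured data.
import math

def transient(t):
    """Hypothetical temperature spike: 60 C amplitude, ~15 s wide, at t = 55 s."""
    return 280.0 + 60.0 * math.exp(-((t - 55.0) / 7.0) ** 2)

def peak_seen(dt, duration=100.0):
    """Largest temperature actually captured when sampling every dt seconds."""
    samples = [transient(i * dt) for i in range(int(duration / dt) + 1)]
    return max(samples)

print(f"true peak:            {transient(55.0):.1f} C")
print(f"seen at  1 s logging: {peak_seen(1.0):.1f} C")
print(f"seen at 10 s logging: {peak_seen(10.0):.1f} C")
```

Coarse sampling clips the peak and thus under-estimates the temperature (and stress) range of the cycle, which feeds directly into an optimistic fatigue usage factor; this is one reason the move from 0.1 Hz to 1 Hz logging mattered.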

EPZ in relation to PRA

1.2 PRA application for IRIS design

In the Safety-by-Design™ approach the Probabilistic Risk Assessment obviously plays a key role; therefore a Preliminary IRIS PRA was initiated and developed iteratively along with the design. This unprecedented application of PRA techniques in the initial design phase of a reactor, and the deep impact it had on the development of the project, has been described in previously published papers (Carelli, 2004, 2005).

In summary, the success of the IRIS Safety-by-Design™ and PRA-guided design in the internal and external events assessments (Carelli, 2004) is due to the effective interaction between the IRIS design team and the IRIS PRA team (see Figure 2). The main task of the PRA team was to identify high-risk events and sequences.

The IRIS design team provided information on the IRIS plant and site design and updated the IRIS component/system descriptions and design data. The PRA team identified assumptions concerning the IRIS plant and site design requirements, which the design team then reviewed.

A preliminary evaluation of internal and external events was performed in the Preliminary IRIS PRA to determine whether there were any unforeseen vulnerabilities in the IRIS design that could be eliminated during the still-evolving design phase of the reactor. The preliminary analysis of external events included both quantitative and qualitative analyses; for the quantitative analyses, bounding site characteristics were used in order to minimize potential future restrictions on plant siting.

Referring to Figure 3, it can be seen that the initial PRA for internal events resulted in a Core Damage Frequency (CDF) of 2.0×10⁻⁶. The PRA team then worked with the IRIS design team to implement design changes that improved plant reliability and to identify additional transient analyses showing no core damage for various beyond-design-basis transients. The resulting CDF of around 1.2×10⁻⁸ was obtained through a combination of the Safety-by-Design™ features of the IRIS design and the insights provided by the PRA team regarding success criteria definition, common cause failures, system layout, support system dependencies and human reliability assessment.

Since the design was still in a development/refinement phase, the PRA was kept constantly updated as the design evolved; moreover, all the assumptions required for a reasonably complete PRA model, capable of providing quantitative insights as well as qualitative considerations, were accurately tracked, and the uncertainties connected with those assumptions were assessed. These refinements of the Preliminary IRIS PRA yielded a predicted CDF from internal events of around 2.0×10⁻⁸.
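The quoted CDF figures imply a two-orders-of-magnitude risk reduction from the PRA-guided iterations, which a one-line check makes explicit (the numbers are those given in the text; the per-reactor-year unit is the conventional one for CDF):

```python
# Risk reduction implied by the CDF figures quoted in the text.
initial_cdf = 2.0e-6   # initial internal-events CDF
refined_cdf = 2.0e-8   # refined internal-events CDF after design iterations

reduction = initial_cdf / refined_cdf
print(f"risk reduction factor ~ {reduction:.0f}x")
```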

Design team PRA procedure


Fig. 2. IRIS Design and PRA Team Interactions


The same method was also extended to external events. Compared with the events dominant in other plant PRAs, the IRIS plant was expected to be significantly less vulnerable to some external events. The external events PRA focused on the plant BOP, which has not been analyzed as extensively or explicitly as accidents caused by internal events. In general, the IRIS plant arrangement structures were designed to minimize the potential for natural and man-made hazards external to the plant to affect the plant's safety-related functions. The external events PRA insights were expected to help take full advantage of the potential safety-oriented features of the IRIS design, and this implied probabilistic consideration of extreme winds, fires, flooding, aircraft crash, seismic activity, etc. In addition, it was shown that the estimated risk measures could be related to the site size and could serve as input for emergency planning zone definition.

Inviscid model

Despite the obviously viscous nature of the interstitial flow through arrays of cylinders, the compactness of some arrays suggests that the cylinder wake regions are small, especially for normal triangular arrays with small P/D (Price, 1995). Under this assumption the wake regions are neglected and the flow is treated as inviscid. Many solutions based on potential flow theory have been given, including (Dalton & Helfinstine, 1971), (Dalton, 1980), (Balsa, 1977), (Paidoussis et al., 1984), (Vander Hoogt & Van Compen, 1984) and (Delaigue & Planchard, 1986). The results obtained from potential flow analyses are somewhat discouraging (Price, 1995): recent flow visualizations suggest that even though the wake regions are small, the interstitial flow is more complex than these analyses account for.
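The basic building block of these potential flow analyses is the classical complex potential for flow past a single circular cylinder, w(z) = U(z + a²/z); array solutions superpose such singularities subject to the boundary conditions on every cylinder. A minimal sketch of the single-cylinder element (the numbers are illustrative):

```python
# Single-cylinder potential flow element underlying the inviscid array
# analyses: complex potential w(z) = U (z + a^2 / z), so the conjugate
# velocity is dw/dz = U (1 - (a/z)^2). Cylinder of radius a at the origin.

def conjugate_velocity(z, U=1.0, a=0.5):
    """dw/dz at complex position z for uniform flow U past the cylinder."""
    return U * (1.0 - (a / z) ** 2)

# Classical check: on the cylinder surface at the top (90 degrees from the
# stagnation point), the local speed is twice the free-stream speed.
speed_top = abs(conjugate_velocity(complex(0.0, 0.5)))
print(f"surface speed at top of cylinder: {speed_top:.2f} (free stream = 1.00)")
```

The discouraging results noted above stem from the physics this element omits (separation and wake dynamics), not from the superposition machinery itself.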

1.1.2 Unsteady models


The unsteady models are based on direct measurement of the unsteady forces on the oscillating cylinder. (Tanaka & Takahara, 1980, 1981) and (Chen, 1983) have given theoretical stability boundaries for fluid-elastic instability, as shown in Figures 3 and 4 respectively.
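The simplest quantitative form of such stability boundaries is the Connors-type criterion, V_c/(f D) = K √(m δ / (ρ D²)), relating the critical pitch velocity to the mass-damping parameter. The sketch below uses illustrative tube parameters, not values read from the figures; K = 3.3 is a commonly quoted Connors constant.

```python
# Connors-type fluid-elastic instability criterion:
#   V_c / (f D) = K * sqrt(m * delta / (rho * D^2))
# All numeric inputs are illustrative, not taken from Figs. 3-4.
import math

def critical_velocity(f, D, m, delta, rho, K=3.3):
    """Critical cross-flow velocity (m/s) for fluid-elastic instability.
    f: natural frequency (Hz), D: tube diameter (m), m: mass per unit
    length (kg/m), delta: log decrement of damping, rho: fluid density."""
    mass_damping = m * delta / (rho * D ** 2)
    return K * f * D * math.sqrt(mass_damping)

# Example tube: f = 40 Hz, D = 20 mm, m = 0.6 kg/m, delta = 0.03, water
Vc = critical_velocity(f=40.0, D=0.02, m=0.6, delta=0.03, rho=1000.0)
print(f"critical velocity ~ {Vc:.2f} m/s")
```

The unsteady models of Tanaka & Takahara and Chen refine this single-line boundary: as Figure 4 shows, their measured-force formulations predict multiple instability regions rather than one monotonic curve.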

Fig. 3. Theoretical stability boundary for fluid-elastic instability for an in-line square array, P/D = 1.33, obtained by (Tanaka & Takahara, 1980, 1981); curves shown for S = 0.01 and S = 0.03.


Fig. 4. Theoretical stability boundary for fluid-elastic instability predicted by (Chen, 1983) for a row of cylinders with P/D = 1.33; the theoretical solution shows multiple instability boundaries, plotted together with the practical stability boundary.