
Mapping Solar Radiation over Complex Topography Areas Combining Digital Elevation Models and Satellite Images

J. L. Bosch1*, L. F. Zarzalejo2, F. J. Batlles1 and G. Lopez3

1 Universidad de Almeria, Departamento de Fisica Aplicada, Ctra. Sacramento s/n, 04120-Almeria, Espana
2 CIEMAT, Departamento de Energia, Madrid, Espana
3 EPS-Universidad de Huelva, Departamento de Ingenieria Electrica y Termica, Huelva, Espana

Corresponding Author: jlbosch@ual.es

Abstract

The correlation of solar irradiation data in flat and homogeneous areas is relatively high, and classic interpolation methods are well suited to its estimation. However, in complex topography zones a simple interpolation of the data is not adequate. In addition, the spatial variability of solar irradiation is also affected by site latitude and cloud cover distribution. In this work, a methodology has been implemented for estimating daily solar irradiation under all sky conditions by means of Meteosat satellite images and additional information from a Digital Terrain Model (DTM) of the studied area. Solar irradiation is calculated following the HELIOSAT-2 methodology, and a method is presented to obtain the horizon of the studied points using the DTM. The effect of snow cover is also studied. Model performance has been evaluated against data measured at 14 radiometric stations located in a mountainous area, yielding good results, with a Root Mean Square Error (RMSE) of around 11%. Finally, a daily solar irradiation map has been generated for the complex topography site.

Keywords: Daily Irradiation Mapping, DTM, Meteosat, Complex Terrain

1. Introduction

Incoming solar radiation, through its influence on the energy and water balances of the earth's surface, affects processes such as air and soil heating, photosynthesis, wind and snow thawing. Knowledge of it is therefore important in diverse fields and necessary for several applications, most of which require global radiation measurements over wide regions, for long time periods and with high spatial resolution.

At local scales, topography is the most important factor in the distribution of solar radiation at the surface. In flat and homogeneous areas, classic interpolation methods can estimate solar radiation accurately. However, in zones with high topographical variability the spatial correlation is difficult to detect, and for distances between 300 and 1000 m it is very small or disappears [1]. The use of interpolation in this kind of terrain can lead to large errors, and more complex models that include topographical information are needed [2]. In recent years, digital terrain models (DTM) have been used to develop radiation models that incorporate topography and, consequently, the spatial variability of terrain mentioned above. In addition, the spatial variability of solar radiation is affected by latitude and cloud cover distribution. This issue has recently been studied with the aid of satellite images. Geostationary satellites (METEOSAT) permanently observe the same zone of the earth's surface, acquiring several images per day, and are therefore suitable for estimating the solar irradiation and evaluating the energy potential of a wide area. The Centre Energetique et Procedes (CEP) of the School of Mines of Paris, in cooperation with other European research centres, developed a statistical model to estimate the solar irradiation at the terrestrial surface from METEOSAT images, known as HELIOSAT [3]. The basic idea of HELIOSAT is the relationship between cloud cover and the global irradiation incident at a point on the earth's surface. This model was one of the earliest used for the evaluation of global irradiation from satellite images. It was developed using measurements from French stations and its goal was the estimation of monthly average values of global irradiation. Afterwards, different modifications were introduced [4] until a new version named HELIOSAT-2 was implemented.

In this work, a methodology is presented and tested in which solar irradiation is estimated using a modified HELIOSAT-2 [5] together with the information contained in a DTM. Instead of a single irradiation value per pixel, the horizon of each inner point is used to estimate around 1000 irradiation values for every pixel. The computational cost of the horizon calculation can be a problem when dealing with large areas; this issue has been addressed by developing an algorithm that drastically reduces the time required by this process without losing much information about the actual horizon. Additionally, the presence of snow cover can lead to an underestimation by the model, because snow-covered pixels can be interpreted as very bright clouds; this problem has also been addressed in this work with satisfactory results.
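As an illustrative sketch of this kind of horizon computation (not the authors' actual algorithm), the following Python function scans a DEM outward from a cell along a set of azimuths and keeps the maximum elevation angle; the grid orientation, the 100 m cell size and the 10 km search radius are assumptions chosen only for the example.

import numpy as np

def horizon_angles(dem, i0, j0, cell_size=100.0, n_azimuths=36, max_dist=10000.0):
    """Estimate horizon elevation angles (radians) around DEM cell (i0, j0).

    dem        : 2-D array of terrain elevations (m)
    cell_size  : grid spacing of the DEM (m); 100 m as for the DTM used here
    n_azimuths : number of azimuth directions scanned
    max_dist   : maximum search distance along each azimuth (m)
    """
    z0 = dem[i0, j0]
    n_steps = int(max_dist / cell_size)
    azimuths = np.linspace(0.0, 2.0 * np.pi, n_azimuths, endpoint=False)
    horizon = np.zeros(n_azimuths)
    for k, az in enumerate(azimuths):
        best = 0.0  # highest obstruction angle found so far along this azimuth
        for step in range(1, n_steps + 1):
            d = step * cell_size
            i = int(round(i0 - d * np.cos(az) / cell_size))  # assume rows increase southwards
            j = int(round(j0 + d * np.sin(az) / cell_size))  # assume columns increase eastwards
            if i < 0 or j < 0 or i >= dem.shape[0] or j >= dem.shape[1]:
                break
            best = max(best, np.arctan((dem[i, j] - z0) / d))
        horizon[k] = best
    return azimuths, horizon

Any sun position whose elevation falls below the horizon angle at the corresponding azimuth receives no direct beam irradiance; coarsening the azimuth sampling or limiting the search radius is one simple way to reduce the computational cost mentioned above.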

The main goal is to produce a map of daily irradiation values from the satellite images, refining the spatial resolution of the pixels (~3.5 km) to the resolution of the DTM (100 m). Ground measurements registered at 14 stations located in a complex topography area have been used for validation purposes, and an error reduction is observed once the horizon and snow effects are taken into account. It is also worth noting that this procedure can be applied under all kinds of sky conditions.

User guidance

1.1 User survey

One of the main objectives of the MESoR project is the involvement of key stakeholders throughout the project. Their first task was participation in a user survey, conducted as an interview via telephone or e-mail. The survey performed a comparative analysis of various solar radiation platforms concerning technical aspects, but also addressing usability, integration and pre-commercial information. This analysis is based on the evaluations expressed by actual "top users" and "top customers" of the platforms. The sample was composed of current users of the services belonging to various academic, scientific, industrial and business categories, from the public and private sectors. The organisations are active in the fields of architecture/building, PV and other solar applications. The initial sample of 53 was selected by each partner according to the criteria of importance, frequency of usage and attitude towards scientific cooperation.

The collected answers indicate a very high degree of awareness about the analysed issues: most of the respondents use multiple services, have a deep knowledge of each service, and are able to compare the various services and highlight their respective points of strength and weakness.

The survey detected a gap between expectations and satisfaction regarding the quality and accuracy of some parameters, the reliability of data measurement and calculation, the comparability of data across the services, and the personalisation of services.

The users expect a truly new and integrated service that offers standardised data and protocols.

Specifications of the solar radiation databases and underlying methods

1.1. General specifications

Spatially-distributed (map) solar radiation databases are classified according to several factors:

• Input data from which they have been created: (a) observations from meteorological stations (global, diffuse and direct irradiances, and other relevant climate data), (b) digital satellite images, or (c) a combination of both; ancillary atmospheric data used in the models, such as water vapour, ozone and aerosols, are also considered here;

• Period of time (typically a number of years) which is represented by the input data;

• Spatial resolution, i.e. geographical distribution of the meteorological sites, grid resolution of the satellite data and resulting outputs;

• Time resolution, which characterises the periodicity of measurement of the input data and of the resulting parameters. Thus a primary database may include time series with a periodicity of a few minutes up to hourly and daily averages (sums), or it may contain only monthly and long-term averages.

• Methodological approach used for computation of the primary database: typically solar radiation models combined with interpolation methods (e.g. geostatistical methods or splines, in the case that ground observations are used) or algorithms for satellite data processing (e.g. Heliosat). The primary database typically consists of global, direct normal or diffuse irradiances (irradiation in the case of time-integrated products) and also some auxiliary parameters such as the clear-sky index.

• Simulation models used for the calculation of derived parameters, such as global irradiance for inclined and sun-tracking surfaces, spectral products (e.g. illuminances, UV and PV-related irradiances), estimation of terrain effects, and derived statistical products (e.g. synthetic time series).

The quality of an individual data set is assessed for a set of locations by comparing it to ground measurements: the first-order statistics are calculated (bias, root mean square deviation, standard deviation, correlation coefficient) and the frequency distribution is analysed. In this work we focus on the relative map-based cross comparison of several solar radiation products.
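As a minimal sketch of these first-order statistics (Python/NumPy; the array names are assumptions), the comparison of a set of estimates against co-located ground measurements could be computed as follows:

import numpy as np

def validation_stats(estimated, measured):
    """First-order comparison statistics of estimates against ground measurements."""
    est = np.asarray(estimated, dtype=float)
    mea = np.asarray(measured, dtype=float)
    err = est - mea
    bias = err.mean()                       # mean bias deviation
    rmsd = np.sqrt((err ** 2).mean())       # root mean square deviation
    std = err.std(ddof=1)                   # standard deviation of the error
    r = np.corrcoef(est, mea)[0, 1]         # correlation coefficient
    rel_rmsd = 100.0 * rmsd / mea.mean()    # RMSD as a percentage of the measured mean
    return {"bias": bias, "rmsd": rmsd, "std": std, "r": r, "rel_rmsd_percent": rel_rmsd}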

Such a comparison provides a means of better understanding the regional distribution of the uncertainty, by combining all existing resources (calculating the average of all of them) and quantifying their mutual agreement by means of the standard deviation.
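A minimal sketch of such a map-based cross comparison, assuming the products have already been re-gridded to a common grid and stacked along the first axis:

import numpy as np

def cross_compare(product_stack):
    """Per-pixel ensemble mean and mutual agreement of several gridded products.

    product_stack : array of shape (n_products, ny, nx) on a common grid
    """
    ensemble_mean = np.nanmean(product_stack, axis=0)   # combined best estimate
    agreement = np.nanstd(product_stack, axis=0)        # spread between products
    return ensemble_mean, agreement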

Meteonorm program

Meteorological data can be generated with Meteonorm using a database of long-term monthly average measurement data from different stations. In recent software versions, more than 7000 meteorological stations worldwide are available. If no meteorological station is available in the database for a desired site, meteorological data are interpolated from the data of the nearest stations. The accuracy of the generated data depends on the accuracy of the measurements at the stations used, the distance to the nearest stations and the interpolation method.
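Meteonorm's own interpolation scheme is not reproduced here; purely as an illustration of distance-based interpolation from nearby stations, a simple inverse-distance weighting could look like the following sketch, in which the function name, the power parameter and the use of the three nearest stations are assumptions:

import numpy as np

def idw_interpolate(station_values, station_dist_km, power=2.0, k=3):
    """Inverse-distance-weighted estimate from the k nearest stations.

    station_values  : values (e.g. monthly means) measured at candidate stations
    station_dist_km : distances from the target site to those stations (km)
    """
    values = np.asarray(station_values, dtype=float)
    dist = np.asarray(station_dist_km, dtype=float)
    nearest = np.argsort(dist)[:k]                        # pick the k closest stations
    weights = 1.0 / np.maximum(dist[nearest], 1e-6) ** power
    return float(np.sum(weights * values[nearest]) / np.sum(weights))

Whatever the exact scheme, the weight of a station decays quickly with distance, so the achievable accuracy degrades when the nearest stations with the required data are far away.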

Fig. 1. Long-term monthly average ambient temperatures from Meteonorm 6.0 for Bishkek city (predefined in Meteonorm), and for the weather stations Frunze and Naryn available in the program database.

The density of weather stations in the program database is relatively low for Central Asia compared to Europe. In total, about 100 stations are available in the program database (versions 5.1 and 6.0) for Central Asia with data for ambient temperature, humidity, wind speed and direction, and precipitation, and only 3 stations with solar radiation data. The above-mentioned station Frunze in Bishkek is available in the program database, but without solar radiation data. The nearest database station to Bishkek with solar radiation data is about 600 km away, in Tashkent, Uzbekistan. Unlike previous versions, Meteonorm 6.0 uses satellite-derived solar radiation data in addition to the weather stations for the interpolation of solar radiation.

Bishkek city is already defined in Meteonorm 5.1 and 6.0, but with a wrong altitude of 2111 m instead of 760 m. This leads to significantly lower ambient temperatures compared to the measured values at the weather station Frunze (Fig. 1). Because of the wrong altitude, the ambient temperature for Bishkek was evidently interpolated by Meteonorm using data from the station Naryn (about 300 km away), which has a similar altitude of 2041 m. Thus, for further investigations it is necessary to define Bishkek in Meteonorm manually as a new site with the correct coordinates.

Formulas

The extraterrestrial solar radiation on a horizontal surface is a function of the latitude of the location. As solar radiation passes through the earth's atmosphere, it is further modified by scattering and absorption due to the presence of clouds and atmospheric particles. Hence, the solar radiation incident on a horizontal surface is to some extent locality-dependent and is less than the extraterrestrial irradiation.

Several climatic parameters have been used to develop empirical relations for predicting the solar radiation incident on a horizontal surface. Among the existing correlations, the Angstrom-Prescott type regression equation, which relates the monthly average daily radiation to the clear-day radiation at the location in question and the average fraction of possible sunshine hours, is considered to be the most widely accepted [9]:

\frac{H}{H_0} = a + b \, \frac{S}{S_0} \qquad (1)

Equation (1) has been shown to predict the global solar radiation accurately.

H : the monthly average daily global radiation on a horizontal surface (MJ m⁻² day⁻¹)
H0 : the monthly average daily extraterrestrial radiation on a horizontal surface (MJ m⁻² day⁻¹)
S : the monthly average daily number of bright sunshine hours
S0 : the monthly average daily maximum possible number of bright sunshine hours
a, b : the regression constants to be determined
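For illustration only, using commonly quoted typical coefficients rather than values derived here: with a ≈ 0.25 and b ≈ 0.50, a month with a sunshine fraction S/S0 = 0.6 gives H/H0 ≈ 0.25 + 0.50 × 0.6 = 0.55, i.e. roughly 55% of the extraterrestrial irradiation reaches the horizontal surface.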

The extraterrestrial solar radiation [9] on a horizontal surface is determined by the following equation:

H_0 = \frac{24 \times 3600}{\pi} \, I_0 \, f \left( \cos\lambda \, \cos\delta \, \sin\omega_s + \frac{\pi \, \omega_s}{180} \, \sin\lambda \, \sin\delta \right) \qquad (2)

where I_0 = 1367 W m⁻² is the solar constant, \lambda is the latitude of the location, \delta is the declination angle, and

f = 1 + 0.033 \cos\!\left( \frac{360\,n}{365} \right) \qquad (3)

where n is the Julian day number (for example, 1 January is day 1).

The declination angle is given by

\delta = 23.45 \sin\!\left( \frac{360\,(284 + n)}{365} \right) \qquad (4)

and \omega_s is the sunset hour angle, given by

\omega_s = \cos^{-1}\!\left( -\tan\lambda \, \tan\delta \right) \qquad (5)

The maximum possible sunshine duration is

S_0 = \frac{2}{15} \, \omega_s \qquad (6)
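As a sketch of how equations (2) to (6) can be evaluated (Python/NumPy; the function and variable names are assumptions, and the declination uses the standard 23.45° approximation):

import numpy as np

I0 = 1367.0  # solar constant (W m-2)

def extraterrestrial_radiation(latitude_deg, n):
    """Daily extraterrestrial irradiation H0 (MJ m-2 day-1) and maximum possible
    sunshine duration S0 (hours) for day number n at the given latitude,
    following equations (2) to (6)."""
    lat = np.radians(latitude_deg)
    f = 1.0 + 0.033 * np.cos(np.radians(360.0 * n / 365.0))                    # eq. (3)
    decl = np.radians(23.45 * np.sin(np.radians(360.0 * (284 + n) / 365.0)))   # declination, eq. (4)
    ws = np.arccos(np.clip(-np.tan(lat) * np.tan(decl), -1.0, 1.0))            # sunset hour angle (rad), eq. (5)
    H0 = (24.0 * 3600.0 / np.pi) * I0 * f * (
        np.cos(lat) * np.cos(decl) * np.sin(ws)
        + ws * np.sin(lat) * np.sin(decl)
    ) / 1.0e6                                                                  # eq. (2), converted to MJ m-2 day-1
    S0 = (2.0 / 15.0) * np.degrees(ws)                                         # eq. (6), hours
    return H0, S0

For example, latitude 40° N on 15 June (n = 166) gives H0 of roughly 42 MJ m⁻² day⁻¹ and S0 of about 14.8 hours.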

H_0 and S_0 in equation (1) are obtained from equations (2) and (6). The regression coefficients a and b are determined from the measured values of H/H_0 and S/S_0. The values of the monthly average daily global radiation and of the average sunshine duration are obtained from daily measurements over a period of ten years.

The regression coefficients (a, b) can be computed by the following formulas:

b = \frac{n \sum_{i=1}^{n} \left(\frac{H_i}{H_{0,i}}\right)\left(\frac{S_i}{S_{0,i}}\right) - \sum_{i=1}^{n} \frac{H_i}{H_{0,i}} \sum_{i=1}^{n} \frac{S_i}{S_{0,i}}}{n \sum_{i=1}^{n} \left(\frac{S_i}{S_{0,i}}\right)^2 - \left(\sum_{i=1}^{n} \frac{S_i}{S_{0,i}}\right)^2} \qquad (7)

a = \frac{1}{n} \sum_{i=1}^{n} \frac{H_i}{H_{0,i}} - \frac{b}{n} \sum_{i=1}^{n} \frac{S_i}{S_{0,i}} \qquad (8)

Where n is the number of measured data points.
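A minimal sketch of this fit (Python/NumPy; array names are assumptions), following equations (7) and (8):

import numpy as np

def fit_angstrom(H, H0, S, S0):
    """Least-squares fit of H/H0 = a + b * (S/S0) from paired monthly values,
    following equations (7) and (8)."""
    x = np.asarray(S, dtype=float) / np.asarray(S0, dtype=float)   # sunshine fraction S/S0
    y = np.asarray(H, dtype=float) / np.asarray(H0, dtype=float)   # clearness index H/H0
    n = len(x)
    b = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)  # eq. (7)
    a = y.mean() - b * x.mean()                                    # eq. (8)
    return a, b

For a cross-check, numpy.polyfit(x, y, 1) returns the same slope and intercept.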

Wavelet implementation of the MOS

The NWP uncertainties of the rainfall forecast are due to non-stationary, nonlinear and dynamic effects, as stated by Todini [18]. As the solar radiation forecast is also a function of the cloud cover, it is probably subject to the same underlying effects. For non-stationary signals, the short-time Fourier transform (in practice computed with the Fast Fourier Transform, FFT) has the disadvantage that the information concerning the frequency content at a specific time interval can only be obtained with limited uncertainty. By the Heisenberg uncertainty principle, the method's uncertainty in frequency increases if the width of the analyzing time window is small, and its uncertainty in the time location of a particular shape increases if the window width is large [19]. A high resolution in both time and frequency is obtained by the wavelet convolution, also referred to as a mathematical microscope [20], where the width of the analyzing time window is variable within a single transformation. On digital computers the Discrete Wavelet Transform (DWT) has the advantage of reconstructing the decomposed signal with lower uncertainties than the Continuous Wavelet Transform (CWT) [19]. The number of convolutions is also reduced with the DWT, which increases the transform speed. This transform is based on the members of a family of functions [20]. One has to begin with the selection of the wavelet family, e.g. the bi-orthogonal wavelet family, and of one of the mother wavelets within the selected family. While the orthogonal DWT uses the inverse filters for the reconstruction of the signal, the bi-orthogonal transform introduced by Cohen [21] permits the use of distinct filters for the decomposition of the signal and for its reconstruction (Souza [22] citing [21]), in order to obtain symmetric wavelet functions. The mother wavelet function determines the order and specifies the time window, or support length, of the convolution at the first time scale (m = 1). Each mother wavelet also has its own function shape and degrees of freedom [19]. A DWT is accomplished at different time scales (m = 1 … mx), using different functions, the members of a family, which are all specifically related to the mother wavelet function. If at a specific time location the signal shape is similar to the wavelet shape, high wavelet convolution coefficients are obtained. At each of the m time scales the signal is convolved by the DWT with distinct wavelet functions. The daughter wavelet functions ψ_{m,n}(t) (eqn. 1) are the expanded and translated mother wavelet function ψ [19].

\psi_{m,n}(t) = 2^{-m/2} \, \psi(2^{-m} t - n); \quad m, n \in \mathbb{Z}; \; t \in \mathbb{R} \qquad (1)

Where m defines the scaling or expansion of the mother wavelet and n defines the translation of ψ relative to the time t of the time series values of the signal to be analyzed. Due to the expansion, the convolution support lengths are increased by a factor of two from scale m to m+1. For the DWT, the wavelet convolutions are obtained by a filter bank of Finite Impulse Response (FIR) digital filters [19] (Figure 1a). The filter bank separates the signal to be analyzed, by means of low-pass and high-pass filters, into signals with distinct frequency bands. The mother wavelet (Figure 1a, first bk filter) represents the FIR high pass, which separates the highest frequencies appearing within the bandwidth (SL⁻¹ … ∞), where SL is the support length of the mother wavelet. The low-pass filters ck, also named scaling functions, provide at their output the signal with the complementary low-frequency band down to zero frequency. At m = 1, for example, the complementary frequency bandwidth is (0 … SL⁻¹), and at m = 2 its frequency content decreases to (0 … (2SL)⁻¹). The frequency band of the high-pass filter at this scale is ((2SL)⁻¹ … SL⁻¹), and from scale m to (m+1) its bandwidth is reduced by a factor of two. Whereas in the Fourier transform the frequency bins are held constant, in the DWT the energy is held constant in order to obtain nearly complete reconstruction of the original time series signal. The signal details and approximations at distinct time scales, or filter bands, are obtained by the bk and ck filters (Figure 1a). The last scaling function is also known as the father scaling function [20].
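For experimentation with such a filter bank, the PyWavelets package provides a ready-made DWT; the 'bior3.5' mother wavelet, the synthetic signal and the three-scale depth in the sketch below are assumptions made only for illustration.

import numpy as np
import pywt

# Synthetic stand-in for an hourly forecast-error series.
t = np.arange(512)
signal = np.sin(2.0 * np.pi * t / 64.0) + 0.3 * np.random.randn(512)

# Three-scale DWT with a bi-orthogonal mother wavelet: each scale roughly halves
# the coefficient length (the downsampling step) and covers half the frequency
# band of the previous one.
coeffs = pywt.wavedec(signal, 'bior3.5', level=3)   # [cA3, cD3, cD2, cD1]
for label, c in zip(['S(3,n)', 'T(3,n)', 'T(2,n)', 'T(1,n)'], coeffs):
    print(label, len(c))

# The bi-orthogonal reconstruction filters recover the signal from the coefficients.
reconstructed = pywt.waverec(coeffs, 'bior3.5')
print(np.allclose(signal, reconstructed[:len(signal)]))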

The downsampling operation (2↓) after each filter reduces the vector length by two, avoiding a redundant representation of the decomposed signal, and through the upsampling operation (2↑) the signal is reconstructed to its original vector length. The decomposed signal can be represented by the wavelet and scaling coefficient vectors T(m, n) and S(m, n) (Figure 1), or by equal-length, partially reconstructed sub-signals. If, during the reconstruction of the original signal using the inverse filter bank (Figure 1b), only one of these vectors is supplied to its input, the signal corresponding to the supplied vector is reconstructed to the length of the original time series. This wavelet transform is also referred to as the Non-Decimated Wavelet Transform (NDWT), or à trous WT, and its partially reconstructed signal vectors are here named sub-signals. Besides the reconstruction based on the wavelet and scaling coefficients (Figure 1b), with the NDWT one can reconstruct the original signal as the sum of the complete set of sub-signals.


Figure 1 — Wavelet digital filter bank for the decomposition of a signal (a) and its reconstruction (b), where (2↓) stands for the downsampling process and (2↑) stands for the upsampling process.
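As a companion sketch to Figure 1 (Python with PyWavelets; the orthogonal 'db4' wavelet and the synthetic signal are assumptions, and the paper's bi-orthogonal family could be substituted), the non-decimated transform can be used to split a signal into equal-length sub-signals whose sum recovers the original, as described above.

import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(512))        # stand-in for a forecast-error series
level = 3

# Non-decimated (a trous / stationary) transform: no downsampling, so every
# coefficient vector keeps the original signal length.
coeffs = pywt.swt(signal, 'db4', level=level)       # [(cA3, cD3), (cA2, cD2), (cA1, cD1)]

# Partially reconstruct one sub-signal per coefficient vector by zeroing all the
# others; because the inverse transform is linear, the sub-signals sum to the
# original signal.
sub_signals = []
for i in range(level):
    only_detail = [(np.zeros_like(cA), cD if j == i else np.zeros_like(cD))
                   for j, (cA, cD) in enumerate(coeffs)]
    sub_signals.append(pywt.iswt(only_detail, 'db4'))
only_approx = [(cA if j == 0 else np.zeros_like(cA), np.zeros_like(cD))
               for j, (cA, cD) in enumerate(coeffs)]
sub_signals.append(pywt.iswt(only_approx, 'db4'))

print(np.allclose(signal, np.sum(sub_signals, axis=0)))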