Experimental results

Intensity values and texture measures from the co-registered and geo-referenced data sets are used in the algorithm to estimate the forest biomass. The data sets are related to the forest biomass through a classification analysis. The correspondence between the data sets and the ground plots is established with PCI Geomatica software, in which the GPS locations of the ground plots are superimposed on the data sets. For each selected pixel (point), a 5×5-pixel window around the point is used, and the average intensity values of the PRISM image and the three AVNIR channels are computed together with the four texture values of the JERS-1 image. Each selected point is therefore described by a vector of eight attributes: the first four elements are the average intensity values and the last four are the texture measures. These vectors form the feature space. The vectors belonging to the pixels of the ground plots and subplots are used as training patterns in the classification process.
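This feature-extraction step can be sketched as follows; the function and array names are illustrative and not from the original software, and the arrays are assumed to be already co-registered:

```python
import numpy as np

def extract_features(prism, avnir, jers_texture, row, col, win=5):
    """Build the 8-element feature vector for one ground-plot pixel.

    prism        : 2-D array, PRISM intensity image
    avnir        : 3-D array (3, H, W), the three AVNIR channels
    jers_texture : 3-D array (4, H, W), four texture measures of JERS-1
    """
    h = win // 2
    r0, r1 = row - h, row + h + 1
    c0, c1 = col - h, col + h + 1
    # First four elements: mean intensities over the 5x5 window
    intensities = [prism[r0:r1, c0:c1].mean()]
    intensities += [band[r0:r1, c0:c1].mean() for band in avnir]
    # Last four elements: mean texture measures over the same window
    textures = [t[r0:r1, c0:c1].mean() for t in jers_texture]
    return np.array(intensities + textures)  # shape (8,)
```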

The classification analysis is performed with a MLPNN. A multi-layer neural network is made up of sets of neurons assembled in a logical way, constituting several layers. Three distinct types of layers are present in the MLPNN. The input layer is not itself a processing layer but simply a set of neurons acting as source nodes, which supply the input feature vector components to the second layer. Typically, the number of neurons in the input layer equals the dimensionality of the input feature vector. Then there are one or more hidden layers, each comprising a given number of neurons called hidden neurons. Finally, the output layer provides the response of the neural network to the pattern vector submitted to the input layer. The number of neurons in this layer corresponds to the number of classes that the neural network should differentiate (Haykin, 1999; Miller et al., 1995).

The network used in this study is arranged in layers as follows. The number of neurons in the output layer equals the number of classes desired for the classification; here, the network categorizes the image into five classes, so the output layer contains five neurons. The input layer contains eight neurons, corresponding to the number of attributes in the input vectors. The input vector to the network for pixel i of the data sets has the form v_i = {v_i1, v_i2, …, v_i8}, where the first four elements are the intensity values of the PRISM and AVNIR images and the last four elements are the texture measures of the JERS-1 image for a 5×5 window around pixel i of the geo-referenced data sets. After the input layer is determined, the number of hidden layers and the number of neurons in each still need to be decided. An important result, established by the Russian mathematician Kolmogorov in the 1950s, states that any discriminant function can be realized by a three-layer feed-forward neural network (Duda, 2001). Increasing the number of hidden layers can improve the accuracy of the classification, accommodate special requirements of the recognition procedure during training, or enable a practical implementation of the network. However, a network with more than one hidden layer is more prone to being poorly trained than one with a single hidden layer.

Thus, a three-layer neural network with the structure 8-10-5 (eight input neurons, ten hidden neurons and five output neurons) is used to classify the data sets into five classes. Training the neural network involves tuning all the synaptic weights so that the network learns to recognize given patterns or classes of samples sharing similar properties. The learning stage is critical for effective classification, and the success of a neural-network approach depends mainly on this phase. The network is trained using the back-propagation rule (Paola & Schowengerdt, 1995). The training parameters are selected as: momentum 0.9, learning rate 0.1, and number of iterations 2000. The training data consist of 200 patterns of the subplots, selected randomly from the classes such that each class is represented by at least 40 patterns. The set of training patterns is presented repeatedly to the neural network until it has learnt to recognize them. A training pattern is said to have been learnt when the absolute difference between the output of each output neuron and its desired value is less than a given threshold. Indeed, it is pointless to train the network to reach the target outputs 0 or 1, since the sigmoid function never attains its minimum and maximum (Masters, 1993). For classification of the data sets into five classes, the threshold is set to 0.4. The network is considered trained when all training patterns have been learnt. The trained network is then applied to the data sets to classify them into five classes: class 1 Azedarach, class 2 Acorn, class 3 Beech, class 4 Grassland and class 5 None. The resulting classified image is shown in Fig. 4.
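A minimal NumPy sketch of such an 8-10-5 network, trained with on-line back-propagation using the stated momentum 0.9, learning rate 0.1 and learning threshold 0.4, is given below. This is an illustration under simplifying assumptions (bias terms omitted for brevity), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# 8-10-5 network: small random initial weights (biases omitted for brevity)
W1 = rng.normal(scale=0.1, size=(8, 10))
W2 = rng.normal(scale=0.1, size=(10, 5))
dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)  # momentum buffers

lr, momentum, threshold = 0.1, 0.9, 0.4  # values from the text

def forward(x):
    h = sigmoid(x @ W1)       # hidden-layer activations
    return h, sigmoid(h @ W2) # five output activations

def train_epoch(X, T):
    """One pass of on-line back-propagation; returns True when every
    output is within `threshold` of its target for every pattern."""
    global W1, W2, dW1, dW2
    learned = True
    for x, t in zip(X, T):
        h, y = forward(x)
        if np.any(np.abs(y - t) >= threshold):
            learned = False
        # Delta rule for sigmoid units
        d_out = (y - t) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        dW2 = momentum * dW2 - lr * np.outer(h, d_out)
        dW1 = momentum * dW1 - lr * np.outer(x, d_hid)
        W2 += dW2; W1 += dW1
    return learned
```

In such a scheme, `train_epoch` would be repeated (up to the 2000 iterations mentioned above) until it reports that all patterns have been learnt.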


Fig. 4. The classified image with MLPNN.

After classification, the degree of classification accuracy needs to be determined. The most commonly used way of representing the accuracy of a classification is to build a confusion matrix.

The confusion matrix is constructed from a test sample of patterns for each of the five classes. A test set of 105 patterns, based on the ground-truth collection, was randomly selected from the classified image for accuracy assessment. An overall accuracy of 70% and a kappa coefficient of 0.65 are achieved. One reason for misclassification can be poor selection of training areas, so that some training patterns do not accurately reflect the characteristics of their classes. Another can be poor selection of land-cover categories, resulting in classifications that are correct from the point of view of the network but not from that of the user. The classification accuracy can therefore be improved by redefining the training patterns and land-cover categories.
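Both accuracy figures can be derived directly from the confusion matrix; a compact sketch of the standard definitions:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and kappa coefficient from a square confusion
    matrix (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```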

To show that the SAR image texture and the neural network classifier improve the classification accuracy, and hence the forest biomass estimation, we also apply the Maximum Likelihood (ML) classifier using only the intensity values of the PRISM and AVNIR images. An overall classification accuracy of 57% is achieved with the ML classifier; the 70% accuracy of the neural network is significantly better.
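For reference, a Gaussian Maximum Likelihood classifier of the kind used in this comparison can be sketched as below: a standard per-class multivariate Gaussian fit with assignment to the most likely class. This is a generic sketch, not the exact implementation used in the study:

```python
import numpy as np

def ml_classify(X_train, y_train, X):
    """Gaussian ML classifier: fit a mean vector and covariance per class,
    then assign each sample in X to the class with the highest
    log-likelihood. Here the features would be the four optical
    intensities (PRISM + three AVNIR channels)."""
    classes = np.unique(y_train)
    params = []
    for c in classes:
        Xc = X_train[y_train == c]
        mu = Xc.mean(axis=0)
        # Small ridge keeps the covariance invertible
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(Xc.shape[1])
        params.append((mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]))
    # Log-likelihood up to a constant: -0.5 * (log|C| + Mahalanobis distance)
    scores = np.stack([
        -0.5 * (logdet + np.einsum('ij,jk,ik->i', X - mu, icov, X - mu))
        for mu, icov, logdet in params], axis=1)
    return classes[np.argmax(scores, axis=1)]
```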

Compared with the ML classifier, the MLPNN used in this study has the following advantages:

i. It can accept all kinds of numerical inputs, whether or not they conform to a statistical distribution.

ii. It can recognize inputs that are similar to those used to train it. Because the network consists of several layers of neurons, it is tolerant of noise present in the training patterns.

Thus, we can estimate the forest biomass of the classes in the image classified using the SAR image texture and the MLPNN classifier. We also evaluate the biomass of two classes with the allometric equation (15), for both the classic method based on the ML classifier and the proposed method. The results are shown in Table 3.

|                          | Classic method: Acorn | Classic method: Azedarach | Proposed method: Acorn | Proposed method: Azedarach |
|--------------------------|-----------------------|---------------------------|------------------------|----------------------------|
| Area (ha)                | 853.217               | 1129.552                  | 937.312                | 1241.320                   |
| Mean height (m)          | 34                    | 28.5                      | 34                     | 28.5                       |
| Mean DBH (cm)            | 55                    | 45                        | 55                     | 45                         |
| Trees per ha             | 34                    | 23                        | 34                     | 23                         |
| Mean biomass (kg/tree)   | 3272                  | 1861.99                   | 3272                   | 1861.99                    |
| Total biomass (tons)     | 94918.85              | 48374.08                  | 104274.085             | 53160.484                  |

Table 3. Estimated biomass for the classic method and the proposed method using both optical and SAR data.
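The per-class totals in Table 3 follow directly from the tabulated quantities: total biomass = area × tree density × mean per-tree biomass, converted from kilograms to tons. For example, for the Acorn class under the proposed method:

```python
def total_biomass_tons(area_ha, trees_per_ha, mean_kg_per_tree):
    # Class total in tons: area x tree density x per-tree biomass, kg -> tons
    return area_ha * trees_per_ha * mean_kg_per_tree / 1000.0

# Acorn class, proposed method (values from Table 3)
acorn = total_biomass_tons(937.312, 34, 3272)  # ~104274.09 tons, matching Table 3
```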

For the accuracy assessment of the proposed method, Table 4 shows how well the results agree with the ground measurements from Table 1 when the classic method and the proposed method are used for biomass estimation. The root mean square error (RMSE) of the estimated biomass is given for both methods. The RMSE decreases from 5.34 tons with the classic method to 2.17 tons with the proposed method.

| Plot      | Measured: Azedarach | Measured: Acorn | Classic est.: Azedarach | Classic est.: Acorn | Proposed est.: Azedarach | Proposed est.: Acorn |
|-----------|---------------------|-----------------|-------------------------|---------------------|--------------------------|----------------------|
| 1         | 26.712              | 7.42            | 29.13                   | 10.40               | 27.43                    | 9.12                 |
| 2         | 25.960              | 42.575          | 30.40                   | 46.39               | 27.13                    | 41.43                |
| 3         | 25.584              | 10.660          | 18.13                   | 6.43                | 23.32                    | 8.86                 |
| 4         | 26.558              | 17.073          | 22.13                   | 24.32               | 23.16                    | 21.36                |
| 5         | 14.238              | 56.952          | 17.43                   | 66.13               | 15.29                    | 58.56                |
| RMSE      |                     |                 | 4.71                    | 5.97                | 1.97                     | 2.38                 |
| Mean RMSE |                     |                 | 5.34                    |                     | 2.17                     |                      |

Table 4. Accuracy assessment of the classic method and the proposed method using the ground measurements from Table 1 (all biomass values in tons).
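The RMSE figures in Table 4 follow the standard definition and can be reproduced from the per-plot values; for instance, for the Azedarach plots under the proposed method:

```python
import numpy as np

def rmse(measured, estimated):
    m, e = np.asarray(measured, float), np.asarray(estimated, float)
    return float(np.sqrt(np.mean((m - e) ** 2)))

# Azedarach plots, proposed method (tons, from Table 4)
measured  = [26.712, 25.960, 25.584, 26.558, 14.238]
estimated = [27.43, 27.13, 23.32, 23.16, 15.29]
err = rmse(measured, estimated)  # ~1.98 tons (Table 4 reports 1.97)
```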

The results show that combining the optical images and the SAR image texture in a non-linear classifier, the neural network, significantly improves the accuracy of the forest biomass estimation.