Exploiting symmetries

In a given node, the response matrices are determined from the analytical solution (4.19). We need an efficient recipe for decomposing the entering currents into irreps and reconstructing the exiting currents on the faces. Since the only approximation in the procedure concerns the continuity of the partial currents, we need to specify how the partial currents are represented. The simplest choice is a representation by discrete points along the boundary; the minimal number is four, while the maximal number depends on the computer capacity. An alternative is to represent the partial currents by their moments over the faces. Usually the average, first and second moments suffice for the accuracy needed in practice. The representation fixes the number of points we need on a side and the number of points (n) on the node boundary.
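As an illustrative sketch (our own construction, not the method's reference implementation), the moment representation of a partial current on one face can be obtained by projecting onto Legendre polynomials; the quadrature order and function names here are assumptions:

```python
import numpy as np

# Sketch: represent the partial current on one face by its average, first
# and second Legendre moments, using Gauss-Legendre quadrature on the
# face coordinate xi in [-1, 1].
def face_moments(j_of_xi, n_moments=3, n_quad=8):
    xi, w = np.polynomial.legendre.leggauss(n_quad)
    j = j_of_xi(xi)
    moments = []
    for k in range(n_moments):
        p_k = np.polynomial.legendre.Legendre.basis(k)(xi)
        # projection <J, P_k> / <P_k, P_k> with the quadrature weights
        moments.append(np.sum(w * j * p_k) / np.sum(w * p_k * p_k))
    return np.array(moments)

# A spatially flat current has only the average component:
assert np.allclose(face_moments(lambda xi: np.ones_like(xi)), [1.0, 0.0, 0.0])
# A linearly tilted current J(xi) = 1 + 0.2*xi adds a first moment:
assert np.allclose(face_moments(lambda xi: 1.0 + 0.2 * xi), [1.0, 0.2, 0.0])
```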

To project the irreps, we may use (Mackey, 1980) the vectors cos((k - 1)2π/n), k = 1,…,n/2 and sin(2kπ/n), k = 1,…,n/2 (after normalization). The following illustration shows the case with n = 4, i.e. one value per face. In a square node we need the matrix

    Π4 = | 1  1  1  1 |
         | 1 -1  1 -1 |                                  (4.30)
         | 1  0 -1  0 |
         | 0  1  0 -1 |

to project the irreducible components from the side-wise values. As (2.20) shows, irreducible components are linear combinations of the decomposable quantity [20]. The coefficients are given as the rows of the matrix Π4.
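A minimal numerical sketch of the projection (4.30); the face ordering and the sample current values below are our assumptions:

```python
import numpy as np

# The projection matrix (4.30) for a square node, one value per face
# (rows may be normalized afterwards if orthonormality is needed).
PI4 = np.array([[1,  1,  1,  1],
                [1, -1,  1, -1],
                [1,  0, -1,  0],
                [0,  1,  0, -1]], dtype=float)

# Face-wise entering currents (the face ordering is our assumption):
I_faces = np.array([2.0, 1.0, 2.0, 1.0])
I_irrep = PI4 @ I_faces          # irreducible components
# The fully symmetric component is the sum of all faces; the two
# antisymmetric components vanish for this opposite-face-symmetric input:
assert np.allclose(I_irrep, [6.0, 2.0, 0.0, 0.0])
```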

In a regular n-gonal node the response matrix has[21] Ent[(n + 2)/2] free parameters. The response matrix also has to be decomposed into irreps; this is done by a change of basis. Let the response matrix map the entering currents I into the exiting currents J:

J = RI

Multiply this expression by Π from the left:

ΠJ = (ΠRΠ⁻¹) ΠI, (4.31)

and we see that for irreducible representations the response matrix is given by ΠRΠ⁻¹ (for normalized Π this equals ΠRΠ⁺). In a square node:

    R4 = | r   t1  t2  t1 |
         | t1  r   t1  t2 |                              (4.32)
         | t2  t1  r   t1 |
         | t1  t2  t1  r  |

and the irreducible representation of R4 is diagonal:

    | A  0  0  0 |
    | 0  B  0  0 |
    | 0  0  C  0 |
    | 0  0  0  C |

where

A = r + 2t1 + t2,  B = r - 2t1 + t2,  C = r - t2.
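As a quick numerical check (with placeholder values for r, t1, t2, which are our assumptions), the basis change with the matrix of (4.30) indeed diagonalizes (4.32) with diagonal entries A, B, C, C:

```python
import numpy as np

# Illustrative response-matrix entries (placeholder values):
r, t1, t2 = 1.0, 0.3, 0.1
R4 = np.array([[r,  t1, t2, t1],
               [t1, r,  t1, t2],
               [t2, t1, r,  t1],
               [t1, t2, t1, r ]])
PI4 = np.array([[1,  1,  1,  1],
                [1, -1,  1, -1],
                [1,  0, -1,  0],
                [0,  1,  0, -1]], dtype=float)

# The similarity transform of (4.31) block-diagonalizes the response matrix:
R_irrep = PI4 @ R4 @ np.linalg.inv(PI4)
A, B, C = r + 2*t1 + t2, r - 2*t1 + t2, r - t2
assert np.allclose(R_irrep, np.diag([A, B, C, C]))
```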

We summarize the following advantages of applying group theory:

• Irreducible components of various items play a central role in the method. The irreducible representations often have a physical meaning and make the calculations more effective (e. g. matrices transforming one irreducible component into another are diagonal).

• The irreducible representations of a given quantity are linearly independent and that is exploited in the analysis of convergence.

• The usage of linearly independent irreducible components is rather useful in the analysis of the iteration of a numerical process.

• In several problems of practical importance the problem is almost symmetric, with only small perturbations breaking the symmetry; exploiting the symmetric part still makes the calculation more effective.

• It is more efficient to break up a problem into parts and solve each subproblem independently. Results have been reported for operational codes (Gado et al., 1994).

The above considerations dealt with the local symmetries. However, if we decompose the partial currents into irreps, we get a decomposition of the global vector x in equation (4.28) as well. We exploit the linear independence of the irreducible components further on the global scale.

For most physical problems we have a priori knowledge about the solution to a given boundary value problem in the form of smoothness and boundedness. This is brought to bear through the choice of solution space. In the following, we introduce via group theoretical principles the additional information of the particular geometric symmetry of the node. This allows the decomposition of the solution space into irreducible subspaces, and leads, for a given geometry, not only to a rule for choosing the optimum combination of polynomial expansions on the surface and in the volume, but also elucidates the subtle effect that the geometry of the physical system can have on the algorithm for the solution of the associated mathematical boundary problem.

Consider the iteration (4.29) and decompose the iterated vector into irreducible components:

x = Σ_α x^α, (4.34)

where, because of the orthogonality of the irreducible components,

(x^β)⁺ x^α = 0

when α ≠ β. The convergence of the iteration means that

x_{N+k1} - x_{N+k2} → 0 as N → ∞ (4.35)

for any k1, k2. But that entails that, as the iteration proceeds, the difference between two iterated vectors must tend to zero in every irreducible subspace; in other words, the iteration must converge in every irreducible subspace. This requirement may be violated when the iteration process has not been carefully designed.
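A toy sketch of this observation (our construction, not the book's iteration): in the irrep basis the iteration matrix is block diagonal, so each irreducible subspace iterates independently and convergence must hold in each of them separately:

```python
import numpy as np

# Toy iteration x_{k+1} = M x_k, written directly in the irrep basis where
# M is diagonal: one contraction/amplification factor per subspace.
M_irrep = np.diag([0.5, 0.9, 1.1, 1.1])   # placeholder factors, one per irrep
x = np.ones(4)
for _ in range(50):
    x = M_irrep @ x

# Subspaces with |factor| < 1 converge; the others diverge, so the
# iteration as a whole fails even though two subspaces behave well:
assert abs(x[0]) < 1e-10 and abs(x[1]) < 0.01
assert abs(x[2]) > 100.0
```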

Let us assume a method, see (Palmiotti, 1995), in which N basis functions are used to expand the solution along the boundary of a node and M basis functions to expand the solution inside the node. It is reasonable to use an approximation of the same order along each face; hence, in a square node N is a multiple of four. For an Mth order approximation inside the node, the number of free coefficients is (M + 1)(M + 2)/2. It has been shown (Palmiotti, 1995) that an algorithm with a linear (N = 1) approximation along the four faces of the boundary, with 8 free coefficients, did not result in a convergent algorithm unless a quartic (M = 4) polynomial, with 15 free coefficients, was used inside the node.

In such a code each node is considered to be homogeneous in composition. Central to the accuracy of the method are two approximations. In the first, we assume the solution on the boundary surface of the node to be expanded in a set of basis functions (f_i(ξ); i = 1,…,N). In the second, the solution inside the volume is expanded in another set of basis functions (F_j(r); j = 1,…,M). Clearly the independent variable ξ is the boundary limit of the independent variable r.

Any iteration procedure, in principle, connects neighboring nodes through continuity and smoothness conditions. For an efficient numerical algorithm it is therefore desirable to have

i \ Order |  0  |  1  |    2     |   3   |       4
----------|-----|-----|----------|-------|------------------
    1     |  1  |     | x² + y²  |       | x²y², x⁴ + y⁴
    2     |     |     |          |       | x³y - y³x
    3     |     |     | x² - y²  |       | x⁴ - y⁴
    4     |     |     |   xy     |       | x³y + y³x
    5     |     |  x  |          |  x³   |
    6     |     |     |          |  xy²  |
    7     |     |     |          |  x²y  |
    8     |     |  y  |          |  y³   |

Table 6. Irreducible components of at most fourth order polynomials under the symmetries of a square (C4v)

the same number of degrees of freedom (i. e. coefficients in the expansion) on the surface of the node as within the node. With the help of Table 6, for the case of a square node, we compare the required number of coefficients for different orders of polynomial expansion. A linear approximation along the four faces of the square has at least one component in each irreducible subspace. At the same time, the first polynomial contributing to the second irrep is of fourth order. Convergence requires convergence in each subspace; thus the approximation inside the square must be at least of fourth order. There is no linear polynomial approximation that would use the same number of coefficients on the surface as inside the volume. The appropriate choice of order of expansion is thus not straightforward, but it is important to the accuracy of the solution, because a mismatch of degrees of freedom inside and on the surface of the node is likely to lead to a loss of information in the computational step that passes from one node to the next. A lack of convergence has been observed, see (Palmiotti, 1995), in calculations with a square node when using first order polynomials on the surface. A convergent solution is obtained only with fourth or higher order polynomial interpolation inside the node. Similar relationships apply to nodes of other geometry. For a hexagonal node there is no polynomial order at which the number of coefficients on the surface matches the number of coefficients inside the node.
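The coefficient bookkeeping of this paragraph can be tabulated in a few lines (the count of 2 coefficients per face for a linear expansion follows from the text; 8 on a square, 12 on a hexagon):

```python
# An order-M polynomial in two variables has (M + 1)(M + 2)/2 free
# coefficients; a linear expansion uses 2 coefficients per face.
def volume_dof(M):
    return (M + 1) * (M + 2) // 2

surface_square, surface_hex = 4 * 2, 6 * 2

# No polynomial order matches the surface count exactly, for either node:
assert all(volume_dof(M) != surface_square for M in range(10))
assert all(volume_dof(M) != surface_hex for M in range(10))

# The first convergent choices reported in (Palmiotti, 1995):
assert volume_dof(4) == 15   # quartic inside the square node
assert volume_dof(6) == 28   # sixth order inside the hexagonal node
```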

In a hexagonal node, (Palmiotti, 1995) found that the first convergent solution with a linear approximation on the surface requires at least a sixth order polynomial expansion within the node. Thus, with a linear approximation on the surface of a square node, a third order polynomial within the node does not lead to a convergent solution, although the number of coefficients inside is greater than the number on the surface. In the case of the regular hexagonal node, a convergent solution is obtained only with a sixth order polynomial expansion in the node, although both a fourth and a fifth order polynomial already have more coefficients inside the node than on the surface. It appears that some terms of the polynomial expansion carry less information than others, and are thus superfluous in the computational algorithm. If these terms can be "filtered out", a more efficient and convergent solution should result. The explanation follows immediately from the decomposition of the trial functions inside the volume and on the boundary: in both the square and the hexagonal node, a first order approximation on the boundary already furnishes every irreducible subspace, whereas the interpolating polynomials inside V do so only at surprisingly high order.