On the Use of Sparse Direct Solver in a Projection Method for Generalized Eigenvalue Problems Using Numerical Integration
Takamitsu Watanabe and Yusaku Yamamoto
Dept. of Computational Science & Engineering
Nagoya University
Outline
Background
Objective of our study
Projection method for generalized eigenvalue problems using numerical integration
Application of the sparse direct solver
Numerical results
Conclusion
Background
Generalized eigenvalue problems arise in quantum chemistry and structural engineering. [Figure: eigenvalues on the real axis, with a specified interval]
Problem characteristics
A and B are large and sparse.
A is real symmetric and B is s.p.d.
Eigenvalues are real.
Eigenvalues in a specified interval are often needed (e.g., those around the HOMO and LUMO levels). Given A and B, find λ and x such that Ax = λBx.
Background (cont’d)
A projection method using numerical integration
Sakurai and Sugiura, A projection method for generalized eigenvalue problems,
J. Comput. Appl. Math. (2003)
Reduce the original problem to a small generalized eigenvalue problem within a specified region in the complex plane.
By solving the small problem, the eigenvalues lying in the region can be obtained.
The main part of computation is to solve multiple linear simultaneous equations.
Suited for parallel computation. [Figure: the original problem is reduced to a small generalized eigenvalue problem within the region]
Objective of our study
Previous approach
Solve the linear simultaneous equations by an iterative method.
The number of iterations needed for convergence differs from one set of simultaneous equations to another.
This brings about load imbalance between processors, decreasing parallel efficiency.
Our study
Solve the linear simultaneous equations by a sparse direct solver without pivoting.
Load balance will be improved since the computational times are the same for all linear simultaneous equations.
Projection method for generalized eigenvalue problems using numerical integration
Suppose that Ax = λBx has n distinct eigenvalues λ_1, …, λ_n, and that we need the m eigenvalues λ_1, …, λ_m that lie inside a closed curve Γ. Using two arbitrary complex vectors u and v, define a complex function
f(z) = u^H (zB − A)^{−1} v.
Then, f(z) can be expanded as follows:
f(z) = Σ_{j=1}^{n} ν_j / (z − λ_j) + g(z),   ν_j ∈ C,   g(z): polynomial in z.
Projection method for generalized eigenvalue problems using numerical integration (cont’d)
Further define the moments by
μ_k = (1/(2πi)) ∮_Γ (z − γ)^k f(z) dz   (k = 0, 1, …, 2m − 1),
and two m × m Hankel matrices by
H_m = [μ_{i+j−2}],   H_m^< = [μ_{i+j−1}]   (i, j = 1, …, m).
Theorem: λ_1, …, λ_m are recovered as λ_j = γ + ζ_j, where ζ_1, …, ζ_m are the m roots of det(H_m^< − ζ H_m) = 0.
The original problem has been reduced to a small problem through a contour integral.
Projection method for generalized eigenvalue problems using numerical integration (cont’d)
Path of integration
Set the path of integration Γ to a circle with center γ and radius ρ.
Approximate the integral by the N-point trapezoidal rule: with z_j = γ + ρ e^{2πij/N},
μ_k ≈ (1/N) Σ_{j=0}^{N−1} (z_j − γ)^{k+1} f(z_j).
Computation of the moments
The function values f(z_j) have to be computed for each z_j.
Solution of N independent sets of linear simultaneous equations (z_j B − A) y_j = v is necessary (N = 64 to 128).
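The quadrature-and-Hankel procedure above can be sketched compactly. The toy problem below (B = I, hand-placed eigenvalues, a unit circle centered at the origin) is an illustrative assumption, not the matrices or code used in this work; it recovers the two eigenvalues lying inside the circle:

```python
# Sketch of the projection method: trapezoidal-rule moments on a circle,
# then the small Hankel pencil. Test problem is assumed for illustration.
import numpy as np

n, m, N = 8, 2, 64                 # matrix size, eigenvalues sought, quadrature points
gamma, rho = 0.0, 1.0              # circle center and radius

rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
lam = np.array([-3.0, -2.0, -0.5, 0.5, 2.0, 3.0, 4.0, 5.0])
A = Q @ np.diag(lam) @ Q.T         # symmetric A; B = I, so eigenvalues are lam
B = np.eye(n)
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# mu_k ~ (1/N) sum_j (z_j - gamma)^(k+1) f(z_j),  z_j = gamma + rho*e^(2*pi*i*j/N)
mu = np.zeros(2 * m, dtype=complex)
for j in range(N):
    w = rho * np.exp(2j * np.pi * j / N)            # w = z_j - gamma
    y = np.linalg.solve((gamma + w) * B - A, v)     # one linear system per point
    fz = u @ y                                      # f(z_j) = u^H (z_j B - A)^{-1} v
    mu += w ** np.arange(1, 2 * m + 1) * fz / N

# Hankel pencil H_m^< - zeta H_m; lambda_j = gamma + zeta_j
H  = np.array([[mu[i + j]     for j in range(m)] for i in range(m)])
Hs = np.array([[mu[i + j + 1] for j in range(m)] for i in range(m)])
zeta = np.linalg.eigvals(np.linalg.solve(H, Hs))
eigs = np.sort(gamma + zeta.real)
print(eigs)                        # the two eigenvalues inside the unit circle
```

Because the integrand is analytic and periodic on the circle, the trapezoidal rule converges exponentially in N, which is why N = 64 already gives essentially exact moments here.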
Application of the sparse direct solver
For a sparse s.p.d. matrix, the sparse direct solver provides an efficient way of solving the linear simultaneous equations.
The coefficient matrix z_j B − A is a sparse complex symmetric matrix (A and B: sparse symmetric matrices, z_j: a complex number).
We adopt this approach by extending the sparse direct solver to deal with complex symmetric matrices.
The sparse direct solver
Characteristics
Reduce the computational work and memory requirements of the Cholesky factorization by exploiting the sparsity of the matrix.
Stability is guaranteed when the matrix is s.p.d.
Efficient parallelization techniques are available.
Phases: ordering, symbolic factorization, Cholesky factorization, triangular solution.
Ordering: find a permutation of rows/columns that reduces computational work and memory requirements.
Symbolic factorization: estimate the computational work and memory requirements, and prepare data structures to store the Cholesky factor.
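As a rough illustration of these phases (not the authors' solver), SciPy's SuperLU interface exposes a fill-reducing ordering and lets the diagonal-pivot threshold be set to zero, which mimics a factorization whose pivot order is fixed by the sparsity structure alone; the Laplacian test matrix and the quadrature point z below are assumptions:

```python
# Sketch: sparse direct solve of (z B - A) y = v with a fill-reducing ordering
# and no threshold pivoting, via SciPy's SuperLU wrapper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 200
# Stand-in sparse symmetric A (1-D Laplacian) and s.p.d. B (identity).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
B = sp.identity(n, format='csc')

z = 1.5 + 0.5j                      # one quadrature point off the real axis
M = (z * B - A).tocsc()             # sparse complex symmetric matrix z B - A

# MMD_AT_PLUS_A: minimum-degree ordering on the symmetric structure A + A^T;
# diag_pivot_thresh=0.0: always accept the diagonal pivot (no row swaps).
lu = splu(M, permc_spec='MMD_AT_PLUS_A', diag_pivot_thresh=0.0)

v = np.ones(n, dtype=complex)
y = lu.solve(v)
residual = np.linalg.norm(M @ y - v) / np.linalg.norm(v)
print(residual)                     # small residual: the solve is accurate here
```

Since Im(z) ≠ 0 while the eigenvalues of (A, B) are real, z B − A is guaranteed nonsingular at every quadrature point.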
Extension of the sparse direct solver to complex symmetric matrices
Algorithm
Extension is straightforward by using the Cholesky factorization for complex symmetric matrices.
Advantages such as reduced computational work, reduced memory requirements and parallelizability are carried over.
Accuracy and stability
Theoretically, pivoting is necessary when factorizing complex symmetric matrices.
Since our algorithm does not incorporate pivoting, accuracy and stability are not guaranteed.
We examine the accuracy and stability experimentally by comparing the results with those obtained using Gaussian elimination with partial pivoting (GEPP).
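The factorization in question uses the plain transpose, A = L D L^T, not the conjugate transpose. The routine below is a pivotless textbook sketch of that idea on a small dense matrix, compared against GEPP; it is not the solver used in the experiments, and the random test matrix is an assumption:

```python
# Toy pivotless L D L^T factorization for a complex *symmetric* (not Hermitian)
# matrix, checked against NumPy's partially pivoted LU solve (GEPP).
import numpy as np
import scipy.linalg as sla

def ldlt_complex_symmetric(A):
    """Pivotless A = L D L^T, L unit lower triangular, D diagonal."""
    n = A.shape[0]
    L = np.eye(n, dtype=complex)
    d = np.zeros(n, dtype=complex)
    for k in range(n):
        d[k] = A[k, k] - (L[k, :k] ** 2) @ d[:k]          # note ^2, not |.|^2
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ (d[:k] * L[k, :k])) / d[k]
    return L, d

rng = np.random.default_rng(1)
n = 50
S = rng.standard_normal((n, n))
A = S + S.T                          # real symmetric A (take B = I)
z = 2.0 + 1.0j                       # quadrature point off the real axis
M = z * np.eye(n) - A                # complex symmetric, nonsingular (Im z != 0)

L, d = ldlt_complex_symmetric(M)
b = rng.standard_normal(n).astype(complex)
y = sla.solve_triangular(L, b, lower=True, unit_diagonal=True)
x = sla.solve_triangular(L.T, y / d, lower=False, unit_diagonal=True)

x_gepp = np.linalg.solve(M, b)       # reference: LU with partial pivoting
rel_diff = np.linalg.norm(x - x_gepp) / np.linalg.norm(x_gepp)
print(rel_diff)                      # the two solutions agree closely here
```

For this class of matrices the imaginary part of z B − A equals Im(z)·B, which is definite, so the pivots stay safely away from zero even without pivoting; this is consistent with the O(1) growth factors reported below.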
Numerical results
Matrices used in the experiments: BCSSTK12 and BCSSTK13 (from the Harwell-Boeing Library) and FMO.
For each matrix, we solve the equations with the sparse direct solver (with minimum-degree (MD) and nested-dissection (ND) ordering) and GEPP.
We compare the computational time and accuracy of the eigenvalues.
Computational time
Computational time (sec.) for one set of linear simultaneous equations, and speedup (PowerPC G5, 2.0 GHz). [Table: BCSSTK12, BCSSTK13, FMO]
The sparse direct solver is two to over one hundred times faster than GEPP, depending on the nonzero structure.
Accuracy of the eigenvalues (BCSSTK12)
Example of an interval containing 4 eigenvalues. [Figures: distribution of the eigenvalues and the specified interval; relative errors in the eigenvalues for each algorithm (N = 64)]
The errors were of the same order for all three solvers.
Also, the growth factor for the sparse solver was O(1).
Accuracy of the eigenvalues (BCSSTK13)
Example of an interval containing 3 eigenvalues. [Figures: distribution of the eigenvalues and the specified interval; relative errors in the eigenvalues for each algorithm (N = 64)]
The errors were of the same order for all three solvers.
Accuracy of the eigenvalues (FMO)
Example of an interval containing 4 eigenvalues. [Figures: distribution of the eigenvalues and the specified interval; relative errors in the eigenvalues for each algorithm (N = 64)]
The errors were of the same order for all three solvers.
Conclusion
Summary of this study
We applied a complex symmetric version of the sparse direct solver to a projection method for generalized eigenvalue problems using numerical integration.
The sparse solver succeeded in solving the linear simultaneous equations stably and accurately, producing eigenvalues that are as accurate as those obtained by GEPP.
Future work
Apply our algorithm to larger matrices arising from quantum chemistry applications.
Construct a hybrid method that uses an iterative solver when the growth factor becomes too large.
Parallelize the sparse solver to enable more than N processors to be used.