Sparse Matrix Structures and Higher Derivatives
In this talk we consider solving nonlinear systems of equations and the unconstrained minimization problem using iterative methods from
the Halley class. The methods in this class in general have a local,
third-order rate of convergence. In the unconstrained optimization case
these methods require both the second and the third derivative. We say that
the sparsity pattern of the third derivative (or tensor) is induced by
the sparsity pattern of the Hessian matrix. The sparsity pattern of
the tensor can fairly easily be derived from the indices of the nonzero
elements of the Hessian. However, few data structures for sparse
symmetric matrices store both the row and column index of a
nonzero element. We will discuss data structures for matrices from which
the indices of the nonzero elements of the tensor can be computed:
compressed storage formats using array-of-arrays (jagged) data
structures for storing the second and third derivatives of a multivariate
function. Methods in the Halley class require the solution of one or two
systems of linear equations, and we briefly discuss suitable
termination criteria for the (inner) iterative method to achieve a cubic
rate of convergence, as well as direct methods. We also discuss
techniques to compute the derivatives using coloring schemes to compute
sparse Jacobian and Hessian matrices.
Trond Steihaug, University of Bergen, Norway
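
The abstract does not spell out the Halley class beyond its third-order convergence. As a one-dimensional illustration only (not the multivariate method of the talk), the classical Halley iteration for a scalar root-finding problem can be sketched as:

```python
# Illustrative sketch: classical Halley iteration in one variable, a member
# of the Halley class with local third-order (cubic) rate of convergence.
# The multivariate version in the talk replaces f', f'' by the Jacobian or
# Hessian and the third-derivative tensor; this scalar form is an assumption
# chosen for illustration.
def halley(f, df, d2f, x, tol=1e-14, max_iter=20):
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        # Halley step: x_{k+1} = x_k - 2 f f' / (2 f'^2 - f f'')
        step = 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the real root of f(x) = x^3 - 2, i.e. the cube root of 2.
root = halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, 1.0)
```

Starting from x = 1, the iterate is accurate to machine precision after only a handful of steps, reflecting the cubic local convergence.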
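
The abstract states that the tensor's sparsity pattern is induced by the Hessian's. One common way to formalize this, assumed here since the abstract gives no details, is that T[i,j,k] can be structurally nonzero only when (i,j), (i,k), and (j,k) are all nonzeros of the symmetric Hessian, i.e. the triple forms a triangle in the Hessian's adjacency structure:

```python
# Hypothetical sketch (details assumed, not taken from the talk): derive the
# structurally nonzero triples of the third-derivative tensor from the set of
# nonzero index pairs of a symmetric Hessian, keeping i <= j <= k.
def induced_tensor_pattern(hessian_pairs):
    # hessian_pairs: set of (i, j) with i <= j, including diagonal (i, i)
    nz = {tuple(sorted(p)) for p in hessian_pairs}
    n = max(j for _, j in nz) + 1
    triples = []
    for i in range(n):
        for j in range(i, n):
            if (i, j) not in nz:
                continue
            for k in range(j, n):
                # T[i,j,k] admitted only if all three index pairs are nonzero
                if (i, k) in nz and (j, k) in nz:
                    triples.append((i, j, k))
    return triples

# Arrow-shaped Hessian on 4 variables: diagonal plus first row/column.
pairs = {(0, 0), (1, 1), (2, 2), (3, 3), (0, 1), (0, 2), (0, 3)}
triples = induced_tensor_pattern(pairs)
```

For this arrow pattern the induced tensor pattern contains, for example, (0, 1, 1) but not (1, 2, 3), since (1, 2) is not a Hessian nonzero.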
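
The array-of-arrays (jagged) storage mentioned in the abstract can be sketched minimally as follows, with all naming and layout details assumed: each row of the lower triangle of a symmetric matrix carries its own arrays of column indices and values.

```python
# Minimal jagged (array-of-arrays) sketch for a sparse symmetric matrix;
# the class name and layout are assumptions for illustration, not the
# talk's actual data structure. Only the lower triangle is stored, and
# symmetric access maps (i, j) with i < j onto (j, i).
class JaggedSymmetric:
    def __init__(self, n):
        self.cols = [[] for _ in range(n)]   # column indices, one array per row
        self.vals = [[] for _ in range(n)]   # matching values, one array per row

    def insert(self, i, j, v):
        if i < j:                            # keep row index >= column index
            i, j = j, i
        self.cols[i].append(j)
        self.vals[i].append(v)

    def get(self, i, j):
        if i < j:
            i, j = j, i
        for jj, v in zip(self.cols[i], self.vals[i]):
            if jj == j:
                return v
        return 0.0

H = JaggedSymmetric(3)
H.insert(0, 0, 4.0)
H.insert(2, 0, 1.5)
```

Because each row's arrays grow independently, this layout makes the per-row nonzero column indices directly available, which is what the triangle test for the induced tensor pattern needs.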
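
The coloring schemes mentioned at the end of the abstract are not specified; a standard Curtis-Powell-Reid-style column compression, sketched here under that assumption, groups structurally orthogonal columns (columns whose nonzero row sets do not intersect) so each group can be estimated with one extra function evaluation:

```python
# Hedged sketch of greedy column compression for sparse Jacobian estimation.
# Columns sharing a color have disjoint nonzero row sets, so one finite
# difference along the sum of their unit vectors recovers all their entries.
def greedy_column_coloring(col_rows):
    # col_rows[j] = set of row indices where column j is structurally nonzero
    colors = []        # colors[c] = union of row sets already covered by color c
    assignment = []    # assignment[j] = color given to column j
    for rows in col_rows:
        for c, covered in enumerate(colors):
            if not (rows & covered):     # structurally orthogonal to the group
                covered |= rows
                assignment.append(c)
                break
        else:
            colors.append(set(rows))     # open a new color
            assignment.append(len(colors) - 1)
    return assignment

# Tridiagonal sparsity pattern on 5 columns: three colors suffice,
# independent of the matrix dimension.
pattern = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4}]
coloring = greedy_column_coloring(pattern)
```

For the tridiagonal example the greedy scheme assigns colors [0, 1, 2, 0, 1], so the full Jacobian can be estimated from three differenced evaluations rather than five.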