## Numerical methods

1 Chapter 0: Preliminaries
1.1 Contents of This Chapter
• Analysis Versus Numerical Analysis
• Describes how numerical analysis differs from analytical analysis and shows where each has special advantages. It briefly lists the topics that will be covered in later chapters.
• Computers and Numerical Analysis
• Explains why computers and numerical analysis are intimately related and describes several ways by which a computer can be employed in carrying out the procedures.
• An Illustrative Example
• Tells how a typical problem is solved, using a special program called a computer algebra system to carry out the steps.
• Kinds of Errors in Numerical Procedures
• Examines the important topic of the accuracy of computations and the different sources of errors. Errors that are due to the way that computers store numbers are examined in some detail.
• Interval Arithmetic
• Discusses one way to determine the effect of imprecise values in the equations that are used to model a problem.
• Measuring the Efficiency of Numerical Procedures
• Tells how one can compare the accuracy of different methods, all of which can accomplish a given task, and how they differ in their use of computing resources.
1.1.1 Analysis Versus Numerical Analysis
1.1.2 Computers and Numerical Analysis
1.1.3 An Illustrative Example
1.1.4 Kinds of Errors in Numerical Procedures
1.1.5 Interval Arithmetic
1.1.6 Measuring the Efficiency of Numerical Procedures
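The chapter's point that some errors come purely from the way computers store numbers is easy to demonstrate. A minimal sketch (Python, not from the text): most decimal fractions have no exact binary representation, so even a short sum accumulates round-off error.

```python
import sys

# Ten copies of 0.1 do not sum to exactly 1.0, because 0.1 has no
# exact binary floating-point representation.
total = sum(0.1 for _ in range(10))
print(total)           # close to, but not exactly, 1.0
print(total == 1.0)    # False

# Machine epsilon: the gap between 1.0 and the next representable double,
# a basic measure of the precision limit discussed in this chapter.
print(sys.float_info.epsilon)
```

This is the representation error examined in "Kinds of Errors in Numerical Procedures"; interval arithmetic is one way to track how such small errors propagate.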
2 Chapter 1: Solving Nonlinear Equations
2.1 Contents of This Chapter
• Interval Halving (Bisection)
• Describes a method that is very simple and foolproof but not very efficient. We examine how the error decreases as the method continues.
• Linear Interpolation Methods
• Tells how approximating the function in the vicinity of the root with a straight line can find a root more efficiently. It has a better "rate of convergence."
• Newton's Method
• Explains a still more efficient method that is very widely used, but there are pitfalls that you should know about. Complex roots can be found if complex arithmetic is employed.
• Muller's Method
• Approximates the function with a quadratic polynomial that fits the function better than a straight line. This significantly improves the rate of convergence over linear interpolation.
• Fixed-Point Iteration: x = g(x) Method
• Uses a different approach: the function f(x) is rearranged to an equivalent form, x = g(x). A starting value, x0, is substituted into g(x) to give a new x-value, x1. This in turn is used to get another x-value. If the function g(x) is properly chosen, the successive values converge. The method has important theoretical implications.
• Gives a brief description of five other methods that can be used to find the roots of polynomials. Three of these have the advantage of not requiring a starting value to obtain a root.
• Nonlinear Systems
• Applies Newton's method to systems of nonlinear equations, a much harder problem than finding the root of a single equation.
2.1.1 Interval Halving (Bisection)
2.1.2 Linear Interpolation Methods
2.1.3 Newton's Method
2.1.4 Muller's Method
2.1.5 Fixed-Point Iteration: x=g(x) Method
2.1.6 Multiple Roots
2.1.7 Nonlinear Systems
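The contrast between the foolproof-but-slow and the fast-but-fussy methods above can be sketched in a few lines. This is an illustrative Python sketch, not the book's code; the function names `bisect` and `newton` and the test equation x*x - 2 = 0 are my own choices.

```python
def bisect(f, a, b, tol=1e-10):
    """Interval halving: f must change sign on [a, b].

    Each step halves the bracketing interval, so the error bound
    is cut in half per iteration (slow but guaranteed)."""
    fa = f(a)
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return (a + b) / 2

def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton's method: quadratic convergence near a simple root,
    but it needs f' and a good starting value (the 'pitfalls')."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x - 2                    # root is sqrt(2)
print(bisect(f, 1.0, 2.0))                 # ~1.4142135...
print(newton(f, lambda x: 2 * x, 1.0))     # ~1.4142135..., far fewer steps
```

Bisection needs roughly 34 halvings to reach the tolerance here; Newton reaches full double precision in about five iterations, which is the "rate of convergence" difference the chapter develops.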
3 Chapter 2: Solving Sets of Equations
3.1 Contents of This Chapter
• Matrices and Vectors
• Reviews concepts of matrices and vectors in preparation for their use in this chapter.
• Elimination Methods
• Describes two classical methods that change a system of equations to forms that allow getting the solution by back-substitution, and shows how the errors of the solution can be minimized. These techniques are also the best way to find the determinant of a matrix, and they arrive at forms that permit an efficient solution when the right-hand side is changed. Relations for the number of arithmetic operations for each of the methods are developed.
• The Inverse of a Matrix and Matrix Pathology
• Shows how the inverse of a matrix can be computed. It shows when a matrix cannot be inverted and tells of situations where no unique solution exists to a system of equations.
• Ill-Conditioned Systems
• Explores systems for which getting the solution with accuracy is very difficult. A number, the condition number, is a measure of such difficulty; a property of a matrix, called its norm, is used to compute its condition number. A way to improve an inaccurate solution is described.
• Iterative Methods
• Describes how linear systems can be solved in an entirely different way: by beginning with an initial estimate of the solution and performing computations that eventually arrive at the correct solution. It tells how the convergence can be accelerated. Iterative methods are particularly important in solving systems that have few nonzero coefficients.
3.1.1 Matrices and Vectors
3.1.2 Elimination Methods
3.1.3 The Inverse of a Matrix and Matrix Pathology
3.1.4 Ill-Conditioned Systems
3.1.5 Iterative Methods
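The elimination-then-back-substitution pattern described above can be sketched compactly. This assumes Gaussian elimination with partial pivoting as the classical method in question; the Python code, the name `solve`, and the 2x2 example are illustrative, not the book's.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution.

    A is a list of row lists and b a list of right-hand sides;
    both are copied so the caller's data is untouched."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: bring the largest available pivot to row k
        # to keep the multipliers small and limit round-off growth.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x + y = 3, x + 3y = 5  ->  x = 0.8, y = 1.4
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))
```

Counting the loops gives the operation totals the chapter derives: elimination costs on the order of n^3/3 multiplications, while back-substitution costs only about n^2/2, which is why a factored form can be reused cheaply when only the right-hand side changes.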
4 Chapter 3: Interpolation and Curve Fitting
4.1 Contents of This Chapter
• Interpolating Polynomials
• Describes a straightforward but computationally awkward way to fit a polynomial to a set of data points so that an interpolated value can be computed. The cost of getting the interpolant with a desired accuracy is reduced by a variant, Neville's method.
• Divided Differences
• Provides a more efficient way to construct an interpolating polynomial, one that allows the degree of the polynomial to be changed readily. If the data are at evenly spaced x-values, there is some simplification.
• Spline Curves
• Using special piecewise polynomials, splines, one can fit polynomials to data more accurately than with a single interpolating polynomial. At the expense of added computational effort, some important problems that arise with interpolating polynomials are overcome.
• Bezier Curves and B-Spline Curves
• Describes modern techniques for constructing smooth curves that are widely used in computer graphics. They are not interpolating polynomials but are closely related.
• Least-Squares Approximations
• Describes methods by which polynomials and other functions can be fitted to data that are subject to the errors likely in experiments. These approximations are widely used to analyze experimental observations.
4.1.1 Interpolating Polynomials
4.1.2 Divided Differences
4.1.3 Spline Curves
4.1.4 Bezier Curves and B-Spline Curves
4.1.5 Least-Squares Approximations
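Neville's method mentioned above can be sketched in a few lines: it evaluates the interpolating polynomial at a point by repeatedly combining neighboring estimates, never forming the polynomial's coefficients. This Python sketch and its sample points are illustrative, not from the text.

```python
def neville(xs, ys, x):
    """Evaluate at x the polynomial interpolating the points (xs[i], ys[i]).

    Each pass combines interpolants over adjacent point sets via
    P[i] = ((x - xs[i+level]) * P[i] + (xs[i] - x) * P[i+1])
           / (xs[i] - xs[i+level]),
    so the table collapses to the full-degree interpolant in p[0]."""
    p = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i] +
                    (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]

# Three points on y = x^2; the quadratic interpolant reproduces it exactly.
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # -> 2.25
```

A practical advantage, matching the "change the degree readily" remark about divided differences: adding one more data point only appends one diagonal of work rather than forcing a full refit.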
5 Chapter 5: Numerical Differentiation and Integration
5.1 Contents of This Chapter
• Differentiation with a Computer
• Employs the interpolating polynomials of Chapter 3 to derive formulas for getting derivatives. These can be applied to functions known explicitly as well as to those whose values are found in a table. Based on a consideration of the error term, a method for getting improved estimates can be found, a procedure called Richardson extrapolation.
• Numerical Integration - The Trapezoidal Rule
• Approximates the integrand function with a linear interpolating polynomial to derive a very simple but important formula for numerically integrating functions between limits. The method can be applied to tabular data. Romberg integration, an extrapolation technique, can improve the accuracy.
• Simpson's Rule
• Develops more accurate integration formulas based on approximating the integrand with quadratic or cubic polynomials.
• Describes a way to reduce the number of function evaluations when Simpson's rule is used. A kind of binary search is used to locate subregions where the size of the intervals can be larger. An interesting bookkeeping problem is involved.
• Gives the development of an integration method that uses fewer function evaluations by properly selecting the points where the value of the function is computed. The section introduces a representative family of orthogonal polynomials, the Legendre polynomials.
• Gives the details for using a spline approximation to compute derivatives and integrals.
5.1.1 Differentiation with a Computer
5.1.2 Numerical Integration - The Trapezoidal Rule
5.1.3 Simpson's Rule
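The derivative formulas and Richardson extrapolation described above can be sketched as follows. This is an illustrative Python sketch assuming the standard central-difference formula; the function names and the sin example are my own.

```python
import math

def central_diff(f, x, h):
    """Central-difference estimate of f'(x); the error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Richardson extrapolation: combine estimates at h and h/2 so the
    leading h^2 error terms cancel, leaving an O(h^4) result:
        D = D(h/2) + (D(h/2) - D(h)) / 3."""
    d_h = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2)
    return d_h2 + (d_h2 - d_h) / 3

# True derivative of sin at 1.0 is cos(1) = 0.540302...
print(central_diff(math.sin, 1.0, 0.1))  # error around 1e-3
print(richardson(math.sin, 1.0, 0.1))    # several digits better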
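The trapezoidal and Simpson's rules themselves can be sketched compactly, assuming the standard composite forms over n equal subintervals. The Python code, names, and the sin example are illustrative, not the book's.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: piecewise-linear approximation of f,
    with error O(h^2) for h = (b - a) / n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule: piecewise-quadratic approximation,
    with error O(h^4); n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return h * s / 3

# Integral of sin(x) from 0 to pi is exactly 2.
print(trapezoid(math.sin, 0.0, math.pi, 16))  # off by about 6e-3
print(simpson(math.sin, 0.0, math.pi, 16))    # off by about 2e-5
```

With the same 17 function evaluations, Simpson's rule is several orders of magnitude more accurate here, which is the payoff of the higher-degree interpolating polynomial; Gaussian quadrature, described later in the chapter, improves the trade-off further by also choosing where to evaluate f.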