《数字信号处理》 (Digital Signal Processing) teaching reference material: Numerical Recipes in C, The Art of Scientific Computing, Second Edition, Chapter 9, Root Finding and Nonlinear Sets of Equations, 9.0 Introduction

Chapter 9. Root Finding and Nonlinear Sets of Equations

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

9.0 Introduction

We now consider that most basic of tasks, solving equations numerically. While most equations are born with both a right-hand side and a left-hand side, one traditionally moves all terms to the left, leaving

f(x) = 0    (9.0.1)

whose solution or solutions are desired. When there is only one independent variable, the problem is one-dimensional, namely to find the root or roots of a function.

With more than one independent variable, more than one equation can be satisfied simultaneously. You likely once learned the implicit function theorem which (in this context) gives us the hope of satisfying N equations in N unknowns simultaneously. Note that we have only hope, not certainty. A nonlinear set of equations may have no (real) solutions at all. Contrariwise, it may have more than one solution. The implicit function theorem tells us that "generically" the solutions will be distinct, pointlike, and separated from each other. If, however, life is so unkind as to present you with a nongeneric, i.e., degenerate, case, then you can get a continuous family of solutions. In vector notation, we want to find one or more N-dimensional solution vectors x such that

f(x) = 0    (9.0.2)

where f is the N-dimensional vector-valued function whose components are the individual equations to be satisfied simultaneously.

Don't be fooled by the apparent notational similarity of equations (9.0.2) and (9.0.1). Simultaneous solution of equations in N dimensions is much more difficult than finding roots in the one-dimensional case. The principal difference between one and many dimensions is that, in one dimension, it is possible to bracket or "trap" a root between bracketing values, and then hunt it down like a rabbit. In multidimensions, you can never be sure that the root is there at all until you have found it.

Except in linear problems, root finding invariably proceeds by iteration, and this is equally true in one or in many dimensions. Starting from some approximate trial solution, a useful algorithm will improve the solution until some predetermined convergence criterion is satisfied. For smoothly varying functions, good algorithms

will always converge, provided that the initial guess is good enough. Indeed one can even determine in advance the rate of convergence of most algorithms.

It cannot be overemphasized, however, how crucially success depends on having a good first guess for the solution, especially for multidimensional problems. This crucial beginning usually depends on analysis rather than numerics. Carefully crafted initial estimates reward you not only with reduced computational effort, but also with understanding and increased self-esteem. Hamming's motto, "the purpose of computing is insight, not numbers," is particularly apt in the area of finding roots. You should repeat this motto aloud whenever your program converges, with ten-digit accuracy, to the wrong root of a problem, or whenever it fails to converge because there is actually no root, or because there is a root but your initial estimate was not sufficiently close to it.

"This talk of insight is all very well, but what do I actually do?" For one-dimensional root finding, it is possible to give some straightforward answers: You should try to get some idea of what your function looks like before trying to find its roots. If you need to mass-produce roots for many different functions, then you should at least know what some typical members of the ensemble look like. Next, you should always bracket a root, that is, know that the function changes sign in an identified interval, before trying to converge to the root's value.

Finally (this is advice with which some daring souls might disagree, but we give it nonetheless) never let your iteration method get outside of the best bracketing bounds obtained at any stage. We will see below that some pedagogically important algorithms, such as the secant method or Newton-Raphson, can violate this last constraint, and are thus not recommended unless certain fixups are implemented.

Multiple roots, or very close roots, are a real problem, especially if the multiplicity is an even number. In that case, there may be no readily apparent sign change in the function, so the notion of bracketing a root, and maintaining the bracket, becomes difficult. We are hard-liners: we nevertheless insist on bracketing a root, even if it takes the minimum-searching techniques of Chapter 10 to determine whether a tantalizing dip in the function really does cross zero or not. (You can easily modify the simple golden section routine of §10.1 to return early if it detects a sign change in the function. And, if the minimum of the function is exactly zero, then you have found a double root.)

As usual, we want to discourage you from using routines as black boxes without understanding them. However, as a guide to beginners, here are some reasonable starting points:

• Brent's algorithm in §9.3 is the method of choice to find a bracketed root of a general one-dimensional function, when you cannot easily compute the function's derivative. Ridders' method (§9.2) is concise, and a close competitor.

• When you can compute the function's derivative, the routine rtsafe in §9.4, which combines the Newton-Raphson method with some bookkeeping on bounds, is recommended. Again, you must first bracket your root.

• Roots of polynomials are a special case. Laguerre's method, in §9.5, is recommended as a starting point. Beware: Some polynomials are ill-conditioned!

• Finally, for multidimensional problems, the only elementary method is Newton-Raphson (§9.6), which works very well if you can supply a

good first guess of the solution. Try it. Then read the more advanced material in §9.7 for some more complicated, but globally more convergent, alternatives.

Avoiding implementations for specific computers, this book must generally steer clear of interactive or graphics-related routines. We make an exception right now. The following routine, which produces a crude function plot with interactively scaled axes, can save you a lot of grief as you enter the world of root finding.

#include <stdio.h>
#define ISCR 60       /* Number of horizontal and vertical positions in display. */
#define JSCR 21
#define BLANK ' '
#define ZERO '-'
#define YY 'l'
#define XX '-'
#define FF 'x'

void scrsho(float (*fx)(float))
/* For interactive CRT terminal use. Produce a crude graph of the function fx over the
prompted-for interval x1,x2. Query for another plot until the user signals satisfaction. */
{
    int jz,j,i;
    float ysml,ybig,x2,x1,x,dyj,dx,y[ISCR+1];
    char scr[ISCR+1][JSCR+1];

    for (;;) {
        printf("\nEnter x1 x2 (x1=x2 to stop):\n");    /* Query for another plot; quit if x1=x2. */
        scanf("%f %f",&x1,&x2);
        if (x1 == x2) break;
        for (j=1;j<=JSCR;j++)                          /* Fill vertical sides with character 'l'. */
            scr[1][j]=scr[ISCR][j]=YY;
        for (i=2;i<=(ISCR-1);i++) {
            scr[i][1]=scr[i][JSCR]=XX;                 /* Fill top, bottom with character '-'. */
            for (j=2;j<=(JSCR-1);j++)                  /* Fill interior with blanks. */
                scr[i][j]=BLANK;
        }
        dx=(x2-x1)/(ISCR-1);
        x=x1;
        ysml=ybig=0.0;                                 /* Limits will include 0. */
        for (i=1;i<=ISCR;i++) {                        /* Evaluate the function at equal intervals;
                                                          find the largest and smallest values. */
            y[i]=(*fx)(x);
            if (y[i] < ysml) ysml=y[i];
            if (y[i] > ybig) ybig=y[i];
            x += dx;
        }
        if (ybig == ysml) ybig=ysml+1.0;               /* Be sure to separate top and bottom. */
        dyj=(JSCR-1)/(ybig-ysml);
        jz=1-(int) (ysml*dyj);                         /* Note which row corresponds to 0. */
        for (i=1;i<=ISCR;i++) {                        /* Place an indicator at function height and 0. */
            scr[i][jz]=ZERO;
            j=1+(int) ((y[i]-ysml)*dyj);
            scr[i][j]=FF;
        }
        printf(" %10.3f ",ybig);
        for (i=1;i<=ISCR;i++) printf("%c",scr[i][JSCR]);
        printf("\n");
        for (j=(JSCR-1);j>=2;j--) {                    /* Display. */
            printf("%12s"," ");
            for (i=1;i<=ISCR;i++) printf("%c",scr[i][j]);
            printf("\n");
        }
        printf(" %10.3f ",ysml);

        for (i=1;i<=ISCR;i++) printf("%c",scr[i][1]);
        printf("\n");
        printf("%8s %10.3f %44s %10.3f\n"," ",x1," ",x2);
    }
}

CITED REFERENCES AND FURTHER READING:
Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), Chapter 5.
Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America), Chapters 2, 7, and 14.
Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), Chapter 8.
Householder, A.S. 1970, The Numerical Treatment of a Single Nonlinear Equation (New York: McGraw-Hill).

9.1 Bracketing and Bisection

We will say that a root is bracketed in the interval (a, b) if f(a) and f(b) have opposite signs. If the function is continuous, then at least one root must lie in that interval (the intermediate value theorem). If the function is discontinuous, but bounded, then instead of a root there might be a step discontinuity which crosses zero (see Figure 9.1.1). For numerical purposes, that might as well be a root, since the behavior is indistinguishable from the case of a continuous function whose zero crossing occurs in between two "adjacent" floating-point numbers in a machine's finite-precision representation. Only for functions with singularities is there the possibility that a bracketed root is not really there, as for example

f(x) = 1/(x - c)    (9.1.1)

Some root-finding algorithms (e.g., bisection in this section) will readily converge to c in (9.1.1). Luckily there is not much possibility of your mistaking c, or any number x close to it, for a root, since mere evaluation of |f(x)| will give a very large, rather than a very small, result.

If you are given a function in a black box, there is no sure way of bracketing its roots, or of even determining that it has roots. If you like pathological examples, think about the problem of locating the two real roots of equation (3.0.1), which dips below zero only in the ridiculously small interval of about x = π ± 10^(-667).

In the next chapter we will deal with the related problem of bracketing a function's minimum. There it is possible to give a procedure that always succeeds; in essence, "Go downhill, taking steps of increasing size, until your function starts back uphill." There is no analogous procedure for roots. The procedure "go downhill until your function changes sign" can be foiled by a function that has a simple extremum. Nevertheless, if you are prepared to deal with a "failure" outcome, this procedure is often a good first start; success is usual if your function has opposite signs in the limit x → ±∞.