Digital Signal Processing teaching reference (Numerical Recipes in C: The Art of Scientific Computing, Second Edition), Chapter 9.3, Root Finding and Nonlinear Sets of Equations: Van Wijngaarden–Dekker–Brent Method

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. To order Numerical Recipes books or CDROMs, visit http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

    for (j=1;j<=MAXIT;j++) {
        xm=0.5*(xl+xh);
        fm=(*func)(xm);                                      First of two function evaluations
        s=sqrt(fm*fm-fl*fh);                                 per iteration.
        if (s == 0.0) return ans;
        xnew=xm+(xm-xl)*((fl >= fh ? 1.0 : -1.0)*fm/s);      Updating formula.
        if (fabs(xnew-ans) <= xacc) return ans;
        ans=xnew;
        fnew=(*func)(ans);                                   Second of two function evaluations
        if (fnew == 0.0) return ans;                         per iteration.
        if (SIGN(fm,fnew) != fm) {                           Bookkeeping to keep the root
            xl=xm;                                           bracketed on next iteration.
            fl=fm;
            xh=ans;
            fh=fnew;
        } else if (SIGN(fl,fnew) != fl) {
            xh=ans;
            fh=fnew;
        } else if (SIGN(fh,fnew) != fh) {
            xl=ans;
            fl=fnew;
        } else nrerror("never get here.");
        if (fabs(xh-xl) <= xacc) return ans;
    }
    nrerror("zriddr exceed maximum iterations");
} else {
    if (fl == 0.0) return x1;
    if (fh == 0.0) return x2;
    nrerror("root must be bracketed in zriddr.");
}
return 0.0;                                                  Never get here.
}

CITED REFERENCES AND FURTHER READING:

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §8.3.

Ostrowski, A.M. 1966, Solutions of Equations and Systems of Equations, 2nd ed. (New York: Academic Press), Chapter 12.

Ridders, C.J.F. 1979, IEEE Transactions on Circuits and Systems, vol. CAS-26, pp. 979–980. [1]

9.3 Van Wijngaarden–Dekker–Brent Method

While secant and false position formally converge faster than bisection, one finds in practice pathological functions for which bisection converges more rapidly.
These can be choppy, discontinuous functions, or even smooth functions if the second derivative changes sharply near the root. Bisection always halves the interval, while secant and false position can sometimes spend many cycles slowly pulling distant bounds closer to a root. Ridders’ method does a much better job, but it too can sometimes be fooled. Is there a way to combine superlinear convergence with the sureness of bisection?

Yes. We can keep track of whether a supposedly superlinear method is actually converging the way it is supposed to, and, if it is not, we can intersperse bisection steps so as to guarantee at least linear convergence. This kind of super-strategy requires attention to bookkeeping detail, and also careful consideration of how roundoff errors can affect the guiding strategy. Also, we must be able to determine reliably when convergence has been achieved.

An excellent algorithm that pays close attention to these matters was developed in the 1960s by van Wijngaarden, Dekker, and others at the Mathematical Center in Amsterdam, and later improved by Brent [1]. For brevity, we refer to the final form of the algorithm as Brent's method. The method is guaranteed (by Brent) to converge, so long as the function can be evaluated within the initial interval known to contain a root.

Brent's method combines root bracketing, bisection, and inverse quadratic interpolation to converge from the neighborhood of a zero crossing.
While the false position and secant methods assume approximately linear behavior between two prior root estimates, inverse quadratic interpolation uses three prior points to fit an inverse quadratic function (x as a quadratic function of y) whose value at y = 0 is taken as the next estimate of the root x. Of course one must have contingency plans for what to do if the root falls outside of the brackets. Brent's method takes care of all that. If the three point pairs are [a, f(a)], [b, f(b)], [c, f(c)], then the interpolation formula (cf. equation 3.1.1) is

    x = \frac{[y-f(a)][y-f(b)]\,c}{[f(c)-f(a)][f(c)-f(b)]}
      + \frac{[y-f(b)][y-f(c)]\,a}{[f(a)-f(b)][f(a)-f(c)]}
      + \frac{[y-f(c)][y-f(a)]\,b}{[f(b)-f(c)][f(b)-f(a)]}          (9.3.1)

Setting y to zero gives a result for the next root estimate, which can be written as

    x = b + P/Q          (9.3.2)

where, in terms of

    R \equiv f(b)/f(c), \qquad S \equiv f(b)/f(a), \qquad T \equiv f(a)/f(c)          (9.3.3)

we have

    P = S\,[\,T(R-T)(c-b) - (1-R)(b-a)\,]          (9.3.4)

    Q = (T-1)(R-1)(S-1)          (9.3.5)

In practice b is the current best estimate of the root and P/Q ought to be a "small" correction. Quadratic methods work well only when the function behaves smoothly; they run the serious risk of giving very bad estimates of the next root or causing machine failure by an inappropriate division by a very small number (Q ≈ 0). Brent's method guards against this problem by maintaining brackets on the root and checking where the interpolation would land before carrying out the division. When the correction P/Q would not land within the bounds, or when the bounds are not collapsing rapidly enough, the algorithm takes a bisection step. Thus
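To see equations (9.3.2)–(9.3.5) in action, the fragment below computes the next estimate x = b + P/Q and checks that it agrees with evaluating the three-term interpolation formula (9.3.1) directly at y = 0. This is an illustrative sketch, not part of the book's zbrent listing; the sample function x² − 4 and the three points are arbitrary choices:

```c
#include <assert.h>
#include <math.h>

/* Next root estimate from inverse quadratic interpolation,
   written as x = b + P/Q per equations (9.3.2)-(9.3.5). */
static double iqi_step(double a, double fa, double b, double fb,
                       double c, double fc)
{
    double R = fb / fc, S = fb / fa, T = fa / fc;
    double P = S * (T * (R - T) * (c - b) - (1.0 - R) * (b - a));
    double Q = (T - 1.0) * (R - 1.0) * (S - 1.0);
    return b + P / Q;
}

/* The same estimate from equation (9.3.1) with y = 0: Lagrange
   interpolation of x as a quadratic function of y, evaluated at y = 0. */
static double lagrange_y0(double a, double fa, double b, double fb,
                          double c, double fc)
{
    return fa * fb * c / ((fc - fa) * (fc - fb))
         + fb * fc * a / ((fa - fb) * (fa - fc))
         + fc * fa * b / ((fb - fc) * (fb - fa));
}
```

The two forms are algebraically identical; the P/Q rearrangement is preferred in zbrent because it lets the routine inspect the size and sign of the correction before committing to the division.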

Brent's method combines the sureness of bisection with the speed of a higher-order method when appropriate. We recommend it as the method of choice for general one-dimensional root finding where a function's values only (and not its derivative or functional form) are available.

#include <math.h>
#include "nrutil.h"
#define ITMAX 100          Maximum allowed number of iterations.
#define EPS 3.0e-8         Machine floating-point precision.

float zbrent(float (*func)(float), float x1, float x2, float tol)
Using Brent's method, find the root of a function func known to lie between x1 and x2. The
root, returned as zbrent, will be refined until its accuracy is tol.
{
    int iter;
    float a=x1,b=x2,c=x2,d,e,min1,min2;
    float fa=(*func)(a),fb=(*func)(b),fc,p,q,r,s,tol1,xm;

    if ((fa > 0.0 && fb > 0.0) || (fa < 0.0 && fb < 0.0))
        nrerror("Root must be bracketed in zbrent");
    fc=fb;
    for (iter=1;iter<=ITMAX;iter++) {
        if ((fb > 0.0 && fc > 0.0) || (fb < 0.0 && fc < 0.0)) {
            c=a;                           Rename a, b, c and adjust bounding interval d.
            fc=fa;
            e=d=b-a;
        }
        if (fabs(fc) < fabs(fb)) {
            a=b;
            b=c;
            c=a;
            fa=fb;
            fb=fc;
            fc=fa;
        }
        tol1=2.0*EPS*fabs(b)+0.5*tol;      Convergence check.
        xm=0.5*(c-b);
        if (fabs(xm) <= tol1 || fb == 0.0) return b;
        if (fabs(e) >= tol1 && fabs(fa) > fabs(fb)) {
            s=fb/fa;                       Attempt inverse quadratic interpolation.
            if (a == c) {
                p=2.0*xm*s;
                q=1.0-s;
            } else {
                q=fa/fc;
                r=fb/fc;
                p=s*(2.0*xm*q*(q-r)-(b-a)*(r-1.0));
                q=(q-1.0)*(r-1.0)*(s-1.0);
            }
            if (p > 0.0) q = -q;           Check whether in bounds.
            p=fabs(p);
            min1=3.0*xm*q-fabs(tol1*q);
            min2=fabs(e*q);
            if (2.0*p < (min1 < min2 ? min1 : min2)) {
                e=d;                       Accept interpolation.
                d=p/q;
            } else {
                d=xm;                      Interpolation failed, use bisection.
                e=d;
            }
        } else {                           Bounds decreasing too slowly, use bisection.
            d=xm;
            e=d;

        }
        a=b;                               Move last best guess to a.
        fa=fb;
        if (fabs(d) > tol1)                Evaluate new trial root.
            b += d;
        else
            b += SIGN(tol1,xm);
        fb=(*func)(b);
    }
    nrerror("Maximum number of iterations exceeded in zbrent");
    return 0.0;                            Never get here.
}

CITED REFERENCES AND FURTHER READING:

Brent, R.P. 1973, Algorithms for Minimization without Derivatives (Englewood Cliffs, NJ: Prentice-Hall), Chapters 3, 4. [1]

Forsythe, G.E., Malcolm, M.A., and Moler, C.B. 1977, Computer Methods for Mathematical Computations (Englewood Cliffs, NJ: Prentice-Hall), §7.2.

9.4 Newton-Raphson Method Using Derivative

Perhaps the most celebrated of all one-dimensional root-finding routines is Newton's method, also called the Newton-Raphson method. This method is distinguished from the methods of previous sections by the fact that it requires the evaluation of both the function f(x), and the derivative f'(x), at arbitrary points x. The Newton-Raphson formula consists geometrically of extending the tangent line at a current point x_i until it crosses zero, then setting the next guess x_{i+1} to the abscissa of that zero-crossing (see Figure 9.4.1). Algebraically, the method derives from the familiar Taylor series expansion of a function in the neighborhood of a point,

    f(x+\delta) \approx f(x) + f'(x)\,\delta + \frac{f''(x)}{2}\,\delta^2 + \cdots          (9.4.1)

For small enough values of δ, and for well-behaved functions, the terms beyond linear are unimportant, hence f(x + δ) = 0 implies

    \delta = -\frac{f(x)}{f'(x)}          (9.4.2)

Newton-Raphson is not restricted to one dimension. The method readily generalizes to multiple dimensions, as we shall see in §9.6 and §9.7, below.

Far from a root, where the higher-order terms in the series are important, the Newton-Raphson formula can give grossly inaccurate, meaningless corrections. For instance, the initial guess for the root might be so far from the true root as to let the search interval include a local maximum or minimum of the function. This can be death to the method (see Figure 9.4.2). If an iteration places a trial guess near such a local extremum, so that the first derivative nearly vanishes, then Newton-Raphson sends its solution off to limbo, with vanishingly small hope of recovery.
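Equation (9.4.2) translates directly into the iteration x_{i+1} = x_i − f(x_i)/f'(x_i). The fragment below is an illustrative sketch, not the book's Newton routine (which appears later in §9.4); the test function x² − 2, the starting guess, and the stopping rule are arbitrary choices:

```c
#include <assert.h>
#include <math.h>

/* Newton-Raphson iteration: repeatedly apply the correction delta of
   equation (9.4.2), stopping when |delta| falls below xacc. */
static double newton(double (*f)(double), double (*df)(double),
                     double x, double xacc, int itmax)
{
    int i;
    for (i = 0; i < itmax; i++) {
        double delta = f(x) / df(x);   /* -delta of (9.4.2); we subtract it */
        x -= delta;
        if (fabs(delta) < xacc) break;
    }
    return x;
}

static double f2(double x)  { return x * x - 2.0; }   /* root at sqrt(2) */
static double df2(double x) { return 2.0 * x; }
```

Starting from x = 1.5, the quadratic convergence promised by the quadratic error term in (9.4.1) delivers full double precision in a handful of iterations; starting near x = 0, where f'(x) vanishes, would illustrate the failure mode described above.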