Numerical Analysis Course Teaching Resources (Supplementary Reading): The Best of the 20th Century - Editors Name Top 10 Algorithms (SIAM News, 2000)

from SIAM News, Volume 33, Number 4

The Best of the 20th Century: Editors Name Top 10 Algorithms

By Barry A. Cipra

Algos is the Greek word for pain. Algor is Latin, to be cold. Neither is the root for algorithm, which stems instead from al-Khwarizmi, the name of the ninth-century Arab scholar whose book al-jabr wa'l muqabalah devolved into today's high school algebra textbooks. Al-Khwarizmi stressed the importance of methodical procedures for solving problems. Were he around today, he'd no doubt be impressed by the advances in his eponymous approach.

Some of the very best algorithms of the computer age are highlighted in the January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society. Guest editors Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory and Francis Sullivan of the Center for Computing Sciences at the Institute for Defense Analyses put together a list they call the "Top Ten Algorithms of the Century."

"We tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century," Dongarra and Sullivan write. As with any top-10 list, their selections—and non-selections—are bound to be controversial, they acknowledge. When it comes to picking the algorithmic best, there seems to be no best algorithm.

Without further ado, here's the CiSE top-10 list, in chronological order. (Dates and names associated with the algorithms should be read as first-order approximations. Most algorithms take shape over time, with many contributors.)

1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method. The Metropolis algorithm aims to obtain approximate solutions to numerical problems with unmanageably many degrees of freedom and to combinatorial problems of factorial size, by mimicking a random process. Given the digital computer's reputation for deterministic calculation, it's fitting that one of its earliest applications was the generation of random numbers.
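To make the Monte Carlo idea concrete, here is a minimal Python sketch (an illustration, not code from the article): it estimates pi by throwing random points at the unit square and counting how many land inside the quarter circle, a toy stand-in for the high-dimensional problems the method was devised for.

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by random sampling: the fraction of points in the unit
    square that fall inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    for n in (1_000, 100_000, 1_000_000):
        print(n, monte_carlo_pi(n))  # error shrinks roughly like 1/sqrt(n)
```

The statistical error decays roughly like 1/sqrt(n) regardless of dimension, which is precisely why such sampling pays off when the degrees of freedom are unmanageably many.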
1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming. In terms of widespread application, Dantzig's algorithm is one of the most successful of all time: Linear programming dominates the world of industry, where economic survival depends on the ability to optimize within budgetary and other constraints. (Of course, the "real" problems of industry are often nonlinear; the use of linear programming is sometimes dictated by the computational budget.) The simplex method is an elegant way of arriving at optimal answers. Although theoretically susceptible to exponential delays, the algorithm in practice is highly efficient—which in itself says something interesting about the nature of computation. (A small linear-programming illustration appears after the 1950 entry below.)

[Pull quote: In terms of widespread use, George Dantzig's simplex method is among the most successful algorithms of all time.]

1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods. These algorithms address the seemingly simple task of solving equations of the form Ax = b. The catch, of course, is that A is a huge n × n matrix, so that the algebraic answer x = b/A is not so easy to compute. (Indeed, matrix "division" is not a particularly useful concept.) Iterative methods—such as solving equations of the form Kx_{i+1} = Kx_i + b - Ax_i with a simpler matrix K that's ideally "close" to A—lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, Krylov subspaces are spanned by powers of a matrix applied to an initial "remainder" vector r_0 = b - Ax_0. Lanczos found a nifty way to generate an orthogonal basis for such a subspace when the matrix is symmetric. Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method, for systems that are both symmetric and positive definite. Over the last 50 years, numerous researchers have improved and extended these algorithms. The current suite includes techniques for non-symmetric systems, with acronyms like GMRES and Bi-CGSTAB. (GMRES and Bi-CGSTAB premiered in SIAM Journal on Scientific and Statistical Computing, in 1986 and 1992, respectively.)
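As a small illustration of the Hestenes-Stiefel idea, here is a bare-bones conjugate gradient loop in NumPy, assuming a symmetric positive definite matrix; it is a sketch for exposition, not a production solver. Each iterate lies in the growing Krylov subspace spanned by b, Ab, A^2 b, and so on.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A.
    The k-th iterate lies in the Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x            # initial residual r_0 = b - A x_0
    p = r.copy()             # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # next A-conjugate search direction
        rs_old = rs_new
    return x

# Tiny usage example on a random symmetric positive definite matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual should be tiny
```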

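Returning to the 1947 entry: the following sketch solves a toy linear program with SciPy's linprog routine. This is an assumed, illustrative use of a modern library whose solvers are descendants of (and alternatives to) Dantzig's primal simplex, not Dantzig's own code.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],    # x +  y <= 4
        [1.0, 3.0]]    # x + 3y <= 6
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal point and the maximal objective value
```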
1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations. The ability to factor matrices into triangular, diagonal, orthogonal, and other special forms has turned out to be extremely useful. The decompositional approach has enabled software developers to produce flexible and efficient matrix packages. It also facilitates the analysis of rounding errors, one of the big bugbears of numerical linear algebra. (In 1961, James Wilkinson of the National Physical Laboratory in London published a seminal paper in the Journal of the ACM, titled "Error Analysis of Direct Methods of Matrix Inversion," based on the LU decomposition of a matrix as a product of lower and upper triangular factors.)

[Photo: Alston Householder]

1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler. The creation of Fortran may rank as the single most important event in the history of computer programming: Finally, scientists (and others) could tell the computer what they wanted it to do, without having to descend into the netherworld of machine code. Although modest by modern compiler standards—Fortran I consisted of a mere 23,500 assembly-language instructions—the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalls in a recent history of Fortran I, II, and III, published in 1998 in the IEEE Annals of the History of Computing, the compiler "produced code of such efficiency that its output would startle the programmers who studied it."

1959–61: J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm. Eigenvalues are arguably the most important numbers associated with matrices—and they can be the trickiest to compute. It's relatively easy to transform a square matrix into a matrix that's "almost" upper triangular, meaning one with a single extra set of nonzero entries just below the main diagonal. But chipping away those final nonzeros, without launching an avalanche of error, is nontrivial. The QR algorithm is just the ticket. Based on the QR decomposition, which writes A as the product of an orthogonal matrix Q and an upper triangular matrix R, this approach iteratively changes A_i = QR into A_{i+1} = RQ, with a few bells and whistles for accelerating convergence to upper triangular form. By the mid-1960s, the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.
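Here is a minimal sketch of the unshifted QR iteration in NumPy, assuming a real symmetric matrix so that the iterates approach a diagonal matrix of eigenvalues; practical codes add the Hessenberg reduction and shifts alluded to above as "bells and whistles."

```python
import numpy as np

def qr_eigenvalues(A, iterations=200):
    """Unshifted QR iteration: factor A_i = QR, then form A_{i+1} = RQ.
    Each step is a similarity transform, so eigenvalues are preserved;
    for a symmetric matrix the iterates approach a diagonal matrix."""
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)   # QR decomposition (Householder reflections in LAPACK)
        Ak = R @ Q                # reverse the factors
    return np.sort(np.diag(Ak))

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                        # symmetric test matrix
print(qr_eigenvalues(A))
print(np.sort(np.linalg.eigvalsh(A)))    # reference values for comparison
```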
1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort. Putting N things in numerical or alphabetical order is mind-numbingly mundane. The intellectual challenge lies in devising ways of doing so quickly. Hoare's algorithm uses the age-old recursive strategy of divide and conquer to solve the problem: Pick one element as a "pivot," separate the rest into piles of "big" and "small" elements (as compared with the pivot), and then repeat this procedure on each pile. Although it's possible to get stuck doing all N(N - 1)/2 comparisons (especially if you use as your pivot the first item on a list that's already sorted!), Quicksort runs on average with O(N log N) efficiency. Its elegant simplicity has made Quicksort the poster child of computational complexity.
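A short Python rendering of Hoare's recipe exactly as described above; this is a textbook sketch that trades the in-place partitioning of real implementations for readability.

```python
import random

def quicksort(items):
    """Sort a list by Hoare's divide-and-conquer recipe."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)            # random pivot avoids the sorted-input trap
    small = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    big = [x for x in items if x > pivot]
    return quicksort(small) + equal + quicksort(big)

data = [5, 3, 8, 1, 9, 2, 7]
print(quicksort(data))   # [1, 2, 3, 5, 7, 8, 9]
```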
1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform. Easily the most far-reaching algorithm in applied mathematics, the FFT revolutionized signal processing. The underlying idea goes back to Gauss (who needed to calculate orbits of asteroids), but it was the Cooley–Tukey paper that made it clear how easily Fourier transforms can be computed. Like Quicksort, the FFT relies on a divide-and-conquer strategy to reduce an ostensibly O(N^2) chore to an O(N log N) frolic. But unlike Quicksort, the implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms.

[Photos: James Cooley and John Tukey]

1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm. The problem is an old one: Given a bunch of real numbers, say x_1, x_2, ..., x_n, are there integers a_1, a_2, ..., a_n (not all 0) for which a_1x_1 + a_2x_2 + ... + a_nx_n = 0? For n = 2, the venerable Euclidean algorithm does the job, computing terms in the continued-fraction expansion of x_1/x_2. If x_1/x_2 is rational, the expansion terminates and, with proper unraveling, gives the "smallest" integers a_1 and a_2. If the Euclidean algorithm doesn't terminate—or if you simply get tired of computing it—then the unraveling procedure at least provides lower bounds on the size of the smallest integer relation. Ferguson and Forcade's generalization, although much more difficult to implement (and to understand), is also more powerful. Their detection algorithm, for example, has been used to find the precise coefficients of the polynomials satisfied by the third and fourth bifurcation points, B_3 = 3.544090 and B_4 = 3.564407, of the logistic map. (The latter polynomial is of degree 120; its largest coefficient is 257^30.) It has also proved useful in simplifying calculations with Feynman diagrams in quantum field theory. (A small numerical illustration of the n = 2 case appears at the end of this article.)

1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm. This algorithm overcomes one of the biggest headaches of N-body simulations: the fact that accurate calculations of the motions of N particles interacting via gravitational or electrostatic forces (think stars in a galaxy, or atoms in a protein) would seem to require O(N^2) computations—one for each pair of particles. The fast multipole algorithm gets by with O(N) computations. It does so by using multipole expansions (net charge or mass, dipole moment, quadrupole, and so forth) to approximate the effects of a distant group of particles on a local group. A hierarchical decomposition of space is used to define ever-larger groups as distances increase. One of the distinct advantages of the fast multipole algorithm is that it comes equipped with rigorous error estimates, a feature that many methods lack.

What new insights and algorithms will the 21st century bring? The complete answer obviously won't be known for another hundred years. One thing seems certain, however. As Sullivan writes in the introduction to the top-10 list, "The new century is not going to be very restful for us, but it is not going to be dull either!"

Barry A. Cipra is a mathematician and writer based in Northfield, Minnesota.
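As promised in the 1977 entry, here is a small sketch of the n = 2 case, taking x_1 = pi and x_2 = 1: the Euclidean algorithm generates the continued-fraction terms of x_1/x_2, each convergent p/q yields a near relation q*x_1 - p*x_2, and the growth of p and q bounds how small a true integer relation could be. The helper functions below are illustrative, not from Ferguson and Forcade.

```python
from math import pi

def continued_fraction(x: float, n_terms: int) -> list[int]:
    """First n_terms of the continued-fraction expansion of x,
    computed by the Euclidean-algorithm recurrence."""
    terms = []
    for _ in range(n_terms):
        a = int(x)            # integer part
        terms.append(a)
        frac = x - a
        if frac < 1e-12:      # (near-)termination: x was (nearly) rational
            break
        x = 1.0 / frac
    return terms

def convergent(terms: list[int]) -> tuple[int, int]:
    """Collapse the continued-fraction terms into a single fraction p/q."""
    p, q = terms[-1], 1
    for a in reversed(terms[:-1]):
        p, q = a * p + q, p
    return p, q

# Each convergent p/q of pi gives a "near relation" q*pi - p*1;
# the familiar 22/7 and 355/113 show up among the first few.
for k in range(1, 6):
    p, q = convergent(continued_fraction(pi, k))
    print(f"{p}/{q}  ->  {q}*pi - {p} = {q * pi - p:+.2e}")
```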