Lecture 14 - A Perspective on Adaptive Control
K. J. Åström and B. Wittenmark

Theme: How does adaptive control relate to adaptive signal processing and other systems with "intelligent features"?

1. Introduction
2. Adaptive Signal Processing
3. Intelligent Systems
4. Neural Networks
5. Expert Control - Hybrid Systems
6. Conclusions

Control and Signal Processing

* Signal processing and control
  - Similarities: same methodology, similar problems
  - Differences: time delays, sampling rates
  - Different market places
  - Silly to have separate communities
* Relations to recursive estimation
* Output error estimation
* Adaptive noise cancellation
* Adaptive pulse code modulation

Adaptive Signal Processing

[Block diagram: adjustable filter driven by x, output ŷ, error ε = y - ŷ]

x(t) = s(t) + n(t) and y(t) = s(t + τ)

Introduce
* s  signal
* n  disturbance
* x = s + n  measurement

Let the desired output be y(t) = s(t + τ). Three problems:
* smoothing    τ < 0
* filtering    τ = 0
* prediction   τ > 0

Adaptive versions: adaptive smoothing etc. A recursive estimator is an adaptive predictor!

Adaptive Filtering - FIR

Process model
  y(t) = b1 u(t-1) + b2 u(t-2) + ... + bn u(t-n)

Standard form
  y(t) = φ^T(t-1) θ
  θ^T = (b1 ... bn)
  φ^T(t-1) = (u(t-1) ... u(t-n))

The adaptive filter
  ŷ(t) = b̂1(t-1) u(t-1) + ... + b̂n(t-1) u(t-n)

The estimate is given by the standard RLS.

[Block diagram: FIR filter with adjustment mechanism θ; input u, output ŷ, error ε = y - ŷ]
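The FIR case above can be sketched directly: a short recursive-least-squares loop in the φ/θ notation of the slide. This is a minimal sketch, not the book's code; the forgetting factor λ, the initial covariance, and the simulated data are illustrative assumptions.

```python
import numpy as np

def rls_fir(u, y, n, lam=1.0):
    """Recursive least squares for the FIR model on the slide:
    y(t) = b1*u(t-1) + ... + bn*u(t-n) = phi^T(t-1) theta."""
    theta = np.zeros(n)              # estimate of (b1 ... bn)
    P = 1000.0 * np.eye(n)           # large initial covariance
    y_hat = np.zeros(len(y))
    for t in range(n, len(y)):
        phi = u[t - n:t][::-1]       # (u(t-1), ..., u(t-n))
        y_hat[t] = phi @ theta       # adaptive filter output
        eps = y[t] - y_hat[t]        # prediction error
        K = P @ phi / (lam + phi @ P @ phi)   # RLS gain
        theta = theta + K * eps
        P = (P - np.outer(K, phi) @ P) / lam  # covariance update
    return theta, y_hat

# Usage: recover b1 = 0.5, b2 = -0.3 from noise-free simulated data.
rng = np.random.default_rng(0)
u = rng.standard_normal(300)
y = np.convolve(u, [0.0, 0.5, -0.3])[:300]
theta, _ = rls_fir(u, y, n=2)
```

With noise-free data the estimate converges to the true taps after a handful of samples.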

Adaptive Filtering - ARMA

Equation error models
  y(t) + a1 y(t-1) + ... + an y(t-n) = b1 u(t+m-n-1) + ... + bm u(t-n)

Hence
  θ^T = (a1 ... an  b1 ... bm)
  φ^T(t-1) = (-y(t-1) ... -y(t-n)  u(t+m-n-1) ... u(t-n))

  θ̂(t+1) = θ̂(t) + P(t) φ(t) ε(t+1)
  ε(t+1) = y(t+1) - θ̂^T(t) φ(t)

Output Error Estimation

Model
  ŷ(t) + a1 ŷ(t-1) + ... + an ŷ(t-n) = b1 u(t+m-n-1) + ... + bm u(t-n)

Estimator
  φ^T(t-1) = (-ŷ(t-1) -ŷ(t-2) ... -ŷ(t-n)  u(t+m-n-1) ... u(t-n))
  ε(t) = y(t) - φ^T(t-1) θ̂(t-1)

Block Diagrams

[Figures: (a) equation error estimation, regressor built from the measured output y; (b) output error estimation, regressor built from the model output ŷ]

Adaptive Noise Cancellation

* The Fenton Silencer (A. C. Clarke, 1957)
* B. Widrow
* Lots of applications
* Hands-free car phone

[Block diagram: the driver's microphone picks up voice plus noise; a second microphone measures ambient noise; the adaptive filter subtracts the estimated noise, leaving the filtered voice signal]

Traditional estimation is based on LMS (a gradient algorithm):

  θ̂(t+1) = θ̂(t) + φ(t-1) / (α + φ^T(t-1) φ(t-1)) · ε(t+1)
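The noise-cancellation scheme with the normalized gradient (LMS-type) update can be sketched as below. This is an illustrative sketch only: the tap count n, step size γ, regularization α, and the simulated microphone signals are assumptions, not part of the slide.

```python
import numpy as np

def nlms_cancel(primary, reference, n=8, gamma=0.1, alpha=1e-3):
    """Adaptive noise canceller with the slide's gradient update:
      theta(t+1) = theta(t) + gamma*phi / (alpha + phi^T phi) * eps.
    primary:   voice + filtered noise (driver's microphone)
    reference: the ambient-noise microphone
    Returns eps, the filtered voice signal."""
    theta = np.zeros(n)
    out = np.zeros(len(primary))
    for t in range(n - 1, len(primary)):
        phi = reference[t - n + 1:t + 1][::-1]  # (ref(t), ..., ref(t-n+1))
        eps = primary[t] - phi @ theta          # estimated noise removed
        theta = theta + gamma * phi / (alpha + phi @ phi) * eps
        out[t] = eps
    return out

# Usage: the ambient noise reaches the primary mic through a short FIR path.
rng = np.random.default_rng(1)
noise = rng.standard_normal(4000)
voice = np.sin(0.05 * np.arange(4000))
primary = voice + np.convolve(noise, [0.8, -0.4])[:4000]
clean = nlms_cancel(primary, noise)
```

Because the voice is uncorrelated with the reference, the filter converges to the noise path and the error signal approaches the voice alone.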

Adaptive Differential Pulse Code Modulation (ADPCM)

Transmission of telephone signals:
* Standard sampling and AD conversion: 8 kHz and 12 bit gives 96 kbit/s
* Signal compression: 8 kHz and 8 bit gives 64 kbit/s
* Transmit only the innovation: 8 kHz and 4 bit gives 32 kbit/s

Differential pulse code modulation (DPCM)

[Block diagram: fixed predictor filter at the transmitter, transmission line, matching filter at the receiver]

ADPCM standard

[Block diagram: as DPCM, but with an adjustment mechanism θ adapting the filter at each end of the transmission line]

ADPCM Details

The need for standardization: CCITT.

Filter
  H(z) = (b0 z^5 + b1 z^4 + ... + b5) / (z^4 (z^2 + a1 z + a2))

Parameter estimator
  b̂i(t) = (1 - 2^-8) b̂i(t-1) + 2^-7 sign e(t-i) sign e(t)

Fixed-point calculations.

Intelligent Systems

* Semantics
  - Adapt: adjust to new conditions
  - Learn: to acquire knowledge or skill by study, instruction or experience
  - Intelligence: capacity for reasoning, understanding and similar forms of mental activity; ability to adapt, learn, recognize, abstract, benefit from experience, cope with new situations
* Intelligent Control
* An Historical Perspective
* Mindsets

Intelligent Systems

* Technical & Biological Systems: understand, imitate
* Cybernetics: Wiener 1948, Ashby 1956
* Neural Systems: McCulloch-Pitts 1943, Rosenblatt 1957
* Adaptive Systems: Flight Control 1955
* Artificial Intelligence: Dartmouth Conference 1956, Fuzzy Logic 1970
* Mind and Matter
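The leaky sign-sign estimator above is simple enough to sketch in isolation, detached from the full codec. The coefficient count n and the random innovation sequence are illustrative assumptions; only the update rule itself comes from the slide.

```python
import numpy as np

def sign_sign_update(b, e_hist, e_now, leak=2 ** -8, gain=2 ** -7):
    """One step of the leaky sign-sign estimator from the ADPCM standard:
      b_i(t) = (1 - 2^-8) b_i(t-1) + 2^-7 sign e(t-i) sign e(t).
    b: coefficients (b_1 ... b_n); e_hist: (e(t-1), ..., e(t-n))."""
    return (1 - leak) * b + gain * np.sign(e_hist) * np.sign(e_now)

# Usage: run the update on a random innovation sequence. The leak term
# bounds the coefficients, |b_i| <= gain/leak = 2, which is what makes
# a fixed-point implementation safe.
rng = np.random.default_rng(2)
e = rng.standard_normal(1000)
n = 6
b = np.zeros(n)
for t in range(n, len(e)):
    b = sign_sign_update(b, e[t - n:t][::-1], e[t])
```

The sign-sign form needs no multiplications beyond shifts, which is why it suits the fixed-point hardware the slide mentions.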

Examples

* Adaptive systems
* Learning systems
* Artificial intelligence
* Expert systems
* Neural networks
* Neuro-fuzzy systems

Neural Networks

* The beginning: McCulloch and Pitts 1943, Wiener 1948, Hebb 1949
* First successes: Rosenblatt 1958, Widrow-Hoff 1961
* Into the doldrums: Minsky and Papert 1969; survivors: Andersson, Grossberg, Kohonen
* A revival: Hopfield 1982, the Parallel Distributed Processing group, the Snowbird Conference
* Cult status: neuro-fuzzy neural networks
* Real neurons
* A simple artificial neuron
    y(t) = f(Σ ai ui(t))
* Artificial neural systems

Mindsets

* Theory versus experiments
* Analytic versus heuristic
* The role of prior information: white boxes, grey boxes, black boxes
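The simple artificial neuron above is a one-liner. As a sketch: the activation f is not specified on the slide, so tanh is an assumed, common choice, and the weights and inputs below are invented for illustration.

```python
import numpy as np

def neuron(u, a, f=np.tanh):
    """The slide's simple artificial neuron: y(t) = f(sum_i a_i u_i(t)).
    f is an assumed activation (tanh here); a are the weights."""
    return f(np.dot(a, u))

# Usage: weighted sum is 0.3*1.0 + 0.1*(-2.0) + 0.4*0.5 = 0.3
y = neuron(u=[1.0, -2.0, 0.5], a=[0.3, 0.1, 0.4])
```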

Neural Paradigms

* A simple artificial neuron
    y(t) = g(Σ ai ui(t))
* Types of networks
  - Feedforward nets: Rosenblatt's perceptron, multilayer nets, radial basis functions
  - Nets with feedback: Boltzmann nets, Kohonen nets
  - Nets with feedback and dynamics: Boltzmann, NACHT networks
* Silicon neurons

Applications of Neural Nets

* Nonlinear function with training mechanism
* Pattern recognition
* Classification
* Optimization
* Content-addressable memory
* Soft sensing
* Prediction
* Control
* Building complex systems from simple components
* New computing architectures?
* New hardware - silicon neurons?

Feedforward Networks

* The perceptron
* Multilayer nets
* Representation power (Kolmogorov)
    f(x1, x2, ..., xn) = Σ_{i=1}^{n} bi g(Σ_{j=1}^{m} aij xj)
* Locality
* Parameterization
* "Overtraining"
* Similarity to fuzzy systems

Kohonen's Network

* Lateral inhibition
* Learning with and without teacher
* Self-organizing map
* Applications: competitive learning, automatic classification, the phonetic typewriter
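The two-layer representation above maps directly onto code. A minimal sketch, assuming g = tanh as the sigmoidal unit; the weight matrices and input are invented for illustration.

```python
import numpy as np

def feedforward(x, A, b, g=np.tanh):
    """The slide's two-layer form,
      f(x1,...,xn) = sum_i b_i * g(sum_j a_ij * x_j),
    with A holding the inner weights a_ij row by row and b the
    outer weights b_i. g = tanh is an assumed choice."""
    return b @ g(A @ x)

# Usage with invented weights: 3 inputs, 5 hidden units.
A = np.array([[0.5, -0.2, 0.1],
              [0.3, 0.8, -0.5],
              [-0.1, 0.4, 0.2],
              [0.7, -0.6, 0.3],
              [0.2, 0.1, -0.4]])
b = np.array([1.0, -0.5, 0.3, 0.2, -0.8])
x = np.array([0.2, -0.1, 0.4])
y = feedforward(x, A, b)
```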

The Phonetic Typewriter

[Figure]

Hopfield's Networks

* Arbitrary connections
* Dynamics in neurons
    dx/dt = -x + Ay,   yi = f(xi)
* Steady state
    yi = f(Σ_j aij yj)
* Applications: optimization, associative memory

Neural Networks for Control

Replace
  dx/dt = Ax + Bu,   y = Cx
by
  dx/dt = f(x, u),   y = g(x)
where f and g are feedforward networks.

[Block diagrams: (a) the linear state-space model with A, B, C blocks; (b) the same structure with the maps replaced by a neural network and integrators 1/s]

Training a Network

[Block diagrams: (a) modeling/identification - network in parallel with the process, adjusted from the error; (b) inverse modeling - network trained to invert a process model; (c) control design - network trained from the setpoint through a process model]

Compare with MRAS.
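The Hopfield dynamics above can be simulated with a simple Euler iteration. A sketch under stated assumptions: the step size h, the activation f = tanh, and the Hebbian (outer-product) weight matrix storing one pattern are illustrative choices, not from the slide.

```python
import numpy as np

def hopfield_step(x, A, f=np.tanh, h=0.1):
    """One Euler step of the slide's neuron dynamics
    dx/dt = -x + A y,  y_i = f(x_i). Step size h is an assumption."""
    return x + h * (-x + A @ f(x))

# Usage: store one pattern v with a Hebbian (outer-product) weight
# matrix and let the state settle. At a steady state x = A f(x),
# i.e. y_i = f(sum_j a_ij y_j), as on the slide.
v = np.array([1.0, 1.0, 1.0, 1.0]) / 2.0   # unit-norm pattern
A = 2.0 * np.outer(v, v)                   # symmetric connections
rng = np.random.default_rng(3)
x = rng.standard_normal(4)
for _ in range(1000):
    x = hopfield_step(x, A)
residual = np.linalg.norm(-x + A @ np.tanh(x))
```

With symmetric connections the network has a Lyapunov function, so the state settles at a nonzero equilibrium aligned with the stored pattern (up to sign).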

Silicon Neurons

* DeWeerth, Nielsen, Mead and Åström: A Simple Neuron Servo. IEEE Trans. on Neural Networks, 1991:2.
* Pulsed operation
* Integration of sensing, actuating and control
* Interesting possibilities: chip area, power consumption, reliability

Experiments

* Network implemented in analog VLSI
* Comparison with PI control

Issues in Neural Networks

* Properties of the individual neuron
* Network structure
* Parameterization
* "Learning" issues: algorithms, with or without teacher, self-organization, overtraining
* Representation power
* The MISO paradigm

Summary of Neural Networks

* Neural networks
  - Narrow view: nonlinear functions with adjustment mechanism
  - Broad view: new computing structures
* Adaptation
  - Narrow view: special algorithms (STR, MRAS)
  - Broad view: mechanisms for adjustment and learning
* We have only scratched the surface
* Building complex systems from simple MISO components

Two Views on Artificial Intelligence

* Mind or matter
* Representations of neurons

Expert Control - Introduction

* Ordinary regulators contain a significant amount of logic
* Adaptive controllers contain a large amount of logic in the safety jacket
* Expert control is a good way to structure logic and algorithms
* New features
* Autonomous control: control; tuning, gain scheduling and adaptation; diagnostics; loop assessment; performance assessment

Expert Control - What Is It?

* A feedback controller with a rule- and script-based expert system built in
* Acquires knowledge automatically through on-line experiments and interaction with the process operator
* Orchestrates numerical algorithms for control, identification and supervision
* Increases and refines plant knowledge successively
* May be viewed as a generalized adaptive controller
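The rule-based orchestration described above can be sketched as a toy supervisory layer. Everything here is invented for illustration (the rule names, status fields, and action labels are hypothetical); it only shows the shape of a rule-and-script layer choosing among numerical algorithms.

```python
def expert_supervisor(status):
    """Toy sketch of a rule-based supervisory layer: an expert system
    choosing which numerical algorithm (control, identification,
    tuning) the controller should run. All names are invented."""
    rules = [
        # (condition on plant status, algorithm to orchestrate)
        (lambda s: s["oscillating"], "retune"),
        (lambda s: s["operating_point_changed"], "gain_schedule"),
        (lambda s: not s["model_confident"], "identify"),
    ]
    for condition, action in rules:
        if condition(status):
            return action          # first matching rule wins
    return "control"               # default: run the ordinary controller

# Usage
action = expert_supervisor({"oscillating": False,
                            "operating_point_changed": False,
                            "model_confident": True})
```

A real implementation would also carry scripts for on-line experiments and a knowledge base updated from the results, as the slide describes.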

Example of Functionality

* Are fluctuations normal?
* What algorithm is used in loop 17? Why?
* Why is derivative action not used?
* List all loops with substandard behavior
* Monitor loop 15 for stability margins
* Plot the static input-output relation for loop 6
* List all loops where dead-time compensation is used

System Structure

[Block diagram: a knowledge-based system and operator interface supervising identification, supervision, excitation and control blocks around the process]

Hybrid Systems

Expert control leads to complicated systems that contain:
* Dynamical systems
* Finite state machines
* Logic
* Knowledge-based systems

They are not easy to analyse and design. Much research is needed.

Conclusions

* Signal processing
* Towards higher automation levels
* Adaptation is an important ingredient
* Several other approaches: neural networks, conventional AI, expert systems
