Xidian University: Neural Networks and Fuzzy Systems (神经网络与模糊系统) course materials (lecture slides), Chapter 6: Architecture and Equilibria (结构和平衡)

Neural Networks and Fuzzy Systems
Chapter 6: Architecture and Equilibria (结构和平衡)
Student: Li Qi    Advisor: Gao Xinbo

6.1 Neural Network As Stochastic Gradient System

Classify neural network models by their synaptic connection topologies and by how learning modifies those topologies.

Synaptic connection topologies:
1. Feedforward: no closed synaptic loops.
2. Feedback: closed synaptic loops or feedback pathways.

How learning modifies the connection topologies:
1. Supervised learning: uses class-membership information of the training samples.
2. Unsupervised learning: uses unlabelled training samples.

6.1 Neural Network As Stochastic Gradient System

Neural network taxonomy, organized by how the network decodes (feedforward vs. feedback) and encodes (supervised vs. unsupervised):

Feedforward, supervised:
  Gradient descent: LMS, BackPropagation, Reinforcement Learning

Feedback, supervised:
  Recurrent BackPropagation

Feedforward, unsupervised:
  Vector Quantization: Self-Organizing Maps, Competitive Learning, Counter-propagation

Feedback, unsupervised:
  RABAM, Brownian annealing, Boltzmann learning, ABAM, ART-2, BAM-Cohen-Grossberg Model, Hopfield circuit, Brain-State-In-A-Box, Masking field, Adaptive Resonance: ART-1, ART-2

6.2 Global Equilibria: Convergence and Stability

Three dynamical systems in a neural network:
1. the synaptic dynamical system $\dot{M}$
2. the neuronal dynamical system $\dot{x}$
3. the joint neuronal-synaptic dynamical system $(\dot{x}, \dot{M})$

Historically, neural engineers study the first or second system. They usually study learning in feedforward neural networks and neural stability in nonadaptive feedback neural networks. RABAM and ART networks depend on joint equilibration of the synaptic and neuronal dynamical systems.

6.2 Global Equilibria: Convergence and Stability

Equilibrium is steady state (for fixed-point attractors).
Convergence is synaptic equilibrium: $\dot{M} = 0$.
Stability is neuronal equilibrium: $\dot{x} = 0$.
More generally, neural signals reach steady state even though the activations still change. We denote steady state in the neuronal field $F_X$ by $\dot{F}_X = 0$.
Global stability: $\dot{x} = 0$ and $\dot{M} = 0$.

Stability-equilibrium dilemma: neurons fluctuate faster than synapses fluctuate, and convergence undermines stability.
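To make the global-stability condition concrete, here is a minimal numerical sketch. It is my own illustration, not the slides' model: it assumes a simple additive neuronal law $\dot{x} = -x + M S(x)$ with signal function $S = \tanh$ and a passive-decay Hebbian synaptic law $\dot{M} = -M + S(x)S(x)^T$, and it declares an approximate global equilibrium once both velocities fall below a tolerance.

```python
import numpy as np

# Toy joint neuronal-synaptic system (an assumed illustration, not the slides' model):
#   neuronal dynamics:  dx/dt = -x + M @ S(x),  with signal function S = tanh
#   synaptic dynamics:  dM/dt = -M + S(x) S(x)^T  (passive decay + Hebbian term)
S = np.tanh

def simulate(x, M, dt=0.01, steps=200_000, tol=1e-6):
    for t in range(steps):
        s = S(x)
        dx = -x + M @ s            # neuronal velocity
        dM = -M + np.outer(s, s)   # synaptic velocity
        x, M = x + dt * dx, M + dt * dM
        # Global stability: both x_dot = 0 and M_dot = 0 (to within tolerance).
        if np.linalg.norm(dx) < tol and np.linalg.norm(dM) < tol:
            return t, x, M
    return steps, x, M

rng = np.random.default_rng(0)
t, x, M = simulate(rng.normal(size=3), 0.1 * rng.normal(size=(3, 3)))
print(f"approximate global equilibrium after {t} Euler steps")
```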

6.3 Synaptic Convergence to Centroids: AVQ Algorithms

Competitive learning adaptively quantizes the input pattern space $R^n$. The probability density function $p(x)$ characterizes the continuous distribution of patterns in $R^n$.

We shall prove that the competitive AVQ synaptic vectors $m_j$ converge exponentially quickly to pattern-class centroids and, more generally, that at equilibrium they vibrate about the centroids in a Brownian motion.

6.3 Synaptic Convergence to Centroids: AVQ Algorithms

Competitive AVQ stochastic differential equations. The pattern classes $D_1, \ldots, D_k$ partition the pattern space:

$R^n = D_1 \cup D_2 \cup \cdots \cup D_k, \qquad D_i \cap D_j = \emptyset \ \text{if}\ i \neq j$

The random indicator function:

$I_{D_j}(x) = \begin{cases} 1 & \text{if } x \in D_j \\ 0 & \text{if } x \notin D_j \end{cases}$

Supervised learning algorithms depend explicitly on the indicator functions; unsupervised learning algorithms do not require this pattern-class information.

Centroid of $D_j$:

$\bar{x}_j = \dfrac{\int_{D_j} x\, p(x)\, dx}{\int_{D_j} p(x)\, dx}$
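As a quick numerical check of the centroid formula, the following Monte Carlo sketch (my own illustration; the Gaussian-mixture density and half-plane decision classes are assumed for the example) draws samples from $p(x)$, so the ratio of integrals over $D_j$ reduces to the sample mean of the points that fall in $D_j$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example density p(x): an equal mixture of two Gaussians in R^2.
def sample_p(n):
    comp = rng.integers(0, 2, size=n)
    means = np.array([[-2.0, 0.0], [2.0, 0.0]])
    return means[comp] + rng.normal(size=(n, 2))

# Assumed decision classes: D_1 = left half-plane, D_2 = right half-plane.
def indicator_D(x, j):
    return (x[:, 0] < 0) if j == 1 else (x[:, 0] >= 0)

x = sample_p(100_000)
for j in (1, 2):
    inside = indicator_D(x, j)
    # Monte Carlo form of  x̄_j = ∫_{D_j} x p(x) dx / ∫_{D_j} p(x) dx
    centroid = x[inside].mean(axis=0)
    print(f"centroid of D_{j} ≈ {np.round(centroid, 3)}")
```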

6.3 Synaptic Convergence to Centroids: AVQ Algorithms

The stochastic unsupervised competitive learning law:

$\dot{m}_j = S_j(y_j)\,[x - m_j] + n_j$

We want to show that at equilibrium $m_j = \bar{x}_j$, or $E(m_j) = \bar{x}_j$.

As discussed in Chapter 4, $S_j \approx I_{D_j}(x)$, which gives the linear stochastic competitive learning law:

$\dot{m}_j = I_{D_j}(x)\,[x - m_j] + n_j$

The linear supervised competitive learning law:

$\dot{m}_j = r_j(x)\, I_{D_j}(x)\,[x - m_j] + n_j, \qquad r_j(x) = I_{D_j}(x) - \sum_{i \neq j} I_{D_i}(x)$
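A discrete-time sketch of these two laws (an assumed Euler discretization; the learning rate and noise scale are illustrative choices). Per the Chapter 4 remark, the indicator $I_{D_j}(x)$ is realized by metrical competition, so only the winning synaptic vector is updated; the supervised variant flips the sign through the reinforcement function $r_j$.

```python
import numpy as np

rng = np.random.default_rng(2)

def competitive_step(m, x, j_true=None, lr=0.05, noise=0.0):
    """One Euler step of the linear competitive learning law (sketch).

    m      : (k, n) synaptic vectors, one per competing neuron
    x      : (n,) input pattern
    j_true : class label of x, or None for the unsupervised (UCL) law
    """
    # I_{D_j}(x) is approximated by the metrical competition: the closest
    # synaptic vector "wins", so its indicator is 1 and all others are 0.
    j_win = np.argmin(np.linalg.norm(m - x, axis=1))
    r = 1.0
    if j_true is not None:
        # SCL reinforcement r_j(x): +1 for a correct win, -1 for a misclassification.
        r = 1.0 if j_win == j_true else -1.0
    n_j = noise * rng.normal(size=x.shape)      # optional noise term n_j
    m[j_win] += lr * r * (x - m[j_win]) + n_j
    return m
```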

6.3 Synaptic Convergence to Centroids: AVQ Algorithms

The linear differential competitive learning law:

$\dot{m}_j = \dot{S}_j\,[x - m_j] + n_j$

In practice:

$\dot{m}_j = \operatorname{sgn}[\dot{y}_j]\,[x - m_j] + n_j, \qquad \operatorname{sgn}[z] = \begin{cases} 1 & \text{if } z > 0 \\ 0 & \text{if } z = 0 \\ -1 & \text{if } z < 0 \end{cases}$
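A matching sketch for the DCL rule (again my own discretization): the signal velocity $\dot{S}_j$ is replaced by the sign of the change in the winner's activation between steps, and the inner-product activation $y_j = m_j \cdot x$ is an assumed choice for the example.

```python
import numpy as np

def dcl_step(m, x, y_prev, lr=0.05):
    """One discrete DCL step (sketch). y_j = m_j . x is an assumed activation."""
    y = m @ x          # current activations y_j
    j = np.argmax(y)   # winning neuron
    # np.sign matches the slide's sgn[z] (+1, 0, -1) and serves as the
    # discrete surrogate for the signal velocity dy_j/dt:
    m[j] += lr * np.sign(y[j] - y_prev[j]) * (x - m[j])
    return m, y
```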

6.3 Synaptic Convergence to Centroids: AVQ Algorithms

Competitive AVQ algorithms (a runnable sketch follows the steps):

1. Initialize the synaptic vectors: $m_i(0) = x(i)$, $i = 1, \ldots, m$.
2. For a random sample $x(t)$, find the closest ("winning") synaptic vector $m_j(t)$:

$\| m_j(t) - x(t) \| = \min_i \| m_i(t) - x(t) \|$

where $\|x\|^2 = x_1^2 + \cdots + x_n^2$ gives the squared Euclidean norm of $x$.

3. Update the winning synaptic vector $m_j(t)$ by the UCL, SCL, or DCL learning algorithm.
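Putting the three steps together, here is a minimal runnable sketch using the UCL update. The decaying learning rate $lr_0/t$ and the two-cluster test data are my own assumptions for the demonstration; per the convergence claim above, the synaptic vectors should settle near the class centroids.

```python
import numpy as np

rng = np.random.default_rng(3)

def avq_ucl(samples, k, lr0=0.5):
    """Competitive AVQ with the unsupervised (UCL) update, a minimal sketch."""
    m = samples[:k].copy()                            # step 1: m_i(0) = x(i)
    for t, x in enumerate(samples, start=1):
        j = np.argmin(np.linalg.norm(m - x, axis=1))  # step 2: winner by Euclidean norm
        m[j] += (lr0 / t) * (x - m[j])                # step 3: UCL update of the winner
    return m

# Two assumed pattern classes with centroids (-2, 0) and (+2, 0).
data = np.vstack([rng.normal([-2, 0], 1, (5000, 2)),
                  rng.normal([+2, 0], 1, (5000, 2))])
rng.shuffle(data)
print(np.round(avq_ucl(data, k=2), 2))
```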