1 COMP 578: Artificial Neural Networks for Data Mining
Keith C.C. Chan, Department of Computing, The Hong Kong Polytechnic University

2 Human vs. Computer
• Computers
– Not good at performing tasks such as visual or audio processing/recognition.
– Execute instructions one after another extremely rapidly.
– Good at serial activities (e.g. counting, adding).
• Human brain
– Units respond at roughly 100/s (vs. a 2.5 GHz Pentium IV).
– Works on many different things at once.
– Vision and speech recognition arise from the interaction of many different pieces of information.

3 The Brain
• The human brain is complicated and poorly understood.
• Contains approximately 10^10 basic units called neurons.
• Each neuron is connected to about 10,000 others.
[Figure: a neuron, labelling the dendrites, soma (cell body), axon, and synapse]

4 The Neuron
• A neuron accepts many inputs (through its dendrites).
• The inputs are all added up in some fashion.
• If enough active inputs are received at once, the neuron will be activated and "fire" (through its axon).
[Figure: a neuron, labelling the dendrites, soma, axon, and synapse]

5 The Synapse
• Axons produce voltage pulses called action potentials (APs).
• The arrival of more than one AP is needed to trigger a synapse.
• The synapse releases neurotransmitters when the AP is raised sufficiently.
• Neurotransmitters diffuse across the gap (the synaptic cleft), chemically activating dendrites on the other side.
• Some synapses pass a large signal across, whilst others allow very little through.

6 Modeling the Single Neuron
• n inputs.
• The efficiency of the synapses is modeled by a multiplicative factor on each of the inputs to the neuron.
• Multiplicative factor = the associated weight on each input line.
• The neuron's tasks:
– Calculate the weighted sum of its inputs.
– Compare the sum to some internal threshold.
– Turn on if the threshold is exceeded.
[Figure: inputs x1 … xn with weights w1 … wn feeding a summation unit Σ, producing output y]

7 A Mathematical Model of Neurons
• The neuron computes the weighted sum: SUM = Σᵢ₌₁ⁿ wᵢxᵢ
• It fires if SUM exceeds a threshold θ:
– y = 1 if SUM > θ
– y = 0 if SUM ≤ θ
• The thresholding function is also known as the "step" function, or the "Heaviside" function.
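The weighted-sum-and-threshold model above can be sketched in a few lines of Python. The inputs, weights, and θ below are illustrative values, not taken from the slides:

```python
def neuron(x, w, theta):
    """Threshold (step/Heaviside) neuron: y = 1 if SUM > theta, else 0."""
    total = sum(wi * xi for wi, xi in zip(w, x))  # SUM = Σ wi·xi
    return 1 if total > theta else 0

# Example with illustrative weights and threshold:
print(neuron([1, 0, 1], [0.5, 0.5, 0.5], 0.9))  # SUM = 1.0 > 0.9, so y = 1
print(neuron([1, 0, 0], [0.5, 0.5, 0.5], 0.9))  # SUM = 0.5 <= 0.9, so y = 0
```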

8 Learning in Simple Neurons
• Need to be able to determine the connection weights.
• Inspiration comes from looking at real neural systems.
– Reinforce good behavior and reprimand bad.
– E.g., train a NN to recognize the two characters H and F.
– Output 1 when an H is presented and 0 when it sees an F.
– If it produces an incorrect output, we want to reduce the chances of that happening again.
– This is done by modifying the weights.

9 Learning in Simple Neurons (2)
• The neuron is given random initial weights.
– In its starting state, the neuron knows nothing.
• Present an H.
– The neuron computes the weighted sum of its inputs.
– Compare the weighted sum with the threshold.
– If it exceeds the threshold, output a 1; otherwise a 0.
• If the output is 1, the neuron is correct.
– Do nothing.
• Otherwise, if the neuron produces a 0:
– Increase the weights so that next time the sum will exceed the threshold and produce a 1.

10 A Simple Learning Rule
• By how much should the weights be changed?
• Can follow a simple rule:
– Add the input values to the weights when we want the output to be on.
– Subtract the input values from the weights when we want the output to be off.
• This learning rule is called the Hebb rule:
– It is a variant of one proposed by Donald Hebb, and is called Hebbian learning.
– It is the earliest and simplest learning rule for a neuron.
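The whole H-vs-F procedure with the Hebb-style rule above can be sketched as follows. The 3×3 pixel patterns, threshold, and epoch count are made-up illustrative choices, not from the slides:

```python
# Made-up 3x3 binary pixel patterns (flattened) for "H" and "F".
H = [1, 0, 1,
     1, 1, 1,
     1, 0, 1]
F = [1, 1, 1,
     1, 0, 0,
     1, 1, 0]

def fire(w, x, theta=0.5):
    """Step-threshold neuron: 1 if the weighted sum exceeds theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0

def train_hebb(patterns, targets, theta=0.5, epochs=10):
    w = [0.0] * len(patterns[0])  # starting state: the neuron knows nothing
    for _ in range(epochs):
        for x, t in zip(patterns, targets):
            y = fire(w, x, theta)
            if y == t:
                continue  # correct output: do nothing
            if t == 1:    # wanted "on": add the input values to the weights
                w = [wi + xi for wi, xi in zip(w, x)]
            else:         # wanted "off": subtract the input values
                w = [wi - xi for wi, xi in zip(w, x)]
    return w

w = train_hebb([H, F], [1, 0])
print(fire(w, H), fire(w, F))  # H should give 1, F should give 0
```

Note the rule only adjusts the weights when the output is wrong, matching the "if the output is correct, do nothing" step on the previous slide.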
