"Artificial Intelligence" course teaching resource (PPT lecture slides) Ch10 Auto-encoders (Auto and variational encoders v.9r6)

Ch10 Auto-encoders
KH Wong
Ch10. Auto and variational encoders v.9r6

Two types of autoencoders
• Part 1: Vanilla (traditional) autoencoder, or simply called autoencoder
• Part 2: Variational autoencoder

Part 1: Overview of Vanilla (traditional) Autoencoder
• Introduction
• Theory
• Architecture
• Application
• Examples

Introduction
• What is an autoencoder?
– An unsupervised method
• Applications
– Noise removal
– Dimensionality reduction
• Method
– Use noise-free ground-truth data (e.g. MNIST) plus self-generated noise to train the network
– The trained network can then remove noise from a corrupted input (e.g. handwritten characters); the output will be similar to the ground-truth data
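The training-data preparation described above (clean ground truth plus self-generated noise) can be sketched in Python; the noise level, random data, and image size are illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noise-free ground-truth images, flattened (e.g. MNIST is 28*28 = 784 pixels).
x_clean = rng.random((100, 784))          # stand-in for real MNIST data

# Self-generated noise: corrupt the clean images with Gaussian noise.
noise_level = 0.3                          # illustrative assumption
x_noisy = x_clean + noise_level * rng.standard_normal(x_clean.shape)
x_noisy = np.clip(x_noisy, 0.0, 1.0)       # keep pixel values in [0, 1]

# Training pairs: the network sees x_noisy as input and is asked to
# reproduce x_clean at the output, so it learns to remove the noise.
training_pairs = (x_noisy, x_clean)
```

The key point is that the target is the clean image, not the noisy one, which is what forces the network to learn denoising.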

Noise removal
• https://www.slideshare.net/billlangjun/simple-introduction-to-autoencoder
• [Figure: Noisy input → Encoder → Compressed representation (the feature we want to extract from the image) → Decoder → Denoised image]
• Result: plt.title('Original images: top rows, ' 'Corrupted Input: middle rows, ' 'Denoised Input: third rows')

Autoencoder structure
• An autoencoder is a feedforward neural network that learns to predict the input (corrupted by noise) itself in the output: y(i) = x(i)
• The input-to-hidden part corresponds to an encoder
• The hidden-to-output part corresponds to a decoder
• Input and output are of the same dimension and size
https://towardsdatascience.com/deep-autoencoders-using-tensorflow-c68f075fd1a3

Theory
• x → F → x′
• z = σ(Wx + b) ----------------(*)
• x′ = σ′(W′z + b′) ------------(**)
• Autoencoders are trained to minimize reconstruction error (such as squared error), often referred to as the loss L
• Combining (*) and (**):
• L(x, x′) = ||x − x′||²
•          = ||x − σ′(W′σ(Wx + b) + b′)||²
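As a concrete check on formulas (*) and (**), here is a minimal numpy sketch of one encode–decode pass and the loss; the sigmoid activation and the layer sizes are assumptions for illustration:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 2                      # illustrative sizes

x = rng.random(n_in)
W, b = rng.standard_normal((n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.standard_normal((n_in, n_hidden)), np.zeros(n_in)

z = sigmoid(W @ x + b)                     # (*)  encoder: z = sigma(W x + b)
x_rec = sigmoid(W2 @ z + b2)               # (**) decoder: x' = sigma'(W' z + b')

# Reconstruction loss L(x, x') = ||x - x'||^2
L = np.sum((x - x_rec) ** 2)
```

Note that W′ need not be tied to W; here the decoder has its own independent weights.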

Exercise 1
• How many input, hidden, and output layers are in the figure shown?
• How many neurons are in these layers?
• What is the relation between the number of input and output neurons?

Answer 1
• How many input, hidden, and output layers are in the figure shown?
– Answer: 1 input, 3 hidden, 1 output layers
• How many neurons are in these layers?
– Answer: input (4), hidden (3, 2, 3), output (4)
• What is the relation between the number of input and output neurons?
– Answer: same
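The 4-3-2-3-4 layout in the answer can be verified by listing the weight-matrix shapes between consecutive layers; this is only a shape sketch, not a trained network:

```python
import numpy as np

layer_sizes = [4, 3, 2, 3, 4]              # input, three hidden, output

# One weight matrix per connection between consecutive layers;
# a matrix mapping n inputs to m outputs has shape (m, n).
weights = [np.zeros((m, n)) for n, m in zip(layer_sizes, layer_sizes[1:])]
shapes = [w.shape for w in weights]

# Input and output widths match, as an autoencoder requires.
same_width = layer_sizes[0] == layer_sizes[-1]
```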

Architecture
• Encoder and decoder
• Training can use typical backpropagation methods
https://towardsdatascience.com/how-to-reduce-image-noises-by-autoencoder-65d5e6de543
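Training by backpropagation can be sketched as a plain gradient-descent loop; to keep the hand-derived gradients short, this uses a linear (identity-activation) autoencoder without biases, which is a simplifying assumption rather than the slides' exact network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, lr = 4, 2, 0.05

X = rng.random((200, n_in))                       # toy training data
W = 0.1 * rng.standard_normal((n_hidden, n_in))   # encoder weights
W2 = 0.1 * rng.standard_normal((n_in, n_hidden))  # decoder weights

def loss(W, W2):
    X_rec = X @ W.T @ W2.T                 # x' = W'(W x) with linear units
    return np.mean(np.sum((X - X_rec) ** 2, axis=1))

loss_before = loss(W, W2)
for _ in range(500):                       # plain gradient descent
    Z = X @ W.T                            # encode
    X_rec = Z @ W2.T                       # decode
    err = X_rec - X                        # dL/dx' (up to the factor 2/N)
    grad_W2 = 2 * err.T @ Z / len(X)       # backprop into decoder weights
    grad_W = 2 * (err @ W2).T @ X / len(X) # backprop into encoder weights
    W2 -= lr * grad_W2
    W -= lr * grad_W
loss_after = loss(W, W2)
```

With a nonlinear activation the same loop applies, with an extra activation-derivative factor in each gradient; in practice a framework's autodiff computes these gradients automatically.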