"Big Data" course teaching resource (reference): Learning to Hash for Big Data


Learning to Hash for Big Data

Wujun Li
LAMDA Group
Department of Computer Science and Technology, Nanjing University
National Key Laboratory for Novel Software Technology
liwujun@nju.edu.cn
May 09, 2015

Li (http://cs.nju.edu.cn/lwj), Learning to Hash, LAMDA, CS, NJU



Outline
1 Introduction
  - Problem Definition
  - Existing Methods
2 Scalable Graph Hashing with Feature Transformation
  - Motivation
  - Model and Learning
  - Experiment
3 Conclusion
4 Reference


Introduction: Problem Definition

Nearest Neighbor Search (Retrieval)
- Given a query point q, return the points closest (most similar) to q in the database (e.g., images).
- Underlies many machine learning, data mining, and information retrieval problems.

Challenge in Big Data Applications:
- Curse of dimensionality
- Storage cost
- Query speed
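The baseline the slide contrasts against is an exhaustive linear scan over the database. A minimal sketch with synthetic data (the database size, 64-dim features, and k are illustrative assumptions, not from the slides):

```python
import numpy as np

def nearest_neighbors(query, database, k=3):
    """Exhaustive (linear-scan) nearest neighbor search.
    Cost is O(n * d) per query, which is what motivates hashing for big data."""
    dists = np.linalg.norm(database - query, axis=1)  # Euclidean distance to every point
    return np.argsort(dists)[:k]                      # indices of the k closest points

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64))          # 1000 points, 64-dim features
q = db[42] + 0.01 * rng.standard_normal(64)   # a query lying very close to point 42

print(nearest_neighbors(q, db, k=3)[0])  # 42
```

Both the per-query time and the storage of the raw vectors grow with the database, which leads directly to the dimensionality/storage/speed challenges listed above.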

Introduction: Problem Definition

Similarity Preserving Hashing
h(Statue of Liberty) = 10001010
h(Napoleon)          = 01100001
h(Napoleon)          = 01100101  (one flipped bit)
Codes of dissimilar images (Statue of Liberty vs. Napoleon) should be very different; codes of similar images (the two Napoleon photos) should be similar.

Introduction: Problem Definition

Reduce Dimensionality and Storage Cost
- Gist vectors for 10 million images: about 20 GB
- After binary reduction: about 160 MB
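The slide's figures are consistent with, for example, 512-dimensional float32 Gist vectors compressed to 128-bit codes; both of those sizes are assumptions for illustration, since the slide only gives the totals:

```python
n_images = 10_000_000

# Assumed representation sizes (not stated on the slide):
# a 512-dim float32 Gist vector vs. a 128-bit binary code per image.
gist_bytes = 512 * 4    # 2048 bytes per image
code_bytes = 128 // 8   # 16 bytes per image

print(n_images * gist_bytes / 1e9)  # 20.48 (GB, matching "about 20 GB")
print(n_images * code_bytes / 1e6)  # 160.0 (MB, matching "160 MB")
```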


Introduction: Problem Definition

Querying
Hamming distance:
- ||01101110, 00101101||_H = 3
- ||11011, 01011||_H = 1
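The two distances above can be checked with XOR plus a popcount, which is how binary codes are compared on packed bits in practice (a minimal sketch over bit strings):

```python
def hamming(a: str, b: str) -> int:
    """Hamming distance between two equal-length binary codes:
    XOR the codes, then count the set bits of the result."""
    assert len(a) == len(b)
    return bin(int(a, 2) ^ int(b, 2)).count("1")

print(hamming("01101110", "00101101"))  # 3
print(hamming("11011", "01011"))        # 1
```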


Introduction: Problem Definition

Fast Query Speed
- With a hashing-based index, we can achieve constant or sub-linear search time complexity.
- Even exhaustive search becomes acceptable, because distance calculation on binary codes is cheap.
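One way the constant-time behavior arises is a hash-table index keyed by the binary codes: a query probes its own bucket, plus all buckets within a small Hamming radius. A toy sketch (the 4-bit codes and item IDs are made up for illustration):

```python
from collections import defaultdict
from itertools import combinations

# Build a hash-table index: each binary code is a bucket key.
index = defaultdict(list)
codes = {0: 0b0110, 1: 0b0111, 2: 0b1001, 3: 0b0110}
for item_id, code in codes.items():
    index[code].append(item_id)

def query(code, radius=1, n_bits=4):
    """Probe the query's bucket plus every bucket within the given
    Hamming radius; cost depends on the radius, not on database size."""
    hits = list(index.get(code, []))
    for r in range(1, radius + 1):
        for bits in combinations(range(n_bits), r):
            flipped = code
            for b in bits:
                flipped ^= 1 << b          # flip this subset of bits
            hits.extend(index.get(flipped, []))
    return hits

print(query(0b0110))  # [0, 3, 1]: exact bucket {0, 3}, then bucket 0b0111 adds item 1
```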


Introduction: Problem Definition

Hash Function Learning
Easy or hard?
- Hard: it is a discrete optimization problem.
- Easy by approximation: two stages of hash function learning.
  - Projection stage (dimensionality reduction): given a point x, each projected dimension i is associated with a real-valued projection function f_i(x) (e.g., f_i(x) = w_i^T x).
  - Quantization stage: turn the real values into binary bits.
However, there exist essential differences between metric learning (dimensionality reduction) and learning to hash; simply adapting traditional metric learning is not enough.
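The two stages above can be sketched in a few lines; random projection directions W stand in for whatever a learned method would produce, and sign thresholding is one common quantizer (the data sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 64))     # 5 data points, 64-dim features

# Projection stage: f_i(x) = w_i^T x for each of n_bits directions.
# Random W is a placeholder; learned methods would fit W to the data.
n_bits = 8
W = rng.standard_normal((64, n_bits))
projected = X @ W                    # real-valued, shape (5, n_bits)

# Quantization stage: threshold the real values at zero to get bits.
codes = (projected > 0).astype(np.uint8)
print(codes.shape)  # (5, 8)
```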


Introduction: Existing Methods

Data-Independent Methods
The hash function family is defined independently of the training dataset:
- Locality-sensitive hashing (LSH) (Gionis et al., 1999; Andoni and Indyk, 2008) and its extensions (Datar et al., 2004; Kulis and Grauman, 2009; Kulis et al., 2009).
- SIKH: shift-invariant kernel hashing (Raginsky and Lazebnik, 2009).
Hash function: random projections.
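For the sign-random-projection LSH family, a standard property (from the SimHash analysis, stated here for context rather than taken from the slides) is that two points collide on a bit with probability 1 - theta/pi, where theta is the angle between them. A quick empirical check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unit vectors at a known angle theta = pi/4.
theta = np.pi / 4
x = np.array([1.0, 0.0])
y = np.array([np.cos(theta), np.sin(theta)])

# Draw many random hyperplanes; a bit "collides" when both points
# fall on the same side, i.e. sign(w.x) == sign(w.y).
W = rng.standard_normal((200_000, 2))
collisions = np.mean(np.sign(W @ x) == np.sign(W @ y))

print(collisions)         # empirical collision rate, close to theory
print(1 - theta / np.pi)  # theoretical rate: 0.75
```

This is what "data-independent" means concretely: the collision probability depends only on the geometry of the pair, never on the training dataset.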
