16.36: Communication Systems Engineering
Lecture 5: Source Coding
Eytan Modiano


Slide 2: Source coding

Source Alphabet {a1..aN}  ->  Encode  ->  Channel Alphabet {c1..cN}

• Source symbols
  – Letters of an alphabet, ASCII symbols, the English dictionary, etc.
  – Quantized voice

• Channel symbols
  – In general, can have an arbitrary number of channel symbols
  – Typically {0,1} for a binary channel

• Objectives of source coding
  – Unique decodability
  – Compression: encode the alphabet using the smallest average number of channel symbols


Slide 3: Compression

• Lossless compression
  – Enables error-free decoding
  – Uniquely decodable: every coded sequence can be decoded without ambiguity

• Lossy compression
  – Code may not be uniquely decodable, but with very high probability can be decoded correctly


Slide 4: Prefix (free) codes

• A prefix code is a code in which no codeword is a prefix of any other codeword
  – Prefix codes are uniquely decodable
  – Prefix codes are instantaneously decodable

• The following important inequality applies to prefix codes, and in general to all uniquely decodable codes

Kraft Inequality: Let n_1, ..., n_k be the lengths of the codewords in a prefix (or any uniquely decodable) code. Then

    \sum_{i=1}^{k} 2^{-n_i} \le 1
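As a quick sanity check (a Python sketch, not part of the original slides), the inequality is a one-line test on a candidate set of codeword lengths:

```python
# Check the Kraft inequality: sum_i 2^(-n_i) <= 1.
def satisfies_kraft(lengths):
    """True if a uniquely decodable binary code could have these lengths."""
    return sum(2 ** -n for n in lengths) <= 1

# {1, 2, 3, 3} sums to 1/2 + 1/4 + 1/8 + 1/8 = 1, so it passes;
# {1, 2, 2, 3} sums to 9/8 > 1, so no prefix code has these lengths.
print(satisfies_kraft([1, 2, 3, 3]))  # True
print(satisfies_kraft([1, 2, 2, 3]))  # False
```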


Slide 5: Proof of Kraft Inequality

• Proof given only for prefix codes
  – Can be extended to all uniquely decodable codes

• Map the codewords onto a binary tree
  – Codewords must be leaves on the tree
  – A codeword of length n_i is a leaf at depth n_i

• Let n_k \ge n_{k-1} \ge ... \ge n_1  =>  depth of tree = n_k
  – In a binary tree of depth n_k, up to 2^{n_k} leaves are possible (if all leaves are at depth n_k)
  – Each leaf at depth n_i eliminates 2^{n_k - n_i} of the leaves at depth n_k
  – Hence,

    \sum_{i=1}^{k} 2^{n_k - n_i} \le 2^{n_k}  \Rightarrow  \sum_{i=1}^{k} 2^{-n_i} \le 1


Slide 6: Kraft Inequality - converse

• If a set of integers {n_1, ..., n_k} satisfies the Kraft inequality, then a prefix code can be found with codeword lengths {n_1, ..., n_k}
  – Hence the Kraft inequality is a necessary and sufficient condition for the existence of a uniquely decodable code

• Proof is by construction of a code (see the sketch below)
  – Given {n_1, ..., n_k}, starting with n_1, assign a node at level n_i to the codeword of length n_i. The Kraft inequality guarantees that the assignment can be made

Example: n = {2, 2, 2, 3, 3} (verify that the Kraft inequality holds!)

(Figure: binary code tree with codewords of lengths n_1, ..., n_5 assigned as leaves.)
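The constructive proof can be mirrored in a few lines of Python (a sketch; the function name and the canonical left-to-right assignment order are illustrative assumptions, not the lecture's notation):

```python
# Assign codewords in order of increasing length, taking the next free node
# at each tree level; assumes the lengths satisfy the Kraft inequality.
def code_from_lengths(lengths):
    lengths = sorted(lengths)
    code, next_value = [], 0
    prev_len = lengths[0]
    for n in lengths:
        next_value <<= (n - prev_len)   # descend to level n in the tree
        code.append(format(next_value, '0{}b'.format(n)))
        next_value += 1                 # move to the next free node at level n
        prev_len = n
    return code

# The slide's example: lengths {2,2,2,3,3} satisfy Kraft (3/4 + 2/8 = 1).
print(code_from_lengths([2, 2, 2, 3, 3]))  # ['00', '01', '10', '110', '111']
```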


Slide 7: Average codeword length

• The Kraft inequality does not tell us anything about the average length of a codeword. The following theorem gives a tight lower bound

Theorem: Given a source with alphabet {a_1, ..., a_k}, probabilities {p_1, ..., p_k}, and entropy H(X), the average length of a uniquely decodable binary code satisfies

    \bar{n} \ge H(X)

Proof:

    H(X) - \bar{n} = \sum_{i=1}^{k} p_i \log\frac{1}{p_i} - \sum_{i=1}^{k} p_i n_i = \sum_{i=1}^{k} p_i \log\frac{2^{-n_i}}{p_i}

Using the inequality log(x) \le x - 1 (exact for the natural log; for log base 2 the extra constant log_2(e) does not affect the sign),

    H(X) - \bar{n} \le \sum_{i=1}^{k} p_i \left( \frac{2^{-n_i}}{p_i} - 1 \right) = \sum_{i=1}^{k} 2^{-n_i} - 1 \le 0,

where the last step follows from the Kraft inequality.
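For a concrete check (the distribution is my own example, not from the lecture), a dyadic source meets the bound with equality:

```python
# Compare H(X) with the average length of a prefix code: 0, 10, 110, 111.
import math

p = [0.5, 0.25, 0.125, 0.125]   # source probabilities
lengths = [1, 2, 3, 3]          # codeword lengths of the prefix code above

H = -sum(pi * math.log2(pi) for pi in p)
n_bar = sum(pi * ni for pi, ni in zip(p, lengths))
print(H, n_bar)   # both 1.75: dyadic probabilities achieve n_bar = H(X)
```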


Slide 8: Average codeword length

• Can we construct codes that come close to H(X)?

Theorem: Given a source with alphabet {a_1, ..., a_k}, probabilities {p_1, ..., p_k}, and entropy H(X), it is possible to construct a prefix (hence uniquely decodable) code of average length satisfying

    \bar{n} < H(X) + 1

Proof (Shannon-Fano codes):

Let n_i = \lceil \log(1/p_i) \rceil. Then

    n_i \ge \log(1/p_i)  \Rightarrow  2^{-n_i} \le p_i  \Rightarrow  \sum_{i=1}^{k} 2^{-n_i} \le \sum_{i=1}^{k} p_i = 1

=> the Kraft inequality is satisfied
=> we can find a prefix code with lengths n_i

Now, n_i = \lceil \log(1/p_i) \rceil < \log(1/p_i) + 1, so

    \bar{n} = \sum_{i=1}^{k} p_i n_i < \sum_{i=1}^{k} p_i \left( \log\frac{1}{p_i} + 1 \right) = H(X) + 1

Hence, H(X) \le \bar{n} < H(X) + 1.
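A sketch of the Shannon-Fano length assignment (the distribution below is an illustrative assumption, not from the slides):

```python
# Shannon-Fano lengths n_i = ceil(log2(1/p_i)) and the resulting bounds.
import math

def shannon_fano_lengths(p):
    return [math.ceil(math.log2(1 / pi)) for pi in p]

p = [0.4, 0.3, 0.2, 0.1]
lengths = shannon_fano_lengths(p)          # [2, 2, 3, 4]
H = -sum(pi * math.log2(pi) for pi in p)   # ~1.846 bits
n_bar = sum(pi * ni for pi, ni in zip(p, lengths))  # 2.4
print(H, n_bar)  # H(X) <= n_bar < H(X) + 1; Kraft holds: sum 2^-n_i = 11/16
```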


Slide 9: Getting Closer to H(X)

• Consider blocks of N source letters
  – There are K^N possible N-letter blocks (N-tuples)
  – Let Y be the "new" source alphabet of N-letter blocks
  – If each of the letters is independently generated, H(Y) = H(x_1..x_N) = N \cdot H(X)

• Encode Y using the same procedure as before to obtain

    H(Y) \le \bar{n}_y < H(Y) + 1
    \Rightarrow  N \cdot H(X) \le \bar{n}_y < N \cdot H(X) + 1
    \Rightarrow  H(X) \le \bar{n} < H(X) + 1/N

  where the last inequality is obtained because each letter of Y corresponds to N letters of the original source (\bar{n} = \bar{n}_y / N)

• We can now take the block length (N) to be arbitrarily large and get arbitrarily close to H(X)
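The blocking argument can be illustrated numerically (a sketch; the two-letter source is my own example) by applying the Shannon-Fano lengths to N-letter blocks:

```python
# Per-letter average length of Shannon-Fano codes on N-letter blocks.
import math
from itertools import product

p = {'a': 0.9, 'b': 0.1}
H = -sum(pi * math.log2(pi) for pi in p.values())   # ~0.469 bits/letter
print('H(X) =', H)

for N in (1, 2, 4):
    # probability of each N-tuple under independent letters
    blocks = [math.prod(p[s] for s in blk) for blk in product(p, repeat=N)]
    n_bar = sum(q * math.ceil(math.log2(1 / q)) for q in blocks) / N
    print(N, n_bar)   # per-letter length approaches H(X) as N grows
```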


Slide 10: Huffman codes

• Huffman codes are special prefix codes that can be shown to be optimal (they minimize the average codeword length)

(Figure: average-length scale from H(X) to H(X)+1; Huffman codes lie in this range, at or below Shannon/Fano codes.)

Huffman Algorithm (a sketch implementation follows):
1) Arrange the source letters in decreasing order of probability (p_1 \ge p_2 \ge ... \ge p_k)
2) Assign '0' to the last digit of X_k and '1' to the last digit of X_{k-1}
3) Combine p_k and p_{k-1} to form a new set of probabilities {p_1, p_2, ..., p_{k-2}, (p_{k-1} + p_k)}
4) If left with just one letter, we are done; otherwise go to step 1 and repeat
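A compact implementation of the algorithm (a sketch, assuming Python's standard heapq; the merge order and tie-breaking are illustrative choices):

```python
# Huffman coding: repeatedly merge the two least-probable letters,
# prepending '0' to codewords in the less probable subtree and '1' in the other.
import heapq
from itertools import count

def huffman(probs):
    """probs: dict letter -> probability. Returns dict letter -> codeword."""
    tiebreak = count()  # keeps heap entries comparable when probabilities tie
    heap = [(p, next(tiebreak), {x: ''}) for x, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # least probable: gets a '0'
        p1, _, c1 = heapq.heappop(heap)   # next least probable: gets a '1'
        merged = {x: '0' + w for x, w in c0.items()}
        merged.update({x: '1' + w for x, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

print(huffman({'a': 0.5, 'b': 0.25, 'c': 0.15, 'd': 0.1}))
# e.g. {'a': '0', 'b': '10', 'd': '110', 'c': '111'}: average length 1.75,
# which falls between H(X) ~ 1.74 and H(X) + 1, as the theorem requires.
```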
