MIT 16.36: Communication Systems Engineering
Lecture 5: Source Coding
Eytan Modiano

Source coding

Source Alphabet {a1..aN}  -->  Encode  -->  Channel Alphabet {c1..cN}

• Source symbols
  – Letters of an alphabet, ASCII symbols, words of the English dictionary, etc.
  – Quantized voice
• Channel symbols
  – In general there can be an arbitrary number of channel symbols
  – Typically {0,1} for a binary channel
• Objectives of source coding
  – Unique decodability
  – Compression: encode the alphabet using the smallest average number of channel symbols

Compression

• Lossless compression
  – Enables error-free decoding
  – Unique decodability without ambiguity
• Lossy compression
  – Code may not be uniquely decodable, but with very high probability can be decoded correctly

Prefix (free) codes

• A prefix code is a code in which no codeword is a prefix of any other codeword
  – Prefix codes are uniquely decodable
  – Prefix codes are instantaneously decodable
• The following important inequality applies to prefix codes, and in general to all uniquely decodable codes

Kraft Inequality: Let $n_1 \ldots n_k$ be the lengths of the codewords in a prefix (or any uniquely decodable) code. Then,

  $\sum_{i=1}^{k} 2^{-n_i} \le 1$
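The Kraft inequality is easy to check mechanically. A minimal sketch (function names are mine, not from the slides); the example lengths {2,2,2,3,3} are the ones used later in this lecture:

```python
def kraft_sum(lengths):
    """Sum of 2^{-n_i} over the codeword lengths n_i."""
    return sum(2.0 ** -n for n in lengths)

def satisfies_kraft(lengths):
    """True iff a prefix code with these codeword lengths can exist."""
    return kraft_sum(lengths) <= 1.0

print(kraft_sum([2, 2, 2, 3, 3]))        # 3*(1/4) + 2*(1/8) = 1.0
print(satisfies_kraft([2, 2, 2, 3, 3]))  # True
print(satisfies_kraft([1, 1, 2]))        # False: 1/2 + 1/2 + 1/4 > 1
```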

Proof of Kraft Inequality

• Proof only for prefix codes
  – Can be extended for all uniquely decodable codes
• Map codewords onto a binary tree
  – Codewords must be leaves on the tree
  – A codeword of length $n_i$ is a leaf at depth $n_i$
• Let $n_k \ge n_{k-1} \ge \ldots \ge n_1$ => depth of tree = $n_k$
  – In a binary tree of depth $n_k$, up to $2^{n_k}$ leaves are possible (if all leaves are at depth $n_k$)
  – Each leaf at depth $n_i$ eliminates $2^{n_k - n_i}$ of the leaves at depth $n_k$
  – Hence,

  $\sum_{i=1}^{k} 2^{n_k - n_i} \le 2^{n_k} \;\Rightarrow\; \sum_{i=1}^{k} 2^{-n_i} \le 1$

Kraft Inequality – converse

• If a set of integers $\{n_1 \ldots n_k\}$ satisfies the Kraft inequality, then a prefix code can be found with codeword lengths $\{n_1 \ldots n_k\}$
  – Hence the Kraft inequality is a necessary and sufficient condition for the existence of a uniquely decodable code
• Proof is by construction of a code
  – Given $\{n_1 \ldots n_k\}$, starting with $n_1$, assign a node at level $n_i$ for the codeword of length $n_i$. The Kraft inequality guarantees that the assignment can be made

Example: n = {2,2,2,3,3} (verify that the Kraft inequality holds!)
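The constructive proof can be sketched in code. This is the standard "canonical" construction (my choice of method, consistent with, but not spelled out on, the slide): process lengths in increasing order and assign each codeword from a running binary counter, which is exactly the level-by-level node assignment described above:

```python
def construct_prefix_code(lengths):
    """Build a prefix code with the given codeword lengths.

    Codewords are assigned in order of increasing length from a running
    counter; left-shifting the counter when the length grows guarantees
    no earlier (shorter) codeword is a prefix of a later one, provided
    the Kraft inequality holds.
    """
    if sum(2.0 ** -n for n in lengths) > 1.0:
        raise ValueError("lengths violate the Kraft inequality")
    code = []
    value = 0      # running counter over tree nodes at the current depth
    prev_len = 0
    for n in sorted(lengths):
        value <<= (n - prev_len)          # descend to the new depth
        code.append(format(value, f'0{n}b'))
        value += 1                        # move to the next free node
        prev_len = n
    return code

# The slide's example: lengths {2,2,2,3,3}
print(construct_prefix_code([2, 2, 2, 3, 3]))  # ['00', '01', '10', '110', '111']
```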

Average codeword length

• The Kraft inequality does not tell us anything about the average length of a codeword. The following theorem gives a tight lower bound

Theorem: Given a source with alphabet $\{a_1 \ldots a_k\}$, probabilities $\{p_1 \ldots p_k\}$, and entropy $H(X)$, the average length $\bar{n} = \sum_{i=1}^{k} p_i n_i$ of a uniquely decodable binary code satisfies:

  $\bar{n} \ge H(X)$

Proof:

  $H(X) - \bar{n} = \sum_{i=1}^{k} p_i \log_2 \frac{1}{p_i} - \sum_{i=1}^{k} p_i n_i = \sum_{i=1}^{k} p_i \log_2 \frac{2^{-n_i}}{p_i}$

  Using the inequality $\ln(x) \le x - 1$, i.e. $\log_2(x) \le (x-1)\log_2 e$:

  $H(X) - \bar{n} \le \log_2 e \sum_{i=1}^{k} p_i \left( \frac{2^{-n_i}}{p_i} - 1 \right) = \log_2 e \left( \sum_{i=1}^{k} 2^{-n_i} - 1 \right) \le 0$

  where the last step follows from the Kraft inequality.
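The bound is easy to verify numerically. A small sketch using a hypothetical 5-letter source (the probabilities are my own example) matched to the prefix-code lengths {2,2,2,3,3} from the earlier slide:

```python
import math

def entropy(probs):
    """H(X) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def avg_length(probs, lengths):
    """Average codeword length n_bar = sum_i p_i * n_i."""
    return sum(p * n for p, n in zip(probs, lengths))

p = [0.35, 0.25, 0.2, 0.1, 0.1]   # hypothetical source probabilities
n = [2, 2, 2, 3, 3]
print(f"H(X)  = {entropy(p):.3f} bits")      # about 2.159
print(f"n_bar = {avg_length(p, n):.3f} bits")  # 2.200, always >= H(X)
```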

Average codeword length

• Can we construct codes that come close to H(X)?

Theorem: Given a source with alphabet $\{a_1 \ldots a_k\}$, probabilities $\{p_1 \ldots p_k\}$, and entropy $H(X)$, it is possible to construct a prefix (hence uniquely decodable) code of average length satisfying:

  $\bar{n} < H(X) + 1$

Proof (Shannon-Fano codes):

  Let $n_i = \lceil \log_2(1/p_i) \rceil \;\Rightarrow\; n_i \ge \log_2(1/p_i) \;\Rightarrow\; 2^{-n_i} \le p_i \;\Rightarrow\; \sum_{i=1}^{k} 2^{-n_i} \le \sum_{i=1}^{k} p_i = 1$

  => Kraft inequality satisfied => can find a prefix code with lengths $n_i$

  Also, $n_i = \lceil \log_2(1/p_i) \rceil < \log_2(1/p_i) + 1$, so

  $\bar{n} = \sum_{i=1}^{k} p_i n_i < \sum_{i=1}^{k} p_i \left( \log_2 \frac{1}{p_i} + 1 \right) = H(X) + 1$

  Hence, $H(X) \le \bar{n} < H(X) + 1$
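The Shannon-Fano length assignment from the proof is one line of code. A sketch (the probability vector is my own example):

```python
import math

def shannon_fano_lengths(probs):
    """Codeword lengths n_i = ceil(log2(1/p_i)), as in the proof above."""
    return [math.ceil(math.log2(1.0 / p)) for p in probs]

p = [0.35, 0.25, 0.2, 0.1, 0.1]   # hypothetical source probabilities
n = shannon_fano_lengths(p)
print(n)                           # [2, 2, 3, 4, 4]
# These lengths always satisfy the Kraft inequality:
print(sum(2.0 ** -ni for ni in n) <= 1.0)   # True
```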

Getting Closer to H(X)

• Consider blocks of N source letters
  – There are $K^N$ possible N-letter blocks (N-tuples)
  – Let Y be the "new" source alphabet of N-letter blocks
  – If each of the letters is independently generated, $H(Y) = H(x_1 \ldots x_N) = N \cdot H(X)$
• Encode Y using the same procedure as before to obtain:

  $H(Y) \le \bar{n}_y < H(Y) + 1$
  $\Rightarrow N \cdot H(X) \le \bar{n}_y < N \cdot H(X) + 1$
  $\Rightarrow H(X) \le \bar{n}_y / N < H(X) + 1/N$

  where the last inequality is obtained because each letter of Y corresponds to N letters of the original source
• We can now take the block length (N) to be arbitrarily large and get arbitrarily close to H(X)

Huffman codes

• Huffman codes are special prefix codes that can be shown to be optimal (minimize average codeword length)

  [Diagram: average-length scale from H(X) to H(X)+1, with Huffman codes lying closer to H(X) than Shannon/Fano codes]

Huffman Algorithm:
1) Arrange source letters in decreasing order of probability ($p_1 \ge p_2 \ge \ldots \ge p_k$)
2) Assign '0' to the last digit of $X_k$ and '1' to the last digit of $X_{k-1}$
3) Combine $p_k$ and $p_{k-1}$ to form a new set of probabilities $\{p_1, p_2, \ldots, p_{k-2}, (p_{k-1} + p_k)\}$
4) If left with just one letter then done; otherwise go to step 1 and repeat
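The four steps above can be sketched with a min-heap, which keeps the letters sorted by probability so each iteration merges the two least probable entries, prepending the bit assigned in step 2. A minimal sketch (the probability vector is my own example, the same hypothetical source used earlier):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Huffman coding per the slide's algorithm: repeatedly merge the two
    least probable letters, prepending '0' to one branch and '1' to the
    other, until a single combined letter remains."""
    tiebreak = count()  # breaks probability ties so dicts are never compared
    # Each heap entry: (probability, tiebreak, {letter_index: codeword})
    heap = [(p, next(tiebreak), {i: ''}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # least probable
        p1, _, c1 = heapq.heappop(heap)   # second least probable
        merged = {i: '0' + w for i, w in c0.items()}
        merged.update({i: '1' + w for i, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

p = [0.35, 0.25, 0.2, 0.1, 0.1]   # hypothetical source probabilities
code = huffman_code(p)
for i in sorted(code):
    print(f"a{i+1}: {code[i]}")
print("avg length:", sum(p[i] * len(code[i]) for i in code))
```

For this source the resulting lengths are {2,2,2,3,3} with average length 2.2 bits, matching the lower-bound example above.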