Yury Polyanskiy
        Associate Professor

              Curriculum Vitae
              Publications & Preprints

Research:   Information theory, coding theory & related fields

Education:     2010   Princeton University, Ph.D. (Advisers: H. V. Poor and S. Verdú)
2005   MIPT, M. S. (honors), Dept. General and Applied Physics

Address:     32-D668, MIT             (also: E17-457, IDSS, MIT)
77 Massachusetts Avenue
Cambridge, MA 02139
Phone:     (617) 324-0047


Yury Polyanskiy is an Associate Professor of Electrical Engineering and Computer Science in the Dept. of EECS at MIT and a member of LIDS, IDSS, and the Center for Statistics. Yury received the M.S. degree in applied mathematics and physics from the Moscow Institute of Physics and Technology, Moscow, Russia, in 2005 and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 2010. In 2000-2005 he led the development of embedded software in the Department of Surface Oilfield Equipment, Borets Company LLC (Moscow). Currently, his research focuses on basic questions in information theory, error-correcting codes, wireless communication, and fault-tolerant and defect-tolerant circuits. Dr. Polyanskiy won the 2013 NSF CAREER award and the 2011 IEEE Information Theory Society Paper Award. Back when he still had spare time, he enjoyed playing with mathematics (especially algebraic geometry and algebraic topology) and hacking the Linux kernel.
For more information check his CV and papers.

Channel dispersion

My current research interests spin around finding exact and approximate answers to non-asymptotic questions in communication theory. This direction has been explored in my thesis from an information-theoretic (fundamental limits) angle. For example, the capacity of an additive white Gaussian noise (AWGN) channel with SNR = 0 dB is given by Shannon's formula

    C = (1/2) log₂(1 + SNR) = 0.5 bit per channel use.
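Shannon's capacity formula for the AWGN channel can be checked numerically; here is a minimal Python sketch (the helper name `awgn_capacity` is mine, not from the literature):

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of the AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

snr = 10 ** (0 / 10)      # 0 dB corresponds to linear SNR = 1
C = awgn_capacity(snr)    # 0.5 bit per channel use
```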

This classical result means that one can communicate at rates arbitrarily close to 0.5 bits per channel use with an arbitrarily small probability of error ε in the limit of an infinite number of channel uses (and no such communication is possible at any higher rate). This fundamental observation, however, means little for the practical engineer, who is always limited by delay requirements. In such circumstances he would rather ask the question:

Assume I agree to step back from the capacity by a factor η = 0.9 and I also agree to tolerate a probability of error ε = 10⁻³. What is the minimum number n of channel uses that I need?

Obtaining an exact solution is beyond the reach of current computers. Computationally feasible bounds, however, show that [PPV10]

2550   ≤   n   ≤   2850

These bounds are hard to compute and even harder to analyze. One can, however, get a good approximation and insight into the behavior of n via the following expression:

    log₂ M*(n, ε)  ≈  nC − √(nV) Q⁻¹(ε) + (1/2) log₂ n

Here C is Shannon's capacity, Q⁻¹ is the functional inverse of the standard Gaussian complementary CDF, and V is a new fundamental characteristic of the channel, the channel dispersion, which for the AWGN channel is given by [PPV10]

    V = SNR (SNR + 2) / (2 (SNR + 1)²) · log₂² e
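The dispersion of the AWGN channel can likewise be evaluated numerically; a minimal sketch using the [PPV10] expression (the helper name `awgn_dispersion` is mine):

```python
import math

def awgn_dispersion(snr):
    """Channel dispersion V of the AWGN channel, in bits^2 [PPV10]."""
    return snr * (snr + 2) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2

V = awgn_dispersion(1.0)   # SNR = 0 dB; V is roughly 0.78 bits^2
```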

In our running example of the AWGN with SNR = 0 dB, η = 0.9, and ε = 10⁻³, one gets

n ≈ 2750

an excellent match with the firm bounds. The closest classical approach to the same question (that of error exponents) gives an estimate

n ≈ 4120

which is evidently way off.
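The estimate n ≈ 2750 can be reproduced by solving n·ηC = nC − √(nV)·Q⁻¹(ε) + (1/2)log₂n for n. A minimal sketch by fixed-point iteration (the function name `blocklength_estimate` is mine; C and V are plugged in numerically for the 0 dB example):

```python
import math
from statistics import NormalDist

def blocklength_estimate(C, V, eta, eps):
    """Solve n*eta*C = n*C - sqrt(n*V)*Qinv(eps) + 0.5*log2(n)
    for n by fixed-point iteration (normal approximation)."""
    q = NormalDist().inv_cdf(1.0 - eps)   # Q^{-1}(eps)
    n = 1000.0                            # starting guess
    for _ in range(200):
        n = (math.sqrt(n * V) * q - 0.5 * math.log2(n)) / ((1.0 - eta) * C)
    return round(n)

# AWGN at 0 dB: C = 0.5 bit, V ~ 0.78 bits^2
n = blocklength_estimate(0.5, 0.7805, 0.9, 1e-3)  # roughly 2750
```

The iteration is contractive here (the update shrinks perturbations by about half per step), so it settles quickly on the fixed point.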

For a much more detailed introduction to this line of research please check Chapter 1 of my thesis (requires no information theory background!).

[Figure: Bounds on maximal achievable rate for the AWGN(0 dB) channel]