Provide an introduction to the theory of Detection and Estimation, with applications mainly in the area of digital communications.
Syllabus (2 hours per class)
CLASS 1:
First hour: Course organization, objectives, textbooks, exam details. Sneak preview of the course, motivations, applications.
Second hour: basic probability theory refresher (total probability, Bayes rule in discrete/continuous/mixed versions, double conditioning). A first elementary exercise on binary hypothesis testing.
CLASS 2:
First hour: completion of the proposed exercise.
Second hour: Bayes Tests.
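A minimal numerical sketch of a Bayes binary test (hypothetical setup: H0 ~ N(0,1), H1 ~ N(1,1), equal priors, uniform costs, so the Bayes test reduces to a minimum-Pe threshold at 1/2):

```python
import numpy as np
from math import erfc, sqrt

# Bayes (MAP) binary test sketch: H0 ~ N(0,1), H1 ~ N(1,1),
# equal priors, uniform costs => decide H1 iff the likelihood ratio
# L(x) = exp(x - 1/2) exceeds 1, i.e. iff x > 1/2.
rng = np.random.default_rng(0)
n = 100_000
labels = rng.integers(0, 2, n)              # true hypothesis for each trial
x = rng.normal(labels.astype(float), 1.0)   # observation with mean 0 or 1

decisions = (x > 0.5).astype(int)           # threshold test
pe = np.mean(decisions != labels)           # empirical error probability
pe_theory = 0.5 * erfc(0.5 / sqrt(2))       # Q(1/2), the minimum error probability
```

With uniform costs the Bayes test minimizes the error probability; the empirical error approaches Q(1/2) ≈ 0.31.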
CLASS 3:
First hour: exercise on Bayes Test (Laplacian distributions).
Second hour: MiniMax Test.
CLASS 4:
First hour: exercise on Minimax.
Second hour: Neyman Pearson Test with example.
CLASS 5:
First hour: ROC properties. NP test with discrete RVs: randomization.
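The need for randomization with discrete observations can be seen in a toy sketch (hypothetical numbers: a single Bernoulli observation, H0: p = 0.5, H1: p = 0.8, target false-alarm level 0.2, which no deterministic test achieves):

```python
# Randomized Neyman-Pearson sketch (hypothetical numbers): one Bernoulli
# observation, H0: p = 0.5, H1: p = 0.8. The likelihood ratio takes only
# two values, so deterministic tests achieve alpha in {0, 0.5, 1} only.
# For alpha = 0.2, reject H0 when x = 1 with probability gamma such that
# gamma * P(x=1 | H0) = alpha.
alpha = 0.2
gamma = alpha / 0.5          # randomization probability
power = gamma * 0.8          # detection probability under H1
```

The randomized test hits the prescribed level exactly, with detection probability gamma * 0.8 = 0.32.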
Second hour: Exercise on Bayes, Minimax, Neyman-Pearson tests.
CLASS 6:
First hour: Multiple hypothesis testing, Bayesian approach. MAP and ML tests. Decision regions, boundaries among regions: examples in R^1 and R^2.
Second hour: exercise: 3 equally-likely signal "hypotheses" -A,0,A in AWGN noise: Bayes rule (ML) based on the sample-mean (sufficient statistic).
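The ML rule of this exercise can be sketched as a nearest-level decision on the sample mean (the helper name ml_decide is ours):

```python
import numpy as np

def ml_decide(y, A):
    """ML decision among equally likely levels -A, 0, +A in AWGN:
    the sample mean is a sufficient statistic, and ML picks the
    nearest level (equivalently, thresholds at -A/2 and +A/2)."""
    m = np.mean(y)
    levels = np.array([-A, 0.0, A])
    return float(levels[np.argmin(np.abs(levels - m))])
```

For example, a sample mean of 0.05 with A = 1 falls between the thresholds -A/2 and +A/2, so the rule decides for the level 0.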
CLASS 7:
First hour: Minimax in multiple hypotheses. Sufficient statistics: introduction.
Second hour: Factorization theorem, irrelevance theorem. Reversibility theorem.
Gaussian vectors refresher: joint PDF, MGF/CF.
CLASS 8:
First hour: Summary of known main results on Gaussian random vectors: Gaussian MGF, 4th order statistics from moment theorem, MGF-based proof of Gaussianity of linear transformations. Examples of Gaussian vectors: Fading Channel.
Second hour: A: MAP Test with Gaussian signals. B: Additive Gaussian noise channel. Decision regions are hyperplanes.
CLASS 9:
First hour: examples of decision regions. Optimal detection of continuous-time signals: motivation for their discrete representation.
Second hour: Discrete signal representation: definitions. Inner product, norm, distance, linear independence. Orthonormal bases and signal coordinates.
CLASS 10:
Gram-Schmidt orthonormalization. Detailed example. Operations on signals, and dual operations on signal images.
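A compact sketch of the Gram-Schmidt procedure on sampled signals (the helper name gram_schmidt is ours; linearly dependent inputs are skipped):

```python
import numpy as np

def gram_schmidt(signals, tol=1e-12):
    """Classical Gram-Schmidt sketch: rows of `signals` are sampled
    waveforms; returns an orthonormal basis (as rows), skipping
    inputs that are linearly dependent on the previous ones."""
    basis = []
    for s in np.asarray(signals, dtype=float):
        v = s - sum(np.dot(s, e) * e for e in basis)  # subtract projections
        norm = np.linalg.norm(v)
        if norm > tol:                                # keep only new directions
            basis.append(v / norm)
    return np.array(basis)
```

Three input signals of which one is a scalar multiple of another yield a two-element orthonormal basis.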
CLASS 11:
Unitary matrices in change of basis. Orthogonal matrices: rotations and reflections. Orthogonality principle. Projection theorem. Interpretation of the Gram-Schmidt procedure as repeated projections. Complete ON bases: motivations and definition.
CLASS 12:
First hour: exercises: 1. product of unitary matrices is unitary. 2. unitary matrix preserves norm of vectors. Projection matrices, eigenvectors, eigenvalues, spectral decomposition. Properties.
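The two exercises, together with the projection-matrix properties, can be checked numerically (a sketch with hypothetical 3x3 examples):

```python
import numpy as np

rng = np.random.default_rng(2)

# Exercise 2 check: a (real) unitary matrix preserves the norm.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
x = rng.normal(size=3)
norm_preserved = np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))

# Projection matrix onto span{u}: P = u u^T / (u^T u).
u = np.array([1.0, 2.0, 2.0])
P = np.outer(u, u) / (u @ u)
idempotent = np.allclose(P @ P, P)             # P^2 = P
eigvals = np.sort(np.linalg.eigvalsh(P))       # eigenvalues are 0 and 1
```

The rank-one projector has eigenvalue 1 on the projected direction and 0 on its orthogonal complement, consistent with the spectral decomposition.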
Second hour: examples of complete bases in L2: the space of band-limited functions, evaluation of series coefficients, sampling theorem, ON check. More examples of complete bases: Legendre, Hermite, Laguerre.
CLASS 13:
Discrete representation of a stochastic process. Mean and covariance of process coefficients. Properties of covariance matrices for finite random vectors: Hermitian symmetry and related properties. Whitening.
Karhunen-Loève (KL) theorem for whitening of the discrete process representation (hint of proof). Statement of Mercer's theorem. KL bases.
CLASS 14:
Summary of useful matrices: normal matrices and their subclasses (unitary, Hermitian, skew-Hermitian). If the noise process is white, any complete ON basis is a KL basis. Digital modulation. Example: QPSK. Digital demodulation with a correlator bank or a matched-filter bank.
CLASS 15:
First hour: Matched filter properties. Maximum SNR; physical reason for the output peak at t = T.
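A discrete-time sketch of the matched filter (hypothetical pulse): the filter is the time-reversed pulse, and the noiseless output peaks at the end of the pulse, where it equals the pulse energy:

```python
import numpy as np

# Matched-filter sketch: for a known pulse s of length T samples, the
# filter h[n] = s[T-1-n] produces a noiseless output (the pulse
# autocorrelation) that peaks at the last sample of the pulse, where
# it equals the pulse energy -- the max-SNR sampling instant.
s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # hypothetical pulse
h = s[::-1]                               # matched filter: time-reversed pulse
y = np.convolve(s, h)                     # noiseless filter output
peak_index = int(np.argmax(y))            # = len(s) - 1
energy = float(np.sum(s**2))              # peak value
```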
Second hour: back to M-ary hypothesis testing with time-continuous signals: receiver structure. With white noise, irrelevance of noise components outside signal basis. Optimal MAP receiver in AWGN. Basis detector. Signal detector.
CLASS 16:
Examples of MAP RX and evaluation of symbol error probability Pe.
First hour: MAP RX for QPSK signals and its Pe.
Second hour: MAP RX for generic binary signals, basis detector, reduced complexity signal detector. Evaluation of Pe.
CLASS 17:
First hour: Techniques to evaluate Pe: rotational invariance in AWGN and signal image shifts. Center of gravity for minimum energy.
Second hour: Pe evaluation for binary signaling. Comparisons between antipodal and orthogonal signals. Calculation of Pe for 16-QAM (begin).
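The antipodal/orthogonal comparison can be sketched with the Q-function (hypothetical operating point Eb/N0 = 9.6 dB); antipodal signaling enjoys a 3 dB advantage:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

# Binary signaling error probabilities:
#   antipodal:  Pe = Q(sqrt(2 * Eb/N0))
#   orthogonal: Pe = Q(sqrt(Eb/N0))
ebn0 = 10 ** (9.6 / 10)                 # hypothetical Eb/N0 in linear units
pe_antipodal = Q(sqrt(2 * ebn0))
pe_orthogonal = Q(sqrt(ebn0))
```

Orthogonal signaling needs twice the energy (3 dB more) to match the antipodal error probability.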
CLASS 18:
First hour: Calculation of Pe for 16-QAM (end).
Second hour: Calculation of Pe for M-ary orthogonal signaling. Begin calculation of Bit error rate (BER).
CLASS 19:
Completion of BER evaluation in M-ary orthogonal signaling. Example: M-FSK. Occupied bandwidth. Limit as M->infinity and connection with Shannon channel capacity. Notes on Simplex constellation. BER evaluation for QPSK: natural vs. Gray mapping.
CLASS 20:
Further notes on Gray mapping. Approximate BER calculation: union upper bound, minimum distance bound, nearest-neighbor bound. Lower bounds. Example: M-PSK.
Review of Cartesian (X,Y)-to-polar (R,Q) probability transformation. For zero-mean normal (X,Y), (R,Q) are independent with Rayleigh and uniform marginals.
CLASS 21:
For non-zero-mean normal (X,Y), (R,Q) are dependent, with Rice and Bennet marginals. Properties of Rayleigh, Rice, Bennet PDFs. Use of Bennet PDF in the exact evaluation of Pe in M-PSK.
Composite hypothesis testing: introduction. Bayesian approach: Example of partially known signals in AWGN.
CLASS 22:
Partially known signals in AWGN: Bayesian MAP decision rule. Application to incoherent reception of passband signals. Optimal incoherent MAP receiver structure.
CLASS 23:
Alternative more compact derivation of incoherent MAP receiver for passband signals using complex envelopes. Incoherent OOK receiver and its BER evaluation.
CLASS 24:
Detection in additive colored Gaussian noise. Karhunen-Loève formulation. Hints about the analog whitening filter. Reversibility theorem and whitening of the discretized signal sample.
Example 1: whitening by a unitary transformation that aligns the orthonormal eigenvectors of the noise covariance matrix with the canonical basis. Example 2: Cholesky decomposition of the covariance matrix and noise whitening. Example of calculation of the Cholesky decomposition.
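A minimal sketch of Example 2 (hypothetical 2x2 covariance): with K = L L^T from the Cholesky decomposition, W = L^{-1} is a whitening transformation:

```python
import numpy as np

# Cholesky whitening sketch: if the noise covariance is K = L L^T,
# then W = inv(L) whitens, since Cov(W n) = inv(L) K inv(L)^T = I.
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # hypothetical noise covariance
L = np.linalg.cholesky(K)        # lower-triangular Cholesky factor
W = np.linalg.inv(L)             # whitening transformation
K_white = W @ K @ W.T            # covariance after whitening: identity
```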
CLASS 25:
Exercise: whitening and Pe evaluation for sampled signals in colored Gaussian noise.
Detection with stochastic signals: the case of Gaussian signals. Binary hypothesis testing: Radiometer. BER evaluation.
CLASS 26:
Estimation theory: introduction. Classical (Fisherian) estimation. MSE cost. The bias-variance tradeoff. Example and motivation for unbiased estimators.
CLASS 27:
Asymptotically unbiased and consistent estimators. MVUE. Cramér-Rao Lower Bound (CRLB): motivation, theorem statement, example: signals in AWGN (both discrete and continuous-time). Amplitude estimation.
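A Monte Carlo sketch of the amplitude-estimation example (hypothetical signal and noise values): the correlator estimator is unbiased and its variance matches the CRLB sigma^2 / sum(s^2):

```python
import numpy as np

# Amplitude estimation sketch: y[k] = A*s[k] + w[k], w ~ N(0, sigma^2) i.i.d.
# CRLB = sigma^2 / sum(s^2), attained by the efficient correlator estimator
# A_hat = sum(y*s) / sum(s^2).
rng = np.random.default_rng(3)
s = np.array([1.0, -1.0, 2.0, 0.5])        # hypothetical known signal
A, sigma = 1.5, 0.8
crlb = sigma**2 / np.sum(s**2)

trials = 20_000
y = A * s + sigma * rng.normal(size=(trials, len(s)))
A_hat = y @ s / np.sum(s**2)               # correlator estimate per trial
emp_var = A_hat.var()                      # empirical variance ~ CRLB
```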
CLASS 28:
Phase estimation. Proof of the CRLB. Extension of the CRLB to vector parameters: theorem statement and examples. ML estimation: introduction. If an efficient estimator exists, it is the ML estimator.
CLASS 29:
ML: asymptotic properties and invariance. Examples: 1) Gaussian observations with unknown (constant) mean and variance. 2) Linear Gaussian model and comparison with the least-squares solution. 3) Phase estimation of passband signals (begin).
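Example 1 in closed form (hypothetical data): the ML estimates are the sample mean and the biased sample variance (divide by n, not n - 1):

```python
import numpy as np

# ML sketch for i.i.d. Gaussian observations with unknown mean and variance:
#   mu_ML  = sample mean
#   var_ML = (1/n) * sum (x - mu_ML)^2   (biased; the unbiased
#                                         estimator divides by n-1)
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # hypothetical data
n = len(x)
mu_ml = x.mean()
var_ml = np.mean((x - mu_ml) ** 2)      # same as x.var(ddof=0)
var_unbiased = var_ml * n / (n - 1)     # same as x.var(ddof=1)
```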
CLASS 30:
ML: Phase estimation of passband signals (end).
Bayesian Estimation: 1) MMSE estimator and minimum error. Orthogonality principle. Unbiasedness. Note on regression curve. Gaussian example. Exercise: both observations and parameter are negative exponentials.
CLASS 31:
Bayesian estimation: MAP estimator. Example. ML criterion as a particular MAP case. Example: linear Gaussian model (homework, with solution). Extension to vector parameters. Gaussian multivariate regression.
MMSE linear Bayesian estimates. Optimal filter coefficients through orthogonality principle. Yule-Walker equations. LMMSE optimal estimator and minimal MSE.
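A scalar LMMSE sketch with hypothetical second-order statistics: solving the normal (Yule-Walker-type) equations R_yy a = r_yx gives the optimal coefficients, and the orthogonality principle can be checked directly:

```python
import numpy as np

# LMMSE sketch: estimate x from observation vector y as x_hat = a^T y.
# Orthogonality principle => normal equations R_yy a = r_yx,
# minimal MSE = var_x - r_yx^T a. All numbers are hypothetical.
R_yy = np.array([[1.0, 0.5],
                 [0.5, 1.0]])      # observation covariance E[y y^T]
r_yx = np.array([0.6, 0.3])       # cross-covariance E[y x]
var_x = 1.0

a = np.linalg.solve(R_yy, r_yx)   # optimal filter coefficients
mmse = var_x - r_yx @ a           # minimum mean-square error
```

The error x - a^T y is uncorrelated with the observations: r_yx - R_yy a = 0, which is exactly the orthogonality condition.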
CLASS 32:
Review of optimal scalar LMMSE estimator and minimum MSE. Extension to vector estimator.
Wiener Filter: problem statement, objectives.
A) Smoothing, optimal non-causal filter, MMSE error, case of additive noise channel. Alternative evaluation of MMSE with error filter.
CLASS 33:
B) Causal Wiener filter: problem setting in 2 steps: whitening and innovations estimation.
Whitening: 1) review of two-sided Z-transform and its ROC. 2) review: Z-transform of PSD of the output of a linear system. 3) statement of Spectral Factorization (SF) theorem.
CLASS 34:
SF theorem: key steps of the proof. Calculation of the innovations filter L(z) for real processes through the SF. Classification of regular processes with L(z) a rational fraction: AR, MA, ARMA processes. Example: AR(1).
CLASS 35:
Wiener causal filter, formula in z. Example.
r-step predictor: form of filter in z. Error Formula.
CLASS 36:
Predictor: example: prediction of AR(p) processes.
r-step filtering and prediction: formula in z. General error formula for additive noise channels. Example.
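An r-step prediction sketch for AR(1) (hypothetical parameters): the optimal predictor is a^r x[n], with error variance sw2 (1 - a^(2r)) / (1 - a^2):

```python
import numpy as np

# r-step prediction of AR(1): x[n] = a*x[n-1] + w[n], w white, Var(w) = sw2.
# Optimal predictor: x_hat[n+r] = a**r * x[n];
# error variance: sw2 * (1 - a**(2r)) / (1 - a**2).
rng = np.random.default_rng(4)
a, sw2, r = 0.9, 1.0, 3           # hypothetical parameters
N = 200_000

w = np.sqrt(sw2) * rng.normal(size=N)
x = np.zeros(N)
for n in range(1, N):             # simulate the AR(1) path
    x[n] = a * x[n - 1] + w[n]

pred = a**r * x[:-r]              # r-step predictions
err_var = np.var(x[r:] - pred)    # empirical prediction-error variance
theory = sw2 * (1 - a**(2 * r)) / (1 - a**2)
```

The prediction error is w[n+r] + a w[n+r-1] + ... + a^(r-1) w[n+1], whose variance is the stated formula; the simulation confirms it.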