POPULATION MONTE CARLO SAMPLING FOR HIGH DIMENSIONAL PROBLEMS
The Graduate School, Stony Brook University, Stony Brook, NY.
Many real-world data analysis problems involve estimating unknowns and approximating their posterior distributions when only partial, inaccurate, or noisy observations are available. In most cases, the system models are nonlinear and/or non-Gaussian and high dimensional, so the posterior distributions cannot be obtained analytically. Over the last few decades, many approximation schemes have been proposed to address this problem. One group of tools favored in both theory and practice is the family of Monte Carlo (MC) methods. Population Monte Carlo (PMC) is a member of the MC family designed for batch processing of data. PMC algorithms iterate on a set of samples and weights. The proposal distributions are updated at each iteration by learning from how well the previous proposal distributions matched the target distribution. The target distribution is often the posterior distribution of a set of unknowns of interest given the observed data and the employed model. The estimation quality and convergence efficiency depend on many factors, including the number and "quality" of the generated samples. In problems with a high dimensional state space, implementing PMC is very challenging because of the large number of samples required.

This thesis focuses on researching and advancing the PMC methodology toward adequate and robust performance in high dimensional systems. In some of these problems, a subset of the unknown parameters is conditionally linear given the remaining parameters. For those cases, marginalized PMC (MPMC) is proposed to lower the computational cost by generating samples of only the nonlinear parameters and marginalizing out the remaining linear parameters. This approach is based on the well-known Rao-Blackwell theorem. The computational efficiency of the PMC method can be further improved by the use of a distributed structure.
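The PMC iteration described above (draw samples from a population of proposals, weight them against the target, and adapt the proposals from the weighted samples) can be sketched in a few lines. The one-dimensional Gaussian-mixture target, the Gaussian proposal population, and all parameter values below are illustrative assumptions, not the models used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Unnormalized toy "posterior": a two-component Gaussian mixture (assumed).
    return 0.6 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.4 * np.exp(-0.5 * (x - 3.0) ** 2)

n_prop, n_per, sigma = 5, 200, 1.0       # population of Gaussian proposals
means = rng.uniform(-5.0, 5.0, n_prop)   # initial proposal locations

for _ in range(20):
    # 1) Draw a batch of samples from the current proposal population.
    x = rng.normal(np.repeat(means, n_per), sigma)
    # 2) Importance weights: target over the mixture of proposals.
    q = np.mean([np.exp(-0.5 * ((x - m) / sigma) ** 2) for m in means], axis=0)
    w = target(x) / q
    w /= w.sum()
    # 3) Adapt: resample new proposal locations in proportion to the weights.
    means = rng.choice(x, size=n_prop, p=w)

estimate = np.sum(w * x)   # self-normalized estimate of the posterior mean
```

The key point is step 3: proposals that produced highly weighted samples are more likely to be kept, so the population learns from its past performance, exactly the adaptation loop of PMC.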
To that end, we propose a novel method referred to as multiple PMC, where the state space of interest is partitioned into several lower-dimensional subspaces that are handled by a set of parallel PMC estimators. Each PMC estimator updates the weights of its samples and, if necessary, its importance functions using information from the other PMC estimators. As with every method based on importance sampling, the crucial factor for good PMC performance is the choice of the functions that generate the particles. We also propose an alternative method in which the generating functions are alternating conditionals, thereby mimicking the idea behind Gibbs sampling. With this approach, one can generate particles in high dimensions more efficiently. We test and demonstrate the proposed approaches on the classical problem of estimating the frequencies of multiple sinusoids. The simulation results show the accuracy of the estimates and the feasibility of the methods. The performance of the proposed methods is compared to that of conventional approaches. Computational complexity and convergence properties are also studied.
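The sinusoid problem also illustrates why marginalization pays off: given the frequencies, the amplitudes enter the model linearly and can be eliminated in closed form, so only the low-dimensional frequency vector needs to be sampled. The sketch below is a simplified profile-likelihood stand-in for that idea, not the thesis's actual Rao-Blackwellized derivation; the two-sinusoid data, the noise level, and the helper names `design` and `profile_loglik` are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed synthetic data: two sinusoids in white Gaussian noise.
T = 64
t = np.arange(T)
f_true = np.array([0.11, 0.27])           # normalized frequencies (cycles/sample)

def design(f):
    # Columns: cos and sin at each candidate frequency; the observation
    # model y = design(f) @ a + noise is linear in the amplitudes a.
    return np.column_stack(
        [np.cos(2 * np.pi * fk * t) for fk in f]
        + [np.sin(2 * np.pi * fk * t) for fk in f]
    )

a_true = np.array([1.0, 0.8, 0.5, -0.3])  # linear amplitude coefficients
y = design(f_true) @ a_true + 0.1 * rng.standard_normal(T)

def profile_loglik(f, y, noise_std=0.1):
    # Eliminate the conditionally linear amplitudes by least squares,
    # so a frequency sample is scored without ever sampling amplitudes.
    Hf = design(f)
    a_hat, *_ = np.linalg.lstsq(Hf, y, rcond=None)
    r = y - Hf @ a_hat
    return -0.5 * (r @ r) / noise_std ** 2
```

A PMC sampler over the frequencies alone would call `profile_loglik` when weighting each particle; frequencies near the truth fit the data far better than mismatched ones, and the amplitude dimensions never inflate the sampled state space.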