Computationally Efficient Methods for Shift-variant Image Restoration in Two and Three Dimensions
Sastry, Shekhar Bangalore
Shift-variant image restoration, or image deblurring, is useful in many applications, including machine vision, image processing, 3D microscopy, and medical image analysis. Several shift-variant restoration approaches currently exist; however, they are either computationally expensive or inaccurate, leading to poor image quality. This thesis proposes and investigates computationally efficient techniques that produce high-quality restorations, even in the presence of noise. The methods presented here are general: their computational efficiency does not depend on restricting the blurring kernel to special forms. Detailed analysis and computational algorithms for implementing the methods are provided. This thesis addresses blurring in linear shift-variant imaging systems in both two and three dimensions. Image restoration in such systems corresponds to solving the Fredholm integral equation of the first kind. In the two-dimensional case, computational efficiency is achieved through localization; in the three-dimensional case, a new domain transformation is applied to achieve computational efficiency. These results are presented in two parts. In the first part, three image restoration algorithms are discussed. The first algorithm is a localized approach to restoring highly defocused images. It is based on an existing method called the single-interval RT (SRT) method. The SRT method is found to restore only small to medium levels of blur; it is extended here to restore images blurred with large shift-variant point spread functions (PSFs). The new method is called the multi-interval RT (MRT) method. In the MRT technique, the region around a pixel, with size comparable to the support of the blurring kernel, is divided into several smaller regions (intervals). The blurred image in each interval is modeled separately by a truncated Taylor-series polynomial. A linear system is derived by differentiating the polynomial with respect to the spatial variables.
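The shift-variant blur model behind the Fredholm integral equation of the first kind can be illustrated with a small discretization. The sketch below is a minimal 1D example, assuming a hypothetical Gaussian kernel whose width grows linearly with position; it is not a PSF model taken from the thesis, only a demonstration of what shift-variance means for the system matrix.

```python
import numpy as np

# 1D discretization of the Fredholm integral equation of the first kind,
# g(x) = integral of k(x, s) f(s) ds, which models shift-variant blurring.
# The linearly growing Gaussian width sigma(s) is a hypothetical choice
# used only to make the kernel shift-variant.

def shift_variant_blur_matrix(n, sigma_min=0.5, sigma_max=3.0):
    """Build an n x n blur matrix whose Gaussian width varies with position."""
    s = np.arange(n)[None, :]          # source positions (columns)
    x = np.arange(n)[:, None]          # observation positions (rows)
    sigma = sigma_min + (sigma_max - sigma_min) * s / (n - 1)
    K = np.exp(-((x - s) ** 2) / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)   # normalize rows to sum to 1

n = 64
K = shift_variant_blur_matrix(n)
f = np.zeros(n); f[16] = 1.0; f[48] = 1.0    # two point sources
g = K @ f                                    # blurred observation g = K f
# The source at s = 48 spreads wider than the one at s = 16, so its peak
# in g is lower: the blur depends on position, i.e. it is shift-variant.
```

Because each column of `K` has a different shape, no single convolution kernel reproduces `K @ f`, which is why FFT-based convolution cannot be applied directly to such systems.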
A vector of blurred image derivatives is then expressed as the sum of such linear systems. An iterative update formula is obtained and evaluated to improve the estimate of the focused image. Experimental results for the MRT technique are presented in 1D on analytic functions and in 2D on both simulated data and real images. The results show that the MRT technique is effective for restoring highly defocused images, at a modest increase in computational cost compared to SRT. The next two restoration algorithms are iterative versions of SRT. One of them is the RT Iterative (RTI) method. In the RTI method, the forward RT equation of SRT, which expresses the blurred image as a weighted sum of the focused image and its derivatives, is rearranged to form an update equation. The RTI update equation is found to converge rapidly to a solution. The other method is a modification of the gradient-based Landweber iteration and is called the RT-based Landweber (RTLW) algorithm. The RTLW algorithm has a step-size parameter and hence provides more control over convergence to the solution. Both the RTI and RTLW methods are analyzed for computational complexity. For deblurring defocus aberration, both methods have O(N log N) complexity per iteration. Both methods are compared with the Landweber algorithm and Tikhonov regularization (using the SVD) in terms of computation time, accuracy, robustness to noise, and quality of the restored images. Analyzing the localized methods also yields a new insight into the ill-conditioned nature of the image restoration problem. The second part of this thesis focuses on a new theorem called the Generalized Convolution Theorem (GCT). GCT provides the conditions under which a linear shift-variant system can be transformed into a linear shift-invariant system.
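The classical Landweber iteration, which RTLW modifies and which serves as the comparison baseline above, can be sketched in a few lines. This is a minimal example on synthetic data; the matrix `K`, the data `g`, and the step size `tau` are stand-ins chosen for illustration, not quantities from the thesis.

```python
import numpy as np

# Minimal sketch of the classical Landweber iteration,
#     f_{k+1} = f_k + tau * K^T (g - K f_k),
# the gradient-descent baseline that the RTLW method modifies and that
# both RTI and RTLW are compared against.

def landweber(K, g, tau, iters):
    f = np.zeros(K.shape[1])
    for _ in range(iters):
        f = f + tau * K.T @ (g - K @ f)   # gradient step on ||K f - g||^2 / 2
    return f

rng = np.random.default_rng(0)
K = np.eye(8) + 0.05 * rng.standard_normal((8, 8))  # mildly perturbed identity
f_true = rng.standard_normal(8)
g = K @ f_true                                      # noiseless blurred data
# Convergence requires 0 < tau < 2 / ||K||_2^2 (squared spectral norm).
tau = 1.0 / np.linalg.norm(K, 2) ** 2
f_hat = landweber(K, g, tau, 500)                   # converges toward f_true
```

The step-size parameter `tau` is what gives Landweber-type schemes, including RTLW, their extra control over convergence: smaller steps converge more slowly but more stably on ill-conditioned systems.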
The motivation for such a transformation is the computational advantage of implementing shift-invariant systems, and shift-invariant deblurring, using the Fast Fourier Transform (FFT). In the transformed domain, the shift-invariant equivalent of a shift-variant system can be deblurred in O(N log N) time, and the transformations themselves are inexpensive to implement; hence, shift-variant restoration becomes computationally efficient. GCT is stated and proved in one dimension (1D), and the 1D GCT is applied to a hypothetical imaging system for verification. A proof of the multi-dimensional version of GCT is also provided. Next, applications of GCT to 3D imaging with digital cameras and microscopes are considered. A blurred 3D image sequence is modeled as the result of shift-variant filtering with a 3D PSF. The 3D shift-variant kernel under geometric optics is found to satisfy the conditions required by GCT for domain transformation; therefore, GCT is applied to 3D deconvolution microscopy. Specifically, GCT is useful in reducing the computational requirements of shift-variant, or depth-dependent, deconvolution techniques. Simulation experiments in 3D compare GCT with the shift-invariance (SI) approximation and the piecewise-constant shift-invariance (PCSI) approximation. GCT is demonstrated to provide better results, both qualitatively and quantitatively, than the SI and PCSI approximations. Moreover, GCT is found to mitigate some of the artifacts common in deconvolution microscopy. Shape recovery using GCT is also briefly investigated.
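The computational payoff that GCT targets is that, once a system is shift-invariant, deblurring reduces to FFT-domain filtering in O(N log N). The sketch below shows this step in 1D using a Wiener-style regularized inverse filter; the Gaussian PSF and the `eps` stabilizer are standard illustrative choices, not details of the deconvolution schemes used in the thesis.

```python
import numpy as np

# Once GCT maps a shift-variant system to a shift-invariant one, deblurring
# reduces to filtering in the Fourier domain at O(N log N) cost. The eps
# term is a standard stabilizer for the ill-conditioned inversion.

def fft_deconvolve(g, psf, eps=1e-3):
    """Deblur g, assumed to be the circular convolution of f with psf."""
    G = np.fft.fft(g)
    H = np.fft.fft(psf)
    # Regularized inverse filter: conj(H) / (|H|^2 + eps) instead of 1 / H.
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(F))

n = 128
f = np.zeros(n); f[40:60] = 1.0                     # simple box "scene"
x = np.arange(n)
d = np.minimum(x, n - x)                            # circular distance from 0
psf = np.exp(-0.5 * (d / 2.0) ** 2)                 # Gaussian PSF, sigma = 2
psf /= psf.sum()
g = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(psf)))  # circular blur
f_hat = fft_deconvolve(g, psf)                      # deblurred estimate
```

Direct shift-variant restoration would instead require a dense matrix solve; the advantage of a GCT-style domain transformation is precisely that it trades that cost for the FFT-based filtering above plus inexpensive coordinate transformations.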