ICCV 2013

Deblurring by Example using Dense Correspondence

Yoav HaCohen Eli Shechtman Dani Lischinski
Hebrew University Adobe Research Hebrew University
This paper presents a new method for deblurring photos using a sharp reference example that shares some content with the blurry photo. Most previous deblurring methods that exploit information from other photos require an accurately registered photo of the same static scene. In contrast, our method aims to exploit reference images whose shared content may have undergone substantial photometric and non-rigid geometric transformations, as these are the kind of reference images most likely to be found in personal photo albums. Our approach builds upon a recent method for example-based deblurring using non-rigid dense correspondence (NRDC) [HaCohen et al. 2011] and extends it in two ways. First, we exploit information from the reference image not only for blur kernel estimation, but also as a powerful local prior for the non-blind deconvolution step. Second, we introduce a simple yet robust technique for estimating spatially varying blur, rather than assuming spatially uniform blur. Unlike that previous method, which has proven successful only in simple deblurring scenarios, ours succeeds on a variety of real-world examples. We provide a quantitative and qualitative evaluation of our method and show that it outperforms the state of the art.
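To make the "non-blind deconvolution" step in the abstract concrete, here is a minimal sketch in pure NumPy. This is not the paper's reference-guided local prior; it is a plain Wiener filter, assuming a known, spatially uniform kernel and circular boundary conditions, and it only illustrates what "non-blind" means: the kernel is given and only the sharp image is recovered. The function name and the noise parameter `snr` are our own illustrative choices.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e-4):
    """Non-blind deconvolution with a known, spatially uniform kernel.

    Plain Wiener filtering (illustrative only): the paper instead uses a
    local prior built from the reference image during this step.
    """
    kh, kw = kernel.shape
    psf = np.zeros_like(blurred, dtype=float)
    psf[:kh, :kw] = kernel / kernel.sum()
    # Shift the kernel's center to the origin so deconvolution does not
    # translate the image.
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Wiener filter: regularized inverse filter; snr plays the role of an
    # assumed noise-to-signal level.
    F = np.conj(H) * G / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(F))

# Tiny round trip: blur a test image with the same circular model, then recover it.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
k = np.array([[0.0, 0.1, 0.0],
              [0.1, 0.6, 0.1],
              [0.0, 0.1, 0.0]])
psf = np.zeros((32, 32)); psf[:3, :3] = k
psf = np.roll(psf, (-1, -1), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, k)
print(np.abs(restored - img).max() < 0.05)  # near-exact recovery
```

With a well-conditioned kernel, the regularized inverse filter recovers the image almost exactly; with real motion kernels (which often have near-zero frequencies), the choice of prior matters far more, which is where the reference image comes in.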

Below are the datasets, the results of our method, and comparisons to the other methods presented in the paper.
Instructions: left-click an image to view it at full resolution.
The extracted blur kernel is shown on top of each image.
For a thorough comparison, please open the images in separate tabs and switch between them.


Real Images (acquired with motion blur)

Avisar
Images shown: Blurry, Whyte et al. 2011, Krishnan et al. 2011, Xu and Jia 2010, Levin et al. 2011, Sun et al. 2013, Ours, Reference, Reconstruction, Ours (Uniform Kernel), Ours (No Local Prior)

Yemin Moshe
Images shown: Blurry, Whyte et al. 2011, Krishnan et al. 2011, Xu and Jia 2010, Levin et al. 2011, Sun et al. 2013, Ours, Reference, Reconstruction, Ours (Uniform Kernel), Ours (No Local Prior)

Flowers
Images shown: Blurry, Whyte et al. 2011, Krishnan et al. 2011, Levin et al. 2011, Sun et al. 2013, Ours, Reference, Reconstruction, Ours (Uniform Kernel), Ours (No Local Prior)

Numbers
Images shown: Blurry, Whyte et al. 2011, Krishnan et al. 2011, Xu and Jia 2010, Levin et al. 2011, Sun et al. 2013, Ours, Reference, Reconstruction, Ours (Uniform Kernel), Ours (No Local Prior)

Children
Images shown: Blurry, Whyte et al. 2011, Krishnan et al. 2011, Levin et al. 2011, Sun et al. 2013, Ours, Reference, Reconstruction, Ours (Uniform Kernel), Ours (No Local Prior)


Our Synthetic Dataset

We convolved each of the following five images with each of the kernels acquired from real camera motion by [Levin et al. 2009], shown below.
The second row shows the example images used as references by our method.

Kernels acquired from real camera motion by [Levin et al. 2009], used to generate the blurred images in our dataset.
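The dataset construction described above (convolving each sharp image with a recorded motion kernel) can be sketched as follows. This is a hypothetical helper, not the exact pipeline used for the dataset: it uses circular (FFT) convolution in pure NumPy, while a real pipeline would also add sensor noise and handle image boundaries more carefully.

```python
import numpy as np

def blur_with_kernel(image, kernel):
    """Synthesize a blurry image by circular convolution with a motion kernel.

    Illustrative sketch only: each sharp image is convolved with a kernel
    recorded from real camera motion to produce a synthetic blurry input.
    """
    kh, kw = kernel.shape
    psf = np.zeros_like(image, dtype=float)
    psf[:kh, :kw] = kernel / kernel.sum()  # normalize so brightness is preserved
    # Shift the kernel's center to the origin to avoid translating the image.
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
kernel = rng.random((7, 7))  # stand-in for a recorded camera-motion kernel
blurry = blur_with_kernel(sharp, kernel)
print(abs(blurry.mean() - sharp.mean()) < 1e-9)  # normalized blur preserves mean brightness
```

Because the kernel is normalized to sum to one, the synthetic blur preserves overall image brightness, which keeps the blurry/sharp pairs photometrically comparable.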


Results from Synthetic Tests

A few results from our synthetic tests.
(Please see Figure 3 in the paper for a quantitative comparison over the full dataset.)

For each of the five test images, the results are shown in the order: Blurry, Cho and Lee 2009, Krishnan et al. 2011, Levin et al. 2011, Xu and Jia 2010, Ours.