ABSTRACT
Total variation (TV) regularization has important applications in signal processing, including image denoising, image deblurring, and image reconstruction. A significant challenge in the practical use of TV regularization is that the underlying optimization problem is convex but nondifferentiable, which makes it difficult to solve, especially at large scale. In this paper, we propose an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve total variation regularization problems. The proposed algorithm is applicable to tensors and can therefore solve multidimensional total variation regularization problems. One appealing feature of the proposed algorithm is that it does not need to solve a linear system of equations, which is often the most expensive part of previous ADMM-based methods. In addition, each step of the proposed algorithm decomposes into a set of independent, smaller subproblems that can be solved in parallel, so the algorithm scales well to large problems. Furthermore, the global convergence of the proposed algorithm is guaranteed, and its time complexity is O(dN/ε) on a d-mode tensor with N entries for achieving an ε-optimal solution. Extensive experimental results demonstrate the superior performance of the proposed algorithm in comparison with current state-of-the-art methods.
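To make the setting concrete, the following is a minimal sketch of a standard (textbook-style) ADMM scheme for one-dimensional anisotropic TV denoising, min_x ½‖x − b‖² + λ‖Dx‖₁ with D the forward-difference operator. This is not the paper's tensor algorithm: note in particular that the x-update below solves a small linear system, which is exactly the cost the proposed method is designed to avoid. All names here (`tv_denoise_admm`, the penalty `rho`, iteration count) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(b, lam, rho=1.0, n_iter=200):
    """Generic ADMM for min_x 0.5*||x - b||^2 + lam*||D x||_1 (1D anisotropic TV).

    Splits z = D x and alternates: an x-update (a linear solve here,
    unlike the paper's method), a z-update (soft-thresholding), and a
    dual update on the scaled multiplier u.
    """
    n = len(b)
    # Forward-difference operator D of shape (n-1, n): (D x)_i = x_{i+1} - x_i.
    D = np.diff(np.eye(n), axis=0)
    A = np.eye(n) + rho * (D.T @ D)  # fixed x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))  # x-update
        z = soft_threshold(D @ x + u, lam / rho)         # z-update
        u = u + D @ x - z                                # dual update
    return x
```

On a noisy piecewise-constant signal, the output has markedly lower total variation than the input while staying close to it; the paper's contribution is, in essence, replacing the `np.linalg.solve` step with closed-form parallel subproblems along each tensor mode.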
An efficient ADMM algorithm for multidimensional anisotropic total variation regularization problems