PyUNLocBoX: Optimization by Proximal Splitting

The PyUNLocBoX is a Python package which uses proximal splitting methods to solve non-differentiable convex optimization problems. It is free software, distributed under the BSD license, and available on PyPI. The documentation is available on Read the Docs and development takes place on GitHub. (A Matlab counterpart exists.)

The package is designed to be easy to use while allowing advanced tasks. It is not meant to be a black-box optimization tool: you'll have to carefully design your solver. In exchange, you get full control of what the package does for you, without the pain of rewriting the proximity operators and the solvers, and with the added benefit of tested algorithms. With this package, you can focus on your problem and the best way to solve it rather than on the details of the algorithms. It comes with the following solvers:

  • Gradient descent

  • Forward-backward splitting algorithm (FISTA, ISTA)

  • Douglas-Rachford splitting algorithm

  • Generalized forward-backward

  • Monotone+Lipschitz forward-backward-forward primal-dual algorithm

  • Projection-based primal-dual algorithm

Moreover, the following acceleration schemes are included:

  • FISTA acceleration scheme

  • Backtracking based on a quadratic approximation of the objective

  • Regularized nonlinear acceleration (RNA)

To compose your objective, you can either define your own functions (which should implement an evaluation method and a gradient or proximity method; a minimal example follows the list) or use one of the following:

  • L1-norm

  • L2-norm

  • TV-norm

  • Nuclear-norm

  • Projection on the L2-ball
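
For instance, a custom function implementing \(f(x) = \|x - 1\|_2^2\) can be assembled by hand. This is a minimal sketch following the same pattern as the tutorials below, using the private _eval and _grad hooks:

>>> import numpy as np
>>> from pyunlocbox import functions
>>> f = functions.func()
>>> f._eval = lambda x: np.sum((np.asarray(x) - 1)**2)  # Evaluation method.
>>> f._grad = lambda x: 2 * (np.asarray(x) - 1)         # Gradient method.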

The following is a typical usage example that solves an optimization problem composed of the sum of two convex functions. The function and solver objects are first instantiated with the desired parameters. The problem is then solved by a call to the solving function.

>>> from pyunlocbox import functions, solvers
>>> f1 = functions.norm_l2(y=[4, 5, 6, 7])
>>> f2 = functions.dummy()
>>> solver = solvers.forward_backward()
>>> ret = solvers.solve([f1, f2], [0., 0, 0, 0], solver, atol=1e-5)
Solution found after 9 iterations:
    objective function f(sol) = 6.714385e-08
    stopping criterion: ATOL
>>> ret['sol']
array([ 3.99990766,  4.99988458,  5.99986149,  6.99983841])

You can try it online, look at the tutorials to learn how to use it, or look at the reference guide for an exhaustive documentation of the API. Enjoy the package!

Installation

The PyUNLocBoX is available on PyPI:

$ pip install pyunlocbox

Contributing

See the guidelines for contributing in CONTRIBUTING.rst.

Acknowledgments

The PyUNLocBoX was started in 2014 as an academic open-source project for research purpose at the EPFL LTS2 laboratory.

Tutorials

The following tutorials show and explain how to use the toolbox to solve some real problems. They go in increasing order of difficulty. If you have never used the toolbox before, you are encouraged to follow them in order, as they build upon one another.

Simple least squares problem

This simplistic example is only meant to demonstrate the basic workflow of the toolbox. Here we want to solve a least squares problem, i.e. we want the solution to converge to the original signal without any constraint. Let's define this signal:

>>> y = [4, 5, 6, 7]

The first function to minimize is the sum of squared distances between the current signal x and the original y. For this purpose, we instantiate an L2-norm object:

>>> from pyunlocbox import functions
>>> f1 = functions.norm_l2(y=y)

This standard function object provides the eval(), grad() and prox() methods that will be useful to the solver. We can evaluate them at any given point:

>>> f1.eval([0, 0, 0, 0])
126
>>> f1.grad([0, 0, 0, 0])
array([ -8, -10, -12, -14])
>>> f1.prox([0, 0, 0, 0], 1)
array([ 2.66666667,  3.33333333,  4.        ,  4.66666667])
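
The prox values can be verified by hand: for \(f_1(x) = \|x - y\|_2^2\), the proximal operator has the closed form

\[\operatorname{prox}_{T f_1}(x) = \operatorname{arg\,min}_z \frac{1}{2}\|z - x\|_2^2 + T\|z - y\|_2^2 = \frac{x + 2Ty}{1 + 2T},\]

so evaluating it at x = 0 with T = 1 returns 2y/3, which matches the array above.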

We need a second function to minimize, which usually describes a constraint. As we have no constraint, we just define a dummy function object by hand. We have to define the _eval() and _grad() methods, as the solver we will use requires them:

>>> f2 = functions.func()
>>> f2._eval = lambda x: 0
>>> f2._grad = lambda x: 0

Note

We could also have used the pyunlocbox.functions.dummy function object.

We can now instantiate the solver object:

>>> from pyunlocbox import solvers
>>> solver = solvers.forward_backward()

And finally solve the problem:

>>> x0 = [0., 0., 0., 0.]
>>> ret = solvers.solve([f2, f1], x0, solver, atol=1e-5, verbosity='HIGH')
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 1.260000e+02
INFO: Forward-backward method
Iteration 1 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 1.400000e+01
    objective = 1.40e+01
Iteration 2 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 2.963739e-01
    objective = 2.96e-01
Iteration 3 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 7.902529e-02
    objective = 7.90e-02
Iteration 4 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 5.752265e-02
    objective = 5.75e-02
Iteration 5 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 5.142032e-03
    objective = 5.14e-03
Iteration 6 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 1.553851e-04
    objective = 1.55e-04
Iteration 7 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 5.498523e-04
    objective = 5.50e-04
Iteration 8 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 1.091372e-04
    objective = 1.09e-04
Iteration 9 of forward_backward:
    func evaluation: 0.000000e+00
    norm_l2 evaluation: 6.714385e-08
    objective = 6.71e-08
Solution found after 9 iterations:
    objective function f(sol) = 6.714385e-08
    stopping criterion: ATOL

The solving function returns several values. One of them is the found solution:

>>> ret['sol']
array([ 3.99990766,  4.99988458,  5.99986149,  6.99983841])

Another one is the value returned by each function object at each iteration. As we passed two function objects (L2-norm and dummy), the objective is a 10-by-2 ndarray (the evaluation at x0 plus the 9 iterations, for each of the two functions). Let's plot a convergence graph out of it:

>>> import numpy as np
>>> objective = np.array(ret['objective'])
>>> import matplotlib.pyplot as plt
>>> _ = plt.figure()
>>> _ = plt.semilogy(objective[:, 1], 'x', label='L2-norm')
>>> _ = plt.semilogy(objective[:, 0], label='Dummy')
>>> _ = plt.semilogy(np.sum(objective, axis=1), label='Global objective')
>>> _ = plt.grid(True)
>>> _ = plt.title('Convergence')
>>> _ = plt.legend(numpoints=1)
>>> _ = plt.xlabel('Iteration number')
>>> _ = plt.ylabel('Objective function value')
[Figure: convergence of the objective functions]

The above graph shows an exponential convergence of the objective function. The global objective is obviously only composed of the L2-norm as the dummy function object was defined to always evaluate to 0 (f2._eval = lambda x: 0).

Compressed sensing using forward-backward

This tutorial presents a compressed sensing problem solved by the forward-backward splitting algorithm. The convex optimization problem is the sum of a data fidelity term and a regularization term which expresses a prior on the sparsity of the solution, given by

\[\min\limits_x \|Ax-y\|_2^2 + \tau \|x\|_1\]

where y are the measurements, A is the measurement matrix and \(\tau\) expresses the trade-off between the two terms.

The number of measurements m is chosen, given the signal size n and the sparsity level S, such that perfect reconstruction is very likely. See [CR07] for details.

>>> n = 5000
>>> S = 100
>>> import numpy as np
>>> m = int(np.ceil(S * np.log(n)))
>>> print('Number of measurements: {}'.format(m))
Number of measurements: 852
>>> print('Compression ratio: {:3.2f}'.format(float(n) / m))
Compression ratio: 5.87

We generate a random measurement matrix A:

>>> np.random.seed(1)  # Reproducible results.
>>> A = np.random.normal(size=(m, n))

Create the S sparse signal x:

>>> x = np.zeros(n)
>>> I = np.random.permutation(n)
>>> x[I[0:S]] = np.random.normal(size=S)
>>> x = x / np.linalg.norm(x)

Generate the measured signal y:

>>> y = np.dot(A, x)

The prior objective to minimize is defined by

\[f_1(x) = \tau \|x\|_1\]

which can be expressed by the toolbox L1-norm function object. It can be instantiated as follows, while setting the regularization parameter tau:

>>> from pyunlocbox import functions
>>> tau = 1.0
>>> f1 = functions.norm_l1(lambda_=tau)

The fidelity objective to minimize is defined by

\[f_2(x) = \|Ax-y\|_2^2\]

which can be expressed by the toolbox L2-norm function object. It can be instantiated as follows:

>>> f2 = functions.norm_l2(y=y, A=A)

or alternatively as follows:

>>> f3 = functions.norm_l2(y=y)
>>> f3.A = lambda x: np.dot(A, x)
>>> f3.At = lambda x: np.dot(A.T, x)

Note

In this case the forward and adjoint operators were passed as functions, not as matrices.

A third alternative would be to define the function object by hand:

>>> f4 = functions.func()
>>> f4._grad = lambda x: 2.0 * np.dot(A.T, np.dot(A, x) - y)
>>> f4._eval = lambda x: np.linalg.norm(np.dot(A, x) - y)**2

Note

The three alternatives to instantiate the function objects (f2, f3 and f4) are strictly equivalent and give the exact same results.
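
This can be verified numerically. The following sketch checks the equivalence at a deterministic test point (x_check is introduced here for illustration only):

>>> x_check = np.ones(n)
>>> np.allclose(f2.eval(x_check), f3.eval(x_check))
True
>>> np.allclose(f2.grad(x_check), f4._grad(x_check))
True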

Now that the two function objects to minimize (the L1-norm and the L2-norm) are instantiated, we can instantiate the solver object. The step size for optimal convergence is \(\frac{1}{\beta}\), where \(\beta\) is the Lipschitz constant of the gradient of f2 (equivalently f3 and f4), given by:

\[\beta = 2 \cdot \|A\|_{\text{op}}^2 = 2 \cdot \lambda_{\max}(A^* A).\]

To solve this problem, we use the Forward-Backward splitting algorithm which is instantiated as follows:

>>> step = 0.5 / np.linalg.norm(A, ord=2)**2
>>> from pyunlocbox import solvers
>>> solver = solvers.forward_backward(step=step)
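
As a sanity check (a small sketch reusing the names defined above), this step is exactly \(\frac{1}{\beta}\):

>>> beta = 2 * np.linalg.norm(A, ord=2)**2  # Lipschitz constant of the gradient of f2.
>>> abs(step * beta - 1) < 1e-12
True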

Note

A complete description of the constructor parameters and default values is given by the solver object pyunlocbox.solvers.forward_backward reference documentation.

After instantiating the function and solver objects and setting a starting point x0, the problem is solved by the toolbox solving function as follows:

>>> x0 = np.zeros(n)
>>> ret = solvers.solve([f1, f2], x0, solver, rtol=1e-4, maxit=300)
Solution found after 151 iterations:
    objective function f(sol) = 7.668167e+00
    stopping criterion: RTOL

Note

A complete description of the parameters, their default values and the returned values is given by the solving function pyunlocbox.solvers.solve() reference documentation.

Let’s display the results:

>>> import matplotlib.pyplot as plt
>>> _ = plt.figure()
>>> _ = plt.plot(x, 'o', label='Original')
>>> _ = plt.plot(ret['sol'], 'xr', label='Reconstructed')
>>> _ = plt.grid(True)
>>> _ = plt.title('Achieved reconstruction')
>>> _ = plt.legend(numpoints=1)
>>> _ = plt.xlabel('Signal dimension number')
>>> _ = plt.ylabel('Signal value')
[Figure: original and reconstructed signals]

The above figure shows a good reconstruction which is both sparse (thanks to the L1-norm objective) and close to the measurements (thanks to the L2-norm objective).

Let’s display the convergence of the two objective functions:

>>> objective = np.array(ret['objective'])
>>> _ = plt.figure()
>>> _ = plt.semilogy(objective[:, 0], label='L1-norm objective')
>>> _ = plt.semilogy(objective[:, 1], label='L2-norm objective')
>>> _ = plt.semilogy(np.sum(objective, axis=1), label='Global objective')
>>> _ = plt.grid(True)
>>> _ = plt.title('Convergence')
>>> _ = plt.legend()
>>> _ = plt.xlabel('Iteration number')
>>> _ = plt.ylabel('Objective function value')
[Figure: convergence of the objective functions]

Compressed sensing using Douglas-Rachford

This tutorial presents a compressed sensing problem solved by the Douglas-Rachford splitting algorithm. The convex optimization problem minimizes a term which expresses a prior on the sparsity of the solution, subject to a data fidelity constraint. It is given by

\[\min\limits_x \|x\|_1 \text{ s.t. } \|Ax-y\|_2 \leq \epsilon\]

where y are the measurements and A is the measurement matrix.

The number of measurements m is chosen, given the signal size n and the sparsity level S, such that perfect reconstruction is very likely. See [CR07] for details.

>>> n = 900
>>> S = 45
>>> import numpy as np
>>> m = int(np.ceil(S * np.log(n)))
>>> print('Number of measurements: {}'.format(m))
Number of measurements: 307
>>> print('Compression ratio: {:3.2f}'.format(float(n) / m))
Compression ratio: 2.93

We generate a random measurement matrix A:

>>> np.random.seed(1)  # Reproducible results.
>>> A = np.random.normal(size=(m, n))

Create the S sparse signal x:

>>> x = np.zeros(n)
>>> I = np.random.permutation(n)
>>> x[I[0:S]] = np.random.normal(size=S)
>>> x = x / np.linalg.norm(x)

Generate the measured signal y:

>>> y = np.dot(A, x)

The first objective function to minimize is defined by

\[f_1(x) = \|x\|_1\]

which can be expressed by the toolbox L1-norm function object. It can be instantiated as follows:

>>> from pyunlocbox import functions
>>> f1 = functions.norm_l1()

The second objective function to minimize is defined by

\[f_2(x) = \iota_C(x)\]

where \(\iota_C()\) is the indicator function of the set \(C = \left\{z \in \mathbb{R}^n \mid \|Az-y\|_2 \leq \epsilon \right\}\) which is zero if \(z\) is in the set and infinite otherwise. This function can be expressed by the toolbox L2-ball function object which can be instantiated as follows:

>>> f2 = functions.proj_b2(epsilon=1e-7, y=y, A=A, tight=False,
... nu=np.linalg.norm(A, ord=2)**2)
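
Note that the proximal operator of an indicator function is simply the orthogonal projection onto the corresponding set,

\[\operatorname{prox}_{\iota_C}(x) = \operatorname{arg\,min}_{z \in C} \|z - x\|_2 = P_C(x),\]

which is what the proj_b2 object computes.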

Now that the two function objects to minimize (the L1-norm and the L2-ball) are instantiated, we can instantiate the solver object. To solve this problem, we use the Douglas-Rachford splitting algorithm which is instantiated as follows:

>>> from pyunlocbox import solvers
>>> solver = solvers.douglas_rachford(step=1e-2)

After instantiating the function and solver objects and setting a starting point x0, the problem is solved by the toolbox solving function as follows:

>>> x0 = np.zeros(n)
>>> ret = solvers.solve([f1, f2], x0, solver, rtol=1e-4, maxit=300)
Solution found after 43 iterations:
    objective function f(sol) = 5.607407e+00
    stopping criterion: RTOL

Let’s display the results:

>>> import matplotlib.pyplot as plt
>>> _ = plt.figure()
>>> _ = plt.plot(x, 'o', label='Original')
>>> _ = plt.plot(ret['sol'], 'xr', label='Reconstructed')
>>> _ = plt.grid(True)
>>> _ = plt.title('Achieved reconstruction')
>>> _ = plt.legend(numpoints=1)
>>> _ = plt.xlabel('Signal dimension number')
>>> _ = plt.ylabel('Signal value')
[Figure: original and reconstructed signals]

The above figure shows a good reconstruction which is both sparse (thanks to the L1-norm objective) and close to the measurements (thanks to the L2-ball constraint).

Let’s display the convergence of the objective function:

>>> objective = np.array(ret['objective'])
>>> _ = plt.figure()
>>> _ = plt.semilogy(objective[:, 0], label='L1-norm objective')
>>> _ = plt.grid(True)
>>> _ = plt.title('Convergence')
>>> _ = plt.legend()
>>> _ = plt.xlabel('Iteration number')
>>> _ = plt.ylabel('Objective function value')
[Figure: convergence of the L1-norm objective]

Image reconstruction (Forward-Backward, Total Variation, L2-norm)

This tutorial presents an image reconstruction problem solved by the Forward-Backward splitting algorithm. The convex optimization problem is the sum of a data fidelity term and a regularization term which expresses a prior on the smoothness of the solution, given by

\[\min\limits_x \tau \|g(x)-y\|_2^2 + \|x\|_\text{TV}\]

where \(\|\cdot\|_\text{TV}\) denotes the total variation, y are the measurements, g is a masking operator and \(\tau\) expresses the trade-off between the two terms.

Load an image and convert it to grayscale

>>> import matplotlib.image as mpimg
>>> import numpy as np
>>> try:
...     im_original = mpimg.imread('tutorials/lena.png')
... except IOError:
...     im_original = mpimg.imread('doc/tutorials/lena.png')
>>> im_original = np.dot(im_original[..., :3], [0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights.

and generate a random masking matrix

>>> np.random.seed(14)  # Reproducible results.
>>> mask = np.random.uniform(size=im_original.shape)
>>> mask = mask > 0.85

which masks 85% of the pixels. The masked image is given by

>>> g = lambda x: mask * x
>>> im_masked = g(im_original)

The prior objective to minimize is defined by

\[f_1(x) = \|x\|_\text{TV}\]

which can be expressed by the toolbox TV-norm function object. Its proximal operator has no closed form and is computed iteratively, hence the maxit parameter [BT09b]. It is instantiated with

>>> from pyunlocbox import functions
>>> f1 = functions.norm_tv(maxit=50, dim=2)

The fidelity objective to minimize is defined by

\[f_2(x) = \tau \|g(x)-y\|_2^2\]

which can be expressed by the toolbox L2-norm function object, instantiated with

>>> tau = 100
>>> f2 = functions.norm_l2(y=im_masked, A=g, lambda_=tau)

Note

We set \(\tau\) to a large value as we trust our measurements and want the solution to be close to them. For noisy measurements a lower value should be considered.
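
Since g was passed as a callable, its adjoint is needed as well to compute the gradient of f2. A pointwise mask is self-adjoint, i.e. \(\langle g(u), v \rangle = \langle u, g(v) \rangle\), which the following sketch checks numerically (the test arrays u and v are introduced here for illustration only; if the toolbox does not assume At = A for callables, pass At=g explicitly):

>>> u = np.ones(im_original.shape)
>>> v = mask.astype(float)
>>> np.allclose(np.sum(g(u) * v), np.sum(u * g(v)))
True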

The step size for optimal convergence is \(\frac{1}{\beta}\) where \(\beta=2\tau\) is the Lipschitz constant of the gradient of \(f_2\) [BT09a]. The Forward-Backward splitting algorithm is instantiated with

>>> from pyunlocbox import solvers
>>> solver = solvers.forward_backward(step=0.5/tau)

and the problem solved with

>>> x0 = np.array(im_masked)  # Make a copy to preserve im_masked.
>>> ret = solvers.solve([f1, f2], x0, solver, maxit=100)
Solution found after 93 iterations:
    objective function f(sol) = 4.268861e+03
    stopping criterion: RTOL

Let’s display the results:

>>> import matplotlib.pyplot as plt
>>> fig = plt.figure(figsize=(8, 2.5))
>>> ax1 = fig.add_subplot(1, 3, 1)
>>> _ = ax1.imshow(im_original, cmap='gray')
>>> _ = ax1.axis('off')
>>> _ = ax1.set_title('Original image')
>>> ax2 = fig.add_subplot(1, 3, 2)
>>> _ = ax2.imshow(im_masked, cmap='gray')
>>> _ = ax2.axis('off')
>>> _ = ax2.set_title('Masked image')
>>> ax3 = fig.add_subplot(1, 3, 3)
>>> _ = ax3.imshow(ret['sol'], cmap='gray')
>>> _ = ax3.axis('off')
>>> _ = ax3.set_title('Reconstructed image')
[Figure: original, masked, and reconstructed images]

The above figure shows a good reconstruction which is both smooth (the TV prior) and close to the measurements (the L2 fidelity).

Image denoising (Douglas-Rachford, Total Variation, L2-norm)

This tutorial presents an image denoising problem solved by the Douglas-Rachford splitting algorithm. The convex optimization problem minimizes a term which expresses a prior on the smoothness of the solution, subject to a data fidelity constraint. It is given by

\[\min\limits_x \|x\|_\text{TV} \text{ s.t. } \|x-y\|_2 \leq \epsilon\]

where \(\|\cdot\|_\text{TV}\) denotes the total variation, y are the measurements and \(\epsilon\) expresses the noise level.

Create a white circle on a black background

>>> import numpy as np
>>> N = 650
>>> im_original = np.resize(np.linspace(-1, 1, N), (N, N))    # Ramp from -1 to 1 along each row.
>>> im_original = np.sqrt(im_original**2 + im_original.T**2)  # Distance to the image center.
>>> im_original = im_original < 0.7                           # White disk of radius 0.7.

and add some random Gaussian noise

>>> sigma = 0.5  # Variance of 0.25.
>>> np.random.seed(7)  # Reproducible results.
>>> im_noisy = im_original + sigma * np.random.normal(size=im_original.shape)

The prior objective function to minimize is defined by

\[f_1(x) = \|x\|_\text{TV}\]

which can be expressed by the toolbox TV-norm function object, instantiated with

>>> from pyunlocbox import functions
>>> f1 = functions.norm_tv(maxit=50, dim=2)

The fidelity constraint expressed as an objective function to minimize is defined by

\[f_2(x) = \iota_S(x)\]

where \(\iota_S()\) is the indicator function of the set \(S = \left\{z \in \mathbb{R}^n \mid \|z-y\|_2 \leq \epsilon \right\}\) which is zero if \(z\) is in the set and infinite otherwise. This function can be expressed by the toolbox L2-ball function, instantiated with

>>> y = np.reshape(im_noisy, -1)  # Reshape the 2D image as a 1D vector.
>>> epsilon = N * sigma           # Tolerated noise norm: sqrt(N**2 * sigma**2).
>>> f = functions.proj_b2(y=y, epsilon=epsilon)
>>> f2 = functions.func()
>>> f2._eval = lambda x: 0        # Indicator functions evaluate to zero.
>>> def prox(x, step):
...     return np.reshape(f.prox(np.reshape(x, -1), 0), im_noisy.shape)
>>> f2._prox = prox

Note

We defined a custom proximal operator which transforms the 2D image as a 1D vector because pyunlocbox.functions.proj_b2 operates on the columns of x while pyunlocbox.functions.norm_tv needs a two-dimensional array to compute the 2D TV norm.
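
As a quick sanity check (a sketch, for illustration only), the custom prox indeed preserves the 2D image shape:

>>> f2.prox(im_noisy, 0).shape == im_noisy.shape
True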

The Douglas-Rachford splitting algorithm is instantiated with

>>> from pyunlocbox import solvers
>>> solver = solvers.douglas_rachford(step=0.1)

and the problem solved with

>>> x0 = np.array(im_noisy)  # Make a copy to preserve y aka im_noisy.
>>> ret = solvers.solve([f1, f2], x0, solver)
Solution found after 25 iterations:
    objective function f(sol) = 2.080376e+03
    stopping criterion: RTOL

Let’s display the results:

>>> import matplotlib.pyplot as plt
>>> fig = plt.figure(figsize=(8, 2.5))
>>> ax1 = fig.add_subplot(1, 3, 1)
>>> _ = ax1.imshow(im_original, cmap='gray')
>>> _ = ax1.axis('off')
>>> _ = ax1.set_title('Original image')
>>> ax2 = fig.add_subplot(1, 3, 2)
>>> _ = ax2.imshow(im_noisy, cmap='gray')
>>> _ = ax2.axis('off')
>>> _ = ax2.set_title('Noisy image')
>>> ax3 = fig.add_subplot(1, 3, 3)
>>> _ = ax3.imshow(ret['sol'], cmap='gray')
>>> _ = ax3.axis('off')
>>> _ = ax3.set_title('Denoised image')
[Figure: original, noisy, and denoised images]

The above figure shows a good reconstruction which is both smooth (the TV prior) and close to the measurements (the L2 fidelity constraint).

Reference guide

The package is mainly organized around two class hierarchies: the functions and the solvers. Instantiated functions represent convex functions to optimize. Instantiated solvers represent solving algorithms. The pyunlocbox.solvers.solve() solving function takes as parameters a solver object and some function objects to actually solve the optimization problem. See this function’s documentation for a typical usage example.

The pyunlocbox package is divided into the following modules:

  • functions: objective functions to define an optimization problem,

  • solvers: the main solving function and common solvers,

  • acceleration: general acceleration schemes for various solvers,

  • operators: some operators.

Functions

The pyunlocbox.functions module implements an interface for solvers to access the functions to be optimized as well as common objective functions.

Interface

The func base class defines a common interface to all functions:

func.cap(self, x)

Test the capabilities of the function object.

func.eval(self, x)

Function evaluation.

func.prox(self, x, T)

Function proximal operator.

func.grad(self, x)

Function gradient.
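
For instance, the L1-norm function object implements eval and prox, the latter being soft-thresholding (a small sketch; the exact array formatting may vary with the NumPy version):

>>> import numpy as np
>>> from pyunlocbox import functions
>>> f = functions.norm_l1()
>>> f.eval(np.array([1., -2., 3.]))
6.0
>>> f.prox(np.array([1., -2., 3.]), 1)  # Soft-thresholding with threshold 1.
array([ 0., -1.,  2.])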

Functions

Then, derived classes implement various common objective functions.

Norm operators (based on norm)

norm_l1(**kwargs)

L1-norm (eval, prox).

norm_l2(**kwargs)

L2-norm (eval, prox, grad).

norm_nuclear(**kwargs)

Nuclear-norm (eval, prox).

norm_tv([dim, verbosity])

TV-norm (eval, prox).

Projection operators (based on proj)

proj_b2(**kwargs)

Projection on the L2-ball (eval, prox).

Miscellaneous

dummy(**kwargs)

Dummy function which returns 0 (eval, prox, grad).

Inheritance diagram of pyunlocbox.functions

Solvers

The pyunlocbox.solvers module implements a solving function (which will minimize your objective function) as well as common solvers.

Solving

Call solve() to solve your convex optimization problem using your instantiated solver and function objects.

Interface

The solver base class defines a common interface to all solvers:

solver.pre(self, functions, x0)

Solver-specific pre-processing.

solver.algo(self, objective, niter)

Call the solver iterative algorithm and the provided acceleration scheme.

solver.post(self)

Solver-specific post-processing.

Solvers

Then, derived classes implement various common solvers.

gradient_descent(**kwargs)

Gradient descent algorithm.

forward_backward([accel])

Forward-backward proximal splitting algorithm.

douglas_rachford([lambda_])

Douglas-Rachford proximal splitting algorithm.

generalized_forward_backward([lambda_])

Generalized forward-backward proximal splitting algorithm.

Primal-dual solvers (based on primal_dual)

mlfbf([L, Lt, d0])

Monotone+Lipschitz Forward-Backward-Forward primal-dual algorithm.

projection_based([lambda_])

Projection-based primal-dual algorithm.

Inheritance diagram of pyunlocbox.solvers

Acceleration

The pyunlocbox.acceleration module implements acceleration schemes for use with the solvers of the pyunlocbox.solvers module. Pass an acceleration object as an argument to your chosen solver during its initialization so that the solver can use it.
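
For example, to equip the forward-backward solver with FISTA acceleration (a minimal sketch):

>>> from pyunlocbox import acceleration, solvers
>>> accel = acceleration.fista()
>>> solver = solvers.forward_backward(accel=accel)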

Interface

The accel base class defines a common interface to all acceleration schemes:

accel.pre(self, functions, x0)

Pre-processing specific to the acceleration scheme.

accel.update_step(self, solver, objective, niter)

Update the step size for the next iteration.

accel.update_sol(self, solver, objective, niter)

Update the solution point for the next iteration.

accel.post(self)

Post-processing specific to the acceleration scheme.

Acceleration schemes

Then, derived classes implement various common acceleration schemes.

dummy()

Dummy acceleration scheme which does nothing.

backtracking([eta])

Backtracking based on a local quadratic approximation of the smooth part of the objective.

fista(**kwargs)

Acceleration scheme for forward-backward solvers.

fista_backtracking([eta])

Acceleration scheme with backtracking for forward-backward solvers.

regularized_nonlinear([k, lambda_, …])

Regularized nonlinear acceleration (RNA) for gradient descent.

Operators

The pyunlocbox.operators module implements the following operators:

grad(x[, dim])

Returns the gradient of an array.

div(*args, **kwargs)

Returns the divergence of an array.

Contributing

Contributions are welcome, and they are greatly appreciated! The development of this package takes place on GitHub. Issues, bugs, and feature requests should be reported there. Code and documentation can be improved by submitting a pull request. Please add documentation and tests for any new code.

The package can be set up (ideally in a virtual environment) for local development with the following:

$ git clone https://github.com/epfl-lts2/pyunlocbox.git
$ pip install -U -r pyunlocbox/requirements.txt
$ pip install -e pyunlocbox

You can improve or add solvers, functions, and acceleration schemes in pyunlocbox/solvers.py, pyunlocbox/functions.py, and pyunlocbox/acceleration.py, along with their corresponding unit tests in pyunlocbox/tests/test_*.py (with reasonable coverage) and documentation in doc/reference/*.rst. If you have a nice example to demonstrate the use of the introduced functionality, please consider adding a tutorial in doc/tutorials.

Do not forget to update README.rst and doc/history.rst with e.g. new features. The version number needs to be updated in setup.py and pyunlocbox/__init__.py.

After making any change, please check the style, run the tests, and build the documentation with the following (enforced by Travis CI):

$ make lint
$ make test
$ make doc

Check the generated coverage report at htmlcov/index.html to make sure the tests reasonably cover the changes you’ve introduced.

History

0.5.2 (2017-12-15)

Mostly a maintenance release. Much cleaning happened, and a conda package is now available on conda-forge. Moreover, the package can now be tried online thanks to Binder.

0.5.1 (2017-07-04)

Development status updated from Alpha to Beta.

New features:

  • Acceleration module, decoupling acceleration strategies from the solvers

    • Backtracking scheme

    • FISTA acceleration

    • FISTA with backtracking

    • Regularized non-linear acceleration (RNA)

  • Solvers: gradient descent algorithm

Bug fix:

  • Decrease the dimensionality of the variables in the Douglas-Rachford tutorial to reduce test time and avoid timeouts on Travis CI.

Infrastructure:

  • Continuous integration: dropped 3.3 (matplotlib dropped it), added 3.6

  • We don't build PDF documentation anymore: it's less of a burden, and the HTML can be downloaded from Read the Docs.

0.4.0 (2016-08-01)

New feature:

  • Monotone+Lipschitz forward-backward-forward primal-dual algorithm (MLFBF)

Bug fix:

  • Plots generated when building documentation (not stored in the repository)

Infrastructure:

  • Continuous integration: dropped 2.6 and 3.2, added 3.5

  • Travis-ci: check style and build doc

  • Removed tox config (too cumbersome to use on dev box)

  • Monitor code coverage and report to coveralls.io

0.3.0 (2015-05-29)

New features:

  • Generalized forward-backward splitting algorithm

  • Projection-based primal-dual algorithm

  • TV-norm function (eval, prox)

  • Nuclear-norm function (eval, prox)

  • L2-norm proximal operator supports non-tight frames

  • Two new tutorials using the TV-norm with Forward-Backward and Douglas-Rachford for image reconstruction and denoising

  • New stopping criterion XTOL allows stopping when the variable is stable

Bug fix:

  • Much more memory efficient. Note that the array which contains the initial solution is now modified in place.

0.2.1 (2014-08-20)

Bug fix version. Still experimental.

Bug fixes:

  • Avoid complex casting to real

  • Do not stop iterating if the objective function stays at zero

0.2.0 (2014-08-04)

Second usable version, available on GitHub and released on PyPI. Still experimental.

New features:

  • Douglas-Rachford splitting algorithm

  • Projection on the L2-ball for tight and non-tight frames

  • Compressed sensing tutorial using L2-ball, L2-norm and Douglas-Rachford

  • Automatic solver selection

Infrastructure:

  • Unit tests for all functions and solvers

  • Continuous integration testing on Python 2.6, 2.7, 3.2, 3.3 and 3.4

0.1.0 (2014-06-08)

First usable version, available on GitHub and released on PyPI. Still experimental.

Features:

  • Forward-backward splitting algorithm

  • L1-norm function (eval and prox)

  • L2-norm function (eval, grad and prox)

  • Least squares problem tutorial using L2-norm and forward-backward

  • Compressed sensing tutorial using L1-norm, L2-norm and forward-backward

Infrastructure:

  • Sphinx generated documentation using Numpy style docstrings

  • Documentation hosted on Read the Docs

  • Code hosted on GitHub

  • Package hosted on PyPI

  • Code checked by flake8

  • Docstring and tutorial examples checked by doctest (as a test suite)

  • Unit tests for functions module (as a test suite)

  • All test suites executed in Python 2.6, 2.7 and 3.2 virtualenvs by tox

  • Distributed automatic testing on Travis CI continuous integration platform

References

BT09a

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2009.

BT09b

Amir Beck and Marc Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. Image Processing, IEEE Transactions on, 2009.

CR07

Emmanuel Candès and Justin Romberg. Sparsity and incoherence in compressive sampling. Inverse Problems, 2007. arXiv:math/0611957.

CP07

Patrick L Combettes and Jean-Christophe Pesquet. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. Selected Topics in Signal Processing, IEEE Journal of, 2007.

CP11

Patrick L Combettes and Jean-Christophe Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. 2011. arXiv:0912.3522.

KP15

Nikos Komodakis and Jean-Christophe Pesquet. Playing with duality. IEEE Signal Processing Magazine, 2015. arXiv:1406.5429.

RFP13

Hugo Raguet, Jalal Fadili, and Gabriel Peyré. A generalized forward-backward splitting. SIAM Journal on Imaging Sciences, 2013. arXiv:1108.4404.

SdB16

Damien Scieur, Alexandre d'Aspremont, and Francis Bach. Regularized nonlinear acceleration. arXiv, 2016. arXiv:1606.04133.