Unfit 3.1.1
Data fitting and optimization software

Unfit::LevenbergMarquardt Class Reference
A class to implement the Levenberg-Marquardt optimization method.
#include <LevenbergMarquardt.hpp>
Public Member Functions
    LevenbergMarquardt ()
    virtual ~LevenbergMarquardt ()
    int FindMin (GenericCostFunction &cost_function, std::vector< double > &coordinates) override
        Implements the Levenberg-Marquardt optimization method.
    void Reset () override
Public Member Functions inherited from Unfit::GenericOptimizer
    GenericOptimizer ()
    virtual ~GenericOptimizer ()
    void ResetGenericOptimizer ()
    virtual double GetCost (std::size_t index=0) const noexcept
    virtual bool GetIsPopulationBased () const noexcept
    virtual std::size_t GetNumberOfIterations () const noexcept
    virtual std::size_t GetNumberOfFunctionEvaluations () const noexcept
    virtual std::vector< std::vector< double > > GetPopulation () const
    virtual std::vector< double > GetSolution (std::size_t index=0) const
    virtual void SetPopulation (const std::vector< std::vector< double >> &population)
Private Member Functions
    std::vector< double > LoopUntilWithinBounds (const std::vector< double > &point, std::vector< double > &hlm)
    std::vector< double > CalculateResiduals (GenericCostFunction &cost_function, const std::vector< double > &coordinates)
    void FindJacobian (GenericCostFunction &cost_function, const std::vector< double > &residuals, const std::vector< double > &coordinates, bool one_sided_difference=true)
    void BroydenUpdate (const std::vector< double > &residuals_new, const std::vector< double > &residuals, const std::vector< double > &hlm, Matrix &jacobianT)
Private Attributes
    Matrix jacobian_
    std::size_t dimensions_
    std::size_t observation_size_
    double stored_cost_

Friends
    class TestLevenbergMarquardt
Additional Inherited Members

Public Attributes inherited from Unfit::GenericOptimizer
    Unfit::Bounds bounds
    Unfit::Options options

Protected Member Functions inherited from Unfit::GenericOptimizer
    virtual bool CalculateCost (GenericCostFunction &CostFunction, std::vector< double > &x)
    void GeneratePopulation (GenericCostFunction &CostFunction, std::size_t dimensions)
    void GenerateRandomEngines ()
    virtual bool IsConverged (const std::vector< double > &best_member, std::size_t truncated_index=0) const
        Checks to see if the population has converged.
    virtual void PrintInitialOutput (double best_cost) const
    virtual void PrintIterationOutput (double best_cost) const
    virtual void PrintFinalOutput () const
    virtual void SortPopulation () noexcept

Protected Attributes inherited from Unfit::GenericOptimizer
    std::vector< std::vector< double > > population_
    std::vector< std::mt19937 > random_engines_
    std::atomic< std::size_t > function_evaluations_
    std::atomic< std::size_t > iterations_
    bool is_population_based_
Detailed Description

A class to implement the Levenberg-Marquardt optimization method.

This class implements the Levenberg-Marquardt optimization method. It requires a cost function (written by the user) and an initial guess of the unknown parameters. It then proceeds to try to find a set of parameters that minimize the cost function.

The inspiration for this implementation was derived from the book Methods for Non-linear Least Squares Problems by K. Madsen, H.B. Nielsen and O. Tingleff. At the time of writing, this was freely available online. There is also a sample MATLAB implementation on their website that was useful for getting the Broyden rank one updates to work.
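A minimal usage sketch of this workflow is shown below. The exact GenericCostFunction interface (assumed here to be a functor returning one residual per observation), the header names, and the convention that a zero return code indicates success are assumptions for illustration, not details taken from this page.

// Illustrative sketch only. The GenericCostFunction interface, header names
// and return-code convention below are assumptions, not taken from this page.
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>
#include "GenericCostFunction.hpp"   // assumed header name
#include "LevenbergMarquardt.hpp"

// Hypothetical cost function: fit y = a * exp(b * t) to observed data.
class ExponentialCostFunction : public Unfit::GenericCostFunction
{
 public:
  ExponentialCostFunction(std::vector<double> t, std::vector<double> y)
      : t_(std::move(t)), y_(std::move(y)) {}
  // Return one residual per observation: model(t_i) - y_i
  std::vector<double> operator()(const std::vector<double> &param) override
  {
    std::vector<double> residuals(t_.size());
    for (std::size_t i = 0; i < t_.size(); ++i) {
      residuals[i] = param[0] * std::exp(param[1] * t_[i]) - y_[i];
    }
    return residuals;
  }
 private:
  std::vector<double> t_;
  std::vector<double> y_;
};

int main()
{
  ExponentialCostFunction cost_function({0.0, 1.0, 2.0}, {1.0, 2.7, 7.4});
  std::vector<double> coordinates {1.0, 1.0};  // initial guess for {a, b}
  Unfit::LevenbergMarquardt optimizer;
  auto rc = optimizer.FindMin(cost_function, coordinates);
  // On exit, coordinates is assumed to hold the best parameters found;
  // rc == 0 is assumed to indicate successful convergence.
  return rc;
}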
Unfit::LevenbergMarquardt::LevenbergMarquardt ( )
The constructor sets all of the parameters to their defaults.
Unfit::LevenbergMarquardt::~LevenbergMarquardt ( )  [inline, virtual]
As we are deriving from this class, the destructor should be virtual. In this case an empty destructor is just fine as we are putting everything into std::vectors which take care of themselves.
void Unfit::LevenbergMarquardt::BroydenUpdate (const std::vector< double > &residuals_new, const std::vector< double > &residuals, const std::vector< double > &hlm, Matrix &jacobianT)  [private]

Rather than calculate the Jacobian matrix via finite differences at each iteration, for expensive cost functions in particular it is worth using Broyden rank one updates. These give an estimate of the Jacobian at the new point and do not require any additional function evaluations. See the text cited in the class documentation for details.
Parameters
    residuals_new   the residuals at the proposed parameter estimate
    residuals       the residuals at the current parameter estimate
    hlm             the step size between the current and new parameter estimates
    jacobianT       the transpose of the Jacobian matrix (updated)
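For reference, the Broyden rank one update described in Madsen, Nielsen and Tingleff has the form below, written here in terms of J itself even though the class stores and updates the transpose; the exact form used internally is not reproduced on this page.

$$
J_{\text{new}} = J + \frac{\left(r_{\text{new}} - r - J\,h_{lm}\right)\,h_{lm}^{T}}{h_{lm}^{T}\,h_{lm}}
$$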
std::vector< double > Unfit::LevenbergMarquardt::CalculateResiduals (GenericCostFunction &cost_function, const std::vector< double > &coordinates)  [private]

Calculate the residuals (costs) associated with the current estimate of the unknown parameters. NOTE that you should never call the cost function directly; call this method instead. This method also tracks the number of function evaluations.
Parameters
    cost_function   the function to be used to calculate the residuals
    coordinates     the estimate of the unknown parameters
void Unfit::LevenbergMarquardt::FindJacobian (GenericCostFunction &cost_function, const std::vector< double > &residuals, const std::vector< double > &coordinates, bool one_sided_difference=true)  [private]
This method computes the first derivatives of the cost function with respect to each of the unknown parameters and stores the values in the jacobian matrix. Note that it is more efficient to store JT (J-transpose) so that is what is done here.
Parameters
    cost_function          the cost function used to calculate the residuals
    residuals              a vector of residuals at the current guess; only used for one-sided differencing, and can be empty for two-sided differencing
    coordinates            vector of the coefficients of the function
    one_sided_difference   selects between one-sided (forward) and two-sided finite difference approximations for the Jacobian calculation
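As an illustration, with a small perturbation delta applied to the j-th parameter along the unit vector e_j (the actual step size and any scaling are implementation details not documented here), the two approximations of entry (i, j) of the Jacobian are:

$$
\frac{\partial r_i}{\partial x_j} \approx \frac{r_i(x + \delta e_j) - r_i(x)}{\delta}
\qquad \text{(one-sided, forward)}
$$

$$
\frac{\partial r_i}{\partial x_j} \approx \frac{r_i(x + \delta e_j) - r_i(x - \delta e_j)}{2\delta}
\qquad \text{(two-sided, central)}
$$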
int Unfit::LevenbergMarquardt::FindMin (GenericCostFunction &cost_function, std::vector< double > &coordinates)  [override, virtual]

Implements the Levenberg-Marquardt optimization method.
This method takes in a cost function (user supplied) and an initial guess for the unknown parameters and uses the Levenberg-Marquardt algorithm to try to find the set of parameters that provides a minimum cost. This is implemented as a nonlinear least squares problem, so the cost will always be >= 0. This is known as a local optimizer, so the results you get will depend on how good the initial guess is.
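Concretely, the cost is the sum of squared residuals over the m observations (whether an additional factor of one half is applied internally is not documented on this page), which is why it can never be negative:

$$
F(x) = \sum_{i=1}^{m} r_i(x)^{2} \;\ge\; 0
$$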
Intended use: auto rc = object.FindMin(cost_function, coordinates)
Parameters
    cost_function   the cost function that returns the residuals to be tested
    coordinates     vector of initial guesses
Implements Unfit::GenericOptimizer.
std::vector< double > Unfit::LevenbergMarquardt::LoopUntilWithinBounds (const std::vector< double > &point, std::vector< double > &hlm)  [private]
This method calculates the new coordinates based on the old coordinates and the step size, hlm. If the new coordinates fall within the bounds of the domain that is all it does. If the new coordinates are outside the bounds of the domain, the step size is iteratively reduced by a factor of two until it is inside the domain.
Parameters
    point   current coordinates
    hlm     step to be taken
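A standalone sketch of the step-halving idea described above is given below. It uses plain lower/upper bound vectors in place of the library's Unfit::Bounds object, so the names and bounds handling are illustrative assumptions only.

#include <cstddef>
#include <iostream>
#include <vector>

// Illustrative only: halve the step hlm until point + hlm lies inside
// [lower, upper] in every dimension, then return the new point.
std::vector<double> LoopUntilWithinBounds(const std::vector<double> &point,
                                          std::vector<double> &hlm,
                                          const std::vector<double> &lower,
                                          const std::vector<double> &upper)
{
  auto within_bounds = [&](const std::vector<double> &x) {
    for (std::size_t i = 0; i < x.size(); ++i) {
      if (x[i] < lower[i] || x[i] > upper[i]) return false;
    }
    return true;
  };
  std::vector<double> new_point(point.size());
  for (std::size_t i = 0; i < point.size(); ++i) new_point[i] = point[i] + hlm[i];
  while (!within_bounds(new_point)) {        // assumes point itself is in bounds
    for (std::size_t i = 0; i < hlm.size(); ++i) {
      hlm[i] *= 0.5;                         // iteratively reduce the step by two
      new_point[i] = point[i] + hlm[i];
    }
  }
  return new_point;
}

int main()
{
  std::vector<double> point {0.5}, hlm {2.0};
  auto p = LoopUntilWithinBounds(point, hlm, {0.0}, {1.0});
  std::cout << p[0] << " " << hlm[0] << "\n";  // prints "1 0.5"
  return 0;
}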
void Unfit::LevenbergMarquardt::Reset ( )  [override, virtual]

Resets everything back to the default values. This is equivalent to destroying the object and creating a new one.
Implements Unfit::GenericOptimizer.
friend class TestLevenbergMarquardt  [friend]

For unit testing purposes only.
std::size_t Unfit::LevenbergMarquardt::dimensions_  [private]

Number of parameters to find.
Matrix Unfit::LevenbergMarquardt::jacobian_  [private]

The Jacobian matrix.
std::size_t Unfit::LevenbergMarquardt::observation_size_  [private]

Number of experimental observations.
double Unfit::LevenbergMarquardt::stored_cost_  [private]

Variable to store the current cost of the solution.