A custom conjugate-gradient algorithm for function maximization

Module Contents


fmax_cg(f, x0, maxiters=100, tol=1e-08, dfdx_and_bdflag=None, xopt=None)

Custom conjugate-gradient (CG) routine for maximizing a function.

_maximize_1d(g, s1, s2, g1)

_max_within_bracket(g, s1, g1, s2, g2, s3, g3)

_find_boundary(g, s1, s2)

_finite_diff_dfdx_and_bdflag(f, x, delta)

pygsti.optimize.customcg.fmax_cg(f, x0, maxiters=100, tol=1e-08, dfdx_and_bdflag=None, xopt=None)

Custom conjugate-gradient (CG) routine for maximizing a function.

This function runs slower than scipy.optimize’s ‘CG’ method, but doesn’t give up or get stuck as easily, and so sometimes can be a better option.

Parameters

  • f (function) – The function to maximize.

  • x0 (numpy array) – The starting point (argument to f).

  • maxiters (int, optional) – Maximum number of iterations.

  • tol (float, optional) – Tolerance for convergence (compared to the absolute difference in f).

  • dfdx_and_bdflag (function, optional) – Function that computes the Jacobian of f along with a boundary flag.

  • xopt (numpy array, optional) – A known good optimum, used for debugging; when given, the distance between the current optimum and xopt may be printed.


Returns

scipy.optimize.Result object – Includes the members ‘x’, ‘fun’, ‘success’, and ‘message’. Note: ‘fun’ holds the negated maximum, so that the return value conforms to that of other minimization routines.
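The dfdx_and_bdflag callback must return both the Jacobian of f and a boundary flag. A minimal finite-difference sketch of such a callback is shown below; the boundary convention used here (f returns None at infeasible points) is an illustrative assumption, not necessarily pyGSTi's actual contract:

```python
import numpy as np

def finite_diff_dfdx_and_bdflag(f, x, delta=1e-7):
    """Central-difference Jacobian of f at x, plus a per-coordinate
    boundary flag.  Convention assumed here for illustration: f returns
    None at infeasible points, and bdflag[i] is set when a probe point
    along coordinate i fell outside the feasible region."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    bdflag = np.zeros(x.shape, dtype=bool)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = delta
        f_plus, f_minus = f(x + step), f(x - step)
        if f_plus is None or f_minus is None:
            bdflag[i] = True   # probe crossed the boundary
            grad[i] = 0.0      # leave this direction untouched
        else:
            grad[i] = (f_plus - f_minus) / (2 * delta)
    return grad, bdflag

# Example: concave quadratic with maximum at x = (1, 1)
f = lambda x: -np.sum((x - 1.0) ** 2)
grad, bdflag = finite_diff_dfdx_and_bdflag(f, np.array([0.0, 2.0]))
```

For this f the analytic gradient is -2(x - 1), so grad is approximately [2, -2] and no boundary is flagged.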

pygsti.optimize.customcg._maximize_1d(g, s1, s2, g1)
pygsti.optimize.customcg._max_within_bracket(g, s1, g1, s2, g2, s3, g3)
pygsti.optimize.customcg._find_boundary(g, s1, s2)
pygsti.optimize.customcg._finite_diff_dfdx_and_bdflag(f, x, delta)
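The private helpers implement a bracketing line search along the conjugate search direction: _maximize_1d brackets a maximum of the 1-D restriction g(s), and _max_within_bracket refines it. The following self-contained sketch illustrates the bracket-refinement idea via repeated parabolic interpolation through three points; it is an assumed illustration of the technique, not pyGSTi's actual implementation:

```python
def max_within_bracket(g, s1, g1, s2, g2, s3, g3, tol=1e-10, maxiter=50):
    """Refine a bracketed maximum of g.  Assumes s1 < s2 < s3 with
    g2 >= g1 and g2 >= g3.  Each iteration fits a parabola through the
    three bracket points and moves the bracket toward its vertex.
    Illustrative sketch only, not pyGSTi's actual routine."""
    for _ in range(maxiter):
        # Vertex of the parabola through (s1,g1), (s2,g2), (s3,g3)
        num = (s2 - s1) ** 2 * (g2 - g3) - (s2 - s3) ** 2 * (g2 - g1)
        den = (s2 - s1) * (g2 - g3) - (s2 - s3) * (g2 - g1)
        if den == 0 or abs(s3 - s1) < tol:
            break
        s_new = s2 - 0.5 * num / den
        # Fall back to bisecting the wider side if the vertex is
        # outside the bracket or coincides with the current best point
        if not (s1 < s_new < s3) or s_new == s2:
            s_new = 0.5 * ((s1 + s2) if g1 > g3 else (s2 + s3))
        g_new = g(s_new)
        # Shrink the bracket, keeping the best point in the middle
        if s_new < s2:
            if g_new >= g2:
                s3, g3, s2, g2 = s2, g2, s_new, g_new
            else:
                s1, g1 = s_new, g_new
        else:
            if g_new >= g2:
                s1, g1, s2, g2 = s2, g2, s_new, g_new
            else:
                s3, g3 = s_new, g_new
    return s2, g2

# Example: g has its maximum (value 0) at s = 2, inside bracket [0, 3]
g = lambda s: -(s - 2.0) ** 2
s_best, g_best = max_within_bracket(g, 0.0, g(0.0), 1.0, g(1.0), 3.0, g(3.0))
```

For a quadratic g the first parabolic fit is exact, so the maximizer s = 2 is found immediately; for general g the bracket shrinks over successive iterations.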