Is there a way to do a saddle reset in Pumas?

Is there a way to do a saddle reset in Pumas to avoid getting stuck at saddle points?
For example, in NONMEM we can request a single saddle-reset step using the computed Hessian (SADDLE_RESET=1 SADDLE_HESS=1).

It’s possible to restart the optimization from the final parameters of a previous fit. That effectively resets the Hessian approximation to the identity. You simply do something like

fit1 = fit(model, population, starting_values, approximation)
# Refit from the fitted parameters; the quasi-Newton Hessian
# approximation starts over from the identity.
fit2 = fit(model, population, coef(fit1), approximation)

That has the same effect as the reset in NONMEM. Notice that this is one of the advantages of Pumas being embedded in a full programming language: not every feature needs a dedicated flag, because some features are simple compositions of other functions.

My colleague @mohamed82008 pointed out that you are referring to the methods described in “Saddle-Reset for Robust Parameter Estimation and Identifiability Analysis of Nonlinear Mixed Effects Models” (SpringerLink), so my last reply wasn’t helpful. We currently don’t have an implementation of the methods from that paper, but we are considering introducing similar, though not necessarily identical, functionality in the future.

Just to clarify, there are two cases you might be running into:

  1. The optimization fails to improve the objective because of a bad Hessian approximation, even though the gradient norm is not close to 0. In this case, Andreas’s suggestion helps by resetting the Hessian approximation and trying again.
  2. The first optimization converged to a true saddle point. Generally speaking, the chance of this happening with quasi-Newton algorithms such as BFGS is very small (https://arxiv.org/pdf/1710.07406.pdf) given a random initial point and dataset. But if the optimizer was unlucky enough to land on such a point, simply restarting the optimization will not escape it. In this case, you can either restart the fit from a different random initial point, or perturb the current solution and re-fit (see the sketch after this list). The perturbation can be completely random, or it can be directed along directions suspected to further improve the objective value, which is what the “Saddle-Reset” paper attempts to do.
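Here is a minimal sketch of the random-perturbation approach, reusing model, population, approximation, and fit1 from the snippet above. The perturb helper is hypothetical (not a Pumas function); it rescales each fitted value by a random factor. It assumes loglikelihood works on the fit results, as it does for fitted Pumas models.

using Random

# Hypothetical helper (not part of Pumas): nudge each fitted value
# by a random multiplicative factor. Scaling a whole array by one
# positive factor keeps covariance matrices positive definite.
perturb(x::Number; scale = 0.1) = x * exp(scale * randn())
perturb(x::AbstractArray; scale = 0.1) = x .* exp(scale * randn())
perturb(nt::NamedTuple; scale = 0.1) = map(x -> perturb(x; scale = scale), nt)

Random.seed!(1234)  # make the perturbations reproducible
refits = [fit(model, population, perturb(coef(fit1)), approximation) for _ in 1:5]
best = argmax(loglikelihood, refits)  # keep the refit with the highest log-likelihood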

As a solution to your problem, try Andreas’s suggestion first. If it doesn’t change the optimal parameter values, try a few random perturbations. If those don’t change the optimal parameter values either, then it’s very unlikely you are at a saddle point.
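To judge whether a restart or perturbation actually moved the fit, comparing objective values is usually more robust than comparing raw parameters. A sketch, assuming fit1 and a refit fit2 as above:

# An improvement well beyond numerical noise suggests the first fit
# was stuck; the 1e-3 log-likelihood tolerance is a rule of thumb
# chosen here for illustration, not a Pumas default.
improved = loglikelihood(fit2) > loglikelihood(fit1) + 1e-3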
