Hello all. I have a quick question. Is it possible to use any of the Optimization.jl global optimizers for the outer optimization step of a population Pumas model (with FOCE approximation)?
I find myself having to keep copies of models in Pumas and MTK (along with different environments and Julia versions) so that I can use BBO optimizers where needed and then do full population estimation in Pumas. Alternatively, is there a way to have Pumas build some sort of objective function that can be used with the Optimization.jl suite, even if it is not AD-compatible, since I mainly use BBO?
Tangent
If there were a way to give Pumas access to most of Optimization.jl, I think it would be a standout (and maybe sellable) feature. I keep seeing people trying to apply modern modeling approaches (automated model search, etc.) wrapped around 1980s NONMEM design principles, and I can see lots of potential for Pumas to leap over most of that (in much the same way it has with DeepPumas). I'm assuming you're already building MTK models and using Optimization.jl. Of course, I don't know what challenges come with this, especially with regard to the FOCE / Laplace approximation and the inner optimization that comes along with it. Tangent over.
Pumas uses Optim.jl in fit, so you'd have to use one of the methods from that package. The fit function relies on the details of the callback handling in Optim.jl, among a few other things, which is why you can't just use another optimization package. However, you do have access to the FOCE objective function via the loglikelihood function, specifically loglikelihood(model, population, param, FOCE()). As you know, the param argument is a NamedTuple, and most optimization algorithms require a Vector of the parameters. Pumas relies on TransformVariables.jl (https://github.com/tpapp/TransformVariables.jl) for the conversion between Vector and NamedTuple representations, so you should be able to define an objective with something like
trf = Pumas.totransform(model.param)
objf = vparam -> -loglikelihood(model, population, TransformVariables.transform(trf, vparam), FOCE())
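To go the other way (NamedTuple to Vector), for example to get a starting point for the optimizer, you can use the inverse transform. A minimal sketch, assuming init_params(model) returns the model's initial parameter NamedTuple:

using TransformVariables
init_vec = TransformVariables.inverse(trf, init_params(model))  # NamedTuple -> unconstrained Vector
objf(init_vec)  # should match -loglikelihood(model, population, init_params(model), FOCE())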
Let us know how it goes.
Thank you for the suggestion. It is nice to know that TransformVariables exists; I have been building vector_to_tuple functions for this exact reason with my models for a while now (outside of Pumas).
I gave this a try but am running into an issue. I can get a working LL function, but it seems to be incompatible with BBO for what looks like a housekeeping issue. I'm providing the general code below; I can't send the actual model (though I don't think it is a model issue).
Note that my_model is a working Pumas model; it successfully optimizes (locally) with the Pumas fit function, so I know the model is OK. Further, the constructed objf works as expected: when I put parameters in, I get out what I expect. However, the optimization step at the end fails: it runs for 10 iterations and then throws what looks like a housekeeping error (see below). If I change the initial conditions and bounds for the optimizer, it still runs for 10 iterations and then fails.
Following the error into OptimizationBBO leads to this line of code. I would assume that the deduce_retcode method would be handled by the OptimizationFunction or OptimizationProblem calls and shouldn't be a Pumas issue.
## Error line 185 in OptimizationBBO
opt_ret = Optimization.deduce_retcode(opt_res.stop_reason)
Any thoughts?
The relevant package versions are below. The optimization packages were added to the Pumas 2.6.1 environment with preserve=all, so these are the most recent versions available.
Pumas 2.6.1
Optimization 3.25.1
OptimizationBBO 0.3.0
## Optimization code
using Pumas, TransformVariables, Optimization, OptimizationBBO

trf = Pumas.totransform(my_model.param)
objf = vparam -> -loglikelihood(my_model, my_pop,  # note the negative sign for minimization
                                TransformVariables.transform(trf, vparam), FOCE())
fun_opt(xx, pp) = objf(xx);  # Optimization.jl objectives take (x, p)

optfn = OptimizationFunction(fun_opt, Optimization.AutoForwardDiff());
# init_vec, lb_vec, ub_vec, and cb_fun are defined elsewhere
optprob = Optimization.OptimizationProblem(optfn, init_vec, nothing,
                                           lb = lb_vec,
                                           ub = ub_vec);
opt_res = Optimization.solve(optprob,
                             BBO_adaptive_de_rand_1_bin_radiuslimited(),
                             callback = cb_fun,
                             maxtime = 15)
This looks like an issue in Optimization.jl. I'd try to use BlackBoxOptim.jl directly to avoid possible issues in the wrapper code.
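Something along these lines might work (an untested sketch that reuses objf, trf, lb_vec, and ub_vec from your code above; the keywords follow BlackBoxOptim's bboptimize interface):

using BlackBoxOptim, TransformVariables

res = bboptimize(objf;
                 SearchRange = collect(zip(lb_vec, ub_vec)),  # one (lower, upper) tuple per parameter
                 Method = :adaptive_de_rand_1_bin_radiuslimited,
                 MaxTime = 15.0)

best_vec = best_candidate(res)                           # best parameter Vector found
best_nt  = TransformVariables.transform(trf, best_vec)   # back to a NamedTuple for Pumas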