Functional generation of `PumasModel`s in a module that avoids world age issues?

I am trying to do something like this:

module DummyModule
    using Pumas

    mwe(nt) = @eval @model begin
        @param begin
            Ω ∈ PDiagDomain($nt.dim_Ω)
        end

        @random begin
            η ~ MvNormal(Ω)
        end

        @derived begin
            dv ~ @. Normal(η, 1)
        end
    end

    function dummy_fitting(pop, nts)
        map(nts) do nt
            m = mwe(nt)
            fit(m, pop, init_params(m), JointMAP(), optim_options = (; iterations = 1))
        end
    end

    export mwe, dummy_fitting
end

using .DummyModule

using Pumas

n = 3
dummy_pop = [
    Subject(; id = "1", observations = (; dv = rand(3)), time = collect(LinRange(0, 1, n)));
]
nts = [
    (; dim_Ω = n);
]

dummy_fitting(dummy_pop, nts)

I’ll need to evaluate many combinations of hyperparameters, which is why I am generating models with a function rather than writing out many individual models, which would be impractical. However, when I call `dummy_fitting` I get an error:

ERROR: TaskFailedException
Stacktrace:
  [1] wait(t::Task)
    @ Base .\task.jl:370
  [2] fetch
    @ .\task.jl:390 [inlined]
  [3] __tmapreduce(f::Pumas.var"#756#757"{…}, op::typeof(+), tasks::Vector{…}, len::Int64, init::Float64, src::Tuple{…}, batchargs::Tuple{…})
    @ Pumas .\none:548
  [4] _tmapreduce
    @ .\none:570 [inlined]
  [5] tmapreduce(f::Function, ::Type{…}, op::Function, src::Tuple{…}, batchargs::Tuple{…}; init::Float64, tasks::Vector{…})
    @ Pumas .\none:581
  [6] tmapreduce
    @ .\none:572 [inlined]
  [7] _logdensitygrad(b::Pumas.ThreadedBayesLogDensity{…}, v::Vector{…})
    @ Pumas .\none:489
  [8] logdensity_and_gradient(b::Pumas.ThreadedBayesLogDensity{…}, v::Vector{…})
    @ Pumas .\none:71
  [9] (::Pumas.var"#859#862"{Pumas.ThreadedBayesLogDensity{…}})(f::Float64, g::Vector{Float64}, vparam::Vector{Float64})
    @ Pumas .\none:236
 [10] (::NLSolversBase.var"#51#52"{NLSolversBase.InplaceObjective{…}, Float64})(G::Vector{Float64}, x::Vector{Float64})
    @ NLSolversBase C:\Users\D-LINKEVICIUS\.julia\packages\NLSolversBase\n7XXO\src\objective_types\incomplete.jl:54
 [11] value_gradient!!(obj::NLSolversBase.OnceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, x::Vector{Float64})
    @ NLSolversBase C:\Users\D-LINKEVICIUS\.julia\packages\NLSolversBase\n7XXO\src\interface.jl:82
 [12] initial_state(method::Optim.LBFGS{…}, options::Optim.Options{…}, d::NLSolversBase.OnceDifferentiable{…}, initial_x::Vector{…})
    @ Optim C:\Users\D-LINKEVICIUS\.julia\packages\Optim\7krni\src\multivariate\solvers\first_order\l_bfgs.jl:168
 [13] DefaultOptimizeFN
    @ .\none:3780 [inlined]
 [14] DefaultOptimizeFN
    @ .\none:3770 [inlined]
 [15] _fit_jointmap(bayes::Pumas.ThreadedBayesLogDensity{…}, init_randeffs::Nothing, optimize_fn::Pumas.DefaultOptimizeFN{…}, cb::Returns{…})
    @ Pumas .\none:251
 [16] _fit(model::PumasModel{…}, data::Vector{…}, param::@NamedTuple{…}, alg::JointMAP{…}, constantcoef::Tuple{}, init_randeffs::Nothing, ignore_numerical_error::Bool)
    @ Pumas .\none:174
 [17] #fit#857
    @ .\none:85 [inlined]
 [18] (::Main.DummyModule.var"#1#2"{Vector{Subject{…}}})(nt::@NamedTuple{dim_Ω::Int64})
    @ Main.DummyModule c:\Users\D-LINKEVICIUS\GeneralSciMLHHModels.jl\mwe_pumas.jl:21
 [19] iterate
    @ .\generator.jl:48 [inlined]
 [20] _collect(c::Vector{…}, itr::Base.Generator{…}, ::Base.EltypeUnknown, isz::Base.HasShape{…})
    @ Base .\array.jl:811
 [21] collect_similar
    @ .\array.jl:720 [inlined]
 [22] map
    @ .\abstractarray.jl:3371 [inlined]
 [23] dummy_fitting(pop::Vector{Subject{…}}, nts::Vector{@NamedTuple{…}})
    @ Main.DummyModule c:\Users\D-LINKEVICIUS\GeneralSciMLHHModels.jl\mwe_pumas.jl:19
 [24] top-level scope
    @ c:\Users\D-LINKEVICIUS\GeneralSciMLHHModels.jl\mwe_pumas.jl:40

    nested task error: MethodError: no method matching (::Main.DummyModule.var"#3#8")(::@NamedTuple{Ω::PDMats.PDiagMat{ForwardDiff.Dual{…}, Vector{…}}}, ::@NamedTuple{})
    The function `#3` exists, but no method is defined for this combination of argument types.

    Closest candidates are:
      (::Main.DummyModule.var"#3#8")(::NamedTuple, ::NamedTuple{()}) (method too new to be called from this world context.)
       @ Main.DummyModule none:472

    Stacktrace:
      [1] (::Pumas.RandomObj{…})(::Subject{…}, param::@NamedTuple{…})
        @ Pumas .\none:447
      [2] _penalized_conditional_nll(model::PumasModel{…}, subject::Subject{…}, param::@NamedTuple{…}, vrandeffsorth::SubArray{…}, diffeq_options::@NamedTuple{…})
        @ Pumas .\none:1396
ientConfig{…})
        @ ForwardDiff C:\Users\D-LINKEVICIUS\.julia\packages\ForwardDiff\X74OO\src\gradient.jl:98
      [6] gradient!(result::DiffResults.MutableDiffResult{…}, f::Pumas._L_rfx{…}, x::Vector{…}, cfg::ForwardDiff.GradientConfig{…}, ::Val{…})
        @ ForwardDiff C:\Users\D-LINKEVICIUS\.julia\packages\ForwardDiff\X74OO\src\gradient.jl:39
      [7] value_and_gradient!(::Pumas._L_rfx{…}, ::Vector{…}, ::DifferentiationInterfaceForwardDiffExt.ForwardDiffGradientPrep{…}, ::ADTypes.AutoForwardDiff{…}, ::Vector{…})
        @ DifferentiationInterfaceForwardDiffExt C:\Users\D-LINKEVICIUS\.julia\packages\DifferentiationInterface\zJHX8\ext\DifferentiationInterfaceForwardDiffExt\onearg.jl:396
      [8] (::Pumas.var"#756#757"{…})(i::Int64, buffer1_i::Vector{…}, buffer2_i::Vector{…}, res_i::Vector{…}, cfg_rfx_i::DifferentiationInterfaceForwardDiffExt.ForwardDiffGradientPrep{…})
        @ Pumas .\none:502
      [9] macro expansion
        @ .\none:562 [inlined]
     [10] macro expansion
        @ .\simdloop.jl:77 [inlined]
     [11] batch_mapreduce
        @ .\none:560 [inlined]
     [12] (::Pumas.var"#758#759"{Float64, UnitRange{…}, Pumas.var"#756#757"{…}, typeof(+), Tuple{…}, Tuple{…}})()
        @ Pumas .\none:543
Some type information was truncated. Use `show(err)` to see complete types.

I assume this is due to an anonymous function generated by `@random` which stays in `DummyModule` rather than where it should be, which I’d guess is `Main`. Is there a way to do this without pulling in other external packages?

I am working with an academic license of DeepPumas v0.9.0.

Bump, following up on this. I would greatly appreciate even an “it’s not possible” if that’s the case.

The problem here is that `@eval` executes its statement at top level (`Main` when called in the REPL, `DummyModule` when called inside `mwe(nt)`), and if you use it to define new functions it advances the world age counter. However, the `dummy_fitting` function will not see the new methods until it returns, because the world age that a function sees cannot change during its execution. See e.g. the Discourse post “World Age for beginners: one way to compile a dynamic language” (Internals & Design, Julia Programming Language) for more details.
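A minimal Pumas-free sketch of the same failure, so you can reproduce it without a license: the function defines a new method via `@eval` and then calls it before returning, so the call still runs in the world that existed when the function started executing.

```julia
function use_evaled()
    f = @eval x -> x + 1   # defines a brand-new method, advancing the world age
    f(1)                   # but this call still runs in the old world
end

# Calling it throws a MethodError with the telltale note
# "method too new to be called from this world context".
err = try
    use_evaled()
catch e
    e
end
err isa MethodError   # true
```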

If you want to use `@eval` to define a Pumas model, you need to do it at top level. In this case you could, e.g., create at top level a dictionary or vector of Pumas models, each defined using `@eval`, and then pass this collection of models to `dummy_fitting`. Top level here could mean either:

  • In your script that gets executed in the REPL or in a call to `julia script.jl` (so the models are `@eval`ed in `Main`)
  • (never tried, but I think it should work) Inside `DummyModule`, with the resulting collection becoming a global variable inside `DummyModule`. Then you would not even need to pass the models to `dummy_fitting`, because you could refer to the global variable directly.
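A Pumas-free sketch of this pattern, with plain closures standing in for the models and `fit_all` standing in for `dummy_fitting`: because the collection is built at top level, every `@eval` has advanced the world age before `fit_all` is ever entered.

```julia
dims = [2, 3, 5]

# Built at top level: each @eval advances the world before any call happens.
# The closures here are hypothetical stand-ins for @eval'ed Pumas models.
models = [@eval () -> $d for d in dims]

# Stand-in for dummy_fitting: by the time this runs, the methods already
# exist in its world, so no world age error occurs.
fit_all(ms) = map(m -> m(), ms)

fit_all(models)   # returns [2, 3, 5]
```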

Note that Pumas models `@eval`ed inside `Main` are trickier to serialize and deserialize correctly; defining them inside `DummyModule` may make deserialization easier.

You can also define a Pumas model inside a function, as you tried, but in that case you must not use `@eval`. This should also be easier to deserialize, as long as you do not change the order of the functions in your `DummyModule`.

It should be relatively safe to refer to local variables of the function inside the Pumas model (as long as they are not mutable values that you modify elsewhere), especially if you are just parameterizing the `@param` block. In your MWE, it is perfectly fine to write:

mwe(nt) = @model begin
    @param begin
        Ω ∈ PDiagDomain(nt.dim_Ω)
    end

    @random begin
        η ~ MvNormal(Ω)
    end

    @derived begin
        dv ~ @. Normal(η, 1)
    end
end
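A Pumas-free sketch of why this is safe: without `@eval`, the model is just a closure over the local `nt`. No new global method is defined at call time, so the world age never advances and the result is callable immediately.

```julia
# Hypothetical stand-in for mwe(nt): a plain closure capturing the local nt.
make_scaler(nt) = x -> x * nt.factor

# Stand-in for dummy_fitting: builds and uses each closure inside the same
# function call, which is fine because no eval is involved.
run_all(nts) = map(nts) do nt
    s = make_scaler(nt)
    s(10)
end

run_all([(; factor = 2), (; factor = 3)])   # returns [20, 30]
```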

To add to Lorenzo’s response, one way to work around world age issues, at a minor performance penalty, is `invokelatest`. In your code, replacing the `fit` line with the following successfully works around the world age issue:

invokelatest(fit, m, pop, init_params(m), JointMAP(), optim_options = (; iterations = 1))
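The same workaround in a Pumas-free sketch: `invokelatest` looks the method up in the latest world at call time instead of the caller’s frozen world, which is exactly what routing the `fit` call through it achieves above.

```julia
function use_evaled_latest()
    f = @eval x -> x + 1          # method born in a newer world
    Base.invokelatest(f, 1)       # dispatches in the latest world, so it works
end

use_evaled_latest()   # returns 2
```

The price is a dynamic dispatch on every `invokelatest` call, which is negligible next to the cost of a `fit`.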