Simulating with vcov

Hello,

I am trying to simulate with vcov. However, I’m getting an error. Could I get some help with this?

julia> test_sims = simobs(model_fit, vcov(model_fit), samples=2)
Progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| Time: 0:00:00
ERROR: MethodError: no method matching PDMats.PDMat(::Pumas.CovarianceEstimate{Float64, Matrix{Float64}})
Closest candidates are:
  PDMats.PDMat(::AbstractMatrix{T} where T, ::LinearAlgebra.Cholesky{T, S}) where {T, S} at /builds/PumasAI/PumasSystemImages-jl/.julia/packages/PDMats/vgpB7/src/pdmat.jl:12
  PDMats.PDMat(::LinearAlgebra.Cholesky) at /builds/PumasAI/PumasSystemImages-jl/.julia/packages/PDMats/vgpB7/src/pdmat.jl:21
  PDMats.PDMat(::Symmetric) at /builds/PumasAI/PumasSystemImages-jl/.julia/packages/PDMats/vgpB7/src/pdmat.jl:20
  ...
Stacktrace:
 [1] MvNormal(μ::Vector{Float64}, Σ::Pumas.CovarianceEstimate{Float64, Matrix{Float64}})
   @ Distributions /builds/PumasAI/PumasSystemImages-jl/.julia/packages/Distributions/Xrm9e/src/multivariate/mvnormal.jl:211
 [2] simobs(fpm::Pumas.FittedPumasModel{PumasModel{ParamSet{NamedTuple{(:tvtr_ratio, :tvCL, :tvVc, :tvDuration, :ωtr_ratio, :ωCL, :ωVc, :σ_add, :σ_prop), NTuple{9, RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}, var"#39#55", var"#40#56", var"#42#58", var"#44#60", Pumas.LinearODE, var"#45#61", var"#50#66", ModelingToolkit.ODESystem}, Vector{Subject{NamedTuple{(:dv,), Tuple{Vector{Union{Missing, Float64}}}}, Pumas.ConstantCovar{NamedTuple{(:rep, :params_num, :NSubj, :occ, :period, :sequence, :seq_n, :formulation, :isT, :input_tr_ratio), Tuple{Int64, Int64, Int64, Int64, Int64, String, Int64, String, Int64, Float64}}}, Vector{Pumas.Event{Float64, Float64, Float64, Float64, Float64, Float64, Int64}}, Vector{Float64}}}, Optim.MultivariateOptimizationResults{Optim.BFGS{LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Nothing, Float64, Optim.Flat}, Float64, Vector{Float64}, Float64, Float64, Vector{Optim.OptimizationState{Float64, Optim.BFGS{LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Nothing, Float64, Optim.Flat}}}, Bool, NamedTuple{(:f_limit_reached, :g_limit_reached, :h_limit_reached, :time_limit, :callback, :f_increased), NTuple{6, Bool}}}, Pumas.FOCE, Vector{Vector{Float64}}, NamedTuple{(:optimize_fn, :constantcoef, :omegas, :ensemblealg, :diffeq_options), Tuple{Pumas.DefaultOptimizeFN{Nothing, NamedTuple{(:show_trace, :store_trace, :extended_trace, :g_tol, :allow_f_increases), Tuple{Bool, Bool, Bool, Float64, Bool}}}, NamedTuple{(), Tuple{}}, Tuple{}, EnsembleThreads, NamedTuple{(), Tuple{}}}}, ParamSet{NamedTuple{(:tvtr_ratio, :tvCL, :tvVc, :tvDuration, :ωtr_ratio, :ωCL, :ωVc, :σ_add, :σ_prop), NTuple{9, RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}}, population::Vector{Subject{NamedTuple{(:dv,), Tuple{Vector{Union{Missing, Float64}}}}, Pumas.ConstantCovar{NamedTuple{(:rep, :params_num, :NSubj, :occ, :period, :sequence, :seq_n, :formulation, :isT, :input_tr_ratio), Tuple{Int64, Int64, Int64, Int64, Int64, String, Int64, String, Int64, Float64}}}, Vector{Pumas.Event{Float64, Float64, Float64, Float64, Float64, Float64, Int64}}, Vector{Float64}}}, vcov::Pumas.CovarianceEstimate{Float64, Matrix{Float64}}, randeffs::Nothing; samples::Int64, rng::MersenneTwister)
   @ Pumas /builds/PumasAI/PumasSystemImages-jl/.julia/packages/Pumas/HDuXQ/src/estimation/inference.jl:630
 [3] simobs(fpm::Pumas.FittedPumasModel{PumasModel{ParamSet{NamedTuple{(:tvtr_ratio, :tvCL, :tvVc, :tvDuration, :ωtr_ratio, :ωCL, :ωVc, :σ_add, :σ_prop), NTuple{9, RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}, var"#39#55", var"#40#56", var"#42#58", var"#44#60", Pumas.LinearODE, var"#45#61", var"#50#66", ModelingToolkit.ODESystem}, Vector{Subject{NamedTuple{(:dv,), Tuple{Vector{Union{Missing, Float64}}}}, Pumas.ConstantCovar{NamedTuple{(:rep, :params_num, :NSubj, :occ, :period, :sequence, :seq_n, :formulation, :isT, :input_tr_ratio), Tuple{Int64, Int64, Int64, Int64, Int64, String, Int64, String, Int64, Float64}}}, Vector{Pumas.Event{Float64, Float64, Float64, Float64, Float64, Float64, Int64}}, Vector{Float64}}}, Optim.MultivariateOptimizationResults{Optim.BFGS{LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Nothing, Float64, Optim.Flat}, Float64, Vector{Float64}, Float64, Float64, Vector{Optim.OptimizationState{Float64, Optim.BFGS{LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Nothing, Float64, Optim.Flat}}}, Bool, NamedTuple{(:f_limit_reached, :g_limit_reached, :h_limit_reached, :time_limit, :callback, :f_increased), NTuple{6, Bool}}}, Pumas.FOCE, Vector{Vector{Float64}}, NamedTuple{(:optimize_fn, :constantcoef, :omegas, :ensemblealg, :diffeq_options), Tuple{Pumas.DefaultOptimizeFN{Nothing, NamedTuple{(:show_trace, :store_trace, :extended_trace, :g_tol, :allow_f_increases), Tuple{Bool, Bool, Bool, Float64, Bool}}}, NamedTuple{(), Tuple{}}, Tuple{}, EnsembleThreads, NamedTuple{(), Tuple{}}}}, ParamSet{NamedTuple{(:tvtr_ratio, :tvCL, :tvVc, :tvDuration, :ωtr_ratio, :ωCL, :ωVc, :σ_add, :σ_prop), NTuple{9, RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}}, vcov::Pumas.CovarianceEstimate{Float64, Matrix{Float64}}, randeffs::Nothing; samples::Int64, rng::MersenneTwister)
   @ Pumas /builds/PumasAI/PumasSystemImages-jl/.julia/packages/Pumas/HDuXQ/src/estimation/inference.jl:616
 [4] top-level scope
   @ REPL[75]:1

Hello Donald,

Can you please try the following instead?

test_sims = simobs(model_fit, vcov(model_fit).Σ, samples=2)

Hi Mohamed,

I tried that and got the following error msg:

Progress: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| Time: 0:01:21
ERROR: PosDefException: matrix is not Hermitian; Cholesky factorization failed.
Stacktrace:
 [1] checkpositivedefinite(info::Int64)
   @ LinearAlgebra /usr/local/julia/share/julia/stdlib/v1.6/LinearAlgebra/src/factorization.jl:18
 [2] cholesky!(A::Matrix{Float64}, ::Val{false}; check::Bool)
   @ LinearAlgebra /usr/local/julia/share/julia/stdlib/v1.6/LinearAlgebra/src/cholesky.jl:282
 [3] #cholesky#130
   @ /usr/local/julia/share/julia/stdlib/v1.6/LinearAlgebra/src/cholesky.jl:378 [inlined]
 [4] cholesky (repeats 2 times)
   @ /usr/local/julia/share/julia/stdlib/v1.6/LinearAlgebra/src/cholesky.jl:378 [inlined]
 [5] PDMat
   @ /builds/PumasAI/PumasSystemImages-jl/.julia/packages/PDMats/vgpB7/src/pdmat.jl:19 [inlined]
 [6] MvNormal(μ::Vector{Float64}, Σ::Matrix{Float64})
   @ Distributions /builds/PumasAI/PumasSystemImages-jl/.julia/packages/Distributions/Xrm9e/src/multivariate/mvnormal.jl:211
 [7] simobs(fpm::Pumas.FittedPumasModel{PumasModel{ParamSet{NamedTuple{(:tvtr_ratio, :tvCL, :tvVc, :tvDuration, :ωtr_ratio, :ωCL, :ωVc, :σ_add, :σ_prop), NTuple{9, RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}, var"#39#55", var"#40#56", var"#42#58", var"#44#60", Pumas.LinearODE, var"#45#61", var"#50#66", ModelingToolkit.ODESystem}, Vector{Subject{NamedTuple{(:dv,), Tuple{Vector{Union{Missing, Float64}}}}, Pumas.ConstantCovar{NamedTuple{(:rep, :params_num, :NSubj, :occ, :period, :sequence, :seq_n, :formulation, :isT, :input_tr_ratio), Tuple{Int64, Int64, Int64, Int64, Int64, String, Int64, String, Int64, Float64}}}, Vector{Pumas.Event{Float64, Float64, Float64, Float64, Float64, Float64, Int64}}, Vector{Float64}}}, Optim.MultivariateOptimizationResults{Optim.BFGS{LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Nothing, Float64, Optim.Flat}, Float64, Vector{Float64}, Float64, Float64, Vector{Optim.OptimizationState{Float64, Optim.BFGS{LineSearches.InitialStatic{Float64}, LineSearches.BackTracking{Float64, Int64}, Nothing, Float64, Optim.Flat}}}, Bool, NamedTuple{(:f_limit_reached, :g_limit_reached, :h_limit_reached, :time_limit, :callback, :f_increased), NTuple{6, Bool}}}, Pumas.FOCE, Vector{Vector{Float64}}, NamedTuple{(:optimize_fn, :constantcoef, :omegas, :ensemblealg, :diffeq_options), Tuple{Pumas.DefaultOptimizeFN{Nothing, NamedTuple{(:show_trace, :store_trace, :extended_trace, :g_tol, :allow_f_increases), Tuple{Bool, Bool, Bool, Float64, Bool}}}, NamedTuple{(), Tuple{}}, Tuple{}, EnsembleThreads, NamedTuple{(), Tuple{}}}}, ParamSet{NamedTuple{(:tvtr_ratio, :tvCL, :tvVc, :tvDuration, :ωtr_ratio, :ωCL, :ωVc, :σ_add, :σ_prop), NTuple{9, RealDomain{Float64, TransformVariables.Infinity{true}, Float64}}}}}, population::Vector{Subject{NamedTuple{(:dv,), Tuple{Vector{Union{Missing, Float64}}}}, Pumas.ConstantCovar{NamedTuple{(:rep, :params_num, :NSubj, :occ, :period, :sequence, :seq_n, :formulation, :isT, :input_tr_ratio), Tuple{Int64, Int64, Int64, Int64, Int64, String, Int64, String, Int64, Float64}}}, Vector{Pumas.Event{Float64, Float64, Float64, Float64, Float64, Float64, Int64}}, Vector{Float64}}}, vcov::Matrix{Float64}, randeffs::Nothing; samples::Int64, rng::MersenneTwister)
   @ Pumas /builds/PumasAI/PumasSystemImages-jl/.julia/packages/Pumas/HDuXQ/src/estimation/inference.jl:630
 [8] #simobs#634
   @ /builds/PumasAI/PumasSystemImages-jl/.julia/packages/Pumas/HDuXQ/src/estimation/inference.jl:616 [inlined]
 [9] top-level scope
   @ REPL[96]:1

Could you please display the matrix here? It is possible that the matrix is slightly asymmetric due to numerical error. If that's the only issue,

test_sims = simobs(model_fit, Symmetric(vcov(model_fit).Σ), samples=2)

should run fine (note I added Symmetric above).

Thanks, Mohamed! Looks like adding Symmetric fixed it.

This is the vcov matrix:

9×9 Pumas.CovarianceEstimate{Float64, Matrix{Float64}}:
   0.0129921        3.4533       602.888       -0.000103089    0.00126979     -0.00336832     7.22671e-5  -4.44173e-6    3.23847e-5
   3.4533        7898.16       47458.3          0.0570391     -3.00166         3.07869        0.264192     0.0657525    -0.0180367
 602.888        47458.3            1.01233e8  -33.397        213.61         -346.286        232.472       -3.27684       5.01936
  -0.000103089      0.0570391    -33.397        3.19186e-5    -5.42398e-5      6.51985e-5     6.85552e-5   1.80607e-6   -1.81526e-6
   0.00126979      -3.00166      213.61        -5.42398e-5     0.0342586      -0.0113754     -0.00108963  -0.000136951   7.22875e-6
  -0.00336832       3.07869     -346.286        6.51985e-5    -0.0113754       0.010411      -1.68559e-5   0.000110317  -5.04867e-5
   7.22671e-5       0.264192     232.472        6.85552e-5    -0.00108963     -1.68559e-5     0.00407851   8.05854e-6   -1.85646e-5
  -4.44173e-6       0.0657525     -3.27684      1.80607e-6    -0.000136951     0.000110317    8.05854e-6   1.6085e-6    -7.38658e-7
   3.23847e-5      -0.0180367      5.01936     -1.81526e-6     7.22875e-6     -5.04867e-5    -1.85646e-5  -7.38658e-7    3.99894e-6

Dr Mohamed, could you please explain what numerical error might have caused Dr Lee's original error? Understanding the root cause might help avoid it in the future. I am assuming vcov stands for the variance-covariance matrix, representing the SE of each estimated parameter. Thank you, sir!
Bob

It is not uncommon in floating-point arithmetic that two numbers calculated in two different ways are not "exactly" equal to each other down to the last decimal place, even though in exact arithmetic they would be. In Julia, we have the Symmetric wrapper, which you can wrap around an almost-symmetric matrix to tell Julia to only look at one side of the matrix, because the other side is identical to it for all practical purposes. We should probably make use of this wrapper internally so users don't have to think about it.
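
As a small illustration in plain Julia (not Pumas-specific):

using LinearAlgebra

# two ways of computing the "same" number can differ in the last bits
a = 0.1 + 0.2
b = 0.3
a == b            # false
isapprox(a, b)    # true

# the same effect can make a theoretically symmetric matrix slightly
# asymmetric; wrapping it in Symmetric tells Julia to use only one triangle
M = [1.0 a; b 1.0]
issymmetric(M)             # false, because a != b exactly
issymmetric(Symmetric(M))  # true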

Yes it’s the variance-covariance matrix estimate. The square root of the diagonal of the vcov matrix gives an estimate of the standard errors of the parameters.
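
For example, with the fit above, something like the following should give the standard error estimates (this assumes the .Σ field holds the underlying matrix, as in my earlier suggestion):

using LinearAlgebra

V  = vcov(model_fit).Σ   # underlying variance-covariance matrix
se = sqrt.(diag(V))      # standard error estimate for each parameter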

Dr Mohamed, thank you. In order for me to understand this clearly, consider a simple one-compartment model with intravenous bolus dosing. This model would have 5 parameters: tvcl, tvvc, bsv_cl, bsv_vc, sigma_add. The vcov would be:
se_tvcl  0        0          0          0
0        se_tvvc  0          0          0
0        0        se_bsv_cl  0          0
0        0        0          se_bsv_vc  0
0        0        0          0          se_sigma_add
Is this correct? If so, what does ‘other side’ in your reply mean?
Bob

The off-diagonal terms will not be 0; they give an estimate of the covariance between the parameter estimates. So the estimate of the covariance between the tvcl and tvvc parameter estimates will be in the (1, 2) element, or equivalently the (2, 1) element, of the matrix, since covariance is symmetric. If vcov[1,2] and vcov[2,1] are not exactly equal to each other, but only because of numerical roundoff errors in intermediate computations, the Symmetric wrapper tells Julia to use only one of them and consider the other equal to it. The same is true for vcov[1,3] and vcov[3,1] and all such pairs of off-diagonal entries in the vcov matrix.

Additionally the diagonal will be an estimate of se_tvcl^2, se_tvvc^2, se_bsv_cl^2, se_bsv_vc^2 and se_sigma_add^2. Note the squares since they are variances and the standard error estimate is the square root of the variance estimate.
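
As a toy numeric example with made-up values for just tvcl and tvvc:

# hypothetical 2×2 block of a vcov matrix for (tvcl, tvvc)
V = [0.04  0.01;
     0.01  0.09]

se_tvcl    = sqrt(V[1, 1])                    # 0.2
se_tvvc    = sqrt(V[2, 2])                    # 0.3
cov_cl_vc  = V[1, 2]                          # covariance of the two estimates
corr_cl_vc = V[1, 2] / (se_tvcl * se_tvvc)    # ≈ 0.167, their correlation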

Thank you very much. I was reviewing one of your tutorials on your web page, and found this example output.

I did not find the off-diagonal terms. How do I output the full variance-covariance matrix, please?
Bob

Let’s say you have this

pkfit_1cmp = fit(pk_1cmp,
                 pop,
                 pkparams_1cmp,
                 Pumas.FOCE())

you can then obtain inference results for your parameters using

# make an inference
infer_1cmp = infer(pkfit_1cmp)

you can then query the variance-covariance matrix from the inference result by typing infer_1cmp.vcov

see below for an example
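
Continuing with the names above, a minimal sketch would be (you may need the .Σ field, as earlier in this thread, if the result is a CovarianceEstimate rather than a plain matrix):

using LinearAlgebra

infer_1cmp = infer(pkfit_1cmp)

V  = infer_1cmp.vcov    # full variance-covariance matrix estimate
se = sqrt.(diag(V))     # standard errors (square roots of the diagonal)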

To clarify when I mention a symmetric matrix, I am referring to a matrix V where V[i,j] == V[j,i] for all i and j. For example the following matrix is symmetric:

V1 = [
1.0   -0.1;
-0.1   1.0;
]

but this one is not:

V2 = [
1.0        -0.1;
-0.100001   1.0;
]

even if the difference between V2[1,2] and V2[2,1] is tiny. The matrix Symmetric(V2) will be treated as a symmetric matrix, ignoring the slight difference between V2[1,2] and V2[2,1].
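
You can check this in the REPL with the matrices above:

using LinearAlgebra

issymmetric(V1)             # true
issymmetric(V2)             # false, because V2[1,2] != V2[2,1]
issymmetric(Symmetric(V2))  # true

# Symmetric uses the upper triangle by default, so both off-diagonal
# positions report the V2[1,2] value:
Symmetric(V2)[2, 1] == Symmetric(V2)[1, 2]   # true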

Thank you. I will review your tutorials again; I might have missed the infer_mymodel.vcov command.
Bob

Dr Mohamed, yes, this is consistent with my understanding of a symmetric matrix. Two questions:

  1. Does Pumas allow simulations that read directly from infer_mymodel.vcov? Does it take the off-diagonal elements as well? This is what @donaldlee3 was attempting, I presume.
  2. Some of my friends use the NONMEM software. I attended a training 25-30 years ago and do not remember a thing. Do you or one of the other experts know if NONMEM does this as well? I ask because predicting doses in patients while accounting for this uncertainty is something that scientists have not performed. I am assuming this is perhaps something modern in Pumas?
Bob

Yes and yes. And indeed this is what Donald was attempting above.

Hi @bobbrown. Certainly, NONMEM outputs this matrix as well, so indeed the same approach can be used with NONMEM.

@benjaminrich Thank you, sir. Can NONMEM perform this type of simulation with uncertainty as automatically or seamlessly as Pumas does? If I can achieve this with less overhead, I would rather use Pumas.
Bob

Not as seamless as in Pumas, but there are certain routines that can be used. 99% of folks take the variance-covariance matrix to R and simulate there after rewriting the model in mrgsolve.