Hello all,
I’m trying to build a custom model in Pumas and am wondering whether it is possible. My end goal is a custom longitudinal Graded Response Model, but as a first step I’m trying a custom static graded response model (GRM). The logpdf for a GRM is relatively easy to code (see below). The difficulty I’m having is figuring out how to get it into the Pumas NLME estimation framework (FOCE). I have all of this working in a Turing.jl estimation framework and am now trying to port it over to Pumas. For a little background: a GRM predicts the discrete probabilities of a collection of ordinal (polytomous) domain scores, estimating a latent trait parameter for each individual (θ, which has IIV, hence NLME) as well as a set of domain threshold parameters (no IIV).
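For reference, the mass function I’m coding is the standard cumulative-logit GRM: with latent trait θ and strictly increasing thresholds b_1 < … < b_K,

P(Y ≥ k | θ) = σ(θ − b_k) for k = 1, …, K
P(Y = k | θ) = σ(θ − b_k) − σ(θ − b_{k+1}), taking σ(θ − b_0) := 1 and σ(θ − b_{K+1}) := 0

so the support is the scores 0:K, and the logpdf below is just a numerically stable version of the difference on the right.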
This is a simple model with a simple logpdf. I have it running in Turing via @addlogprob!, and I have also built a custom GRM distribution within the Distributions.jl type system (with rand, pdf, and logpdf defined on it). However, when I try to use this distribution in Pumas, I get an error saying it is not supported for this approximation method. To be clear, Turing.jl accepts the distribution, so it is not a construction issue. I assume Pumas only works with a subset of Distributions.jl.
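For concreteness, here is a stripped-down sketch of the Turing version (simplified to a single subject and a single domain; this is illustrative, not my full model):

using Turing, Distributions

@model function grm_model(y, b)
    θ ~ Normal(0, 1)                    # latent trait for this subject
    for j in eachindex(y)
        y[j] ~ GradedResponseDist(θ, b) # custom distribution is accepted
        # equivalently, the manual log-probability increment also works:
        # Turing.@addlogprob! logpdf(GradedResponseDist(θ, b), y[j])
    end
end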
Is there any way to get a custom probabilistic model into a Pumas model? Either a way to make it accept the custom distribution, or some equivalent of @addlogprob!?
using Distributions
using LogExpFunctions: softplus, log1mexp
import Distributions: logpdf

# Latent trait θ with strictly increasing thresholds b[1] < … < b[K];
# the support is the ordinal scores 0:K.
struct GradedResponseDist{T<:Real, V<:AbstractVector{<:Real}} <: DiscreteUnivariateDistribution
    θ::T
    b::V
    function GradedResponseDist(θ::T, b::V) where {T<:Real, V<:AbstractVector{<:Real}}
        length(b) ≥ 1 || throw(ArgumentError("b must have ≥ 1 threshold"))
        @inbounds for k in 1:length(b)-1
            b[k] < b[k+1] || throw(DomainError(b, "thresholds must be strictly increasing"))
        end
        new{T, V}(θ, b)
    end
end

# logpdf below relies on insupport being defined for the custom type
Distributions.insupport(d::GradedResponseDist, y::Real) = isinteger(y) && 0 ≤ y ≤ length(d.b)
# Log-mass via a numerically stable difference of sigmoids.
function logpdf(d::GradedResponseDist, y::Int)
    insupport(d, y) || return -Inf
    θ, b = d.θ, d.b
    K = length(b)
    if y == 0
        # log(1 - σ(x)) = -softplus(x)
        return -softplus(θ - b[1])
    elseif y == K
        # log(σ(x)) = -softplus(-x)
        return -softplus(-(θ - b[K]))
    else
        x1 = θ - b[y]
        x2 = θ - b[y + 1]
        la = -softplus(-x1)           # log σ(x1)
        lb = -softplus(-x2)           # log σ(x2)
        return la + log1mexp(lb - la) # log(σ(x1) - σ(x2))
    end
end
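As a quick sanity check (threshold values made up), the masses sum to one over the support:

d = GradedResponseDist(0.3, [-1.0, 0.0, 1.5])
sum(exp(logpdf(d, y)) for y in 0:3)   # ≈ 1.0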
# This derived block does not work. The full model is omitted for brevity;
# the error comes from the distribution itself. (Ref keeps each threshold
# vector from being broadcast element-wise.)
@derived begin
    D1 ~ @. GradedResponseDist(θ, Ref(b1))
    D2 ~ @. GradedResponseDist(θ, Ref(b2))
    D3 ~ @. GradedResponseDist(θ, Ref(b3))
    D4 ~ @. GradedResponseDist(θ, Ref(b4))
    D5 ~ @. GradedResponseDist(θ, Ref(b5))
    D6 ~ @. GradedResponseDist(θ, Ref(b6))
    D7 ~ @. GradedResponseDist(θ, Ref(b7))
end
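For completeness, the one workaround I can think of is to route the GRM probabilities through Distributions.Categorical, which (if I am reading the docs correctly) Pumas does accept for discrete outcomes, though possibly only under LaplaceI/FO rather than FOCE. A rough, untested sketch, where grm_probs is a hypothetical helper and the observed scores 0:K would have to be shifted onto Categorical's 1:K+1 support:

using LogExpFunctions: logistic

# Hypothetical helper: the full probability vector P(Y = 0), …, P(Y = K),
# built from the same cumulative-logit construction as logpdf above.
function grm_probs(θ, b)
    cum = logistic.(θ .- b)   # P(Y ≥ k | θ) for k = 1:K
    return vcat(one(θ), cum) .- vcat(cum, zero(θ))
end

# Untested sketch of the corresponding derived block; θ here is the
# per-row individual parameter and Ref again protects the thresholds.
@derived begin
    D1 ~ @. Categorical(grm_probs(θ, Ref(b1)))
end

This gives up the custom type entirely, though, so I would still prefer an @addlogprob!-style escape hatch if one exists.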
