Hi!
I have built the same model, with 3 kinds of observations, in both NONMEM and Pumas. The NONMEM OFV with constant is 9573, while the -2 x loglikelihood in Pumas is -28666, which is roughly three times as large in magnitude (if we ignore the minus sign).
By the way, I also noticed that the loglikelihood printed during the iteration process is negative, while in the final output it has become positive. Is the negative value the right one?
And should I always divide the OFV by the number of observation types when comparing it between the two software packages?
Thanks!
Please see Comparing NONMEM and Pumas models
The number in the optimization trace is indeed the negative loglikelihood, -loglikelihood, because most numerical optimization software works in terms of minimization. So to maximize the loglikelihood we minimize -loglikelihood, as you can see when comparing the trace with the final output.
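To make the sign flip concrete, here is a minimal, self-contained toy sketch (not the Pumas or NONMEM API; the data and model are made up): we maximize a Gaussian loglikelihood by handing a minimizer its negative. The trace would show -loglikelihood values, while the reported maximum likelihood estimate is the same parameter either way.

```python
import math

# Hypothetical toy data with an unknown mean mu (sigma fixed at 1.0)
y = [1.2, 0.8, 1.5, 1.1, 0.9]

def loglik(mu, sigma=1.0):
    # Gaussian loglikelihood of the data at mean mu
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (yi - mu) ** 2 / (2 * sigma**2) for yi in y)

def negloglik(mu):
    # Optimizers minimize, so they are given -loglikelihood
    return -loglik(mu)

# A crude grid search stands in for a real optimizer here
grid = [i / 1000 for i in range(0, 2001)]
mu_hat = min(grid, key=negloglik)

# Minimizing -loglik and maximizing loglik give the same estimate:
# for a Gaussian with known sigma, that is the sample mean.
assert abs(mu_hat - sum(y) / len(y)) < 1e-3
```

The same parameter vector minimizes `negloglik` and maximizes `loglik`; only the sign of the reported objective differs, which is exactly the trace-versus-final-output discrepancy you observed.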
If the -2 x loglikelihood in Pumas really is -28666 while the NONMEM OFV is 9573 (a difference of 38239 on the -2 x loglikelihood scale), then Pumas found a parameter vector where the negative loglikelihood is significantly lower, or equivalently, where the loglikelihood is much higher. This leaves several possibilities that you should at least consider:
- Pumas and NONMEM found local minima that are different and one has a higher likelihood
- Only Pumas found a local solution and NONMEM stopped early without converging
- The models are not exactly identical between the two implementations
- The data are not interpreted exactly the same by the two implementations
There are a couple of things you can do to debug. The first step is to do what the tutorial Andreas points to does: grab the final parameter estimates from NONMEM and evaluate the loglikelihood manually in Pumas at those same fixed effects. These two likelihoods should be very similar. If they are not, the models or data sets are probably not formulated identically in NONMEM and Pumas, and that is where to look.
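The debugging step above can be sketched generically. This is a hypothetical illustration, not either tool's API: one loglikelihood function stands in for both implementations, both are evaluated at the same fixed parameters, and a deliberately different data interpretation (modeling log-transformed observations) shows how a data-handling mismatch makes the values diverge.

```python
import math

def gaussian_loglik(y, mu, sigma):
    # Stand-in loglikelihood; a real comparison would call each tool's
    # own evaluation at fixed parameters instead.
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (yi - mu) ** 2 / (2 * sigma**2) for yi in y)

y = [2.0, 2.5, 1.8, 2.2]          # hypothetical observations
theta = (2.1, 0.5)                # fixed effects taken from the other tool

# Identical model + identical data: loglikelihoods should agree closely
ll_a = gaussian_loglik(y, *theta)
ll_b = gaussian_loglik(y, *theta)
assert abs(ll_a - ll_b) < 1e-10

# If one implementation silently interprets the data differently
# (say, modeling log(DV) instead of DV), the values diverge sharply
ll_c = gaussian_loglik([math.log(yi) for yi in y], *theta)
assert abs(ll_a - ll_c) > 1.0
```

When the two evaluations at identical fixed effects disagree like `ll_a` versus `ll_c`, the discrepancy is in the model or data specification, not in the optimizer.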
Thank you! I have solved this issue.