# Information criteria in Analysis manager (AIC, CAIC, BIC)

Hi,

I ran a standard logit in the embedded Analysis Manager of Lighthouse Studio, then exported effects-coded data using SMRT and imported it into Stata. In Stata I ran the same model (as a starting point) and got the same results and fit criteria (AIC, chi-square, pseudo R², LL, ...). Only the Bayesian information criterion (BIC) in Stata differs slightly from the BIC in Lighthouse.

How are the BIC and the CAIC calculated in Lighthouse?

Thank you very much.

Kind regards,
Andrew
asked Jul 6, 2017

For both criteria:

Let LL be the log-likelihood computed for the solution across all respondents and tasks.
Let NP = NumberOfGroups * NumberCodedVariables + NumberOfGroups - 1.
Let NO be the total number of coded choice tasks with responses (constant-sum tasks are coded into multiple tasks, so it is not always the same as the raw data's count).

With these:

CAIC = -2 * LL + NP * (ln(NO) + 1)
BIC = -2 * LL + NP * ln(NO)
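A minimal Python sketch of these calculations (following the standard definitions, in which the ln(NO) term multiplies the parameter count NP, not the log-likelihood):

```python
import math

def bic(ll, num_params, num_obs):
    """Bayesian information criterion: -2*LL + NP*ln(NO)."""
    return -2 * ll + num_params * math.log(num_obs)

def caic(ll, num_params, num_obs):
    """Consistent AIC: -2*LL + NP*(ln(NO) + 1)."""
    return -2 * ll + num_params * (math.log(num_obs) + 1)
```

Both penalties grow with the number of observations NO, which is why two programs using the same formula but different observation counts will disagree.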
answered Jul 6, 2017 by Gold (17,705 points)
Thank you very much!
Unfortunately I still can't replicate the BIC and CAIC I get from the Lighthouse Analysis Manager using your formulas. In Stata the BIC is calculated as follows: BIC = -2 * LL + ln(Obs) * k, where k = number of parameters to be estimated.
For my model in Stata I get the following BIC and CAIC:
BIC = -2 * (-2947.14) + ln(9648) * 15 = 6031.90
CAIC = -2 * (-2947.14) + (ln(9648) + 1) * 15 = 6046.90
whereas Lighthouse comes up with a BIC of 6021.50 and a CAIC of 6036.50
In contrast, the AIC is exactly the same in both programs:
AIC = -2 * (-2947.14) + 2 * 15 = 5924.28
Any idea?
Andrew
AIC does not use the number of observations, and my guess is that Stata is counting the raw tasks seen by the respondents, while Latent Class codes the tasks and removes those without responses, etc., resulting in a different number of observations.

If I had the data I could tell you the numbers used to compute the BIC and CAIC.
Walter,

thank you very much. I think I've got it. You are right about the number of observations. The BIC in Lighthouse does not use the number of observations (where one observation = one alternative in the DCE, represented by one line in the long-format data file) but the number of choice sets across all respondents, which in our case is 9648/2 = 4824. Thanks again.
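Plugging the thread's numbers (LL = -2947.14, k = 15) into the standard BIC formula illustrates the reconciliation, assuming only the observation count differs between the two programs:

```python
import math

def bic(ll, k, n):
    """Standard BIC: -2*LL + k*ln(n)."""
    return -2 * ll + k * math.log(n)

ll, k = -2947.14, 15          # log-likelihood and parameter count from the thread
print(round(bic(ll, k, 9648), 2))  # 6031.9 -> Stata's value (one obs per alternative)
print(round(bic(ll, k, 4824), 2))  # 6021.5 -> Lighthouse's value (one obs per choice set)
```

Same formula, different n: the roughly 10-point gap comes entirely from halving the observation count.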

Kind regards,
Andrew
Glad you were able to figure it out.  As an aside, the effects-coded design export from SMRT has a minor bug in that the Answer column doesn't quite have the correct value.  For example, if a respondent saw a choice task with 5 alternatives and chose alternative 3, each alternative in the export file will have 3 as the answer.  It would be more correct to have a value of 1 for alternative 3 and 0s for the alternatives not chosen.
Yes, that is exactly what we are doing: recoding the answer variable into a binary dummy variable. Thanks again.