T-ratios for changing dummy-coded base levels?

Hi,

I have a client who wants to base their dummy-coded estimates (represented as WTP/WTA, which I know is a contentious issue) on different levels than the standard Sawtooth output, which sets the last level of an attribute as the reference level. I'm perfectly happy to do this by simply subtracting or adding (depending on the sign) the desired estimate value from all the other estimates so that the reference level becomes zero wherever it sits in the level order; they wish to do this to match up with a contingent valuation question they have also posed in the survey.

However, if I am using simple aggregate logit estimation, how do I establish the t-ratio for the level that was arbitrarily set to zero by the software, and how do the other t-ratios change as the estimates change in value? This is more a reporting issue than anything else, but I believe it's important to be able to show whether the level(s) are or remain significant (which I would expect them to be), and also that the arbitrary zero level is significant. Is there a way to achieve this?
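For concreteness, here is a minimal sketch of the rebasing arithmetic I have in mind (made-up utility values, not my client's data): subtract the chosen reference level's estimate from every level of the attribute so that level becomes the zero point.

```python
# Minimal sketch of re-basing dummy-coded utilities (illustrative values only).
# The attribute's estimated utilities, with the software's default base (last level = 0).
utilities = {"level_1": 0.42, "level_2": 0.15, "level_3": -0.10, "level_4": 0.0}

# Choose a different reference level (hypothetical choice, for illustration).
new_base = "level_2"

# Subtract the new base's estimate from every level; the new base becomes exactly zero,
# and the differences between levels are unchanged.
rebased = {lvl: u - utilities[new_base] for lvl, u in utilities.items()}
print(rebased)
```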

Thanks in advance
asked Nov 14, 2017 by Jasha Bowe Bronze (1,680 points)

1 Answer

+1 vote
 
Best answer
Jasha,

I usually assume that the standard error of the missing level of an attribute is the average of the standard errors of the other levels of that attribute.  

I know of at least one person who likes to be conservative and who assumes that the standard error of the missing level is equal to the highest of the standard errors of the other levels (again, that's one person and what I think is a minority view).  
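As a rough sketch of both rules of thumb (in Python, with made-up standard errors rather than numbers from any real study), you would approximate the omitted level's standard error as either the mean or the maximum of the reported SEs and form the t-ratio from there:

```python
import statistics

# Illustrative (made-up) standard errors reported for the other levels of the attribute.
reported_ses = [0.051, 0.048, 0.055, 0.060]

# Rule of thumb 1: use the average of the reported SEs for the omitted level.
se_avg = statistics.mean(reported_ses)

# Rule of thumb 2 (more conservative): use the largest reported SE.
se_max = max(reported_ses)

# Hypothetical rebased estimate for the level that the software had set to zero.
rebased_estimate = 0.37

print("t (average-SE rule):", rebased_estimate / se_avg)
print("t (max-SE rule):    ", rebased_estimate / se_max)
```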

I'll be interested to see if other folks have different practices.
answered Nov 15, 2017 by Keith Chrzan Platinum Sawtooth Software, Inc. (55,525 points)
selected Nov 15, 2017 by Jasha Bowe
Keith, perfect answer. Exactly what I was after. I can work between both positions to achieve an optimal outcome for the client. In my case all the SEs are very reasonable and not going to threaten the validity of the approach.

Many thanks as always for your great help.

Jasha
Jasha, we're glad to help.  We learn so much ourselves from the things that come into this Forum!
That's such a wonderfully positive view, Keith, and very much appreciated by all who use this valuable forum. However, there is a small sting in this tail... can you think of a way of estimating confidence intervals for rebased estimates? That is, if all the estimates were effects coded and had CIs, and are then converted manually to dummy codes (which changes their values), is there a way of estimating what the CIs become for the rebased estimates? (I know this is a real curveball.)
Jasha,

Maybe there's someone reading the Forum who's mathy enough to figure out how to do this, but it isn't me.  I'd just change the coding of the design matrix and re-run the analysis with "user specified coding" to get the standard errors.  But a simple mathematical transformation would be even nicer.  Part of me wonders why we should expect the radius of the confidence interval to change much at all.
Fair call, Keith. Clearly I'm not mathy enough either! However, when you say change the coding in the design matrix, are you just talking about ACBC functionality? Currently I'm just doing straight CBC. Is there a way I can play with the design matrix if I'm just using standard CBC or CBC/HB, and then use user-specified coding, or is this just for ACBC?
Jasha,

It's much easier in CBC.  Just export your data as a single .csv file, manually code the design matrix, then analyze with MNL (latent class with one class) or HB, making sure to tell the analysis software you're using user-specified coding for your independent variables.
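For example, a rough sketch of that manual coding step (in Python rather than Excel, with hypothetical column names; your exported file's layout may differ), dummy coding a 5-level attribute with level 5 as the reference:

```python
import pandas as pd

# Hypothetical exported design column: one 5-level attribute stored as integer codes 1-5.
design = pd.DataFrame({"attribute_1": [1, 3, 5, 2, 4, 5]})

# Dummy code with level 5 as the reference: one 0/1 column for each non-reference level.
for level in [1, 2, 3, 4]:
    design[f"att1_level{level}"] = (design["attribute_1"] == level).astype(int)

# Drop the original shorthand column; the remaining 0/1 columns become the
# user-specified independent variables for MNL (latent class with one class) or HB.
recoded = design.drop(columns=["attribute_1"])
print(recoded)
```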
Keith, this is amazing. I thought I knew a few tricks in the software, but this has taken it to the next level (I could be nerding out now!). So good. One final question for today, I promise.

My DCE/conjoint training came from the old school, where the design matrix, whether dummy or effects coded, was all 1s and 0s (I used LIMDEP and MATLAB, and even Jodan's sneaky workaround to use Cox regression for individual-level modelling)... so when I think of a design matrix I think of a two-way contingency table where everything runs on diagonals (or dropped levels and a row of -1s) and the dataset is very wide in terms of columns. However, I know things are different in Sawtooth: when I export a design matrix, each attribute is a single column with the level represented as an integer such as 1 or 5. All very sensible. I guess my question is, when custom coding this type of matrix, is it just a matter of manipulating the levels as they appear? For example, if I wanted to manually dummy code an attribute and set level 5 as the base level, could I just change the 5s to zeros so that it becomes the reference level? Or am I totally missing the point?
Jasha, AS IF nerding out is a bad thing!

Behind the scenes, our coding (e.g. 1-5 for a 5-level variable) converts those 1-5 codes into 4 effects-coded variables for analysis.  And to use the user-specified coding I described, you would do the same: export the design (which has our shorthand 1-5 codes for a 5-level variable) and then in Excel recode the design (e.g. turn that single 5-level variable into 4 effects-coded variables).  Then delete all the original codes and use your recoded variables as your IVs for the analysis.  So your old-school way of doing things is what our software does behind the scenes, and what you can make LC or CBC/HB do yourself using your effects codes (or dummy codes or whatever) and user-specified coding.
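A rough sketch of that effects-coding recode (hypothetical column names; the omitted level 5 gets -1 on all four recoded columns):

```python
import pandas as pd

# Hypothetical exported design column: a single 5-level attribute stored as codes 1-5.
design = pd.DataFrame({"attribute_1": [1, 3, 5, 2, 4, 5]})

# Effects coding: one column per level 1-4; the omitted level (5) is coded -1 everywhere.
for level in [1, 2, 3, 4]:
    design[f"att1_eff{level}"] = design["attribute_1"].map(
        lambda code: 1 if code == level else (-1 if code == 5 else 0)
    )

# Delete the original shorthand codes and use the four recoded columns as the IVs.
recoded = design.drop(columns=["attribute_1"])
print(recoded)
```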
...