I usually get an RLH for ACBC somewhere in the 600s. That is normal.
I always recommend that people constrain summed price to be negative. Check on that within your HB estimation.
I also answered your questions via email, but I will repeat my answer here for the benefit of other readers...
At face value, a hit rate of 42% (when there are three concepts to be predicted, so chance would be about 33%) seems poor, and a Mean Absolute Deviation (MAD) of 9 also seems poor. So, let's retrace some possible issues:
1. First, old versions of ACBC (v7 and earlier) could have problems computing utilities under built-in HB analysis (built-in meaning running HB within SSI Web from the Analysis menu) when attributes were dropped as completely unimportant via constructed lists. I have sent a few messages over the last couple of years to our ACBC users warning them of potential troubles and giving hints on how to significantly reduce the problems. Notably, if the vast majority of respondents dropped a particular attribute as unimportant and you have hundreds of respondents, then a new respondent who thought that attribute was absolutely critical could not get that attribute to carry much importance for him/her personally, even if the respondent answered the ACBC questionnaire as if that attribute were the only thing that mattered. We fixed this problem in v8. For v7 and earlier, I recommended that people change the advanced HB prior settings so that very little Bayesian smoothing could occur: set the prior degrees of freedom to something like one-half of, or equal to, the sample size, and set the prior variance to something really large, such as 10 or 20.
2. It seems a bit strange to compute individual-level hit rates when the ACBC conjoint task showed a subset of the attributes but the holdout CBC task included the full set. You are putting a lot of faith in the early step of asking respondents to state upfront which attributes are utterly unimportant and then dropping them...and then assuming that, when you later show tasks that include the dropped attributes, respondents will continue to completely ignore those attributes when making choices. In theory, if respondents can indeed truthfully tell you which attributes have zero importance, and if they indeed continue to ignore those attributes in the later CBC task, then this should work. But in practice, I'm not sure how best to ask that stated-importance question upfront, and I'm not sure respondents will actually ignore the attributes in the holdout task that they previously reported were unimportant.
3. Regarding the MAD of 9: is the order of preference for the concepts from the market simulations across the sample equal to the average order (via counts) of the holdout concepts? If so, the big deviation of MAD = 9 could simply be due to not getting the "Scale Factor" (also known as the Exponent) right. I have consistently found that the Scale Factor (which is directly related to the amount of information vs. noise in the conjoint judgments) is much larger for ACBC than for CBC judgments. I usually need to set my Exponent to about 0.15 or 0.25 in the market simulator to best predict the holdout choices. So, check on that.
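Returning to point 1 above, the intuition behind loosening the priors can be sketched with a simple normal-normal shrinkage formula. This is an illustration only, not Sawtooth's actual HB algorithm, and all the numbers are made up: the individual's posterior estimate is a precision-weighted blend of his/her own data and the population mean, and a larger prior variance gives the individual's own data more weight.

```python
import numpy as np

def shrunken_estimate(ind_mean, n_obs, noise_var, pop_mean, prior_var):
    """Posterior mean for one respondent under a normal-normal model:
    a precision-weighted average of the respondent's own data and the
    population mean. Illustration only, not Sawtooth's HB algorithm."""
    w_data = n_obs / noise_var   # precision of the individual's own data
    w_prior = 1.0 / prior_var    # precision of the population prior
    return (w_data * ind_mean + w_prior * pop_mean) / (w_data + w_prior)

# A respondent whose answers say an attribute is critical (utility ~3.0),
# in a population where almost everyone dropped it (population mean ~0.0):
tight = shrunken_estimate(3.0, n_obs=8, noise_var=4.0, pop_mean=0.0, prior_var=0.05)
loose = shrunken_estimate(3.0, n_obs=8, noise_var=4.0, pop_mean=0.0, prior_var=10.0)
print(round(tight, 2), round(loose, 2))  # -> 0.27 2.86
```

With a tight prior, the atypical respondent is dragged almost all the way to the population mean of zero; with a loose prior, his/her own data dominates, which is the effect the adjusted prior settings were meant to achieve.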
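On point 3, a quick way to check the Scale Factor is to sweep the exponent in a logit share simulation and see which value minimizes MAD against the observed holdout shares. The utilities and shares below are made up for illustration:

```python
import numpy as np

def logit_shares(utils, exponent):
    """Shares of preference from a logit rule; the exponent rescales
    the utilities (an exponent below 1 flattens the shares)."""
    expu = np.exp(exponent * utils)
    return expu / expu.sum(axis=1, keepdims=True)

# Hypothetical respondent utilities for 3 holdout concepts and the
# observed holdout shares (in percent) we want the simulator to match.
utils = np.array([[2.0, 1.0, 0.0],
                  [1.5, 2.5, 0.5],
                  [0.5, 1.0, 2.0]])
observed = np.array([40.0, 35.0, 25.0])

# Grid-search the exponent that minimizes MAD against the holdouts.
best = min(
    (np.abs(100 * logit_shares(utils, e).mean(axis=0) - observed).mean(), e)
    for e in np.arange(0.05, 2.05, 0.05)
)
print(f"best exponent = {best[1]:.2f}, MAD = {best[0]:.2f}")
```

If the best-fitting exponent lands well below 1, that is consistent with ACBC utilities carrying a larger scale factor than the holdout CBC judgments.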
But you should be on firmer ground when using the ACBC respondents who received a partial-profile display (because they dropped the least important attributes upfront) to predict, via market simulation, the shares of preference for the sample on full-profile holdout tasks, than when trying to use partial-profile ACBC tasks to predict individual-level choices on full-profile CBC holdouts.
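For readers who want to replicate the two diagnostics discussed above, here is one common way to compute hit rate and MAD; the utilities, choices, and shares below are made up for illustration:

```python
import numpy as np

def hit_rate(predicted_utils, actual_choices):
    """Share of respondents whose highest-utility holdout concept
    matches the concept they actually chose."""
    hits = np.argmax(predicted_utils, axis=1) == actual_choices
    return hits.mean()

def mad(predicted_shares, actual_shares):
    """Mean absolute deviation between simulated and observed
    shares of preference, in share points."""
    return np.abs(predicted_shares - actual_shares).mean()

# Toy data: 4 respondents x 3 holdout concepts (hypothetical utilities).
utils = np.array([[ 1.2, 0.3, -0.5],
                  [ 0.1, 0.9,  0.2],
                  [-0.4, 0.6,  1.1],
                  [ 0.8, 0.2,  0.1]])
choices = np.array([0, 1, 0, 0])  # observed holdout picks
print(hit_rate(utils, choices))   # 3 of 4 predicted correctly -> 0.75
print(mad(np.array([45.0, 35.0, 20.0]),
          np.array([40.0, 30.0, 30.0])))  # (5 + 5 + 10) / 3 -> about 6.67
```

With three concepts per holdout task, a chance-level hit rate is about 33%, which is the baseline to compare the 42% against.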
Hope these ideas help!