RLH is difficult to interpret for ACBC, because the BYO tasks, Screener tasks, and Choice Tournament tasks involve different numbers of alternatives per task, and the number of Screener and Choice Tournament tasks differs across respondents. RLH is hard to interpret unless you know the RLH expected due to chance, and the software doesn't automatically report that for you. You would need to compute it manually for each individual, which means digging into the data file to figure out how many choice tasks of each type each respondent saw and how many alternatives each task contained.
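To illustrate the manual computation: RLH is the geometric mean of the predicted probabilities of each chosen concept, so the chance-level RLH for one respondent is the geometric mean of 1/k across that respondent's tasks, where k is the number of alternatives in each task. The sketch below uses a hypothetical respondent (the task counts are illustrative, not taken from any particular study):

```python
import math

def chance_rlh(alts_per_task):
    """Chance-level RLH for one respondent: the geometric mean of 1/k
    across that respondent's tasks, where k is the number of alternatives."""
    logs = [math.log(1.0 / k) for k in alts_per_task]
    return math.exp(sum(logs) / len(logs))

# Hypothetical respondent: 7 binary (accept/reject) Screener judgments
# and 5 Choice Tournament tasks with 3 concepts each.
tasks = [2] * 7 + [3] * 5
print(round(chance_rlh(tasks), 3))  # -> 0.422
```

Because the mix of task types varies by respondent, this chance benchmark differs person by person, which is exactly why a single aggregate RLH is hard to judge.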
Academics usually prefer to see LL (log-likelihood) or Percent Certainty. Unfortunately, these are very difficult to get out of the software, because they are not automatically reported. Computing them manually is challenging as well: each respondent can receive a different number of choice tasks, and the composition of those tasks (in terms of the number of concepts within each task) also differs.
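If you do attempt the manual computation, the standard definitions are: LL is the sum of log predicted probabilities of the chosen concepts, and Percent Certainty is 1 - LL/LL0, where LL0 is the log-likelihood of a chance (null) model. A minimal sketch, with made-up probabilities for illustration:

```python
import math

def log_likelihood(chosen_probs):
    """Sum of log predicted probabilities for the concepts actually chosen."""
    return sum(math.log(p) for p in chosen_probs)

def percent_certainty(chosen_probs, alts_per_task):
    """Percent Certainty = 1 - LL/LL0, where LL0 assumes each of the k
    alternatives in a task is equally likely. 0 = chance, 1 = perfect fit."""
    ll = log_likelihood(chosen_probs)
    ll0 = sum(math.log(1.0 / k) for k in alts_per_task)
    return 1.0 - ll / ll0

# Hypothetical respondent: three 3-concept tasks, with model-predicted
# probabilities of 0.9, 0.8, and 0.6 for the chosen concepts.
print(round(percent_certainty([0.9, 0.8, 0.6], [3, 3, 3]), 3))  # -> 0.745
```

The per-respondent variation in task counts and concepts per task enters through `alts_per_task`, which is why these statistics must be assembled respondent by respondent from the data file.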
Sample size choices are challenging. They depend heavily on how large an effect you are trying to detect, on the heterogeneity of the data, and on the reliability of respondents, and none of these things can be known ahead of time.
Sample size also depends on your budget, how much it costs per completed interview, and whether you need to conduct analysis and derive insights for different segments.
All that said, a recommended "rule of thumb" approach for ACBC is to use the software's built-in ability to generate random-responding robotic respondents. The software will do this for you in the "Test Design" capability of the ACBC software, and it will automatically compute aggregate logit utilities. We recommend obtaining a sample size such that the standard errors of the part-worth utilities are 0.05 or less.