If you are interested in whether individual respondents are reliable (consistent), then I recommend a combination of:
1. Examine the responses to choice tasks for any evidence of straightlining (e.g., choosing the first concept in all 12 questions, when the information in the concepts is randomized such that no such pattern would be expected).
2. Examine the time to respond to the CBC questions. If 12 questions are answered in 20 to 30 total seconds, that doesn't seem to indicate much thought.
3. Examine the fit statistic produced by individual-level CBC/HB estimation. The software reports RLH for each respondent. If respondents are answering randomly, their RLH should be not much greater than 1/k, where k is the number of concepts shown per task.
4. Examine responses to other non-CBC questions in the survey for signs of a random responder, such as straightlining or failing reliability-check questions.
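The respondent-level checks above can be sketched as a simple screening function. This is an illustrative example only: the record layout (list of chosen concept positions, total seconds, RLH) and the thresholds (60 seconds, 1.25 × chance) are assumptions you would tune for your own study, not values from any particular software export.

```python
def flag_respondent(choices, total_seconds, rlh,
                    n_concepts_per_task=4,
                    min_seconds=60.0, rlh_margin=1.25):
    """Return a list of reasons this respondent looks unreliable.

    choices       -- chosen concept position for each choice task
    total_seconds -- total time spent on the CBC questions
    rlh           -- root likelihood reported by CBC/HB for this respondent
    """
    reasons = []
    # Check 1: straightlining -- the same concept position on every task.
    if len(set(choices)) == 1:
        reasons.append("straightlining")
    # Check 2: too fast -- e.g., 12 tasks in 20-30 seconds suggests
    # little thought (threshold here is an assumption).
    if total_seconds < min_seconds:
        reasons.append("too fast")
    # Check 3: poor fit -- random answering gives RLH near chance, 1/k.
    if rlh < rlh_margin * (1.0 / n_concepts_per_task):
        reasons.append("RLH near chance")
    return reasons

# A respondent who picked the first concept on all 12 tasks in 25 seconds,
# with RLH barely above chance (1/4 = 0.25 for 4 concepts per task):
print(flag_respondent([1] * 12, 25.0, 0.27))
```

In practice you would combine these flags rather than drop a respondent on any single one, since a fast but consistent respondent may still be giving usable data.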
As for testing the reliability of a particular latent class solution, you should re-estimate the solution from multiple random starting points. If nearly the same solution emerges every time, no matter the starting point, that is some indication of stability. A more formal test is to compare the BIC or CAIC (both measures of fit that penalize model complexity) across different solutions: 2-group, 3-group, 4-group, 5-group, etc. Lower values of BIC and CAIC are better. If the BIC or CAIC is minimized at 4 groups, that is some statistical evidence supporting the 4-group solution relative to the other solutions you tested.
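The BIC/CAIC comparison can be illustrated with the standard formulas, BIC = -2LL + p·ln(n) and CAIC = -2LL + p·(ln(n) + 1), where LL is the log-likelihood, p the number of estimated parameters, and n the number of observations. The log-likelihoods and parameter counts below are made-up numbers purely to show the mechanics of picking the group count that minimizes the criterion.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: -2LL + p * ln(n)."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

def caic(log_likelihood, n_params, n_obs):
    """Consistent AIC: -2LL + p * (ln(n) + 1)."""
    return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

# Hypothetical fits: {groups: (log-likelihood, number of parameters)}.
# n_obs = 4800 could be, say, 400 respondents x 12 choice tasks.
solutions = {2: (-2300.0, 21), 3: (-2210.0, 32),
             4: (-2160.0, 43), 5: (-2145.0, 54)}
n_obs = 4800

bics = {g: bic(ll, p, n_obs) for g, (ll, p) in solutions.items()}
best = min(bics, key=bics.get)
for g in sorted(bics):
    print(f"{g} groups: BIC = {bics[g]:.1f}")
print(f"BIC is minimized at {best} groups")
```

Note that BIC and CAIC can disagree, and neither is decisive on its own; interpretability and managerial usefulness of the segments should also weigh into the choice of solution.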