In recent versions of ACA (roughly the last 15 years), the design of the calibration concepts depends only on the respondent's answers to the Priors section of ACA: the self-explicated rankings (or the a priori settings for the rank order of levels within attributes) and the self-explicated importances.
The first calibration concept shows the worst levels of all attributes. If two respondents pick the same levels from each of the attributes as worst, then the first calibration concept will be identical for those two respondents.
The last calibration concept shows the best levels of all attributes. If two respondents pick the same levels from each of the attributes as best, then the last calibration concept will be identical for those two respondents.
The middle concept(s) are built from a random mix of best and worst levels: each attribute is randomly assigned either its best or its worst level. I just tested a 4-attribute ACA questionnaire in Lighthouse Studio v9.3, and I got a different middle calibration concept for my first two respondents, even though I answered the self-explicated priors the same way for both.
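To make the construction concrete, here is a minimal sketch of the logic described above. This is purely illustrative and not Sawtooth's actual code; the `priors` dictionary and function name are hypothetical stand-ins for the information collected in the Priors section.

```python
import random

def build_calibration_concepts(priors, n_concepts=3, seed=None):
    """Illustrative sketch (not Sawtooth's implementation) of how ACA
    calibration concepts could be assembled from a respondent's priors.

    `priors` maps attribute name -> (worst_level, best_level), derived
    from the self-explicated rankings in the Priors section.
    """
    rng = random.Random(seed)
    # First concept: the worst level of every attribute.
    first = {attr: levels[0] for attr, levels in priors.items()}
    # Last concept: the best level of every attribute.
    last = {attr: levels[1] for attr, levels in priors.items()}
    # Middle concept(s): each attribute randomly gets its best or worst
    # level, so two respondents with identical priors can still see
    # different middle concepts.
    middles = [
        {attr: rng.choice(levels) for attr, levels in priors.items()}
        for _ in range(n_concepts - 2)
    ]
    return [first] + middles + [last]

# Hypothetical 4-attribute study:
priors = {
    "Brand": ("Brand C", "Brand A"),
    "Price": ("$30", "$10"),
    "Warranty": ("None", "2 years"),
    "Color": ("Gray", "Blue"),
}
concepts = build_calibration_concepts(priors, n_concepts=3, seed=1)
```

Because the middle concept is drawn randomly, two respondents who give identical priors will see identical first and last concepts, but may see different middle concepts.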
So, I cannot reproduce the result you are reporting, where all three calibration concepts are identical. Could you tell me more about your ACA study and the version of ACA you are using?
One more note about the calibration concepts section: if the researcher specifies that only a subset (n) of the total attributes should appear in the calibration concepts, then only the n most important attributes (as rated in the ACA Importances questions) are shown to the respondent.
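That attribute-subsetting rule can be sketched as a simple top-n selection on the self-explicated importance ratings. Again, this is an assumption-laden illustration of the rule as described, not Sawtooth's code; the ratings shown are made up.

```python
def top_n_attributes(importances, n):
    """Illustrative sketch: pick the n attributes with the highest
    self-explicated importance ratings (as collected in ACA's
    Importances questions) for inclusion in the calibration concepts."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:n]

# Hypothetical importance ratings for a 4-attribute study:
importances = {"Brand": 7, "Price": 9, "Warranty": 4, "Color": 2}
top_n_attributes(importances, 2)  # -> ["Price", "Brand"]
```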