I assume you are indeed using data that are appropriate for HB-Reg, meaning that for each respondent there are multiple observations (cases).
HB certainly involves some Bayesian shrinkage (smoothing) across respondents, meaning that folks tend to be smoothed to some degree or another toward the population means. That process reduces the amount of differentiation across people, shrinking the variance and smoothing out the troughs in multimodal distributions that represent different segments. But if you have more observations than parameters to estimate for each individual, that smoothing is probably fairly minimal, and segments will still emerge quite decently from clustering.
For logit-based models, discrete choices, and CBC/HB, there is a big worry about differences in "scale factor" of the betas between people, making it difficult to cluster well on the raw betas.
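To make that concern concrete: a common workaround (a sketch only, with a hypothetical `rescale_betas` helper and made-up numbers) is to rescale each respondent's beta vector to a common magnitude, e.g. so the mean absolute utility is 1, before clustering. That strips out the respondent-specific scale factor while preserving the preference pattern:

```python
import numpy as np

def rescale_betas(betas):
    """Divide each respondent's beta vector by its mean absolute value,
    putting all respondents on a comparable scale before clustering."""
    betas = np.asarray(betas, dtype=float)
    scale = np.mean(np.abs(betas), axis=1, keepdims=True)
    scale[scale == 0] = 1.0  # guard against an all-zero row
    return betas / scale

# toy example: two respondents with the same preference pattern
# but a 2x difference in logit scale factor
raw = np.array([[2.0, -1.0, 0.5],
                [4.0, -2.0, 1.0]])
print(rescale_betas(raw))  # the two rows come out identical
```

After rescaling, both rows are identical, so a clustering routine would put these two respondents together even though their raw betas differ by a factor of two.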
But I don't believe HB-Reg with continuous dependent variables suffers so much from these "scale factor" issues across respondents (assuming the scale of the dependent variable is the same across respondents). There is a long history of using ratings-based conjoint results (from OLS at the individual level) in cluster analysis, so decades of practice seem to support the approach.
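That workflow can be sketched in a few lines. This is an illustration only, with made-up toy betas and a plain k-means written out by hand (any standard clustering routine would do):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with a farthest-point start: groups respondents
    (rows of X = individual-level betas) into k segments."""
    centers = [X[0]]
    for _ in range(1, k):
        # next seed = the point farthest from all current centers
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each respondent to the nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy individual-level betas: two clear preference segments
betas = np.array([[1.0, 0.0], [1.1, 0.1],
                  [0.0, 1.0], [0.1, 1.1]])
print(kmeans(betas, k=2))  # respondents 0-1 group together, as do 2-3
```

With well-separated individual-level estimates like these, the segments fall out immediately; the question in practice is only how much HB shrinkage has blurred that separation.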
As long as your designs aren't especially sparse at the individual level (the ratio of observations to parameters to estimate is reasonable), the Bayesian smoothing will be less influential, and the cluster results should be robust. That's my opinion.