Yes, for methodological studies, researchers often do out-of-sample validation for HB models. Typically the holdout respondents share a single design, or a small number of blocks/versions, so that the number of simulations needed to predict their choices is small. In this case, although you cannot match predictions at the individual respondent level, you can still use your HB model to predict shares of the holdout respondents' choices, and you can compare actual to predicted shares in terms of mean absolute error or correlation (the latter is nice because it allows for statistical testing).
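To make the share comparison concrete, here is a minimal sketch in Python (numpy). The share values are made-up illustrative numbers, and the t-statistic for the correlation is computed by hand from the usual formula t = r·sqrt((n−2)/(1−r²)):

```python
import numpy as np

# Hypothetical data: actual vs. HB-predicted choice shares for the
# alternatives across a few holdout tasks (illustrative values only).
actual = np.array([0.42, 0.31, 0.27, 0.55, 0.25, 0.20])
predicted = np.array([0.45, 0.28, 0.27, 0.50, 0.30, 0.20])

# Mean absolute error between actual and predicted shares.
mae = np.mean(np.abs(actual - predicted))

# Pearson correlation, plus the t-statistic for testing H0: r = 0,
# t = r * sqrt((n - 2) / (1 - r^2)), with n - 2 degrees of freedom.
r = np.corrcoef(actual, predicted)[0, 1]
n = len(actual)
t_stat = r * np.sqrt((n - 2) / (1 - r**2))

print(f"MAE = {mae:.3f}, r = {r:.3f}, t = {t_stat:.2f}")
```

The correlation's t-statistic is what makes the statistical testing possible: you can compare it against a t distribution with n − 2 degrees of freedom.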
If you used a design for your holdout respondents with a large number of blocks/versions, you could still use your HB sample to make predictions about all those holdout respondents' choices, but the simulation work would be much more complicated. I suppose you could compute log likelihoods to assess the fit of your predicted choice probabilities to the 0/1 actual choices, though I've not done validation work this way before. Perhaps someone else who has would have a better idea of how best to do this and what challenges you might face.
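The log-likelihood idea above can be sketched as follows; the probabilities are hypothetical, standing in for the HB-predicted probability of whichever alternative each holdout respondent actually chose in each task:

```python
import numpy as np

# Hypothetical data: HB-predicted probability of the alternative the
# respondent actually chose, one entry per holdout choice task.
chosen_probs = np.array([0.61, 0.44, 0.72, 0.35, 0.58])

# Log likelihood of the observed 0/1 choices under the predicted
# probabilities; clip to avoid log(0) from degenerate predictions.
eps = 1e-12
log_lik = np.sum(np.log(np.clip(chosen_probs, eps, 1.0)))

# One common normalization: the geometric mean probability per choice,
# which stays comparable across holdout samples of different sizes.
geo_mean = np.exp(log_lik / len(chosen_probs))

print(f"log-likelihood = {log_lik:.3f}, geometric mean p = {geo_mean:.3f}")
```

Higher (less negative) log likelihood means the predicted probabilities put more mass on the choices that were actually made; the geometric mean form is often easier to interpret, since chance level for a task with k alternatives is 1/k.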