This is a really good and difficult question to answer.
If you are trying to estimate the utility of one level vs. another (within the same attribute) for the population, then this is a population-level estimate (an aggregate estimate). Reducing sampling error is the main concern, so larger sample sizes are desirable. HB wouldn't seem to offer much benefit over aggregate logit for stabilizing this population estimate (though there is at least a small argument that HB separates heterogeneity from noise, potentially improving the precision of population estimates). So this runs counter to your assumption: HB should provide population estimates equally well as aggregate logit, or even slightly better. Remember, the estimated vector of population means (the alpha vector) is an output from HB and one of the critical components within the MCMC process.
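To make the sampling-error point concrete, here is a minimal Monte Carlo sketch. All numbers (the mean part-worth, the heterogeneity and noise spreads) are hypothetical assumptions for illustration, not from any real study; the point is simply that the standard error of a population-level mean shrinks like 1/sqrt(n), whichever estimator produces the respondent-level inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values chosen only for illustration.
true_mean = 1.5          # assumed population mean part-worth for one level
heterogeneity_sd = 1.0   # assumed between-respondent spread in the part-worth
noise_sd = 0.5           # assumed within-respondent estimation noise

standard_errors = []
for n in (100, 400, 1600):
    # Each respondent's estimated utility = true mean + heterogeneity + noise.
    estimates = (true_mean
                 + rng.normal(0, heterogeneity_sd, n)
                 + rng.normal(0, noise_sd, n))
    se = estimates.std(ddof=1) / np.sqrt(n)
    standard_errors.append(se)
    print(f"n={n:5d}  mean estimate={estimates.mean():.3f}  SE~{se:.3f}")
```

Quadrupling the sample size roughly halves the standard error of the population mean, which is why sample size, not the choice between HB and aggregate logit, dominates the precision of aggregate estimates.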
But if you are setting up market simulations in which you are trying to figure out whether the population prefers one product over another (among a set of more than two product concepts), then IIA troubles can plague aggregate models. Modeling heterogeneity via HB not only reduces errors due to IIA, but can also automatically capture higher-level interaction and substitution effects arising from the patterns of preferences across respondents. So simulated share results for more than two products in a market simulator should be more accurate for HB than for aggregate logit, given the same sample size. It then follows that one could get away with a smaller sample size for HB and still obtain simulated share results as good as those from aggregate logit.
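The IIA problem can be sketched in a few lines. This is a toy illustration (the utilities and the heterogeneity model are invented assumptions, not a real HB posterior): an aggregate logit simulator, by construction, lets a near-clone product steal share proportionally from everything, while averaging logit shares over heterogeneous respondents lets the clone cannibalize mostly its twin:

```python
import numpy as np

rng = np.random.default_rng(1)

def logit_shares(utils):
    """Softmax over the last axis, stabilized by subtracting the max."""
    e = np.exp(utils - utils.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Concepts: A, B, and B2 (a near-clone of B).
# Aggregate model: one mean utility vector for the whole sample.
agg_utils = np.array([0.0, 0.0, 0.0])       # all equally liked on average
agg_shares = logit_shares(agg_utils)        # exactly [1/3, 1/3, 1/3]: IIA
print("aggregate shares:       ", agg_shares)

# Respondent-level (HB-style) sketch: each respondent has own utilities.
# A appeals to one segment; B and B2 appeal to the other, and are highly
# correlated with each other (substitutes).
n = 10_000
taste = rng.normal(0, 2, n)                 # + means likes A, - means likes B-type
ind_utils = np.column_stack([taste, -taste, -taste + rng.normal(0, 0.3, n)])
ind_shares = logit_shares(ind_utils).mean(axis=0)
print("respondent-level shares:", ind_shares)
```

In the aggregate run, adding B2 pulls A down to one third. In the respondent-level run, A retains noticeably more than a third because B and B2 mostly split the same segment, which is the substitution pattern the paragraph above describes.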