First, the utilities you are showing use "zero-centered diffs" rescaling, also called "utilities rescaled for comparability" in our latent class procedure. This transformation expands the utilities to a larger scale and makes that scale very similar across respondents or groups of respondents, so that comparisons between them are more meaningful.
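As a rough illustration of what such a rescaling does, here is a minimal sketch in Python. This is my reading of the transformation (zero-center each attribute's levels per respondent, then scale so the average best-minus-worst range across attributes equals 100); the function name and exact scaling convention are assumptions, so verify against Sawtooth Software's documentation before relying on it.

```python
# Hypothetical sketch of "zero-centered diffs" rescaling for one respondent.
# Assumption: levels are zero-centered within each attribute, then all
# utilities are multiplied by one factor so the average (max - min) range
# across attributes equals 100.

def zero_centered_diffs(utilities):
    """utilities: list of attributes, each a list of level part-worths
    for one respondent. Returns the rescaled utilities."""
    # Step 1: zero-center each attribute's levels.
    centered = []
    for levels in utilities:
        mean = sum(levels) / len(levels)
        centered.append([u - mean for u in levels])
    # Step 2: average best-minus-worst range across attributes.
    ranges = [max(levels) - min(levels) for levels in centered]
    avg_range = sum(ranges) / len(ranges)
    # Step 3: scale so that average range equals 100.
    factor = 100.0 / avg_range
    return [[u * factor for u in levels] for levels in centered]

# Two attributes' raw part-worths for one respondent (made-up numbers).
resp = [[0.5, -0.1, -0.4], [1.2, -1.2]]
scaled = zero_centered_diffs(resp)
```

Because every respondent ends up with the same average attribute range, the magnitudes become comparable across people, which is the point of the transformation.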
The None utility is a threshold utility that is estimated and scaled so it can be compared to the sum of the part-worth utilities defining a product alternative. For example, if the None utility is higher than the sum of the part-worths across the attributes that define a product alternative (take one level from each attribute and sum their part-worths), then we would expect that respondent or segment of respondents to be more likely to choose None than to choose that product alternative.
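The threshold comparison just described can be sketched in a few lines. The function and data structure here are hypothetical, purely to make the arithmetic concrete: build a product by taking one level per attribute, sum the part-worths, and compare against the None utility.

```python
# Illustrative sketch of the None-threshold comparison (not Sawtooth's code).

def prefers_none(none_utility, partworths, chosen_levels):
    """partworths: dict attribute -> dict level -> part-worth utility.
    chosen_levels: dict attribute -> the level defining the product.
    Returns True if the None utility exceeds the product's total utility."""
    product_utility = sum(partworths[a][lvl] for a, lvl in chosen_levels.items())
    return none_utility > product_utility

# Made-up part-worths for a two-attribute study.
partworths = {
    "brand": {"A": 30.0, "B": -30.0},
    "price": {"$10": 40.0, "$20": -40.0},
}
# A brand-A product at $20 has utility 30 - 40 = -10, below a None
# utility of 5, so this respondent would likely choose None.
likely_none = prefers_none(5.0, partworths, {"brand": "A", "price": "$20"})
```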
Latent Class Clustering, in my mind, is when the analyst uses an array of basis variables to cluster respondents into groups; there is no dependent variable.
Latent Class MNL, in my mind, is what Sawtooth Software's Latent Class procedure does: the algorithm fits multiple group (class) vectors of utilities that together provide a better fit to the data (where fit is the likelihood of the observed choices, so a dependent variable is involved), and each respondent has a continuous probability of belonging to each group (class).
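To make the "continuous probability of belonging to each class" idea concrete, here is a small sketch of how a mixture-of-MNL model assigns membership probabilities via Bayes' rule: each class's weight is its class size times the likelihood of the respondent's observed choices under that class's utilities, normalized across classes. This is a generic textbook formulation, not Sawtooth Software's actual implementation, and all names here are illustrative.

```python
import math

def mnl_prob(utils, chosen_idx):
    """Logit probability of the chosen alternative, given the total
    utilities of all alternatives in one choice task."""
    denom = sum(math.exp(u) for u in utils)
    return math.exp(utils[chosen_idx]) / denom

def class_membership(class_sizes, class_task_utils, choices):
    """class_sizes[k]: prior size of class k.
    class_task_utils[k][t]: total utilities of each alternative in task t
    under class k's part-worths. choices[t]: index chosen in task t.
    Returns the posterior probability of membership in each class."""
    weights = []
    for size, tasks in zip(class_sizes, class_task_utils):
        likelihood = size
        for utils, chosen in zip(tasks, choices):
            likelihood *= mnl_prob(utils, chosen)
        weights.append(likelihood)
    total = sum(weights)
    return [w / total for w in weights]

# Two equal-sized classes, one task with two alternatives; the respondent
# chose alternative 0, which class 0's utilities favor.
probs = class_membership(
    class_sizes=[0.5, 0.5],
    class_task_utils=[[[2.0, 0.0]], [[0.0, 2.0]]],
    choices=[0],
)
```

A respondent whose choices fit class 0's utilities well gets a membership probability near 1 for that class, but never exactly 1, which is what distinguishes this from hard cluster assignment.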
You cannot automatically use HB utilities in Latent Class MNL in Sawtooth Software's programs, though a researcher could try to do so on their own using a more flexible HB routine (which I would consider a strange thing to do). If you normalized the HB utilities (for example, with "zero-centered diffs" rescaling), you could submit those normalized utilities to a Latent Class clustering routine. However, this approach is generally viewed as statistically weaker than directly using Latent Class MNL (assuming we're referring to a choice experiment, such as CBC or MaxDiff).