Our software makes a guess about the theoretical minimum and maximum prices that could be shown to respondents, and sets those as the two default endpoints in the pricing grid for utility estimation. But that default guess is sometimes wrong, as your analysis via counts shows. No matter. Just change the endpoints to encompass the full range of prices that was actually shown to respondents. And prepare for price to be extremely important (in terms of the importance calculation), because the way you have set up your experiment, price takes into account the full range of prices that could be implied by changes to all the other attributes.
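To see why, recall the standard importance calculation: each attribute's importance is its utility range divided by the sum of ranges across all attributes, so a price attribute spanning the full implied price range will claim a large share. A minimal sketch in Python, with made-up part-worths:

```python
# Standard attribute-importance calculation (illustrative only).
# The part-worths below are hypothetical; in practice they come
# from your HB estimation run.
part_worths = {
    "Brand": [0.6, 0.1, -0.7],        # zero-centered utilities per level
    "Speed": [0.4, 0.0, -0.4],
    "Price": [2.5, 1.0, -0.8, -2.7],  # wide range -> dominates importance
}

ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
total = sum(ranges.values())

for attr, r in ranges.items():
    print(f"{attr}: {100 * r / total:.1f}% importance")
```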
I like piecewise price functions as well.
If you fail to constrain price (as negative), then the None parameter can wander around quite a bit, since with a piecewise price function the implied utilities for the price points are not zero-centered in each iteration. With constrained price, the convergence of the None parameter, as well as of the implied price utilities at the different cut and end points, tends to be more stable. I see this all the time in the HB history-of-iterations plot. But even with constraining piecewise price as negative, the implied utilities at the cut and end points still do not average to zero, so the None parameter needs to adjust with a shift in response. The predictions are still appropriate, as you've seen, and the implied None choice % should remain appropriate too.
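Here's a small sketch of why the predictions stay appropriate even though the constrained price utilities don't average to zero. Every product alternative carries a price, so a constant offset c in the price utilities shifts every product's total utility by c; if the None parameter absorbs the same shift during estimation, the logit shares are unchanged. The utilities and the value of c below are made up for illustration:

```python
import math

def logit_shares(utils):
    exps = [math.exp(u) for u in utils]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical total utilities for three products plus None.
products = [1.2, 0.4, -0.3]
none_u = 0.0

base = logit_shares(products + [none_u])

# Suppose the constrained piecewise price utilities are all shifted
# down by c (i.e., they no longer average to zero). Every product
# utility shifts by c, and the None parameter absorbs the same shift.
c = -1.5
shifted = logit_shares([u + c for u in products] + [none_u + c])

print(base)     # shares are identical...
print(shifted)  # ...because a constant added to all alternatives cancels in the logit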
Of course, don't constrain price unless you are absolutely sure respondents would always prefer lower prices over higher prices (all else held equal). In some product categories where price can be a signal of quality, utility can actually fall if price becomes too low (holding all other features equal).
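As a rough illustration of what an unconstrained piecewise price function lets you capture, here's a sketch that evaluates price utility by linear interpolation between cut/end points; the points are hypothetical, with utility falling again at the lowest price, the sort of shape you might see when a very low price signals poor quality:

```python
# Hypothetical piecewise price function, evaluated by linear
# interpolation between cut/end points. Note the non-monotonic
# shape: utility drops at the lowest price (quality signal).
cut_points = [10, 20, 30, 40]          # prices at end/cut points
utilities  = [0.3, 0.9, -0.1, -1.1]    # unconstrained: not monotonic

def price_utility(price):
    # Clamp to the estimated range, then interpolate linearly.
    if price <= cut_points[0]:
        return utilities[0]
    if price >= cut_points[-1]:
        return utilities[-1]
    for p0, p1, u0, u1 in zip(cut_points, cut_points[1:],
                              utilities, utilities[1:]):
        if p0 <= price <= p1:
            return u0 + (u1 - u0) * (price - p0) / (p1 - p0)

print(price_utility(15))  # 0.6, halfway between the first two points
```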