Different random seeds can produce different results in latent class analysis because the algorithm does not guarantee a globally optimal solution on any single run. Therefore, it is a good idea to run it multiple times (multiple replications) and, for each group size, select the replication that obtains the best fit. Although you specify a starting seed for the first replication, the software chooses a different seed for each subsequent replication (otherwise, you'd just repeat the previous replication's result).
Researchers usually pay attention only to the best replication reported (the summary of best replications) per group size. If you are really concerned about finding a near-optimal solution for each number of groups, I'd recommend increasing the number of replications from 5 to 10 per solution. The default of 5 is just there to help things run faster while you are doing preliminary investigation with a data set. Once you get serious about securing the best answer, you should increase the number of replications; if you can afford the time, 30 replications wouldn't hurt.
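The multi-start idea can be sketched in a few lines of Python. This is not the software's actual algorithm; as a stand-in for the latent class EM fit, it uses a toy 1-D two-means clustering (a much simpler model), but the replication logic is the same: run the fit from several random starting points and keep whichever replication achieves the best fit.

```python
# Hypothetical sketch of multi-start fitting. fit_two_groups() is a toy
# stand-in for one latent class replication; the real model and fit
# statistic would differ, but the seed/replication loop is the point.
import random

def fit_two_groups(data, seed, n_iter=50):
    """One replication from one random seed: 1-D two-means clustering.
    Returns (within-group sum of squares, centers); lower WSS = better fit."""
    rng = random.Random(seed)
    centers = rng.sample(data, 2)  # random starting point for this replication
    for _ in range(n_iter):
        groups = ([], [])
        for x in data:
            # Assign each point to its nearest center.
            groups[abs(x - centers[0]) > abs(x - centers[1])].append(x)
        # Recompute each center as its group's mean (keep old center if empty).
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    wss = sum(min((x - c) ** 2 for c in centers) for x in data)
    return wss, centers

# Two well-separated groups of made-up observations.
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.2, 8.8, 9.1]

# Run 10 replications, each with a different seed, and keep the best fit.
results = [fit_two_groups(data, seed) for seed in range(10)]
best_wss, best_centers = min(results, key=lambda r: r[0])
print(best_wss, sorted(best_centers))
```

With real latent class software you would instead compare a likelihood-based statistic (e.g. log-likelihood or BIC) across replications, but the selection rule is identical.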
One reason to pay attention to how much the fit statistics vary across all replications is to gauge how stable the solutions are, that is, how often the same fit is reached from different random starting points. If very different fits frequently arise from different starting points, that can be an indication that the data do not naturally break out well into that number of groups in your latent class solution.
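A quick way to operationalize that stability check is to summarize the spread of the fit statistic across replications for each group size. The log-likelihood values below are invented purely for illustration; in practice you would take them from your software's replication summary.

```python
# Hypothetical sketch: summarizing fit stability across replications.
# A tight spread suggests a stable solution; a wide spread suggests the
# data may not break out cleanly into that number of groups.
import statistics

# Made-up log-likelihoods from 5 replications each of a 3-group and a
# 5-group solution (illustrative numbers only, not real output).
fits = {
    3: [-1052.4, -1052.4, -1052.5, -1052.4, -1052.4],  # stable
    5: [-1010.7, -1034.2, -1019.9, -1041.3, -1015.0],  # unstable
}

for k, lls in fits.items():
    spread = max(lls) - min(lls)
    print(f"{k} groups: best LL = {max(lls):.1f}, "
          f"range = {spread:.1f}, sd = {statistics.stdev(lls):.2f}")
```

Here the 3-group solution lands on essentially the same fit every time, while the 5-group fits scatter widely, which would be a warning sign about the 5-group solution even if its best replication shows the highest fit.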