The standard design report in our MaxDiff software assumes a standard (Case 1) MaxDiff study, where the researcher isn't setting up mutually exclusive, conjoint-style (Best-Worst Case 2) prohibitions.
Standard MaxDiff (Case 1) is so robust to prohibitions that we decided not to build in a more advanced experimental design testing capability; we also wanted to keep the software easy for beginning researchers to use and avoid making the interface confusing for a junior researcher.
If more advanced researchers want to go further in evaluating the statistical efficiency of their MaxDiff designs (beyond the simple counts report, with standard deviations of one-way and two-way frequencies), we recommend using the random responses data generator to create a dataset of random responders at essentially the same sample size you expect to collect. Then run aggregate logit and examine the standard errors. Compare those standard errors to another data generation run of random responders who receive the same MaxDiff questionnaire setup, except with no prohibitions. This lets you see the loss in design efficiency for the design with prohibitions compared to the design without prohibitions.
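If you'd rather script the same idea yourself outside our software, here's a rough sketch in Python. Everything in it is an illustrative assumption (the item count, set size, number of sets per respondent, sample size, and the single prohibited pair), and the simple BFGS-based standard errors are only approximate, but it shows the logic: random responders, aggregate MNL on best and worst choices, then compare standard errors with and without the prohibition.

```python
# Sketch: compare aggregate logit standard errors for a MaxDiff design
# with vs. without a prohibition, using random responders.
# All design settings below are made-up assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
N_ITEMS, SET_SIZE, SETS_PER_RESP, N_RESP = 12, 4, 6, 100
PROHIBITED = {(0, 1)}              # example pair never shown together

def make_design(prohibit):
    """Random MaxDiff sets; redraw any set containing the prohibited pair."""
    design = []
    for _ in range(N_RESP * SETS_PER_RESP):
        while True:
            s = tuple(int(x) for x in rng.choice(N_ITEMS, SET_SIZE, replace=False))
            pairs = {(a, b) for a in s for b in s if a < b}
            if not prohibit or not (pairs & PROHIBITED):
                design.append(s)
                break
    return design

def simulate_random_answers(design):
    """Random responders: best and worst picked uniformly within each set."""
    answers = []
    for s in design:
        best = int(rng.integers(SET_SIZE))
        worst = int(rng.integers(SET_SIZE - 1))
        worst = worst if worst < best else worst + 1   # ensure worst != best
        answers.append((s, best, worst))
    return answers

def neg_log_lik(beta_free, answers):
    beta = np.append(beta_free, 0.0)                   # last item fixed at 0
    nll = 0.0
    for s, best, worst in answers:
        u = beta[list(s)]
        nll -= u[best] - np.log(np.exp(u).sum())       # best choice (MNL)
        rest = [i for i in range(SET_SIZE) if i != best]
        v = -u[rest]                                   # worst: negated utilities
        nll -= v[rest.index(worst)] - np.log(np.exp(v).sum())
    return nll

def mean_se(answers):
    res = minimize(neg_log_lik, np.zeros(N_ITEMS - 1), args=(answers,),
                   method="BFGS")
    return np.sqrt(np.diag(res.hess_inv)).mean()       # approximate SEs

for label, prohibit in [("no prohibitions", False), ("with prohibitions", True)]:
    answers = simulate_random_answers(make_design(prohibit))
    print(f"{label}: mean standard error = {mean_se(answers):.4f}")
```

The prohibited design should show equal or slightly larger standard errors; how much larger tells you how much efficiency the prohibition costs.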
Lack of connectivity is a potential concern in what you are describing. The software complains if it finds lack of connectivity within any of the unique versions (blocks) of the design. Sometimes researchers purposefully create designs that lack connectivity within each specific version; that's fine as long as the design has connectivity when the multiple versions are pooled for the analysis (such as for aggregate logit or latent class MNL with relatively few classes). For example, Sparse MaxDiff, Express MaxDiff, and Bandit MaxDiff approaches all lack connectivity within versions, but have connectivity when multiple versions are pooled and considered together.
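If it helps to see the idea concretely, here's a small sketch of how one could check connectivity on a pooled design (this is my own illustration, not what the software does internally): treat items as nodes, link any two items that appear together in a set, and see whether everything ends up in one connected component.

```python
# Sketch: connectivity check for a (pooled) MaxDiff design.
# Items are nodes; two items are linked if they ever appear in the same set.
from collections import defaultdict

def is_connected(sets, n_items):
    """True if every item can be reached from every other via shared sets."""
    adj = defaultdict(set)
    for s in sets:
        for a in s:
            for b in s:
                if a != b:
                    adj[a].add(b)
    seen, stack = {0}, [0]
    while stack:
        node = stack.pop()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == n_items

# Two versions that each cover only part of the items are not connected alone...
version_1 = [(0, 1, 2), (2, 3, 0)]
version_2 = [(4, 5, 6), (6, 7, 4)]
print(is_connected(version_1, 8))                        # False: items 4-7 missing
# ...but a bridging set elsewhere in the pooled design ties them together.
print(is_connected(version_1 + version_2 + [(3, 4, 0)], 8))   # True
```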