This is something that the software cannot automatically do, because statistically things work better if you allow everything to combine freely. However, some clients do get bent out of shape when they see a concept with the highest price and all the worst features, or the lowest price and all the best features. So, sometimes we do customized things to make the client happy (though the data usually don't get much better by pruning the worst cases of domination). Setting prohibitions is not the way we recommend modifying the design to avoid the worst offending cases of domination.
First generate a design with 300 versions (the default), run Test Design (specifying your expected sample size when prompted), and save the report. You will compare the customized design to this one to make sure you haven't lost much efficiency.
When we do this internally for our own consulting group (if the client pushes us to remove the worst cases of domination), we generate the design in the usual way, typically using the Balanced Overlap method. We might generate 30 total versions of the questionnaire (which is really quite enough versions in almost all applications). Then, we use the Export button on the Design tab in CBC to export the design to a .CSV file.
We open the .CSV file in Excel (or similar) and create some formulas to identify the worst offending choice tasks (the tasks in which one concept is logically superior to all the other concepts within the same task). Then, we delete those entire choice tasks from the file (delete those rows). Afterward, we renumber the Versions, Tasks, and Concepts so that each version has the appropriate number of choice tasks. After deleting the rows representing the dominated tasks, we may have only 27 or 28 total versions of the questionnaire remaining. We change the settings in the Design tab to expect that many versions, and we click the Import button to import the new design with 27 or 28 versions.
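The spreadsheet step above can also be scripted. Here is a minimal Python sketch of the flagging-and-pruning logic. The column names (Version, Task, Concept, Att1, Att2, Att3) are assumptions about the export layout, the data are made up, and the sketch assumes a lower level number is unambiguously "better" on every attribute (adjust the comparison if your levels aren't ordered that way). It drops whole tasks where one concept dominates all others and renumbers the surviving tasks within each version; repacking the leftover tasks into complete versions is left to you.

```python
import csv
import io
from collections import defaultdict

# Hypothetical export layout and made-up data for illustration only.
# ASSUMPTION: lower level number = better on every attribute.
SAMPLE = """Version,Task,Concept,Att1,Att2,Att3
1,1,1,1,1,1
1,1,2,2,2,2
1,1,3,3,3,3
1,2,1,1,3,2
1,2,2,2,1,3
1,2,3,3,2,1
"""

def dominates(a, b):
    """a dominates b if a is at least as good on every attribute and
    strictly better on at least one (lower level number = better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def prune_dominated_tasks(rows):
    # Group the concepts by (version, task).
    tasks = defaultdict(list)
    for r in rows:
        key = (int(r["Version"]), int(r["Task"]))
        levels = tuple(int(v) for k, v in r.items()
                       if k not in ("Version", "Task", "Concept"))
        tasks[key].append((levels, r))
    keep = []
    for key, concepts in sorted(tasks.items()):
        # Drop the entire task if any concept dominates all the others in it.
        bad = any(all(dominates(a, b) for (b, _) in concepts if b != a)
                  for (a, _) in concepts)
        if not bad:
            keep.extend(r for _, r in concepts)
    return keep

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
kept = prune_dominated_tasks(rows)

# Renumber tasks sequentially within each version so the file re-imports cleanly.
new_task, counter = {}, defaultdict(int)
for r in kept:
    key = (r["Version"], r["Task"])
    if key not in new_task:
        counter[r["Version"]] += 1
        new_task[key] = counter[r["Version"]]
    r["Task"] = str(new_task[key])
```

In the sample data, the first task is fully dominated by its first concept and is dropped; the second task, where no concept wins on every attribute, survives.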
The important thing is to not be too aggressive in deleting choice tasks, or the resulting design will not be very efficient.
After importing the "cleaned" 27 or 28 versions, re-run the Test Design procedure and compare the standard errors per attribute level and the overall Strength of Design to those of the original 300-version design from before the pruning. Hopefully you will find that you have lost no more than 3% to 5% of total efficiency, as characterized by Strength of Design, and that (given the expected sample size you plan to collect) the standard errors from the aggregate logit report (computed on the robotic random responders) are 0.05 or less for each attribute level in your CBC study.
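The two acceptance checks above boil down to simple arithmetic. A small sketch, with entirely hypothetical numbers standing in for the values you would read off the two Test Design reports:

```python
# Hypothetical Strength of Design values from the two Test Design reports.
original_strength = 1_000_000.0  # assumed value, 300-version design
pruned_strength = 962_000.0      # assumed value, pruned 27-version design

# Relative efficiency loss; aim for no more than about 3% to 5%.
efficiency_loss = 1.0 - pruned_strength / original_strength

# Hypothetical per-level standard errors from the aggregate logit report
# on the simulated (random-responding) respondents.
std_errors = {"Brand L1": 0.031, "Brand L2": 0.033, "Price L1": 0.042}

loss_ok = efficiency_loss <= 0.05
se_ok = all(se <= 0.05 for se in std_errors.values())
```

If either check fails, you have likely pruned too aggressively and should restore some of the deleted tasks.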