First, I should say that crossing respondent characteristics with conjoint or MaxDiff preferences in an aggregate logit model is an approach few practitioners would take (I think you recognize this, as you indicate this is an academic study). Most practitioners would estimate individual-level utilities via HB and then use the respondent characteristics as filters, summarizing and running statistical tests on the individual-level utilities between respondent groups.
So, because our software is aimed squarely at practitioners, it doesn't directly support interacting respondent characteristics with conjoint or MaxDiff attributes in a utility estimation model. However, like many things in our software, it's often possible to trick it into doing this. But it can require manual effort and manual reformatting of data files.
To pull off the trick of coding respondent characteristics as new attributes in the data file, you need to be using the standalone latent class or standalone HB tools. (You cannot do this within the Lighthouse Studio interface.)
The data file that contains the design matrix and respondent choices is usually a .CSV or a .CHO file. The .CSV file is the easier of the two to work with and modify. The trick is to add new columns to the file to accommodate attributes representing respondent characteristics. Because you are doing BW Case 2, I'm guessing you have tricked Lighthouse Studio into fielding the BW Case 2 study via MaxDiff (using conjoint-style prohibitions in MaxDiff). That means, I think, you'll be working with a .CHO file exported from Lighthouse Studio. This is a text-only file with a somewhat difficult format to work with, unless you are comfortable scripting and can process and modify the data file with something like Python, SPSS syntax, R, Java, or C.
The format of the .CHO file is described in Appendix B, section 7.2 of the following documentation: https://www.sawtoothsoftware.com/download/techpap/lclass_manual.pdf
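For the simpler .CSV case, the column-adding step might look roughly like the Python sketch below. The file layout, column positions, and demographics lookup are all assumptions for illustration, not Sawtooth's actual export format. Note that I've coded the respondent characteristic as interactions with the existing attribute columns, since a characteristic that is constant across all alternatives within a task would otherwise drop out of a logit model.

```python
# Minimal sketch of the column-adding trick, assuming a hypothetical .CSV
# layout: one row per alternative, with columns [resp_id, task, alt, chosen,
# att1, att2, ...]. These positions and the demographics dict are
# illustrative only.

def add_interaction_columns(rows, demo_by_resp, n_id_cols=4):
    """Append interaction columns to each design row: the respondent's
    characteristic dummy (e.g., 1 = group A, 0 = group B) multiplied by
    each existing attribute column. Interactions, not a pure respondent
    main effect, are what identify group differences in a logit model."""
    out = []
    for row in rows:
        dummy = demo_by_resp[row[0]]          # look up characteristic by ID
        atts = row[n_id_cols:]                # the attribute columns
        inter = [str(int(a) * dummy) for a in atts]
        out.append(row + inter)
    return out
```

You would then read the exported .CSV, run each row through a function like this, and write the widened file back out for the standalone latent class or HB tool.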
Regarding weighting respondent data in aggregate logit or latent class: weighting causes some respondents to have more influence on the utility estimates than others. It's difficult to do statistical testing properly on weighted results, because the standard errors reported by aggregate logit or latent class MNL are then no longer correct.
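One common way to quantify how weighting degrades precision is Kish's effective sample size: the weighted analysis behaves as if you had roughly this many respondents rather than the nominal N, which is one reason the reported standard errors are too optimistic. A minimal sketch (the weights here are hypothetical):

```python
# Kish's effective sample size: (sum of weights)^2 / (sum of squared weights).
# With equal weights this equals the nominal N; unequal weights shrink it.

def kish_effective_n(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

weights = [0.5, 0.5, 2.0, 1.0]   # hypothetical respondent weights
n_nominal = len(weights)          # 4 respondents on paper
n_eff = kish_effective_n(weights) # fewer "effective" respondents
```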