You're right that you can go the free format question route with adaptive MaxDiff. However, my colleague in our Sawtooth Analytics division (Keith Chrzan) recommended what I think turns out to be an easier path to victory with adaptive MaxDiff within Lighthouse Studio (SSI Web).
Think of each phase of Adaptive MaxDiff as a separate MaxDiff exercise that uses a constructed list with a fixed number of items (though the specific items for each respondent vary for phases 2 and later).
So, the phase I MaxDiff exercise includes all of the items and uses the typical default of 300 questionnaire versions.
The phase II MaxDiff exercise (which can also use the default 300 versions) uses a constructed list where the worst items (as judged in phase I) are dropped. I think you'll need some unverified Perl and an IF statement to drop the worst items from the master item list so that only the appropriate items carry forward into phase II. Perhaps Keith can give you some hints on that if you contact him directly. Using SSI Script, you'll probably have to refer to the specific question name and also to the MaxDiff design, so that you know which item was shown in each position of each task.
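To make the item-dropping logic concrete, here is a minimal sketch of the scoring step in Python rather than unverified Perl: score each item by (times picked best minus times picked worst) in phase I, then keep only the top items for the phase II constructed list. The data shapes here (a list of best/worst item-ID pairs per task, and the `phase2_items` function itself) are illustrative assumptions, not the actual SSI Script objects or functions Lighthouse Studio exposes.

```python
def phase2_items(phase1_tasks, keep):
    """phase1_tasks: list of (best_item_id, worst_item_id) tuples,
    one per phase I MaxDiff task for this respondent.
    Returns the item IDs to carry into the phase II constructed list."""
    scores = {}
    for best, worst in phase1_tasks:
        scores[best] = scores.get(best, 0) + 1   # +1 each time picked best
        scores[worst] = scores.get(worst, 0) - 1  # -1 each time picked worst
    # Rank by score descending; break ties by item ID for stability
    ranked = sorted(scores, key=lambda item: (-scores[item], item))
    return ranked[:keep]

# Example: item 3 was picked best twice, item 7 worst twice
tasks = [(3, 7), (3, 5), (1, 7)]
print(phase2_items(tasks, keep=2))  # -> [3, 1]; item 7 is dropped
```

In the real survey, the same counting would happen in the constructed-list Perl, reading each phase I task's best/worst answers and mapping them back through the MaxDiff design to item numbers.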
To do the data analysis, you need to export a .CHO file for each separate exercise. Then (assuming you want to do HB analysis), using your own data-processing tools, you need to collect the tasks from the separate phase exercises and stack them into a single record per respondent in one master .CHO file. You then submit that assembled .CHO file to the standalone CBC/HB software for HB analysis. (Or submit it to the standalone Latent Class module if you want a latent class analysis as well.)
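The stacking step can be sketched roughly as follows. This assumes each phase's .CHO data has already been parsed into a per-respondent header plus opaque task blocks; the five-field header layout used here ("id n_extra n_atts n_tasks none_flag") is a simplified stand-in for the real .CHO record format, so check the CBC/HB documentation for the exact layout before relying on it.

```python
def stack_cho(phases):
    """phases: list of dicts mapping respondent id ->
    (header_fields, task_blocks), where header_fields is a list of
    strings and each task_block is the full text of one task.
    Returns the combined multi-phase .CHO text."""
    combined = {}
    for phase in phases:
        for rid, (header, tasks) in phase.items():
            if rid not in combined:
                # Copy the first phase's header; we'll fix the task count
                combined[rid] = [list(header), []]
            combined[rid][1].extend(tasks)
    records = []
    for rid, (header, tasks) in sorted(combined.items()):
        header[3] = str(len(tasks))  # assumed field 4 = number of tasks
        records.append(" ".join(header) + "\n" + "\n".join(tasks))
    return "\n".join(records)

# Hypothetical example: respondent 1001 has 2 phase I tasks + 1 phase II task
p1 = {"1001": (["1001", "0", "4", "2", "0"], ["task1-block", "task2-block"])}
p2 = {"1001": (["1001", "0", "4", "1", "0"], ["task3-block"])}
print(stack_cho([p1, p2]))  # header now reads "1001 0 4 3 0"
```

The key point is simply that the per-respondent task count in the master file's header must equal the total tasks stacked across all phases, or CBC/HB will reject the record.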