You mentioned that inserting big chunks of data can be painful when you want to minimize the deployment time and have your system up and running as quickly as possible. Besides having static data captured in INSERT scripts, I also had truckloads of data that needed to be synced or updated during an initial or delta deployment. We tried several approaches, but this is the one where we had the least impact on the deployment and on the existing database. (We imported and synced street data to support a street catalog.)
-Insert the data into a separate database using methods like BULK INSERT, SSIS loading, or bcp (a sketch follows this list). For performance reasons, make sure you can place the data files of the reference catalog on another spindle than the operational database.
-Update your database schema
-Use MERGE (where available, given the database's compatibility level) to sync the data with your database (see the second sketch after this list).
-Drop the database holding the reference data to clean house again.
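
For the first step, here is a minimal T-SQL sketch of the staging load. The database name, file path, and table/column names are all hypothetical examples, not what we actually used:

    -- Step 1 (sketch): create a throwaway staging database and bulk-load
    -- the reference data into it. All names and paths are made up.
    CREATE DATABASE StreetCatalogStaging;
    GO
    USE StreetCatalogStaging;
    GO
    CREATE TABLE dbo.Street
    (
        StreetId   INT           NOT NULL,
        Name       NVARCHAR(200) NOT NULL,
        PostalCode NVARCHAR(10)  NOT NULL
    );
    GO
    -- Load the flat file; keeping this database's files (and the source
    -- file) on a different spindle than the operational database avoids
    -- I/O contention during the load.
    BULK INSERT dbo.Street
    FROM 'D:\refdata\streets.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);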
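
And for the sync and cleanup steps, a sketch using MERGE (available from SQL Server 2008 / compatibility level 100 onward); again, the table and column names are hypothetical, and the DELETE clause assumes the staging table holds the complete catalog:

    -- Steps 3 and 4 (sketch): sync the staging data into the operational
    -- database across databases, then drop the staging database.
    USE OperationalDb;
    GO
    MERGE dbo.Street AS target
    USING StreetCatalogStaging.dbo.Street AS source
        ON target.StreetId = source.StreetId
    WHEN MATCHED AND (target.Name <> source.Name
                   OR target.PostalCode <> source.PostalCode) THEN
        UPDATE SET Name = source.Name, PostalCode = source.PostalCode
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StreetId, Name, PostalCode)
        VALUES (source.StreetId, source.Name, source.PostalCode)
    -- Assumes staging is the full catalog: rows gone from the source
    -- are removed from the target.
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;
    GO
    -- Step 4: clean house.
    USE master;
    GO
    DROP DATABASE StreetCatalogStaging;
    GO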
This can be done either within the deployment process, using scripts hooked into the pre-/post-deployment commands (we had a sample for that in the HOL where we needed to cope with NULLability), or as a separate process. A separate manual process is in most cases the better choice: in our case the reference data did not change that often, so we could skip the costly import and sync most of the time and only run the sync when needed.
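
If you do wire it into the deployment, one way the sync script could be pulled in is via the SQLCMD :r include directive in a post-deployment script; the file names below are purely illustrative:

    -- Script.PostDeployment.sql (sketch, hypothetical file names):
    -- pull in the reference-data sync so it runs with the deployment.
    :r .\SyncStreetCatalog.sql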
Does this make sense to you?