This is a serious issue we have observed several times now:
- We have users who work on a model with a custom MPS RCP.
- We develop our languages further and deploy a new RCP at some point
- We run the migration scripts on the user model (everything migrates as expected)
- A user's local commits lead to a merge conflict
- We resolve the conflicts by accepting the user's local, semantic changes (the values) and accepting all other server changes, to make sure the models are correct
The merge concludes, and for a moment everything looks as expected in the RCP. But then, seemingly automatically, the local changes "win" over the remote changes, and node instances of deprecated or non-existing concepts are brought back to life, which of course leads to model errors.
We tried several approaches: running the migration scripts on a different branch first and then doing a plain git merge; running the migration scripts locally first (as described above) and performing the git merge afterwards; and pulling with the "no fast forward" option. In every variant we made sure we ended up with models that adhere to the new language versions, and in the MPS git add-on the merged version in the middle pane projects everything correctly, as we expect it.
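For reference, the plain-git shape of what we attempted looks roughly like the sketch below. This is a simplified stand-in, not our real setup: the file content is fake (real .mps files are XML and MPS uses its own merge driver in the IDE), and all names (branch "migrated", file "model.mps", "OldConcept"/"NewConcept") are made up for illustration.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email user@example.com
git config user.name user

# Pre-migration state shared by server and user
echo "concept OldConcept value=1" > model.mps
git add model.mps
git commit -qm "pre-migration model"

# Server side: migration scripts rewrite the model to the new language version
git checkout -qb migrated
echo "concept NewConcept value=1" > model.mps
git commit -qam "run migration scripts (OldConcept -> NewConcept)"

# User side: a local semantic change on the old, unmigrated model
git checkout -q -
echo "concept OldConcept value=2" > model.mps
git commit -qam "user's local semantic change"

# Merge without fast-forward; a conflict on the model file is expected
git merge --no-ff migrated || true

# Manual resolution: keep the migrated structure, re-apply the user's value
echo "concept NewConcept value=2" > model.mps
git add model.mps
git commit -qm "merge: migrated concepts + local values"

cat model.mps
```

At the plain-git level this resolution sticks; the problem we describe only appears once MPS re-opens the merged models, which is why we suspect the MPS side rather than git itself.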
But as soon as the merge concludes, MPS seems to throw away our manual resolution and again uses instances of deprecated and/or non-existing concepts (basically re-using the "old" instances from before the migration).
We don't know how this can happen and are not sure what the best practice is to avoid it. Any information is appreciated.
see also: https://youtrack.jetbrains.com/issue/MPS-27350