Incomplete generation (model to model, not model to text) - is it possible in MPS?

Example:
I have language L1 (an entity model) and language L2 (entity model manipulation). I want to be able to generate a new model written in L1 from an old model written in L1 plus a model written in L2. Is it possible to generate a model in L1 (not text) using the MPS generator?

Also, a more general question (problem): does anyone have an approach to managing and maintaining a persistent data schema using only MPS? Is it even possible? Or should I just let an RDBMS admin design the schema, and have the MPS application end user translate that schema into an MPS model every time something changes?
5 comments
Yes, it is possible. Something similar is done in MPS when you click "Preview Generated Text".
This is the code that is executed:
MakeSession session = new MakeSession(((IOperationContext) MapSequence.fromMap(_params).get("context")), null, true);
if (IMakeService.INSTANCE.get().openNewSession(session)) {
  TextPreviewUtil.previewModelText(session, ((IOperationContext) MapSequence.fromMap(_params).get("context")), TextPreviewModel_Action.this.modelToGenerate(_params));
}
In TextPreviewUtil you can see that it is possible to tell the generator when to stop:
IScript scr = new ScriptBuilder().withFacetNames(&Generate, &TextGen, &Make).withFinalTarget(&TextGen:textGenToMemory).toScript();
You can stop the generation before TextGen and get the output model.
I actually did that some time ago, but cannot find the code now.

Alternatively, you can always create and modify models using baseLanguage.
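To illustrate the baseLanguage alternative, here is a minimal plain-Java sketch of such a model-to-model transform. The `Entity` record and `renameEntity` operation are hypothetical stand-ins for illustration only; a real implementation would work on MPS nodes through the MPS open API (SModel/SNode) from a baseLanguage helper.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory stand-in for an L1 "entity model"; in MPS this
// would be an SModel manipulated from baseLanguage code.
public class EntityTransform {
    public record Entity(String name, List<String> columns) {}

    // A sample "manipulation" from the L2 model: rename one entity in the
    // L1 model, producing a new model and leaving the input untouched.
    public static List<Entity> renameEntity(List<Entity> model, String from, String to) {
        List<Entity> result = new ArrayList<>();
        for (Entity e : model) {
            result.add(e.name().equals(from)
                ? new Entity(to, e.columns())
                : e);
        }
        return result;
    }
}
```

The point of the sketch is only the shape of the approach: read the old model, apply the operations described by the second model, and emit a new model of the same language.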

My approach to your second problem:

I created a language for modeling the database schema, and an SQL language with the ability to simulate execution against a schema model. This allowed me to write SQL scripts for the incremental changes to my schema. I had a third language for storing schema revisions, together with the SQL script to move from one revision to the next. The scripts were validated automatically by checking that they produced the correct result. For simple cases such as adding/removing columns/tables, these scripts were generated automatically, but I was always able to change the result manually. So it was a half-automated process.
I generated the database schema from an entities model (as in your case). My final result was a changelog file for LiquiBase that I used to deploy my changes.
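The validation idea described above can be sketched in a few lines of Java. This is not the author's implementation; it is a toy model, assuming the schema is reduced to a table-to-columns map and the SQL dialect to two hypothetical statement forms (`ALTER TABLE t ADD COLUMN c` and `ALTER TABLE t DROP COLUMN c`): apply the script to the old revision and check the result equals the new one.

```java
import java.util.*;

// Toy simulation of an SQL migration script against a schema model:
// a schema is just tableName -> set of column names.
public class MigrationCheck {
    public static Map<String, Set<String>> apply(Map<String, Set<String>> schema,
                                                 List<String> script) {
        Map<String, Set<String>> result = new LinkedHashMap<>();
        schema.forEach((t, cols) -> result.put(t, new LinkedHashSet<>(cols)));
        for (String stmt : script) {
            // Only two simplified statement forms are simulated here:
            //   ALTER TABLE <t> ADD COLUMN <c>
            //   ALTER TABLE <t> DROP COLUMN <c>
            String[] p = stmt.trim().split("\\s+");
            String table = p[2], column = p[5];
            if (p[3].equalsIgnoreCase("ADD")) result.get(table).add(column);
            else if (p[3].equalsIgnoreCase("DROP")) result.get(table).remove(column);
        }
        return result;
    }

    // A script is valid if replaying it on the old revision yields the new one.
    public static boolean migrates(Map<String, Set<String>> oldRev,
                                   List<String> script,
                                   Map<String, Set<String>> newRev) {
        return apply(oldRev, script).equals(newRev);
    }
}
```

A real schema model would of course also carry types, constraints, and indexes, but the validation principle (replay and compare against the stored revision) is the same.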

The final process after a change to my entities model:
  1. Update of database schema model (I implemented the generator in baseLanguage and triggered the update via an intention)
  2. Storage of a copy of the schema model as a new revision
  3. Review/Modification of the automatically generated SQL script for the changes since the last revision
  4. Generation of the LiquiBase changelog file
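For readers unfamiliar with LiquiBase: the kind of changelog entry step 4 would produce looks roughly like the fragment below. The ids, author, and column names here are placeholders, not taken from the author's project.

```xml
<!-- Illustrative changelog entry; id/author/table names are placeholders. -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
  <changeSet id="rev-2" author="generator">
    <addColumn tableName="person">
      <column name="email" type="varchar(255)"/>
    </addColumn>
  </changeSet>
</databaseChangeLog>
```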
Sascha, thank you for your answer!
I understand now that what I need is not a generator, but a baseLanguage transformer for my model.
Regarding your answer to my second problem: do you reference database schema entities in other models? What happens if you delete a column that a rule in another model references? Do you store revisions only for the DB schema, or do you do this for all models (fragments?)?
If I delete a column, then the reference is broken, of course. This would also happen after regenerating the schema and replacing the current model with the new one. Therefore, I diff and merge the changes of the new model into the existing one, so that I don't replace it but modify/update it to be equal to the new one.
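The diff-and-merge step can be sketched as follows. This is an illustrative toy, not the author's code: nodes are matched by a hypothetical stable key, existing node objects are updated in place rather than replaced, so anything holding a reference to a surviving node keeps pointing at a valid object.

```java
import java.util.*;

// Sketch of update-in-place instead of wholesale replacement: match nodes
// by a stable key, keep the existing node objects (so references to them
// stay valid), and only apply additions, removals, and attribute changes.
public class ModelMerge {
    public static class Node {
        public final String key;   // stable identity (e.g. node id or qualified name)
        public String payload;     // mutable attributes (e.g. column type)
        public Node(String key, String payload) { this.key = key; this.payload = payload; }
    }

    // Merges 'generated' into 'existing' in place; returns 'existing'.
    public static Map<String, Node> merge(Map<String, Node> existing,
                                          Map<String, Node> generated) {
        existing.keySet().retainAll(generated.keySet());   // drop removed nodes
        for (Node g : generated.values()) {
            Node e = existing.get(g.key);
            if (e == null) existing.put(g.key, g);         // newly generated node
            else e.payload = g.payload;                    // update, keep identity
        }
        return existing;
    }
}
```

References to a deleted node still break, as stated above, but references to every surviving node remain intact because the node object (in MPS, the node id) is preserved.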

I only store revisions of the database schema, because I need to know the state of any existing databases that I want to migrate via SQL scripts.
Take a look at https://github.com/inspirer/mps-core, jetbrains.mps.core.gen.transform language.

Here MPS make is extended with a model-to-model transformation pre-step.
Models named structure_new in languages are converted into a pair of models (structure and behavior), which are saved to disk and generated by the main generation step as usual.

1. structure_new is processed only if it has been modified and is in the make scope (or we are rebuilding it)
2. a separate language is used for the transformation
3. output models are merged into existing ones to preserve node ids (and therefore external references are kept alive)
4. output models are marked as modified, so they are processed by the default generation step

It works pretty well.
Evgeny, thank you for your answer. For now I'll stick with a simple baseLanguage modifier intention. I have another question, which I wanted to publish as a separate topic, but maybe that won't be needed. In MPS, the smallest compilation unit is a model, isn't it? If so, I can't perform incremental compilation on a per-node basis, right? If that is the case, the approach of using one large model that contains different aspects of a DSL program is not encouraged. At the same time, inter-model references are available only as identifier references to Java classes and fields, requiring a known model identifier, and thus can't be validated. I think this is a big inconvenience that will directly or indirectly influence design-time decisions.