Using two generator models to generate different GText output

Hi,

Although this question seems to have been asked a few times in the forum already, none of the threads could help me further (I have already tried the various suggestions). So, the problem I'm facing: I need to generate output from my language into two different target languages (Scala and Datalog). So far, generation for Datalog worked (fairly trivially); however, as soon as I added a second generator model for generating Scala source, problems arose. The root mapping rules are applied properly, but the reduction rules get mixed up completely: to be clear, the reduction rules of the Scala generator model are ignored entirely, and the ones for the Datalog code are used instead. Does anybody here have a solution to this problem? Setting generator priorities or using branch mappings did not work (http://forum.jetbrains.com/thread/Meta-Programming-System-740). I also tried adding conditions to the various rules to control when they are applied, but did not succeed. My idea was to query for the applied root mapping rule and select the respective reduction rules according to that root mapping, yet I could not figure out how to do that, or whether it is even possible... any help would be greatly appreciated. Thanks a lot!

cheers,

phil
22 comments
---
As I mentioned in that thread, I was able to make this use case work using the following steps, plus explicit generator priorities to run the first step before the second:
1. Branch the input model by creating Scala/Datalog roots.
2. Reduce the model to Scala/Datalog with reduction rules whose condition is: the containing root is an instance of the Scala/Datalog container.

I suggest debugging the generation process by:
1. Saving transient models and checking the results of the first- and second-step reductions.
2. Modifying the conditions to switch one rule off, letting the other run, and checking the results in the transient models.

It looks like Scala is ignored because the Datalog reduction runs first, so there is nothing left to transform to Scala anymore.
---
hi Alex,

thanks for the reply and the suggestions, I'll give it another try on the weekend.

cheers,

phil
---
Hi,

I tried it as suggested (branching and priorities), but now I always get an error when the second root mapping, for Scala, is applied (the error message is "– – was input: null"). It also doesn't change if I set "keep input root" to true in either root mapping. I also don't really understand how to check the instance type of the containing root (I think the check is supposed to be a condition on the reduction rule in the respective mapping configuration, but I can't figure out how to access a container there to perform the instance-of check).
Thanks!

phil
---
The error is resolved; I'm still figuring out how to check the containing root.
---
Something like: node.containing root.isInstanceOf(<Concept>) can be used there.
---
Hi,

it seems your suggestion is not applicable here: regardless of which node of the input model is processed, both reduction rules need to be applied. What I was trying to do is add some kind of condition that, depending on the root mapping rule, triggers the application of the proper reduction rule, but that doesn't seem to work. Since both rules always need to be applied to a node, a condition that only checks the input model is not what I need. Anyway, thanks for your suggestions!
Any other suggestions on how to add conditions that constrain the application of a reduction rule based on the root mapping rule (if this is possible), or a workaround?
Thanks!

cheers,

phil
---
It seems you misunderstood me.

What I was suggesting is to "branch" the root node in a separate generation step. Two different root nodes (one for Scala and one for Datalog) are then created for the original node. As a result of this step, both nodes contain a copy of the original node's contents (still invalid in the context of the Scala or Datalog language). In other words, no Scala- or Datalog-specific reductions are applied in this step, so the resulting model contains a mixture of Scala/Datalog roots and original language constructs.

In the next generation step you can perform context-specific reductions on the root nodes' internals. At this point you can use the suggested condition: the original node was cloned in the previous step, and there are now two root nodes in the input model, so you can apply different reductions based on the context (the concept of the containing root) and create different output for each target language.
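The two-step scheme described above can be modeled outside MPS. Below is a toy Python sketch (plain Python, not MPS generator code; all class and function names are illustrative assumptions): step one branches each original root into a Scala and a Datalog container holding a copy of the still-unreduced content, and step two picks a reduction per node based on the containing root, mimicking a condition like `node.containing root.isInstanceOf(...)`.

```python
import copy

class Node:
    """Minimal stand-in for an AST node (illustrative, not MPS's SNode)."""
    def __init__(self, concept, children=None):
        self.concept = concept
        self.children = children or []

def branch(original_root):
    """Step 1: create one Scala and one Datalog container root, each
    holding a copy of the original, still-unreduced content."""
    return [Node("ScalaRoot", [copy.deepcopy(original_root)]),
            Node("DatalogRoot", [copy.deepcopy(original_root)])]

def reduce_root(root):
    """Step 2: choose the reduction per node from the containing root,
    mimicking `node.containing root.isInstanceOf(<Concept>)`."""
    target = "scala" if root.concept == "ScalaRoot" else "datalog"
    def walk(node):
        return f"{target}:{node.concept}(" + ",".join(walk(c) for c in node.children) + ")"
    return walk(root)

for r in branch(Node("Rule", [Node("Atom")])):
    print(reduce_root(r))
# prints:
# scala:ScalaRoot(scala:Rule(scala:Atom()))
# datalog:DatalogRoot(datalog:Rule(datalog:Atom()))
```

The point of the model: the same input subtree ends up twice in the intermediate model, and only the surrounding container decides which reduction applies.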

HTH.
---
I think I know what you meant. I have the mapping and the two resulting roots (I can actually see them when saving the transient models, and I also get all the resulting output files, yet with no Scala reduction rules applied). However, what I don't know is how to express those conditions for the reduction rules. Checking against concepts of my own language is futile, as they contain no information about which reduction rule to apply. So I need to check against the Scala or Datalog container that is the root of the respective output, and this is exactly what I don't know how to do: checking the context of the containing root to see whether it is the Scala or the Datalog container (assuming such a check is possible).

thanks for any help!

cheers,

phil
---
You need to split your mapping configuration into (at least) two configurations. The first one contains only the root mapping rules; the second one contains the reduction rules. In the mapping priorities tab of the generator, create a rule to generate the first configuration strictly before the second one.
I guess this is what Alex meant by "branch root node on a separate generation step".

By doing this, the input model for your reduction rules will contain your Scala and Datalog roots. Then the condition node.containing root.isInstanceOf(...) should work.
---
Hi Sascha,

thanks for your hints. I forgot to mention it when describing my steps, but I have already added the mapping priorities, enforcing that the branching is done before the reduction rules are applied; the reductions are also in separate mapping configurations (like the branching configuration), and the branching works (I can see everything in the transient models). So it is really only this node.containing root.isInstanceOf(...) condition (or another kind of condition) where I am stuck: I don't know how to access the Scala or Datalog container to apply the specific reduction rule. So far only the Datalog rules are applied; no Scala rules except the root mapping rule.
Thanks again for any help or hints.

cheers,

phil
---
I created a working sample project that hopefully will help you.

multioutput-demo.zip (125KB)
---
thank you so much! I actually unpacked the zip and somehow suddenly seemed to get it... in any case, your example put me back on track!

cheers,

phil
---
Hi again,

is there any chance the above solution does not work when references are involved? The references from the Scala container currently point to nodes of the Datalog container...

Thanks!

cheers,

phil
---
The problem with references is that the generator has to update them to point to the copied nodes in the output model. In this case there are two output nodes for the same input node (because two root mapping rules are applied to the same input root), and the generator always takes the first one. As a result, all references point to the nodes produced by the first root mapping rule.

I solved this by cloning the input root in a pre-processing script using the snode copy operation, which creates a new (sub-)tree with new node IDs and correctly resolves references to the new nodes. Each root mapping rule is then applied to its own copy of the input root.

Have a look at the updated sample project for details.

multioutput-demo2.zip (173KB)

Please notice that for both root mapping rules "keep input root" is now set to "default".
---
Thanks for your hint, but it does not quite solve my concrete problem: it's not the InputChilds that may contain references; instead, the InputRoot can contain other elements that may contain references. Besides, I'm also dealing with inter-model references (in case those need special treatment). But starting from your hints, I think I'll find a solution. Thanks a lot!
By the way, where can one find documentation that would allow somebody to come up with a solution like yours? Either I'm blind, or the tutorials and screencasts do not really provide those answers. I'm just looking for more documentation so I don't always have to bother people here... thanks for any hint!

cheers,

phil
---
The solution should be the same. Important is the pre-processing script and the condition for the root mapping rules.

There is a very extensive and complete documentation with lots of examples. It's called source code ;-). Seriously, this is where I get my knowledge from.
---
I thought about that, but I just wanted to ask and clarify whether there is anything else ;) anyway, thanks a lot!
---
Unfortunately, the documentation is far from complete at the moment...

Concerning the problems you have: as Sascha rightly mentioned, you have to create a copy of the original node to avoid the problem of incorrect link target resolution. You can do this directly in the inspector of the corresponding $COPY_SRC$ macro: instead of passing node as the argument of the macro, you can use node.copy.

I suppose you now have a separate "major" generator step performing the branching. In this step you create two new root nodes (Scala + Datalog) for each input one. Inside each root node template you have to perform $COPY_SRC$ for all child elements of the original node, so you can modify those $COPY_SRC$ macros to use node.copy instead of node.
---
It's important that the referencing and the referenced node are copied in the same operation; only then are the references updated correctly. If there are cross-root references, you have to copy all roots at the same time. In that case, instead of node.copy you need to use jetbrains.mps.smodel.CopyUtil.copy(List<SNode> nodes).
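To illustrate why copying the referencing and the referenced node together matters, here is a small Python model of copy-with-reference-remapping (a sketch only; this is not MPS's actual CopyUtil, and all names are assumptions): every node in the copied set is cloned with a fresh id first, then each reference whose target was also cloned is retargeted to the clone, while references pointing outside the copied set stay untouched.

```python
import itertools

_ids = itertools.count(1)

class SNode:
    """Toy node: fresh numeric id, children, and an optional reference
    to another node (illustrative; not the real MPS SNode)."""
    def __init__(self, name, children=(), ref=None):
        self.id = next(_ids)
        self.name, self.children, self.ref = name, list(children), ref

def copy_roots(roots):
    """Copy several (sub)trees in one operation: clone every node with a
    fresh id first, then retarget references whose target was cloned too."""
    mapping = {}          # original node -> its clone
    def clone(n):
        c = SNode(n.name, [clone(ch) for ch in n.children], n.ref)
        mapping[n] = c
        return c
    copies = [clone(r) for r in roots]
    def remap(n):
        if n.ref in mapping:      # target was copied too -> retarget
            n.ref = mapping[n.ref]
        for ch in n.children:
            remap(ch)
    for c in copies:
        remap(c)
    return copies

decl = SNode("decl")
use = SNode("use", ref=decl)
root = SNode("root", [decl, use])
[copied] = copy_roots([root])
print(copied.children[1].ref is copied.children[0])  # prints True
```

If `decl` lived in a different root that was *not* passed to `copy_roots`, the copied `use` would keep pointing at the original `decl`, which is exactly the broken-reference situation described above.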
---
So far I can only say that Sascha's solution is the one I'm currently trying, though Scala still does not work (this time all references are broken, but not for Datalog, although generation is identical for both...). Using node.copy or CopyUtil.copy() in the root mappings results in all references being broken. Still trying to get it to work somehow.

It seems that during cloning in the pre-processing script, the references don't get updated: they keep pointing to the nodes in the original model, which in turn becomes one of the Datalog containers. Hence, in the Scala container, no references can be resolved, as they keep pointing into the Datalog container (the original model).
---
Well, finally, good news! I used Sascha's code, in fact the pre-processing script, and extended it as follows:

1) create an additional mapping(originalModel, copiedModel)
2) create an additional set containing the original models
3) for each copied model in copies, check whether it contains cross-model references; if there are none, continue; otherwise, iterate through the list of cross-model references and (the tricky part) do the following:
  • find the copied target model in the mapping from (1) by searching for the referenced node in the original model (the key in the mapping)
  • search the copied target model (retrieved from the mapping in the prior step) for the target node to be referenced (instead of the one from the original model), using its name (uniqueness of names is assured, hence this works fine)
  • modify the cross-model reference in the copied model (the copy, before it is added to copies) to reference the target node in the copied target model (a simple assignment)
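The steps above can be modeled in a few lines of plain Python (a sketch under the same assumptions as the post, notably unique node names; none of this is MPS code, and all names are illustrative):

```python
class Model:
    """Toy model: a named container of nodes (illustrative only)."""
    def __init__(self, name):
        self.name, self.nodes = name, []

class MNode:
    """Toy node: registers itself with its model; `ref` may point into
    another model (a cross-model reference)."""
    def __init__(self, name, model, ref=None):
        self.name, self.ref = name, ref
        self.model = model
        model.nodes.append(self)

def fix_cross_model_refs(copies):
    """copies: {original Model: its copied Model}. For every copied node
    whose reference still points into an original model, look up the
    copied target model, then find the new target node by name
    (uniqueness of names is assumed, as in the post)."""
    for copied in copies.values():
        for node in copied.nodes:
            target = node.ref
            if target is not None and target.model in copies:
                copied_target_model = copies[target.model]
                node.ref = next(n for n in copied_target_model.nodes
                                if n.name == target.name)

# Two original models; b (in m2) holds a cross-model reference to a (in m1).
m1, m2 = Model("m1"), Model("m2")
a = MNode("a", m1)
b = MNode("b", m2, ref=a)
# Naive copies: the cross-model reference still points at the original a.
c1, c2 = Model("c1"), Model("c2")
ca = MNode("a", c1)
cb = MNode("b", c2, ref=a)
fix_cross_model_refs({m1: c1, m2: c2})
print(cb.ref is ca)  # prints True
```

The dictionary plays the role of the mapping(originalModel, copiedModel) from step 1, and the name-based `next(...)` lookup corresponds to the name search in step 3.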

thanks to MPS this is only a few lines of code, and now my generator, producing two-way output and resolving cross-model references, works. Again, thanks Alex and Sascha for your hints!

Have a nice evening,

phil
---
Ok, yes - you are right. ;-)