Using two generator models to generate different text output
Hi,
although this question seems to have been asked a few times in the forum already, none of the threads could help me further (I already tried the various suggestions). The problem I'm facing: I need to generate output from my language into two different target languages (Scala and Datalog). For Datalog, the generation worked (fairly trivial); however, as soon as I added a second generator model for generating Scala source, problems arose. The root mapping rules are applied properly, but the reduction rules get mixed up completely: the reduction rules of the Scala generator model are ignored entirely, and the ones for the Datalog code are applied instead. Does anybody here have a solution to this problem? Setting generator priorities or using branch mappings did not work (http://forum.jetbrains.com/thread/Meta-Programming-System-740). I also tried adding conditions to the various rules to control when they are applied, but did not succeed. My idea was to query for the applied root mapping rule and select the reduction rules according to that root mapping, yet I could not figure out how to do that, or whether it is possible at all. Any help would be greatly appreciated. Thanks a lot!
cheers,
phil
1. Branch the input model by creating Scala/Datalog roots.
2. Reduce the model to Scala/Datalog with reduction rules using the condition: the containing root is an instance of the Scala/Datalog container.
I can suggest debugging the code generation process by:
1. saving transient models and checking the results of the first-/second-step reductions
2. modifying the conditions to switch one rule off and let the other one run, then checking the results in the transient models
It seems like Scala is ignored because the Datalog rules fire first, and there is nothing left to transform to Scala anymore.
Thanks for the reply and the suggestions; I'll give it another try on the weekend.
cheers,
phil
I tried it as suggested (branching and priorities), but now I always get an error when the second root mapping for Scala is applied (the error message is "– – was input: null"). It also doesn't change anything if I set "keep input root" to true in either root mapping. And I don't really understand how to check the instance type of the containing root (I think the check is supposed to be a condition on the reduction rule in the respective mapping configuration, but I can't figure out how to access a container there to perform the instanceof check).
Thanks!
phil
it seems that your suggestion is not applicable here: regardless of which node of the input model is processed, both reduction rules need to be applied. What I was trying to do is add some kind of condition which, depending on the root mapping rule, triggers the application of the proper reduction rule, but that doesn't seem to work. With your suggestion, since both rules always need to be applied to a node, a condition that only checks the input model is not what I need. Anyway, thanks for your suggestions!
Any other suggestions on how to add conditions that constrain the application of a reduction rule based on the root mapping rule (if this is possible), or a workaround?
Thanks!
cheers,
phil
I was trying to suggest "branching" the root node in a separate generation step. In this case two different root nodes (one for Scala and one for Datalog) are created for the original node. As a result of this step, both nodes contain a copy of the original node's contents (invalid in the context of the Scala or Datalog language). In other words, the Scala- or Datalog-specific reductions are not applied in this step, so the resulting model contains a mixture of Scala/Datalog roots and original-language constructions.
In the next generation step you can perform context-specific reductions on the root nodes' "internals". At this point you can use the suggested condition: the original node was cloned in the previous step and there are now two root nodes in the input model, so you can apply different reductions based on the context (the concept of the containing root) and create different output for each target language.
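Schematically (with placeholder names, not taken from your project), the transient model after the branching step would look like this:

    ScalaContainer      <- new root created by the Scala root mapping rule
        ... copy of the original node's contents, not yet reduced ...
    DatalogContainer    <- new root created by the Datalog root mapping rule
        ... copy of the original node's contents, not yet reduced ...

The next major step then reduces those contents differently depending on the concept of the containing root.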
HTH.
Thanks for any help!
cheers,
phil
I guess this is what Alex meant by "branch root node on a separate generation step".
By doing this, the input model for your reduction rules will contain your Scala and Datalog roots. Then the condition node.containing root.isInstanceOf(...) should work.
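For illustration, the condition of the Scala-specific reduction rule might look roughly like this (a sketch in BaseLanguage/smodel; ScalaContainer is a placeholder for your actual root concept, and the exact parameter list of the concept function depends on the MPS version):

    // condition of the Scala reduction rule (sketch): only reduce nodes that
    // ended up under the Scala root created by the branching step
    (node, genContext, operationContext)->boolean {
      node.containing root.isInstanceOf(ScalaContainer);
    }

The Datalog rules would use the mirrored check, so each set of reductions only fires inside its own branch.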
Thanks for your hints. I forgot to mention it before, when describing the steps I took, but I have already added the mapping priorities, enforcing that the branching is done prior to applying the reduction rules, which are also in separate mappings (like the branching configuration; the branching works, I can see everything in the transient models). So it is really only this node.containingRoot.isInstanceOf(...), or some other kind of condition, where I am stuck: I don't know how to access the Scala or Datalog container to apply the specific reduction rule. So far only the Datalog rules are applied, no Scala rules except the root mapping rule.
Thanks again for any help or hints.
cheers,
phil
multioutput-demo.zip (125KB)
cheers,
phil
Is there any chance that the above solution does not work when using references? The references from the Scala container currently point to nodes of the Datalog container...
Thanks!
cheers,
phil
I solved this by cloning the input root in a pre-processing script using the snode-copy-operation, which creates a new (sub-)tree with new node IDs and resolves references correctly to the new nodes. Each root mapping rule is then applied to its own copy of the input root.
Have a look at the updated sample project for details.
multioutput-demo2.zip (173KB)
Please note that for both root mapping rules "keep input root" is now set to "default".
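For reference, the script body boils down to something like this (a sketch; Container stands in for the actual root concept, and the exact script signature differs between MPS versions):

    // pre-processing script (sketch): give every root mapping rule its own copy
    // of the input root; node.copy creates fresh node IDs and re-resolves the
    // internal references to the new nodes
    nlist<Container> originals = model.roots(Container).toList;  // snapshot before adding clones
    foreach root in originals {
      node<Container> clone = root.copy;
      model.add root(clone);
    }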
By the way, where can one find documentation that leads to a solution like yours? Either I'm blind or the tutorials and screencasts do not really provide those answers. I'm just looking for more documentation so I don't always have to bother people in here... thanks for any hint!
cheers,
phil
There is very extensive and complete documentation with lots of examples. It's called source code ;-). Seriously, this is where I get my knowledge from.
Concerning the problems you had: as Sascha correctly mentioned, you have to create a copy of the original node to avoid the problem of incorrect link target resolution. You can do it directly in the inspector of the corresponding $COPY_SRC$ macro: instead of passing node as the argument of this macro, you can use node.copy.
I suppose you now have a separate "major" generator step performing the branching. In this step you create two new root nodes (Scala + Datalog) for each input one. Inside each root node template you have to perform $COPY_SRC$ for all child elements of the original node, so you can modify $COPY_SRC$ to use node.copy instead of node there.
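In other words, the mapped node query in the inspector of the $COPY_SRC$ macro would look roughly like this (a sketch; the parameter list may vary by MPS version):

    // $COPY_SRC$ mapped node (sketch): hand the macro a copy of the input node
    // instead of the node itself, so the Scala and Datalog branches do not
    // end up sharing reference targets
    (node, genContext, operationContext)->node<> {
      node.copy;
    }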
It seems that during cloning in the pre-processing script, the references don't get updated: they keep pointing to the nodes in the original model, which in turn becomes one of the Datalog containers. Hence, in the Scala container, no references can be resolved, as they keep pointing into the Datalog container (the original model).
1) create an additional mapping(originalModel, copiedModel)
2) create an additional set containing the original models
3) for each copied model in copies, check whether it contains cross-model references; if there are none, continue; otherwise, iterate through the list of cross-model references and (the tricky part) do the following:
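Roughly sketched (copiedModels, originalModels, crossModelReferences, counterpartOf, and retarget are all hypothetical placeholders for the mapping and set from steps 1 and 2 and for whatever reference API your MPS version provides):

    // post-processing sketch: re-point cross-model references of each copied
    // model from nodes in an original model to their counterparts in the copy
    foreach copy in copiedModels {
      foreach ref in crossModelReferences(copy) {          // hypothetical helper
        if (originalModels.contains(ref.target.model)) {
          retarget(ref, counterpartOf(ref.target));        // hypothetical helpers
        }
      }
    }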
Thanks to MPS, this is only a few lines of code, and my generator, which produces two-way output by resolving cross-model references, now works. Again, thanks to you guys, Alex and Sascha, for your hints!
Have a nice evening,
phil