Create test cases for generation exceptions

Hey,

In my generation process I check various things that cannot be enforced by constraints beforehand. In the error case I throw an exception. Is there any way to build invalid models in a sandbox which expect an exception? An error should only be reported if no exception is thrown.
Of course I know how to do things like this in Java, but I don't know how to catch an exception from the generation process.
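(In plain Java, the pattern being asked for, reporting a failure only when no exception is thrown, might be sketched like this; generate() and the exception type are placeholders for illustration, not MPS API:)

```java
// Hypothetical sketch of an "expect an exception" test in plain Java.
// The generate() method stands in for a generation step that rejects
// invalid models; none of these names come from MPS.
public class ExpectExceptionDemo {

    // Stand-in for a generation step that throws on invalid input.
    static void generate(boolean modelIsValid) {
        if (!modelIsValid) {
            throw new IllegalStateException("invalid model");
        }
    }

    // Returns true if generate() threw as expected; an error should be
    // reported only when NO exception is thrown.
    static boolean expectException(boolean modelIsValid) {
        try {
            generate(modelIsValid);
        } catch (IllegalStateException expected) {
            return true;  // exception thrown: test passes
        }
        return false;     // no exception: test fails
    }

    public static void main(String[] args) {
        System.out.println(expectException(false)); // invalid model: prints true
        System.out.println(expectException(true));  // valid model: prints false
    }
}
```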

Many thanks
Fabian
8 comments
I found the answer myself. The keyword is test aspects; the description can be found here:
http://confluence.jetbrains.net/display/MPSD2/Language+tests+language
BTW, use genContext.show error "error message" -> node; to report errors from the generator.
Thanks for this hint! This is very useful to know.
But in the meantime I've modified my structure so that I have a non-typesystem rule which calls some behavior methods. Is there any chance to bind an error to a node from within the behavior methods? At the moment I pass the error back to the rule, where I use the error command, but this is not very convenient.
I've found a way to test for errors, but not a solution for comparing the output model for some input, so that I can check whether the generator creates the right things. How can I do this?
We have a sandbox solution per language with sample models. On the build server we validate it using the test generation Ant task (mps.test.generation). It generates the models and compares the result with the files stored in the repository, i.e. you have to regenerate the sandbox solution whenever you update your language.

See Generating MPS models from Ant
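(As a rough sketch of what such a build script might look like; the taskdef resource, classpath, and nested element names below are assumptions based on the description above, so check the linked "Generating MPS models from Ant" page for the real syntax:)

```xml
<project name="generation-tests" default="test">
  <!-- ASSUMPTION: load the MPS Ant tasks; the actual jar and resource
       names are documented on the "Generating MPS models from Ant" page. -->
  <taskdef resource="jetbrains/mps/build/ant/antlib.xml"
           classpath="${mps.home}/lib/mps-backend.jar"/>

  <target name="test">
    <!-- mps.test.generation regenerates the sandbox models and compares
         the result with the files stored in the repository. -->
    <mps.test.generation>
      <modules dir="${basedir}/sandbox"/>
    </mps.test.generation>
  </target>
</project>
```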
Behavior methods are just generic model queries. They can be used from anywhere, for example from the editor. If you need a generator-specific method, add a parameter of type gencontext to it. The logging language (which contains the error "message" statement) is intended for debugging purposes.
Thanks for this advice! This will be a possible test method for me. But in early development stages and for unit tests I think this is a bit difficult, because I always compare whole classes. During further development there will be changes to the generated infrastructure outside the tested code, so I have to adapt my test cases even though they are not directly affected by those changes.
At the moment it would be easier to compare things as models. Isn't there any way to get the output model, or even the transient models, in test aspects? I think this would be a helpful test feature.
I'm calling my behavior methods from within a non-typesystem checking rule. There I can use the
error <message> -> <node>
statement, but I don't see any object that I can pass to the behavior methods. At the moment I return my error nodes to the caller of the methods, which is really cumbersome. I guess there is a better way.
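(In plain Java terms, a common alternative to returning error nodes is to pass a collector object into the helper methods and report into it; this is a general sketch of that pattern, with all names invented for illustration, not MPS API:)

```java
import java.util.ArrayList;
import java.util.List;

// General sketch: instead of returning offending nodes from each helper,
// pass an error collector in and let the helpers report directly.
public class ErrorCollectorDemo {

    // Stand-in for a model node an error is attached to.
    record Node(String name) {}

    // Collects (message, node) pairs reported by helper methods.
    static class ErrorCollector {
        final List<String> errors = new ArrayList<>();
        void report(String message, Node node) {
            errors.add(message + " -> " + node.name());
        }
    }

    // Stand-in for a behavior method: it reports into the collector
    // instead of returning the error node to its caller.
    static void checkName(Node node, ErrorCollector errors) {
        if (node.name().isEmpty()) {
            errors.report("name must not be empty", node);
        }
    }

    public static void main(String[] args) {
        ErrorCollector collector = new ErrorCollector();
        checkName(new Node(""), collector);   // reports one error
        checkName(new Node("ok"), collector); // reports nothing
        System.out.println(collector.errors.size()); // prints 1
    }
}
```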
