Using the dataflow aspect

I'm about to create some dataflow-related tasks for my language, so the dataflow aspect should be suitable. But some information is missing from the user's guide. Perhaps someone can help me here.
  1. How do I specify what a variable is and where it is declared in the context of the dataflow analysis?
  2. How can I work with the dataflow analysis in the type system? For example, how can I write a rule that produces an error if a variable is written more than once?
  3. Is the list of statements in the dataflow language in the user guide complete? There is a return-from-subroutine statement, but no call-subroutine statement. Is the code for statement equivalent to calling a subroutine?

That's it for now, but I guess there will be more questions in the coming days.

Kind regards!
Hi fabma,

From your first question, I can tell that your understanding of how data flow works differs a bit from how it actually works in MPS. Let me say a bit on the topic.
1. In general, we could have produced a language that knows about "variables", "assignments", etc., but we did something more general. The data flow aspect describes read-write accesses for *nodes*: it describes how the execution of a particular node in a program affects other nodes.
2. Once you've written a data flow aspect, MPS will not find all the errors for you. You can only add analyzers that work on the program view provided by the data flow subsystem. These analyzers can check whatever you want (multiple assignments, null and not-null checking, etc.)
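To make the second point concrete, here is a toy sketch in plain Python. It is not MPS code and uses none of the MPS API; it only mimics the idea of an analyzer that walks the flat instruction list a data flow builder might produce and flags nodes ("variables") that are written more than once. All names here are invented for illustration.

```python
# Toy analyzer sketch -- NOT the MPS API. It assumes the data flow
# builder has already been flattened into (operation, node) pairs.
from collections import Counter

def find_multiple_writes(instructions):
    """instructions: list of (op, node) pairs, op in {"read", "write"}.
    Returns the nodes that are written more than once."""
    writes = Counter(node for op, node in instructions if op == "write")
    return [node for node, count in writes.items() if count > 1]

program = [
    ("write", "x"),   # x = 1
    ("read",  "x"),   # print(x)
    ("write", "y"),   # y = 2
    ("write", "x"),   # x = 3  <- second write to x
]
print(find_multiple_writes(program))  # ['x']
```

In MPS itself, the analyzer would of course traverse the program view provided by the data flow subsystem rather than a list of tuples, and a non-typesystem rule would report the offending nodes as errors.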

Now, back to your questions:
1) You do not specify this in the DataFlow aspect of a node; it's your analyzers that know what to analyze (and where to look for your "variables").
2) Please look at the check_NullableStates rule in baseLanguage; there you can find a full example (an analyzer plus a non-typesystem rule that executes the analyzer and shows the errors).

As for 3, I'm not sure, so I won't confuse you with wrong info; better to ask the other MPS developers for help on this topic.

Hi fabma,
the code for statement simply executes the data flow builder for the node you call it on.
As far as I know, the data flow analysis framework of MPS is currently not suited for interprocedural analyses.
You could either extend it to handle interprocedural analyses, or take a closer look at the not-null analysis in baseLanguage. There, interprocedural analysis is achieved by annotating methods with e.g. @notnull and then assuming that the return value of such a method is not null. You can then check whether this assumption holds in a separate intraprocedural analysis of the called method.

Another approach would be to statically determine which method is called, execute the data flow builder for the method's body, and, in case of a return or end statement, jump back to the call site. There you will run into the problem that a data flow builder for a node should be executed only once; otherwise the jumps/ifjumps to labels will break. If you follow this approach, you could use a copy of the called method's body instead of the original in the data flow builder, and later restore the original elements if you need to know which AST elements they correspond to...
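The "inline a copy of the callee" idea can be sketched outside MPS like this (plain Python, every name invented for illustration): each call site splices in a freshly renamed copy of the callee's instructions, so that labels from different call sites never collide, and a return becomes a jump back to the call site.

```python
# Hypothetical sketch of inlining a copy of the callee -- NOT MPS code.
# Instructions are tuples; labels get a per-copy suffix so the jumps
# produced by one inlined copy never clash with another.
import itertools

_copy_id = itertools.count()

def inline_call(caller, call_index, callee):
    """Replace the instruction at call_index in caller with a renamed
    copy of callee's instructions; 'ret' becomes a jump back."""
    suffix = f"_{next(_copy_id)}"
    body = []
    for instr in callee:
        if instr[0] in ("label", "jump"):
            body.append((instr[0], instr[1] + suffix))
        elif instr[0] == "ret":
            body.append(("jump", "after_call" + suffix))
        else:
            body.append(instr)
    body.append(("label", "after_call" + suffix))
    return caller[:call_index] + body + caller[call_index + 1:]

callee = [("label", "start"), ("write", "x"), ("ret",)]
caller = [("read", "a"), ("call", "f"), ("read", "x")]
print(inline_call(caller, 1, callee))
```

This sidesteps the "builder runs only once per node" restriction in the same spirit as the copy-and-restore trick described above, at the cost of having to map the copies back to the original AST elements afterwards.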

Thanks a lot for your replies!
I've taken some time and I believe I've understood the dataflow concepts now. But I still have problems with the implementation.
I'm thinking about a language with parallel dataflow. The current approach assumes purely sequential statements; it seems not to be possible to have two instructions with the same index. So I think I have to extend the dataflow language as well as the classes Program and Instruction. Since I reuse baseLanguage's expressions, the normal dataflow descriptions should be valid, too.
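As a thought experiment (plain Python, not MPS; all names invented), the extra machinery a parallel dataflow needs is essentially a merge at the join point: each parallel branch is analyzed on its own, and the join combines the branch results, so a write in either branch counts.

```python
# Toy sketch of a parallel-join merge -- NOT MPS code. Each branch is
# analyzed independently; the join sums the write counts, so writes
# from all parallel branches are accounted for.
from collections import Counter

def analyze_branch(instructions):
    """Count writes per node within one parallel branch."""
    return Counter(node for op, node in instructions if op == "write")

def join_parallel(*branch_counts):
    # At a parallel join, writes from all branches may have happened,
    # so counts are summed instead of following a single path.
    total = Counter()
    for counts in branch_counts:
        total += counts
    return total

left   = analyze_branch([("write", "x")])
right  = analyze_branch([("write", "x"), ("write", "y")])
merged = join_parallel(left, right)
print([node for node, k in merged.items() if k > 1])  # ['x']
```

In a real extension of Program and Instruction one would presumably need a join instruction of this kind rather than a single sequential index per instruction.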
Does anybody have experience with something similar?
