This is an auto-generated question from the MPS Slack community: When generating code from our model, we're sometimes hitting this timeout: https://github.com/JetBrains/MPS/blob/ccf01300ff0b5df8485ed1d5e335ff97c710980f/lan[…]re/source_gen/jetbrains/mps/lang/core/plugin/TextGen_Facet.java ('Timeout while waiting for model text outcome, model skipped') which causes the process to fail. With the timeout being hardcoded to 3 minutes, it seems intentional and maybe we're doing something horribly wrong... We're generating several thousand small classes, is this just outside of the limits of MPS? I might be able to refactor the generated code to have fewer classes but more commands, could that help? Any other common causes for this timeout? Otherwise, would it be possible to get this changed to a configurable timeout length?
6 comments
Sorry to hear you hit the limit. There's no particular reason for it to be 3 minutes; it was just an attempt to deal with hanging textgen scenarios (these used to happen occasionally), and it seemed better to fail textgen than to hang indefinitely and eventually bring down the whole MPS instance. The limit was set from the experience that no known MPS model took more than a few *seconds* to produce text output, so a limit of 3 minutes seemed fair enough not to wait too long for a hanging process. We could definitely make this limit configurable (could you please file an issue?), although I'd definitely look into the textgen process to find out why it takes that much time. I'd say it would take thousands of roots, each with huge text output, to reach the limit. The textgen process runs in parallel (I don't recall exactly, but the number of textgen threads should correlate with the number of CPUs, IIRC). If the model is indeed that huge, it is perhaps worth splitting it into a few smaller ones, not only for the sake of the textgen limit, but also to optimize the whole transformation process.
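For context, the pattern described above (textgen workers running in parallel on a pool sized to the CPU count, with a hard per-model timeout that fails the model instead of hanging) can be sketched roughly like this in plain Java. All names here are hypothetical; this is not MPS's actual implementation, just an illustration of the mechanism:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of a timeout-bounded textgen step:
// submit the task to a pool and wait a fixed time for the outcome.
public class TextGenSketch {
    static final long TIMEOUT_MINUTES = 3; // the hardcoded limit from the thread

    public static String generateWithTimeout(Callable<String> textgenTask) {
        // Pool sized to the CPU count, as the answer above suggests MPS does.
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            Future<String> outcome = pool.submit(textgenTask);
            try {
                return outcome.get(TIMEOUT_MINUTES, TimeUnit.MINUTES);
            } catch (TimeoutException e) {
                outcome.cancel(true); // interrupt the hanging task
                throw new IllegalStateException(
                        "Timeout while waiting for model text outcome, model skipped");
            } catch (InterruptedException | ExecutionException e) {
                throw new IllegalStateException(e);
            }
        } finally {
            pool.shutdownNow();
        }
    }
}
```

The design point is that a bounded `Future.get` turns a hung worker into a localized, reportable failure rather than a stuck IDE.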
Thanks, I've filed an issue to let me increase the limit. Some context: a successful build of our model generates about 4000 class files, about 15 MB in total. Splitting the model might help. I would also like to see if I can figure out what takes so long in our case. One approach could be to build MPS from source, run it with a debugger attached, and break at the error message to find more hints. Is there any other approach you could recommend?
I built MPS from source (it was easier than expected) and ran it with a profiler attached. It turns out the textgen threads spend the majority of their time in `AnonymousClass__BehaviourDescriptor.getIndexInContainingClass_id`. Our code actually generates a lot of anonymous classes: besides the 4000 top-level classes in source_gen, I can find about 14000 more inner class files in classes_gen. Some of these are nested several levels deep, e.g. `Classname$1$13$26$1.class`, or the number of classes at one level is high, e.g. `Classname$1$260.class`. Looking at the code for getIndexInContainingClass, it seems to me the time complexity of this method is at least quadratic in the number of anonymous classes inside an outer class.
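To illustrate the suspected complexity: if each anonymous class determines its own `$N` index with a fresh linear scan over its container's anonymous classes, then indexing all n of them costs 1 + 2 + … + n = n(n+1)/2 comparisons in total, i.e. quadratic growth. A toy model of that pattern (hypothetical code, not the real MPS behaviour descriptor):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of per-class index lookup by linear scan, and the total work
// it implies when every anonymous class in one container does the scan.
public class AnonIndexSketch {
    // One linear scan per class, counting comparisons in steps[0].
    static int indexOf(List<String> anonClasses, String target, long[] steps) {
        for (int i = 0; i < anonClasses.size(); i++) {
            steps[0]++;
            if (anonClasses.get(i).equals(target)) {
                return i + 1; // 1-based, matching the $N naming scheme
            }
        }
        return -1;
    }

    // Total comparisons to index every anonymous class in one container.
    static long totalSteps(int n) {
        List<String> anon = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            anon.add("Classname$" + i);
        }
        long[] steps = {0};
        for (String c : anon) {
            indexOf(anon, c, steps);
        }
        return steps[0]; // n*(n+1)/2
    }
}
```

With 260 siblings at one nesting level, as in `Classname$1$260.class`, that is already tens of thousands of comparisons for a single container, repeated across thousands of roots.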
So it seems we should reduce the number of anonymous classes we're generating. Cleaning up our models is a first step, since there is some duplication. Next I want to see whether using lambda expressions instead makes a difference. Lastly, we might have to rewrite parts of our generators and the surrounding code to use different semantics.
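A sketch of the lambda idea, assuming plain Java compilation semantics: javac emits a separate `Outer$N.class` file for every anonymous class, whereas a lambda is compiled to an `invokedynamic` call site plus a synthetic private method, producing no `$N` class file. Whether MPS's baseLanguage textgen sees the same benefit (fewer anonymous-class nodes to index) would need to be verified; this just shows the equivalent shapes:

```java
import java.util.function.Supplier;

// The same single-method behaviour expressed two ways.
public class LambdaSketch {
    static Supplier<String> asAnonymousClass() {
        // Compiled to its own class file, e.g. LambdaSketch$1.class.
        return new Supplier<String>() {
            @Override
            public String get() {
                return "hello";
            }
        };
    }

    static Supplier<String> asLambda() {
        // Compiled to an invokedynamic call site; no separate $N class file.
        return () -> "hello";
    }
}
```

The two are behaviourally equivalent here, but only the anonymous-class form adds to the `$N` population that `getIndexInContainingClass` has to enumerate.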
TLDR for everybody: don't end up generating thousands of anonymous classes; textgen will choke on them.
This is an auto-generated question from the MPS Community Slack workspace. If you want to comment on the question, do it from the Slack workspace.

Post is closed for comments.