MPS, like any other language workbench, supports various forms of constraint checks that lead to errors or warnings, annotated directly on the element that fails the check. However, some kinds of checks are different in nature: they may be global and require expensive algorithms to compute, they may be used to create some kind of overview or report (where error annotations spread all over the code are not suitable), or you may want to mark a failed constraint as ok and ignore it in the future. To support these use cases, we have added Assessments to mbeddr.
Here is an example assessment: it highlights all requirements in a model that have no effort specified. A missing effort is a problem, and you may want to keep track of the requirements for which you have yet to perform your estimation. The assessment below shows an example result (the results are references, so you can navigate to the offending requirement):
Note the colors. Green results are those that are marked as ok, i.e. judged by the user to not be a real problem. Red results are those that have been added during the last update of the assessment. Black ones have been there from previous updates. Using the colors, you can keep track of the current state of the assessment, as well as of its changes. Assessment results are part of the model, so they are persisted and shared. They are intended to be actively maintained and managed (in contrast to regular error or warning annotations in the code). You can also set the “must be ok” flag to true, in which case the Assessment itself gets an error annotation if the results contain non-ok entries.
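The update behavior just described can be sketched as a simple diff against the previous results: entries that are still reported keep their ok flag, while entries that were not there before are marked as new (shown red). This is a hypothetical illustration in plain Java, not mbeddr's actual implementation; all names are made up.

```java
import java.util.*;

// Sketch of an assessment update step: recompute the query, keep the
// user-set "ok" flags for entries that are still reported, and mark
// entries that were not in the previous result set as NEW.
public class AssessmentUpdate {
    enum State { NEW, EXISTING }

    static class Entry {
        final String target;   // reference to the offending element
        boolean ok;            // user judged this entry as not a real problem
        State state;
        Entry(String target) { this.target = target; }
    }

    static List<Entry> update(List<Entry> previous, Set<String> queryResult) {
        Map<String, Entry> byTarget = new HashMap<>();
        for (Entry e : previous) byTarget.put(e.target, e);
        List<Entry> updated = new ArrayList<>();
        for (String target : queryResult) {
            Entry old = byTarget.get(target);
            if (old != null) {           // already reported: keep the ok flag
                old.state = State.EXISTING;
                updated.add(old);
            } else {                     // newly found by this update
                Entry fresh = new Entry(target);
                fresh.state = State.NEW;
                updated.add(fresh);
            }
        }
        return updated;   // entries no longer matched by the query drop out
    }

    public static void main(String[] args) {
        Entry r1 = new Entry("Req1");
        r1.ok = true;  // the user already marked this one as ok
        List<Entry> result =
            update(List.of(r1), new LinkedHashSet<>(List.of("Req1", "Req2")));
        for (Entry e : result)
            System.out.println(e.target + " ok=" + e.ok + " state=" + e.state);
    }
}
```

Because the ok flag survives updates, a once-reviewed entry stays green even as the surrounding results change.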
The requirements check above is of course just an example; the assessment facility is extensible. You can define new queries/analyses, arbitrary result structures and arbitrary summaries (the one shown is the default and simply counts the entries).
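The extension points can be sketched as follows: a query that produces result entries, a pluggable summary, and the “must be ok” check. This is an illustrative Java sketch under assumed names; mbeddr's real assessment language is defined as an MPS DSL, not plain Java.

```java
import java.util.*;

// Illustrative sketch of the assessment extension points.
public class AssessmentSketch {
    record Result(String target, boolean ok) {}

    // A query inspects a model and returns the offending elements.
    interface Query<M> { List<Result> run(M model); }

    // The default summary simply counts the entries.
    static String defaultSummary(List<Result> results) {
        return results.size() + " entries";
    }

    // With "must be ok" set, the assessment itself is erroneous
    // as long as any entry has not been marked ok by the user.
    static boolean hasError(List<Result> results, boolean mustBeOk) {
        return mustBeOk && results.stream().anyMatch(r -> !r.ok());
    }

    public static void main(String[] args) {
        // Example query: requirements without an effort estimate
        // (the model is faked as a map from requirement to effort).
        Map<String, Integer> efforts = new LinkedHashMap<>();
        efforts.put("Req1", 5);
        efforts.put("Req2", null);
        Query<Map<String, Integer>> noEffort = model -> {
            List<Result> out = new ArrayList<>();
            for (var e : model.entrySet())
                if (e.getValue() == null) out.add(new Result(e.getKey(), false));
            return out;
        };
        List<Result> results = noEffort.run(efforts);
        System.out.println(defaultSummary(results));   // 1 entries
        System.out.println(hasError(results, true));   // true
    }
}
```

A custom summary would replace `defaultSummary` with, say, a sum over effort values, which is essentially what the milestone use case below does.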
We are currently using assessments in a customer project, where the facility sums up the efforts for the various project milestones and, at the same time, flags requirements that have no effort specified as errors. It has proven very useful.