Reverse semantic traceability


Reverse semantic traceability (RST) is a quality control method that improves verification and helps to ensure the high quality of artifacts by backward translation at each stage of the software development process.

Brief introduction

Each stage of the development process can be treated as a series of “translations” from one language to another. At the very beginning, the project team deals with the customer’s requirements and expectations expressed in natural language. These requirements may be incomplete, vague, or even contradictory. The first step is to specify and formalize the customer’s expectations, translating them into a formal requirements document for the future system. The requirements are then translated into a system architecture, and step by step the project team produces code written in a very formal programming language. There is always a risk of introducing mistakes, misinterpreting something, or losing information during these translations. Even a small defect in a requirements or design specification can cause a large number of defects at the late stages of the project, and such misunderstandings can sometimes lead to project failure or complete customer dissatisfaction.

The most common usage scenarios for the Reverse Semantic Traceability method are:

  • Validating UML models: quality engineers restore a textual description of the domain; the original and restored descriptions are compared.
  • Validating model changes for a new requirement: given the original and changed versions of a model, quality engineers restore the textual description of the requirement; the original and restored descriptions are compared.
  • Validating a bug fix: given the original and modified source code, quality engineers restore a textual description of the bug that was fixed; the original and restored descriptions are compared.
  • Integrating a new software engineer into a team: the new team member is assigned to perform Reverse Semantic Traceability on key artifacts from current projects.

RST roles

The main roles involved in an RST session are:

  • authors of project artifacts (both input and output),
  • reverse engineers,
  • expert group,
  • project manager.

RST process

Define all project artifacts and their relationships

Reverse Semantic Traceability as a validation method can be applied to any project artifact, to any part of a project artifact, or even to a small piece of a document or code. However, performing RST for all artifacts creates overhead and should be well justified (for example, in medical software, where information loss can be critical).

It is the responsibility of the company and the project manager to decide how many project artifacts will be “reverse engineered”. This amount depends on project-specific details: the trade-off matrix and the project’s and company’s quality assurance policies. It also depends on the importance of a particular artifact to project success and the level of quality control applied to that artifact.

The number of RST sessions for a project is defined at the project planning stage.

First of all, the project manager should create a list of all artifacts the project team will produce during the project. These can be represented as a tree with dependencies and relationships. An artifact may occur once (like a vision document) or many times (like risks or bugs). The list may change later in the project, but the reasoning behind decisions about RST activities remains the same.
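
For illustration, such an artifact tree might be modeled as in the following minimal Python sketch; the class, field, and artifact names are hypothetical rather than part of the method:

    from dataclasses import dataclass, field
    from typing import Iterator, List, Tuple

    @dataclass
    class Artifact:
        # A node in the artifact tree; all names here are illustrative only.
        name: str
        occurrences: int = 1              # e.g. a vision document occurs once
        children: List["Artifact"] = field(default_factory=list)

        def walk(self, depth: int = 0) -> Iterator[Tuple["Artifact", int]]:
            # Yield every artifact in the tree together with its depth.
            yield self, depth
            for child in self.children:
                yield from child.walk(depth + 1)

    # Hypothetical tree for a small project.
    vision = Artifact("Vision document")
    requirements = Artifact("Functional requirements")
    requirements.children = [Artifact("System architecture"), Artifact("Test cases")]
    vision.children = [requirements]

    for artifact, depth in vision.walk():
        print("  " * depth + artifact.name)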

Prioritize

The second step is to analyze, for each project artifact, its importance to the project and the level of quality control applied to it.

The importance of a document is the degree of the artifact’s impact on project success and on the quality of the final product. It is measured on the following scale:

  • Crucial (1): the quality of the deliverable is very important for the overall quality of the project and even for project success. Examples: functional requirements, system architecture, critical bug fixes (show stoppers), risks with high probability and critical impact.
  • High (2): the deliverable has an impact on the quality of the final product. Examples: test cases, user interface requirements, major-severity bug fixes, risks with high exposure.
  • Medium (3): the artifact has a medium or indirect impact on the quality of the final product. Examples: project plan, medium-severity bug fixes, risks with medium exposure.
  • Low (4): the artifact has an insignificant impact on the final product quality. Examples: employees’ tasks, cosmetic bugs, risks with low probability.

The level of quality control is a measure of the amount of verification and validation activity applied to an artifact and the probability of miscommunication during its creation:

  • Low (1): no review is planned for the artifact; miscommunication and information loss are highly probable; the information channel is distributed; a language barrier exists; etc.
  • Medium (2): no review is planned for the artifact, but the information channel is not distributed (e.g. the creator of the artifact and the information provider are members of the same team).
  • Sufficient (3): pair development or peer review is performed, and the information channel is not distributed.
  • Excellent (4): pair development, peer review and/or testing are performed; automated or unit testing is done; or tools exist for artifact development and validation.
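
Together, the two ratings indicate which artifacts should be scheduled for RST first. The sketch below assumes a simple combining rule that is not prescribed by the method itself: an artifact’s priority score is the sum of its importance and quality-control ratings, with lower sums handled first.

    # Ratings as defined above: importance runs from 1 (crucial) to 4 (low);
    # level of quality control runs from 1 (low) to 4 (excellent).
    artifacts = {
        "Functional requirements": (1, 1),  # crucial, no review planned
        "Test cases":              (2, 3),  # high importance, peer reviewed
        "Project plan":            (3, 2),
        "Employees' tasks":        (4, 4),  # low importance, excellent control
    }

    # Assumed rule: the lower the combined score, the sooner the artifact
    # should get an RST session.
    for name, (importance, control) in sorted(artifacts.items(),
                                              key=lambda item: sum(item[1])):
        print(f"{name}: combined score {importance + control}")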

Define responsible people

The success of an RST session strongly depends on the correct assignment of responsible people.

Perform Reverse Semantic Traceability of the artifact

Reverse Semantic Traceability starts once the decision to perform RST has been made and resources for it are available.

The project manager defines which documents will be the input for the RST session. For example, the input can include not only the artifact to restore but also some background project information. It is recommended to give the reverse engineers the number of words in the original text so that they have an idea of how much text they should produce, whether one sentence or several pages. Though the restored text need not contain the same number of words as the original, the two values should be comparable.

After that, the reverse engineers take the artifact and restore the original text from it. RST itself takes about one hour per page of text (750 words).
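
These figures allow a rough effort estimate from the original’s word count. In the sketch below, the 750-words-per-hour rate comes from the text above, while the function names and the ±50% comparability tolerance are assumptions for illustration:

    WORDS_PER_HOUR = 750  # about one hour per page of text, per the figure above

    def estimate_rst_hours(original_word_count: int) -> float:
        # Rough effort estimate for restoring a text of the given length.
        return original_word_count / WORDS_PER_HOUR

    def lengths_comparable(original_words: int, restored_words: int,
                           tolerance: float = 0.5) -> bool:
        # Assumed check: the restored text is within +/-50% of the original length.
        return abs(restored_words - original_words) <= tolerance * original_words

    print(estimate_rst_hours(1500))        # 2.0 hours for roughly two pages
    print(lengths_comparable(750, 600))    # True: the lengths are comparable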

Assess the level of quality and make a decision

To complete the RST session, the restored and original texts of the artifact are compared and the quality of the artifact is assessed. The decision about whether, and how much, to rework the artifacts is based on this assessment.

For the assessment, a group of experts is formed. The experts should know the project domain and be experienced enough to assess the quality of the compared artifacts. For example, business analysts would act as experts when comparing a vision statement with a vision statement restored from a scenario.

RST assessment criteria:

  1. The restored and original texts differ greatly in meaning; crucial information is lost.
  2. The restored and original texts have some differences in meaning; important information is lost.
  3. The restored and original texts have some differences in meaning; some insignificant information is lost.
  4. The restored and original texts are very close; some insignificant information is lost.
  5. The restored and original texts are very close; no information is lost.

Each expert gives an assessment, and then the average value is calculated. Depending on this value, the project manager decides whether both artifacts should be corrected, only one of them, or no rework is required.

The decision depends on the average score (a sketch of this rule follows the list):

  • From 1 to 2: the quality of the artifact is poor. Not only is rework of the validated artifact recommended to eliminate defects, but also correction of the original artifact to clear up misunderstandings. One more RST session is required after the rework.
  • More than 2 but less than 3: correction of the validated artifact is required to fix defects and eliminate information loss; a review of the original artifact is recommended to find any vague information that caused misunderstandings. No additional RST session is needed.
  • More than 3 but less than 4: correction of the validated artifact is expected to remove defects and insignificant information loss.
  • Greater than 4: the artifact is of good quality, and no special corrections or rework are required.
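
A minimal Python sketch of this decision rule; the handling of averages of exactly 3 or 4, which the ranges above leave unspecified, is an assumption:

    from statistics import mean
    from typing import List

    def rst_decision(scores: List[int]) -> str:
        # Map expert scores (1-5) onto the rework decision ranges above.
        avg = mean(scores)
        if avg <= 2:
            return ("poor quality: rework the validated artifact, correct the "
                    "original, and run one more RST session")
        if avg < 3:
            return ("fix defects in the validated artifact and review the "
                    "original; no additional RST session")
        if avg <= 4:  # assumed edge handling; the text leaves 3 and 4 unspecified
            return "remove defects and insignificant information loss"
        return "good quality: no corrections or rework required"

    print(rst_decision([4, 5, 5]))  # average ~4.7 -> good quality
    print(rst_decision([2, 3, 2]))  # average ~2.3 -> fix and review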

Ultimately, the final decision about reworking the artifacts is made by the project manager and should be based on an analysis of the reasons for the differences between the texts.

External links

  • Vladimir L. Pavlov website
  • P-Modeling Framework Whitepaper
  • P-Modeling Framework