Deliverable 5.4 Fuzz and security testing driving tool published

The MIDAS project approach considers the scheduling (static and dynamic) of test campaigns as an issue that is obviously related to test generation and execution, but that can be treated as a problem per se in order to allow more general approaches to emerge. Even though, with cloud technology, computational resources for testing can be considered highly scalable and elastic, the emergence of very large services architectures (in terms of number of nodes and connections) and of automatically generated very large test suites forces scheduling issues to be taken into account.

Dynamic (automatic) scheduling requires the presence, at test run time, of a separate actor – the scheduler – that analyses past test verdicts in order to suggest the next test case samples to be executed or to halt the test cycle. The test case samples to be executed can be already available or freshly generated at run time, in which case the test generator must be callable on the fly by the scheduler.
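
As a purely illustrative sketch (the class and method names below are hypothetical, not the MIDAS tool's API), the following Python fragment shows the kind of loop such a scheduler implements: it consumes past verdicts, proposes the next test case from the available samples or from an on-the-fly generator, and halts when a stopping criterion is met.

```python
"""Illustrative sketch of a dynamic test scheduler loop (hypothetical API,
not the MIDAS implementation): the scheduler consumes past verdicts and
either proposes the next test case or halts the test cycle."""

import random
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Verdict:
    test_id: str
    passed: bool


@dataclass
class DynamicScheduler:
    # Test case samples already available for execution.
    available: list[str]
    # Optional on-the-fly generator, callable by the scheduler.
    generator: Optional[Callable[[list[Verdict]], str]] = None
    history: list[Verdict] = field(default_factory=list)
    max_runs: int = 50

    def record(self, verdict: Verdict) -> None:
        """Feed a past verdict back into the scheduler."""
        self.history.append(verdict)

    def next_test(self) -> Optional[str]:
        """Return the next test case to execute, or None to halt the cycle."""
        if len(self.history) >= self.max_runs:
            return None  # stopping criterion (here: a simple run budget)
        if self.available:
            return self.available.pop(0)
        if self.generator is not None:
            # Fresh test case generated at run time from past verdicts.
            return self.generator(self.history)
        return None


if __name__ == "__main__":
    scheduler = DynamicScheduler(
        available=["tc_login", "tc_checkout"],
        generator=lambda history: f"tc_generated_{len(history)}",
        max_runs=5,
    )
    while (tc := scheduler.next_test()) is not None:
        verdict = Verdict(test_id=tc, passed=random.random() > 0.3)  # simulated run
        scheduler.record(verdict)
        print(tc, "PASS" if verdict.passed else "FAIL")
```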

The goal is to make testing more efficient and/or more focused (on specific regions of very large services architectures and/or on specific operations), i.e. to improve the efficiency with which the test campaign seeks failures and localizes faulty components (node/port/interface/operation). The proposed approach is to ground the scheduling strategy and tactics on probabilistic inference through Bayesian Networks.

Bayesian Networks are one of the most effective ways to implement probabilistic reasoning, but they still raise computational complexity issues. The inference computation described in this approach is based on: (i) a model-driven approach that allows generating a virtual Bayesian Network that most closely fits the specific SUT architecture and the available test suite; (ii) an advanced compilation technique of the (virtual) Bayesian Network into an Arithmetic Circuit that improves initialization speed, execution speed and size.
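
To make the inference step concrete, here is a minimal hand-rolled sketch of Bayesian fault localization; it is not the deliverable's virtual Bayesian Network nor its Arithmetic Circuit compilation, and the priors, coverage map and noise parameters are invented for illustration. It computes, by exact enumeration, the posterior probability that each component is faulty given the verdicts observed so far – the kind of quantity a dynamic scheduler can use to focus the next test runs.

```python
"""Minimal illustration (not the MIDAS virtual Bayesian Network or its
Arithmetic Circuit compilation) of Bayesian fault localization: given test
verdicts and the components each test exercises, compute the posterior
probability that each component is faulty by exact enumeration."""

from itertools import product

# Hypothetical SUT: prior probability that each component is faulty.
PRIOR_FAULTY = {"nodeA": 0.1, "nodeB": 0.1, "portC": 0.05}

# Hypothetical test suite: which components each test case exercises.
COVERAGE = {
    "tc1": ["nodeA", "portC"],
    "tc2": ["nodeB"],
    "tc3": ["nodeA", "nodeB"],
}

# Noisy observation model: a test fails with high probability if it touches
# at least one faulty component, and with low probability otherwise.
P_FAIL_IF_FAULTY = 0.9
P_FAIL_IF_HEALTHY = 0.05


def posterior_faulty(verdicts: dict[str, bool]) -> dict[str, float]:
    """Return P(component faulty | verdicts); verdicts map test id -> passed."""
    components = list(PRIOR_FAULTY)
    marginals = {c: 0.0 for c in components}
    total = 0.0
    # Enumerate every joint fault configuration (fine for a handful of nodes).
    for config in product([False, True], repeat=len(components)):
        faulty = dict(zip(components, config))
        weight = 1.0
        for c in components:
            p = PRIOR_FAULTY[c]
            weight *= p if faulty[c] else (1.0 - p)
        for test, passed in verdicts.items():
            touches_fault = any(faulty[c] for c in COVERAGE[test])
            p_fail = P_FAIL_IF_FAULTY if touches_fault else P_FAIL_IF_HEALTHY
            weight *= (1.0 - p_fail) if passed else p_fail
        total += weight
        for c in components:
            if faulty[c]:
                marginals[c] += weight
    return {c: marginals[c] / total for c in components}


if __name__ == "__main__":
    # tc1 failed, tc2 passed: suspicion shifts towards nodeA and portC.
    print(posterior_faulty({"tc1": False, "tc2": True}))
```

Exact enumeration scales exponentially with the number of components; the compilation of the Bayesian Network into an Arithmetic Circuit is what keeps this kind of inference tractable on large services architectures.
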
The approach to dynamic scheduling proposed in the document is focused on functional testing, but can also be applied to other testing practices. Priority-based static scheduling is proposed for security fuzz testing and usage-based testing.

Static scheduling for security testing is based on priorities. The priority is essentially related to the importance of the requirement or of the feature targeted by the test case, and is hence related to the validation process. Security data fuzz testing relies on the “good” semi-validity of the generated data (data that are too correct find no bugs, data that are too incorrect are rejected by the system). The idea is to classify fuzzing operators with respect to their ability to generate semi-valid sequences, and to augment this classification either with a test design directive or with priorities introduced as formalized artefacts into the behavioural fuzz test generation process.

The aim of the usage-based approach is to prioritize functional test samples with a usage score. The procedure for obtaining this score requires collecting usage information on the system in the field, inferring usage profiles from this information, assigning a usage score to each test case and scheduling the test runs according to their usage scores. Roughly speaking, the usage score reflects the likelihood that a user stimulates the same test case on the system in the field.
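
The following Python sketch illustrates the usage-based static scheduling idea with invented data and a deliberately simple scoring rule (the deliverable's actual profile inference and scoring may differ): operation frequencies observed in the field form a usage profile, each test case gets a usage score from the operations it stimulates, and the test runs are ordered by decreasing score.

```python
"""Illustrative sketch (hypothetical data and scoring rule, not the MIDAS
tooling) of usage-based static scheduling: infer a usage profile from field
logs, assign each test case a usage score, and order the test runs by score."""

from collections import Counter

# Hypothetical field log: operations observed on the deployed system.
FIELD_LOG = ["login", "search", "search", "checkout", "search", "login", "search"]

# Hypothetical functional test samples and the operations they stimulate.
TEST_CASES = {
    "tc_login_flow": ["login"],
    "tc_search_flow": ["login", "search"],
    "tc_checkout_flow": ["login", "search", "checkout"],
    "tc_admin_flow": ["admin_report"],
}


def usage_profile(log: list[str]) -> dict[str, float]:
    """Relative frequency of each operation observed in the field."""
    counts = Counter(log)
    total = sum(counts.values())
    return {op: n / total for op, n in counts.items()}


def usage_score(operations: list[str], profile: dict[str, float]) -> float:
    """Score a test case by the field usage of the operations it stimulates
    (a simple average over the covered operations)."""
    if not operations:
        return 0.0
    return sum(profile.get(op, 0.0) for op in operations) / len(operations)


def static_schedule(test_cases: dict[str, list[str]], log: list[str]) -> list[str]:
    """Return test case ids ordered by decreasing usage score."""
    profile = usage_profile(log)
    scored = {tc: usage_score(ops, profile) for tc, ops in test_cases.items()}
    return sorted(scored, key=scored.get, reverse=True)


if __name__ == "__main__":
    print(static_schedule(TEST_CASES, FIELD_LOG))
```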
