D2.1 Requirements for automatically testable services and services architectures
With the spread of the digital economy, a growing number of applications, systems and devices are connected and collaborate without human intermediation, allowing the automation of the business processes that support daily activities and services. The functional and non-functional dependability and security of such distributed architectures are becoming an increasingly critical issue. Dependability and security (functional and non-functional) are firstly the outcome of sound engineering practices.
Service Oriented Architecture (SOA) is a design and implementation style that allows organizations to put into practice dynamic collaborations of autonomous, heterogeneous and loosely coupled digital systems as direct service exchanges, in order to achieve flexible, reliable and secure business process automation. The SOA style is currently practiced following approaches that lie in a range between two extremes: (i) “weak” interface-based SOA and (ii) “strong” contract-based SOA.
The “weak” interface-based SOA style is based merely on the design of an Application Programming Interface (API) as a primary access device to some system functionalities and on the management of this interface as a “first-class” artifact that is separate from the system internal implementation.
In the contract-based “strong” SOA style, beyond the separation between interface and implementation and the hiding of systems’ internals that are a must in any case, a service is represented by an artifact referred to as a service contract that incorporates the formal definitions not only of the service interface, but also of the service function and the external behaviors of the parties, including security and performance aspects. The service contract acts as an agreement between the parties and as a collection of requirements and a specification for the implementers.
The spread of interface-based SOA is revealed by the impressive growth of the “API economy”. APIs are a key growth driver for hundreds of companies across a wide range of industry sectors and are becoming a primary access channel to technology-driven products and services. The spread of contract-based SOA is slower than that of interface-based SOA, but it follows the diffusion of the model-driven engineering approach and today concerns a growing number of distributed system architectures in critical business sectors, where it is often linked to standardization initiatives. In several services architectures, the contract-based style coexists with the interface-based one.
In a sense, SOA engineering, as all other engineering activities, is always model-based. We always create a mental model of the reality that we have either to design and build (prescriptive model) or to analyze and assess (descriptive model). The model always exists; the only option is about its form: it may be mental – existing only in the head of the designer, analyst … – or explicit. An explicit model is potentially sharable between humans; an explicit and formal model of a system may be mechanically transformed up to the automatic generation of parts of the system. This is the objective of the OMG Model Driven Architecture initiative, and the goal of a more general trend named MDE (Model Driven Engineering).
The SOA approach is characterized by a sharp separation between the functional (in a broad sense), black-box model and the constructional, white-box model of a system. With respect to SOA, the inescapable limit of the MDA approach is that the white-box model of a system cannot be mechanically derived from its black-box model, even from the most detailed one. The black-box model (service model) acts as an agreement (service contract) between the supplier of the functionality (the service provider) and its user (the service consumer), while the white-box models are private and hidden. This is not an abstract issue: a published API is an agreement between the service provider and the consumers, and the only means that allows interacting with the provider to coordinate the service provision/consumption.
How can the service consumer reasonably improve her/his confidence in the compliance of the actual service provision with the exigencies and constraints stated in the service contract? The only answer is: by testing. Beyond service compliance, how can the service consumer reasonably improve her/his confidence that the provider is not vulnerable to malicious attacks that can jeopardize the resources handled on behalf of the consumer? The answer is still: by testing. Because implementations are mutually hidden – and even if they were not, it might be too complex and expensive to assess each other’s implementations by deep analysis of the internals and white-box testing – black-box testing is the only means available to businesses to improve their trust in their partners’ service provisions.
But SOA testing has the paradoxical trait that the same peculiarities that make it necessary also make it hard, due to: lack of observability of the involved systems; lack of trust in the employed engineering methods; lack of direct control of the implementation lifecycles; late binding of systems; fundamental uncertainty of the test verdicts; organizational complexity; elastic demand of computational resources; increasing scale factor of the services architectures; high costs and, last but not least, questionable efficacy of humans in manually performing an increasingly hard and complex but boring and low-reward activity such as the testing of large distributed architectures.
In effect, hand-writing of test cases, manual configuration of test environments, manual scheduling of test runs and eyeball assessment of test outcomes are not only labor intensive and difficult to put into practice, but are also the least effective solution to the SOA testing problem, which cannot be overcome by mere methodological recommendations on fundamentally human-based engineering practices.
The solution of the SOA testing problem can be brought only by a disruptive innovation that drastically simplifies and routinizes the testing task by implementing and offering an automated, effective, accessible and affordable testing facility.
The goal of the MIDAS project is to design and build a test automation facility that targets SOA implementations. The MIDAS functionalities shall be available on demand, i.e. on a self-provisioning, pay-per-use, elastic basis. The MIDAS facility shall be a multi-tenancy SaaS (Software as a Service) deployed on a cloud infrastructure and accessible over the Internet. The MIDAS testing approach is non-intrusive on the SUT (System Under Test): the SUT is deployed in its environment (on premise and/or on cloud) and the MIDAS facility interacts with it using the services that it publishes.
The targets of the MIDAS facility are both “weak” interface-based and “strong” contract-based SOAs. The MIDAS facility is intended to provide automation of the “core” testing activities: test generation, scheduling, execution and arbitration.
MIDAS test automation is based on models. Like SOA engineering, SOA testing – and testing in general – is always model-based. We always create a mental model of the system behavior in order to test it. As for engineering, the model always exists; the only option is about its form: it may be mental – existing only in the head of the tester – or explicit. An explicit model is potentially sharable between humans; an explicit and formal model of a system can be mechanically transformed up to the automatic generation of test cases and oracles and the automatic configuration of the test running environment.
The underlying idea is that the testing activity shall shift from test case hand-writing, manual configuration of test environments, manual scheduling of test runs and eyeball assessment of test outcomes – all these activities being conducted by professional testers – to model authoring by designers and architects. The burden of test generation, scheduling, execution and arbitration, up to the production of the test report, is handed over to the MIDAS facility. The involved models are, on one side, service and system models and, on the other side, models of the test goals, means and courses of action. The general idea is that the deep testing knowledge of professional testers and of the research community can be embedded in the implemented automated testing methods.
What kind of testing methods, approaches and practices will be supplied by the MIDAS on demand automated test facility?
The MIDAS project shall put into operation a substantial collection of functional (unit testing; choreography, orchestration and composition testing) and security (fuzz testing, security policy compliance) testing methods. Moreover, automated scheduling and even dynamic automated test generation shall be managed by probabilistic (Bayesian) inference methods. Furthermore, the promising new usage-based testing approach, a testing meta-method that treats the usage of the system in the field (in operation) as a source of interesting data, information and knowledge (Markov models), shall be carried out. The idea is to use this knowledge, automatically concentrated in the usage profile by intelligent processing of usage data, to drive the strategy and planning of functional tests. The MIDAS facility shall support usage observation on the system in the field with facilities that allow the generation and download of the usage observation software and the upload and arrangement of usage journals.
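To illustrate the Bayesian scheduling idea, the following is a minimal sketch in Python, not the MIDAS implementation: each service operation carries a Beta posterior over its failure probability, the posterior is updated after every arbitrated verdict, and the scheduler runs the most failure-prone operations first. The names (OperationBelief, schedule) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OperationBelief:
    """Beta(alpha, beta) posterior over the failure probability of one operation."""
    alpha: float = 1.0  # pseudo-count of observed failures (uniform prior)
    beta: float = 1.0   # pseudo-count of observed passes

    def update(self, failed: bool) -> None:
        """Bayesian update after one arbitrated test verdict."""
        if failed:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_failure_rate(self) -> float:
        """Posterior mean of the failure probability."""
        return self.alpha / (self.alpha + self.beta)

def schedule(beliefs: dict) -> list:
    """Order operations so the most failure-prone under the posterior are tested first."""
    return sorted(beliefs, key=lambda op: beliefs[op].expected_failure_rate(), reverse=True)
```

With such a loop, test effort concentrates dynamically on the operations that the accumulated verdicts suggest are the most fragile.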
The MIDAS portfolio of automated test methods, practices and approaches is not closed. From the point of view of the test method designers and developers, the MIDAS on demand automated test facility is an open platform. MIDAS shall implement the concept of test scheme, which is an implemented testing method able to perform automated test generation, scheduling, execution and arbitration.
Hence, according to the already canonical SaaS approach, there are two categories of MIDAS “users”: the end user and the test scheme developer. The end user enacts automated testing by (i) supplying models, (ii) deploying accessible systems under test (SUTs) and (iii) invoking against the SUTs the appropriate test schemes that are offered to her/him by the evolving MIDAS portfolio. The test scheme developer designs and builds test schemes in a format that is MIDAS-compatible, and uploads them as plug-ins on the MIDAS facility in a strictly controlled way. Test schemes are organized, built and formatted in such a way that not only can they be integrated in the MIDAS facility, but they can also reuse preexisting independent resources such as SUT models and test configuration models. The SUT model and the test configuration model that are already available on MIDAS are reusable by the test schemes. Eventually, the end user shall provide only the specific information – supplied in the form of test scheme related models – needed by the testing method that the invoked test scheme implements.
The MIDAS core functionalities for both end users and test scheme developers shall be presented through APIs. This access modality allows seamless integration with Integrated Development Environments (IDEs) and Application Lifecycle Management (ALM) platforms – for instance, the programmed invocation of automated non-regression test campaigns at specific milestones of a software engineering cycle – and, obviously, new engineering service compositions that are unforeseen at that time.
Since not all services architectures are equally testable, in general and with MIDAS in particular, we are obliged to state a limited collection of requirements and recommendations that respectively must and should be fulfilled in order to employ the MIDAS facility for SOA testing.
MIDAS test compatibility requirements must be fulfilled by any SUT that is intended to be a MIDAS testing target. With respect to MIDAS compatibility requirements, our work has been driven by two methodological principles: (i) minimize the number of requirements and their enactment burden for the user; (ii) formulate only requirements whose fulfillment improves the general dependability, security, interoperability, conformance with standards and, last but not least, auditability and testability of single-node and multi-node services architectures, as generally accepted criteria that are not specific to MIDAS.
MIDAS compatibility requirements are independent of any test scheme. Conversely, MIDAS compatibility recommendations are classified as general vs. test scheme specific. General recommendations are related to the use of optional features of the MIDAS facility that improve the testing process but are independent of specific test schemes. The use of these features is optional, but the fulfillment of the related recommendations is a prerequisite for their use.
Test scheme specific recommendations have to be adopted only if the user wants to invoke the specific test scheme. The invocation of a specific test scheme is optional for the user, but, if the user wants to invoke it, the fulfillment of the related recommendations is a prerequisite. In any case, the MIDAS facility proposes some basic test schemes that are operational without requiring the satisfaction of any specific recommendation – only the general compatibility requirements must be fulfilled.
All compatibility requirements and recommendations can be classified in two categories: (i) those that bear on the models that are needed, in general or for specific test schemes, to drive the automated test generation, scheduling, execution and arbitration; (ii) those that bear on the SUT configuration and deployment, in order to allow basic and enhanced binding, connection and interaction of the MIDAS facility with the SUT.
General requirements on models concern “architectural” models, i.e. system models and test configuration models. The system model is a descriptive model of the MIDAS target services architecture. The test configuration model is a prescriptive model that is used for the model-driven automatic generation of the test execution system.
The system model includes a service model and a SUT model. The service model is a class model, while the SUT model is an instance model of a concrete, physically deployed SUT.
According to the MDA approach, we distinguish between the service platform independent model (service PIM) and the service platform specific model (service PSM), which is a model on a specific technical interoperability platform such as SOAP or REST.
The service PIM is a standard SoaML model, compliant with the OMG Service oriented architecture Modeling Language (SoaML) Specification. The service PIM is an abstract, disembodied model of the services implemented by the SUT, which uses two kinds of stereotypes: the Service Contract and the Participant. The Service Contract describes the service abstract specification, disembodied from specific provider and consumer systems, whereas the Participant describes a class of abstract systems that realize a number of service roles (described within the Service Contract).
The MIDAS facility needs the availability of the essential service PIM, which includes only the minimal information that is necessary to characterize and classify the SUT nodes and ports (and also the test component elements of the test configuration model – see below). A service PIM may include richer information about protocols and choreographies, produced by a model-driven, contract-first SOA engineering cycle, which may be used by specific test schemes.
The service PSM is the service implementable model (WSDL, WADL …) on the SUT interoperability platforms (SOAP, REST …) that allows the configuration of the needed connections between the MIDAS facility and the SUT. The accuracy and consistency of the service model (PIM and PSM) is crucial.
The SUT model is a model of a concrete system but is platform independent: it is a UML Deployment model that describes the topology of the SUT and the locations of its accessible nodes/ports. The elements of the SUT model are classified by the elements of the service model. The deployed SUT nodes/ports (and their locations) described by the SUT model are made accessible to the MIDAS facility by means of this description. Deployed nodes/ports that are not described by the SUT model are not accessible to the MIDAS facility. The accuracy of the SUT model is crucial for sound testing practices.
On the basis of the SUT model, the test configuration model (TC model) prescribes the architectural configuration of a test scenario in a manner that is independent of the test scheme that utilizes it. It indicates: (i) the SUT accessible nodes that shall be the targets of stimuli, responses and observations in the course of the test run, (ii) the SUT accessible places – ports and communication paths – where the stimuli, responses and observations shall be acted and (iii) the connections that shall be established between the SUT and the MIDAS test execution environment.
According to the MDA approach, we distinguish between the test configuration platform independent model (TC PIM) and the test configuration platform specific model (TC PSM), which is a model on a specific test platform and environment, such as the TTCN-3 platform.
The TC PIM is a UML Component model. TC PIM Components are stereotyped as Proxy, Emulator and Interceptor. Proxies prescribe components that represent the SUT node/ports being the targets of stimuli and responses by the MIDAS facility. Emulators prescribe components that emulate SUT nodes and virtual environment nodes – nodes that are not in the composition of the SUT and represent human or artificial actors that interact with the system. Interceptors are able to place themselves virtually on a communication path between two SUTs (in fact between the representative Proxies) and interact with them. Emulators and Interceptors are architectural placeholders whose testing operational semantics is determined by the test schemes that utilize them.
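As a minimal sketch, in Python rather than UML, the three stereotypes and one structural check on a test configuration might look as follows; the component names and the validation rule are hypothetical illustrations, not the actual TC PIM, which is a UML Component model as described above.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    PROXY = "proxy"              # represents a SUT node/port targeted by stimuli and responses
    EMULATOR = "emulator"        # emulates a SUT node or a virtual environment actor
    INTERCEPTOR = "interceptor"  # sits virtually on a communication path between two proxies

@dataclass(frozen=True)
class TestComponent:
    name: str
    kind: Kind
    endpoints: tuple = ()  # for an interceptor: the two proxies whose path it observes

def validate_configuration(components):
    """Structural check: every interceptor must link two declared proxies."""
    proxies = {c.name for c in components if c.kind is Kind.PROXY}
    for c in components:
        if c.kind is Kind.INTERCEPTOR:
            if len(c.endpoints) != 2 or not set(c.endpoints) <= proxies:
                raise ValueError(f"interceptor {c.name} must link two declared proxies")
    return True
```

Such a check reflects the constraint stated above that an Interceptor interacts with two SUT parties through their representative Proxies.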
General compatibility requirements on the SUT bear on the interoperability platforms that are compatible with the MIDAS facility and on the availability, for a SUT, of an initialization and recovery procedure that can be invoked by the MIDAS facility. General compatibility recommendations on the SUT suggest the implementation of ancillary services and tools that ease the SUT deployment and the verification of the models’ accuracy and of the connections.
The other compatibility recommendations are related to specific test schemes, in the fields of functional, security and usage-based testing.
Functional testing, practiced as unit (single node) and integration (multi-node cooperation) testing, can be enhanced by the availability of service model elements that specify, beyond the service interface, the function (what the provider does on behalf of the consumer) and the behavior (how the parties interact to coordinate the service provision/consumption). Function specifications that are compatible with the MIDAS facility are grounded on contract-based design: a function is specified by the operation signature and its pre/post-conditions. Function models allow the automated generation of tests and oracles.
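The use of pre/post-conditions as an automatically checkable oracle can be illustrated with a minimal design-by-contract sketch in Python; the debit operation and its conditions are hypothetical examples, not part of MIDAS.

```python
import functools

def contract(pre, post):
    """Wrap a service operation with executable pre/post-conditions.
    The conditions double as an automatically checkable test oracle."""
    def decorate(op):
        @functools.wraps(op)
        def wrapper(*args, **kwargs):
            assert pre(*args, **kwargs), "pre-condition violated by the caller"
            result = op(*args, **kwargs)
            assert post(result, *args, **kwargs), "post-condition violated by the provider"
            return result
        return wrapper
    return decorate

# Hypothetical service operation: debit an amount from an account balance.
@contract(pre=lambda balance, amount: 0 < amount <= balance,
          post=lambda result, balance, amount: result == balance - amount)
def debit(balance, amount):
    return balance - amount
```

A test generator can then derive both valid and invalid inputs from the pre-condition, and arbitrate each response against the post-condition.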
Stateful service providers, i.e. providers that durably change the state of the resources they handle on behalf of their consumers, can be tested more effectively only by cross-checking, because the SOA approach forbids direct inquiry of those states. Cross-checking is put into practice by retrieving internal states through basic transparency services based on international standards, which should be implemented by the SUT, and matching this information with the SUT responses.
Choreography testing can be applied to end-to-end service exchanges on multi-node services architectures. Composition testing enhances choreography testing with the help, for each involved node, of the transfer function, i.e. the formal specification of the correspondence between the input stimuli and the output interactions that are the service composition effects of those stimuli.
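A minimal sketch of how a transfer function can serve as an oracle for composition testing; the order/stock/billing interactions are hypothetical examples and the real transfer functions are formal model elements, not Python code.

```python
def check_transfer(transfer, stimulus, observed_outputs):
    """Arbitrate one composition test step: compare the interactions observed
    at a node's output ports against those its transfer function predicts."""
    return sorted(transfer(stimulus)) == sorted(observed_outputs)

# Hypothetical transfer function of an order node: placing an order is
# expected to trigger a stock reservation and a payment request downstream.
def order_node_transfer(stimulus):
    if stimulus == "placeOrder":
        return ["reserveStock", "requestPayment"]
    return []
```

During a test run, the stimuli are injected through Proxies and the downstream interactions are observed (e.g. by Interceptors) and matched against the prediction.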
Compatibility recommendations for functional testing, on models and on the SUT, are cumulative: the availability of more models (protocol, function, choreography and composition) and of more ancillary services on the SUT (state re/initialization, transparency services) allows more and more enhanced testing methods.
Security testing can be classified into two categories: (i) security-policy compliance testing and (ii) vulnerability testing. Security policies are included in the service contract and security-policy compliance testing is close to functional compliance testing. Vulnerability testing aims at seeking faults and weaknesses (the latter being not necessarily faults from the functional point of view) that have a security impact.
In order to look for such vulnerabilities, the testing perspective is moved from the system specifications to the attacker behavior. The main aspect of security testing is to stimulate the SUT with inputs that reveal vulnerabilities. Mostly, such inputs are invalid in the sense of the specification. Therefore, in contrast to functional testing, security testing is mostly negative testing and may be based on misuse cases instead of use cases.
We are going to develop two kinds of fuzzing approaches: data fuzzing and behavioral fuzzing. With data fuzzing, the SUT is exercised with invalid input data, while behavioral fuzzing consists of submitting invalid message sequences to the SUT.
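The two kinds of fuzzing can be illustrated as follows; this is a minimal sketch assuming string-valued input data and message sequences represented as lists of message names, and the mutation strategies shown are simple examples, not the MIDAS fuzzers.

```python
import random

def data_fuzz(value: str, rng: random.Random) -> str:
    """Data fuzzing: derive invalid input data by mutating a valid value."""
    mutations = [
        lambda v: "",                        # empty payload
        lambda v: v * 100,                   # oversized payload
        lambda v: v + "\x00",                # embedded NUL byte
        lambda v: "'; DROP TABLE users;--",  # injection-style payload
    ]
    return rng.choice(mutations)(value)

def behavior_fuzz(sequence: list, rng: random.Random) -> list:
    """Behavioral fuzzing: derive an invalid message sequence from a valid
    one by dropping, duplicating or swapping messages."""
    seq = list(sequence)
    op = rng.choice(["drop", "duplicate", "swap"])
    i = rng.randrange(len(seq))
    if op == "drop":
        del seq[i]
    elif op == "duplicate":
        seq.insert(i, seq[i])
    else:
        j = rng.randrange(len(seq))
        seq[i], seq[j] = seq[j], seq[i]
    return seq
```

In both cases the test verdict hinges on the SUT rejecting the invalid input gracefully rather than exposing a vulnerability.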
The compatibility recommendations on models for security testing concern the availability of interaction protocol models, security policy models, encryption/decryption and signature/validation algorithms and keys utilized by the SUT. The compatibility recommendations on the SUT are the same as those for functional testing.
SOA usage-based testing is a promising new research and engineering approach to SOA testing. The first intent is to focus on highly used functions of the service and on highly usual end-to-end service exchanges in services architectures. Test cases are generated such that the testing effort focuses on the highly used parts of the SUT. As a by-product of this approach, it is possible to determine the reliability (the rate of failures given a usage scenario) of the SUT with respect to the current usage and to utilize this measure as a criterion within acceptance testing. Conversely, focusing on rarely used paths of interaction within the SUT is interesting for augmenting the functional test coverage. Usage-based testing is intended to support the functional testing activities of the MIDAS facility.
Usage-based testing is grounded on the concept of usage profiles. The usage profile describes the usage of a system in stochastic terms, such as the probability of the next interaction. Usage profile models are built from information extracted from usage journals that are issued from usage observation on the system in the field (SIF). The MIDAS facility shall support the user in producing easy-to-put-in-place mechanisms (observer software) that are able to perform usage observation with minimum overhead. These software components, which are placed by the SIF administrator on the chosen SIF usage observation points, are able to provide MIDAS-compatible usage journals, which shall be collected and uploaded on the MIDAS facility.
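The construction of a usage profile from usage journals can be sketched as a first-order Markov model estimation; this is a minimal illustration assuming that journals are simple lists of interaction names, not the MIDAS observer format.

```python
from collections import Counter, defaultdict

def build_usage_profile(journals):
    """Estimate a first-order Markov usage profile from usage journals:
    for each observed interaction, the empirical probability of each
    possible next interaction."""
    counts = defaultdict(Counter)
    for journal in journals:
        for current, nxt in zip(journal, journal[1:]):
            counts[current][nxt] += 1
    profile = {}
    for state, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        profile[state] = {nxt: c / total for nxt, c in nxt_counts.items()}
    return profile
```

Test generation can then walk this chain, so that frequently observed interaction paths receive proportionally more test effort.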
The crucial recommendation is that the usage data observation mechanisms shall engender usage observation data that are not only compatible with the MIDAS facility but also accurate, meaning that they faithfully represent the actual usage of the SIF. The fulfillment of this recommendation is essential for sound usage profile modeling.
The first recipients of this document are the MIDAS early adopters, such as the MIDAS project partners that are in charge of the pilots (Healthcare Generic Services pilot / Supply Chain Management pilot). This document will constitute the support that allows them to start trying to fulfill the MIDAS compatibility requirements and recommendations on their SUTs and SIFs. On these points, the feedback from the MIDAS pilots’ experience will be precious all along the project. If some of the MIDAS compatibility requirements and recommendations change as a consequence of design and implementation choices and of the pilots’ feedback, updated versions of this document including these changes will be made available.