Glossary: Terms, Definitions, and Acronyms

This glossary clarifies the terms, definitions, and acronyms used throughout the lifecycle of the project.

Glossary

Abstract test case: a complete and independent specification of the actions required to achieve a specific test purpose. An abstract test case may be represented as a set of informal instructions or as a formal specification like a TTCN-3 test case. (ETSI ES 202 951)

Abstract test suite: a test suite composed of abstract test cases. (ETSI ES 202 951)

Accuracy: the capability of the software product to provide the right or agreed results or effects with the needed degree of precision. (ISO 9126)

Accuracy test case: a test case that determines the accuracy of a software product.

All-pairs testing: see pairwise testing.

Automated test framework: a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behaviour and outputs.
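
For illustration, a minimal Python sketch of such a framework run, using the standard unittest module; the discount function under test is a hypothetical example, not taken from any source cited here:

    # Minimal sketch of an automated test framework run, using Python's
    # standard unittest module. The "discount" unit is hypothetical.
    import unittest

    def discount(price, rate):
        """Unit under test: apply a discount rate to a price."""
        if not 0.0 <= rate <= 1.0:
            raise ValueError("rate must be within [0, 1]")
        return price * (1.0 - rate)

    class DiscountTest(unittest.TestCase):
        def test_varying_conditions(self):
            # Run the unit under varying conditions and monitor outputs.
            self.assertEqual(discount(100.0, 0.0), 100.0)
            self.assertEqual(discount(100.0, 1.0), 0.0)
            self.assertAlmostEqual(discount(80.0, 0.25), 60.0)

        def test_invalid_input_is_rejected(self):
            with self.assertRaises(ValueError):
                discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()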

Application Programming Interface: a protocol intended to be used as an interface by software components to communicate with each other. (http://en.wikipedia.org/wiki/Application_programming_interface)

Black-box testing: testing, either functional or non-functional, without reference to the internal structure of the component or system. (ISTQB Glossary)

Black-box test design technique: procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure. (ISTQB Glossary)

Black-box test execution: test execution that stimulates and observes the system under test solely at its public interfaces. In a strict black-box test execution, no utility that evaluates the internal state of the system under test through anything other than its publicly available interfaces may be used.

Boundary-value: an input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range. (ISTQB Glossary)

Boundary-value analysis: a black box test design technique in which test cases are designed based on boundary values. (ISTQB Glossary)
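
For illustration, a minimal Python sketch of boundary-value analysis; the accepted range [1, 100] and the is_valid check are hypothetical:

    # Sketch of boundary-value analysis for a hypothetical input field
    # that accepts integers in the range [1, 100]: test the edges of the
    # valid partition plus the values just outside it.
    LOWER, UPPER = 1, 100

    def boundary_values(lower, upper):
        """Return the classic boundary-value test inputs for a range."""
        return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

    def is_valid(value):
        """Hypothetical unit under test: range check."""
        return LOWER <= value <= UPPER

    for v in boundary_values(LOWER, UPPER):
        expected = LOWER <= v <= UPPER
        assert is_valid(v) == expected, f"unexpected verdict for {v}"
        print(f"input={v:4d} expected_valid={expected}")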

Cause-effect graph: a graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases. (ISTQB Glossary)

Cloud computing: a computing model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. (NIST)

Cloud deployment model: model of the activities that make a cloud available for use. Cloud providers may use one of four different deployment models: private cloud, community cloud, public cloud, and hybrid cloud.

Cloud provider: a (service) provider that offers customers storage or software services available via a private (private cloud) or public network (public cloud). The storage and software are available for access via the Internet, and the cloud provider manages the infrastructure and platforms on which the applications run.

Cloud service model: model of how cloud services are offered. Cloud providers offer their services according to three fundamental service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Code coverage: an analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage. (ISTQB Glossary)

Combinatorial test design methods: test design methods whose purpose is to identify the minimum number of tests needed to achieve a given level of coverage.

Community cloud: a cloud deployment model where the provider makes applications, storage, and other resources available to a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises. (NIST)

Conformance testing: testing the extent to which an Implementation Under Test (IUT) satisfies both static and dynamic conformance requirements. (ETSI ES 202 237)

Coverage: the degree, expressed as a percentage, to which a test suite has exercised a specified coverage item. (ISTQB Glossary)

Decision table: a table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases. (ISTQB Glossary)

Decision table testing: a black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. (Veenendaal 2004)
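
For illustration, a minimal Python sketch of decision table testing; the discount rule and its conditions are hypothetical:

    # Sketch of decision table testing for a hypothetical rule: a
    # discount is granted only if the customer is a member AND the order
    # total exceeds 100. Each row pairs one combination of conditions
    # (causes) with the expected action (effect).
    def discount_granted(is_member, order_total):
        """Hypothetical unit under test."""
        return is_member and order_total > 100

    decision_table = [
        # (is_member, order_total, expected_discount)
        (True,  150, True),
        (True,   50, False),
        (False, 150, False),
        (False,  50, False),
    ]

    for is_member, total, expected in decision_table:
        assert discount_granted(is_member, total) == expected, (is_member, total)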

Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system. (ISTQB Glossary)

Design for testability: increasing the degree of testability of a system by applying appropriate measures during the development process.

Emulator: a device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. (IEEE Std. 610-12)

Equivalence partitioning: a black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once. (ISTQB Glossary)
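
For illustration, a minimal Python sketch of equivalence partitioning; the age-category function and its partitions are hypothetical:

    # Sketch of equivalence partitioning: the input space is split into
    # partitions the unit is assumed to treat uniformly, and one
    # representative per partition is tested.
    def age_category(age):
        """Hypothetical unit under test."""
        if age < 0:
            raise ValueError("negative age")
        if age < 18:
            return "minor"
        if age < 65:
            return "adult"
        return "senior"

    # One representative value from each equivalence partition.
    representatives = {
        -5: ValueError,   # invalid partition
        10: "minor",      # 0..17
        40: "adult",      # 18..64
        70: "senior",     # 65 and above
    }

    for age, expected in representatives.items():
        if expected is ValueError:
            try:
                age_category(age)
            except ValueError:
                continue
            raise AssertionError("invalid input was accepted")
        assert age_category(age) == expected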

Error: a human action that produces an incorrect result. (ISTQB Glossary, IEEE Std. 610-12)

Event: an observable action of a software system, characterized by its type and its target, e.g., a user interaction with the type mouse click and the target OK button.

Failure: deviation of the component or system from its expected delivery, service or result. (ISTQB Glossary)

False negative: a pass test verdict in the presence of SUT failures, i.e., a real failure goes undetected.

False positive: a fail test verdict in the absence of SUT failures, i.e., a failure is reported where none occurred.

Fault: see defect.

Functional Testing: testing based on an analysis of the specification of the functionality of a component or system. See also black-box testing. (ISTQB Glossary)

Fuzzing: see fuzz testing.

Fuzz testing: technique for intelligently and automatically generating and passing into a target system valid and invalid message sequences to see if the system breaks, and if it does, what it is that makes it break. (ETSI MTS Security Testing Terminology, Concepts and Lifecycle)
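
For illustration, a deliberately simplistic Python sketch of a random fuzzer (real fuzz testing, as defined above, generates message sequences far more intelligently); the parse_message target is hypothetical:

    # Minimal random fuzzer: feed random byte strings to a hypothetical
    # parser and record every input that makes it break unexpectedly.
    import random

    def parse_message(data: bytes):
        """Hypothetical target; a real fuzzer would drive the SUT."""
        if not data.startswith(b"MSG"):
            raise ValueError("bad magic")
        return data[3:]

    random.seed(42)  # reproducible runs
    crashes = []
    for _ in range(1000):
        fuzz_input = bytes(random.randrange(256)
                           for _ in range(random.randrange(1, 16)))
        try:
            parse_message(fuzz_input)
        except ValueError:
            pass                        # expected rejection, not a failure
        except Exception:
            crashes.append(fuzz_input)  # what made the target break?

    print(f"{len(crashes)} crashing inputs found")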

Generic service: a service whose operations are generic, i.e., designed without specification of input and output types and that can be instantiated with user-supplied types.

Graphical User Interface: a type of user interface that allows users to interact with electronic devices using images rather than text commands. (http://en.wikipedia.org/wiki/Graphical_user_interface)

Grey-box testing: testing that combines black-box testing and white-box testing principles.

Hybrid cloud: a cloud deployment model combining two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability. (NIST)

Hypervisor: also called a virtual machine monitor (VMM); computer software, firmware, or hardware that creates and runs system virtual machines. A computer on which a hypervisor runs one or more virtual machines is a host machine, and each of those virtual machines is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages their execution. Multiple instances of a variety of operating systems may share the virtualized hardware resources.

Implementation Conformance Statement (ICS): statement made by the supplier of an IUT claimed to conform to a given specification, stating which capabilities have been implemented (ETSI TR 102 840).

Implementation eXtra Information for Testing (IXIT): statement made by a supplier of an IUT which contains or references all of the information related to the IUT and its testing environment, which will enable the test laboratory to run an appropriate test suite against the IUT (ETSI TR 102 840).

Implementation Under Test (IUT): specific implementation of a system standard under test. See also system under test (SUT).

Infrastructure as a Service (IaaS): a cloud service model, in which cloud providers offer processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components. (NIST)

Interface: a hardware or software component that connects two or more other components for the purpose of passing information from one to the other. (IEEE Std. 610-12)

Internet of Services: a deployment model for services architectures on the Internet that distinguishes between the roles of Service Developer, Service Provider, and (cloud) Platform Provider. Service Developers design and develop software that Service Providers put into operation as service-providing components on a cloud platform operated by a Platform Provider.

Interoperability platform: technological platform allowing interoperability of systems whose implementations are a priori non-interoperable.

Interoperability testing: activity of proving that end-to-end functionality between (at least) two communicating systems is as required by the base standard(s) on which those systems are based. (ETSI TR 202 237)

Loosely coupled (systems): systems whose components have a minimum of interdependencies, so that changes in one component do not require adaptations in another.

Middleware: software that mediates between applications and operating systems, consisting of a set of services that enable interoperability in support of distributed architectures by passing data between applications. So, for example, the data in one database can be accessed through another database.

Model-based testing: (1) testing based on a model of the component or system under test, e.g., reliability growth models, usage models such as operational profiles or behavioural models such as decision table or state transition diagram. (ISTQB Glossary)

(2) An umbrella term for techniques that use (semi-)formal models as engineering artifacts in order to specify and/or generate test-relevant artifacts, such as test cases, test scripts, and reports. (UTP)

Model checking: given a model of a system, exhaustively and automatically checking whether this model meets a given meta-model or satisfies a specification (e.g., a safety property).

Model Driven Architecture (MDA): software development strategy in which requirements and specifications are represented through formal models at different levels of abstraction that are transformed to system implementations (OMG).

Monitor: a software tool or hardware device that runs concurrently with the component or system under test, and supervises, records and/or analyses the behaviour of the component or system. (IEEE Std. 610-12)

Observation: test data reflecting the reactions of the SUT.

Off-nominal testing: using test cases that are unlikely to be selected according to the usage-based model. "Unlikely" here is a matter of selection probability, not of abnormality.

Oracle: see test oracle.

Pairwise testing: black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. (ISTQB Glossary)
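
For illustration, a minimal Python sketch of a greedy all-pairs generator; the three parameters and their values are hypothetical:

    # Greedy pairwise (all-pairs) generation: repeatedly pick, from the
    # full cross product, the combination that covers the most
    # not-yet-covered value pairs, until every pair is covered.
    from itertools import combinations, product

    parameters = {
        "os":      ["linux", "windows", "macos"],
        "browser": ["firefox", "chrome"],
        "locale":  ["en", "de"],
    }

    names = list(parameters)
    all_rows = list(product(*(parameters[n] for n in names)))

    def pairs(row):
        return {((names[i], row[i]), (names[j], row[j]))
                for i, j in combinations(range(len(names)), 2)}

    uncovered = set().union(*(pairs(r) for r in all_rows))
    suite = []
    while uncovered:
        best = max(all_rows, key=lambda r: len(pairs(r) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)

    print(f"{len(suite)} tests instead of {len(all_rows)}")
    for row in suite:
        print(dict(zip(names, row)))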

Platform as a Service (PaaS): a cloud service model, in which cloud providers offer to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment. (NIST)

Private cloud: a cloud deployment model where the infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises. (NIST)

Process virtual machine: a virtual machine designed to run a single program, which means that it supports a single process. Such virtual machines are usually closely suited to one or more programming languages and built with the purpose of providing program portability and flexibility (amongst other things).

Public cloud: a cloud deployment model where the infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider. (NIST)

Regression testing: selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. (IEEE Std. 610-12)

Reliability: the ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. (ISO 9126)

Reliability testing: the process of testing to determine the reliability of a software product. (ISTQB Glossary)

Security testing: a process to determine that an information system protects data and maintains functionality as intended. The six basic security concepts that need to be covered by security testing are: (i) confidentiality, (ii) integrity, (iii) authentication, (iv) availability, (v) authorization, and (vi) non-repudiation. Security testing challenges the security-related aspects of the application.

Service: an activity that has an effect in the real/digital world, carried out by a system acting as a service provider for or on behalf of another system acting as a service consumer.

Service application programming interface: an application programming interface implemented on an interoperability platform.

Service choreography: exchange of messages in a services architecture for which an interaction protocol among participants is defined from a global perspective, each participant implementing its role without central control.

Service choreography model: model of the exchange of messages in a services architecture for which an interaction protocol among participants is defined from a global perspective.

Service choreography testing: testing the control flow of a service component architecture against a service choreography model.

Service component: a deployed software component implementing one or more service provider and consumer roles.

Service component architecture: software technology providing an implementable model for composing applications that follow Service-Oriented Architecture principles (OASIS).

Service composition: relationship among service components of a service component architecture in which, in order to provide services, service components consume services provided by other service components.

Service composition model: control and data flow model of a service composition that conforms to a formal meta-model. 

Service composition testing: testing a service composition against a service composition model.

Service contract: a specification of a service including: (i) the service operations; (ii) the service interfaces of the parties; (iii) the service parties external behaviours, including security and performance aspects.

Service contract model: a model of a service contract conforming to a formal meta-model.

Service model: a service contract model or a services architecture model.

Service orchestration: exchange of messages among service components in a service component architecture that is conducted (orchestrated) through the execution of an (orchestration) script by a service component playing the orchestrator role.

Service orchestration script: script expressed in an interpretable language that conducts a service orchestration.

Service oriented architecture (SOA): a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. SOA is an architectural paradigm for defining how people, organizations, and systems provide and use services to achieve results.

Service Platform Independent Model: logical model, i.e. abstract computational model, of a service that is independent from any specific platform.

Service Platform Specific Model: implementable model of a service on a specific platform.

Service unit testing: testing of a service node of a deployed services architecture. If the service component is a service composition, it is tested as a whole.

Services architecture: network whose nodes are abstract system classes (participants) and whose links are services.

Services architecture model: model of a services architecture including (i) the collection of Participant models, (ii) the collection of Service Contract models, and, possibly, the collection of control and data flow models.

Services architecture under test (SAUT): a concrete services architecture in which some participants are systems under test and some channels connecting participants are observable.

Smoke testing: a subset of all defined/planned test cases that cover the main functionality of a component or system, run to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. (ISTQB Glossary)

SOA testing: the set of testing approaches and methodologies focused on the verification and validation of SOA specific aspects.

Software as a Service (SaaS): a cloud service model in which cloud providers offer to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. (NIST)

Software quality: the degree to which a software product fulfils its functional and non-functional requirements (IEEE Std. 610-12 under the term quality)

Software testing: the process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects. (ISTQB Glossary under the term testing)

Specification-based testing: see black-box testing.

State Table: a grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions. (ISTQB Glossary)

State Transition: a transition between two states of a component or system. (ISTQB Glossary)

State Transition Testing: a black box test design technique in which test cases are designed to execute valid and invalid state transitions. (ISTQB Glossary)
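
For illustration, a minimal Python sketch of state transition testing against a hypothetical two-state door model:

    # Valid transitions must succeed and land in the expected state;
    # invalid transitions (cells missing from the state table) must be
    # rejected.
    TRANSITIONS = {  # (state, event) -> next state
        ("closed", "open"):  "opened",
        ("opened", "close"): "closed",
    }

    def step(state, event):
        """Hypothetical unit under test: one transition of the SUT."""
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"invalid transition: {event} in {state}")

    # Valid transitions.
    assert step("closed", "open") == "opened"
    assert step("opened", "close") == "closed"

    # Invalid transitions.
    for state, event in [("closed", "close"), ("opened", "open")]:
        try:
            step(state, event)
        except ValueError:
            continue
        raise AssertionError(f"{event} in {state} was not rejected")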

Structural Testing: see white-box testing.

System in the field: a single-node or multi-node instantiated services architecture that is deployed and run in an operational environment.

System under test: the real open system in which the implementation under test resides (ETSI ES 202 951).

System virtual machine: a virtual machine providing a complete system platform, which supports the execution of a complete operating system (OS). It usually emulates an existing architecture, and is built either to provide a platform to run programs where the real hardware is not available, or to run multiple instances of virtual machines, leading to more efficient use of computing resources in terms of both energy consumption and cost effectiveness.

Test arbitration: testing activity that assigns a test verdict to a test execution run. Requires a test oracle.

Test basis: all documents from which the requirements of a component or system can be inferred. (ISTQB Glossary)

Test basis model: a conceptual model that describes the test basis.

Test case: a set of input values, execution preconditions, expected results and execution post conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. (IEEE Std. 610-12) Test cases are owned by test contexts.

Test case generator: a software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines expected results. (IEEE Std. 610-12)

Test case model: a conceptual model that describes test cases.

Test component: a part of a test configuration used to communicate with the system under test (SUT) and with other test components.

Test configuration: (1) Collection of test components and of connections between the test components and to the SUT. A test configuration is part of a test environment.

(2) The collection of test component objects and of connections between the test component objects and to the SUT. The test configuration defines both (1) the test component objects and connections when a test case is started (the initial test configuration) and (2) the maximal number of test component objects and connections during test execution. (UTP)

Test configuration model: model of the test configuration.

Test context: partial description of the test environment (UTP). Only those artefacts of the test environment that are pertinent to test automation are described. A test context consists of the test suite, the test configuration, and/or the test control. Alternative definition: the state of the internal resources handled by the provider that are involved in the service operation definition (pre-/post-conditions).

Test control: a specification for the invocation of test cases within a test context. It is a technical specification of how the SUT should be tested with the given test context. (UTP)

Test directive: a group of instructions for performing test generation.

Test design directive: a generator-independent specification of how the test design activity shall be carried out, composed of a set of test design strategies and referring to a set of inputs.

Test design strategy: a specification of a single test design technique to be applied during test design via a test design directive. Examples of test design strategies are boundary-value analysis and transition coverage.

Test environment: environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test (IEEE Std. 610-12). 

Test execution: the process of running a test on the component or system under test, producing actual result(s). (ISTQB Glossary)

Test factoring: converting a long-running system test into a collection of many small unit tests.

Test generation: automated activity for deriving test-relevant artifacts such as test cases, test data, test oracles, and test code.

Test log: A chronological record of relevant details about the execution of tests [IEEE 829]. (ISTQB Glossary)

Test model: A model that specifies various testing aspects, such as test objectives, test plans, test architecture, test cases, test data, test directives etc. (UTP)

Test objective: a reason or purpose for designing and executing a test. (ISTQB Glossary)

Test outcome: The consequence/outcome of the execution of a test case. It includes outputs to screens, changes to data, reports and communication messages sent out. (ISTQB Glossary)

Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), other software, a user manual, or an individual’s specialized knowledge, but should not be the code. (ISTQB Glossary)
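
For illustration, a minimal Python sketch in which a simple, trusted reference implementation (permitted by the definition above as "other software") serves as the oracle for the unit under test; both functions are hypothetical:

    # The slow but trusted reference provides the expected results to
    # compare against the actual results of the unit under test.
    import random

    def sut_sort(items):
        """Unit under test (stands in for an optimized implementation)."""
        return sorted(items)

    def oracle_sort(items):
        """Trusted reference: selection sort, simple enough to trust."""
        remaining, out = list(items), []
        while remaining:
            smallest = min(remaining)
            remaining.remove(smallest)
            out.append(smallest)
        return out

    random.seed(7)
    for _ in range(100):
        data = [random.randrange(1000) for _ in range(random.randrange(20))]
        assert sut_sort(data) == oracle_sort(data)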

Test Oracle Automation: an automated oracle mechanism. Oracle automation is the first step towards test automation.

Test requirement: an item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element. (ISTQB Glossary)

Test run: execution of a test on a specific version of the test object. (ISTQB Glossary)

Test schedule: a list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies. (ISTQB Glossary)

Test step: the smallest atomic (i.e., indivisible) part of a test case specification that is executed by a test execution system during test case execution (UTP).

Test suite: a set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one. (ISTQB Glossary)

Testability: the capability of the software product to enable modified software to be tested. (ISTQB Glossary)

Test-based modelling: instrumenting design models with information useful for testing. It differs from design for testability. Contract design, or design by contract, is a typical form of test-based modelling.

Testing as a Service (TaaS): a cloud service that offers functionality for software testing in the form of a Web service.

Usage-based test generation: a way to generate test cases that mimic the behaviour of users in order to test highly used parts of a system intensively. The test case generation is based on usage profiles.

Usage-based test scheduling: a scheduling mechanism that prioritizes the execution of test cases such that highly used parts of systems are tested first. The scheduling is based on usage profiles.

Usage-based testing: a software testing approach that aims at optimizing the user-based quality through focusing on highly used parts of the system under test. The usage of a system is described with usage profiles.

Usage monitor: a monitor that records all events that describe the usage of a system.

Usage monitoring: the process of collecting data with a usage monitor.

Usage profile: a model of a system that describes how probable user interactions are, depending on the current state of the system. The model is a stochastic process over an automaton of the system.
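
For illustration, a minimal Python sketch of a usage profile as a first-order Markov chain; the web-shop states and the transition probabilities are hypothetical:

    # From each state, the next user event is drawn according to
    # probabilities such as might be trained from usage monitoring; the
    # profile can then drive usage-based test generation.
    import random

    profile = {  # P(next_state | current_state)
        "browse":   {"browse": 0.6, "cart": 0.3, "exit": 0.1},
        "cart":     {"browse": 0.2, "checkout": 0.5, "exit": 0.3},
        "checkout": {"exit": 1.0},
    }

    def generate_usage_trace(start="browse", seed=0):
        """Draw one plausible user session from the usage profile."""
        rng = random.Random(seed)
        state, trace = start, [start]
        while state != "exit":
            states, weights = zip(*profile[state].items())
            state = rng.choices(states, weights=weights)[0]
            trace.append(state)
        return trace

    print(generate_usage_trace(seed=3))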

Usage profile inference: the process of determining an accurate usage profile of a system. The inference consists of three stages: (1) usage monitoring, (2) usage trace mapping, and (3) usage profile training.

Usage profile training: calculating the probabilities of events based on monitored data.

Usage trace mapping: the mapping of the events recorded during usage monitoring to modelling elements of available models of the system under test.

User-based quality: a view of quality, wherein quality is the capacity to satisfy needs, wants and desires of the user(s). A product or service that does not fulfill user needs is unlikely to find any users. This is a context dependent, contingent approach to quality since different business characteristics require different qualities of a product. (ISTQB Glossary)

Validation: confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. (ISO 9000)

Verification: confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. (ISO 9000)

Virtual Machine (VM): a software implementation of a machine (i.e., a computer) that executes programs like a physical machine. It can be seen as a simulation of a machine (abstract or real) that is usually different from the target machine (where it is being simulated on). Virtual machines are separated into two major categories: system and process virtual machines.

Virtual Machine Image (VMI): a software application combined with just enough operating system for it to run optimally in a virtual machine. VMIs are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.

Virtualization: a means of uncoupling the execution of a software environment from the underlying physical resources (CPUs, Storage, networking, etc.) and their location, by presenting them as logical resources to applications. Virtualization technologies create and perform a mapping between physical and logical resources.

White-box testing: testing activity based on an analysis of the internal structure of the component or system. (ISTQB Glossary)

 

Acronyms

API     Application Programming Interface

ATS     Abstract Test Suite

CIM     Computation Independent Model

ETSI     European Telecommunications Standards Institute

GUI     Graphical User Interface

IaaS     Infrastructure as a Service

ICS     Implementation Conformance Statement

IEEE     Institute of Electrical and Electronics Engineers

ISO     International Organization for Standardization

ISTQB     International Software Testing Qualifications Board

IUT     Implementation Under Test

IXIT     Implementation eXtra Information for Testing

MDA     Model Driven Architecture

NIST     National Institute of Standards and Technology

OMG     Object Management Group

PaaS     Platform as a Service

PIM     Platform Independent Model

PSM     Platform Specific Model

SaaS     Software as a Service

SAUT     Services Architecture Under Test

SOA     Service Oriented Architecture

SOAP     Simple Object Access Protocol

SUT     System Under Test

TaaS     Testing as a Service

TP     Test Purpose

TPaaS     Testing Platform as a Service

TSS     Test Suite Structure

TTCN-3     Testing and Test Control Notation Version 3

UML     Unified Modeling Language

UTP     UML Testing Profile

VM     Virtual Machine

VMI     Virtual Machine Image

VMM     Virtual Machine Monitor

WSDL     Web Services Description Language

XML     Extensible Markup Language

XSD     XML Schema Definition