input domain: The set from
which valid input values can be selected.
inspection: A type of peer
review that relies on visual examination of documents to detect
defects, e.g. violations of
development standards and non-conformance to higher level
documentation. The most formal
review technique and therefore always based on a
documented procedure.
installability: The capability
of the software product to be installed in a specified
environment.
installability
testing: The
process of testing the installability of a software product.
installation
guide: Supplied
instructions on any suitable media, which guides the installer
through the installation process.
This may be a manual guide, step-by-step procedure,
installation wizard, or any other
similar process description.
installation
wizard: Supplied
software on any suitable media, which leads the installer
through the installation process.
It normally runs the installation process, provides
feedback on installation results,
and prompts for options.
instrumentation:
The
insertion of additional code into the program in order to collect
information about program
behavior during execution, e.g. for measuring code coverage.
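As an illustration, a minimal Python sketch of instrumentation by hand (the classify function and the probe IDs are hypothetical, invented for the example); a real instrumenter inserts such probe code automatically:

    # Hand-instrumented sketch: the probe() calls are the inserted code
    # that collects coverage information during execution.
    executed_probes = set()

    def probe(probe_id):
        executed_probes.add(probe_id)       # record that this point was reached

    def classify(value):                    # hypothetical function under test
        probe("entry")
        if value < 0:
            probe("negative-branch")
            return "negative"
        probe("non-negative-branch")
        return "non-negative"

    if __name__ == "__main__":
        classify(-5)
        classify(3)
        print("probes hit:", sorted(executed_probes))   # rough coverage evidence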
instrumenter: A software tool
used to carry out instrumentation.
intake test: A special
instance of a smoke test to decide if the component or system is ready
for detailed and further testing.
An intake test is typically carried out at the start of the test
execution phase.
integration: The process of
combining components or systems into larger assemblies.
integration
testing: Testing
performed to expose defects in the interfaces and in the
interactions between integrated
components or systems.
interface
testing: An
integration test type that is concerned with testing the interfaces
between components or systems.
interoperability:
The
capability of the software product to interact with one or more
specified components or systems.
interoperability
testing: The
process of testing to determine the interoperability of a
software product.
invalid testing:
Testing
using input values that should be rejected by the component or
system.
isolation
testing: Testing
of individual components in isolation from surrounding
components, with surrounding
components being simulated by stubs and drivers, if needed.
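A minimal sketch of testing in isolation, assuming a hypothetical price_with_tax component: the stub simulates the surrounding tax-rate component, and the test function acts as the driver.

    def price_with_tax(net_price, tax_rate_provider):       # component under test
        return round(net_price * (1 + tax_rate_provider()), 2)

    def stub_tax_rate():                  # stub: fixed, predictable answer
        return 0.20

    def test_price_with_tax_in_isolation():   # driver: exercises the isolated component
        assert price_with_tax(100.0, stub_tax_rate) == 120.0

    if __name__ == "__main__":
        test_price_with_tax_in_isolation()
        print("isolation test passed")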
iterative
development model: A
development life cycle where a project is broken into a
usually large number of
iterations. An iteration is a complete development loop resulting in
a release (internal or external)
of an executable product, a subset of the final product under
development, which grows from
iteration to iteration to become the final product.
keyword driven
testing: A
scripting technique that uses data files to contain not only test
data and expected results, but
also keywords related to the application being tested. The
keywords are interpreted by
special supporting scripts that are called by the control script
for the test.
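A small Python sketch of the idea, with a hypothetical inventory application: the table rows hold keywords plus test data and expected results, the supporting functions implement each keyword, and the loop is the control script.

    inventory = {}

    def add_item(name, qty):        # supporting script for the "add" keyword
        inventory[name] = inventory.get(name, 0) + int(qty)

    def check_item(name, expected): # supporting script for the "check" keyword
        assert inventory.get(name, 0) == int(expected)

    KEYWORDS = {"add": add_item, "check": check_item}

    test_table = [                  # would normally live in an external data file
        ("add",   "widget", "3"),
        ("add",   "widget", "2"),
        ("check", "widget", "5"),
    ]

    for keyword, *args in test_table:   # control script interprets the keywords
        KEYWORDS[keyword](*args)
    print("keyword-driven table executed")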
LCSAJ: A Linear Code
Sequence And Jump, consisting of the following three items
(conventionally identified by
line numbers in a source code listing): the start of the linear
sequence of executable
statements, the end of the linear sequence, and the target line to
which control flow is transferred
at the end of the linear sequence.
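For illustration, a hypothetical Python listing with conventional line numbers given in comments; two of its LCSAJs are identified below the code.

    def first_positive(values):     # line 1
        i = 0                       # line 2
        while i < len(values):      # line 3
            if values[i] > 0:       # line 4
                return values[i]    # line 5
            i += 1                  # line 6
        return None                 # line 7

    # One LCSAJ: linear sequence starting at line 2, ending at line 3,
    # with a jump to line 7 (loop condition false on entry).
    # Another: start at line 2, end at line 5, jump to the exit
    # (first element is positive, so the return transfers control out).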
LCSAJ coverage: The percentage
of LCSAJs of a component that have been exercised by a
test suite. 100% LCSAJ coverage
implies 100% decision coverage.
LCSAJ testing: A white box test
design technique in which test cases are designed to execute
LCSAJs.
learnability: The capability
of the software product to enable the user to learn its application.
level test plan: A test plan that
typically addresses one test level.
load profile: A specification
of the activity which a component or system being tested may
experience in production. A load
profile consists of a designated number of virtual users
who process a defined set of
transactions in a specified time period and according to a
predefined operational profile.
load testing: A type of
performance testing conducted to evaluate the behavior of a
component or system with
increasing load, e.g. numbers of parallel users and/or numbers
of transactions, to determine
what load can be handled by the component or system.
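A minimal load-testing sketch under stated assumptions: the transaction function stands in for a real system call, and the loop ramps up the number of parallel users against a fixed set of transactions; a real load test would drive the deployed system through a dedicated tool.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction():                      # stand-in for a real system call
        time.sleep(0.01)

    for users in (1, 5, 10, 20):            # increasing load
        start = time.time()
        with ThreadPoolExecutor(max_workers=users) as pool:
            for _ in range(100):            # fixed set of transactions
                pool.submit(transaction)
        elapsed = time.time() - start
        print(f"{users:>2} parallel users: {elapsed:.2f}s for 100 transactions")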
low level test
case: A
test case with concrete (implementation level) values for input data and
expected results. Logical
operators from high level test cases are replaced by actual values
that correspond to the objectives
of the logical operators. See also high level test case.
maintenance: Modification of
a software product after delivery to correct defects, to improve
performance or other attributes,
or to adapt the product to a modified environment.
maintenance
testing: Testing
the changes to an operational system or the impact of a
changed environment to an
operational system.
maintainability:
The
ease with which a software product can be modified to correct defects,
modified to meet new
requirements, modified to make future maintenance easier, or
adapted to a changed environment.
maintainability
testing: The
process of testing to determine the maintainability of a software
product.
management
review: A
systematic evaluation of software acquisition, supply, development,
operation, or maintenance
process, performed by or on behalf of management that
monitors progress, determines the
status of plans and schedules, confirms requirements and
their system allocation, or
evaluates the effectiveness of management approaches to
achieve fitness for purpose.
master test
plan: A
test plan that typically addresses multiple test levels.
maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The capability of the software product to avoid failure as a result of defects in the software.
measure: The number or
category assigned to an attribute of an entity by making a
measurement.
measurement: The process of
assigning a number or category to an entity to describe an
attribute of that entity.
measurement
scale: A
scale that constrains the type of data analysis that can be performed
on it.
memory leak: A defect in a
program's dynamic store allocation logic that causes it to fail to
reclaim memory after it has
finished using it, eventually causing the program to fail due to
lack of memory.
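The allocation-logic defect described above is most typical of languages with manual memory management; a Python sketch can only approximate it, e.g. with a hypothetical request handler whose cache keeps every finished result alive so the memory is never reclaimed.

    _results_cache = {}

    def handle_request(request_id):
        payload = bytearray(1024 * 1024)        # ~1 MiB of working data
        _results_cache[request_id] = payload    # defect: entry is never removed
        return len(payload)

    if __name__ == "__main__":
        for i in range(100):                    # memory use grows with every call
            handle_request(i)
        print("cached entries:", len(_results_cache))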
metric: A measurement
scale and the method used for measurement.
milestone: A point in time
in a project at which defined (intermediate) deliverables and
results should be ready.
modelling tool: A tool that supports the validation of models of the software or system.
moderator: The leader and
main person responsible for an inspection or other review
process.
monitor: A software tool
or hardware device that runs concurrently with the component or
system under test and supervises,
records and/or analyses the behavior of the component or
system.
monkey testing: Testing by means
of a random selection from a large range of inputs and by
randomly pushing buttons,
ignorant of how the product is being used.
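A minimal monkey-testing sketch, assuming a hypothetical parse_quantity function: inputs are selected at random from a large range of printable strings, with no knowledge of intended usage, and the only check is that the function does not crash unexpectedly.

    import random
    import string

    def parse_quantity(text):                 # hypothetical component under test
        return int(text.strip() or "0")

    random.seed(1)
    for _ in range(1000):
        length = random.randint(0, 10)
        text = "".join(random.choice(string.printable) for _ in range(length))
        try:
            parse_quantity(text)
        except ValueError:
            pass                              # rejected input is acceptable
        # any other exception would indicate a robustness defect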
multiple
condition coverage: The
percentage of combinations of all single condition
outcomes within one statement
that have been exercised by a test suite. 100% multiple
condition coverage implies 100%
condition determination coverage.
multiple
condition testing: A
white box test design technique in which test cases are
designed to execute combinations
of single condition outcomes (within one statement).
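For a statement containing two single conditions, 100% multiple condition coverage requires all four outcome combinations, as in this sketch with a hypothetical free_shipping function:

    def free_shipping(total, is_member):
        if total >= 50 and is_member:     # conditions: (total >= 50), (is_member)
            return True
        return False

    test_cases = [                         # (total, is_member, expected)
        (60, True,  True),    # True  / True
        (60, False, False),   # True  / False
        (40, True,  False),   # False / True
        (40, False, False),   # False / False
    ]
    for total, member, expected in test_cases:
        assert free_shipping(total, member) == expected
    print("all four condition combinations exercised")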
mutation
analysis: A
method to determine test suite thoroughness by measuring the extent to
which a test suite can
discriminate the program from slight variants (mutants) of the
program.
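A small sketch of the idea with a hand-written mutant (real mutation tools generate and execute the mutants automatically); the boundary test case distinguishes, i.e. kills, the mutant.

    def discount(quantity):                  # original program
        return 0.1 if quantity >= 10 else 0.0

    def discount_mutant(quantity):           # mutant: ">=" changed to ">"
        return 0.1 if quantity > 10 else 0.0

    def run_suite(func):
        try:
            assert func(5) == 0.0
            assert func(10) == 0.1           # boundary case that kills the mutant
            assert func(20) == 0.1
            return "passed"
        except AssertionError:
            return "failed"

    print("original:", run_suite(discount))         # passed
    print("mutant:  ", run_suite(discount_mutant))  # failed -> mutant killed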
N-switch
coverage: The
percentage of sequences of N+1 transitions that have been exercised
by a test suite.
N-switch
testing: A
form of state transition testing in which test cases are designed to execute
all valid sequences of N+1
transitions.
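A sketch for a hypothetical document workflow: 0-switch sequences are the single valid transitions, and 1-switch (N = 1) sequences are the valid chains of two consecutive transitions.

    transitions = {                 # (state, event) -> next state
        ("draft",  "submit"):  "review",
        ("review", "approve"): "published",
        ("review", "reject"):  "draft",
    }

    # 0-switch test sequences: every single transition.
    zero_switch = [[event] for (_state, event) in transitions]

    # 1-switch test sequences: every valid pair of consecutive transitions.
    one_switch = []
    for (s1, e1), target in transitions.items():
        for (s2, e2) in transitions:
            if s2 == target:
                one_switch.append((s1, [e1, e2]))

    print("0-switch sequences:", zero_switch)
    print("1-switch sequences:", one_switch)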
negative
testing: Tests
aimed at showing that a component or system does not work.
Negative testing is related to
the testers’ attitude rather than a specific test approach or test
design technique, e.g. testing
with invalid input values or exceptions.
non-conformity: Non-fulfillment of a specified requirement.
non-functional
requirement: A
requirement that does not relate to functionality, but to
attributes such as reliability,
efficiency, usability, maintainability and portability.
non-functional
testing: Testing
the attributes of a component or system that do not relate to
functionality, e.g. reliability,
efficiency, usability, maintainability and portability.
non-functional test design techniques: A procedure to derive and/or select test cases for non-functional testing based on an analysis of the specification of a component or system without reference to its internal structure.
off-the-shelf
software: A
software product that is developed for the general market, i.e. for a
large number of customers, and
that is delivered to many customers in identical format.
operability: The capability
of the software product to enable the user to operate and control it.
operational
acceptance testing: Operational
testing in the acceptance test phase, typically
performed in a simulated
real-life operational environment by operator and/or
administrator focusing on
operational aspects, e.g. recoverability, resource-behavior,
installability and technical
compliance.
operational
environment: Hardware
and software products installed at users’ or customers’
sites where the component or
system under test will be used. The software may include
operating systems, database
management systems, and other applications.
operational
profile:
The representation of a distinct set of tasks performed by the component
or system, possibly based on user
behavior when interacting with the component or
system, and their probabilities
of occurrence. A task is logical rather than physical and can
be executed over several machines
or be executed in non-contiguous time segments.
operational
profile testing: Statistical
testing using a model of system operations (short
duration tasks) and their probability
of typical use.
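In the spirit of statistical testing, a sketch that draws tasks according to their assumed probabilities in a hypothetical operational profile and executes them:

    import random

    profile = {            # hypothetical operational profile
        "search":   0.70,
        "browse":   0.25,
        "checkout": 0.05,
    }

    def execute(task):                       # stand-in for driving the real system
        return f"executed {task}"

    random.seed(7)
    tasks = random.choices(list(profile), weights=profile.values(), k=1000)
    for task in tasks:
        execute(task)

    for name in profile:
        print(f"{name:<9} {tasks.count(name) / len(tasks):.2%} of the run")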
operational
testing: Testing
conducted to evaluate a component or system in its operational
environment.
orthogonal array: A
2-dimensional array constructed with special mathematical properties,
such that choosing any two
columns in the array provides every pair combination of each
number in the array.
orthogonal array
testing: A
systematic way of testing all-pair combinations of variables
using orthogonal arrays. It
significantly reduces the number of all combinations of
variables to test all pair
combinations.
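A sketch using the classic L4(2^3) orthogonal array (the mapping of parameters to columns is an assumption for the example); the check confirms that any two columns contain every pair of values, so four test cases cover all pairs of three two-valued parameters instead of eight.

    from itertools import combinations, product

    L4 = [            # rows = test cases, columns = parameters (values 0/1)
        (0, 0, 0),
        (0, 1, 1),
        (1, 0, 1),
        (1, 1, 0),
    ]

    for c1, c2 in combinations(range(3), 2):              # any two columns...
        pairs = {(row[c1], row[c2]) for row in L4}
        assert pairs == set(product((0, 1), repeat=2))    # ...show every value pair

    print("4 test cases cover all pairs of 3 two-valued parameters")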
output: A variable
(whether stored within a component or outside) that is written by a
component.
output domain: The set from
which valid output values can be selected.
output value: An instance of
an output.
pair
programming: A
software development approach whereby lines of code (production
and/or test) of a component are
written by two programmers sitting at a single computer.
This implicitly means ongoing
real-time code reviews are performed.
pair testing: Two persons,
e.g. two testers, a developer and a tester, or an end-user and a
tester, working together to find
defects. Typically, they share one computer and trade
control of it while testing.
pairwise
testing: A
black box test design technique in which test cases are designed to
execute all possible discrete
combinations of each pair of input parameters.
pass: A test is deemed
to pass if its actual result matches its expected result.
pass/fail
criteria: Decision
rules used to determine whether a test item (function) or feature
has passed or failed a test.
path: A sequence of
events, e.g. executable statements, of a component or system from an
entry point to an exit point.
path coverage: The percentage
of paths that have been exercised by a test suite. 100% path
coverage implies 100% LCSAJ
coverage.
path
sensitizing: Choosing
a set of input values to force the execution of a given path.
path testing: A white box test
design technique in which test cases are designed to execute
paths.
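A sketch for a hypothetical fee function with two independent decisions, giving four entry-to-exit paths; one test case is designed per path.

    def fee(age, is_student):
        amount = 100
        if age < 18:            # decision 1
            amount -= 20
        if is_student:          # decision 2
            amount -= 30
        return amount

    path_cases = [              # (age, is_student, expected), one per path
        (16, True,  50),        # both branches taken
        (16, False, 80),        # only the age branch
        (30, True,  70),        # only the student branch
        (30, False, 100),       # neither branch
    ]
    for age, student, expected in path_cases:
        assert fee(age, student) == expected
    print("all four paths exercised")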
peer review: A review of a
software work product by colleagues of the producer of the
product for the purpose of
identifying defects and improvements. Examples are inspection,
technical review and walkthrough.
performance: The degree to
which a system or component accomplishes its designated
functions within given
constraints regarding processing time and throughput rate.
performance
indicator: A
high level metric of effectiveness and/or efficiency used to guide
and control progressive
development, e.g. lead-time slip for software development.
performance
profiling: Definition
of user profiles in performance, load and/or stress testing.
Profiles should reflect
anticipated or actual usage based on an operational profile of a
component or system, and hence
the expected workload.
performance
testing: The
process of testing to determine the performance of a software
product.
performance
testing tool: A
tool to support performance testing and that usually has two
main facilities: load generation
and test transaction measurement. Load generation can
simulate either multiple users or
high volumes of input data. During execution, response
time measurements are taken from
selected transactions and these are logged. Performance
testing tools normally provide
reports based on test logs and graphs of load against
response times.
phase test plan:
A
test plan that typically addresses one test phase.
pointer: A data item that
specifies the location of another data item; for example, a data item
that specifies the address of the
next employee record to be processed.
portability: The ease with
which the software product can be transferred from one hardware
or software environment to
another.
portability
testing: The
process of testing to determine the portability of a software product.
postcondition: Environmental
and state conditions that must be fulfilled after the execution
of a test or test procedure.