test manager: The person
responsible for project management of testing activities and resources, and evaluation of a
test object. The individual who directs, controls, administers, plans and regulates the
evaluation of a test object.
test management:
The
planning, estimating, monitoring and control of test activities, typically carried out by a test
manager.
test management
tool: A
tool that provides support to the test management and control part of a test process. It often has
several capabilities, such as testware management, scheduling of tests, the logging of results,
progress tracking, incident management and test reporting.
Test Maturity
Model (TMM): A
five level staged framework for test process improvement, related to the Capability
Maturity Model (CMM), that describes the key elements of an effective test process.
Test Maturity
Model Integrated (TMMi): A five level staged framework for test process improvement, related to the
Capability Maturity Model Integration (CMMI), that describes the key elements of an effective
test process.
test monitoring:
A
test management task that deals with the activities related to periodically checking the status of a test
project. Reports are prepared that compare actual progress against what was planned.
test object: The component or
system to be tested.
test objective: A reason or
purpose for designing and executing a test.
test oracle: A source to
determine expected results to compare with the actual result of the software under test. An oracle
may be the existing system (for a benchmark), a user manual, or an individual’s
specialized knowledge, but should not be the code.
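As an illustration of comparing against a test oracle, the following minimal sketch uses an existing reference implementation as the oracle for the software under test. The names legacy_sort and new_sort are hypothetical and not taken from this glossary.

    # Illustrative sketch: an existing system acts as the oracle that
    # supplies the expected result for the software under test.
    def legacy_sort(items):          # existing benchmark system (the oracle)
        return sorted(items)

    def new_sort(items):             # software under test (placeholder)
        return sorted(items)

    def test_new_sort_matches_oracle():
        data = [3, 1, 2]
        expected = legacy_sort(data)     # expected result from the oracle
        actual = new_sort(data)          # actual result from the test object
        assert actual == expected

    test_new_sort_matches_oracle()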
test performance
indicator: A
high level metric of effectiveness and/or efficiency used to guide and control progressive
test development, e.g. Defect Detection Percentage (DDP).
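One commonly used formulation of Defect Detection Percentage (an assumption here, not defined in this entry) divides the defects found by testing by the total of defects found during testing and defects found afterwards. A minimal sketch:

    # Illustrative sketch of one common DDP formulation (hypothetical
    # function name): defects found in testing as a percentage of all
    # defects found during and after testing.
    def defect_detection_percentage(found_in_test, found_after_release):
        total = found_in_test + found_after_release
        return 100.0 * found_in_test / total

    # e.g. 90 defects found in testing and 10 found later -> DDP of 90.0
    print(defect_detection_percentage(90, 10))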
test phase: A distinct set
of test activities collected into a manageable phase of a project, e.g. the execution activities of a
test level.
test plan: A document
describing the scope, approach, resources and schedule of intended test activities. It identifies
amongst others test items, the features to be tested, the testing tasks, who will do each task,
degree of tester independence, the test environment, the test design techniques and entry and
exit criteria to be used, and the rationale for their choice, and any risks requiring
contingency planning. It is a record of the test planning process.
test planning: The activity of
establishing or updating a test plan.
test policy: A high level
document describing the principles, approach and major objectives of the organization regarding
testing.
Test Point
Analysis (TPA): A
formula based test estimation method based on function point
analysis.
test procedure
specification: A
document specifying a sequence of actions for the execution of a test. Also known as test
script or manual test script.
test process: The fundamental
test process comprises test planning and control, test analysis and design, test implementation
and execution, evaluating exit criteria and reporting, and
test closure activities.
Test Process
Improvement (TPI): A
continuous framework for test process improvement that describes the key elements
of an effective test process, especially targeted at system
testing and acceptance testing.
test progress
report:
A document summarizing testing activities and results, produced at regular intervals, to report
progress of testing activities against a baseline (such as the original test plan) and to
communicate risks and alternatives requiring a decision to
management.
test
reproducibility: An
attribute of a test indicating whether the same results are produced each time the test is executed.
test run: Execution of a
test on a specific version of the test object.
test schedule: A list of
activities, tasks or events of the test process, identifying their intended start and finish dates and/or
times, and interdependencies.
test script: Commonly used to
refer to a test procedure specification, especially an automated
one.
test session: An uninterrupted
period of time spent in executing tests. In exploratory testing, each test session is focused on a
charter, but testers can also explore new opportunities or issues during a session. The
tester creates and executes test cases on the fly and records
their progress.
test
specification: A
document that consists of a test design specification, test case specification and/or test
procedure specification.
test strategy: A high-level
description of the test levels to be performed and the testing within those levels for an organization
or programme (one or more projects).
test suite: A set of several
test cases for a component or system under test, where the postcondition of one test is often
used as the precondition for the next one.
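A minimal sketch of such a suite, with hypothetical names: the state left behind by the first test case is the starting point of the second.

    # Illustrative sketch: two test cases that share a test object, so the
    # postcondition of the first is the precondition of the second.
    class Cart:
        def __init__(self):
            self.items = []
        def add(self, item):
            self.items.append(item)

    cart = Cart()                      # shared test object

    def test_add_first_item():         # postcondition: cart holds one item
        cart.add("book")
        assert cart.items == ["book"]

    def test_add_second_item():        # precondition: cart already holds "book"
        cart.add("pen")
        assert cart.items == ["book", "pen"]

    # Running the cases in this order forms the suite.
    test_add_first_item()
    test_add_second_item()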
test summary
report: A
document summarizing testing activities and results. It also contains an evaluation of the
corresponding test items against exit criteria.
test target: A set of exit
criteria.
test tool: A software
product that supports one or more test activities, such as planning and control, specification, building
initial files and data, test execution and test analysis.
test type: A group of test
activities aimed at testing a component or system focused on a specific test objective, e.g.
functional test, usability test, regression test, etc. A test type may take place on one or more test
levels or test phases.
testability: The capability
of the software product to enable modified software to be tested.
testability
review: A
detailed check of the test basis to determine whether the test basis is at an adequate quality level to act
as an input document for the test process.
testable
requirements: The
degree to which a requirement is stated in terms that permit establishment of test designs
(and subsequently test cases) and execution of tests to determine whether the
requirements have been met.
tester: A skilled
professional who is involved in the testing of a component or system.
testing: The process
consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and
evaluation of software products and related work products to determine that they satisfy
specified requirements, to demonstrate that they are fit for purpose and to detect defects.
testware: Artifacts
produced during the test process required to plan, design, and execute tests, such as documentation,
scripts, inputs, expected results, set-up and clear-up procedures, files, databases,
environment, and any additional software or utilities used in
testing.
thread testing: A version of
component integration testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by
levels of a hierarchy.
top-down
testing: An
incremental approach to integration testing where the component at the top of the component hierarchy is
tested first, with lower level components being simulated by stubs. Tested components are
then used to test lower level components. The process is repeated until the lowest level
components have been tested.
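A minimal sketch of the idea, with hypothetical names: the top-level component is tested first while its lower-level collaborator is simulated by a stub.

    # Illustrative sketch: the lower-level price service is replaced by a
    # stub that returns a canned response, so the top-level component can
    # be tested before the real service exists.
    def price_lookup_stub(item):            # stub simulating the real component
        return 10.0

    def order_total(items, price_lookup):   # top-level component under test
        return sum(price_lookup(item) for item in items)

    def test_order_total_with_stub():
        assert order_total(["a", "b"], price_lookup_stub) == 20.0

    test_order_total_with_stub()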
traceability: The ability to
identify related items in documentation and software, such as requirements with associated
tests. See also horizontal traceability, vertical traceability.
understandability:
The
capability of the software product to enable the user to understand whether the software is suitable,
and how it can be used for particular tasks and conditions of
use.
unreachable
code: Code
that cannot be reached and therefore is impossible to execute.
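A minimal sketch of unreachable code (hypothetical function): the final statement can never execute because every path returns before it.

    def absolute_value(x):
        if x >= 0:
            return x
        else:
            return -x
        print("never executed")   # unreachable code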
usability: The capability
of the software to be understood, learned, used and attractive to the user when used under specified conditions.
usability
testing: Testing
to determine the extent to which the software product is understood, easy to learn, easy
to operate and attractive to the users under specified
conditions.
use case: A sequence of
transactions in a dialogue between a user and the system with a tangible result.
use case
testing: A
black box test design technique in which test cases are designed to execute user scenarios.
user test: A test whereby
real-life users are involved to evaluate the usability of a component or system.
unit test
framework: A
tool that provides an environment for unit or component testing in which a component can be tested
in isolation or with suitable stubs and drivers. It also provides other support for the
developer, such as debugging capabilities.
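As one example of such a tool, the following sketch uses Python's built-in unittest module; the component greet and its collaborator fetch_name are hypothetical, and the collaborator is replaced by a stub so the component is tested in isolation.

    # Illustrative sketch: a unit test framework (unittest) running a
    # component in isolation, with its collaborator stubbed out.
    import unittest
    from unittest import mock

    def fetch_name():                  # collaborator, stubbed in the test
        raise RuntimeError("not available in the test environment")

    def greet():                       # component under test
        return "Hello, " + fetch_name()

    class GreetTest(unittest.TestCase):
        def test_greet_in_isolation(self):
            # Replace the collaborator with a stub returning a canned value.
            with mock.patch(__name__ + ".fetch_name", return_value="Ada"):
                self.assertEqual(greet(), "Hello, Ada")

    if __name__ == "__main__":
        unittest.main()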
V-model: A framework to
describe the software development life cycle activities from requirements specification to
maintenance. The V-model illustrates how testing activities can be integrated into each phase
of the software development life cycle.
validation: Confirmation by
examination and through provision of objective evidence that the requirements for a specific
intended use or application have been fulfilled.
variable: An element of
storage in a computer that is accessible by a software program by referring to it by a name.
verification: Confirmation by
examination and through provision of objective evidence that specified requirements have been
fulfilled.
vertical
traceability: The
tracing of requirements through the layers of development documentation to components.
volume testing: Testing where
the system is subjected to large volumes of data.
walkthrough: A step-by-step
presentation by the author of a document in order to gather information and to establish a
common understanding of its content.
white-box test
design technique: Procedure
to derive and/or select test cases based on an analysis of the internal
structure of a component or system.
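A minimal sketch of the idea, with a hypothetical function: the test cases are chosen by inspecting the internal structure so that both branches of the decision are executed.

    def classify(age):
        if age >= 18:        # decision with two branches
            return "adult"
        return "minor"

    def test_true_branch():
        assert classify(30) == "adult"   # exercises the if-branch

    def test_false_branch():
        assert classify(5) == "minor"    # exercises the other path

    test_true_branch()
    test_false_branch()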
white-box
testing: Testing
based on an analysis of the internal structure of the component or
system.
Wide Band
Delphi: An
expert based test estimation technique that aims at making an accurate estimation using the
collective wisdom of the team members.
wild pointer: A pointer that
references a location that is out of scope for that pointer or that
does not exist.