Software Fault Tree Analysis (SFTA): See Fault Tree Analysis (FTA).
software life cycle: The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.
software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.
Software Usability Measurement Inventory (SUMI): A questionnaire-based usability test technique to evaluate the usability, e.g. user satisfaction, of a component or system.
specification: A document that
specifies, ideally in a complete, precise and verifiable manner,
the requirements, design,
behavior, or other characteristics of a component or system, and, often, the procedures for
determining whether these provisions have been satisfied.
specified input: An input for which the specification predicts a result.
stability: The capability
of the software product to avoid unexpected effects from modifications
in the software.
staged representation: A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels.
state diagram: A diagram that
depicts the states that a component or system can assume, and
shows the events or circumstances
that cause and/or result from a change from one state to
another.
state table: A grid showing
the resulting transitions for each state combined with each possible event, showing both
valid and invalid transitions.
state transition: A transition between two states of a component or system.
state transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions.
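For illustration only, here is a minimal sketch of state transition testing for a hypothetical two-state Door object; the class and test names are assumptions, not part of the glossary source.

    # Hypothetical component with states "closed" and "open".
    class Door:
        def __init__(self):
            self.state = "closed"

        def open(self):
            if self.state != "closed":
                raise ValueError("invalid transition")  # open -> open is invalid
            self.state = "open"

        def close(self):
            if self.state != "open":
                raise ValueError("invalid transition")  # closed -> closed is invalid
            self.state = "closed"

    def test_valid_transition():
        door = Door()
        door.open()                       # valid: closed -> open
        assert door.state == "open"

    def test_invalid_transition():
        door = Door()
        try:
            door.close()                  # invalid: closed -> closed
            assert False, "expected the invalid transition to be rejected"
        except ValueError:
            pass

    test_valid_transition()
    test_invalid_transition()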
statement: An entity in a
programming language, which is typically the smallest indivisible unit of execution.
statement coverage: The percentage of executable statements that have been exercised by a test suite.
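As a worked example with assumed numbers (not from the glossary source): if a test suite exercises 45 of 50 executable statements, statement coverage is 45 / 50 = 90%.

    # Illustrative statement coverage calculation (numbers are assumed examples).
    executed_statements = 45
    total_executable_statements = 50
    statement_coverage = 100.0 * executed_statements / total_executable_statements
    print(f"statement coverage: {statement_coverage:.0f}%")  # -> 90%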
statement testing: A white box test design technique in which test cases are designed to execute statements.
static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
static analyzer: A tool that carries out static analysis.
static code analysis: Analysis of source code carried out without execution of that software.
static code analyzer: A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.
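As an illustration (a sketch, not from the glossary source), the following function contains the kind of data flow anomaly a static code analyzer can report without executing the code; the names are invented for the example.

    # Code a static code analyzer could flag without running it.
    def apply_discount(price, is_member):
        discount = 0.1             # assigned but never used (data flow anomaly)
        if is_member:
            rate = 0.2             # only defined on the member path
        return price * (1 - rate)  # warning: 'rate' possibly used before assignment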
static testing: Testing of a
component or system at specification or implementation level without execution of that
software, e.g. reviews or static code analysis.
statistical testing: A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
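A minimal sketch of this idea follows; the operational profile and operation names are assumptions made for illustration, not part of the glossary source.

    import random

    # Assumed operational profile: relative frequency of each input class in production use.
    operational_profile = {"view_account": 0.70, "transfer": 0.25, "close_account": 0.05}

    def draw_test_cases(n, seed=42):
        """Draw n representative test inputs according to the modeled distribution."""
        rng = random.Random(seed)
        operations = list(operational_profile)
        weights = [operational_profile[op] for op in operations]
        return rng.choices(operations, weights=weights, k=n)

    print(draw_test_cases(10))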
status accounting: An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes.
stress testing: A type of
performance testing conducted to evaluate a system or component at or beyond the limits of its
anticipated or specified work loads, or with reduced availability of resources such as access to
memory or servers.
stress testing tool: A tool that supports stress testing.
structural coverage: Coverage measures based on the internal structure of a component or system.
stub: A skeletal or
special-purpose implementation of a software component, used to develop or test a component that calls or
is otherwise dependent on it. It replaces a called component.
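A minimal sketch of a stub in use follows; all class and function names are illustrative assumptions, not from the glossary source.

    # The component under test depends on a payment gateway that is not yet available.
    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway          # the called / depended-on component

        def place_order(self, amount):
            return "confirmed" if self.gateway.charge(amount) else "rejected"

    # Stub: a skeletal replacement for the real gateway, returning canned answers.
    class PaymentGatewayStub:
        def charge(self, amount):
            return amount <= 100            # fixed, simplified behavior for testing

    def test_place_order_with_stub():
        service = OrderService(PaymentGatewayStub())
        assert service.place_order(50) == "confirmed"
        assert service.place_order(500) == "rejected"

    test_place_order_with_stub()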
subpath: A sequence of
executable statements within a component.
suitability: The capability
of the software product to provide an appropriate set of functions for specified tasks and user
objectives.
suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.
syntax testing: A black box test
design technique in which test cases are designed based upon the definition of the input
domain and/or output domain.
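An illustrative sketch follows; the input-domain definition for a date field is an assumption made for the example, not part of the glossary source.

    import itertools

    # Assumed definition of the input domain for a date field: DD-MM-YYYY.
    valid_days = ["01", "15", "31"]
    valid_months = ["01", "12"]
    valid_years = ["1999", "2024"]

    # Valid test inputs built from the syntax definition.
    valid_inputs = ["-".join(p) for p in itertools.product(valid_days, valid_months, valid_years)]

    # Invalid test inputs obtained by mutating the syntax (wrong separator, missing part, bad value).
    invalid_inputs = ["31/12/2024", "15-13-2024", "15-12", "aa-bb-cccc"]

    print(len(valid_inputs), "valid and", len(invalid_inputs), "invalid syntax test cases")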
system: A collection of
components organized to accomplish a specific function or set of functions.
system of systems: Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple interconnected domains, addressing large-scale inter-disciplinary common problems and purposes.
system integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).
system testing: The process of
testing an integrated system to verify that it meets specified requirements.
technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
test: A set of one or
more test cases.
test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goal and the risk assessment carried out, the starting points regarding the test process, the test design techniques to be applied, exit criteria and the test types to be performed.
test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.
test basis: All documents
from which the requirements of a component or system can be inferred. The documentation on
which the test cases are based. If a document can be amended only by way of formal
amendment procedure, then the test basis is called a frozen test basis.
test case: A set of input
values, execution preconditions, expected results and execution postconditions, developed for a
particular objective or test condition, such as to exercise a particular program path or to
verify compliance with a specific requirement.
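For illustration, a test case with the elements named above could be recorded as follows; the content is an assumed example, not from the glossary source.

    # An assumed example test case, structured around the elements in the definition.
    test_case = {
        "id": "TC-042",
        "objective": "Verify that a withdrawal above the balance is rejected",
        "preconditions": ["account exists", "balance is 100.00"],
        "input_values": {"operation": "withdraw", "amount": 150.00},
        "expected_result": "transaction rejected with error INSUFFICIENT_FUNDS",
        "postconditions": ["balance is unchanged at 100.00"],
    }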
test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
test charter: A statement of
test objectives, and possibly test ideas about how to test. Test charters are used in exploratory
testing.
test closure: During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report.
test comparator: A test tool to perform automated test comparison of actual results with expected results.
test comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
test condition: An item or event
of a component or system that could be verified by one or more test cases, e.g. a function,
transaction, feature, quality attribute, or structural element.
test control: A test
management task that deals with developing and applying a set of corrective actions to get a test
project on track when monitoring shows a deviation from what was planned.
test cycle: Execution of the
test process against a single identifiable release of the test object.
test data: Data that exists
(for example, in a database) before a test is executed, and that affects or is affected by the
component or system under test.
test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
test design: (1) See test design specification. (2) The process of transforming general testing objectives into tangible test conditions and test cases.
test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.
test design technique: Procedure used to derive and/or select test cases.
test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. a requirements management tool, from specified test conditions held in the tool itself, or from code.
test driven development: A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
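A minimal sketch of this test-first rhythm follows; the function and test names are illustrative assumptions, not from the glossary source.

    # Step 1: write (and automate) the test before the production code exists.
    def test_leap_year():
        assert is_leap_year(2024) is True
        assert is_leap_year(1900) is False   # divisible by 100 but not by 400
        assert is_leap_year(2000) is True

    # Step 2: write just enough production code to make the test pass.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    test_leap_year()   # the test now passes; refactor with the test as a safety net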
test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
test estimation: The calculated approximation of a result (e.g. effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncertain, or noisy.
test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.
test execution: The process of
running a test on the component or system under test, producing actual result(s).
test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
test execution phase: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.
test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
test execution technique: The method used to perform the actual test execution, either manual or automated.
test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.
test harness: A test
environment comprised of stubs and drivers needed to execute a test.
test implementation: The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
test input: The data
received from an external source by the test object during test execution. The external source can be
hardware, software or human.
test item: The individual
element to be tested. There usually is one test object and many test items.
test level: A group of test
activities that are organized and managed together. A test level is linked to the responsibilities in
a project. Examples of test levels are component test, integration test, system test and
acceptance test.
test log: A chronological
record of relevant details about the execution of tests.
test logging: The process of
recording information about tests executed into a test log.