post-execution
comparison: Comparison
of actual and expected results, performed after the
software has finished running.
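A minimal sketch of this in Python (file names and contents are hypothetical): the comparison reads the output the run produced and checks it against a stored baseline only after execution has finished.

    from pathlib import Path

    # Hypothetical artifacts: the finished run wrote actual.txt;
    # expected.txt holds the reviewed baseline.
    Path("expected.txt").write_text("result=42\n")
    Path("actual.txt").write_text("result=42\n")

    def post_execution_compare(actual_path, expected_path):
        """Compare actual output with the expected baseline after execution."""
        return Path(actual_path).read_text() == Path(expected_path).read_text()

    print(post_execution_compare("actual.txt", "expected.txt"))  # True on a match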
precondition: Environmental
and state conditions that must be fulfilled before the component
or system can be executed with a
particular test or test procedure.
priority: The level of (business) importance assigned to an item, e.g. a defect.
probe effect: The effect of the measurement instrument on the component or system while it is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.
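A small Python sketch of the probe effect (the workload and repetition count are arbitrary): the same loop runs slightly slower when a timing probe records an entry on every iteration.

    import time

    def work():
        return sum(i * i for i in range(10_000))

    def run(n, probed):
        log = []
        start = time.perf_counter()
        for _ in range(n):
            work()
            if probed:
                log.append(time.perf_counter())  # the probe itself costs time
        return time.perf_counter() - start

    print("without probe:", run(200, probed=False))
    print("with probe:   ", run(200, probed=True))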
procedure testing: Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users’ business procedures or operational procedures.
process: A set of
interrelated activities, which transform inputs into outputs.
process cycle
test: A
black box test design technique in which test cases are designed to
execute business procedures and
processes.
process
improvement: A
program of activities designed to improve the performance and
maturity of the organization’s
processes, and the result of such a program.
product risk: A risk directly
related to the test object.
project: A unique set of coordinated and controlled activities, with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.
project risk: A risk related
to management and control of the (test) project, e.g. lack of
staffing, strict deadlines, changing
requirements, etc.
pseudo-random: A series which
appears to be random but is in fact generated according to
some prearranged sequence.
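For example, a seeded generator in Python produces a series that looks random but is fully determined by the prearranged seed, so it can be reproduced exactly.

    import random

    gen = random.Random(42)                          # prearranged seed
    first = [gen.randint(0, 99) for _ in range(5)]

    gen = random.Random(42)                          # same seed again
    second = [gen.randint(0, 99) for _ in range(5)]

    print(first == second)                           # True: same "random" series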
qualification: The process of
demonstrating the ability to fulfill specified requirements. Note that the term ‘qualified’ is used to designate the corresponding status.
quality: The degree to
which a component, system or process meets specified requirements
and/or user/customer needs and
expectations.
quality
assurance: Part
of quality management focused on providing confidence that quality
requirements will be fulfilled.
quality
attribute: A
feature or characteristic that affects an item’s quality.
quality
management: Coordinated
activities to direct and control an organization with regard
to quality. Direction and control
with regard to quality generally includes the establishment
of the quality policy and quality
objectives, quality planning, quality control, quality
assurance and quality improvement.
random testing: A black box test
design technique where test cases are selected, possibly
using a pseudo-random generation
algorithm, to match an operational profile. This
technique can be used for testing
non-functional attributes such as reliability and
performance.
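A minimal sketch of the idea in Python, with a hypothetical operational profile (the operation names and frequencies are assumptions): test inputs are drawn pseudo-randomly so that their mix matches how the system is used in operation.

    import random

    # Hypothetical operational profile: relative frequency of each operation.
    profile = {"browse": 0.70, "search": 0.25, "checkout": 0.05}

    gen = random.Random(7)   # seeded, so the random test run is reproducible
    test_cases = gen.choices(list(profile), weights=profile.values(), k=10)
    print(test_cases)        # a random sample matching the operational profile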
recoverability: The capability
of the software product to re-establish a specified level of
performance and recover the data
directly affected in case of failure.
recoverability
testing: The
process of testing to determine the recoverability of a software
product.
regression
testing: Testing
of a previously tested program following modification to ensure
that defects have not been
introduced or uncovered in unchanged areas of the software, as a
result of the changes made. It is
performed when the software or its environment is
changed.
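A minimal regression test in Python (the function and expected values are hypothetical): the assertions captured behavior that was correct before the modification and are re-run after every change to catch newly introduced or uncovered defects.

    def apply_discount(price, percent):
        """Hypothetical function under test."""
        return round(price * (1 - percent / 100), 2)

    def test_discount_regression():
        # Behavior verified before the change; it must still hold afterwards.
        assert apply_discount(100.0, 10) == 90.0
        assert apply_discount(19.99, 0) == 19.99

    test_discount_regression()
    print("regression suite passed")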
release note: A document
identifying test items, their configuration, current status and other
delivery information delivered by
development to testing, and possibly other stakeholders,
at the start of a test execution
phase.
reliability: The ability of
the software product to perform its required functions under stated
conditions for a specified period
of time, or for a specified number of operations.
reliability
growth model: A
model that shows the growth in reliability over time during
continuous testing of a component
or system as a result of the removal of defects that result
in reliability failures.
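One well-known instance (named here only as an illustration, not taken from this glossary) is the Goel-Okumoto model, in which the expected cumulative number of failures by time t is m(t) = a(1 - e^(-bt)); as defects are removed, the failure rate decays and reliability grows.

    import math

    # Goel-Okumoto reliability growth model; parameter values are hypothetical.
    a = 120.0   # expected total number of failures
    b = 0.05    # failure detection rate

    def expected_failures(t):
        return a * (1 - math.exp(-b * t))

    for week in (1, 10, 50):
        print(f"week {week}: {expected_failures(week):.1f} failures expected so far")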
reliability
testing: The
process of testing to determine the reliability of a software product.
replaceability: The capability
of the software product to be used in place of another specified
software product for the same
purpose in the same environment.
requirement: A condition or
capability needed by a user to solve a problem or achieve an
objective that must be met or
possessed by a system or system component to satisfy a
contract, standard,
specification, or other formally imposed document.
requirements-based
testing: An
approach to testing in which test cases are designed based
on test objectives and test
conditions derived from requirements, e.g. tests that exercise
specific functions or probe
non-functional attributes such as reliability or usability.
requirements
management tool: A
tool that supports the recording of requirements,
requirements attributes (e.g.
priority, knowledge responsible) and annotation, and
facilitates traceability through
layers of requirements and requirements change
management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and detection of violations of pre-defined requirements rules.
requirements
phase: The
period of time in the software life cycle during which the
requirements for a software
product are defined and documented.
resource
utilization: The
capability of the software product to use appropriate amounts and
types of resources, for example
the amounts of main and secondary memory used by the
program and the sizes of required
temporary or overflow files, when the software performs
its function under stated conditions.
resource utilization testing: The process of testing to determine the resource utilization of a software product.
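A small sketch of one such measurement using Python's standard library (the workload is an arbitrary stand-in for the function under test): tracemalloc reports current and peak main-memory use while the code performs its function.

    import tracemalloc

    tracemalloc.start()
    data = [i * i for i in range(100_000)]      # stand-in workload being measured
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"current: {current} bytes, peak: {peak} bytes")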
result: The consequence/outcome
of the execution of a test. It includes outputs to screens,
changes to data, reports, and
communication messages sent out.
resumption
criteria: The
testing activities that must be repeated when testing is re-started
after a suspension.
re-testing: Testing that
runs test cases that failed the last time they were run, in order to
verify the success of corrective
actions.
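A minimal sketch of the mechanics (all test names and outcomes are hypothetical): record which test cases failed, apply the corrective action, then execute only those cases again.

    # Outcomes of the previous run (hypothetical).
    last_run = {"test_login": True, "test_payment": False, "test_search": True}

    failed = [name for name, passed in last_run.items() if not passed]

    # After the defect fix, only the previously failing cases are re-executed.
    for name in failed:
        print("re-testing:", name)

Test runners commonly automate this selection; pytest, for example, provides a --last-failed option that re-runs only the tests that failed in the previous session.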
retrospective
meeting: A
meeting at the end of a project during which the project team
members evaluate the project and
learn lessons that can be applied to the next project.
review: An evaluation of
a product or project status to ascertain discrepancies from planned
results and to recommend
improvements. Examples include management review, informal
review, technical review,
inspection, and walkthrough.
reviewer: The person
involved in the review who identifies and describes anomalies in the
product or project under review.
Reviewers can be chosen to represent different viewpoints
and roles in the review process.
review tool: A tool that
provides support to the review process. Typical features include
review planning and tracking
support, communication support, collaborative reviews and a
repository for collecting and
reporting of metrics.
risk: A factor that
could result in future negative consequences; usually expressed as impact
and likelihood.
risk analysis: The process of
assessing identified risks to estimate their impact and
probability of occurrence
(likelihood).
risk-based
testing: An
approach to testing to reduce the level of product risks and inform
stakeholders on their status,
starting in the initial stages of a project. It involves the
identification of product risks
and their use in guiding the test process.
risk control: The process
through which decisions are reached and protective measures are
implemented for reducing risks
to, or maintaining risks within, specified levels.
risk
identification: The
process of identifying risks using techniques such as brainstorming,
checklists and failure history.
risk level: The importance of a risk as defined by its two characteristics, impact and likelihood.
The level of risk can be used to
determine the intensity of testing to be performed. A risk
level can be expressed either
qualitatively (e.g. high, medium, low) or quantitatively.
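One common quantitative scheme (an assumption here, not prescribed by the definition) scores each risk as impact times likelihood on small ordinal scales and then maps the product back to a qualitative band.

    def risk_level(impact, likelihood):
        """Impact and likelihood each on a 1-5 scale (hypothetical scales)."""
        score = impact * likelihood            # quantitative risk level
        if score >= 15:
            band = "high"
        elif score >= 8:
            band = "medium"
        else:
            band = "low"                       # qualitative expression
        return score, band

    print(risk_level(impact=5, likelihood=4))  # (20, 'high')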
risk management:
Systematic
application of procedures and practices to the tasks of
identifying, analyzing,
prioritizing, and controlling risk.
risk type: A specific
category of risk related to the type of testing that can mitigate (control)
that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.
robustness: The degree to
which a component or system can function correctly in the
presence of invalid inputs or
stressful environmental conditions.
robustness
testing: Testing
to determine the robustness of the software product.
root cause: A source of a
defect such that, if it is removed, the occurrence of the defect type is
decreased or removed.
root cause
analysis: An
analysis technique aimed at identifying the root causes of defects. By
directing corrective measures at
root causes, it is hoped that the likelihood of defect
recurrence will be minimized.
safety: The capability
of the software product to achieve acceptable levels of risk of harm to
people, business, software,
property or the environment in a specified context of use.
safety critical
system: A
system whose failure or malfunction may result in death or serious
injury to people, or loss or
severe damage to equipment, or environmental harm.
safety testing: Testing to
determine the safety of a software product.
scalability: The capability
of the software product to be upgraded to accommodate increased
loads.
scalability
testing: Testing
to determine the scalability of the software product.
scribe: The person who
records each defect mentioned and any suggestions for process
improvement during a review
meeting, on a logging form. The scribe has to ensure that the
logging form is readable and
understandable.
scripted
testing: Test
execution carried out by following a previously documented sequence
of tests.
scripting
language: A
programming language in which executable test scripts are written,
used by a test execution tool
(e.g. a capture/playback tool).
security: Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
security
testing: Testing
to determine the security of the software product.
security testing
tool: A
tool that provides support for testing security characteristics and
vulnerabilities.
security tool: A tool that
supports operational security.
severity: The degree of
impact that a defect has on the development or operation of a
component or system.
simulation: The
representation of selected behavioral characteristics of one physical or
abstract system by another
system.
simulator: A device,
computer program or system used during testing, which behaves or
operates like a given system when
provided with a set of controlled inputs.
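A minimal simulator in Python (the interface and responses are hypothetical): a stand-in that behaves like a payment gateway when provided with controlled inputs, so tests never have to touch the real system.

    class PaymentGatewaySimulator:
        """Behaves like the real gateway for a set of controlled inputs."""

        def charge(self, card_number, amount):
            if card_number.startswith("4000"):       # controlled decline input
                return {"status": "declined"}
            return {"status": "approved", "amount": amount}

    sim = PaymentGatewaySimulator()
    print(sim.charge("4000123412341234", 10.0))      # simulated decline
    print(sim.charge("4242424242424242", 10.0))      # simulated approval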
site acceptance
testing: Acceptance
testing by users/customers at their site, to determine
whether or not a component or
system satisfies the user/customer needs and fits within the
business processes, normally
including hardware as well as software.
smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.
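A minimal smoke test sketch in Python (the application object and its crucial functions are hypothetical): only the few checks that prove the build is basically alive, with finer details deliberately left to later testing.

    class App:
        """Hypothetical application under test."""
        def start(self):
            return True
        def login(self, user, password):
            return user == "admin" and password == "secret"

    def smoke_test(app):
        assert app.start(), "application failed to start"
        assert app.login("admin", "secret"), "basic login failed"

    smoke_test(App())
    print("smoke test passed: crucial functions work")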
software: Computer
programs, procedures, and possibly associated documentation and data
pertaining to the operation of a
computer system.
Software Failure
Mode and Effect Analysis (SFMEA): See Failure Mode and Effect
Analysis (FMEA).
Software Failure Mode, Effect and Criticality Analysis (SFMECA): See Failure Mode, Effect and Criticality Analysis (FMECA).