
Software Testing Terms (Living Glossary)

by 코드네임피터 2016. 8. 10.



A

acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]

actual outcome: The behaviour actually produced when the object is tested under specified conditions.

ad hoc testing: Testing carried out using no recognised test case design technique.

alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved with the software developers.

arc testing: See branch testing.

B

Backus-Naur form: A metalanguage used to formally describe the syntax of a language. See BS .

basic block: A sequence of one or more consecutive, executable statements containing no branches.

basis test set: A set of test cases derived from the code logic which ensures that 100% branch coverage is achieved.

bebugging: See error seeding. [Abbott]

behavior: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviors.

beta testing: Operational testing at a site not otherwise involved with the software developers.

big-bang testing: Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.

black box testing: See functional test case design.

bottom-up testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary.

boundary value analysis: A test case design technique for a component in which test cases are designed which include representatives of boundary values.

boundary value coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.

boundary value testing: See boundary value analysis.
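
As an illustration, here is a minimal boundary value analysis sketch in Python, assuming a hypothetical grade() component whose specification partitions scores 0-100 into "fail" (below 50) and "pass" (50 and above); the test cases sit on each boundary and an incremental distance either side of it.

```python
# Minimal boundary value analysis sketch (assumed example). grade() is a hypothetical
# component; the boundaries of interest are the domain edges 0/100 and the class
# boundary between 49 ("fail") and 50 ("pass").

def grade(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Boundary values: on each boundary and an incremental distance either side of it.
boundary_cases = [
    (-1, ValueError),   # just below the input domain
    (0, "fail"),        # lower domain edge
    (49, "fail"),       # just below the pass boundary
    (50, "pass"),       # on the pass boundary
    (100, "pass"),      # upper domain edge
    (101, ValueError),  # just above the input domain
]

for value, expected in boundary_cases:
    if expected is ValueError:
        try:
            grade(value)
            raise AssertionError(f"{value}: expected ValueError")
        except ValueError:
            pass
    else:
        assert grade(value) == expected, f"{value}: expected {expected}"
print("all boundary value cases behaved as predicted")
```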

branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.

branch condition: See decision condition.

branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.

branch condition combination testing: A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

branch condition coverage: The percentage of branch condition outcomes in every decision that have been exercised by a test case suite.

branch condition testing: A test case design technique in which test cases are designed to execute branch condition outcomes.
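
A small sketch, using an assumed decision with two conditions, of the difference between branch condition coverage and branch condition combination coverage:

```python
# Assumed example: the decision "a > 0 and b > 0" contains two branch conditions.
# Branch condition coverage requires each individual condition to evaluate to both
# TRUE and FALSE; branch condition combination coverage requires every combination
# of condition outcomes. (The definitions treat both conditions as evaluated;
# Python's short-circuiting `and` is ignored for the purpose of this illustration.)

def both_positive(a: int, b: int) -> bool:
    return a > 0 and b > 0

condition_suite = [(1, 1), (-1, -1)]                       # each condition TRUE once and FALSE once
combination_suite = [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # all four combinations

for a, b in combination_suite:
    print(f"a>0={a > 0}, b>0={b > 0} -> decision={both_positive(a, b)}")
```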

branch coverage: The percentage of branches that have been exercised by a test case suite.

branch outcome: See decision outcome.

branch point: See decision.

branch testing: A test case design technique for a component in which test cases are designed to execute branch outcomes.
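
For example, a hypothetical apply_discount() component with two decisions can reach 100% branch coverage with two test cases; this sketch is an assumed example rather than anything from the source glossary.

```python
# Assumed example: apply_discount() contains two decisions, so 100% branch coverage
# requires the TRUE and FALSE outcome of each decision to be exercised at least once.

def apply_discount(total: float, is_member: bool) -> float:
    if is_member:        # decision 1
        total -= 10
    if total > 100:      # decision 2
        total -= 5
    return total

# Two test cases suffice here: together they exercise all four branch outcomes.
assert apply_discount(200.0, True) == 185.0   # decision 1 TRUE, decision 2 TRUE
assert apply_discount(50.0, False) == 50.0    # decision 1 FALSE, decision 2 FALSE
print("branch coverage suite passed")
```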

bug: See fault.

bug seeding: See error seeding.

C

C-use: See computation data use.

capture/playback tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time.

capture/replay tool: See capture/playback tool.

CAST: Acronym for computer-aided software testing.

cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

cause-effect graphing: A test case design technique in which test cases are designed by consideration of cause-effect graphs.

certification: The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. From [IEEE].

Chow's coverage metrics: See N-switch coverage. [Chow]

code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).

compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.

complete path testing: See exhaustive testing.

component: A minimal software item for which a separate specification is available.

component testing: The testing of individual software components. After [IEEE].

component specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.

computation data use: A data use not in a condition. Also called C-use.

condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not. [DO-178B]

condition coverage: See branch condition coverage.

condition outcome: The evaluation of a condition to TRUE or FALSE.

conformance criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.

conformance testing: The process of testing that an implementation conforms to the specification on which it is based.

control flow: An abstract representation of all possible sequences of events in a program's execution.

control flow graph: The diagrammatic representation of the possible alternative control flow paths through a component.

control flow path: See path.

conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

correctness: The degree to which software conforms to its specification.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.

coverage item: An entity or property used as a basis for testing.

D

data definition: An executable statement where a variable is assigned a value.

data definition C-use coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.

data definition C-use pair: A data definition and computation data use, where the data use uses the value defined in the data definition.

data definition P-use coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.

data definition P-use pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.

data definition-use coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.

data definition-use pair: A data definition and data use, where the data use uses the value defined in the data definition.

data definition-use testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
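
A brief sketch, assuming a hypothetical checkout() component, of where data definitions, P-uses and C-uses of a single variable occur; data definition-use testing would pick inputs that exercise each such pair.

```python
# Assumed example: definitions and uses of the variable `total` in a hypothetical
# checkout() component.

def checkout(prices):
    total = sum(prices)       # data definition of `total`
    if total > 100:           # predicate data use (P-use) of that definition
        total = total * 0.95  # C-use of the first definition, and a new definition
    return total              # C-use of whichever definition reaches this statement

checkout([60, 60])  # exercises: first definition -> P-use, -> C-use in the branch,
                    # and second definition -> C-use in the return
checkout([10])      # exercises: first definition -> C-use in the return
```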

data flow coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.

data flow testing: Testing in which test cases are designed based on variable usage within the code.

data use: An executable statement where the value of a variable is accessed.

debugging: The process of finding and removing the causes of failures in software.

decision: A program point at which the control flow has two or more alternative routes.

decision condition: A condition within a decision.

decision coverage: The percentage of decision outcomes that have been exercised by a test case suite.

decision outcome: The result of a decision (which therefore determines the control flow alternative taken).

design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms).

desk checking: The testing of software by the manual simulation of its execution.

dirty testing: See negative testing. [Beizer]

documentation testing: Testing concerned with the accuracy of documentation.

domain: The set from which values are selected.

domain testing: See equivalence partition testing.

dynamic analysis: The process of evaluating a system or component based upon its behaviour during execution. [IEEE]

E

emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE, DO-178B]

entry point: The first executable statement within a component.

equivalence class: A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

equivalence partition: See equivalence class.

equivalence partition coverage: The percentage of equivalence classes generated for the component, which have been exercised by a test case suite.

equivalence partition testing: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
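
As a sketch, assume a hypothetical valid_age() component: its input domain splits into three equivalence classes, and one representative is exercised from each rather than testing every possible value.

```python
# Assumed example: the input domain of valid_age() splits into three equivalence
# classes (below range, in range, above range); equivalence partition testing
# exercises one representative from each class.

def valid_age(age: int) -> bool:
    return 0 <= age <= 120

representatives = {"below range": -5, "in range": 30, "above range": 200}
expected = {"below range": False, "in range": True, "above range": False}

for name, value in representatives.items():
    assert valid_age(value) == expected[name], name
print("one representative per equivalence class passed")
```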

error: A human action that produces an incorrect result. [IEEE]

error guessing: A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them.

error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. [IEEE]

 executable statement: A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data.

exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.

 exhaustive testing: A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables.

 exit point: The last executable statement within a component.

 expected outcome: See predicted outcome.

F

 facility testing: See functional test case design.

 failure: Deviation of the software from its expected delivery or service. [Fenton]

fault: A manifestation of an error in software. A fault, if encountered, may cause a failure. [DO-178B]

feasible path: A path for which there exists a set of input values and execution conditions which causes it to be executed.

feature testing: See functional test case design.

fit for purpose testing:  Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was acquired.

functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS , Part]

 functional test case design: Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

G

glass box testing: See structural test case design.

H
I

 incremental testing: Integration testing where system components are integrated into the system one at a time until the entire system is integrated.

independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [DO-178B].

 infeasible path: A path which cannot be exercised by any set of possible input values.

 input: A variable (whether stored within a component or outside it) that is read by the component.

 input domain: The set of all possible inputs.

 input value: An instance of an input.

inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham].

installability:  The ability of a software component or system to be installed on a defined target platform allowing it to be run as required.  Installation includes both a new installation and an upgrade.

installability testing: Testing whether the software/system installation meets defined installation requirements.

installation guide:  Supplied instructions on any suitable media, which guides the installer through the installation process.  This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

installation wizard:  Supplied software on any suitable media, which leads the installer through the installation process.  It shall normally run the installation process, provide feedback on installation outcomes and prompt for options.

 instrumentation: The insertion of additional code into the program in order to collect information about program behaviour during program execution.

 instrumenter: A software tool used to carry out instrumentation.

 integration: The process of combining components into larger assemblies.

 integration testing: Testing performed to expose faults in the interfaces and in the interaction between integrated components.

 interface testing: Integration testing where the interfaces between system components are tested.

 isolation testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.

J
K
L

 LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

 LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.

 LCSAJ testing: A test case design technique for a component in which test cases are designed to execute LCSAJs.

 logic-coverage testing: See structural test case design. [Myers]

 logic-driven testing: See structural test case design.

M

maintainability:  The ease with which the system/software can be modified to correct faults, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

maintainability requirements:  A specification of the required maintainability for the system/software.

maintainability testing: Testing to determine whether the system/software meets the specified maintainability requirements.

 modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.

 modified condition/decision testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
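
A minimal sketch of modified condition/decision testing for the decision "a and b" (an assumed example): each condition is shown to independently affect the decision outcome while the other is held fixed, which here needs only three of the four possible combinations.

```python
# Assumed example: MC/DC for the decision "a and b".

def decision(a: bool, b: bool) -> bool:
    return a and b

mcdc_suite = [
    (True, True),    # baseline: decision is TRUE
    (False, True),   # only `a` flipped, decision flips -> `a` independently affects it
    (True, False),   # only `b` flipped, decision flips -> `b` independently affects it
]
for a, b in mcdc_suite:
    print(a, b, "->", decision(a, b))
```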

 multiple condition coverage: See branch condition combination coverage.

 mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
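
A rough sketch of mutation analysis, using a hypothetical discount calculation: the mutant differs from the original only in a relational operator, and a test case suite that includes the boundary input can distinguish (kill) it.

```python
# Assumed example of mutation analysis: a suite that cannot distinguish the mutant
# from the original is less thorough than one that "kills" it.

def original_discount(total):
    return total * 0.9 if total >= 100 else total

def mutant_discount(total):
    return total * 0.9 if total > 100 else total   # mutant: ">=" changed to ">"

weak_suite = [50, 200]          # never separates the two versions: the mutant survives
strong_suite = [50, 100, 200]   # input 100 produces different outputs: the mutant is killed

for name, suite in (("weak", weak_suite), ("strong", strong_suite)):
    killed = any(original_discount(x) != mutant_discount(x) for x in suite)
    print(f"{name} suite: mutant {'killed' if killed else 'survived'}")
```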

N

 N-switch coverage: The percentage of sequences of N-transitions that have been exercised by a test case suite.

 N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.

N-transitions: A sequence of N+1 transitions.
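
A small sketch on a toy turnstile state machine (assumed example): a 1-switch is a sequence of two consecutive transitions, so 1-switch (N=1) testing designs test cases around such pairs rather than around single transitions.

```python
# Assumed example: a toy turnstile state machine used to illustrate N-transitions.

transitions = {                       # (state, event) -> next state
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(start, events):
    state = start
    for event in events:
        state = transitions[(state, event)]
    return state

one_switch_cases = [                  # each test case exercises one pair of transitions
    ("locked", ["coin", "push"]),     # locked -> unlocked -> locked
    ("locked", ["push", "coin"]),     # locked -> locked -> unlocked
    ("locked", ["coin", "coin"]),     # locked -> unlocked -> unlocked
]
for start, events in one_switch_cases:
    print(start, events, "->", run(start, events))
```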

 negative testing: Testing aimed at showing software does not work. [Beizer]

non-functional requirements testing: Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.

O

operational testing: Testing conducted to evaluate a system or component in its operational environment. [IEEE]

 oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. After [adrion]
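
A minimal sketch of an oracle (assumed example): an independent reference computation supplies the predicted outcome against which the actual outcome of the software under test is compared.

```python
# Assumed example of a test oracle.

def sum_under_test(values):     # software under test (hypothetical hand-rolled loop)
    total = 0
    for v in values:
        total += v
    return total

def oracle(values):             # independent mechanism producing the predicted outcome
    return sum(values)

for case in ([], [1, 2, 3], [-5, 5, 10]):
    assert sum_under_test(case) == oracle(case), f"mismatch for {case}"
print("actual outcomes match predicted outcomes")
```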

 outcome: Actual outcome or predicted outcome. This is the outcome of a test. See also branch outcome, condition outcome and decision outcome.

 output: A variable (whether stored within a component or outside it) that is written to by the component.

output domain: The set of all possible outputs.

 output value: An instance of an output.

P

 P-use: See predicate data use.

 partition testing: See equivalence partition testing. [Beizer]

 path: A sequence of executable statements of a component, from an entry point to an exit point.

 path coverage: The percentage of paths in a component exercised by a test case suite.

 path sensitizing: Choosing a set of input values to force the execution of a component to take a given path.

 path testing: A test case design technique in which test cases are designed to execute paths of a component.

 performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]

portability:  The ease with which the system/software can be transferred from one hardware or software environment to another.

portability requirements:  A specification of the required portability for the system/software.

portability testing: Testing to determine whether the system/software meets the specified portability requirements.

 precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.

 predicate: A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code.

 predicate data use: A data use in a predicate.

 predicted outcome: The behaviour predicted by the specification of an object under specified conditions.

 program instrumenter: See instrumenter.

 progressive testing: Testing of new features after regression testing of previous features. [Beizer]

 pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
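
For example, a pseudo-random sequence produced from a fixed seed looks random but can be reproduced exactly, which matters when a test run must be repeatable (Python's random module is used here purely as an illustration).

```python
# Illustration only: the same seed yields the same "random" sequence on every run.
import random

random.seed(42)
first_run = [random.randint(0, 9) for _ in range(5)]
random.seed(42)
second_run = [random.randint(0, 9) for _ in range(5)]

assert first_run == second_run      # identical sequences from the same seed
print(first_run)
```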

Q
R

 recovery testing: Testing aimed at verifying the system's ability to recover from varying degrees of failure.

regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

reliability:  The ability of the system/software to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.

reliability requirements:  A specification of the required reliability for the system/software.

reliability testing:  Testing to determine whether the system/software meets the specified reliability requirements.

requirement:  A capability that must be met or possessed by the system/software (requirements may be functional or non-functional).

 requirements-based testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). See functional test case design.

 result: See outcome.

review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE]

S

security: Preservation of confidentiality, integrity and availability of information, where availability is ensuring that authorized users have access to information and associated assets when required, integrity is safeguarding the accuracy and completeness of information and processing methods, and confidentiality is ensuring that information is accessible only to those authorized to have access.

security requirements: A specification of the required security for the system/software.

security testing: Testing to determine whether the system/software meets the specified security requirements.

 serviceability testing: See maintainability testing.

 simple subpath: A subpath of the control flow graph in which no program part is executed more than necessary.

 simulation: The representation of selected behavioural characteristics of one physical or abstract system by another system. [ISO /].

simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE, DO-178B]

 source statement: See statement.

 specification: A description, in any suitable form, of requirements.

specification testing:  An approach to testing wherein the testing is restricted to verifying the system/software meets the specification.

 specified input: An input for which the specification predicts an outcome.

 state transition: A transition between two allowable states of a system or component.

state transition testing: A test case design technique in which test cases are designed to execute state transitions.

statement: An entity in a programming language which is typically the smallest indivisible unit of execution.

 statement coverage: The percentage of executable statements in a component that have been exercised by a test case suite.

 statement testing: A test case design technique for a component in which test cases are designed to execute statements.
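
A short sketch (hypothetical abs_value() component) showing why 100% statement coverage does not imply 100% branch coverage: a single test case executes every statement yet never exercises the FALSE outcome of the decision.

```python
# Assumed example: statement coverage versus branch coverage.

def abs_value(x):
    if x < 0:
        x = -x
    return x

statement_suite = [-3]      # executes every statement; misses the FALSE branch outcome
branch_suite = [-3, 3]      # exercises both decision outcomes

for x in branch_suite:
    print(x, "->", abs_value(x))
```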

 static analysis: Analysis of a program carried out without executing the program.

 static analyzer: A tool that carries out static analysis.

 static testing: Testing of an object without execution on a computer.

statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.

 storage testing: Testing whether the system meets its specified storage objectives.

 stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE]

 structural coverage: Coverage measures based on the internal structure of the component.

 structural test case design: Test case selection that is based on an analysis of the internal structure of the component.

 structural testing: See structural test case design.

structured basis testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.

 structured walkthrough: See walkthrough.

 stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. After [IEEE].
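
A minimal sketch of a stub (assumed example): the component under test depends on a payment gateway, which is replaced by a skeletal implementation so the caller can be tested in isolation from the real service.

```python
# Assumed example: a stub standing in for a dependency of the component under test.

class PaymentGatewayStub:
    """Skeletal stand-in for the real gateway; always reports a successful charge."""
    def charge(self, amount: float) -> bool:
        return True

def place_order(gateway, amount: float) -> str:   # component under test
    return "confirmed" if gateway.charge(amount) else "rejected"

assert place_order(PaymentGatewayStub(), 25.0) == "confirmed"
print("component tested in isolation using a stub")
```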

 subpath: A sequence of executable statements within a component.

 symbolic evaluation: See symbolic execution.

symbolic execution: A static analysis technique that derives a symbolic expression for program paths.

 syntax testing: A test case design technique for a component or system in which test case design is based upon the syntax of the input.

system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

T

 technical requirements testing: See non-functional requirements testing.

 test automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

test case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE, DO-178B]

 test case design technique: A method used to derive or select test cases.

 test case suite: A collection of one or more test cases for the software under test.

 test comparator: A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case.

 test completion criterion: A criterion for determining when planned testing is complete, defined in terms of a test measurement technique.

 test coverage: See coverage.

 test driver: A program or test tool used to execute software against a test case suite.

 test environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

 test execution: The processing of a test case suite by the software under test, producing an outcome.

 test execution technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.

 test generator: A program that generates test cases in accordance to a specified strategy or heuristic. After [Beizer].

 test harness: A testing tool that comprises a test driver and a test comparator.
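
A minimal sketch of a test harness (assumed example): a test driver that executes the software under test against a test case suite, combined with a test comparator that checks actual outputs against expected outputs.

```python
# Assumed example of a minimal test harness (driver + comparator).

def software_under_test(x: int) -> int:
    return x * x

test_case_suite = [(0, 0), (3, 9), (-4, 16)]   # (input value, expected output)

def run_harness(suite):
    failures = []
    for input_value, expected in suite:
        actual = software_under_test(input_value)    # driver: execute the test
        if actual != expected:                        # comparator: check the outcome
            failures.append((input_value, expected, actual))
    return failures

print("failures:", run_harness(test_case_suite))
```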

 test measurement technique: A method used to measure test coverage items.

 test outcome: See outcome.

test plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.

 test procedure: A document providing detailed instructions for the execution of one or more test cases.

 test records: For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and actual outcome.

 test script: Commonly used to refer to the automated test procedure used with a test harness.

test specification: For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.

 test target: A set of test completion criteria.

testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. After [DO-178B].

 thread testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

 top-down testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

U

 unit testing: See component testing.

usability:  The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.

usability requirements:  A specification of the required usability for the system/software.

usability testing: Testing to determine whether the system/software meets the specified usability requirements.

V

 validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.

verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]

 volume testing: Testing where the system is subjected to large volumes of data.

W

walkthrough: A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review.

white box testing: See structural test case design.

X
Y
Z



Source: http://www.testingstandards.co.uk/living_glossary.htm

