
CSTE CBOK 2006 Table of Contents

by 코드네임피터, January 23, 2017



[Introduction to the CSTE Program]

Intro.1. Software Certification Overview 

Intro.1.1. Contact Us 

Intro.1.2. Program History 

Intro.1.3. Why Become Certified? 

Intro.1.4. Benefits of Becoming a CSTE 

Intro.1.4.1. Value Provided to the Profession 

Intro.1.4.2. Value Provided to the Individual 

Intro.1.4.3. Value Provided to the Employer 

Intro.1.4.4. How to Improve Testing Effectiveness Through CSTE Certification 

Intro.2. Meeting the CSTE Qualifications 

Intro.2.1. Prerequisites for Candidacy 

Intro.2.1.1. Educational and Professional Prerequisites 

Intro.2.1.2. Non-U.S. Prerequisites 

Intro.2.1.3. Expectations of the CSTE 

Intro.2.2. Code of Ethics 

Intro.2.2.1. Purpose 

Intro.2.2.2. Responsibility 

Intro.2.2.3. Professional Code of Conduct 

Intro.2.2.4. Grounds for Decertification 

Intro.2.3. Submitting the Initial Application 

Intro.2.3.1. Correcting Application Errors 

Intro.2.3.2. Submitting Application Changes 

Intro.2.4. Application-Examination Eligibility Requirements 

Intro.2.4.1. Filing a Retake Application 

Intro.3. Arranging to Sit and Take the Examination 

Intro.3.1. Scheduling to Take the Examination 

Intro.3.1.1. Rescheduling the Examination Sitting 

Intro.3.2. Receiving the Confirmation Letter 

Intro.3.3. Checking Examination Arrangements 

Intro.3.4. Arriving at the Examination Site 

Intro.3.4.1. No-shows 

Intro.4. How to Maintain Competency and Improve Value 

Intro.4.1. Continuing Professional Education 

Intro.4.2. Advanced CSTE Designations 

Intro.4.2.1. What is the Certification Competency Emphasis? 

Intro.5. Assess Your CSTE 2006 CBOK Competency 

Intro.5.1. Complete the CSTE Skill Assessment Worksheet 

Intro.5.2. Calculate Your CSTE CBOK Competency Rating 

Intro.6. Understand the Key Principles Incorporated Into the Examination 

Intro.7. Review the List of References 

Intro.8. Initiate a Self-Study Program 

Intro.9. Take the Sample Examination 


[CSTE 2006 Skill Assessment Worksheet]

Assess Your Skills against the CSTE 2006 CBOK

Skill Category 1 – Software Testing Principles and Concepts

Skill Category 2 – Building the Test Environment

Skill Category 3 – Managing the Test Project

Skill Category 4 – Test Planning

Skill Category 5 – Executing the Test Plan

Skill Category 6 – Test Reporting Process

Skill Category 7 – User Acceptance Testing

Skill Category 8 – Testing Software Developed by Contractors

Skill Category 9 – Testing Internal Control

Skill Category 10 – Testing New Technologies

CSTE 2006 CBOK Competency Rating Table


[Software Testing Principles and Concepts]

1.1. Vocabulary 

1.1.1. Quality Assurance Versus Quality Control 

1.1.1.1. Quality Assurance 

1.1.1.2. Quality Control 

1.1.2. The Cost of Quality 

1.1.3. Software Quality Factors 

1.1.3.1. How to Identify Important Software Quality Factors 

1.1.3.2. Inventory Control System Example 

1.1.4. How Quality is Defined 

1.1.5. Definitions of Quality 

1.1.6. What is Quality Software? 

1.1.6.1. The Two Software Quality Gaps 

1.1.6.2. What is Excellence? 

1.2. What is Life Cycle Testing? 

1.2.1. Why Do We Test Software? 

1.2.2. Developers are not Good Testers 

1.2.3. What is a Defect? 

1.2.4. Software Process Defects 

1.2.4.1. What Does It Mean For a Process To Be In or Out of Control? 

1.2.4.2. Do Testers Need to Know SPC? 

1.2.5. Software Product Defects 

1.2.5.1. Software Design Defects 

1.2.5.2. Data Defects 

1.2.6. Finding Defects 

1.3. Reducing the Frequency of Defects in Software Development 

1.3.1. The Five Levels of Maturity 

1.3.1.1. Level 1 – Ad Hoc 

1.3.1.2. Level 2 – Control 

1.3.1.3. Level 3 – Core Competency 

1.3.1.4. Level 4 – Predictable 

1.3.1.5. Level 5 – Innovative 

1.3.2. Testers Need to Understand Process Maturity 

1.4. Factors Affecting Software Testing 

1.4.1. People Relationships 

1.4.2. Scope of Testing 

1.4.3. Misunderstanding Life Cycle Testing 

1.4.3.1. Requirements 

1.4.3.2. Design 

1.4.3.3. Program (Build/Construction) 

1.4.3.4. Test Process 

1.4.3.5. Installation 

1.4.3.6. Maintenance 

1.4.4. Poorly Developed Test Planning 

1.4.5. Testing Constraints 

1.4.5.1. Budget and Schedule Constraints 

1.4.5.2. Lacking or Poorly Written Requirements 

1.4.5.3. Changes in Technology 

1.4.5.4. Limited Tester Skills 

1.5. Life Cycle Testing 

1.6. Test Matrices 

1.6.1. Cascading Test Matrices 

1.7. Independent Testing 

1.8. Tester’s Workbench 

1.8.1. What is a Process? 

1.8.1.1. The PDCA View of a Process 

1.8.1.2. The Workbench View of a Process 

1.8.1.3. Workbenches are Incorporated into a Process 

1.9. Levels of Testing 

1.9.1. Verification versus Validation 

1.9.1.1. Computer System Verification and Validation Examples 

1.9.1.2. Functional and Structural Testing 

1.9.2. Static versus Dynamic Testing 

1.9.3. The “V” Concept of Testing 

1.9.3.1. An 11-Step Software Testing Process Example 

1.10. Testing Techniques 

1.10.1. Structural versus Functional Technique Categories 

1.10.1.1. Structural System Testing Technique Categories 

1.10.1.2. Functional System Testing Technique Categories 

1.10.2. Examples of Specific Testing Techniques 

1.10.2.1. White-Box Testing 

1.10.2.2. Black-Box Testing 

1.10.2.3. Incremental Testing 

1.10.2.4. Thread Testing 

1.10.2.5. Requirements Tracing 

1.10.2.6. Desk Checking and Peer Review 

1.10.2.7. Walkthroughs, Inspections, and Reviews 

1.10.2.8. Proof of Correctness Techniques 

1.10.2.9. Simulation 

1.10.2.10. Boundary Value Analysis 

1.10.2.11. Error Guessing and Special Value Analysis 

1.10.2.12. Cause-Effect Graphing 

1.10.2.13. Design-Based Functional Testing 

1.10.2.14. Coverage-Based Testing 

1.10.2.15. Complexity-Based Testing 

1.10.2.16. Statistical Analyses and Error Seeding 

1.10.2.17. Mutation Analysis 

1.10.2.18. Flow Analysis 

1.10.2.19. Symbolic Execution 

1.10.3. Combining Specific Testing Techniques 


[Building the Test Environment]

2.1. Management Support 

2.1.1. Management Tone 

2.1.2. Integrity and Ethical Values 

2.1.2.1. Incentives and Temptations 

2.1.2.2. Providing and Communicating Moral Guidance 

2.1.3. Commitment to Competence 

2.1.4. Management’s Philosophy and Operating Style 

2.1.5. Organizational Structure 

2.1.5.1. Assignment of Authority and Responsibility 

2.1.5.2. Human Resource Policies and Practices 

2.2. Test Work Processes 

2.2.1. The Importance of Work Processes 

2.2.2. Developing Work Processes 

2.2.2.1. Defining the Attributes of a Standard for a Standard 

2.2.2.2. Developing a Test Standard 

2.2.3. Tester’s Workbench 

2.2.4. Responsibility for Building Work Processes 

2.2.4.1. Responsibility for Policy 

2.2.4.2. Responsibility for Standards and Procedures 

2.2.4.3. Test Process Selection 

2.2.4.4. Building a Process Engineering Organization 

2.2.4.5. Professional Test Standards 

2.2.5. Analysis and Improvement of the Test Process 

2.2.5.1. Test Process Analysis 

2.2.5.2. Continuous Improvement 

2.2.5.3. Test Process Improvement Model 

2.2.5.4. Test Process Alignment 

2.2.5.5. Adapting the Test Process to Different Software Development Methodologies 

2.3. Test Tools 

2.3.1. Tool Development and Acquisition 

2.3.1.1. Sequence of Events to Select Testing Tools 

2.3.2. Classes of Test Tools 

2.4. Testers' Competency 


[Managing the Test Project]

3.1. Test Administration 

3.1.1. Test Planning 

3.1.2. Budgeting 

3.1.2.1. Budgeting Techniques 

3.1.2.2. Tracking Budgeting Changes 

3.1.3. Scheduling 

3.1.4. Staffing 

3.1.4.1. Test Team Approaches 

3.1.5. Customization of the Test Process 

3.2. Test Supervision 

3.2.1. Communication Skills 

3.2.1.1. Written and Oral Communication 

3.2.1.2. Listening Skills 

3.2.1.3. The 3-Step Listening Process 

3.2.1.4. Interviewing Skills 

3.2.1.5. Analyzing Skills 

3.2.2. Negotiation and Complaint Resolution Skills 

3.2.2.1. Negotiation 

3.2.2.2. Resolving Complaints 

3.2.2.3. The 

3.2.3. Judgment 

3.2.4. Providing Constructive Criticism 

3.2.5. Project Relationships 

3.2.6. Motivation, Mentoring, and Recognition 

3.2.6.1. Motivation 

3.2.6.2. Mentoring 

3.2.6.3. Recognition 

3.3. Test Leadership 

3.3.1. Chairing Meetings 

3.3.2. Team Building 

3.3.2.1. Team Development 

3.3.2.2. Team Member Interaction 

3.3.2.3. Team Ethics 

3.3.2.4. Team Rewards 

3.3.3. Quality Management Organizational Structure 

3.3.4. Code of Ethics 

3.3.4.1. Responsibility 

3.4. Managing Change 

3.4.1. Software Configuration Management 

3.4.2. Software Change Management 

3.4.3. Software Version Control 

3.4.3.1. Example 


[Test Planning]

4.1. Risk Concepts and Vocabulary 

4.2. Risks Associated with Software Development 

4.2.1. Improper Use of Technology 

4.2.2. Repetition of Errors 

4.2.3. Cascading of Errors 

4.2.4. Illogical Processing 

4.2.5. Inability to Translate User Needs into Technical Requirements 

4.2.6. Inability to Control Technology 

4.2.7. Incorrect Entry of Data 

4.2.8. Concentration of Data 

4.2.9. Inability to React Quickly 

4.2.10. Inability to Substantiate Processing 

4.2.11. Concentration of Responsibilities 

4.2.12. Erroneous or Falsified Input Data 

4.2.13. Misuse by Authorized End Users 

4.2.14. Uncontrolled System Access 

4.2.15. Ineffective Security and Privacy Practices for the Application 

4.2.16. Procedural Errors during Operations 

4.2.16.1. Procedures and Controls 

4.2.16.2. Storage Media Handling 

4.2.17. Program Errors 

4.2.18. Operating System Flaws 

4.2.19. Communications System Failure 

4.2.19.1. Accidental Failures 

4.2.19.2. Intentional Acts 

4.3. Risks Associated with Software Testing 

4.3.1. Premature Release Risk 

4.4. Risk Analysis 

4.4.1. Risk Analysis Process 

4.4.1.1. Form the Risk Analysis Team 

4.4.1.2. Identify Risks 

4.4.1.3. Estimate the Magnitude of the Risk 

4.4.1.4. Select Testing Priorities 

4.5. Risk Management 

4.5.1. Risk Reduction Methods 

4.5.2. Contingency Planning 

4.6. Prerequisites to Test Planning 

4.6.1. Test Objectives 

4.6.2. Acceptance Criteria 

4.6.3. Assumptions 

4.6.4. People Issues 

4.6.5. Constraints 

4.7. Create the Test Plan 

4.7.1. Understand the Characteristics of the Software being Developed 

4.7.2. Build the Test Plan 

4.7.2.1. Set Test Objectives 

4.7.2.2. Develop the Test Matrix 

4.7.2.3. Define Test Administration 

4.7.2.4. State Test Plan General Information 

4.7.3. Write the Test Plan 

4.7.3.1. Guidelines to Writing the Test Plan 

4.7.3.2. Test Plan Standard 


[Executing the Test Plan]

5.1. Test Case Design 

5.1.1. Functional Test Cases 

5.1.1.1. Design Specific Tests for Testing Code 

5.1.1.2. Functional Testing Independent of the Specification Technique 

5.1.1.3. Functional Testing Based on the Interface 

5.1.1.4. Functional Testing Based on the Function to be Computed 

5.1.1.5. Functional Testing Dependent on the Specification Technique 

5.1.2. Structural Test Cases 

5.1.2.1. Structural Analysis 

5.1.2.2. Structural Testing 

5.1.3. Erroneous Test Cases 

5.1.3.1. Statistical Methods 

5.1.3.2. Error-Based Testing 

5.1.4. Stress Test Cases 

5.1.5. Test Scripts 

5.1.5.1. Determine Testing Levels 

5.1.5.2. Develop the Scripts 

5.1.5.3. Execute the Script 

5.1.5.4. Analyze the Results 

5.1.5.5. Maintain Scripts 

5.1.6. Use Cases 

5.1.6.1. Build a System Boundary Diagram 

5.1.6.2. Define Use Cases 

5.1.6.3. Develop Test Cases 

5.1.6.4. Test Objective 

5.1.7. Building Test Cases 

5.1.8. Process for Building Test Cases 

5.1.9. Example of Creating Test Cases for a Payroll Application 

5.2. Test Coverage 

5.3. Performing Tests 

5.3.1. Platforms 

5.3.2. Test Cycle Strategy 

5.3.3. Use of Tools in Testing 

5.3.3.1. Test Documentation 

5.3.3.2. Test Drivers 

5.3.3.3. Automatic Test Systems and Test Languages 

5.3.4. Perform Tests 

5.3.4.1. Perform Unit Testing 

5.3.4.2. Perform Integration Test 

5.3.4.3. Perform System Test 

5.3.5. When is Testing Complete? 

5.3.6. General Concerns 

5.4. Recording Test Results 

5.4.1. Problem Deviation 

5.4.2. Problem Effect 

5.4.3. Problem Cause 

5.4.4. Use of Test Results 

5.5. Defect Management 

5.5.1. Defect Naming Guidelines 

5.5.1.1. Name of the Defect 

5.5.1.2. Defect Severity 

5.5.1.3. Defect Type 

5.5.1.4. Defect Class 

5.5.2. The Defect Management Process 

5.5.2.1. Defect Prevention 

5.5.2.2. Deliverable Baseline 

5.5.2.3. Defect Discovery 

5.5.2.4. Defect Resolution 

5.5.2.5. Process Improvement 


[Test Reporting Process]

6.1. Prerequisites to Test Reporting 

6.1.1. Define and Collect Test Status Data 

6.1.1.1. Test Results Data 

6.1.1.2. Test Case Results and Test Verification Results 

6.1.1.3. Defects 

6.1.1.4. Efficiency 

6.1.2. Define Test Metrics used in Reporting 

6.1.3. Define Effective Test Metrics 

6.1.3.1. Objective versus Subjective Measures 

6.1.3.2. How Do You Know a Metric is Good? 

6.1.3.3. Standard Units of Measure 

6.1.3.4. Productivity versus Quality 

6.1.3.5. Test Metric Categories 

6.2. Test Tools used to Build Test Reports 

6.2.1. Pareto Charts 

6.2.1.1. Deployment 

6.2.1.2. Examples 

6.2.1.3. Results 

6.2.1.4. Recommendations 

6.2.2. Pareto Voting 

6.2.2.1. Deployment 

6.2.2.2. Example 

6.2.3. Cause and Effect Diagrams 

6.2.3.1. Deployment 

6.2.3.2. Results 

6.2.3.3. Examples 

6.2.3.4. Recommendation 

6.2.4. Check Sheets 

6.2.4.1. Deployment 

6.2.4.2. Results 

6.2.4.3. Examples 

6.2.4.4. Recommendations 

6.2.4.5. Example Check Sheet 

6.2.5. Histograms 

6.2.5.1. Variation of a Histogram 

6.2.5.2. Deployment 

6.2.5.3. Results 

6.2.5.4. Examples 

6.2.5.5. Recommendations 

6.2.6. Run Charts 

6.2.6.1. Deployment 

6.2.6.2. Results 

6.2.6.3. Examples 

6.2.6.4. Recommendations 

6.2.7. Scatter Plot Diagrams 

6.2.7.1. Deployment 

6.2.7.2. Results 

6.2.7.3. Examples 

6.2.8. Regression Analysis 

6.2.8.1. Deployment 

6.2.8.2. Results 

6.2.9. Multivariate Analysis 

6.2.9.1. Deployment 

6.2.9.2. Results 

6.2.10. Control Charts 

6.2.10.1. Deployment 

6.2.10.2. Results 

6.2.10.3. Examples 

6.3. Test Tools used to Enhance Test Reporting 

6.3.1. Benchmarking 

6.3.1.1. A Ten-Step Process to Collect Benchmark Data 

6.3.2. Quality Function Deployment 

6.4. Reporting Test Results 

6.4.1. Current Status Test Reports 

6.4.1.1. Function Test Matrix 

6.4.1.2. Defect Status Report 

6.4.1.3. Functional Testing Status Report 

6.4.1.4. Functions Working Timeline 

6.4.1.5. Expected versus Actual Defects Uncovered Timeline 

6.4.1.6. Defects Uncovered versus Corrected Gap Timeline 

6.4.1.7. Average Age of Uncorrected Defects by Type 

6.4.1.8. Defect Distribution Report 

6.4.1.9. Relative Defect Distribution Report 

6.4.1.10. Testing Action Report 

6.4.1.11. Individual Project Component Test Results 

6.4.1.12. Summary Project Status Report 

6.4.1.13. Individual Project Status Report 

6.4.2. Final Test Reports 

6.4.2.1. Description of Test Reports 

6.4.2.2. Integration Test Report 

6.4.2.3. System Test Report 

6.4.3. Guidelines for Report Writing 


[User Acceptance Testing]

7.1. Acceptance Testing Concepts 

7.1.1. Difference between Acceptance Test and System Test 

7.2. Roles and Responsibilities 

7.2.1. User’s Role 

7.2.2. Software Tester’s Role 

7.3. Acceptance Test Planning 

7.3.1. Acceptance Criteria 

7.3.2. Acceptance Test Plan 

7.3.3. Use Case Test Data 

7.4. Acceptance Test Execution 

7.4.1. Execute the Acceptance Test Plan 

7.4.2. Acceptance Decision 


[Testing Software Developed by Contractors] 

8.1. Challenges in Testing Acquired Software 

8.1.1. Purchased COTS Software 

8.1.1.1. Evaluation versus Assessment 

8.1.2. Contracted Software 

8.1.2.1. Additional Differences with Contractors in another Country (Offshore) 

8.1.2.2. Software Tester’s Responsibility for Software Developed by a Contractor 

8.2. COTS Software Test Process 

8.2.1. Assure Completeness of Needs Specification 

8.2.1.1. Define Critical Success Factor 

8.2.1.2. Determine Compatibility with Your Computer Environment 

8.2.1.3. Assure the Software can be Integrated into Your Business System Work Flow 

8.2.1.4. Demonstrate the Software in Operation 

8.2.1.5. Evaluate the People Fit 

8.2.1.6. Acceptance Test the COTS Software 

8.3. Contracted Software Test Process 

8.3.1. Assure the Process for Contracting Software is Adequate 

8.3.2. Review the Adequacy of the Contractor’s Test Plan 

8.3.3. Assure Development is Effective and Efficient 

8.3.4. Perform Acceptance Testing on the Software 

8.3.5. Issue a Report on the Adequacy of the Software to Meet the Needs of the Organization 

8.3.6. Ensure Knowledge Transfer Occurs and Intellectual Property Rights are Protected 

8.3.7. Incorporate Copyrighted Material into the Contractor’s Manuals 

8.3.8. Assure the Ongoing Operation and Maintenance of the Contracted Software 

8.3.9. Assure the Effectiveness of Contractual Relations 


[Testing Software Controls and the Adequacy of Security Procedures]

9.1. Principles and Concepts of Internal Control 

9.1.1. Internal Control Responsibilities 

9.1.2. Software Tester’s Internal Control Responsibilities 

9.1.3. Internal Auditor’s Internal Control Responsibilities 

9.1.4. Risk versus Control 

9.1.5. Environmental versus Transaction Processing Controls 

9.1.5.1. Environmental or General Controls 

9.1.6. Transaction Processing Controls 

9.1.7. Preventive, Detective and Corrective Controls 

9.1.7.1. Preventive Controls 

9.1.7.2. Detective Controls 

9.2. Internal Control Models 

9.2.1. COSO Enterprise Risk Management (ERM) Model 

9.2.1.1. The ERM Process 

9.2.1.2. Components of ERM 

9.2.2. COSO Internal Control Framework Model 

9.2.2.1. Example of a Transaction Processing Internal Control System 

9.2.3. CobiT Model 

9.3. Testing Internal Controls 

9.3.1. Perform Risk Assessment 

9.3.2. Test Transaction Processing Controls 

9.3.2.1. Transaction Origination 

9.3.2.2. Transaction Entry 

9.3.2.3. Transaction Communications 

9.3.2.4. Transaction Processing 

9.3.2.5. Database Storage and Retrieval 

9.3.2.6. Transaction Output 

9.4. Testing Security Controls 

9.4.1. Task 1 – Where Security is Vulnerable to Penetration 

9.4.1.1. Accidental versus Intentional Losses 

9.4.2. Task 2 – Building a Penetration Point Matrix 

9.4.2.1. Controlling People by Controlling Activities 

9.4.2.2. Selecting Computer Security Activities 

9.4.2.3. Controlling Business Transactions 

9.4.2.4. Characteristics of Security Penetration 

9.4.3. Task 3 – Assess Security Awareness Training 

9.4.3.1. Step 1 – Create a Security Awareness Policy 

9.4.3.2. Step 2 – Develop a Security Awareness Strategy 

9.4.3.3. Step 3 – Assign the Roles for Security Awareness 

9.4.4. Task 4 – Understand the Attributes of an Effective Security Control 

9.4.5. Task 5 – Selecting Techniques to Test Security 

9.4.5.1. Step 1 – Understand Security Testing Techniques 

9.4.5.2. Step 2 – Select Security Testing Techniques Based on the Strengths and Weaknesses of Those Techniques 

9.4.5.3. Step 3 – Determine the Frequency of Use of Security Testing Techniques Based on the System Category 


[Testing New Technologies] 

10.1. Risks Associated with New Technology 

10.2. Newer IT Technologies that Impact Software Testing 

10.2.1. Web-Based Applications 

10.2.2. Distributed Application Architecture 

10.2.2.1. Traditional Client-Server Systems 

10.2.2.2. Thin- versus Thick-Client Systems 

10.2.3. Wireless Technologies 

10.2.3.1. Important Issues for Wireless 

10.2.4. New Application Business Models 

10.2.4.1. e-Commerce 

10.2.4.2. e-Business 

10.2.5. New Communication Methods 

10.2.5.1. Wireless Applications 

10.2.6. New Testing Tools 

10.2.6.1. Test Automation 

10.3. Testing the Effectiveness of Integrating New Technology 

10.3.1. Determine the Process Maturity Level of the Technology 

10.3.1.1. Level 1 – People-Dependent Technology 

10.3.1.2. Level 2 – Use Description-Dependent Technology Processes 

10.3.1.3. Level 3 – Use of Technology 

10.3.1.4. Level 4 – Quantitatively Measured Technology 

10.3.1.5. Level 5 – Optimized Use of Technology 

10.3.2. Test the Controls over Implementing the New Technology 

10.3.2.1. Test Actual Performance versus Stated Performance 

10.3.2.2. Test the Adequacy of the Current Processes to Control the Technology 

10.3.3. Test the Adequacy of Staff Skills to Use the Technology 


Appendix A Vocabulary 

Appendix B References 

Appendix C How to Take the CSTE Examination 

C.1. CSTE Examination Overview 

C.1.1. Software Testing Theory 

C.1.2. Software Testing Practice 

C.2. Guidelines to Answer Questions 

C.3. Sample CSTE Examination 

C.3.1. Part 1 and Part 3 Multiple-Choice Questions 

C.3.2. Part 1 and Part 3 Multiple-Choice Answers 

C.3.3. Part 2 and Part 4 Essay Questions and Answers 

C.3.3.1. Part 2 – Software Testing Theory Essay Questions 

C.3.3.2. Part 2 – Software Testing Theory Essay Answers 

C.3.3.3. Part 4 – Software Testing Practice Essay Questions 

C.3.3.4. Part 4 – Software Testing Practice Essay Answers 


[Download: CBOK2006]


