QA Services

Software Quality Assurance (SQA) at InSys is intended to prevent errors and defects. The SQA activities are part of the entire software development process and are integrated closely into it. The Defect Prevention program at InSys is concerned with:

Software Testing Services

Software Testing at InSys is oriented toward detecting defects in the software. Testing is concerned with operating a system or application under controlled conditions and evaluating the results.

Understanding Business Requirements (Customer Requirements)

InSys's consulting experience has demonstrated the importance of accurately capturing business and project-specific requirements. Our consultants work closely with the customer to understand the testing requirements, the levels of testing to be carried out, and the existing standards and test procedures. Technical documentation (requirements specifications, change requests, design documents, installation manuals, user manuals, etc.) is also reviewed and understood by InSys's consultants. After reviewing the customer requirements and standards, our consultants suggest improvements and/or enhancements. Once these are approved and signed off by the customer, they enter the configuration management process (appropriate version control tools are used). These standards and templates are enforced by SQA during all levels of testing.

Different Levels of Testing:

InSys has expertise in all of the testing levels mentioned below. At each test level, the results are documented, reviewed, and signed off to ensure that quality testing is carried out according to Configuration Management (CM) procedures.

Each level of testing is considered either black-box or white-box testing. A description of the various types of testing performed follows:

Black-box testing: This is not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White-box testing: This is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.
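
To illustrate the distinction, a minimal Python sketch follows; the apply_discount function and its discount rule are hypothetical, and the tests can be run with pytest or called directly.

# Hypothetical function under test: orders over 100 receive a 10% discount.
def apply_discount(total):
    if total > 100:
        return round(total * 0.9, 2)
    return total

# Black-box test: derived purely from the stated requirement,
# with no knowledge of how apply_discount is written.
def test_discount_applied_above_threshold():
    assert apply_discount(200) == 180.0

# White-box tests: written to cover both branches of the if statement.
def test_true_branch():
    assert apply_discount(150) == 135.0

def test_false_branch():
    assert apply_discount(50) == 50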

Unit Testing

Unit Testing is the first level of dynamic testing and is the responsibility first of the developers and then of the testers. However, when third-party testing is undertaken by InSys, unit-level testing is carried out by the testing team.
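
As a minimal sketch of a developer-written unit test in Python's unittest style (the order_total function is a hypothetical unit under test):

import unittest

# Hypothetical unit under test: a simple order-total calculator.
def order_total(prices, tax_rate=0.0):
    return round(sum(prices) * (1 + tax_rate), 2)

class OrderTotalTest(unittest.TestCase):
    def test_subtotal_without_tax(self):
        self.assertEqual(order_total([10.0, 20.0]), 30.0)

    def test_total_with_tax(self):
        self.assertEqual(order_total([10.0, 20.0], tax_rate=0.1), 33.0)

    def test_empty_order(self):
        self.assertEqual(order_total([]), 0.0)

if __name__ == "__main__":
    unittest.main()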

Parallel/Audit Testing

In this type of testing, the user reconciles the output of the new system with the output of the current system to verify that the new system performs the operations correctly.
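
A rough sketch of this reconciliation idea in Python; the two payroll calculations and the tolerance are hypothetical stand-ins for the current and new systems:

# Hypothetical stand-ins for the current (legacy) and new calculations.
def legacy_payroll(hours, rate):
    return hours * rate

def new_payroll(hours, rate):
    return round(hours * rate, 2)

def reconcile(records, tolerance=0.01):
    """Run both systems over the same inputs and report any mismatches."""
    mismatches = []
    for hours, rate in records:
        old, new = legacy_payroll(hours, rate), new_payroll(hours, rate)
        if abs(old - new) > tolerance:
            mismatches.append({"input": (hours, rate), "old": old, "new": new})
    return mismatches

if __name__ == "__main__":
    sample = [(40, 12.50), (37.5, 22.00), (10, 9.99)]
    print(reconcile(sample) or "All outputs reconcile within tolerance")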

Functional Testing

This is a black-box type of testing geared to the functional requirements of an application. The testing team performs this type of testing.

Usability Testing

This testing is performed to check the 'user-friendliness' of the application. This is clearly subjective and will depend on the targeted end user or customer. User interviews, surveys, video recordings of user sessions, and other techniques may be used to understand the perspective of the end user.

Integration Testing

Upon completion of unit testing, integration testing (which is basically white-box testing) commences. The purpose is to ensure that distinct components of the application still work in accordance with customer requirements. Test sets will be developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the Test Team. Integration testing is termed complete when actual results and expected results are either in line or the differences are explainable/acceptable based on client input.

Integration testing follows various strategies. For example:

- Business process-based integration tests examine all the system components affected by a particular business process. For instance, a test can cover the processing of a customer order from acquisition to registration of the order, to delivery and payment. Additional business processes are incorporated into the test until all system components or applications have been sufficiently tested.

- Test objective-based integration tests group components around a test objective. For example, a test objective might be the integration of system components that use a common interface; the tests would then be defined around that interface.
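
For illustration, a minimal Python sketch of an interface-based integration test; the repository and pricing-service components are invented for this example:

# Two hypothetical components: a price repository and a pricing service
# that depends on it through a narrow interface (get_price).
class InMemoryPriceRepository:
    def __init__(self, prices):
        self._prices = prices

    def get_price(self, sku):
        return self._prices[sku]

class PricingService:
    def __init__(self, repository):
        self._repository = repository

    def quote(self, sku, quantity):
        return self._repository.get_price(sku) * quantity

# Integration test: wires the real components together and exercises the
# interface between them, rather than testing either one in isolation.
def test_quote_uses_repository_prices():
    service = PricingService(InMemoryPriceRepository({"A100": 2.50}))
    assert service.quote("A100", 4) == 10.0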

System Testing

Upon completion of integration testing, the Test Team will begin system testing. During system testing, which is black-box testing, the complete system is configured in a controlled environment to validate its accuracy and completeness in performing the functions as designed. System testing simulates the production environment, and all of the functions of the system that will be required in production are tested. The Test Team carries out the system test. Prior to the system test, the unit and integration test results are reviewed by SQA to ensure all problems have been resolved. System testing is deemed complete when actual results and expected results are either in line or the differences are explainable/acceptable based on client input.

End-to-End Testing

Similar to System Testing, this type of testing involves testing a complete application in an environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems, as appropriate.

Regression Testing

The objective of Regression Testing is to ensure that the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not caused unintended modifications in behavior. Expected results from the baseline are compared with the results of the software being regression tested. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
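
A minimal sketch of the baseline-comparison idea in Python, assuming a hypothetical report function and a JSON file used to store the baselined results:

import json
import os

# Hypothetical function whose output is under regression test.
def report_totals(orders):
    return {customer: sum(amounts) for customer, amounts in orders.items()}

def run_regression(baseline_path, current_output):
    """Compare current output with the stored baseline and list discrepancies."""
    if not os.path.exists(baseline_path):
        # First run: capture the baseline for future comparisons.
        with open(baseline_path, "w") as f:
            json.dump(current_output, f)
        return {}
    with open(baseline_path) as f:
        baseline = json.load(f)
    return {key: (baseline.get(key), current_output.get(key))
            for key in set(baseline) | set(current_output)
            if baseline.get(key) != current_output.get(key)}

if __name__ == "__main__":
    orders = {"acme": [100.0, 250.0], "globex": [75.5]}
    diffs = run_regression("baseline_totals.json", report_totals(orders))
    print(diffs or "No regressions: output matches the baseline")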

Sanity Testing

Sanity Testing will be performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It will normally include a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Performance Testing

Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing will verify the load, volume, and response times as defined by requirements.

Load Testing

Load Testing is concerned with testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
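
A rough sketch of a stepped load test in Python; timed_request is a stand-in for a real call to the system under test (for example, an HTTP request):

import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

# Stand-in for a real request to the system under test; in practice this
# would issue an HTTP call and measure the time until the response arrives.
def timed_request():
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def run_load_step(concurrent_users, requests_per_user=10):
    """Fire requests from N simulated users and return the mean response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(timed_request)
                   for _ in range(concurrent_users * requests_per_user)]
        return mean(f.result() for f in futures)

if __name__ == "__main__":
    # Step the load up and watch where response times start to degrade.
    for users in (1, 10, 50):
        print(users, "users -> mean response time:",
              round(run_load_step(users), 4), "seconds")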

Installation Testing

Installation testing is concerned with verifying the full, partial, or upgrade install/uninstall processes. The installation test for a release will be conducted with the objective of demonstrating production readiness. This test is conducted after the application has been migrated to the client's site. It will encompass the inventory of configuration items (performed by the application's System Administration) and evaluation of data readiness, as well as dynamic tests focused on basic system functionality.

Security/Penetration Testing

Security Testing verifies how well the system protects itself against unauthorized internal or external access, willful damage, etc. This type of testing may require sophisticated testing tools and methods.

Recovery/Error Testing

This testing is carried out to check how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Compatibility Testing

Compatibility Testing is concerned with verifying how well the software performs in a particular hardware, software, operating system, or network environment.

Comparison Testing

This testing compares the software's strengths and weaknesses with those of competing products.

Acceptance Testing

Acceptance testing, which is black-box testing, provides the client with the opportunity to verify the system's functionality and usability prior to the system being moved to production. The acceptance test is usually the responsibility of the client; however, it is conducted with full support from the testing team. The Test Team works with the client to develop the acceptance criteria.

InSys also performs user acceptance testing and product acceptance on behalf of the customer. This is a critical activity that involves a detailed test management process, described below.

Testing Process Model


Testing Process Details

Step 1 - Create Test Strategy and Test Plans
Inputs for this process:
a. A description of the required hardware and software components, including test tools (test environment, test tool data).

b. A description of roles and responsibilities of the resources required for the testing activities and schedule constraints (staff, schedules).

c. Testing methodology (standards), deliverables, acceptance criteria

d. Quality goals of the testing process

e. Functional and technical requirements of the application (Requirements Documents)

f. Change Requests, Technical and Functional Design.

g. List of requirements that cannot be provided (System Limitations)

Outputs for this process:
a. An approved and signed-off test strategy document, test plan, test cases.

b. Testing issues requiring resolution (usually requires the coordination of client project management).

Process:
a. The test strategy is a formal description of how a system will be tested.

b. A test strategy will be developed for all levels of testing, as required.

c. The Test Team will analyze the requirements, and create the test strategy and test plan.

Step 2 - Create Test Scripts and Data
Inputs for this process:
a. Approved Test Strategy Document.

b. Automated test ware and previously developed scripts, if applicable (Test Tools).

c. Test document problems uncovered as a result of testing (Test Document Problems).

d. Understanding of software complexity and module path coverage derived from General and Detailed Design documents (Software Design, Code, Complexity Data).

Outputs for this process:
a. Problems with the design fed back to the developers (Software Design, Code Issues).

b. Approved test scenarios, conditions and scripts (Test Design, Cases, Scripts).

c. Test data.

Process:
a. Test scenarios and cases will be prepared by reviewing the functional requirements of the release and preparing logical groups of business functions that can be further broken into test scripts.

b. Test scenarios will define test conditions, data to be used for testing, and expected results (database updates, file outputs, report results, etc.). Test scenarios will be designed to represent both typical and unusual situations that may occur in the application.

c. The project developers will define the unit test requirements and unit test scenarios/cases. The developer will also be responsible for executing the unit test cases, prior to the start of integration and system testing.

d. Test scenarios/cases for Integration and System tests will be developed by the Test Team with assistance from developers and clients. Acceptance test cases will be developed by the client with help from the project and Test Team.

e. Test scenarios will be executed through the use of test scripts. Scripts will define a series of steps necessary to perform one or more test scenarios. A test script usually represents a transaction or process that can occur during normal system operations. Test scripts will include the specific data that will be used for testing the process or transaction. Test scripts will cover multiple test scenarios and will include run/execution/cycle information. Test scripts will be mapped back to the requirements and traceability matrices to ensure that each test is within scope (a minimal sketch of such a mapping follows this list).

f. Test data will be captured and baselined, prior to testing. This data will serve as the foundation for unit and system testing and will be used to exercise system functionality in a controlled environment. Some output data will also be baselined for future comparisons. Baselined data will be used to support future application maintenance via regression testing.

g. A pre-test meeting will be held to assess the "readiness" of the application, and the environment and data to be tested. A test readiness document will be created to indicate the status of the entrance criteria of the release.
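
As noted in item e, test scripts are mapped back to requirements. A minimal Python sketch of such a traceability check follows; the requirement IDs and script names are purely illustrative:

# Hypothetical traceability matrix: requirement IDs mapped to the
# test scripts that exercise them.
traceability = {
    "REQ-001": ["TS-LOGIN-01", "TS-LOGIN-02"],
    "REQ-002": ["TS-ORDER-01"],
    "REQ-003": [],  # no script yet: a coverage gap
}

def coverage_gaps(matrix):
    """Requirements that no test script traces back to."""
    return [req for req, scripts in matrix.items() if not scripts]

def untraced_scripts(matrix, planned_scripts):
    """Planned scripts not traced to any requirement (potentially out of scope)."""
    traced = {script for scripts in matrix.values() for script in scripts}
    return sorted(set(planned_scripts) - traced)

if __name__ == "__main__":
    print("Uncovered requirements:", coverage_gaps(traceability))
    print("Untraced scripts:",
          untraced_scripts(traceability, ["TS-LOGIN-01", "TS-MISC-99"]))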

Step 3 - Execute Tests
Inputs for this process:
a. Approved test documents (Test Plan, Cases, Procedures).

b. Automated testware, if applicable and developed scripts (Test Tools).

c. Changes to the design (Change Requests).

d. Test data.

e. Availability of the test teams.

f. General and Detailed Design Documents (Requirements, Software Design).

g. A complete development environment that has been migrated to the test environment (Unit Tested Code) via the Configuration Manager.

h. Test Readiness Document.

Outputs for this process:
a. Log and summary of the test results (Test Report).

b. Test document problems uncovered as a result of testing (Test Document Problems).

c. Approved and signed-off with revised testing deliverables (Updated Deliverables).

d. Problems with the design fed back to the developers and clients (Requirements, Design, Code Issues).

e. Formal record of test incidents (Problem Tracking - PT).

f. Baselined package ready for migration to the next level (Tested Source and Object Code).

Process:
a. Checkpoint meetings will be held throughout the test execution phase, daily if required, to address and discuss testing issues, status, and activities.

b. Execution of tests is completed by following the test documents in a methodical manner. As each package of test procedures is performed, an entry is recorded in a test log.

c. Test results will be evaluated by the appropriate Test Team leaders to determine whether the expected results were obtained. All discrepancies/anomalies will be logged, discussed with the customer representative/Project Manager, and documented for further investigation and resolution. (Each client may have a different process for logging and reporting bugs/defects uncovered during testing.) Pass/fail criteria will be used to determine the severity of a problem, and results will be recorded in a test summary report.

d. The severity of a problem found during system testing will be defined in accordance with the customer's risk assessment and recorded in the customer's selected tracking tool.

e. Proposed fixes will be delivered to the testing environment based on the severity of the problem. Fixes will be regression tested, and fixes that pass will be migrated to the new baseline. Following completion of the test, members of the Test Team will prepare a summary report. The summary report will be reviewed by the customer's Project Manager, Software Quality Assurance (SQA), and/or the Test Team Lead.

f. After a particular level of testing has been certified, it will be the responsibility of the Configuration Manager (or the configuration controller) to coordinate the migration of the release software components to the next test level as documented in the Configuration Management Plan. The software will only be migrated to the production environment after the client's formal acceptance.

g. The Test Team will review test document problems identified during testing and update documents where appropriate.

About InSys Technologies

InSys Technologies Inc. helps businesses and institutions meet the complex challenges of the technology revolution. We provide a global source of services across the entire Information Technology (IT) spectrum.

Our founding team has more than 35 years of collective experience in providing IT staffing services, quality software solutions, professional services, and BPO and RPO services to customers ranging from small to large businesses.
