Monday, October 22, 2007


Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, and security, but it can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behavior of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following a routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the tester's probing. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is also used to connote the dynamic analysis of the product, that is, putting the product through its paces. Reviews, walkthroughs, or inspections are therefore sometimes referred to as "static testing", whereas actually running the program with a given set of test cases in a given development stage is referred to as "dynamic testing"; the terminology emphasizes that formal review processes form part of the overall testing scope.

History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.
Until 1956 was the debugging-oriented period, in which testing was often associated with debugging: there was no clear difference between the two. From 1957 to 1978 came the demonstration-oriented period, in which debugging and testing were distinguished; the goal of testing was to show that the software satisfied its requirements. The time from 1979 to 1982 is described as the destruction-oriented period, in which the goal was to find errors. 1983 to 1987 is classified as the evaluation-oriented period: the intention was to provide product evaluation and measure quality throughout the software lifecycle. From 1988 onward it is seen as the prevention-oriented period, in which tests were meant to demonstrate that software satisfies its specification, to detect faults, and to prevent faults.
Dr. Gelperin chaired the IEEE 829-1989 (Test Documentation Standard) effort, with Dr. Hetzel writing the book The Complete Guide to Software Testing. Both works were pivotal to today's testing culture and remain a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High Impact Inspection Technology, which builds upon traditional inspections but utilizes a test-driven additive.

White-box, black-box, and gray-box testing
White box and black box testing are terms used to describe the point of view a test engineer takes when designing test cases. Black box testing assumes an external view of the test object; one inputs data and one sees only outputs from the test object. White box testing provides an internal view of the test object and its processes.
In recent years the term gray box testing has come into common usage. The typical gray box tester is permitted to set up or manipulate the testing environment, such as by seeding a database, and can view the state of the product after his actions, such as performing a SQL query on the database to be certain of the values of columns.
Gray box testing is used almost exclusively by client-server testers or others who use a database as a repository of information, but can also apply to a tester who has to manipulate input or configuration files directly, or perform testing like SQL injection. It can also be used by testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results. For example, testing a data warehouse implementation involves loading the target database with information, and verifying the correctness of data population and loading of data into the correct tables.
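To make the difference in viewpoint concrete, the sketch below (Python, with a made-up ShoppingCart class standing in for the test object) exercises the same unit in both styles: the black-box test drives only the public interface and checks outputs, while the white-box test also inspects internal state and deliberately hits a specific branch. A gray-box test would sit in between, for example seeding the cart's backing store directly before driving the public interface.

    # Illustrative sketch: black-box vs. white-box viewpoints on the same unit.
    # ShoppingCart is invented for the example so that the tests can run as-is.
    import unittest

    class ShoppingCart:
        def __init__(self):
            self._items = []          # internal state, visible only to white-box tests

        def add_item(self, name, price):
            if price < 0:
                raise ValueError("price must be non-negative")
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    class BlackBoxTest(unittest.TestCase):
        # External view: drive the public interface, observe only the outputs.
        def test_total_reflects_added_items(self):
            cart = ShoppingCart()
            cart.add_item("book", 10.0)
            cart.add_item("pen", 2.5)
            self.assertEqual(cart.total(), 12.5)

    class WhiteBoxTest(unittest.TestCase):
        # Internal view: the tester knows the implementation, so the test checks
        # internal state and the negative-price guard branch directly.
        def test_internal_state_and_error_branch(self):
            cart = ShoppingCart()
            cart.add_item("book", 10.0)
            self.assertEqual(cart._items, [("book", 10.0)])
            with self.assertRaises(ValueError):
                cart.add_item("gift", -1.0)

    if __name__ == "__main__":
        unittest.main()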

Verification and validation
Software testing is used in association with verification and validation (V&V). Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.

Verification: Are we doing the job right?
Validation: Have we done the right job?

Levels of testing
Although both alpha and beta testing (described below) are referred to as testing, they are in fact forms of use immersion. The rigors that are applied are often unsystematic, and many of the basic tenets of the testing process are not used. The alpha and beta periods do, however, provide insight into environmental and utilization conditions that can impact the software.
After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications have not unintentionally broken previously working functionality. Regression testing can be performed at any or all of the test levels described below. These regression tests are often automated; a minimal sketch of such a test appears at the end of this section.

Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented.
Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
System testing tests an integrated system to verify that it meets its requirements.
System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.
Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed after testing is complete and before the implementation (deployment) phase.

  • Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
  • Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
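As a minimal sketch of an automated regression test at the unit level (apply_discount is a hypothetical function invented for the illustration), the tests below are written once, kept with the code, and re-run after every modification so that previously passing behavior cannot regress unnoticed:

    # Minimal sketch of an automated regression test at the unit level.
    # apply_discount is hypothetical; the tests are re-run after every change.
    import unittest

    def apply_discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class ApplyDiscountRegressionTests(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_percent_is_identity(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()   # typically triggered automatically, e.g. by a build server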

Test cases, suites, scripts, and scenarios
A test case is a software testing document that consists of an event, action, input, output, expected result, and actual result. Clinically defined (IEEE 829-1998), a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and the expected results in more detail. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated; a structured sketch of these fields appears at the end of this subsection. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
The term test script refers to the combination of a test case, test procedure, and test data. Initially the term was derived from the work products created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might more correctly be called a test specification. If the sequence is specified, it can be called a test script, scenario, or procedure.
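One way to picture these documents is as structured records. The sketch below is illustrative only: the field names loosely follow the optional fields listed above, and the configuration string and values are invented; in practice the same records might live in a spreadsheet or a database table.

    # Illustrative sketch: a test case as a structured record, plus a tiny suite.
    # Field names and values are invented, loosely following the fields above.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TestCaseRecord:
        case_id: str                         # optional identifier, e.g. "TC-001"
        requirement: str                     # related requirement(s)
        steps: List[str]                     # or a reference to a shared test procedure
        input_data: str
        expected_result: str
        actual_result: Optional[str] = None  # filled in during test execution
        automatable: bool = False
        automated: bool = False

    # A test suite is, at its simplest, a collection of test cases together with
    # the system configuration used during testing.
    suite = {
        "configuration": "build 1.4.2, Windows XP SP2, SQL Server 2005",
        "cases": [
            TestCaseRecord(
                case_id="TC-001",
                requirement="REQ-12",
                steps=["log in", "open the report page", "export the report"],
                input_data="valid user credentials",
                expected_result="report downloads as a PDF file",
            ),
        ],
    }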

A sample testing cycle
Although testing varies between organizations, there is a cycle to testing:

Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle.
Design Analysis: During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests work.
Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.
Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Retesting the Defects

Not all errors or defects reported must be fixed by a software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user. There are yet other defects that may be rejected by the development team (of course, with due reason) if they deem them not to be genuine defects.

Code coverage
Main article: Code coverage

Controversy
There is considerable controversy among testing writers and consultants about what constitutes responsible software testing. Members of the "context-driven" school of testing believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation. In addition, prominent members of the community consider much of the writing about software testing to be doctrine, mythology, and folklore. Some might contend that this belief directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the Food and Drug Administration that promote them. The context-driven school's retort is that Lessons Learned in Software Testing includes one lesson supporting the use of IEEE 829 and another opposing it; that not all software testing occurs in a regulated environment, and that practices appropriate for such environments would be ruinously expensive, unnecessary, and inappropriate for other contexts; and that in any case the FDA generally promotes the principle of the least burdensome approach.
Some of the major controversies include:

Agile vs. traditional
Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner. Instead of assuming that testers have full access to source code and complete specifications, these writers, including Kaner and James Bach, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) is popular mainly in commercial circles, whereas the CMM was embraced by government and military software providers.
However, saying that "maturity models" like the CMM gained ground against or in opposition to agile testing may not be right: the agile movement is a "way of working", while the CMM is a process improvement idea.
But another point of view must be considered: the operational culture of an organization. While it may be true that testers must be able to work in a world of uncertainty, it is also true that their flexibility must have direction. In many cases test cultures are self-directed, and as a result unproductive results can ensue. Furthermore, providing positive evidence of defects may either indicate that you have found the tip of a much larger problem, or that you have exhausted all possibilities. A framework is a test of testing: it provides a boundary against which the capacity of our work can be measured (validated). Both sides have argued, and will continue to argue, the virtues of their work. The proof, however, is in each and every assessment of delivery quality. It does little good to test systematically if you are too narrowly focused. On the other hand, finding a bunch of errors is not an indicator that agile methods were the driving force; you may simply have stumbled upon an obviously poor piece of work.

Exploratory vs. scripted
Exploratory testing means simultaneous test design and test execution with an emphasis on learning. Scripted testing means that learning and test design happen prior to test execution, and quite often the learning has to be done again during test execution. Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood. Some writers consider it a primary and essential practice. Structured exploratory testing is a compromise when the testers are familiar with the software. A vague test plan, known as a test charter, is written up, describing what functionalities need to be tested but not how, allowing the individual testers to choose the method and steps of testing.
There are two main disadvantages associated with a primarily exploratory testing approach. The first is that there is no opportunity to prevent defects, which can happen when the designing of tests in advance serves as a form of structured static testing that often reveals problems in system requirements and design. The second is that, even with test charters, demonstrating test coverage and achieving repeatability of tests using a purely exploratory testing approach is difficult. For this reason, a blended approach of scripted and exploratory testing is often used to reap the benefits while mitigating each approach's disadvantages.

Manual vs. automated
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which a problem in the software can be recognized). Such tools have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software. The success of automated software testing depends on complete and comprehensive test planning. Software development strategies such as test-driven development are highly compatible with the idea of devoting a large part of an organization's testing resources to automated testing. Many large software organizations perform automated testing. Some have developed their own automated testing environments specifically for internal development, and not for resale.
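As an illustration of an automated test oracle (a sketch only; encode and decode are stand-ins invented for the example), the round-trip property "decoding an encoded value returns the original" lets a script recognize failures across thousands of generated inputs without human judgement, which is what makes large automated or repeated runs practical:

    # Sketch of an automated test oracle: a round-trip property that a script
    # can check by itself. encode/decode are invented stand-ins for the example.
    import random
    import unittest

    def encode(text):
        return text.encode("utf-8").hex()

    def decode(blob):
        return bytes.fromhex(blob).decode("utf-8")

    class RoundTripOracleTest(unittest.TestCase):
        def test_round_trip_on_many_generated_inputs(self):
            rng = random.Random(42)                 # fixed seed keeps the run repeatable
            alphabet = "abcdefghijklmnopqrstuvwxyz äöü€"
            for _ in range(1000):                   # cheap to repeat, which also helps
                length = rng.randint(0, 30)         # when hunting intermittent errors
                text = "".join(rng.choice(alphabet) for _ in range(length))
                self.assertEqual(decode(encode(text)), text)   # the oracle

    if __name__ == "__main__":
        unittest.main()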

Software design vs. software implementation
Software testers should not be limited to testing the software implementation; they should also test the software design. With this assumption, the role and involvement of testers will change dramatically, and the test cycle will change too. To test the software design, testers review requirement and design specifications together with the designer and programmer, which helps to identify bugs earlier.

Certification
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software. No certification is based on a widely accepted body of knowledge. No certification board decertifies individuals.
Certifications can be grouped into two categories: exam-based and education-based. Exam-based certifications require passing an exam, which can also be prepared for by self-study, for example ISTQB or QAI certifications. Education-based certifications are instructor-led sessions in which each course has to be passed, for example those of the IIST (International Institute for Software Testing).

Testing certifications

CSTE offered by the Quality Assurance Institute (QAI)
CSTP offered by the International Institute for Software Testing
ISEB offered by the Information Systems Examinations Board
ISTQB offered by the International Software Testing Qualifications Board

Quality assurance certifications
CSQE offered by the American Society for Quality (ASQ)
CSQA offered by the Quality Assurance Institute (QAI)

Who watches the watchmen?
One principle in software testing is summed up by the classical Latin question posed by Juvenal: Quis custodiet ipsos custodes? (Who watches the watchmen?). It is alternatively referred to, informally, as the "Heisenbug" concept (a common misconception that confuses Heisenberg's uncertainty principle with the observer effect). The idea is that any form of observation is also an interaction, and that the act of testing can itself affect that which is being tested.
In practical terms, the test engineer is testing software (and sometimes hardware or firmware) with other software (and hardware and firmware). The process can fail in ways that are not the result of defects in the target, but rather result from defects in (or indeed intended features of) the testing tool.
There are metrics being developed to measure the effectiveness of testing. One method is analyzing code coverage (this is highly controversial): everyone can agree on which areas are not being covered at all and try to improve coverage of those areas.
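A hedged sketch of such a measurement in Python is shown below; it assumes the third-party coverage package is installed, and triangle_type is a made-up function. Because only two of its three branches are exercised here, the report flags the unexecuted lines, which is exactly the part everyone can agree needs more tests. In practice an entire test suite, not two assertions, would be run under the tool.

    # Sketch of measuring code coverage with the third-party "coverage" package
    # (assumed installed). triangle_type is invented for the example.
    import coverage

    def triangle_type(a, b, c):
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    cov = coverage.Coverage()
    cov.start()

    # Exercise only part of the function: the "scalene" branch is never reached.
    assert triangle_type(2, 2, 2) == "equilateral"
    assert triangle_type(2, 2, 3) == "isosceles"

    cov.stop()
    cov.report(show_missing=True)   # lists the lines of this file that were not
                                    # executed while measurement was active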
Bugs can also be placed into the code on purpose, and the number of bugs that have not been found can be predicted from the percentage of intentionally placed bugs that were found. The problem is that this assumes the intentional bugs are the same type of bug as the unintentional ones.
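The underlying arithmetic is a capture-recapture style estimate, sketched below with invented numbers: if testers rediscover 45 of 50 seeded bugs (a 90% detection rate) while finding 180 real bugs, the estimated total of real bugs is 180 / 0.9 = 200, so roughly 20 are predicted to remain, subject to the caveat above.

    # Capture-recapture style estimate for defect seeding (numbers are invented).
    # Assumes seeded bugs are as easy to find as real ones, as cautioned above.
    seeded_total = 50          # bugs intentionally planted
    seeded_found = 45          # planted bugs the testers rediscovered
    real_found = 180           # genuine (unseeded) bugs found so far

    detection_rate = seeded_found / seeded_total          # 0.9
    estimated_real_total = real_found / detection_rate    # 200.0
    estimated_remaining = estimated_real_total - real_found

    print("estimated real bugs remaining: %d" % round(estimated_remaining))   # -> 20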
Finally, there is the analysis of historical find rates. By measuring how many bugs are found and comparing them to predicted numbers (based on past experience with similar projects), certain assumptions regarding the effectiveness of testing can be made. While not an absolute measurement of quality, if a project is halfway complete and no defects have been found, then changes may be needed to the procedures being employed by QA.

Roles in software testing
Software testing can be done by software testers. Until the 1950s the term software tester was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing (see D. Gelperin and W.C. Hetzel), different roles have been established: test lead/manager, tester, test designer, test automater/automation developer, and test administrator.
Participants of a testing team:

Tester
Developer
Business Analyst
Customer
Information Service Management
Senior Organization Management
Quality team

References
Boris Beizer: Software Testing Techniques. Second Edition, International Thomson Computer Press, 1990, ISBN 1-85032-880-3.
Elfriede Dustin: Effective Software Testing. Addison Wesley, 2002, ISBN 0-20179-429-2.
Elfriede Dustin, et al.: Automated Software Testing. Addison Wesley, 1999, ISBN 0-20143-287-0.
Robert V. Binder: Testing Object-Oriented Systems: Objects, Patterns, and Tools. Addison-Wesley Professional, 1999, ISBN 0-201-80938-9.
Rex Black: Managing the Testing Process. Second Edition, John Wiley and Sons, 2002, ISBN 0-471-22398-0.
Mark Fewster, Dorothy Graham: Software Test Automation. Addison Wesley, 1999, ISBN 0-201-33140-3.
Cem Kaner, James Bach, Bret Pettichord: Lessons Learned in Software Testing: A Context-Driven Approach. John Wiley & Sons, 2001, ISBN 0-471-08112-4.
G. T. Laycock: The Theory and Practice of Specification Based Software Testing. PhD Thesis, Dept. of Computer Science, Sheffield University, UK, 1993.
Hung Nguyen, Robert Johnson, Michael Hackett: Testing Applications on the Web (2nd Edition): Test Planning for Mobile and Internet-Based Systems. ISBN 0-471-20100-6.
