
QA-2


Software Quality Assurance:
* The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built.
* Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards and providing the software project and other appropriate managers with the results of these reviews and audits.

Verification:
* Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications.
* The determination of consistency, correctness & completeness of a program at each stage.

Validation:
* Validation typically involves actual testing and takes place after verification activities are completed.
* The determination of correctness of a final program with respect to its requirements.

Software Life Cycle Models :
* Prototyping Model
* Waterfall Model – Sequential
* Spiral Model
* V Model - Sequential

What makes a good Software QA engineer?
* The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Testing:
* An examination of the behavior of a program by executing on sample data sets.
* Testing comprises a set of activities to detect defects in a produced material.
* To unearth & correct defects.
* To detect defects early & to reduce cost of defect fixing.
* To avoid user detecting problems.
* To ensure that product works as users expected it to.

Why Testing?
* To unearth and correct defects.
* To detect defects early and to reduce cost of defect fixing.
* To ensure that product works as user expected it to.
* To avoid user detecting problems.

Test Life Cycle
* Identify Test Candidates
* Test Plan
* Design Test Cases
* Execute Tests
* Evaluate Results
* Document Test Results
* Causal Analysis / Preparation of Validation Reports
* Regression Testing / Follow up on reported bugs.

Testing Techniques
* Black Box Testing
* White Box Testing
* Regression Testing
* These principles & techniques can be applied to any type of testing.

Black Box Testing
* Testing of a function without knowledge of the internal structure of the program.

White Box Testing
* Testing of a function with knowledge of the internal structure of the program.

Regression Testing
* To ensure that code changes have not had an adverse effect on other modules or on existing functions.
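As a minimal sketch of the idea above, a regression suite is simply the accumulated tests re-run after every change; the function and test cases here are hypothetical:

```python
# Hypothetical unit under regression test: format "Last, First" names.
def format_name(first, last):
    return f"{last}, {first}".strip(", ")

# Accumulated regression cases; the second was added when an earlier
# bug (empty last name produced ", Alan") was fixed.
REGRESSION_SUITE = [
    (("Ada", "Lovelace"), "Lovelace, Ada"),
    (("Alan", ""), "Alan"),
]

def run_regression():
    """Re-run every recorded case; return the list of failures."""
    return [(args, want, format_name(*args))
            for args, want in REGRESSION_SUITE
            if format_name(*args) != want]

print("failures:", run_regression())
```

Any code change that breaks a previously passing case shows up in the failure list, which is exactly the adverse effect regression testing is meant to catch.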

Functional Testing
* Study SRS
* Identify Unit Functions
* For each unit function
* - Take each input function
* - Identify Equivalence class
* - Form Test cases
* - Form Test cases for boundary values
* - Form Test cases for Error Guessing
* Form Unit function v/s Test cases, Cross Reference Matrix
* Find the coverage
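The equivalence-class and boundary-value steps above can be sketched for a numeric input; the valid range used here is purely illustrative:

```python
# Sketch: deriving test inputs for a hypothetical field that accepts
# integers in a closed valid range [lo, hi].
def derive_test_inputs(lo, hi):
    """Return one representative per equivalence class, plus boundary values."""
    equivalence = {
        "valid": (lo + hi) // 2,   # representative of the valid class
        "below": lo - 10,          # invalid class: too small
        "above": hi + 10,          # invalid class: too large
    }
    # Boundary-value analysis: values at and adjacent to each boundary.
    boundaries = [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]
    return equivalence, boundaries

eq, bv = derive_test_inputs(1, 100)
print(eq, bv)
```

Each derived input then becomes one row in the unit-function vs. test-case cross-reference matrix.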

Unit Testing:
* The most 'micro' scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers.
* Unit - smallest testable piece of software.
* A unit can be compiled/ assembled/ linked/ loaded; and put under a test harness.
* Unit testing is done to show that the unit does not satisfy its functional specification and/or that its implemented structure does not match the intended design structure.
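A minimal sketch of a unit under a test harness, using Python's standard unittest module; the function being tested is hypothetical:

```python
import unittest

# Hypothetical unit under test: a small, independently testable function.
def discount(price, percent):
    """Apply a percentage discount; reject percentages outside 0..100."""
    if not (0 <= percent <= 100):
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    # The harness exercises the unit against its functional specification.
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

The unit is "put under a test harness" in exactly the sense above: compiled/loaded in isolation and driven by the test runner rather than by the rest of the application.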

Integration Testing:
* Integration is a systematic approach to building the complete software structure specified in the design from unit-tested modules. Integration is performed in two ways, called Pre-test and Pro-test.
* Pre-test: the testing performed in the module development area. A Pre-test is required only if development is done in the module development area.

Alpha testing:
* Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing:
* Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers.

System Testing:
* A system is the big component.
* System testing is aimed at revealing bugs that cannot be attributed to a single component as such, but rather to inconsistencies between components or to planned interactions between components.
* Concern: issues, behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).

Volume Testing:
* The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.

Stress testing:
* This refers to testing system functionality while the system is under unusually heavy or peak load; it's similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires that you make some predictions about expected load levels of your Web site.

Usability testing:
* Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective into focus, the side towards the user.

Security testing:
* If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.

Test Plan:
* A Test Plan is a detailed project plan for testing, covering the scope of testing, the methodology to be used, the tasks to be performed, resources, schedules, risks, and dependencies. A Test Plan is developed prior to the implementation of a project to provide a well-defined and understood project roadmap.

Test Specification:
* A Test Specification defines exactly what tests will be performed and what their scope and objectives will be. A Test Specification is produced as the first step in implementing a Test Plan, prior to the onset of manual testing and/or automated test suite development. It provides a repeatable, comprehensive definition of a testing campaign.


What steps are needed to develop and run software tests?

The following are some of the steps to consider:


* Obtain requirements, functional design, and internal design specifications and other necessary documents.

* Obtain budget and schedule requirements. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)

* Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests.

* Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

* Determine test environment requirements (hardware, software, communications, etc.)

* Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

* Determine test input data requirements

* Identify tasks, those responsible for tasks, and labor requirements

* Set schedule estimates, timelines, milestones

* Determine input equivalence classes, boundary value analyses, error classes

* Prepare test plan document and have needed reviews/approvals

* Write test cases

* Have needed reviews/inspections/approvals of test cases

* Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

* Obtain and install software releases

* Perform tests

* Evaluate and report results

* Track problems/bugs and fixes

* Retest as needed

* Maintain and update test plans, test cases, test environment, and testware through life cycle

Bug Tracking

What's a 'test case'?

* A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.

* Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

What should be done after a bug is found?

* The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

* Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.

* Bug identifier (number, ID, etc.)

* Current bug status (e.g., 'Released for Retest', 'New', etc.)

* The application name or identifier and version

* The function, module, feature, object, screen, etc. where the bug occurred

* Environment specifics, system, platform, relevant hardware specifics

* Test case name/number/identifier

* One-line bug description

* Full bug description

* Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

* Names and/or descriptions of file/data/messages/etc. used in test

* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem

* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)

* Was the bug reproducible?

* Tester name

* Test date

* Bug reporting date

* Name of developer/group/organization the problem is assigned to

* Description of problem cause

* Description of fix

* Code section/file/module/class/method that was fixed

* Date of fix

* Application version that contains the fix

* Tester responsible for retest

* Retest date

* Retest results

* Regression testing requirements

* Tester responsible for regression tests

* Regression testing results

* A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
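The tracking fields listed above can be modeled as a simple record. This is an illustrative sketch: the field names and the 1-5 severity scale follow the list above, not any real tracker's schema:

```python
from dataclasses import dataclass, field

# Sketch of a bug-tracking record covering the core fields listed above.
@dataclass
class BugReport:
    bug_id: str                  # bug identifier (number, ID, etc.)
    status: str                  # e.g. 'New', 'Released for Retest'
    application: str
    version: str
    summary: str                 # one-line bug description
    description: str             # full bug description
    severity: int                # 1 (critical) .. 5 (low)
    reproducible: bool
    tester: str
    steps_to_reproduce: list = field(default_factory=list)
    assigned_to: str = ""        # developer/group the problem is assigned to
    fix_description: str = ""

bug = BugReport(
    bug_id="BUG-101", status="New", application="OrderEntry",
    version="2.3", summary="Cart total mis-rounds with 3 items",
    description="Cart total is off by 0.01 with three discounted items.",
    severity=2, reproducible=True, tester="asmith",
    steps_to_reproduce=["Add 3 discounted items", "Open cart"],
)
print(bug.bug_id, bug.status, bug.severity)
```

As the bug moves through retest and regression, a real system would update `status` and append retest and regression-test results.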

Why does software have bugs?

* Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).

* Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well engineered.

* Programming errors - programmers, like anyone else, can make mistakes.

* Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.

* Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.

* Egos - people prefer to say things like:

* 'no problem'

* 'piece of cake'

* 'I can whip that out in a few hours'

* 'it should be easy to update that old code'

* instead of:

* 'that adds a lot of complexity and we could end up making a lot of mistakes'

* 'we have no idea if we can do that; we'll wing it'

* 'I can't estimate how long it will take, until I take a close look at it'

* 'we can't figure out what that old spaghetti code did in the first place'

* If there are too many unrealistic 'no problem's', the result is bugs.

* Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read').

* Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

Black-box and white-box are test design methods. Black-box test design treats the system as a "black-box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.
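The contrast between the two design methods can be sketched on a single hypothetical unit: black-box cases come from the stated behavior alone, while white-box cases come from the code's branch structure:

```python
# Hypothetical unit: classify a triangle from three side lengths.
def classify(a, b, c):
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box cases: derived only from the specification, with no
# reference to how classify() is written.
assert classify(3, 4, 5) == "scalene"
assert classify(1, 1, 3) == "not a triangle"

# White-box cases: derived from the code's structure, one per branch,
# so that every 'if' above is exercised at least once.
assert classify(2, 2, 2) == "equilateral"
assert classify(2, 2, 3) == "isosceles"
print("all cases pass")
```

Once implemented, the four assertions look identical; as the text notes, the design method's influence is hard to see in the finished tests.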


Integration testing

is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.

Integration testing

takes as its input modules that have been checked out by unit testing, groups them in larger aggregates, applies tests defined in an Integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

Purpose

The purpose of Integration testing is to verify functional, performance and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using Black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, individual subsystems are exercised through their input interface. All test cases are constructed to test that all components within assemblages interact correctly, for example, across procedure calls or process activations.

The overall idea, is the "building block" approach in which verified assemblages are added to a verified base which is then used to support the Integration testing of further assemblages.
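The building-block approach can be sketched with two hypothetical unit-tested modules; the integration test exercises the interface between them rather than either unit in isolation:

```python
# Unit 1 (assumed already unit-tested): parse a price string into cents.
def parse_price(text):
    return int(round(float(text) * 100))

# Unit 2 (assumed already unit-tested): sum a sequence of cent values.
def total_cents(prices):
    return sum(prices)

# The aggregate under integration test: units combined via their interfaces.
def invoice_total(price_texts):
    return total_cents(parse_price(t) for t in price_texts)

# Integration test: does parse_price's output format actually match
# total_cents's expected input? Each unit passing alone does not prove this.
print(invoice_total(["12.50", "0.99"]))
```

Once this aggregate is verified, it becomes part of the verified base onto which further assemblages are integrated.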


Performance Testing

In software engineering, performance testing is testing that is performed to determine how fast some aspect of a system performs under a particular workload.

Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance or to establish throughput levels (and thresholds) for maintained acceptable response time.

In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use.

Technology

Performance testing technology employs one or more PCs to act as injectors – each emulating the presence or numbers of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load – starting with a small number of virtual users and increasing the number over a period to some maximum.

The test result shows how the performance varies with the load, given as number of users vs response time. Various tools, including Compuware Corporation's QACenter Performance Edition, are available to perform such tests. Tools in this category usually execute a suite of tests which will emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete – something that might be caused by inefficient database queries, etc.
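The ramp-up sequence described above can be sketched in miniature. This is a simulation: the "service" below is a stand-in for the host under test, not a real server, and real injectors would issue network requests and measure actual response times:

```python
# Stand-in for the system under test: pretend response time (seconds)
# grows linearly with concurrent load. A real test would measure this.
def service(load):
    return 0.05 + 0.002 * load

def ramp(max_users, step=10):
    """Ramp virtual users up to max_users, recording (users, response_time)."""
    return [(users, service(users))
            for users in range(step, max_users + 1, step)]

for users, resp in ramp(50):
    print(f"{users:3d} users -> {resp * 1000:.0f} ms")
```

The resulting (users, response time) pairs are exactly the users-vs-response-time curve the text describes; in a real run, outliers at particular load levels would prompt closer investigation.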

Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded – does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

Performance specifications

Performance testing is frequently not performed against a specification, i.e. no one will have expressed what is the maximum acceptable response time for a given population of users. However, performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the “weakest link” – there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools come provided with (or can have add-ons that provide) instrumentation that runs on the server and reports transaction times, database access times, network overhead, etc. which can be analysed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating. There is an apocryphal story of a company that spent a large amount optimising their software without having performed a proper analysis of the problem. They ended up rewriting the system’s ‘idle loop’, where they had found the system spent most of its time, but even having the most efficient idle loop in the world obviously didn’t improve overall performance one iota!

Performance testing almost invariably identifies that it is parts of the software (rather than hardware) that contribute most to delays in processing users’ requests.

Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks.

It is always helpful to have a statement of the likely peak numbers of users that might be expected to use the system at peak times. If there is also a statement of the maximum allowable 95th-percentile response time, then an injector configuration can be used to test whether the proposed system meets that specification.
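Checking such a percentile specification is straightforward once latencies are collected; the sample values and the 1-second limit here are illustrative:

```python
# Sketch: checking a 95th-percentile response-time specification
# against a sample of measured latencies (values are made up).
def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(n * p / 100)."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil without math import
    return ordered[int(rank) - 1]

latencies_ms = [120, 130, 140, 150, 160, 170, 180, 200, 450, 900]
p95 = percentile(latencies_ms, 95)
print("95th percentile:", p95, "ms")
# Hypothetical spec: 95% of requests must complete within 1 second.
assert p95 <= 1000
```

Note how the percentile (900 ms here) can sit far above the median; this is why specifications are usually written against a high percentile rather than the average.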

Tasks to undertake

Tasks to perform such a test would include:

* Analysis of the types of interaction that should be emulated, and production of scripts to perform those emulations.

* Decision whether to use internal or external resources to perform the tests.

* Set-up of a configuration of injectors/controller.

* Set-up of the test configuration (ideally hardware identical to the production platform), router configuration, a quiet network (so results are not upset by other users), and deployment of server instrumentation.

* Running the tests, probably repeatedly, in order to see whether any unaccounted-for factor might affect the results.

* Analysing the results: either pass/fail, or investigation of the critical path and recommendation of corrective action.

STRESS TESTING

Stress testing

is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing. See also: testing, software testing, performance testing.
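A toy sketch of the idea: push a bounded component past its normal capacity and observe how it fails. The "system" here is a stand-in (a bounded queue), not a real server:

```python
import queue

# Sketch: offer more load than a bounded component can accept and
# record the observed failure mode (requests rejected, not a crash).
def stress(capacity, offered_load):
    q = queue.Queue(maxsize=capacity)
    accepted = rejected = 0
    for _ in range(offered_load):
        try:
            q.put_nowait(object())
            accepted += 1
        except queue.Full:      # the breaking point under stress
            rejected += 1
    return accepted, rejected

print(stress(capacity=100, offered_load=250))
```

The point of the exercise is the failure behavior: rejecting excess load cleanly, as here, is a very different outcome from crashing or corrupting data.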

SECURITY TESTING

Application vulnerabilities leave your system open to attacks, downtime, data theft, data corruption, and application defacement. Security within an application or web service is crucial to avoid such vulnerabilities and new threats.

While automated tools can help to eliminate many generic security issues, the detection of application vulnerabilities requires independent evaluation of your specific application's features and functions by experts. An external security vulnerability review by Third Eye Testing will give you the best possible confidence that your application is as secure as possible.

Security Testing Techniques

* Vulnerability Scanning

* Network Scanning

* Password Cracking

* Log Reviews

* Virus Detection

* Penetration Testing

* File Integrity Checkers

* War Dialing

Test Cases, Suites, Scripts, and Scenarios
Black box testers usually write test cases for the majority of their testing activities. A test case is usually a single step, and its expected result, along with various additional pieces of information. It can occasionally be a series of steps but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results and who generated the results and the system configuration used to generate those results. These past results would usually be stored in a separate table.
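The fields described above can be modelled directly in a repository. A minimal sketch in Python (the field names are illustrative, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str        # the step to perform
    expected: str      # the expected result for this step
    actual: str = ""   # filled in during execution

@dataclass
class TestCase:
    case_id: str
    name: str
    description: str
    steps: list = field(default_factory=list)
    priority: str = "medium"
    automatable: bool = False
    status: str = "NOT RUN"

    def record(self, step_index, actual):
        """Record the actual result for a step and update pass/fail status
        from the results recorded so far."""
        self.steps[step_index].actual = actual
        self.status = "PASS" if all(
            s.actual == s.expected for s in self.steps) else "FAIL"
```

A spreadsheet or database table would carry the same columns; the point is that the expected and actual results live side by side so a pass/fail status can be derived.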

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.

Most white box testers write and use test scripts in unit, system, and regression testing. Test scripts should be written for the modules with the highest risk of failure and the highest impact if the risk becomes an issue. Most companies that use automated testing refer to the automation code itself as their test scripts.

A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. They can be as simple as a diagram for a testing environment or they could be a description written in prose. The ideal scenario test has five key characteristics. It is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. They are usually different from test cases in that test cases are single steps and scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests.

Scenario testing is similar to, but not the same as session-based testing, which is more closely related to exploratory testing, but the two concepts can be used in conjunction.




Test Case Template:

Test case ID | Test case name | Test case description | Test steps (step / expected / actual) | Test case status | Test status (P/F) | Test priority | Defect severity


Sample Test Case:

HOME PAGE:
test URL: www.qatest.co.in/rail

Preconditions: Open a web browser and enter the given URL in the address bar. The home page must be displayed. All test cases must be executed from this page.

Test case ID: Login01 – Validate Login
Description: Verify that the login name on the login page must be greater than 3 characters.
Steps:
  1. Enter a login name of fewer than 3 characters (say "a") plus a password, and click the Submit button.
     Expected: the error message “Login not less than 3 characters” must be displayed.
  2. Enter a login name of fewer than 3 characters (say "ab") plus a password, and click the Submit button.
     Expected: the error message “Login not less than 3 characters” must be displayed.
  3. Enter a login name of 3 characters (say "abc") plus a password, and click the Submit button.
     Expected: login successful, or the error message “Invalid Login or Password” must be displayed.
Test case status: design   Priority: high

Test case ID: Login02 – Validate Login
Description: Verify that the login name on the login page should not be greater than 10 characters.
Steps:
  1. Enter a login name of more than 10 characters (say "abcdefghijk") plus a password, and click the Submit button.
     Expected: the error message “Login not greater than 10 characters” must be displayed.
  2. Enter a login name of fewer than 10 characters (say "abcdef") plus a password, and click the Submit button.
     Expected: login successful, or the error message “Invalid Login or Password” must be displayed.
Test case status: design   Priority: high

Test case ID: Login03 – Validate Login
Description: Verify that the login name on the login page does not accept special characters.
Steps:
  1. Enter a login name starting with special characters ("!hello") plus a password, and click the Submit button.
     Expected: the error message “Special chars not allowed in login” must be displayed.
  2. Enter a login name ending with special characters ("hello$") plus a password, and click the Submit button.
     Expected: the error message “Special chars not allowed in login” must be displayed.
  3. Enter a login name with special characters in the middle ("he&^llo") plus a password, and click the Submit button.
     Expected: the error message “Special chars not allowed in login” must be displayed.
Test case status: design   Priority: high

Test case ID: Pwd01 – Validate Password
Description: Verify that the password on the login page must be greater than 6 characters.
Steps:
  1. Enter a password of fewer than 6 characters (say "a") plus a login name, and click the Submit button.
     Expected: the error message “Password not less than 6 characters” must be displayed.
  2. Enter a password of 6 characters (say "abcdef") plus a login name, and click the Submit button.
     Expected: login successful, or the error message “Invalid Login or Password” must be displayed.
Test case status: design   Priority: high

Test case ID: Pwd02 – Validate Password
Description: Verify that the password on the login page must be less than 10 characters.
Steps:
  1. Enter a password of more than 10 characters (say "abcdefghijk") plus a login name, and click the Submit button.
     Expected: the error message “Password not greater than 10 characters” must be displayed.
  2. Enter a password of fewer than 10 characters (say "abcdefghi") plus a login name, and click the Submit button.
     Expected: login successful, or the error message “Invalid Login or Password” must be displayed.
Test case status: design   Priority: high

Test case ID: Pwd03 – Validate Password
Description: Verify that the password on the login page must allow special characters.
Steps:
  1. Enter a password with special characters (say "!@hi&*P") plus a login name, and click the Submit button.
     Expected: login successful, or the error message “Invalid Login or Password” must be displayed.
Test case status: design   Priority: high

Test case ID: Llnk01 – Verify Hyperlinks
Description: Verify whether the hyperlinks available at the left side of the login page are working.
Steps:
  1. Click the Home link.          Expected: the Home page must be displayed.
  2. Click the Sign Up link.       Expected: the Sign Up page must be displayed.
  3. Click the New Users link.     Expected: the New Users Registration Form must be displayed.
  4. Click the Advertise link.     Expected: the page with information and tariff plans for advertisers must be displayed.
  5. Click the Contact Us link.    Expected: the Contact Information page must be displayed.
  6. Click the Terms link.         Expected: the Terms of Service page must be displayed.
Test case status: design   Priority: low

Test case ID: Flnk01 – Verify Hyperlinks
Description: Verify whether the hyperlinks displayed in the footer of the login page are working.
Steps:
  1. Click the Home link.                 Expected: the Home page must be displayed.
  2. Click the Sign Up link.              Expected: the Sign Up page must be displayed.
  3. Click the Contact Us link.           Expected: the Contact Information page must be displayed.
  4. Click the Advertise link.            Expected: the page with information and tariff plans for advertisers must be displayed.
  5. Click the Terms Of Membership link.  Expected: the Terms of Service page must be displayed.
  6. Click the Privacy Policy link.       Expected: the Privacy Policy page must be displayed.
Test case status: design   Priority: low

Test case ID: Lblnk01 – Verify Hyperlinks
Description: Verify whether the hyperlinks displayed in the login box on the login page are working.
Steps:
  1. Click the NEW USERS link located in the login box.                Expected: the New Users Registration Form must be displayed.
  2. Click the New Users (blue colour) link located in the login box.  Expected: the New Users Registration Form must be displayed.
  3. Click the Forgot Password link located in the login box.          Expected: the Password Retrieval page must be displayed. (Priority: medium)
Test case status: design   Priority: low


1. How can we write a good test case?



2. For a triangle (where the sum of any two sides must be greater than or equal to the third side), what is the minimal number of test cases required?

The answer is 3:

1. Measure all sides of the triangle.

2. Add the minimum and second-highest lengths of the triangle and store the result as Res.

3. Compare Res with the largest side of the triangle.
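The three steps translate directly into code. A small sketch (using the ≥ comparison stated above, which admits degenerate triangles):

```python
def is_valid_triangle(a, b, c):
    """Apply the three steps: order the sides, add the two smaller ones,
    and compare the sum against the largest side."""
    smallest, middle, largest = sorted((a, b, c))  # step 1: measure/order the sides
    res = smallest + middle                        # step 2: sum the two smaller sides
    return res >= largest                          # step 3: compare Res with the largest
```

Each of the three comparisons (sum greater, equal, or less than the largest side) is one of the three test cases the answer refers to.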

3. How will you check that your test cases covered all the requirements?

By using a traceability matrix.
A traceability matrix is the matrix showing the relationship between the requirements and the test cases.


1. What bugs mainly come up in web testing, and what severity and priority do we give them?

The bugs that mainly come up in web testing are cosmetic bugs on web pages, field-validation bugs, and bugs related to the scalability, throughput, and response time of web pages.

2. What is the difference in testing a CLIENT-SERVER application and a WEB application?


1. Testing Scenarios : How do you know that all the scenarios for testing are covered?

By using the Requirement Traceability Matrix (RTM) we can ensure that we have covered all the functionalities in Test Coverage.

RTM is a document that traces user requirements from analysis through implementation. The RTM can be used as a completeness check to verify that all the requirements are present and that there are no unnecessary/extra features, and as a maintenance guide for new personnel.

We can use the simple format in Excel sheet where we map the Functionality with the Test case ID.
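The Excel-sheet mapping described above can be sketched as a simple dictionary; the requirement and test case IDs below are illustrative:

```python
# Hypothetical RTM: each requirement ID maps to the test case IDs that
# cover it, just as the Excel sheet would.
rtm = {
    "REQ-001": ["Login01", "Login02", "Login03"],
    "REQ-002": ["Pwd01", "Pwd02", "Pwd03"],
    "REQ-003": [],  # no test cases traced yet -- a coverage gap
}

def uncovered(matrix):
    """Completeness check: requirements with no test case traced to them."""
    return sorted(req for req, cases in matrix.items() if not cases)
```

Running the completeness check flags `REQ-003`, which is exactly the kind of gap the RTM exists to expose.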

2. Complete Testing with Time Constraints : Question: How do you complete the testing when you have a time constraint?

If I am doing regression testing and do not have sufficient time, then I have to decide which sort of regression testing to go for:
1) Unit regression testing
2) Regional regression testing
3) Full regression testing

3. Given a Yahoo application, how many test cases can you write?

First we need the requirements of the Yahoo application.
Test cases are written against given requirements, so for any working web application or new application, requirements are needed to prepare test cases. The number of test cases depends on the requirements of the application.

Note to learners: a test engineer must have knowledge of the SDLC. I suggest learners take any one existing application and start practising by writing its requirements.

4. Let's say we have a GUI map and scripts, and some 5 new pages are included in an application. How do we handle that?

By integration testing.

5. A GUI contains 2 fields: Field 1 accepts the value of x, and Field 2 displays the result of the formula a+b/c-d, where a=0.4*x, b=1.5*a, c=x, d=2.5*b. How many system test cases would you write?

The GUI contains only 2 fields – Field 1 to accept the value of x and Field 2 to display the result – so there is only one test case to write.
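For reference, the formula can be evaluated for a sample input (assuming standard operator precedence, i.e. a + (b/c) - d):

```python
def field2(x):
    """Compute the Field 2 value from x using the definitions in the question."""
    a = 0.4 * x
    b = 1.5 * a
    c = x
    d = 2.5 * b
    return a + b / c - d  # standard precedence: a + (b/c) - d

# For x = 2: a = 0.8, b = 1.2, c = 2, d = 3.0 -> 0.8 + 0.6 - 3.0 = -1.6
```

Note that x = 0 makes c zero and the division undefined, so a boundary test for that value would also be worth considering.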


1. How would the test cases be for product testing? Provide an example of a test plan template.

For product testing, the test plan includes more rigorous testing, since most of these products are off-the-shelf CD purchases or net downloads.

Some of the common parameters in Testing must include
-------------------------------------------------------
1) Testing on Different Operating Systems
2) Installations done from CD ROM Drives with different machine configurations
3) Installations done from CD ROM Drives with different machine configurations with different versions of Browsers and Software Service Packs
4) LICENSE KEY functionality
5) Eval-version checks and full-version checks with reference to the eval keys that would need to be processed.


1. What are the different types of Bugs we normally see in any of the Project? Include the severity as well.

The life cycle of a bug, in a general context:

Bugs are usually logged by the development team (while unit testing) and also by testers (while doing system or other types of testing).

So let me explain from a tester's perspective:

A tester finds a new defect/bug and logs it using a defect tracking tool.

1. Its status is 'NEW', and it is assigned to the respective dev team (team lead or manager).
2. The team lead assigns it to a team member, so the status is 'ASSIGNED TO'.
3. The developer works on the bug, fixes it, and re-assigns it to the tester for testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is fixed, he changes the status to 'VERIFIED'.
5. If the tester has the authority (depends on the company), he can, after verifying, change the status to 'FIXED'. If not, the test lead can verify it and change the status to 'FIXED'.
6. If the defect is not fixed, he re-assigns the defect back to the dev team for re-fixing.

This is the life cycle of a bug.
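The status flow above can be captured as a small state machine; the status names follow the list above, and the transition table is a sketch, not a standard:

```python
# Allowed status transitions, taken from the life cycle described above.
TRANSITIONS = {
    "NEW": {"ASSIGNED TO"},
    "ASSIGNED TO": {"RE-ASSIGNED"},
    "RE-ASSIGNED": {"VERIFIED", "ASSIGNED TO"},  # fixed, or sent back for re-fixing
    "VERIFIED": {"FIXED"},
    "FIXED": set(),                               # terminal state
}

class Bug:
    def __init__(self):
        self.status = "NEW"

    def move_to(self, new_status):
        """Advance the bug, rejecting transitions the life cycle doesn't allow."""
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
```

Defect tracking tools enforce exactly this kind of transition table, which is why a bug cannot jump from 'NEW' straight to 'FIXED'.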

1. User Interface Defects - Low
2. Boundary Related Defects - Medium
3. Error Handling Defects - Medium
4. Calculation Defects - High
5. Improper Service Levels (Control flow defects) - High
6. Interpreting Data Defects - High
7. Race Conditions (Compatibility and Intersystem defects)- High
8. Load Conditions (Memory Leakages under load) - High
9. Hardware Failures:- High

2. Top Ten Tips for Bug Tracking

1. A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is extremely helpful for the programmer who has to find the bug.

2. Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.

3. There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design.

4. Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.

5. You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.

6. If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."

7. If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.

8. If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.

9. If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database, too.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.


    1. What criteria would you use to select Web transactions for load testing?

    This again comes from the voice of the customer, which includes the most commonly used transactions of the application. We cannot load test all transactions, so we need to identify the business-critical transactions; this can be done by talking to the business users.

    2. For what purpose are virtual users created?

    Virtual users are created to emulate real users.

    3. Why it is recommended to add verification checks to your all your scenarios?

    To verify the functional flow, verification checks are used in the scenarios.

    4. In what situation would you want to parameterize a text verification check?

    A text verification check needs to be parameterized when the expected text itself depends on the test data – for example, when the page displays the logged-in user's name, each virtual user must verify a different string.

    5. Why do you need to parameterize fields in your virtual user script?

    The need for parameterization: for example, a test that inserts a record into a table with a primary key field. The recorded Vuser script tries to enter the same record into the table for every one of the virtual users, but fails due to the integrity constraint. In that situation we definitely need parameterization.
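The primary-key failure described above can be reproduced in miniature, with an in-memory SQLite table standing in for the application's database (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, item TEXT)")

def insert_order(order_id):
    """Insert one record, as each virtual user's script iteration would."""
    conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "book"))

insert_order(1)           # first virtual user succeeds
try:
    insert_order(1)       # unparameterized replay of the same recorded value
    duplicate_failed = False
except sqlite3.IntegrityError:
    duplicate_failed = True  # integrity constraint rejects the duplicate key

# Parameterized: each virtual user supplies a unique value, so all succeed.
for vuser_id in range(2, 6):
    insert_order(vuser_id)
```

The unparameterized replay hits the integrity constraint on the second insert, while the parameterized loop inserts cleanly – the same behaviour the recorded Vuser script shows at scale.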

    6. What are the reasons why parameterization is necessary when load testing the Web server and the database server?

    Parameterization is done to check how your application performs the same operation with different data. In LoadRunner it is necessary when making a single user refer to the page several times; similarly in the case of the database server.

    7. How can data caching have a negative effect on load testing results?

    Yes, data caching can have a negative effect on load testing results: cached responses make the server appear faster than it would be for real, distinct users. Caching behaviour can be altered according to the requirements of the scenario in the run-time settings.

    8. What usually indicates that your virtual user script has dynamic data that is dependent on your parameterized fields?

    Use the extended logging option of reporting.

    9. What are the benefits of creating multiple actions within any virtual user script?

    Reusability, repeatability, reliability.

    10. Load Testing - What should be analyzed.

    To determine the performance of the system, the following objectives are to be calculated:
    1) Response time: the time in which the system responds to a transaction, i.e., the interval between submission of a request and receipt of the response.
    2) Think time: the time a real user pauses between transactions, which should be emulated to keep the load realistic.

    11. What is the difference between Load testing and Performance testing?

    Performance testing verifies loads, volumes, and response times as defined by requirements, while load testing is testing an application under heavy load to determine at what point the system's response time degrades.

TESTING CYCLE

Although testing varies between organizations, there is a cycle to testing:

* Requirements Analysis: Testing should begin in the requirements phase of the software life cycle.

* Design Analysis: During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests will operate.

* Test Planning: Test Strategy, Test Plan(s), Test Bed creation.

* Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.

* Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.

* Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.

* Retesting the Defects

Not all errors or defects reported must be fixed by a software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user.