Software Quality:
Software is said to be of good quality when it satisfies four factors:
1. Meets customer requirements (technical factor)
2. Meets customer expectations (technical factor)
3. Time to market (non-technical factor)
4. Cost to purchase (non-technical factor)
Software Quality Assurance: (SQA)
SQA means maintaining and measuring the strength of the development process throughout the life cycle.
Eg: Life Cycle Testing.
Life Cycle Development:
Stages in S/W Development Life Cycle
1. Information Gathering
2. Analysis
3. Design
4. Coding
5. Testing
6. Maintenance & Implementation
Life Cycle Testing:
BRS: Business Requirement Specification
In this Fish model, the Analysis, Design and Coding phases are called Verification; System Testing and Maintenance are called Validation.
BRS : (Business Requirement Specification) This document defines customer requirements to be developed as a software.
This document is also known as Customer Requirement Specification (CRS) and User Requirement Specification (URS).
SRS : (Software Requirement Specification) This document defines the functional and system requirements of the software to be developed.
Review : It is a static testing technique to estimate the completeness and correctness of a document.
HLD : (High Level Design) This document defines overall hierarchy of the system from root module to leaf modules.
This HLD is also known as External Design.
LLD's : (Low Level Design) This document defines the internal logic of every sub module in terms of structural logic (DFDs) and backend logic (E-R diagrams).
The LLD is also known as Internal Design.
Eg: DFD’s, E-R diagrams, class diagrams and object diagrams etc.
Prototype : A sample model of an application without functionality (i.e. only screens) is called prototype.
Eg: Power Point Slides.
White Box Testing : It is a coding-level (program-level) testing technique used to estimate the completeness and correctness of a program in terms of its internal logic.
Black Box Testing : It is a build-level testing technique (a build is the .exe form of the software). During this test, test engineers validate the completeness and correctness of every functionality in terms of customer requirements.
Software Testing : The verification and validation of a software is called software Testing.
Verification : Are we building the system right?
Validation : Are we building the right system? (with respect to customer requirements).
Note : Almost all companies implement the above model to produce quality software. When this model is refined in depth, it is called the V-Model.
V-Model :
V- stands for Verification and Validation.
This model defines mapping between software development stages and testing stages. This model is derived from Fish Model.
Refinement Form of V-Model : The V-Model is an expensive process for small scale and medium scale organizations. For this reason, such organizations maintain a separate testing team only for system-level testing (i.e. Black Box Testing).
I. Reviews During Analysis: In general, the software development process starts with information gathering and analysis. In this phase, business analysts develop the BRS and S/wRS.
To estimate the completeness and correctness of these documents (BRS and S/wRS), the responsible business analysts conduct reviews against the factors below:
1. Are they right requirements?
2. Are they complete?
3. Are they achievable? (With respect to Technology).
4. Are they reasonable? (With respect to Time).
5. Are they testable?
II. Reviews During Design: After completion of analysis and its reviews, designers concentrate on developing the external design (HLD) and internal design (LLDs).
To estimate the completeness and correctness of these documents (HLD and LLDs), they review them against the factors below:
1. Are they understandable?
2. Do they meet the right requirements?
3. Are they complete?
4. Are they followable?
5. Do they handle errors?
III. Unit Testing: After completion of design and its reviews, programmers concentrate on coding to physically construct the software. In this phase, programmers conduct unit-level testing on their programs through the white box testing technique. White box testing is classified into three parts:
1. Execution Testing
Basis Paths Coverage
Loops Coverage
Program Technique Coverage
Basis Paths Coverage: Ensures that every statement in the program participates correctly in execution.
For example, an if-else statement has to be checked twice: once for the if part and once for the else part.
Loops Coverage: Checks whether every loop terminates correctly, without going into an infinite loop.
Program Technique Coverage: A program is considered well written if its execution uses fewer memory cycles and CPU cycles.
2. Operations Testing: Checks whether the program runs on the customer's expected platforms. Platforms means operating systems, compilers, browsers, etc.
3. Mutation Testing: Mutation means a small change in the program logic. Programmers follow this technique to estimate the completeness and correctness of their program testing: if the existing tests still pass after the change, the tests are incomplete.
Fig: The same set of tests is repeated after each change to the program.
IV. Integration Testing: After the dependent modules are developed and tested, developers compose them to form a build and conduct integration testing to verify the completeness and correctness of the module composition.
There are three approaches in Integration testing such as
a) Top-down Approach: In this approach, testing is conducted on the main module without some of its sub modules being ready.
Developers use temporary programs in place of the under-construction sub modules; these are called 'stubs'. A stub deactivates the flow to the under-construction sub module and returns the flow to the main module; the main module calls the stub.
b) Bottom-up Approach: In this approach, testing is conducted on the sub modules without going through the main module.
Developers use a temporary program in place of the under-construction main module; it is called a 'driver'. The driver activates the flow to the sub modules, i.e. the driver calls the sub modules.
c) Hybrid Approach: This approach is a combination of the Top-down and Bottom-up approaches. It is also known as the Sandwich Approach.
Build: The fully integrated set of modules in '.exe' form is called a build or system.
V. Functional & System Testing: After completion of all possible modules integration as a system. The separate testing team in an organization validates that build through a set of Black Box testing techniques.
These techniques are classified into four divisions such as
1. Usability Testing
2. Functional Testing
3. Performance Testing
4. Security Testing
1. Usability Testing: In general, system-level testing starts with usability testing, which is done in the early days of the test effort. At this level, the testing team follows two testing techniques.
a) User Interface Testing: Under this testing technique, we have three categories
Ease of use (i.e. understandable screens to the user)
Look & Feel (Attractiveness of screens)
Speed in Interface (Sharp navigations to complete a task)
b) Manual Support Testing: In this technique, test engineers verify the context sensitiveness (with respect to the user's work or task) of the user manuals. This is done in the final days of the test effort.
Eg: Help documents of user manuals.
Fig: Receive build from developers → User Interface Testing → Remaining Functional & System Tests → Manual Support Testing.
2. Functional Testing: The mandatory part of black box testing is functional testing. During these tests, the testing team concentrates on "meeting customer requirements". Functional testing is classified into the sub tests below.
a) Functionality Testing: It is also known as requirements testing and validates the correctness of every functionality. Functionality testing covers the six areas below:
i. Behavioral Coverage: Checks that the properties of objects change with respect to the task. For example, only after we type the user-id and password should the Submit button become enabled; i.e. the next button is enabled with respect to the task.
ii. Error-Handling Coverage: Checks that negative navigations are prevented. For example, in a login process, if we give a wrong password the application has to show a warning message rather than terminate.
iii. Input Domain Coverage: Checks whether the size and type of every input object are correct.
iv. Calculations Coverage: Checks the correctness of the outputs.
v. Backend Coverage: Checks that the impact of front-end operations appears in the back-end database; an operation performed on the front end must also change the corresponding back-end tables.
vi. Service Levels Coverage: Checks that the functionalities are placed in the right order. For example, if 'Forgot Password' is placed so that it can only be reached after logging in, the placement is wrong: a user who forgot the password cannot log in to reach it.
b) Input Domain Testing: It is a part of functionality testing, but test engineers give special treatment to the input domains of objects through 'Boundary Value Analysis' (BVA) and 'Equivalence Class Partitioning' (ECP).
BVA defines the boundaries of an object's range or size; for example, for an age field the range is 18-60, and here the range (not the size) is considered. ECP defines which types of characters are valid for the object and which are invalid.
Example 1: A login process allows user-id and password to authorize users. From designed documents, user-id allows alpha-numeric from 4-16 characters long and password allows in lower case from 4-8 characters long.
Prepare BVA and ECP for user-id and password.
User-id (alphanumeric, 4-16 characters):
BVA (size): 4, 5, 15 and 16 characters are valid; 3 and 17 characters are invalid.
ECP (type): valid - a-z, A-Z, 0-9; invalid - special characters and blank.
Password (lower case, 4-8 characters):
BVA (size): 4, 5, 7 and 8 characters are valid; 3 and 9 characters are invalid.
ECP (type): valid - a-z; invalid - A-Z, 0-9, special characters and blank.
These BVA and ECP values tell us the valid size or range and the valid type for each object.
Example 2: A textbox allows 12-digits numbers. In this number, ‘*’ is mandatory and ‘-‘ is optional. Give the BVA and ECP for this textbox.
Textbox:
c) Recovery Testing: It is also known as Reliability Testing. During this test, test engineers validate whether the application build returns from an abnormal state to a normal state.
For example, if the application is terminated or power fails in the middle of a process, the application should not hang the system; it should give a message to the end user, and the system should return from the abnormal state to the normal state through backup and recovery procedures.
d) Compatibility Testing: It is also known as portability testing. During this test, test engineers validate whether our application build runs on the customer's expected platforms.
During this test, test engineers face two types of compatibility problems: forward compatibility and backward compatibility.
Forward compatibility example: a VB program does not work on a UNIX platform; our software is correct, but the operating system has a defect. This case rarely occurs, because operating systems seldom have such defects.
Backward compatibility example: Oracle-95 was developed for Windows-95 but has to work on Windows-98 as well; if it fails to, the defect lies in our application rather than in the operating system.
e) Configuration Testing: It is also known as 'Hardware Compatibility Testing'. During this test, test engineers validate whether our application build runs on hardware devices of different technologies.
Ex: printers of different technologies, different technology LAN cards, different LAN topologies; the application build should work with all of them.
f) Intersystem Testing: It is also known as end-to-end testing. During this test, test engineers validate whether our application build can coexist with other existing software to share common resources.
Eg: Consider a server that already runs water bill, electricity bill and telephone bill applications, to which a new income tax bill application is added. Before the new component is added, the whole system works well: a transaction in the water bill application followed by a transaction in the electricity bill application completes normally. After the income tax bill application is added, a transaction in that new component followed, after some time, by a transaction in the telephone bill application hangs the system. The connection to the shared server is made correctly, but the new component does not disconnect from the server properly, so the system hangs.
g) Sanitation Testing: It is also known as garbage testing. During this test, test engineers look for any extra functionality in the build with respect to the SRS.
h) Installation Testing: During this test, test engineers check the ease of the setup interface during installation and the occupied disk space after installation.
i) Parallel Testing: It is also known as comparative testing. During this test, test engineers try to find the competitiveness of our product by comparing it with other competitive products. This test is done only for software products, not for application software.
3) Performance Testing: It is an expensive testing division in black box testing. During this test, testing team concentrate on “speed of the processing” in our application build.
Performance Testing classified into below testing techniques
a) Load Testing: The execution of our application build under customer expected configuration and customer expected load to estimate performance is called Load Testing or Scalability Testing. ‘Scale’ means number of concurrent users.
b) Stress Testing: The execution of our application build under the customer's expected configuration and loads outside the expected intervals, to estimate performance, is called Stress Testing.
For example, in a web application the customer expects 1000 concurrent users; test engineers also run the test with 2000 users, 200 users, 1 user and no users, and the build should still give acceptable performance.
c) Storage Testing: The execution of our application build under huge amount of resources to estimate storage limits is called Storage Testing.
Example: When we take VB as front-end and MS-Access as back-end database, here MS-Access supports only 2 GB of storage, is a limitation.
d) Data Volume Testing: The execution of the application build under huge amounts of data to estimate the volume (size) of data it can handle, expressed in terms of records, is called Data Volume Testing. In storage testing we express capacity in terms of GB; here we express it in terms of records.
4) Security Testing: This testing technique is complex to be applied. During this test, testing team concentrate on “Privacy to user operations “in our application build.
This testing technique is classified into below sub tests
a) Authorization: Whether a user is authorized or not to connect to an application?
b) Access Control: Whether a valid user has permission to use specific service or not?
c) Encryption/Decryption:
Here sender i.e. client performs encryption and receiver i.e. server performs decryption.
Note: In small scale organizations, authorization and access control are covered by the test engineers. Developers perform the encryption/decryption procedures.
VI. User Acceptance Testing (UAT): After completion of all possible functional and system tests, our project management concentrates on User Acceptance Testing to collect feedback from customer site people.
There are two ways to conduct UAT: Alpha testing, conducted at the development site with customer-side people involved, and Beta testing, conducted at the customer site.
VII. Testing during Maintenance: After completion of User Acceptance Testing and the resulting modifications, project management concentrates on forming a release team.
This team consists of a few developers, a few testers and a few hardware engineers. The release team conducts port testing at the customer site to estimate the completeness and correctness of the software installation there. Port testing covers:
Compact installation
Over all functionality
Input devices handling
Output devices handling
Secondary storage devices
Operating system error-handling
Co-existence with other software to share common resources
After completion of port testing, the release team conducts training sessions for the end users. During utilization of the software, customer-site people send "change requests" to our organization.
Defect Removal Efficiency (DRE):
DRE = A / (A + B)
Here 'A' is the number of bugs found by the testing team during testing, and 'B' is the number of bugs found by customer-site people during a certain period of maintenance.
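For instance (hypothetical numbers): if the testing team finds A = 95 defects during testing and customer-site people report B = 5 defects during maintenance, then DRE = 95 / (95 + 5) = 0.95, i.e. 95% of the defects were removed before release.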
Testing Terminology:
1. Monkey Testing or Chimpanzee Testing: Covering only the "main activities" of the application build during testing is called monkey testing. The testing team follows this style due to lack of time.
For example, out of Mail open, Mail compose, Mail reply and Mail forward, we test only Mail open and Mail compose, because Mail reply and Mail forward are similar to Mail compose and time is short.
2. Exploratory Testing: Covering all activities level by level during testing is called exploratory testing. Test engineers follow this style due to lack of knowledge of the application; it is done module by module as knowledge of each module is gained.
3. Ad-hoc Testing: A tester conducts a test on the application build based on predetermined ideas; this is called ad-hoc testing. The tester tests the build based on past experience.
4. Sanity Testing: Checks whether a build released by the development team is stable enough for complete testing. Rejecting a build without giving a reason, e.g. just saying "the watch is not working", corresponds to the sanity level.
5. Smoke Testing: It is an extra shake-up over sanity testing. At this level, the testing team rejects a build with a reason when the build is not stable enough for complete testing to be applied.
For example, saying the watch is not working because of the key rod, i.e. rejecting it with a reason.
6. Big Bang Testing: A single stage of testing process after completion of entire system development is called Big Bang Testing. It is also known as “Informal Testing”.
7. Incremental Testing: A multiple stages of testing process from program level to system level are called Incremental Testing or Formal Testing.
Eg: LCT (Life Cycle Testing).
8. Manual Vs Automation: When a test engineer conducts a test on the application build without the help of any software tool, it is called manual testing. When the test is conducted with the help of a software tool, it is called test automation.
Eg: A carpenter fitting a screw without a screwdriver (by hand) is like manual testing; fitting the screw with a screwdriver is like test automation.
Test automation is taken up based on two factors, as follows:
Impact means the repetition of a test. Criticality means the complexity of applying a test manually. Because of the impact and criticality of tests, test engineers concentrate on test automation.
9. Retesting: The re-execution of a test with multiple test data on same application build is called Retesting.
10. Regression Testing: The re-execution of tests on modified build to ensure “bug fix work” and possibilities of side effects occurrence is called Regression Testing.
Note: From the definitions of Retesting and Regression testing, test repetition is a mandatory task in test engineer’s job. Due to this reason, test engineers are going to Test Automation.
WinRunner 7.0
Developed by Mercury Interactive
Functionality Testing Tool
Supports VB, VC++, Java, Power Builder, Delphi, D2K, HTML, SIEBEL
Run on Windows family operating system
XRunner on Unix and Linux platforms
WinRunner records our manual test process in TSL (Test Script Language, a C-like language).
WinRunner Testing Process
Learning: The recognition of objects and windows in our application build by WinRunner is called Learning.
Recording: The conversion of a manual test into an automation program is called recording. It is like recording a voice on tape: to hear the voice again, we replay the tape.
Edit Script: The insertion of check points and control statements (i.e. if statements) into recorded script is called edit script.
Run Script: The execution of automated script on application build is called Run Script. During this script execution, WinRunner returns test results as passed or failed.
Analyze Results: Test engineers analyze those results to concentrate on defect tracking when that test is failed. This is the day-to-day job of test engineers.
Add-in Manager: This window provides list of technologies with respect to license of WinRunner.
WinRunner Icons:
1. This symbol indicates start recording. In this WinRunner, we have one advantage i.e. when we start recording it automatically learn about the objects in our project.
2. Run from top
3. Run from point
4. Stop recording
5. Pause (Stop run)
Note: WinRunner 7.0 supports auto-learning to recognize objects and windows during recording.
Recording Modes: WinRunner allows you to record our operations in two types of modes
a) Context Sensitive Mode
b) Analog Mode
Of the two modes, context sensitive mode is the default. In this mode, WinRunner records mouse and keyboard operations with respect to objects and windows. For example:
1) Click a push button → button_press("buttonname");
2) Fill a textbox → edit_set("editbox", "xxxx");
3) Focus to a window → set_window("windowname", time to focus);
4) Select an item in a list box (combo) → list_select_item("listboxname", "selected item");
5) Select an option in a menu → menu_select_item("menu name; option name");
6) Select a radio button → button_set("radiobuttonname", ON);
7) Select a check box → button_set("check box name", ON/OFF);
Analog Mode: We can use this mode to record mouse pointer movements on the desktop. To change to analog mode during recording, we can use any of the options below:
1) Click "start recording" twice, or
2) Go to the Create menu and choose the record analog option, or
3) Press F2 to toggle between the two modes.
Test engineers are using this analog mode to record digital signatures, graph drawing, image movements etc.
Note: In analog mode, WinRunner records mouse pointer movements with respect to desktop coordinates. For this reason, test engineers keep the corresponding window positions and the monitor resolution constant between recording and running.
Test Script: An automated program to apply a test on application build is called Test Script. Every test script consists of navigational statements to operate project and check points to conduct testing.
Case Study:
Test Script:
set_window (“login”, 5);
button_check_info(“OK”,”enabled”, 0);
edit_set(“User-Id”, “xxxx”);
button_check_info(“OK”, “enabled”, 0);
password_edit_set("Password", "encrypted pwd");
button_check_info(“OK”, “enabled”, 1);
button_press(“OK”);
Check Points: WinRunner is a functionality testing tool and it provides facilities to apply below coverage on application build.
1) Behavioral Coverage
2) Error-handling coverage
3) Input domain coverage
4) Calculations Coverage
5) Backend Coverage
6) Service Level Coverage
To automate above coverage, WinRunner provides four types of check points
1) GUI check point
2) Bitmap check point
3) Database check point
4) Text check point
1) GUI Check Point: To verify properties of objects, we can use this check point. This check point consists of three sub options
a) For single property
b) For object/window
c) For multiple objects
a) For Single Property: To verify a single property of an object, we can use this option. (One property to one object, one to one)
Example: update button
Test Procedure (expected state of the Update Order button at each step):
Focus to the window → Disabled
Open a record → Disabled
Perform a change → Enabled
Navigation:
Test Script: This is a test script for update button in Flight reservation
set_window (“Flight Reservation”, 5);
button_check_info (“update order”, “enabled”, 0);
menu_select_item(“File; openorder”);
set_window (“openorder”, 5);
button_set (“order_No”, ON);
edit_set (“Edit_1”, “1”);
button_press(“OK”);
set_window (“Flight Reservation”, 10);
button_check_info (“update order”, “enabled”, 0);
button_set(“Economy”, ON);
button_check_info (“update order”, “enabled”, 1);
button_press(“update order”);
Note: TSL is a case sensitive language. It maintains entire script in lower case and flags in upper case.
Example 2:
Test Script:
set_window(“student”, 5);
button_check_info(“OK”, “enabled”, 0);
list_select_item(“RollNo”, “xxxxxx”);
button_check_info(“OK”, “enabled”, 0);
edit_set(“Name”, “xxxxx”);
button_check_info(“OK”, “enabled”, 1);
button_press(“OK”);
Example 3:
Test Script:
set_window(“Employee”, 5);
button_check_info(“OK”, “enabled”, 0);
list_select_item(“EmpNo”, “xxxxxx”);
edit_check_info(“Name”,”focused”, 1);
button_check_info(“OK”, “enabled”, 0);
edit_set(“Name”, “xxxxx”);
button_check_info(“OK”, “enabled”, 1);
button_press(“OK”);
Case Study:
Example 4:
set_window(“Jouney”, 5);
list_select_item(“Fly from”, “xxxxx”);
list_get_info(“Fly from”, “count”, n);
list_check_info(“Fly to”, “count”, n-1);
Example 5:
Expected:
Selected item in name must appear in message box after click display button.
Test Script:
set_window(“Sample 1”, 5);
list_select_item(“Name”, “xxxxxx”);
list_get_info(“Name”, “value”, x);
button_press(“OK”);
set_window(“Sample 2”, 5);
button_press(“display”);
edit_check_info(“Message”, “value”, x);
Example 6:
Test Script:
set_window(“Insurance”, 5);
list_select_item(“Type”, “xxxx”);
list_get_info(“Type”, “value”, x);
if (x == "A")
edit_check_info(“Age”, “focused”, 1);
else if (x == "B")
list_check_info(“Gender”, “focused”, 1);
else
list_check_info(“Qualification”, “focused”, 1);
Example 7:
Test Script:
set_window(“Student”, 5);
list_select_item(“RollNo”, “xx”);
button_press(“OK”);
edit_get_info(“percentage”, “value”, p);
if (p>= 80)
edit_check_info(“Grade”, “value”, “A”);
else if (p <80 && p>=70)
edit_check_info(“Grade”, “value”, “B”);
else if (p <70 && p>=60)
edit_check_info(“Grade”, “value”, “C”);
else
edit_check_info(“Grade”, “value”, “D”);
b) For object/window: To verify more than one property of a single object, we can use this option.
Eg:
Navigation:
Function:
obj_check_gui("Object Name", "Check list file.ckl", "Expected values file", time to create);
In the above syntax, the check list file specifies the list of properties to be tested; the expected values file specifies the expected values for those properties.
c) For Multiple Objects:
To verify more than one property of more than one object, we can use this option.
Navigation:
Function: win_check_gui(“window name”, “check list file.ckl”, “expected values file”, time to create);
Note: Above check point is applicable for more than one object in same window.
Paths of Files:
Test Script: C:/Program files/Mercury Interactive/WinRunner/Tmp/testname/script
Checklist: C:/program files/MI/WinRunner/Tmp/testname/script/chklist/list1.ckl
Expected Value File: C:/PF/MI/WinRunner/Tmp/testname/script/Exp Val File/gui
WinRunner allows you to perform changes in check list files and expected value files, due to sudden changes in requirements or test engineer mistakes.
a) Changes in expected values: Due to sudden changes in requirements or mistakes of test engineer, they can perform changes in expected values through below navigation.
b) Add new property: Due to changes in requirements or mistakes of test engineer, they can add new properties to existing check points through below navigation.
Note: If the default values selected by WinRunner are not equal to the values the test engineer expects, perform the changes in the result window as per the requirements and re-execute the test.
Drawback: Default values are selected for all properties, even though the test engineer has specified expected values only for some of them.
Example 8:
Navigation:
set_window (“Sample”, 5);
obj_check_gui (“object name”, “check list file”, “Expected value file”, time to create);
obj_check_gui (“age”, “list1.ckl”, “gui1”, 1);
Note: Range is a single property, but to test a range test engineers use the "For object/window" option, because two expected values are required to define a range.
Example 9:
Navigation:
Test Script:
set_window (“Sample”, 5);
obj_check_gui(“object name”, “check list file”, “expected value file”, time to create);
obj_check_gui(“Name”, “list1.ckl”, “gui1”, 1);
Here the check list file contains the property "Regular Expression" and the expected value file contains "[a-z]*".
Example 10: Prepare regular expression for alpha numeric.
Regular Expression: [a-zA-Z0-9]*
Example 11: Prepare regular expression for alpha numeric with init char (start with char)
Regular Expression: [a-zA-Z][a-zA-Z0-9]*
Example 12: Prepare regular expression for alphabets in lower case but start with “R”, end with “o”.
Regular Expression: [R][a-z]*[o]
Example 13: Prepare regular expression for yahoo id.
Example 14: Prepare regular expression for alphabets in lower case with “_” but does not start and end with “_”.
Regular Expression: [a-z][a-z_]*[a-z]
2) Bitmap Check Point: We can use this check point to compare images.
Eg: Logo’s testing, graphs comparison, digital signature comparison etc.
It consists of two sub options
a) For object/window
b) For screen area
a) For object/window: To compare our expected image with actual image, we can use this option.
Example 1: Logo Testing
Example 2: Graphs Comparison (i.e. Negative Testing)
If Expected = = Actual, the test result is Fail
If Expected != Actual, the test result is pass. [If it is expected difference]
Navigation:
Function: obj_check_bitmap("Image object name", "Image file name", time to create);
Eg: obj_check_bitmap("button", "img1", 1);
b) For Screen Area: To compare our expected image area with actual, we can use this option.
Navigation:
Function: obj_check_bitmap (“Image object name”, “Image file name”, time to create, x, y, width, height);
Note 1: TSL functions do not support "function overloading", but they follow the "variable number of arguments" concept, like the 'C' language.
Note 2: The WinRunner bitmap check point supports static images only; it does not support dynamic images.
Note 3: In functionality test automation, GUI check point is mandatory and bitmap check point is optional.
3) Data Base Check Point: To automate backend testing, test engineers use this check point. During backend testing, test engineers concentrate on the impact of front-end operations on the content of backend tables, in terms of "data validation and data integrity".
Data validation means checking whether the front-end values are correctly inserted into the backend.
Data integrity means checking whether the content of dependent tables is correctly affected.
To conduct this backend testing, test engineer follows below approach
Step1: Connect to data base
Step 2: Execute select statement
Step 3: Capture the result into an Excel sheet
Step 4: Analyze that result for data validation and data integrity.
Note: Most of the DSN drivers available in the market today are developed in the 'C' language, and WinRunner is also developed in 'C', which is why they work well together.
To follow the above approach, test engineers collect the information below from the development team.
DSN Name
Table Definitions (Names & columns of tables)
Forms versus Tables
Above information is called as DDD (Data Base Design Document).
This check point consists of three sub options
a) Default check
b) Custom check
c) Runtime record check
a) Default check: To conduct backend testing depends on database content, we use this option.
Example 1:
Create database check point (Current content of database selected as expected)
Perform front end operations
Execute database check point (Current content of database selected as actual)
Compare expected vs actual: if both are equal, the test fails (the front-end operation had no impact on the database); if they are not equal and the difference is the expected one, the test passes.
Navigation:
In the above navigation, ODBC stands for "Open Database Connectivity". ODBC is selected when the database is on the local host; if the data is on a remote host (a server), select Data Junction.
Function: db_check (“check list file.cdl”, “Query result file”);
db_check("list1.cdl", "dbvf1");
In the above syntax, the check list file specifies that "content" is the property to check, and the query result file stores the content returned by the select statement.
b) Custom check: To conduct backend testing depends on rows count, columns count and content, we can use this check point.
In general test engineers are using default check option, because content of database is able to cover rows count and columns count also. So a test engineer does not prefer this option.
c) Runtime record check: It is a new concept in WinRunner 7.0. We can use this option to find mapping between front end and back end columns.
In this check point, the test engineer specifies the expected mapping and WinRunner tests whether that mapping is right or wrong, based on the existing normalized database content.
We use this check point to find the mapping when more than one column in a database table maintains the same type and the same range of values.
Navigation:
In the above navigation there are three sub options:
a) Exactly one matching record: We select this option when checking values of primary key columns. Primary key values are unique, so there is no chance of duplicates; if more than one matching record is found, the mapping is wrong.
b) One or more matching records: We select this option when the expected mapping may match more than one record, i.e. when a front-end value can appear in more than one row of the backend column.
c) No matching records: We select this option when the front-end value is expected not to appear in the backend column at all.
Function: db_record_check("Checklistfile.cvr", DVR_ONE_MATCH/DVR_ONE_OR_MORE_MATCH/DVR_NO_MATCH, variable);
Eg: db_record_check("list1.cvr", DVR_ONE_OR_MORE_MATCH, record_num);
In the above function, ".cvr" stands for check list for verification at runtime and "DVR" stands for data verification at runtime. The check list file specifies the expected mapping between front-end objects and backend columns, the flag specifies the type of matching, and the variable receives the number of records matched.
for (i=1; i<=5; i++)
{
set_window ("Flight Reservation", 5);
menu_select_item ("File; Open Order...");
set_window ("Open Order", 2);
button_set ("Order_No", ON);
edit_set ("Edit_1", i);
button_press ("OK");
db_record_check ("list1.cvr", DVR_ONE_OR_MORE_MATCH, record_num);
}
After execution of the above check point, the test result is obtained. If all the check point results are green, the expected mapping is correct and one or more matching records were found in the backend columns. If any result is red, that check failed and the expected mapping is wrong.
4) Text Check Point: To conduct calculations and other text based test, we can use “get text” option in create Menu. This option consists of two sub options
a) From object/window
b) From Screen Area
a) From object/window: To capture an object value into a variable.
Navigation:
Function: obj_get_text("object name", variable);
Eg: obj_get_text("Name:", t);
Note: The above function is equivalent to edit_get_info("Editbox name", "value", variable);
Example:
set_window (“Sample”, 5);
obj_get_text (“Input”, x);
obj_get_text(“Output”, y);
if (y == x * 100)
printf (“Test is pass”);
else
printf (“Test is fail”);
b) From screen area: To capture static text from screens, we can use this option
Navigation:
Function: obj_get_text("Screen area name", variable, x1, y1, x2, y2);
Eg: obj_get_text("GS_Drawing", text, 34, 43, 56, 67);
On the screen, coordinates are measured from the top-left corner: x increases to the right and y increases downward.
Example 1: Expected = Number of tickets * price = total
set_window (“Flight Reservation”, 4);
obj_get_text (“Tickets”, t);
obj_get_text (“Price”, p);
p = substr (p, 2, length(p)-1);
obj_get_text (“total”, tot);
tot = substr (tot, 2, length(tot)-1);
if (tot == t * p)
printf (“Test is pass”);
else
printf (“Test is fail”);
Example 2:
set_window (“Shopping”, 4);
obj_get_text (“Quantity”, q);
obj_get_text (“price”, p);
p = substr (p, 4, length (p)-2);
obj_get_text (“Total”, tot);
tot = substr (tot, 4, length (tot)-2);
if (tot == q * p)
printf (“Test is pass”);
else
printf (“Test is fail”);
Example 3:
set_window (“Audit”, 5);
obj_get_text (“File 1”, f1);
f1 = substr (f1, 1, length (f1)-2);
obj_get_text (“File 2”, f2);
f2 = substr (f2, 1, length (f2)-2);
obj_get_text (“sum”, s);
s = substr (s, 1, length (s)-2);
if (s * 1024 == f1 + f2)
printf (“Test is pass”);
else
printf (“Test is fail”);
After executing the above examples, the result appears only in a separate output window, not in the WinRunner test results window. To report pass or fail in the test results window, we use the function below.
tl_step(): We can use this function to create our own pass or fail message during test execution.
Syntax: tl_step (“Step name”, 0/1, “message”);
In the above syntax, "tl" stands for test log; the test log records test results as passed or failed. The step name is our own choice. '0' means the step passed and the result appears in green in the result sheet; any non-zero value means the step failed and the result appears in red. "Message" is the text we want to report, e.g. "Test is pass".
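For instance, a minimal sketch (the step name "ok_button" and the button name "OK" are illustrative assumptions):
button_get_info("OK", "enabled", n);    # capture the current state of the OK button
if (n == 1)
tl_step("ok_button", 0, "OK button is enabled as expected");
else
tl_step("ok_button", 1, "OK button is disabled");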
Data Driven Test:
To validate functionality, test engineers execute the corresponding test with multiple test data. This approach is also known as "retesting". There are four types of data driven tests.
1) Dynamic Test Data Submission: To validate functionality, test engineers execute a test with multiple test data submitted dynamically from the keyboard.
To read values from keyboard during test execution, we can use below TSL statement.
Syntax: create_input_dialog (“Message”);
Note: In this style of Data Driven Testing, tester interaction is mandatory during test execution.
Example 1:
for (i=1;i<=5;i++)
{
x=create_input_dialog (“Enter order no:”);
set_window (“Flight Reservation”, 4);
menu_select_item (“File; openorder….”);
set_window (“open order”, 1);
button_set(“order No”, ON);
edit_set(“Edit_1”, x);
button_press(“OK”);
}
Example 2:
for(i=1;i<=10;i++)
{
x=create_input_dialog(“Enter input 1 value:”);
y=create_input_dialog(“Enter input 2 value:”);
set_window(“Multiply”, 4);
edit_set(“Input 1”, x);
edit_set(“Input 2”, y);
button_press(“OK”);
obj_get_text(“Result”, temp);
if(temp==x*y)
tl_step(“Step”&i, 0, “Multiplication is correct”);
else
tl_step(“Step”&i, 1, “Multiplication is wrong”);
}
Example 3:
for(i=1;i<=10;i++)
{
x=create_input_dialog(“Enter Item no:”);
y=create_input_dialog(“Enter Quantity:”);
set_window(“Shopping”, 4);
edit_set(“Item No”, x);
edit_set(“Quantity”, y);
button_press(“OK”);
obj_get_text(“Price”, p);
p=substr(p, 2, length(p)-1);
obj_get_text(“Total”, tot);
tot=substr(tot, 2, length(tot)-1);
if(tot==p*y)
tl_step(“Step”&i, 0, “Calculation is pass”);
else
tl_step(“Step”&i, 1, “Calculation is fail”);
}
Example 4:
for(i=1;i<=10;i++)
{
x=create_input_dialog(“Enter User-id:”);
y=create_input_dialog(“Enter Password:”);
set_window(“Login”, 4);
edit_set(“User-id”, x);
password_edit_set(“Password”, password_encrypt(y));
button_press(“OK”);
button_get_info(“NEXT”, “enabled”, n);
if(n==1)
tl_step(“Step”&i, 0, “User is authorized”);
else
tl_step(“Step”&i, 1, “User is unauthorized”);
}
2) Through Flat Files (.txt): Some times test engineers are conducting retesting depends on multiple test data in flat files.
To read values from flat file during test execution, we can use below file functions.
1) file_open():- We can use this function to open a flat file into RAM.
Syntax:file_open(“pathoffile”,FO_MODE_READ/FO_MODE_WRITE/FO_MODE_APPEND);
2) file_getline():- We can use this function to read a line of text from file.
Syntax: file_getline(“path of file”, variable);
Like as ‘C’ language, file pointer incremented automatically in TSL.
3) file_close():- To swap out a opened file from RAM, we can use this function.
Syntax: file_close(“path of file”);
4) file_printf():- We can use this function to write a line of text into a file.
Syntax: file_printf(“pathof file”, “format”, values/variables);
Eg: file_printf(“c:\\my documents\\filename”, “a=%d and b=%d”, a,b);
5) file_compare():- We can use this function to compare two files content.
Syntax: file_compare(“pathof file1”, “path of file 2”, “path of file 3”);
In the above syntax, the third file is optional; the first two files are compared and their concatenated content is stored in the third file.
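A small illustrative sketch (the file paths are assumptions):
file_compare("c:\\my documents\\expected.txt", "c:\\my documents\\actual.txt", "c:\\my documents\\merged.txt");   # compare the two result files; the optional third file stores the merged content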
Example 1:
f=”c:\\my documents\\testdata.txt”;
file_open(f, FO_MODE_READ);
while(file_getline(f,s)!=E_FILE_EOF)
{
set_window(“Flight Reservation”, 5);
menu_select_item(“File; open order…”);
set_window(“Open order”,1);
button_set(“Order.No”, ON);
edit_set(“Edit_1”, s);
button_press(“OK”);
}
file_close(f);
Example 2:-
f="c:\\my documents\\testdata.txt";
file_open(f, FO_MODE_READ);
while(file_getline(f,s)!=E_FILE_EOF)
{
split(s,x," ");
set_window(“Multiply”,4);
edit_set(“Input 1”, x[1]);
edit_set(“Input 2”, x[2]);
button_press(“OK”);
obj_get_text(“Result”, tmp);
if(tmp==x[1]*x[2])
tl_step("Step", 0, "Product is correct");
else
tl_step("Step", 1, "Product is wrong");
} file_close(f);
Example 3:
Test data in file: c:\\My Documents\\testdata.txt
f="c:\\My Documents\\testdata.txt";
file_open(f, FO_MODE_READ);
while(file_getline(f,s)!=E_FILE_EOF)
{
split(s, x, “ “);
set_window(“Shopping”, 5);
edit_set(“ItemNo”, x[3]);
edit_set(“Quantity”, x[6]);
button_press(“OK”);
obj_get_text(“Price”, p);
p=substr(p, 2, length(p)-1);
obj_get_text(“Total”, tot);
tot=substr(tot, 2, length(tot)-1);
if(tot==p*x[6])
tl_step("Step", 0, "Calculation is right");
else
tl_step("Step", 1, "Calculation is wrong");
} file_close(f);
Example 4:
Test data in file:- c:\\My Documents\\testdata.txt
f=”c:\\My Documents\\testdata.txt”;
file_open(f, FO_MODE_READ);
while(file_getline(f,s)!=E_FILE_EOF)
{
split(s, x, “ “);
split(x[1], y, “@”);
set_window(“Login”, 5);
edit_set("User-id", y[1]);
password_edit_set(“Password”, password_encrypt(x[2]));
button_press(“OK”);
button_get_info(“NEXT”, “enabled”, n);
if(n==1)
tl_step(“Step”, 0, “Authorized User”);
else
tl_step(“Step”, 1, “Unauthorized User”);
} file_close(f);
3) From Front End Grids:- Some times test engineers are conducting ‘Retesting’ depends on multiple data objects such as list, menu, activeX, table and data window(i.e. reports)
Example 1:
set_window(“Journey”, 3);
list_get_info(“Fly From”, “count”, n);
for(i=0; i<n; i++)
{
list_get_item(“Fly From”, i, x);
list_select_item(“Fly From”, x);
if(list_select_item("Fly To", x)!=E_OK)
tl_step("Step"&i, 0, "Item does not appear");
else
tl_step("Step"&i, 1, "Item appears");
}
Note: If a TSL statement executed successfully then, statement returns E_OK.
In the above example a new function is introduced, list_get_item; the syntax is list_get_item("listbox name", item number, variable);
It reads the item at the given item number into the variable; that value can then be used to select the item in the list box. In a list box, item numbers start at 0 and end at n-1.
Example 2:
Expected: Selected name appears in message box like as “My name is xxxxx”.
set_window(“Sample 1”, 5);
list_get_info(“Name”, “count”, n);
for(i=0; i<n;i++)
{
set_window(“Sample 1”, 1);
list_get_item(“Name”, i, x);
list_select_item(“Name”, x);
button_press(“OK”);
set_window(“Sample 2”, 4);
button_press(“Display”);
obj_get_text(“Message”, msg);
if(msg==”My name is “&x)
tl_step(“Step”&i, 0, “Matched”);
else
tl_step(“Step”&i, 1, “Unmatched”);
}
Example 3:
set_window(“Insurance”, 5);
list_get_info(“Type”, “count”, n);
for(i=0;i<n;i++)
{
list_get_item(“Type”, i, x);
list_select_item(“Type”, x);
if(x==”A”)
edit_check_info(“Age”, “focused”, 1);
else if(x==”B”)
list_check_info(“Gender”, “focused”, 1);
else
list_check_info(“Qualification”, “focused”, 1);
}
Example 4:
sum=0;
set_window(“Audit”, 4);
tbl_get_rows_count(“FileStore”, n);
for(i=1; i<n; i++)
{
tbl_get_cell_data(“FileStore”, “#”&i, “#2”, s);
s=substr(s, 1, length(s)-2);
sum=sum+s;
}
obj_get_text(“Total”, tot);
tot=substr(tot, 1, length(tot)-2);
if(tot==sum)
tl_step(“Step”, 0, “Calculation is right”);
else
tl_step(“Step”, 1, “Calculation is wrong”);
Example 5:
set_window(“Shopping”, 4);
tbl_get_rows_count(“Bill”, n);
for(i=1;i<n;i++)
{
tbl_get_cell_data(“Bill”, “#”&i, “#1”, q);
tbl_get_cell_data(“Bill”, “#”&i, “#2”, p);
p=substr(p, 2, length(p)-1);
tbl_get_cell_data(“Bill”, “#”&i, “#3”, tot);
tot=substr(tot, 2, length(tot)-1);
if(tot==p*q)
tl_step(“Step”&i, 0, “Calculation is right”);
else
tl_step(“Step”&i, 1, “Calculation is wrong”);
}
There are some functions used in the above TSL script, they are as follows;
a) list_get_item( ) :- We can use this function to capture a list item depends on item number. The syntax is
list_get_item(“listbox”, itemno, variable);
Note: List box item number starts with zero.
b) tbl_get_rows_count( ):- We can use this function to find number of rows in table grid. The syntax is as follows:
tbl_get_rows_count(“Table name”, variable);
Note: By default, row and column numbers in a table start with zero.
c) tbl_get_cell_data( ):- We can use this function to capture specified table cell value. The syntax is as follows:
tbl_get_cell_data(“Table name”, “#row no”, “#column no”, variable);
4) Through Excel Sheet:- In general, test engineers create data driven tests from an Excel sheet more than from any other source.
In this approach, test engineers maintain the required test data in an Excel sheet. To use this Excel sheet during test execution, we can use the functions below in the TSL script.
1) ddt_open( ):- To open an excel sheet, into RAM in required mode. The syntax is:
ddt_open(“path of excel sheet”, DDT_MODE_READ/DDT_MODE_READWRITE);
2) ddt_get_row_count( ):- We can use this function to find number of rows in excel sheet. The syntax is as follows:
ddt_get_row_count(“path of excel sheet”, variable);
This variable returns the number of rows in excel sheet excluding header.
3) ddt_set_row( ):- We can use this function to point a specific row in excel sheet. The syntax is:
ddt_set_row("path of excel sheet", row number);
4) ddt_val( ):- We can use this function to read specified excel sheet column value from that pointed row. The syntax is:
ddt_val(“path of excel sheet”, “column name”);
5) ddt_close( ):- We can use this function to swap out a opened excel sheet from RAM. The syntax is:
ddt_close(“path of excel sheet”);
Note: Test engineers are filling excel sheet with data base content or our own test data.
Navigation to create ddt:-
Test Script:
table=”default.xls”;
rc=ddt_open(table, DDT_MODE_READWRITE);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“cannot open table”);
ddt_update_from_db(table, “msqr1.sql”, count);
ddt_save(table);
ddt_get_row_count(table, n);
for(i=1;i<=n;i++)
{
ddt_set_row(table, i);
set_window(“Flight Reservation”, 2);
menu_select_item("File;Open Order...");
set_window(“Open Order”, 1);
button_set(“OrderNo”, ON);
edit_set("Edit_1", ddt_val(table, "order_number"));
button_press(“OK”);
}
ddt_close(table);
Function:
1) ddt_update_from_db( ):- We can use this function to update excel sheet content with respect to database changes. The syntax is :
ddt_update_from_db(“path of excel sheet”, “Query file”, variable);
In the above syntax, variable specifies number of alterations in excel sheet content.
2) ddt_set_val( ):- We can use this function to write a value into excel sheet. The syntax :
ddt_set_val(“path of excel sheet”, “column name”, value/variable);
Example 2:-
The Excel sheet has three columns: Input 1, Input 2 and Result. The Input 1 and Input 2 columns hold the test data; the Result column is filled in by the script below.
table=”default.xls”;
rc=ddt_open(table, DDT_MODE_READWRITE);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“cannot open table”);
ddt_get_row_count(table, n);
for(i=1;i<=n;i++)
{
ddt_set_row(table, i);
x=ddt_val(table,”input1”);
y=ddt_val(table, “input 2”);
z=x+y;
ddt_set_val(table, “result”, z);
ddt_save(table);
}
ddt_close(table);
Note: To open excel sheet of current test, we can use tools Menu Data Table option.
Example 3:- Prepare TSL script to print a list box items one by one into excel sheet.
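One possible sketch (the window name "Sample", the list box name "Name" and the output column name "item" are assumptions; the column header is expected to already exist in the sheet):
table="default.xls";
rc=ddt_open(table, DDT_MODE_READWRITE);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause("cannot open table");
set_window("Sample", 5);
list_get_info("Name", "count", n);    # number of items in the list box
for(i=0;i<n;i++)
{
list_get_item("Name", i, x);          # read item number i into x
ddt_set_row(table, i+1);              # point to row i+1 of the sheet
ddt_set_val(table, "item", x);        # write the item into the "item" column
ddt_save(table);
}
ddt_close(table);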
Example 4:
table=”default.xls”;
rc=ddt_open(table, DDT_MODE_READWRITE);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“cannot open table”);
ddt_get_row_count(table, n);
for(i=1;i<=n;i++)
{
ddt_set_row(table, i);
x=ddt_val(table, “Input”);
fact=1;
for(k=x;k>=1;k--)
fact=fact*k;
ddt_set_val(table, “result”, fact);
ddt_save(table);
}
ddt_close(table);
Batch Testing: The sequential execution of more than one test on our application build is called “Batch Testing”.
Every test batch consists of a set of dependent tests; the end state of one test is the base state of the next test. A test batch is also known as a Test Suite or Test Set.
Example 1:
Test case 1: Successful order open
Test case 2: Successful updating
Example 2:
Test case 1: Successful order open
Test case 2: Successful deletion
Example 3:
Test case 1: Successful New user registration
Test case 2: Successful Login operation
Test case 3: Successful Mail operation
Test case 4: Successful Mail reply
To create test batches in WinRunner, we can use call statement. The syntax is
call testname( ); or call “path of test”( );
We can use first syntax, when calling and called tests both are in same folder. We can use second syntax when both are in different folders.
Sub test:
set_window(“Flight Reservation”, 12);
menu_select_item(“File; open order…”);
set_window(“Open order”,1);
button_set(“Order.No”, ON);
edit_set(“Edit_1”, “1”);
button_press(“OK”);
set_window(“Flight Reservation”, 1);
obj_get_text(“Name”, t);
if(t == " ")
pause(“cannot open record”);
Main Test:
call testname1( );
set_window(“Flight Reservation”, 6);
obj_get_text(“Tickets”, t);
obj_get_text(“Price”, p);
p=substr(p,2, length(p)-1);
obj_get_text(“Total”, tot);
tot=substr(tot,2, length(tot)-1);
if(tot == p*t)
tl_step(“S1”, 0, “Result correct”);
else
tl_step(“S1”, 1, “Result wrong”);
Parameter passing: WinRunner allows you to pass parameters from one test to another, as programming languages do. To pass parameters, we can follow the navigation below.
Navigation:
In the above navigation, control is transferred to the called test together with the parameter values; a sketch follows.
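A minimal sketch (the called test name "opentest" and its parameter name "order_no", declared in the called test's properties, are illustrative assumptions):
Main test:
call opentest(3);                      # run the sub test, passing order number 3
Sub test "opentest" (with parameter order_no):
set_window("Flight Reservation", 12);
menu_select_item("File; open order…");
set_window("Open order", 1);
button_set("Order.No", ON);
edit_set("Edit_1", order_no);          # the passed parameter replaces a hard-coded value
button_press("OK");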
Data Driven Batch: WinRunner allows you to create Data Driven Batches with multiple test data.
Example: Data Driven Main test in a batch.
table=”default.xls”;
rc=ddt_open(table, DDT_MODE_READ);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“cannot open table”);
ddt_get_row_count(table,n);
for(i=1;i<=n;i++)
{
ddt_set_row(table,i);
temp=ddt_val(table, “input”);
call kkk(temp);
}
ddt_close(table);
Sub Test:
set_window(“Flight Reservation”, 12);
menu_select_item(“File; open order…”);
set_window(“Open order”,1);
button_set(“Order.No”, ON);
edit_set(“Edit_1”, “1”);
button_press(“OK”);
set_window(“Flight Reservation”, 1);
obj_get_text(“Name”, t);
if(t == " ")
pause(“cannot open record”);
treturn( ):- We can use this statement to return a value from a subtest to the main test. The syntax is:
treturn(value/variable);
Silent Mode: It is a runtime setting. In this mode, WinRunner continues test execution even when a check point fails.
Navigation:
Note: When WinRunner settings are in silent mode, tester-interactive statements do not work during test execution.
Eg: create_input_dialog( );
Function Generator:
This option list out all built-in functions in TSL. To search unknown TSL function from that library, we can follow below navigation
Navigation:
Example 1: Clipboard testing.
Verifying the selected (copied) portion of an object's content is called clipboard testing. The functions are:
edit_get_selection(“Name”, x);
printf(x);
Example 2: Search TSL function to print system date.
Syntax: printf(time_str( ));
Example 3: Search TSL function to print current test name.
Syntax: printf(getvar (“test name”));
Example 4: Open Project
WinRunner allows you to open a project during test execution.
Syntax: invoke_application(“path of build.exe”, “command”, “working directory”, SW_SHOW/SW_SHOWMAXIMIZE/SW_SHOWMINIMIZE);
The SW_SHOW flags control how the application window is displayed (shown normally, maximized or minimized).
The working directory defaults to c:\windows\temp.
Example 5: Window existence test.
Whether a specified window is available on desktop or not?
Syntax: win_exists(“windows name”, time);
In above syntax, time defines delay before window existence.
Example 6:
Main Test:
call test1( );
if(win_exists("Sample", 0)==E_OK)
{
call test2( );
}
call test3( );
Example 7: Execute prepared query.
A prepared statement means that a statement with variables in definition.
Eg: Select * from emp where empno<=x;
For above scenario, we can use database category functions
a) db_connect( ):- We can use this function to connect to data base using existing DSN.
Syntax: db_connect(“sessions name”, “DSN=xxxx”);
Sessions name is that memory allocated for this function.
b) db_execute_query( ):- We can use this function to execute your prepared query on that connected database.
Syntax: db_execute_query(“Sessions name”, “prepared query”,variable);
In the above syntax, variable specifies number of rows in query result after execution.
c) db_write_records( ):- We can use this function to write corresponding query result into a specified file.
Syntax: db_write_records(“Sessions name”, “path of destination file”, TRUE/FALSE, NO_LIMIT);
TRUE means that the result is displayed with column name i.e. header.
FALSE means that the result is displayed with out header i.e. only rows.
x=create_input_dialog(“Enter a limit”);
db_connect(“query1”,”DSN=Flight32”);
db_execute_query(“query1”,”select *from orders where order_number<=”&x,num);
db_write_records(“query1”, “default.xls”,TRUE,NO_LIMIT);
USER DEFINED FUNCTIONS (UDF):- Like as programming languages, WinRunner also allows you to create user defined functions.
Syntax: public/static function function_name(in/out/inout arg_name, ...)
{
---------
--------- #Repeatable navigation
---------
return( );
}
public: A public function can be invoked from any test.
static: A static function can also be invoked from any test, but it keeps its variables at constant (persistent) locations between calls.
in:- These parameters work as normal input arguments (the caller sends a value and the function receives it).
out:- These parameters work as return values.
inout:- These parameters work as both in and out.
return( ):- This statement returns a single value.
Example 1: public function add(in a, in b, out c)
{
c=a+b;
}
Calling test: x=20; y=10;
add(x,y,z);
printf(z );
Example 2: public function add(in a, inout b)
{
b=a+b;
}
Calling test: x=20; y=10;
add(x,y);
printf(y);
Example 3: public function add(in a, in b)
{
auto c;
c=a+b;
return(c);
}
Calling test: x=20; y=10;
z=add(x,y);
printf(z);
Example 4: public function openrecord(in x)
{
set_window(“Flight Reservation”, 12);
menu_select_item(“File; open order…”);
set_window(“Open order”,1);
button_set(“Order.No”, ON);
edit_set(“Edit_1”, x);
button_press(“OK”);
}
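Once this function is available to a test (for example from a compiled module, described next), any calling test can reuse it; a brief usage sketch:
openrecord(1);    # open order number 1
openrecord(2);    # open order number 2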
Compiled Modules:
A permanent .exe copy of a function is called compiled module. To create these modules, we can follow below navigation
Startup test:- WinRunner maintains a default script program as the startup test. This script executes automatically when you launch WinRunner on the desktop. The path is
C:\Program Files\Mercury Interactive\WinRunner\dat\myinit
load( ):- We can use this function to load user defined function .exe into RAM.
Syntax: load(“Compiled Module Name”, 0/1, 0/1);
First 0/1: 0 means the function is a user defined function and 1 means it is a system defined function.
Second 0/1: 0 means the path of the loaded module appears in the Windows menu and 1 means the path is invisible.
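A brief sketch (the compiled module name and path "c:\\qtest\\openrecord_module" are assumptions for illustration):
load("c:\\qtest\\openrecord_module", 0, 1);    # load a user defined module; keep its window path invisible
openrecord(5);                                 # functions in the module can now be called from any test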
Note: Above load( ) statement test engineers are using in startup script of WinRunner.
unload( ):- We can use this function to swap out unwanted functions from RAM.
Syntax: unload(“path of compiled module”, “Function name”);
reload( ):- We can use this function to reload unloaded functions into RAM.
Syntax:- reload(“path of compiled module”, “function name”); or
reload(“path of compiled module”, 0/1, 0/1);
Note: We can use unload( ) and reload( ) statements in our test script if required.
Synchronization point: We can use these concepts to define time mapping between testing tool and application build.
a) wait( ):- We can use this function to define a fixed delay during test execution.
Syntax: wait(time in seconds);
This function defines a fixed delay. For this reason, test engineers avoid wait( ) when the application is a multi-user (networked) application, because response times there are not fixed.
b) For object/window property:- We can use this option to define the time mapping between the testing tool and the application build based on a property value of an object.
Eg: a progress bar or status bar in the project's front-end screen.
In this mode, WinRunner waits until the specified object property is equal to the specified expected value.
Navigation:
First of all synchronization means that inter process communication in a system.
Function for the above navigation:
obj_wait_info(“object name”, “property”, Expected value, maximum time to wait);
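An illustrative sketch (the status bar object name "Status", the property "value" and the expected text "Ready" are assumptions):
obj_wait_info("Status", "value", "Ready", 30);   # wait up to 30 seconds for the status bar to show "Ready"
button_press("OK");                              # continue only after the application reaches the expected state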
c) for object/window bitmap:- We can use this concept to define time mapping between tool and application depends on image objects. In this mode, WinRunner wait until specified object image is equal to our expected image.
Navigation:
Function: obj_wait_bitmap(“Image object name”, “Image file name”, maximum time to wait);
d) For screen area bitmap:- We can use this concept to define the time mapping depending on a part of an image.
Navigation:
Function: obj_wait_bitmap(“Image object name”, “Image file name”, maximum time to wait, x, y, width, height);
e) Change runtime settings:- Recording time parameters are not sufficient at run time. Due to this reason, test engineers use the above four concepts to synchronize WinRunner and the build. If the above four concepts cannot be used on our application build, then test engineers change the WinRunner runtime settings. By default WinRunner maintains the below runtime settings.
Delay for window synchronization: used whenever we go for window synchronization; the default value is 1000 milliseconds.
Timeout for checkpoints and CS statements: used when we execute context sensitive statements and check points in the script; the default is 10000 milliseconds.
Navigation:
Priorities of synchronization: In general, test engineers follow the below preferences to synchronize the testing tool and the application build.
1) For object/window property
2) For object/window bitmap or for screen area bitmap
3) Change runtime settings
4) wait( );
Learning: In general, the test automation process starts with learning. But WinRunner 7.0 test engineers start with recording, because WinRunner 7.0 supports auto learning. This tool also allows pre-learning, like the lower versions.
1) Auto learning: The recognition of objects and windows in our application build during recording is called auto learning.
Step 1: Start recording
Step 2: Recognize objects and windows (during recording time)
Step 3: Generate the script (during recording time)
Step 4: Fetch the corresponding entry from the GUI Map (during running time)
Step 5: Identify the object depending on that entry (during running time)
To edit the above recognition entries of objects and windows, we can follow the below navigation.
To save or maintain these entries for a long time, WinRunner 7.0 provides two types of modes.
a) Global GUI Map file
b) Per test mode
a) Global GUI Map file: In this mode, WinRunner maintains common entries for objects and windows in a single file. If you forget to save the entries before closing WinRunner, WinRunner keeps those entries in a local buffer (i.e. 10 KB of fixed memory). To open this buffer, we can follow the below navigation.
b) Per test mode: In this mode, WinRunner maintains entries for objects and windows for every test as a separate file.
Of the above two modes, WinRunner maintains the Global GUI Map file as the default. If you want to change to per test mode, we can follow the below navigation.
2) Pre-learning:- In versions lower than WinRunner 7.0, the WinRunner testing process starts with learning, to recognize objects and windows in the application build before test creation starts. For this pre-learning, test engineers follow the below navigation.
Imp: GUI Map entries allow you to perform changes with respect to test requirements.
Situation 1: (Wild card characters)
Sometimes the labels of objects and windows in our application build change with respect to inputs. To conduct a data driven test on that type of objects and windows, test engineers use wild card characters in the corresponding entries.
Below are the original and modified entries; we perform changes only in the physical description of the entry.
To perform changes in entries like the above example, we can follow the below navigation:
Situation 2: (Regular Expression)
Sometimes the labels of objects in an application window vary depending on events. To conduct a data driven test on that type of objects, we can use regular expressions in the corresponding entry.
Situation 3: (Virtual object wizard)
Sometimes WinRunner is not able to recognize objects in our application build due to advanced technology. To forcibly recognize such unrecognized objects, we can use the virtual object wizard (VOW).
Navigation:
Situation 4: (Mapped to Standard Class)
Sometimes WinRunner is not able to return all testable properties for a recognized object. To get all testable properties for that object as well, test engineers follow the below navigation.
Situation 5: (GUI Map Configuration)
Sometimes more than one object in the same window has the same physical description with respect to the WinRunner defaults (i.e. class or label). To distinguish such objects, test engineers perform changes in the physical description of that object type.
Navigation:
Note: Test engineers maintain MSW-Id as the default optional property, because every two objects have different MSW-Ids.
MSW-Id: MicroSoft Windows identity of an object.
Situation 6: (Selective Recording)
To record our business operations on specified project only, we can use this setting. Navigation is:
User Interface Testing: WinRunner is a functionality testing tool, but it provides a set of facilities to automate user interface testing also. In this user interface test automation, WinRunner depends on MicroSoft rules.
Controls start with init cap
Ok/Cancel existence
System menu existence
Controls must be visible
Controls are not overlapped
Controls are aligned
To apply above six rules on our application build, we can use some TSL functions.
a) load_os_api( ):- We can use this function to load application programming interface (i.e. api) system calls into RAM. The syntax is
load_os_api( );
b) configure_chkui( ):- To customize the required rules out of those six, test engineers use the below function. The syntax is
configure_chkui(TRUE/FALSE, …… 6 times);
c) check_ui( ):- We can use this function to apply above customized rules on specified window. The syntax is
check_ui(“window name”);
Navigation:
Note: Sometimes we cannot automate user interface testing completely, so we do not go for test automation; we go for manual testing only.
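When automation is possible, a minimal sketch combining the above three functions could look like the following; the window name "Flight Reservation" and the choice of rules are assumptions for illustration only.
load_os_api();                                            # load the API system calls into RAM
configure_chkui(TRUE, TRUE, TRUE, TRUE, FALSE, FALSE);    # customize the six MicroSoft rules (only the first four here)
set_window("Flight Reservation", 10);
check_ui("Flight Reservation");                           # apply the customized rules on that window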
Regression Testing: During test execution, test engineers report mismatches to developers. After the developers solve those mismatches, the testing team receives the modified build and conducts regression testing to ensure the correctness of that modification, in the below approach.
a) GUI Regression: To find object properties level differences in between old build and modified build, we can conduct GUI Regression test.
The check points are created on the old build and they are applied to the new build.
Note: During this GUI regression test execution, if test engineers get only expected differences, then they concentrate on the remaining testing; otherwise they reject that modified build back to the development team.
Navigation for GUI Regression Test:
b) Bitmap Regression Test: To find images level differences between old and modified build we can use this technique.
Navigation for Bitmap Regression test:
Note: In this regression, the GUI regression test is mandatory, but the bitmap regression test is optional.
Note: After completion of GUI regression and bitmap regression, the testing team concentrates on functionality regression with respect to the modification.
Note: The Rapid Test Script Wizard does not appear in the "Create" menu when you have selected the web test option in the add-in manager (or) when you have selected per test mode in the settings.
Exception Handling:- Exception means a runtime error. To handle runtime errors during test execution, we can use three types of exceptions in WinRunner.
a) TSL Exceptions
b) Object Exceptions
c) PopUp Exceptions
a) TSL Exceptions: These exceptions are raised when a specified TSL statement returns a specified error. To create this type of exception, we can follow the below navigation:
Example:
The function header with its physical description is pasted into WinRunner, and we have to write the function body with respect to the exception raised.
public function exep (in rc, in func)
{
printf(func&”returns”&rc);
}
b) Object Exceptions:- These exceptions are raised when a specified object property becomes equal to our expected value.
To create object exception, we can follow below navigation:
Example: public function excep(in win, in obj, in attr, in val)
{
printf(“enabled”);
}
c) Popup Exceptions:- These exceptions are raised when a specified window comes into focus. We can use these exceptions to skip unwanted windows raised during test execution.
Navigation:
We can use below TSL functions to enable and disable exceptions in required positions in test scripts.
a) exception_off( ):- We can use this function to disable a specified exception. The syntax is exception_off("Exception Name");
b) exception_on( ):- We can use this function to enable a disabled exception. The syntax is exception_on("Exception name");
c) exception_off_all( ):- We can use this function to disable all exceptions. The syntax is
exception_off_all( );
Note: By default exceptions are in “ON” positions.
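For example, a sketch that disables an exception (here hypothetically named "except1") around a block where the corresponding popup is expected, and enables it again afterwards:
exception_off("except1");          # disable the popup exception before the block that deliberately raises the popup
set_window("Flight Reservation", 10);
menu_select_item("File;Exit");     # operation assumed to raise the popup window
exception_on("except1");           # enable the same exception again
# exception_off_all( ); would disable every defined exception at once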
Web Testing: WinRunner 7.0 provides facilities to automate functionality testing on web pages also (i.e. HTML).
Client/Server Architecture:-
Web Architecture:-
Web Functionality Testing: During functionality testing on a website, test engineers apply the below coverages on web pages.
1) Behavioral Coverage
2) Error-Handling Coverage
3) Input Domain Coverage
4) Calculations Coverage
5) Backend Coverage
6) URL’s test Coverage (Uniform Resource Locator)
7) Static test Coverage
I) URL’s Coverage: It is a new coverage for web applications in functionality testing, because web applications are URL driven applications. This URL testing is also known as links testing (or) Site map testing (or) navigational testing of website.
During this test, testing team concentrate on links execution and links existence.
Link execution means that, whether that link is correctly working or not?
Link existence means that, whether the link is right place or not?
To automate this URL’s testing using WinRunner, we can use GUI check point. Test engineers create this GUI check point on text link, image link, cell, table, and frame.
Note: In this web functionality testing automation, test engineers select the web test option in the add-in manager to recognize web objects developed in HTML & DHTML. The WinRunner testing tool does not support XML (eXtensible Markup Language) objects.
a) Text Link:- It is a non-standard object and it consists of a set of non-standard properties such as:
i) Back ground Color (#hexa decimal number of expected color)
* ii) Broken link (valid/not valid)
iii) Color (#hexa decimal number of expected color)
iv) Font (Style of link text)
v) Text (spelling of the link text)
* vi) URL (Expected path of the next page)
Among all the above, we go for only two properties, i.e. Broken link and URL. In Broken link we have two options: valid and not valid. Valid means that the link opens in a separate web page, whereas not valid means that the web page opens in the same page.
Function: obj_check_gui(“Link text”, “checklist file.ckl”, “Expected value file”, time to create);
b) Image link:- It is also a non-standard object and it consists of a set of non-standard properties such as:
* i) Broken link (Valid/Not valid)
ii) Image content (The binary representation of image eg: .bmp)
iii) Source (Path of image in hard disk)
iv) Type (Image link, plain image, dynamic image, image button, previously saved site image such as banners).
v) URL (Expected path of next page)
Function: obj_check_gui(“Image file name”, “check list file name.ckl”, “Expected value file”, time to create);
Check points like the above are created by test engineers when the web pages are in offline mode. There are two types of offline environments.
In this URL testing, test engineers collect information from the development team through a site map document. This document defines the mapping between a link name and the corresponding offline path of the next page.
Format of sitemap document (in excel sheet or MS Word)
c) Cell:- It is also a non-standard object and it indicates some area of a web page, including text links and image links. To cover the links at one cell level through a single check point, we can use cell properties. To get the cell properties, we can follow the below navigation:
Properties:-
i) Back ground color (#hexa decimal number of expected color)
ii) Broken links (Link name, URL, YES/NO)
iii) Cell Content (Static text in that cell area)
iv) Format (Hierarchy of links in that cell)
v) Images (Image file name, Image type, width, height)
vi) Links (Link name, Expected URL)
Function: win_check_gui(“Cell logical name”, “Check gui file.ckl”, “Expected value file”, Time to create);
d) Table:- It is also a non-standard object and it consists of a set of non-standard properties. But they are not sufficient to automate URL testing. These properties are used for cell coverage (i.e. Number of cells = Number of rows * Number of columns).
e) Frame:- It is also a non-standard object and it consists of a set of standard and non-standard properties. But test engineers use the non-standard properties for URL testing automation.
i) Broken links (Link name, URL, YES/NO)
ii) Count objects (Number of standard and non-standard objects)
iii) Format (Hierarchy of internal links)
iv) Frame content (Static text in that frame)
v) Images (Image file name, Image type, width, height)
vi) Links (Link name, URL)
Function: win_check_gui("Logical Frame name", "Check list file.ckl", "Expected value file", Time to create);
Note:- In general, test engineers automate URL testing at frame level. If a frame consists of a huge number of links, test engineers concentrate on cell level.
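A minimal frame-level sketch, assuming a page window named "Home Page" and previously created checklist and expected-value files "list1.ckl" and "gui1":
set_window("Home Page", 10);
# apply the GUI check point (links, broken links, format) on the whole frame
win_check_gui("Home Page", "list1.ckl", "gui1", 5);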
II) Static Text Testing:- To conduct calculations and other text based tests, test engineers use the get text option in WinRunner. This option consists of four sub options when you select the web test option in the add-in manager.
a) From Object/Window:- To capture web objects values into variables, we can use this option. The navigation is as follows:
Function:- web_obj_get_text(“Object name”, “#Cell row number”, “#Cell Column number”, variable, “Text before”, “Text after”, time to create);
Example:
Test script:
sum=0;
set_window("Rediff", 4);
tbl_get_rows_count("Mail Box", n);                   # number of rows in the mail box table
for(i=1;i<n;i++)
{
tbl_get_cell_data("Mail Box", "#"&i, "#4", s);       # size of the mail in row i, eg: "12KB"
s=substr(s,1,length(s)-2);                           # strip the trailing "KB"
sum=sum+s;
}
web_obj_get_text("Total", "#0", "#0", tot, " ", "KB", 1);   # capture the displayed total size
if(tot==sum)
tl_step(“Step”,0, “Calculation is pass”);
else
tl_step(“Step”, 1, “Calculation is fail”);
b) From Screen area:- This option is not supported for web pages. But this sub option is still present in the Create menu, because nowadays web pages are also developed using Java applets and VBScript, which are supported by WinRunner.
c) From Selection (Web only):- To capture static text from a web page, we can use this option. The navigation is as follows:
Function:- web_frame_get_text(“Frame logical name”, variable, “Text before”, “Text after”, Time to create);
Example:-
Expected: Indian Rs value = American $value * 45 + Australian $ value *35.
set_window("Shopping", 2);
web_frame_get_text(“Frame name”, x, “American $”, “as”, 1);
web_frame_get_text(“Frame name”, y, “Australian $”, “as”, 1);
web_frame_get_text(“Frame name”, z, “Rs:”, “as”, 1);
if(z == (x*45 + y*35))
tl_step(“Step”, 0, “Calculation is pass”);
else
tl_step(“Step”, 1, “Calculation is fail”);
d) Web Text check point:- We can use this option to identify the existence of text in a specified place of a web page.
Function:- web_frame_get_text(“Frame logical name”, Expected text, “text before”, “text after”);
Web Functions:-
i) web_link_click( ):- WinRunner uses this function to record a text link operation.
Syntax: web_link_click("Link Text");
ii) web_image_click( ):- WinRunner uses this function to record an image link operation.
Syntax: web_image_click("Image file name", x, y);
iii) web_browser_invoke( ):- We can use this function to open a web page during test execution.
Syntax: web_browser_invoke(IE/NETSCAPE, “URL”);
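A small sketch combining the above web functions; the URL, window name and link names are assumptions for illustration only:
# open the web site home page in Internet Explorer
web_browser_invoke(IE, "http://www.example.com");
set_window("Example Home", 10);
# navigate through a text link and an image link
web_link_click("Contact Us");
web_image_click("logo.gif", 10, 10);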
WinRunner 6.0 Vs WinRunner 7.0:-
1) Auto Learning
2) Per Test Mode
3) Web Test Function
4) Date Functions (i.e. time_str( );)
5) Run time Record check
6) Selective Recording
7) GUI spy (Whether an object is recognizable or not?)
Note: To stop spying, we can use L_Ctrl+F3.
TESTING DOCUMENTS
I) Test Policy:- It is a company level document, developed by Quality Control people (QC, i.e. almost management level).
The below abbreviations are follows:
LOC Lines of code
FP Functional Points (i.e. number of screens, inputs, outputs, queries, forms, reports)
QAM Quality Assessment Measurement
TMM Test Management Measurement
PCM Process Capability Measurement.
The above Test Policy defines the "Testing Objective". To meet that objective, Quality Analyst people define the testing approach through a Test Strategy document.
II) Test Strategy:- It is a company level document and developed by Quality Analyst (QA) people. This test strategy defines a common testing approach to be followed.
Components in Test Strategy:-
1) Scope and Objective:- About Organization, Purpose of testing and testing objective.
2) Business Issues:- Budget control for testing. Eg:
3) Testing Approach:- Mapping between testing issues and development stages (V-Model).
This testing approach is expressed in matrix form. That matrix is called the Test Responsibility Matrix (TRM) / Test Matrix (TM). It is mainly based on the development stages and the test factors. On the development stages side we have five stages, and on the test factors side we have fifteen factors. These are shown in the following figure:
Columns of the matrix (Development Stages): Information Gathering & Analysis, Design, Coding, System Testing, Maintenance.
Rows of the matrix (Test Factors): 1) Ease of Use, 2) Authorization, and so on up to the fifteen factors listed below.
Each cell is marked with an 'X' where that test factor applies in that development stage (for the Ease of Use row, for example, two stages are marked 'X' and one is noted as 'Depends on Change Request').
4) Roles and Responsibilities:- Names of jobs in testing team and their responsibilities during testing.
5) Test Deliverables:- Required testing documents to be prepared during testing.
6) Communication and Status Reporting:- Required negotiation between every two consecutive jobs in testing team.
7) Defect Reporting & Tracking:- Required negotiation between testing team and development team to track defects.
8) Testing Measurements & Metrics:- QAM, TMM, PCM.
9) Risks & Mitigations:- List of expected failures and possible solutions to overcome them during testing.
10) Training Plan:- Required training sessions to testing team to understand business requirements.
11) Change & Configuration Management:- How to handle change requests of customer during testing and maintenance.
12) Test Automation & Tools:- Required possibilities to go for automation.
Test Factors:- To define quality software, Quality Analyst people use fifteen test factors.
1) Authorization:- Whether a user is valid or not valid to connect to application?
2) Access Control:- Whether a valid user have permissions to use specific services or not?
3) Audit Trail:- Whether our application maintains Metadata about user operations or not?
4) Continuity of processing:- The integration of internal modules for control and data transmission (Integration Testing).
5) Correctness:- Meet customer requirements in terms of functionality.
6) Coupling:- Co-existence with other existing software (Inter System Testing) to share common resources.
7) Ease of use:- User friendliness of screens.
8) Ease of operate:- Installation, uninstallation, dumping (from one computer to another computer), downloading, uploading.
9) File Integrity:- Creation of back up during execution of our application (For recovery).
10) Reliability:- Recover from abnormal states.
11) Portability:- Run on different platforms.
12) Performance:- Speed in processing.
13) Service levels:- Order of functionalities.
14) Maintainable:- Whether our application build is serviceable for a long time at the customer site or not?
15) Methodology:- Whether test engineers follow standards or not during testing.
Test Factors Vs Black Box Testing Techniques
1) Authorization → Security Testing; Functionality/Requirements Testing
2) Access Control → Security Testing; Functionality/Requirements Testing
3) Audit Trail → Functionality/Requirements Testing (Error-Handling Coverage)
4) Continuity of Processing → Integration Testing (White Box Testing)
5) Correctness → Functionality/Requirements Testing
6) Coupling → Inter Systems Testing
7) Ease of use → User Interface Testing; Manual Support Testing
8) Ease of Operate → Installation Testing
9) File Integrity → Functionality/Requirements Testing; Recovery Testing
10) Reliability → Recovery Testing (one-user level); Stress Testing (peak load level)
11) Portability → Compatibility Testing; Configuration Testing (H/W)
12) Performance → Load Testing; Stress Testing; Data Volume Testing; Storage Testing
13) Service Level → Functionality/Requirements Testing; Stress Testing (peak load)
14) Maintainable → Compliance Testing
15) Methodology → Compliance Testing (whether our testing team follows testing standards or not during testing)
III) Test Methodology:- It is a project level document and developed by Quality Analyst or corresponding Project Manager(PM).
The test methodology is a refinement form of the test strategy with respect to corresponding project. To develop a test methodology from corresponding test strategy, QA/PM follows below approach.
Step 1: Acquire test strategy
Step 2: Identify Project Type
Columns (development stages): Analysis, Design, Coding, System Testing, Maintenance.
Rows (project types): 1) Traditional project, 2) Off-the-shelf project (outsourcing), 3) Maintenance project (on-site project). Each project type involves only some of these stages.
Note:- Depending on the project type, the Quality Analyst (QA) or Project Manager (PM) decreases the number of columns in the TRM (Test Responsibility Matrix), i.e. the development stages.
Step 3: Determine Project Requirements
Note: Depending on the current project version requirements, the Quality Analyst (QA) or Project Manager (PM) decreases the number of rows in the TRM, i.e. the test factors.
Step 4: Determine the scope of project requirements.
Note: Depending on expected future enhancements, the Quality Analyst (QA) or Project Manager (PM) can add some of the previously removed test factors back into the TRM.
Step 5: Identify tactical risks
Note: Depending on the analyzed risks, the Quality Analyst (QA) or Project Manager (PM) removes some of the selected rows from the TRM.
Step 6: Finalize TRM for current project, depending on above analysis.
Step 7: Prepare system test plan.
Step 8: Prepare modules test plans if required.
Testing Process:-
PET Process:- (Process Experts and Tools and Technology)
It is also a refinement form of V-Model. This model defines mapping between development process and testing process.
IV) Test Planning:- After completion of test methodology creation and finalization of required testing process, test lead category people concentrate on test planning to define “What to test?”, “When to test?”, “How to test?”, “Who to test?”.
Test plan Format:- (IEEE)
1) Test Plan_ID:- Unique number or name
2) Introduction:- About project
3) Test Items:- Modules or functions or services or features
4) Features to be tested:- Responsible modules for test designing.
5) Features not to be tested:- Which ones & why not?
6) Approach:- Selected testing techniques to be applied on above modules. (Finalized TRM by Project Manager)
7) Testing Tasks:- Necessary tasks to do before starts every feature testing.
8) Suspension Criteria:- The technological problems, raised during execution of the above features' testing, under which testing is suspended.
9) Feature pass or fail criteria:- When a feature is pass and when a feature is fail.
10) Test Environment:- Required hardwares and softwares to conduct testing on above modules.
11) Test Deliverables:- Required testing documents to be prepared during above modules testing by test engineers.
12) Staff & Training needs:- The names of selected test engineers and required training sessions to them to understand business logic. (i.e. Customer Requirement)
13) Responsibilities:- Work allocation to above selected testers, in terms of modules.
14) Schedule:- Dates and time
15) Risks & Mitigations:- Non-technical problems raised during testing and their solutions to overcome them.
16) Approvals:- Signatures of Project Manager or Quality Analyst & Test Lead.
3, 4, 5 Defines What to test?
6, 7, 8, 9, 10, 11 Defines How to test?
12, 13 Defines Who to test?
14 Defines When to test?
To develop above like test plan document, test lead follows below work bench (approach).
1) Testing Team Formation:- In general, the test plan process starts with testing team formation, which depends on the below factors:
Availability of test engineers
Possible test duration
Availability of test environment resources.
Case Study:
Test Duration:-
Client/Server, Web Applications, ERP (like SAP) → 3-5 months of System Testing.
System Software (Networking, compilers, hardware related projects) → 7-9 months of System Testing.
Mission Critical (like satellite projects) → 12-15 months of System Testing.
Team Size:- Team size is based on the number of developers and is expressed as a ratio, i.e.
Developers : Testers = 3 : 1
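Eg: If a project has 12 developers, the testing team size works out to about 12 / 3 = 4 test engineers (the exact ratio varies from company to company).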
2) Identify Tactical Risks:- After completion of testing team formation, the test lead concentrates on risk analysis or root-cause analysis.
Examples:-
Risk 1: Lack of knowledge of test engineers on that domain (mitigation: training sessions for the test engineers).
Risk 2: Lack of budget (i.e. time).
Risk 3: Lack of resources (bad testing environment in terms of facilities).
Risk 4: Lack of test data (improper documents; the mitigation is ad-hoc testing, i.e. based on past experience).
Risk 5: Delays in delivery (in terms of job completion; the mitigation is working overtime).
Risk 6: Lack of development process rigor (rigor means seriousness).
Risk 7: Lack of communication.
3) Prepare Test Plan:- After completion of testing team formation and risk analysis, the test lead prepares the test plan document in IEEE format.
4) Review Test Plan:- After completion of test plan document preparation, test lead conducts reviews on that document for completeness and correctness. In this review, test lead applies coverage analysis.
Requirements based coverage (What to test?)
Risks based coverage (Who & When to test?)
TRM based coverage (How to test?)
V) Test Design:- After completion of test planning and the required training sessions for the testing team, the test design stage starts. In this stage, test engineers prepare test cases for their responsible modules through three test case design methods.
a) Business logic based test case design
b) Input domain based test case design
c) User interface based test case design
a) Business logic based test case design:- In general, test engineers prepare test cases depending on the use cases in the S/wRS.
Every use case in the S/wRS describes how to use a functionality. These use cases are also known as functional specifications (FS).
From the above model, every test case describes a test condition to be applied. To prepare test cases depending on use cases, test engineers follow the below approach.
Step 1: Collect all required use cases of responsible module
Step 2: Select a use case and their dependencies from that collected list.
Eg: Determinant module: Login; Dependent modules: Mail, Log out.
2.1: Identify the entry condition (base state). Eg: the first operation (entering the User-Id) in login.
2.2: Identify the inputs required (test data).
2.3: Identify the exit condition (end state). Eg: the last operation in login.
2.4: Identify the outputs and outcome. (Output means a value; outcome means a change in process state.)
2.5: Study the normal flow (navigation or procedure).
2.6: Study the alternative flows and exceptions.
Step 3: Prepare test cases depending on above collected information from use case.
Step 4: Review those test cases for completeness and correctness.
Step 5: Go to Step 2 until the study of all use cases is complete.
Use Case 1: A login process allows user_id and password to authorize users. User_id allows alphanumerics in lower case, 4 to 16 characters long. Password allows alphabets in lower case, 4 to 8 characters long.
Test Case 1: Successful entry of User_id
Test Case 2: Successful entry of password
Test Case 3: Successful login operation.
Use Case 2:- An insurance application allows users to select different types of policies. From the use case, when a user selects type B insurance, the system asks for the age of the customer. The age value should be greater than 18 years and less than 60 years.
Test case 1: Successful selection of policy type B insurance
Test case 2: Successful focus to age, when you selected type B insurance
Test case 3: Successful entry of age
Use Case 3:- In a shopping application, users can place different types of item purchase orders. From the purchase order use case, the user selects an item number and enters a quantity up to 10. After filling the inputs, the system returns the price of one item and the total amount with respect to the quantity.
Test case 1: Successful selection of item number
Test case 2: Successful entry of quantity
Test case 3: Successful calculation with Total = Price * Quantity
Use Case 4:- Prepare test cases for a computer shutdown
Test case 1: Successful selection of shut down operation using start menu
Test case 2: Successful selection of shutdown option using alt+F4
Test case 3: Successful shutdown operation
Test case 4: Unsuccessful shutdown operation due to a process in running
Test case 5: Successful shutdown operation through power off.
Use Case 5:- A door opens when a person comes in front of the door, and that door closes when the person goes inside.
Test case 1: Successful open of door, when a person is in front of the door.
Test case 2: Successful door closing due to absence of the person.
Test case 3: Successful door closing when a person goes inside.
Test case 4: Unsuccessful door closing due to person standing at middle of the door.
Use Case 6:- Prepare test cases for money with drawl from ATM, with all rules and regulations.
Test case 1: Successful insertion of card
Test case 2: Unsuccessful card insertion due to wrong angle
Test case 3: Unsuccessful card insertion due to invalid account. EG: Time expired or other bank card
Test case 4: Successful entry of PIN number.
Test case 5: Unsuccessful operation due to wrong PIN number enter 3 times
Test case 6: Successful selection of language
Test case 7: Successful selection of account type
Test case 8: Unsuccessful selection due to wrong accounts type selection with respect to that corresponding card.
Test case 9: Successful selection of with-drawl operation
Test case 10: Successful entry of amount
Test case 11: Unsuccessful operation due to wrong denominations. (Test box Oriented)
Test case 12: Successful with-drawl operation (Correct amount, right receipt & possibility of card come back)
Test case 13: Unsuccessful with-drawl operation due to amount greater than possible balance.
Test case 14: Unsuccessful with-drawl operation due to lack of amount in ATM
Test case 15: Unsuccessful with-drawl operation due to server down
Test case 16: Unsuccessful operation due to amount greater than day limit (Including multiple transactions also)
Test case 17: Unsuccessful operation due to click cancel, after insert card
Test case 18: Unsuccessful operation due to click cancel after insert card, enter PIN number
Test case 19: Unsuccessful operation due to click cancel after insert card, enter PIN number, selection of language
Test case 20: Unsuccessful operation due to click cancel after insert card, enter PIN number, selection of language, Selection of account type
Test case 21: Unsuccessful operation due to click cancel after insert card, enter PIN number, selection of language, Selection of account type, selection of with-drawl
Test case 22: Unsuccessful operation due to click cancel after insert card, enter PIN number, selection of language, Selection of account type, selection of with-drawl, after entering amount
Test case 23: Number of transactions per day.
Use Case 7:- Prepare test cases for washing machine operation
Test case 1: Successful power supply
Test case 2: Successful door open
Test case 3: Successful water supply
Test case 4: Successful dropping of detergent
Test case 5: Successful clothes filling
Test case 6: Successful door closing
Test case 7: Unsuccessful door close due to clothes over flow
Test case 8: Successful washing setting selection
Test case 9: Successful washing operation
Test case 10: Unsuccessful washing operation due to lack of water
Test case 11: Unsuccessful washing operation due to clothes over load
Test case 12: Unsuccessful washing operation due to improper power supply
Test case 13: Unsuccessful washing due to wrong settings
Test case 14: Unsuccessful washing due to machine problems
Test case 15: Successful dry clothes
Test case 16: Unsuccessful washing operation due to water leakage from door
Test case 17: Unsuccessful washing operation due to door opened in the middle of the process
Use Case 8:- An E-Banking application allows users to connect through an internet connection. To connect to the bank server, our application accepts values for the below fields.
Password: 6- digits number
Area code: 3-digits number/ blank
Prefix: 3-digits number but does not start with “0” & “1”
Suffix: 6-digit alphanumeric
Commands: Cheque deposit, money transfer, bills pay, mini statement
Test case 1: Successful entry of password
Test Case 2: successful entry of area code
Test case 3: Successful entry of prefix
Test case 4: Successful entry of suffix
Test case 5: Successful selection of commands such as cheque deposit, money transfer, bills pay and mini statement.
Test case 6: Successful connection to bank server with all valid inputs.
Test case 7: Successful connection to bank server without filling the area code.
Test case 8: Unsuccessful connection to bank server when any field other than area code is not filled.
Test case Format (IEEE):-
1) Test case-Id: Unique number or name
2) Test case name: The name of test condition
3) Feature to be tested: Module or function name (To be tested)
4) Test Suit-Id: The name of test batch, in which this case is a member
5) Priority: Importance of test cases in terms of functionality
P0 Basic functionality
P1 General Functionality (I/P domain, Error handling, Compatibility, Inter Systems, Configuration, Installation…)
P2 Cosmetic Functionality (Eg: User Interface Testing)
6) Test Environment: Required hardwares & softwares to execute this case
7) Test Effort (Person per hour): Time to execute this test case (Eg: Average time to execute a test case is 20 minutes)
8) Test Duration: Date and time to execute this test case after receiving build from developers.
9) Test Setup: Necessary tasks to do before starts this test case execution.
10) Test Procedure: A step by step process to execute this test case
(The above fields are filled during test design; the result fields are filled during test execution.)
11) Test case Pass/Fail Criteria: When this case is pass & when this case is fail.
Note: In general, test engineers create the test case document with the step by step procedure only. They try to remember the remaining fields for further test execution.
Case Study 1:- Prepare test case document for “Successful file save” in notepad.
1) Test case-Id: Tc_save_1
2) Test case Name: Successful file save
3) Test Procedure:
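A possible step-by-step procedure is sketched below; the exact step format (step number, action, expected result) depends on the company's test case template.
Step 1: Open Notepad and type some text. Expected: the text appears in the editor.
Step 2: Select File > Save. Expected: the "Save As" dialog appears.
Step 3: Enter a valid file name and click Save. Expected: the dialog closes, the file name appears in the Notepad title bar and the file exists on disk.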
Case Study 2:- Prepare test case document for “Successful Mail Reply”.
1) Test case-Id: Tc_Mail_Reply_1
2) Test case name: Successful Mail Reply
3) Test Procedure:
b) Input domain based test case design:- Sometimes test engineers prepare some of the test cases depending on the design documents, in addition to the use case based test cases.
Eg: input domain test cases, because use cases describe functionality and are not responsible for the size and type of inputs.
Due to the above reason, test engineers study the data models in the low level design documents to collect complete information about the size and type of every input object.
EG: E-R diagrams
During the study of the data model, test engineers follow the below approach:
Step 1: Collect data models of responsible modules
Step 2: Study every input attribute in terms of size, type & constraints
Step 3: Identify critical attributes, which are participating in internal manipulations
Step 4: Identify non-critical attributes, which are just input/output type
Eg: A/c No, A/c Name (critical attributes); Balance, Address (non-critical attributes).
Step 5: Prepare data matrices for every input attribute
Data Matrix
Note: If a test case covers an operation, test engineers prepare a step by step procedure for that test case. If a test case covers an object, test engineers prepare a data matrix like table.
For example: Login is an operation, whereas entering User_Id and Password are objects.
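As an illustration, a data matrix for an Amount field that accepts values from 1500 to 100000 (as in the case study below) could look like this; the column layout is a common convention, not a fixed format.
Input object | Valid values | Invalid values | Minimum boundary | Maximum boundary
Amount | numbers from 1500 to 100000 | alphabets, special characters, blank | 1499 (invalid), 1500, 1501 | 99999, 100000, 100001 (invalid)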
Case Study: A bank automation application allows fixed deposit operation from bank employees. This fixed deposit form allows below fields as inputs.
Depositor name: Alphabets in lower case with initcap
Amount : 1500 to 100000
Tenure(Time to deposit) : Up to 12 months
Interest : Numeric with decimal point
From the fixed deposit operation use case, if the tenure is greater than 10 months, then the interest must also be greater than 10%.
Prepare test case document for above scenario.
Test case 1:-
1) Test case_Id: Tc_Fd_1
2) Test case name: Successful entry of depositor name
3) Data matrix
Test case 2:
1) Test case_Id: Tc_Fd_2
2) Test case Name: Successful entry of amount
3) Data Matrix
Test case 3:
1) Test case_Id: Tc_Fd_3
2) Test case Name: Successful entry of tenure
3) Data Matrix
Test case 4:
1) Test case_Id: Tc_Fd_4
2) Test case Name: Successful entry of interest
3) Data Matrix
Test case 5:
1) Test case_Id: Tc_Fd_5
2) Test case Name: Successful fixed deposit operations
3) Test Procedure:
Test case 6:
1) Test case_Id: Tc_Fd_6
2) Test case Name: Successful fixed deposit operation when tenure is greater than 10months & interest also greater than 10%
3) Test Procedure:
Test case 7:
1) Test case_Id: Tc_Fd_7
2) Test case Name: Unsuccessful fixed deposit operation without filling all fields
3) Test procedure
c) User Interface based Test Case Design: To prepare test cases for usability testing, test engineers depend on the user interface conventions (rules) of our organization, global user interface rules (eg: the MicroSoft six rules) and the interests of customer site people.
Example Test cases:-
Test case 1: Spelling check
Test case 2: Graphics check (alignment, font, style, color and the other MicroSoft six rules)
Test case 3: Meaningful error messages
Test case 4: Accuracy of data displayed (whether it reflects reality or not).
Test case 5: Accuracy of data in the database as a result of user input
Test case 6: Accuracy of data in the database as a result of external factors. Eg: imported files.
Test case 7: Meaningful help messages.
Note: Test case 1 to test case 6 indicates User Interface Testing, test case 7 indicates Manual Support Testing.
Test Cases Selection Review:- After completion of test case design, test engineers submit the test cases for their responsible modules to the test lead for completeness and correctness checking.
In this review, test lead follows coverage analysis
Business Requirement based coverage
Use Case based coverage
Data Model based coverage
User Interface based coverage
TRM based coverage
At the end of this review, the test lead prepares the Requirements Traceability Matrix (RTM), also known as the Requirements Validation Matrix (RVM); this matrix defines the mapping between test cases and customer requirements.
VI) Test Execution:- After completion of test design and the corresponding reviews, the testing team concentrates on the build release from the development team.
1) Levels of test execution:-
2) Test Execution Levels Vs Test Cases:-
Level-0 all P0 Test cases
Level-1 all P0, P1 & P2 test cases as batches
Level-2 Selected P0, P1 & P2 test cases with respect to modifications
Level-3 Selected P0, P1 & P2 test cases with respect to high bug density modules.
3) Test Harness:- (Test Frame Work)
It means that “ready to start testing”.
Test Harness = Test Environment + Test Bed
(Test Environment means the hardware and software required for testing, whereas Test Bed means the testing documents.)
4) Build Version Control:- During test execution, test engineers receive builds from developers in the below model (in general).
From the above model, test engineers download the application from the softbase on the server system, through networking protocols (eg: FTP, File Transfer Protocol).
To distinguish builds, development people use a unique version numbering system for the builds. This numbering system is understandable to the testing team, to distinguish the old build and the modified build.
For this build version controlling, development people also use version control tools (i.e. Microsoft Visual SourceSafe to maintain old code and modified code).
5) Level-0:- (Sanity Testing)
In general, test execution starts with sanity testing, to decide whether the build released by the development team is stable enough for complete testing to be applied or not. During this sanity testing, the testing team concentrates on the below factors to verify.
Understandable
Operatable
Observable
Controllable
Simplicity
Maintainable
Consistency
Automatable
(These eight are the testability factors.)
When an application build satisfies the above factors successfully, test engineers conduct complete testing on that build; otherwise test engineers reject that build back to the developers. This type of Level-0 testing is also known as Tester Acceptance Testing / Build Verification Testing / Testability Testing / Octangle Testing.
6) Test Automation:- After receiving a stable build from developers, the testing team concentrates on test automation if possible. In this test automation, test engineers conduct recording and check point insertion to create automated test scripts.
Test automation is of two types: Complete Automation and Selective Automation (all P0 test cases and carefully selected P1 test cases).
From the above model, test engineers do not automate some of the P1 test cases and any of the P2 test cases, because they are not repeatable and are easy to apply manually.
7) Level-1 (Comprehensive Testing):- After receiving a stable build and completing possible automation, test engineers concentrate on real test execution as batches. Every test batch consists of a set of dependent tests (i.e. the end state of one test is the base state of the next test). Test batches are also known as Test Suites or Test Sets.
During this test batch execution, test engineers prepare a Test Log document. This document consists of three types of entries.
Passed: all expected values are equal to the actual values.
Failed: any one expected value varies from the actual value.
Blocked: postponed because the parent functionality is wrong.
Level-1 (Comprehensive Test Cycle)
8) Level-2 (Regression Testing):- During Level-1 test execution, test engineers report mismatches to developers. After receiving the modified build from them, the testing team concentrates on regression testing to ensure the completeness and correctness of that modification.
Severity of the bug resolved by developers, and the test cases re-executed on the modified build:
High → all P0 test cases, all P1 test cases, carefully selected P2 test cases.
Medium → all P0 test cases, carefully selected P1 test cases, some P2 test cases.
Low → some P0 test cases, some P1 test cases, some P2 test cases.
Case 1: If development team resolved bug severity is high, then test engineers are re-executing all P0, all P1 and carefully selected P2 test cases on that modified build.
Case 2: If development team resolved bug severity is medium, then test engineers are re-executing all P0, carefully selected P1, and some of P2 test cases.
Case 3: If development team resolved bug severity is low, then test engineers are re-executing some of P0, P1 and P2 test cases.
* Case 4: If the testing team receives a modified build from developers due to sudden changes in customer requirements, then test engineers re-execute all P0, all P1 and carefully selected P2 test cases with respect to those requirement changes.
VII) Test Reporting:- During test execution, test engineers are reporting mismatches to developers through an IEEE defect report.
IEEE Format:-
1) Defect_Id: Unique number or name
2) Description: Summary of that defect.
3) Build version_Id: Current build version, in which above defect raised.
4) Feature: In which module of that build, you found that defect.
5) Test case Name: Corresponding failed test condition, which returns above defect.
6) Reproducible: Yes/No
Yes → if the defect appears every time during test repetition
No → if the defect does not appear every time (i.e. appears rarely) during test execution.
7) If yes, Attach Test Procedure:
8) If No, Attach snap shot and strong reasons:
9) Severity: Seriousness of defect with respect to functionality
High → not able to continue the remaining testing before solving that defect.
Medium → mandatory to solve, but able to continue the remaining testing before solving that defect.
Low → may or may not be solved.
10) Priority: The importance of defect to solve with respect to customer (high, medium, low)
11) Status: New/ reopen
New → the test engineer is reporting the defect to the developers for the first time.
Reopen → the defect is being re-reported a second time.
12) Reported by: Name of test engineer.
13) Reported on: Date of submission
14) Assigned to: The name of responsible person at development side to receive that defect.
15) Suggested fix: Possible reasons to accept and resolve that defect.
________________________________________________________________________
(By developers)
16) Fixed bug: (Accepted or rejected) Project Manager or Project Lead
17) Resolved bug: Developer
18) Resolved on: Date of solving
19) Resolution type: Type of solution
20) Approved by: The signature of Project Manager
Defect Age: The time gap between “resolved on” and “reported on”.
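Eg: If a defect is reported on 10th March and resolved on 14th March, the defect age is 4 days.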
Defect Submission: (i.e. Process) The defect submission process is different for large scale organizations and small scale organizations.
Defect Statuses:
Bug Life Cycle:
Resolution Type: (Received by test engineers from developers)
There are twelve resolution types, used by developers to respond to test engineers.
1) Duplicate, rejected because this defect is the same as a previously reported defect.
2) Enhancement, rejected because this defect relates to future requirements of the customer.
3) Software Limitation, rejected because this defect arises from limitations of the software technologies.
4) Hardware Limitation, rejected because this defect arises from limitations of the hardware.
5) Not applicable, rejected due to improper meaning of the defect.
6) Functions as designed, rejected because the coding is correct with respect to the design documents.
7) Need more information, neither accepted nor rejected, but the developers require extra information to fix it.
8) Not reproducible, neither accepted nor rejected, but the developers require the correct procedure to reproduce that defect.
9) No plan to fix it, neither accepted nor rejected, but the developers require extra time to fix it.
10) Fixed, accepted and ready to resolve.
11) Fixed indirectly, accepted but postponed to a future version (i.e. deferred).
12) User misunderstanding, needs extra negotiation between developers and test engineers.
* Types of defects (bugs):-
1) User Interface bugs (Low Priority)
Ex1: Spelling mistake (High priority based on customer requirements)
Ex2: Improper right alignment (Low priority)
2) Input domain bugs (Medium Severity)
Ex1: Does not allow a valid type (High priority)
Ex2: Allows an invalid type also (Low priority)
3) Error handling bugs (Medium Severity)
Ex1: Does not return error message (High Priority)
Ex2: Incomplete meaning of error message (Low Priority)
4) Calculations bugs (High Severity)
Ex1: Dependent outputs are wrong (High priority)
Ex2: Final output is wrong (Low priority)
5) Race Condition bugs (High Severity)
Ex1: Dead lock (High Priority, Show Stopper)
Ex2: Does not run on expected platforms (Low priority)
6) Hardware bugs (High Severity)
Ex1: Device is not responding (High priority)
Ex2: Wrong output device (Low Priority)
7) Load condition bugs (High Severity)
Ex1: Does not allow multiple users (High Priority)
Ex2: Does not support the customer expected load (Low priority)
8) ID_Control bugs (Medium Severity)
Ex: Wrong logo, logo missing, wrong version number, version number missing, copy right window missing, tester name missing etc.
9) Version controlled bugs (Medium Severity)
Ex: Mismatches between two consecutive build versions.
10) Source bugs (Medium Severity)
Ex: Mistakes in help documents.
VIII) Test Closure:- After completion of all possible test case execution and bug solving, the test lead conducts a test closure review meeting to estimate the completeness and correctness of the test execution process.
In this review, test lead follows below factors:
1) Coverage Analysis
Business requirement based coverage (BRS)
Use cases based coverage (S/wRS)
Data Model based coverage (Design Documents based)
User Interface based coverage
TRM based coverage (PM)
2) Bug Density
Example:
Module Name      % of bugs found
A                20%
B                20%
C                40%
D                20%
Total            100%
In the above example, the greatest number of bugs was found in Module-C, so there is a need for further regression testing on that module.
3) Analysis of deferred bugs: whether the corresponding deferred bugs are postponable or not?
At the end of this test closure review meeting, the test lead concentrates on Level-3 testing (i.e. Post Mortem Testing / Final Regression Testing / Pre-Acceptance / Release Testing).
The below figure gives a brief idea:
IX) User Acceptance Testing:- After completion of the final regression, project management concentrates on User Acceptance Testing to collect feedback from customer site people. There are two approaches to conduct this testing.
1) Alpha Testing
2) Beta Testing
These are explained in previous topic.
X) Sign Off:- After completion of User Acceptance Testing and the corresponding modifications, the test lead prepares the Final Test Summary Report (FTSR). This report is a part of the Software Release Note (S/wRN). This Final Test Summary Report consists of the below documents as members.
The final bugs summary report has the below format:
Case Study 1: (Schedule for five months of Testing Process)
Deliverable Responsibility Completion Time
1) Test cases Selection Test Engineers 20-30 days
2) Test cases Selection Review Test Lead & Test Engineers 4-5 days
3) Requirements Test Lead 1-2 days
4) Level-0 Test automation Test Engineers 10-20 days
5) Level-1 & Level-2 Test Engineers 40-60 days
6) Communication & Status Reporting Test Lead & Test Engineers Weekly twice
7) Defect Reporting & Tracking Test Lead & Test Engineers On Going
8) Test Closure & Final Regression Test Lead & Test Engineers 4-5 days
9) User Acceptance Testing Customer site people with involvement of testing team 4-5 days
10) Sign Off Test Lead 1-2 days
Case Study 2:-
1) What type of testing you are doing?
2) What type of testing process is going on at your company?
3) What type of documents will you prepare for testing?
4) What is your involvement in that document?
5) How will you select reasonable test to be applied on a project?
6) When will you go to automation?
7) What methods will you follow to prepare test cases?
8) What are the key components in your company test plan document?
9) What is your company test case format?
10) What is the meaning of Regression Testing and when will you do this?
11) What is the difference between error, defect and bug?
12) How will you report defects in your company?
13) What are the characteristics of the defect to define?
14) Explain bug life cycle?
15) How will you know whether your reported defect is accepted or rejected?
16) What do you do when your reported defect is rejected by developers?
17) What is the difference between defect age and build interval period?
18) What do you do to conduct testing on an unknown project?
19) What do you do to conduct testing on an unknown project without documents?
20) What are the differences between the V-Model and the Life Cycle / Waterfall / SDLC model?
TestDirector:
Developed by Mercury Interactive
Test Management Tool
Used to store testing documents into a database
This tool works as a two-tier application
I) Project Admin: This part is used by the test lead to create a database for a new project's testing documents and to estimate the test status of on-going project testing.
a) Create Database: The test lead creates a database for the testing documents to be stored. By default, the TestDirector tool maintains MS-Access technology.
Navigation:
Note: In general, the TestDirector tool creates a new project database with 26 tables.
b) Estimate Test Status: TestDirector allows the test lead to estimate the test status of an on-going project.
Navigation:
II) TestDirector:- This part is used by test engineers to store their testing documents in the database created by the test lead.
Navigation:
i) Plan Tests: This part is used by test engineers to store the test cases of their responsible modules.
a) Create Subject:-
b) Create Sub Subject:-
c) Create Test Case:- After completion of subject and sub subject creation, the test engineer prepares reasonable test cases under the corresponding sub subjects.
d) Details:- After creating a test case under the corresponding subject, the test engineer maintains the required details for that case in the given text area, such as Test case_ID, Test Suite_ID, Priority, Test Set Up, Test Environment, Test Duration, Test Effort, Test Case Pass/Fail Criteria.
e) Design Steps:- After entering the details, the test engineer prepares the test procedure or data matrix for that test case.
Navigation:
f) Test Script:- After preparing the design steps for that test case, test engineers conduct test automation when the test case type is "WR Automated".
Navigation:
g) Attach:- It is an optional part, used by test engineers to attach any extra files related to that test case.
ii) Run Tests: TestDirector provides a facility to execute a set of dependent tests as batches.
a) Create Test Batch:
b) Execute Automated Test:-
c) Execute Manual Tests:-
iii) Track Defects:- TestDirector provides a facility to report test execution mismatches to the development team, through a mailing facility.
ICONS:-
1) Filter Icon:- To select specific defects/ bugs and tests on desktop, we can use this option
To delete Filters, we can use “Clear” icon.
2) Sort Icon:- We can use this option to arrange defects or tests in specified order.
3) Report Icon:- To create hard copies for defects and tests, we can use report icon.
4) Test Grid:- This icon list out all test cases under all sub subjects and subjects.
From the above testing process, performance testing is mandatory for multi-user applications, eg: websites and networking applications.
But manual load testing is expensive for finding the performance of an application under load. Due to this reason, test engineers plan to apply load test automation, eg: LoadRunner, SilkPerformer, SQALoadTest and JMeter.
LoadRunner 6.0:-
Developed by Mercury Interactive
Load Testing Tool to estimate performance
Supports Client/Server, Web, ERP and Legacy technologies (C, C++, COBOL…) for load testing
Creates a virtual environment to decrease testing cost.
Virtual Environment:-
RCL:- Remote Command Launcher; it converts a local request into a remote request.
VUGEN:- Virtual User Generator; it creates multiple virtual user requests depending on one real remote request.
Portmapper:- It submits all virtual user requests to a single server process port.
CS:- Controller Scenario; it returns performance results during the execution of the server process while it responds to multiple virtual user requests.
Time Parameters:- To measure the performance of software applications, we can use below time parameters:
1) Elapsed Time: The total time for request transmission, processing in the server and response transmission. This time is also known as Turn Around Time (or) Swing Time.
2) Response Time: The time to get the first response (acknowledgement) from the server process after the request is submitted.
3) Hits per second: The number of web requests received by web server in one second of time.
4) Throughput: The speed of web server to respond to that web request in one second of time. (KB/Sec).
Note: We can use last two time parameters to estimate performance of web applications.
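Eg: If a web server receives 40 page requests in one second and returns 600 KB of response data in that second, then hits per second = 40 and throughput = 600 KB/sec.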
I) Client/Server Load Testing:- LoadRunner allows you to conduct load testing on multi-user two-tier applications, eg: a project in VB-Oracle or Java-Oracle. In this load testing, test engineers use the below components:
Customer expected configured server computer
Client/Server master build (Usability and Functionality testing is completed on that build)
Remote Command Launcher
Virtual User Generator
Portmapper
Controller Scenario
Database Server
Test cases for Client/Server Load Testing:-
Navigation:
Transaction Point: LoadRunner is not able to return performance results when no transaction is found in the VUser script. Due to this reason, test engineers insert transaction points to enclose the required operations as the actions part.
Navigation:
Rendezvous Point:- It is an interrupt point in VUser script execution. This point stops the current VUser's execution until the remaining VUsers have also executed up to that point.
Navigation:
Analyze result: LoadRunner returns performance results through a percentile graph.
Formula:-
Increase Load:- (Stress Testing)
LoadRunner allows you to increase load for existing transactions.
Performance Results Submission:- During load and stress testing, the testing team submits performance results to project management in the below format.
Benchmark Testing:- After receiving the performance results from the testing team, the project manager decides whether the corresponding values indicate good or bad performance.
In this benchmarking, the project manager compares the current performance values depending on the below factors.
Performance results of old version
Interest of customer site people
Performance results of competitive products in market
Interest of product managers
If the current performance is not good, then the development team concentrates on changes to the structure of the application or on improving the configuration of the environment.
Mixed Operations:- LoadRunner allows you to conduct testing on different operations under different loads.
EG:- Select operation with load 10 and Update operation with load 10.
Navigation:
Note1: In this multiple groups execution, test engineers maintain the same name for the Rendezvous point across groups.
Note2: One group's VUsers wait at the Rendezvous point until the remaining groups' VUsers come to the same point.
Note3: LoadRunner maintains 30 seconds as the maximum time gap between two consecutive groups.
*Note4: In general, test engineers maintain 25 VUsers per group to get accurate performance results.
II) Web Load Testing:- LoadRunner allows you to conduct load testing on three-tier (web) applications also. In this web load testing, test engineers use the below components to create the test environment:
Customer expected configured Server computer
Web master build (Project)
Remote Command Launcher
Portmapper
Virtual User Generator
Controller Scenario
Browser (IE/Netscape)
Web Server
Database Server
Test cases for Web Load Testing:
1) URL Open
2) Text Link
3) Image Link
4) Form Submission
5) Data Submission
1) URL Open:- It emulates opening a web site home page under load.
Function: web_url("Step name", "URL=path of home page", "TargetFrame=", LAST);
2) Text Link:- It emulates opening a middle page through a text link under load.
Function: web_link("Link text", "URL=path of next page", "TargetFrame=", LAST);
3) Image Link:- It emulates opening a middle page through an image link under load.
Function: web_image("Image file name", "URL=path of next page", "TargetFrame=", LAST);
4) Form Submission:- It emulates submitting form data to the web server under load.
Function: web_submit_form("Form name", "attributes", "hidden fields", ITEMDATA, "field values", ENDITEM, LAST);
EG: web_submit_form("Login", "Method=GET", "Action=http://localhost/dir/login.asp", ITEMDATA, "Name=User_ID", "Value=xxxx", ENDITEM, "Name=Pwd", "Value=xxx", ENDITEM, "Name=Sysdate", "Value=xxx", ENDITEM, LAST);
5) Data Submission:- It emulates submitting formless (no form on the desktop) or context-less data to the web server under load.
Function: web_submit_data("Step name", "attributes", ITEMDATA, "field values", ENDITEM, LAST);
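As an illustrative sketch (the URL and field names are assumptions, not from the original notes), a web_submit_data call could look like:

web_submit_data("Login",
    "Action=http://localhost/dir/login.asp",   /* target of the formless submission */
    "Method=POST",
    "TargetFrame=",
    ITEMDATA,
    "Name=User_ID", "Value=xxxx", ENDITEM,      /* each field as a Name/Value pair    */
    "Name=Pwd",     "Value=xxx",  ENDITEM,
    LAST);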
Note 1:- In web load testing, test engineers select Web (HTTP/HTML) under E-Business as the VUser type.
Note 2:- In web load testing, LoadRunner treats one Action as one transaction by default. To record the above VUser script statements, we can follow the below navigation.
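Putting the above functions and notes together, a minimal web VUser Action sketch could look as follows (the URLs and transaction names are illustrative assumptions):

Action()
{
    lr_start_transaction("home_page");
    web_url("Home", "URL=http://localhost/dir/index.asp", "TargetFrame=", LAST);   /* URL open under load */
    lr_end_transaction("home_page", LR_AUTO);

    lr_start_transaction("login_page");
    web_url("Login", "URL=http://localhost/dir/login.asp", "TargetFrame=", LAST);  /* middle page under load */
    lr_end_transaction("login_page", LR_AUTO);

    return 0;
}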
Analyze Results:- During web load testing, LoadRunner returns two extra time parameters to analyze results.
a) Hits Per Second:-
b) Throughput:-
Performance Results Submission:- During web load testing, test engineers report performance results to the project manager in below format:
Scenario (URL, Link, Image, Submit form, Data submit) | Load | Transaction Time (sec) | Throughput (KB/sec)
URL | 10 | 2 | 118
URL | 15 | 2 | 140
URL | 25 | 2 | 187
URL | 30 (Peak Load) | 3 | 187
Benchmarking:- From World Wide Web Consortium standards, link operations take 3 seconds and data-related operations take 12 seconds under normal load.
QTP (Quick Test Professional):-
Developed by Mercury Interactive
Functionality testing tool like WinRunner
Derived from the WinRunner testing tool concept
Records our business operations in VBScript instead of TSL
QTP supports .NET, SAP, PeopleSoft, Oracle Applications, XML & multimedia technologies in addition to what WinRunner supports
Test Process in QTP:-
Note:- QTP records our manual operations in VBScript. It is an object-based scripting language also used for website development.
Recording:- QTP records our manual operations in three modes such as:
a) General recording
b) Analog recording
c) Low level recording
a) General Recording:- It is the default mode in QTP. In this mode QTP records mouse and keyboard operations with respect to objects and windows, like the Context Sensitive mode in WinRunner.
To select this mode we can follow any one of the below options:
Click the start recording icon once
(or)
Test Menu -> Record
(or)
F3 as shortcut key
b) Analog Recording:- To record mouse pointer movements on the desktop, we can use Analog mode.
EG: To record digital signatures, graph drawings, image movements etc. To select this analog recording, we can use
Test Menu -> Analog Recording (OR) Ctrl + Shift + F4
c) Low Level Recording:- In this mode QTP records mouse pointer movements on the desktop with respect to time. To select low level recording mode we can use
Test Menu -> Low Level Recording (OR) Ctrl + Shift + F3
Check Points:- QTP allows us to insert check points into the automated test script to conduct functionality testing on the build.
a) Standard Check Point:- To verify properties of objects, like GUI check points in WinRunner, we can use this check point in QTP.
QTP allows Excel sheet column values as parameter data.
Note: A QTP check point allows one object at a time.
b) Bitmap Check Point:- QTP allows you to compare static and dynamic images in our application build. To create a check point on dynamic images, we can select the Multimedia option in the Add-In Manager. In this dynamic image testing, QTP supports a maximum of 10 seconds of play time for images.