Tuesday, 24 July 2012

Important Terminologies in Software Testing



Traceability Matrix
Traceability Matrix / Requirement Traceability Matrix (RTM): A traceability matrix is used to keep track of requirements. It maps requirements to test cases, and we prepare it to identify missing test cases. It is prepared either by the test lead or by a test engineer together with the test lead. The exact requirements from the requirement document supplied by the client are copied into this matrix. Each requirement is assigned a unique number and a remark stating whether or not it is testable. Against each testable requirement, test objectives and test cases are identified; it is quite possible that one requirement maps to multiple test objectives and test cases. Each test objective and test case is assigned its own unique number, and the numbering usually flows Requirement Id >> Test Objective Id >> Test Case Id.
Advantages:
a. We can trace missing test cases.
b. Whenever a requirement changes, we can easily refer to the matrix, change the use case, go to the corresponding test cases and change them.
c. It is easy to test any functionality: we only need to refer to the matrix to reach the related test cases.
d. We can trace the impact of functionalities on one another, because different functionalities can share the same test cases.
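To make the Requirement Id >> Test Objective Id >> Test Case Id numbering concrete, here is a minimal sketch of an RTM held as a Python dictionary; the REQ/TO/TC identifiers and the structure are invented purely for illustration, not taken from any real project.

```python
# A minimal, hypothetical Requirement Traceability Matrix (RTM).
# Each requirement maps to test objectives, and each objective maps to test cases.
rtm = {
    "REQ-001": {
        "testable": True,
        "objectives": {
            "TO-001.1": ["TC-001.1.1", "TC-001.1.2"],  # one requirement, many test cases
            "TO-001.2": ["TC-001.2.1"],
        },
    },
    "REQ-002": {"testable": True, "objectives": {}},   # testable but no test cases yet
    "REQ-003": {"testable": False, "objectives": {}},  # marked not testable
}

# Advantage (a): trace missing test cases - testable requirements with no coverage.
missing = [
    req_id for req_id, entry in rtm.items()
    if entry["testable"] and not any(entry["objectives"].values())
]
print("Requirements missing test cases:", missing)  # -> ['REQ-002']
```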

Bug Density

1. Bug Density: Bug density is the number of bugs found per 1,000 lines of code. Every organization sets this baseline to its own requirements and needs; it can be 100 lines or any other number, based on the scale of the project.
2. What is defect density? Defect density is the number of defects per unit size of code. It is a metric equal to the ratio of the number of defects to the number of lines of code.
Defect Density = Defects / unit size
DD = Total Defects / KLOC (kilo lines of code)
Example: suppose 10 bugs are found in 1 KLOC; the defect density is therefore 10 defects per KLOC.
3. What is a defect matrix? The time at which defects were discovered relative to when they were inserted into the software.
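As a quick illustration of the DD = Total Defects / KLOC formula above, here is a small Python sketch; the function name is invented and the sample figures are just the example numbers from the text.

```python
def defect_density(total_defects: int, lines_of_code: int) -> float:
    """Return the defect density in defects per KLOC (thousand lines of code)."""
    kloc = lines_of_code / 1000
    return total_defects / kloc

# Example from the text: 10 bugs found in 1 KLOC.
print(defect_density(total_defects=10, lines_of_code=1000))  # -> 10.0 defects per KLOC
```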

System Testing!!

What is System testing? Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements [IEEE 90]. System testing is done on the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS).
Types of system testing:
a. User interface testing
b. Usability testing
c. Performance testing
d. Compatibility testing
e. Error handling testing
f. Load testing
g. Volume testing
h. Stress testing
i. User help testing
j. Security testing
k. Scalability testing
l. Capacity testing
m. Sanity testing
n. Smoke testing
o. Exploratory testing
p. Adhoc testing
q. Regression testing
r. Reliability testing
s. Recovery testing
t. Installation testing
u. Idempotency testing
v. Maintenance testing

Severity and Priority

Severity: Severity describes the defect's effect on the application. Severity is assigned by testers.
Priority: Priority describes how urgently the defect needs to be repaired. Priority is assigned by the test lead or project manager.
1. High Severity & Low Priority: For example, consider an application that generates banking-related reports weekly, monthly, quarterly and yearly by doing some calculations. A fault in the yearly report calculation is a high severity fault but low priority, because it can be fixed in the next release as a change request.
2. High Severity & High Priority: In the same example, a fault in the weekly report calculation is both high severity and high priority, because it will block the functionality of the application within a week. It should be fixed urgently.
3. Low Severity & High Priority: Consider a spelling mistake or content issue on the homepage of a website that gets hundreds of thousands of hits a day. The fault does not affect the website or its other functionality, but considering the status and popularity of the website in a competitive market it is a high priority fault.
4. Low Severity & Low Priority: A spelling mistake on pages that get very few hits throughout the month can be considered low severity and low priority.
Priority is used to organize the work, and the field only takes on meaning once the bug has an owner:
P1 Fix in next build
P2 Fix as soon as possible
P3 Fix before next release
P4 Fix if time allows
P5 Unlikely to be fixed
The default priority for new defects is set at P3.
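As a rough sketch of how a defect tracker might encode these two fields and the P1-P5 scale, here is a small Python example; the Defect class and its field names are invented for illustration and do not come from any particular tool.

```python
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    P1 = 1  # Fix in next build
    P2 = 2  # Fix as soon as possible
    P3 = 3  # Fix before next release (default for new defects)
    P4 = 4  # Fix if time allows
    P5 = 5  # Unlikely to be fixed

@dataclass
class Defect:
    summary: str
    severity: str                     # effect on the application, set by the tester
    priority: Priority = Priority.P3  # urgency of repair, set by the test lead / PM

bug = Defect(summary="Yearly report totals are wrong", severity="High")
print(bug.priority.name)  # -> P3, until the test lead raises or lowers it
```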

Bug, Error, Defect and Issue

a. Bug: A software bug is an error, flaw, mistake, failure, or fault in a program that prevents it from behaving as intended (e.g., producing an incorrect result). Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.
b. Error: A mistake made by the developer in the code.
c. Defect: Something that is specified in the requirement document but is either not implemented or implemented in a wrong way.
d. Issue: Something that is none of the above, for example the site being slow, session-related problems, security problems, etc.
Waterfall Model

This is the most common and classic of life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project.
a. Requirement
b. Design
c. Implementation & Unit Testing
d. Integration & System Testing
e. Operation
Advantages
a. Simple and easy to use.
b. Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
c. Phases are processed and completed one at a time.
d. Works well for smaller projects where requirements are very well understood.
Disadvantages
a. Adjusting scope during the life cycle can kill a project.
b. Poor model for complex and object-oriented projects.
c. Poor model for long and ongoing projects.
d. Poor model where requirements are at a moderate to high risk of changing.
Spiral Model

The spiral model places more emphasis on risk analysis. It has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Advantages
a. High amount of risk analysis.
b. Good for large and mission-critical projects.
c. Software is produced early in the software life cycle.
Disadvantages
a. Can be a costly model to use.
b. Risk analysis requires highly specific expertise.
c. Project’s success is highly dependent on the risk analysis phase.
d. Doesn’t work well for smaller projects.

Software Testing

Black Box Testing
Black Box testing refers to the technique of testing a system with no knowledge of the internals of the system. Black Box testers do not have access to the source code and are oblivious of the system architecture. A Black Box tester typically interacts with a system through a user interface by providing inputs and examining outputs without knowing where and how the inputs were operated upon. In Black Box testing, target software is exercised over a range of inputs and the outputs are observed for correctness.
Advantages
a. Efficient Testing — Well suited and efficient for large code segments or units.
b. Unbiased Testing — clearly separates user's perspective from developer's perspective through separation of QA and Development responsibilities.
c. Non intrusive — code access not required.
d. Easy to execute — can be scaled to large number of moderately skilled testers with no knowledge of implementation, programming language, operating systems or networks.
Disadvantages
a. Localized Testing — Limited code path coverage since only a limited number of test inputs are actually tested.
b. Inefficient Test Authoring — without implementation information, exhaustive input coverage would take forever and would require tremendous resources.
c. Blind Coverage — cannot control targeting code segments or paths which may be more error prone than others.
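To illustrate the black-box idea of feeding inputs and checking outputs without looking at the implementation, here is a small sketch using Python's unittest module; the discount_price function and its stated contract are made up for the example.

```python
import unittest

# Hypothetical function under test. A black-box tester only knows its contract:
# "return the price after applying a percentage discount, never below zero".
def discount_price(price: float, percent: float) -> float:
    return max(0.0, price * (1 - percent / 100))

class DiscountBlackBoxTest(unittest.TestCase):
    # Test cases are derived from the stated behaviour, not from the source code.
    def test_normal_discount(self):
        self.assertAlmostEqual(discount_price(100.0, 20), 80.0)

    def test_full_discount(self):
        self.assertAlmostEqual(discount_price(50.0, 100), 0.0)

    def test_never_negative(self):
        self.assertGreaterEqual(discount_price(10.0, 150), 0.0)

if __name__ == "__main__":
    unittest.main()
```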

White Box Testing
White Box testing refers to the technique of testing a system with knowledge of the internals of the system. White Box testers have access to the source code and are aware of the system architecture. A White Box tester typically analyzes source code, derives test cases from knowledge about the source code, and finally targets specific code paths to achieve a certain level of code coverage. A White Box tester with access to details about both operations can readily craft efficient test cases that exercise boundary conditions.
Advantages
a. Increased Effectiveness — Design decisions and assumptions can be crosschecked against the source code; a design document may outline a robust design, yet the implementation may not align with that design intent.
b. Full Code Pathway Capable — all the possible code pathways can be tested including error handling, resource dependencies, and additional internal code logic/flow.
c. Early Defect Identification — Analyzing source code and developing tests based on the implementation details enables testers to find programming errors quickly.
d. Reveal Hidden Code Flaws — access to source code improves understanding and uncovering unintended hidden behavior of program modules.
Disadvantages
a. Difficult To Scale — Requires intimate knowledge of the target system, testing tools, coding languages, and modeling, so it is hard to scale given the limited pool of skilled and expert testers.
b. Difficult to Maintain — requires specialized tools such as source code analyzers, debuggers, and fault injectors.
c. Cultural Stress — The demarcation between developers and testers starts to blur, which may become a source of cultural stress.
d. Highly Intrusive — Requires code modification, either through interactive debuggers or by actually changing the source code. This may be adequate for small programs; however, it does not scale well to larger applications and is not useful for networked or distributed systems.
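As a contrast, here is a minimal white-box sketch in which the tests are derived from the code's branches so that every path, including error handling, is exercised; the classify_age function and its branch structure are invented for the example.

```python
import unittest

# Hypothetical function whose internal branches the white-box tester can read.
def classify_age(age: int) -> str:
    if age < 0:
        raise ValueError("age cannot be negative")  # error-handling path
    if age < 18:
        return "minor"                              # branch 1
    if age < 65:
        return "adult"                              # branch 2
    return "senior"                                 # branch 3

class ClassifyAgeWhiteBoxTest(unittest.TestCase):
    # One test per code path, chosen by reading the implementation.
    def test_error_path(self):
        with self.assertRaises(ValueError):
            classify_age(-1)

    def test_minor_branch(self):
        self.assertEqual(classify_age(17), "minor")

    def test_adult_branch(self):
        self.assertEqual(classify_age(30), "adult")

    def test_senior_branch(self):
        self.assertEqual(classify_age(70), "senior")

if __name__ == "__main__":
    unittest.main()
```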

Gray Box Testing
Gray Box testing refers to the technique of testing a system with limited knowledge of the internals of the system. Gray Box testers have access to detailed design documents with information beyond requirement documents. Gray Box tests are generated based on information such as state-based models or architecture diagrams of the target system.
Advantages
a. Offers Combined Benefits — Leverages the strengths of both Black Box and White Box testing wherever possible.
b. Non Intrusive — Gray Box testing does not rely on access to source code or binaries; it is based instead on interface definitions, functional specifications, and the application architecture.
c. Intelligent Test Authoring — Based on the limited information available, a Gray Box tester can author intelligent test scenarios, especially around data type handling, communication protocols and exception handling.
d. Unbiased Testing — The demarcation between testers and developer is still maintained. The handoff is only around interface definitions and documentation without access to source code or binaries.
Disadvantages
a. Partial Code Coverage — Since the source code or binaries are not available, the ability to traverse code paths is limited to the tests deduced from the available information; coverage depends on the tester's authoring skills.
b. Defect Identification — Defect identification is inherently difficult in distributed applications. Gray Box testing is still at the mercy of how well systems throw exceptions and how well those exceptions are propagated within a distributed Web Services environment.
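As a sketch of generating gray-box tests from a state-based model rather than from source code, here is a toy example; the Session class and its login/logout transitions are invented stand-ins for a system known only through its design documents.

```python
# Hypothetical state-based model of a login session, as it might appear in a
# design document available to a gray-box tester (no source code access assumed).
transitions = {
    ("logged_out", "login"):  "logged_in",
    ("logged_in", "logout"):  "logged_out",
    ("logged_in", "timeout"): "logged_out",
}

# Imagined system under test, exercised only through its public interface.
class Session:
    def __init__(self) -> None:
        self.state = "logged_out"

    def send(self, event: str) -> str:
        self.state = transitions.get((self.state, event), self.state)
        return self.state

# Generate one test per modeled transition and check the observable state.
for (start, event), expected in transitions.items():
    session = Session()
    session.state = start  # put the system into the starting state
    assert session.send(event) == expected, f"{start} --{event}--> {expected} failed"
print("All modeled transitions verified")
```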

Difference between Black Box and White Box Testing

1. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
2. Synonyms for white-box include: structural, glass-box and clear-box.
3. Generally, black box testing begins early in software development, in the requirement gathering phase itself, whereas for the white box approach one has to wait until the design is complete.
4. The black box strategy can be used on a system of almost any size, small or large, whereas white box testing is effective only for small pieces or lines of code.
5. In white box testing we cannot test the performance of the application, but in black box testing we can.

Some FAQs with Answers

What is 'Software Testing'?
  • Software Testing involves operating a system or application under controlled conditions and evaluating the results; the controlled conditions should include both normal and abnormal conditions.
What is 'Software Quality Assurance'?
  • Software Quality Assurance involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
What is the 'Software Quality Gap'?
  • The difference between the state of the project as planned and the actual state that has been verified as operating correctly is called the software quality gap.
What is Equivalence Partitioning?
  • In Equivalence Partitioning, a test case is designed so as to uncover a group or class of errors. This limits the number of test cases that might otherwise need to be developed. Here the input domain is divided into classes or groups of data. These classes are known as equivalence classes, and the process of making equivalence classes is called equivalence partitioning. Equivalence classes represent a set of valid or invalid states for input conditions.
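For instance, suppose a hypothetical input field accepts ages from 18 to 60: the input domain splits into one valid class and two invalid classes, and one representative value per class is enough. A minimal sketch, assuming that made-up requirement:

```python
# Hypothetical requirement: an "age" field accepts values from 18 to 60 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative test value per equivalence class.
equivalence_classes = {
    "invalid_below_range": (10, False),  # class: age < 18
    "valid_range":         (35, True),   # class: 18 <= age <= 60
    "invalid_above_range": (75, False),  # class: age > 60
}

for name, (value, expected) in equivalence_classes.items():
    assert is_valid_age(value) == expected, f"{name} failed for input {value}"
print("All equivalence classes behave as expected")
```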
What is Boundary Value Analysis?
  • It has been observed that programs that work correctly for a set of values in an equivalence class fail on some special values. These values often lie on the boundary of the equivalence class. Boundary values for each equivalence class, including the equivalence class of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output lying at the boundary of a class of output data.
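Continuing the same made-up 18-to-60 age field from the equivalence partitioning example, boundary value analysis picks the values on and immediately around the edges of each equivalence class; a minimal sketch:

```python
# Boundary values for the hypothetical 18-60 age field: the edges and their neighbours.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # on the lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # on the upper boundary
    (61, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, f"boundary case {value} failed"
print("All boundary cases behave as expected")
```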
Why does software have bugs?
  • Miscommunication or no communication - failing to fully understand the application's requirements. Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Programming errors - programmers "can" make mistakes. Changing requirements - a redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made. Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.
What does "finding a bug" consist of?
  • Finding a bug consists of a number of steps that are performed:
  1. Searching for and locating the bug
  2. Analyzing the exact circumstances under which the bug occurs
  3. Documenting the bug found
  4. Reporting the bug and, if necessary, helping to reproduce the error
  5. Testing the fixed code to verify that it really is fixed
What will happen about bugs that are already known?
  • When a program is sent for testing (or a website given), a list of any known bugs should accompany the program. If a bug is found, the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.
What's the big deal about 'requirements'?
  • Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear & documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.
What can be done if requirements are changing continuously?
  • A common problem and a major headache. It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch. If the code is well commented and well documented, this makes changes easier for the developers. Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
How can it be known when to stop testing?
  • This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed. Common factors in deciding when to stop are:
  1. Deadlines achieved (release deadlines, testing deadlines, etc.)
  2. Test cases completed with certain percentage passed
  3. Test budget depleted
  4. Coverage of code/functionality/requirements reaches a specified point
  5. Defect rate falls below a certain level
  6. Beta or Alpha testing period ends
What if there isn't enough time for thorough testing?
  • Use risk analysis to determine where testing should be focused. Figure out which functionality is most important to the project's intended purpose. Which functionality is most visible to the user? Which functionality has the largest safety impact? Which functionality has the largest financial impact on users? Which aspects of the application are most important to the customer? Which aspects of the application can be tested early in the development cycle? Which parts of the code are most complex, and thus most subject to errors? What do the developers think are the highest-risk aspects of the application? Which tests will have the best high-risk-coverage to time-required ratio?
What if the software has so many bugs that it can't really be tested at all?
  • Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.
How does a client/server environment affect testing?
  • Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities.
Does it matter how much the software has been tested already?
  • No. It is up to the tester to decide how much to test it, regardless of how much testing has already been done. An initial assessment of the software is made, and it will be classified into one of three possible stability levels:
  1. Low stability (bugs are expected to be easy to find, indicating that the program has not been tested or has only been very lightly tested)
  2. Normal stability (normal level of bugs, indicating a normal amount of programmer testing)
  3. High stability (bugs are expected to be difficult to find, indicating already well tested)
How is testing affected by object-oriented designs?
  • Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.
Will automated testing tools make testing easier?
  • A tool set that allows controlled access to all test assets promotes better communication between all the team members and will ultimately break down the walls that have traditionally existed between various groups. Automated testing tools are only one part of a unique solution to achieving customer success. The complete solution is based on providing the user with the principles, tools, and services needed to efficiently develop software.
Why outsource testing?
  • Skill and Expertise - Developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort intensive; testing a software application now involves a variety of skills.
  Focus - Using a dedicated and expert test team frees the development team to focus on sharpening their core skills in design and development, in their domain areas.
  Independent assessment - An independent test team looks afresh at each test project while bringing with it the experience of earlier test assignments, for different clients, on multiple platforms and across different domain areas.
  Save time - Testing can go on in parallel with the software development life cycle to minimize the time needed to develop the software.
  Reduce Cost - Outsourcing testing offers the flexibility of having a large test team only when needed. This reduces the carrying costs and at the same time reduces the ramp-up time and costs associated with hiring and training temporary personnel.
What steps are needed to develop and run software tests?
The following are some of the steps needed to develop and run software tests:
  1. Obtain requirements, functional design, and internal design specifications and other necessary documents
  2. Obtain budget and schedule requirements
  3. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  4. Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
  5. Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
  6. Determine test environment requirements (hardware, software, communications, etc.)
  7. Determine test-ware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  8. Determine test input data requirements; identify tasks, those responsible for tasks, and labor requirements
  9. Set schedule estimates, timelines, milestones
  10. Determine input equivalence classes, boundary value analyses, and error classes; prepare the test plan document and have needed reviews/approvals
  11. Write test cases
  12. Have needed reviews/inspections/approvals of test cases
  13. Prepare test environment and test ware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
  14. Obtain and install software releases
  15. Perform tests
  16. Evaluate and report results
  17. Track problems/bugs and fixes
  18. Retest as needed; maintain and update test plans, test cases, test environment, and test ware through the life cycle

Other Sites




http://pureqastuff.blogspot.in/2007/11/shortcuts-in-qtp.html

http://www.sqatester.com/tutorials/index.htm

http://www.testing-jobs.cybermediadice.com/

http://www.testingbrain.com/tutorials/software-testing-tutorial.html

http://www.qalinks.com/

http://www.coolinterview.com/type.asp?iType=214

http://www.softwaretestinghub.com/

Shortcuts in QTP

File Menu

New > Test CTRL + N
New > Business Component CTRL + SHIFT + N
New > Scripted Component ALT + SHIFT + N
New > Application Area CTRL +Alt + N
Open > Test CTRL + O
Open > Business Component CTRL + SHIFT + O
Open > Application Area CTRL + ALT + O
Save CTRL + S
Export Test to Zip File CTRL + ALT + S
Import Test from Zip File CTRL + ALT + I
Print CTRL + P

Edit Menu

Cut CTRL + X (EV only)
Copy CTRL + C
Paste CTRL + V
Delete DEL
Undo CTRL + Z (EV only)
Redo CTRL + Y (EV only)
Rename Action F2
Find CTRL + F (EV only)
Replace CTRL + H (EV only)
Go To CTRL + G (EV only)
Bookmarks CTRL + B (EV only)
Complete Word CTRL + Space (EV only)
Argument Info CTRL + SHIFT + SPACE (EV only)
Apply “With” To Script CTRL + W (EV only)
Remove “With” Statements CTRL + SHIFT + W (EV only)

Insert Menu

Checkpoint > StandardCheckpoint F12
Output Value > Standard Output Value CTRL + F12
Step > Step Generator F7
New Step F8 OR INS (KV only)
New Step After Block SHIFT + F8 (KV only)
Key: KV = Keyword View
EV = Expert View


Test/Component/Application Area Menu

Record F3
Run F5
Stop F4
Analog Recording CTRL + SHIFT + F4
Low Level Recording CTRL + SHIFT + F3

Step Menu

Object Properties CTRL + ENTER
Value Configuration Options CTRL + F11 on an input value (KV only)
Output Options CTRL + F11 on an output value (KV only)

Debug Menu

Pause PAUSE
Step Into F11
Step Over F10
Step Out SHIFT + F11
Insert/Remove Breakpoint F9
Clear All Breakpoints CTRL + SHIFT + F9

Data Table Options

Edit > Cut CTRL + X
Edit > Copy CTRL + C
Edit > Paste CTRL + V
Edit > Clear > Contents CTRL + DEL
Edit > Insert CTRL + I
Edit > Delete CTRL + K
Edit > Fill Right CTRL + R
Edit > Fill Down CTRL + D
Edit > Find CTRL + F
Edit > Replace CTRL + H
Data > Recalc F9
Insert Multi-line Value CTRL + F2 while editing cell
Activate next/previous sheet CTRL + PAGEUP/CTRL + PAGEDOWN

General Options

View Keyword View/Expert View CTRL + TAB
Open context menu for step or Data Table cell SHIFT + F10 or Application key
Expand all branches * [on numeric keypad] (KV only)
Expand branch + [on numeric keypad] (KV only)
Collapse branch - [on numeric keypad] (KV only)

Wednesday, 30 May 2012

Hi Viewers

Hi,


People who want the documents, please give us your mail IDs and we will try to forward them.


----
Thanks for your Support.