Wednesday, October 28

Interview questions for Software testing/Embedded Testing



1. What are the levels of Testing?
          There are four levels of testing:
1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing



2. What are the types of Testing?
     There are four types of testing:
1. Sanity Testing - a test of whether the build is testable or not.
2. Unit Testing, Integration Testing, System Testing
3. Regression Testing
4. Final Regression Testing or Postmortem Testing
     
3. What are the phases of the Software Development Life Cycle ?
There are six main stages of the SDLC.

1. Requirement
In this phase the technical team gathers requirements from the customer through meetings, to find out what they actually want or need in their product.


2. Analysis
In this phase the requirements are converted into a document that covers all the customer requirements, called the FRS (Functional Requirement Specification). It is then approved by the head or a senior person on the customer side. After approval the requirements are nailed down, and the development process starts from there.

3. Design
In this phase the design of the product is prepared, i.e. all the requirements are converted into an architectural design (the SRS, Software Requirement Specification, is prepared).

This phase includes :

* LLD - Low Level Design Documentation:
This level deals with lower-level modules. The diagram handled here is the Data Flow Diagram. Developers handle this level.

* HLD - High Level Design Documentation:
This level deals with higher-level modules. The diagram handled here is the ER (Entity Relationship) diagram. Both developers and testers handle this level.

4. Coding
In this phase all the requirements of the customer are converted into code.

5. Testing
In this phase the software under development is tested for quality, to ensure that the product being built is error-free and of good quality.

This phase includes 2 types of testing:
i. Static Testing: Testing each and every phase completely. It is also called Reviews.
ii. Dynamic Testing: Testing after the completion of the entire project.

6. Maintenance
In this phase the maintenance of the product is carried out.



4. In which phase of the SDLC does the tester's work start? Give an example.
The tester's work starts from the initial stage of the SDLC, i.e. with requirement gathering and analysis.
At this stage the tester starts reviewing the documents, trying to find ambiguous requirements or requirements that cannot be fulfilled.



5. Why sanity testing is also called smoke testing?
Strictly speaking, sanity and smoke testing are different. Smoke testing checks whether the build is installed properly and is ready for further major testing.
Sanity testing is carried out after smoke testing, to check whether the major functionality is working properly before proceeding with further testing.

6. Why is ad-hoc testing also called random testing?
Generally this type of testing is done after all other types of testing are done. In this type of testing neither documents nor test cases are followed; it is simply done randomly to find defects.


7. What if there is not enough time for thorough testing?
Most of the time, it is not possible to test the whole application within the specified time. In such situations, the tester needs to use common sense, find the risk factors in the project, and concentrate on testing them.

Here are some points to be considered when you are in such a situation:

# What is the most important functionality of the project ?
# What is the high-risk module of the project ?
# Which functionality is most visible to the user ?
# Which functionality has the largest safety impact ?
# Which functionality has the largest financial impact on users ?
# Which aspects of the application are most important to the customer ?
# Which parts of the code are most complex, and thus most subject to errors?
# Which parts of the application were developed in rush or panic mode?
# What do the developers think are the highest-risk aspects of the application?
# What kind of problems would cause the worst publicity ?
# What kind of problems would cause the most customer service complaints ?
# What kind of tests could easily cover multiple functionalities ?

Considering these points, you can greatly reduce the risk of project release failure under strict time constraints.



8. What is software testing?
       Software testing involves operating a system or application under controlled conditions and evaluating the results; the controlled conditions should include both normal and abnormal conditions.


9. What is Software Quality Assurance?
        Software Quality Assurance involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.


10. What is the 'Software Quality Gap'?
The difference in the software, between the state of the project as planned and the actual state that has been verified as operating correctly, is called the software quality gap.


11. What is Equivalence Partitioning?
In equivalence partitioning, a test case is designed so as to uncover a group or class of errors. This limits the number of test cases that might otherwise need to be developed. Here the input domain is divided into classes or groups of data. These classes are known as equivalence classes, and the process of making equivalence classes is called equivalence partitioning. Equivalence classes represent a set of valid or invalid states for input conditions.
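The idea can be sketched in code. As a hypothetical example (not from the original text), suppose an input field accepts ages from 18 to 60 inclusive: the input domain splits into one valid class (18-60) and two invalid classes (below 18 and above 60), so one representative value per class covers the whole domain.

```python
def accept_age(age):
    """Hypothetical validator: accepts ages 18..60 inclusive."""
    return 18 <= age <= 60

# One representative value per equivalence class is enough:
# the invalid class below the range, the valid class, and the
# invalid class above the range.
representatives = {
    "below-range (invalid)": 10,
    "in-range (valid)": 35,
    "above-range (invalid)": 70,
}

for label, value in representatives.items():
    print(label, "->", "accepted" if accept_age(value) else "rejected")
```

Three test values stand in for the entire input domain, which is exactly the saving the technique provides.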


12. What is Boundary Value Analysis?
It has been observed that programs that work correctly for a set of values in an equivalence class can fail on some special values. These values often lie on the boundary of the equivalence class. Boundary values for each equivalence class, including the equivalence class of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output that lies on the boundary of a class of output data.
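A minimal sketch of the technique, again using a hypothetical field that accepts values 18 to 60 inclusive: the classic boundary set is just below, on, and just above each boundary (17, 18, 19 and 59, 60, 61).

```python
def accept_age(age):
    """Hypothetical validator for an 18..60 inclusive range."""
    return 18 <= age <= 60

MIN, MAX = 18, 60
# Boundary value cases: just outside, on, and just inside each boundary.
cases = {
    MIN - 1: False,  # just below the lower boundary
    MIN:     True,   # on the lower boundary
    MIN + 1: True,   # just above the lower boundary
    MAX - 1: True,   # just below the upper boundary
    MAX:     True,   # on the upper boundary
    MAX + 1: False,  # just above the upper boundary
}

for value, expected in cases.items():
    assert accept_age(value) == expected, f"failed at boundary value {value}"
```

An off-by-one bug such as `18 < age <= 60` would slip past mid-range test values but is caught immediately by the `MIN` case.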


13. Why does software have bugs? 
Miscommunication or no communication - about the specifics of the application's requirements.
Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
Programming errors - programmers "can" make mistakes.
Changing requirements - A redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.



14. What does "finding a bug" consist of? 
Finding a bug consists of a number of steps:
1. Searching for and locating a bug
2. Analyzing the exact circumstances under which the bug occurs
3. Documenting the bug found
4. Reporting the bug to you and, if necessary, helping you to reproduce the error
5. Testing the fixed code to verify that it really is fixed

15. What will happen about bugs that are already known?
When a program is sent for testing (or a website given), then a list of any known bugs should accompany the program. If a bug is found, then the list will be checked to ensure that it is not a duplicate. Any bugs not found on the list will be assumed to be new.

16. What's the big deal about 'requirements'?
Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear & documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.

17. What can be done if requirements are changing continuously?
A common problem and a major headache.
It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. If the code is well commented and well documented, changes are easier for the developers. Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or to set up only higher-level, generic test plans).

18. What is Testing?
Testing is a process that identifies the correctness, completeness and quality of software.


19. What should a good tester have?
1.       A 'test-to-break' attitude.
2.       An ability to take the point of view of the customer.
3.       A strong desire for quality, and an attention to detail.
4.       Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
5.       They must also be able to understand the entire software development process and how it fits into the business approach and goals of the organization.


20. What is the responsibility of Tester?
The test engineer's function is to use the system much like real users would: find all the bugs, find ways to replicate them, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them whether they've achieved the desired level of quality.


In addition, a test engineer should -
Ø       Create test cases, procedures, scripts and generate data.
Ø       Execute test procedures and scripts, analyze standards of measurement, and evaluate results of system / integration / regression testing.
Also...
Ø       Speed up the work of the development staff;
Ø       Reduce the organization's risk of legal liability;
Ø       Give you evidence that the software is correct and operates properly;
Ø       Improve problem tracking and reporting;
Ø       Maximize the value of the software;
Ø       Maximize the value of the devices that use it;
Ø       Assure the successful launch of the product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
Ø       Help the work of the development staff, so the development team can devote its time to building up the product;
Ø       Promote continual improvement;
Ø       Provide documentation required by ISO, CMM, FDA, FAA and other regulatory agencies, and requested by customers;
Ø       Save money by discovering defects 'early' in the design process, before failures occur in production or in the field;
Ø       Save the reputation of the company by discovering bugs and design flaws before they damage that reputation.


21. What is unit testing?


Unit testing is a method of testing that verifies that the individual units of source code are working properly.
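As a sketch, a unit test exercises one unit in isolation against known expected results. The `celsius_to_fahrenheit` function and its test cases below are hypothetical, using Python's standard `unittest` module:

```python
import unittest

def celsius_to_fahrenheit(c):
    """Hypothetical unit under test."""
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    # Each test verifies the unit in isolation against a known value.
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

# Run the test case and collect the result.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestConversion))
```

Because the unit has no dependencies on the rest of the system, these tests can run long before integration or system testing begins.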


22. What is Integration Testing?
Integration testing is a kind of black box testing done after unit testing.
The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements after the units are integrated.
Test cases are developed with the express purpose of exercising the interfaces between the components. Integration testing is considered complete when actual results and expected results are either in line, or the differences are explainable/acceptable based on client input.
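A minimal sketch of the idea: two hypothetical units, each assumed already unit-tested on its own, are wired together, and the test exercises the interface between them rather than their internals.

```python
# Two hypothetical units, assumed already unit-tested in isolation.
def parse_amount(text):
    """Unit A: parses a string like '12.50 USD' into (value, currency)."""
    value, currency = text.split()
    return float(value), currency

def apply_tax(value, rate=0.1):
    """Unit B: adds tax to a numeric amount."""
    return round(value * (1 + rate), 2)

def total_from_text(text):
    """Integration point: the output of unit A feeds unit B."""
    value, _currency = parse_amount(text)
    return apply_tax(value)

# The integration test exercises the interface between the units,
# not their internals.
assert total_from_text("100.00 USD") == 110.0
```

Even if both units pass their own tests, a mismatch at the interface (say, unit A returning a string where unit B expects a number) only shows up at this level.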




23. What is white box / clear box testing?
White box / clear box testing is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.


24. What is black box  / closed box testing?
Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.


25. What is system testing?
System testing is black box testing performed by the test team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line, or the differences are explainable or acceptable based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.



26. What is regression testing?
The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to the results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.
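The baseline comparison can be sketched as follows; the scenarios and expected results are hypothetical:

```python
# A stored baseline of expected outputs from the previous release
# (hypothetical data for illustration).
baseline = {"login": "ok", "search": "3 results", "logout": "ok"}

def run_suite(system):
    """Re-run every baseline scenario against the current build and
    report any discrepancy from the recorded expected result."""
    discrepancies = {}
    for scenario, expected in baseline.items():
        actual = system(scenario)
        if actual != expected:
            discrepancies[scenario] = (expected, actual)
    return discrepancies

# A build that accidentally broke the 'search' scenario:
def current_build(scenario):
    return "0 results" if scenario == "search" else "ok"

print(run_suite(current_build))  # only 'search' is flagged
```

An empty result means the release has not "undone" anything in the baseline; any entry is a regression to account for before testing proceeds.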


27. What is alpha testing ?
Alpha testing is testing of an application/project when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers.


28. What is beta testing ?
Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.


29. What is gamma testing?
Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks. Cynics tend to refer to such software releases as "gamma testing".


30. What is stress testing?
Stress testing investigates the behavior of software (and hardware) under extraordinary operating conditions. It tests something beyond its normal operational capacity in order to observe any negative results, i.e. it tests the stability of a given system or entity. For example, when a web server is stress tested, the testing aims to find out how many users can be online at the same time without crashing the server, using scripts, bots, and various denial-of-service tools.


31. What is performance testing?
Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.


32. What is load testing?
Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time will degrade or fail.
Load testing simulates the expected usage of a software program by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns in order to test the system's response at peak loads.
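A toy sketch of the idea in Python: simulated concurrent users hit a stand-in service function through a thread pool, and the elapsed wall-clock time is measured. The service and the user count are hypothetical; a real load test would target an actual server with a dedicated tool such as LoadRunner.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Hypothetical service endpoint; sleep stands in for real work."""
    time.sleep(0.01)
    return f"response for user {user_id}"

def load_test(n_users):
    """Simulate n_users hitting the service concurrently and
    measure total wall-clock time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(handle_request, range(n_users)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = load_test(50)
print(f"{len(results)} requests served in {elapsed:.2f}s")
```

Ramping `n_users` upward until the elapsed time (or error rate) degrades is the essence of finding the system's peak-load behavior.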


33. What is sanity testing?
Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic functionality to demonstrate proper implementation.


34. What is smoke testing?
A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.


35. What is boundary value analysis?
Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.


36. What is acceptance testing?
Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.


37. What is the ratio of developers and testers?
This ratio is not a fixed one, but depends on what phase of the software development life cycle the project is in.
When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers.
In sharp contrast, when the product is near the end of the software development life cycle, this ratio tends to be 1:1, or even 1:2, in favor of testers.

38. Contents of Test Plan / Test Cases / Test Design?
Software test cases are in a document that describes inputs, actions, or events, and their expected results, in order to determine if all features of an application are working correctly.
Test case templates contain all particulars of every test case:
Ø       Test case No
Ø       Test Case ID
Ø       Test Description
Ø       Test Precondition
Ø       Test Procedures/Steps
Ø       Test Case code
Ø       Expected Result
Ø       Remarks/Notes


All documents should be written to a standard and template. Standards and templates maintain document uniformity.


39. What is the Contents of Test Report?
A software test report is a document that describes the output of tested actions or events, and the version/label, in order to determine whether all features of an application are working correctly.
Test report templates contain all particulars like:
Ø       FRS version / unique reference
Ø       Functionality / Feature
Ø       Test case ID
Ø       Test Inputs
Ø       Test Steps
Ø       Expected Outputs
Ø       Test Result
Ø       Remarks / Observed outputs / comments
Ø       Developers' response to the obtained output
Also:
Ø       Rounds of testing mentioning Label / Version no with Date,
Ø       Tester Name & Effort taken


Some Common Testing Tools
Quick Test Professional (QTP) is an automated functional Graphical User Interface (GUI) testing tool that allows the automation of user actions on a web or client based computer application.
WinRunner is Mercury Interactive's enterprise functional testing tool. It is used to quickly create and run sophisticated automated tests on your application. WinRunner helps you automate the testing process, from test development to execution. You create adaptable and reusable test scripts that challenge the functionality of your application. Prior to a software release, you can run these tests in a single overnight run, enabling you to detect defects and ensure superior software quality.
LoadRunner is a performance and load testing product for examining system behaviour and performance, while generating actual load. LoadRunner can emulate hundreds or thousands of concurrent users to put the application through the rigors of real-life user loads, while collecting information from key infrastructure components (Web servers, database servers etc).


TestDirector: Its four modules - Requirements, Test Plan, Test Lab, and Defects - are seamlessly integrated, allowing for a smooth information flow between the various testing stages. The completely web-enabled TestDirector supports high levels of communication and collaboration among distributed testing teams, driving a more effective, efficient global application-testing process.


Silk Test is a tool specifically designed for regression and functionality testing. It is the industry's leading functional testing product for e-business applications, whether Windows-based, web, Java, or traditional client/server-based. Silk Test also offers test planning, management, direct database access and validation, the flexible and robust 4Test scripting language, and a built-in recovery system for unattended testing.
RT-RT (Rational Test RealTime): A cross-platform solution for component testing and runtime analysis, designed specifically for those who write code for embedded and other types of pervasive computing products. It supports safety- and business-critical embedded applications.


40. What is the difference between Verification and Validation?
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, and walkthroughs and inspection meetings.

           Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.



41. Suppose 15 bugs were found and fixed in version X; how will you know whether they were actually fixed in the next version, say version Z?
The interviewer is expecting the release notes to be mentioned here, because the release notes for version Z state which bugs were fixed.
