Functional Testing - Part I


v What is Software Testing?
Software Testing is the structured and organized process of evaluating the software, by manual or automated means, to check whether it exhibits the quality characteristics specified in the requirement specification.
      (OR)
Software Testing is the process of exercising or evaluating a system or a software component, by manual or automated means, to verify that it satisfies the specified requirements. [According to IEEE 83a]
                                            (OR)
Software Testing is the structured process that uncovers defects in the software and ensures that the software behaves according to the customer's requirement specification.

v Why Software Testing? (OR) What are the goals of Software Testing?
Software Testing is performed to meet the following fundamental goals:
·        Detect faults
·        Establish confidence in the software
·        Evaluate properties of the software such as reliability, performance, security, memory usage and usability.


v What is Manual Testing?
   Manual Testing is a systematic process of testing the software where the tester tests the software manually: first thoroughly understanding the requirement, deriving the maximum possible scenarios, and converting those scenarios into test cases based on which the software is tested. Since the entire process is done manually, it is called Manual Testing.

v What is Automation Testing?
As the name itself indicates, Automation Testing is the process of automatically testing the software based on automation test scripts that are created using automation testing tools like QTP, Load Runner etc.

v What is Functional Testing?
Functional Testing involves testing each and every component in the application thoroughly against the requirement specification. It tests the features and operational behaviour of an application, ignoring the internal mechanism of the software system and focusing solely on the outputs generated in response to the selected inputs.

v What is White Box Testing? What are its types?
White Box Testing is one of the categories of testing, done by developers. Here the developers test each and every line of the code: they look into the source code and test the logic of the program.
Types of White Box Testing are:
§  Path Testing – Testing based on flow graphs, exercising each independent path separately.
§  Conditional Testing – Testing all the logical conditions for both true and false values.
§  Loop Testing – Testing the loops for all their cycles.
§  Memory based Testing – Testing the code from the memory usage point of view.
§  Performance based Testing – Testing the code to check performance characteristics like the response time of an application.
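Conditional and loop testing can be sketched in a few lines. This is an illustrative example only: the `grade` and `total` functions are invented for the sketch, not taken from the text.

```python
# Hypothetical function under test; names and rules are assumptions.
def grade(score):
    if score < 0 or score > 100:   # condition to exercise as True and False
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Conditional testing: drive every logical condition both ways.
assert grade(40) == "pass"    # `score >= 40` evaluates True (boundary)
assert grade(39) == "fail"    # same condition, False
try:
    grade(-1)                 # `score < 0` True -> error path
    assert False, "expected ValueError"
except ValueError:
    pass

# Loop testing: run the loop for zero, one and many iterations.
def total(values):
    s = 0
    for v in values:          # loop under test
        s += v
    return s

assert total([]) == 0         # zero iterations
assert total([5]) == 5        # one iteration
assert total([1, 2, 3]) == 6  # many iterations
```

Each assertion corresponds to one white-box coverage goal: a branch outcome or a loop cycle count.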

v What is the difference between White Box Testing and Black Box Testing?
White Box Testing :
Ø Developers involved in testing.
Ø Developers look into the source code and test the logic of the program.
Ø Programming knowledge needed to test the code.
Ø Should have knowledge of internal design of the code written to build an application.
Black Box Testing :
Ø Testers involved in testing.
Ø Testers test by verifying the functionality of an application against the specified requirement.
Ø Programming knowledge is not mandatory.
Ø No need to have knowledge of the internal design of the code.

v Define the following:
                                i.            Over Testing – Testing by entering the same types of values in different ways. A lot of time is wasted in over testing the application.
                              ii.            Under Testing – Testing by entering an insufficient set of data into the application. There is a high chance of missing a lot of bugs.
                            iii.            Optimised Testing – Testing the application by entering all possible values which make sense. Minimum time is spent in catching the maximum number of bugs.

v Differentiate between Positive testing and Negative testing.
Positive testing – Also called 'Test to Pass'. Testing aimed at showing that the software works, by entering valid or expected test data.
Negative testing – Also called 'Test to Fail'. Testing aimed at showing that the software does not work, by entering invalid or inappropriate test data.
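The two approaches can be shown on a small input validator. The field and its rules here are assumptions made up for illustration, not something from the text.

```python
# Hypothetical validator: accept integer ages from 1 to 120.
def validate_age(value):
    if not isinstance(value, int) or isinstance(value, bool):
        return False              # reject non-integers (and True/False)
    return 1 <= value <= 120

# Positive testing ("test to pass"): valid, expected data should be accepted.
assert validate_age(25) is True
assert validate_age(120) is True   # upper boundary

# Negative testing ("test to fail"): invalid data must be rejected.
assert validate_age(0) is False    # below range
assert validate_age(-5) is False   # nonsensical value
assert validate_age("25") is False # wrong type
assert validate_age(None) is False # missing value
```

Note how negative tests deliberately feed data the requirement forbids, to prove the software refuses it rather than misbehaving.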

v What is Integration Testing?
Integration Testing is the process of testing the data flow between the modules of an application.
Consider 2 modules, Module A and Module B, where A sends data to B. Testing whether A is sending the data and whether B is receiving the data sent by A is what we call Integration Testing.
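The Module A to Module B hand-off can be sketched as below. The modules and the order record are invented for illustration; only the A-sends-to-B idea comes from the text.

```python
# Two hypothetical modules; names and data are assumptions.
def module_b(order):
    """Module B: receives the order from A and adds 10% tax."""
    return {"id": order["id"], "total": order["amount"] * 1.1}

def module_a():
    """Module A: produces an order record and sends it to B."""
    order = {"id": 101, "amount": 250.0}
    return module_b(order)          # the integration point: A sends data to B

# Integration test: verify the data A sends is what B receives and uses.
result = module_a()
assert result["id"] == 101                 # B received A's id intact
assert abs(result["total"] - 275.0) < 1e-9 # B processed A's amount correctly
```

A unit test would check `module_b` alone with hand-made input; the integration test above checks the actual flow of data between the two modules.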

v What are the different types of Integration Testing?
Integration Testing can be broadly classified as:
1.      Incremental Integration Testing
                               i.            Top down Incremental Integration Testing
                             ii.            Bottom up Incremental Integration Testing
2.      Non – Incremental Integration Testing
1.      Incremental Integration Testing – It involves adding one module to another and performing integration testing to test the data flow between the modules.
                               i.            Top down Incremental Integration Testing – Identify the parent module, then incrementally add the child modules. Incrementally add and test the data flow between the modules, making sure each module you add is a child of its parent.
                             ii.            Bottom up Incremental Integration Testing – Identify the child module, then incrementally add its parent modules. Incrementally add and test the data flow between the modules, making sure each module you add is a parent of its child.
2.      Non – Incremental Integration Testing – Here it is difficult to identify the parent and child modules, because the data flow between the modules is very complex. It involves combining all the modules in one shot and testing the data flow between them.

v What is System Testing?
System Testing is 'end-to-end testing', wherein the testing environment is similar to the production environment.
          End-to-end testing – Navigating through all the features and testing whether each end feature works according to the specified requirement.
          Note: Here we only navigate from one feature to another, without worrying about the data flow or functionality of the modules.
          Testing environment – The entire set up that is specifically dedicated to performing testing.
          Production environment – The live environment where the actual business flow happens.

v When is System Testing done?
Ø Start System Testing when a minimum set of features is ready.
Ø Functionality and integration between all the modules are working fine.
Ø An environment similar to the production environment is available.

v What is Acceptance Testing?
Acceptance Testing is a type of testing done by the client: the client's IT engineers test (specifically end-to-end testing) the complete software to check whether it has been developed and tested according to the requirement or not.

v Why Acceptance Testing is done?
§  Software with defects can be delivered by the company that develops the software, mainly because of business pressure.
§  Misunderstanding the requirement can lead to developing incorrect and unwanted features in the software.
§  Software with defects, if implemented directly in the production environment, may result in a huge business loss to the client.

v What is Smoke Testing?
Smoke Testing means testing the basic or critical features of an application before starting rigorous or thorough testing. Smoke testing should be done at the beginning of each and every test cycle.
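A smoke suite is often just a handful of quick checks run before anything else. A minimal sketch, where the application functions and the chosen checks are stand-ins invented for the example:

```python
# Stand-ins for real application entry points (assumptions, not real APIs).
def app_starts():
    return True   # placeholder for "application launches"

def login(user, password):
    return user == "admin" and password == "secret"   # placeholder login

# The critical checks to run at the start of every test cycle.
smoke_checks = {
    "application starts": lambda: app_starts(),
    "valid user can log in": lambda: login("admin", "secret"),
}

# Run the smoke checks first; reject the build if any fails.
failures = [name for name, check in smoke_checks.items() if not check()]
if failures:
    print("Build rejected, smoke failures:", failures)
else:
    print("Smoke passed - start thorough testing")
```

The point is the gate: rigorous testing begins only when every smoke check passes, so critical breakages surface on day one of the cycle, not at the end.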

v What if Smoke Testing is not done?
§  Critical bugs are identified at the end of the test cycle.
§  Fixing those critical bugs at the end of the test cycle leads to extension of the test cycle.
§  Product release will be delayed.

v What is the difference between Smoke Testing and Sanity Testing?

Smoke Testing vs Sanity Testing:
·        Smoke testing involves testing the basic or critical features of an application at the beginning of every test cycle; sanity testing is done to check the new features or whether the bugs have been fixed.
·        Smoke testing is done by developers and testers; sanity testing is done by testers.
·        Smoke testing is a shallow and wide approach; sanity testing is a narrow and deep approach.
·        Smoke tests can be scripted; sanity tests cannot be scripted.
·        Smoke testing exercises the entire system from end to end; sanity testing exercises only a particular component of the system.
·        Smoke testing is like a general health check-up; sanity testing is like a specialized health check-up.



v What is Ad hoc Testing?
Testing the application randomly, by coming up with creative scenarios and testing the application without using any logic. It is negative testing; the intention is to somehow break the product. It is to be noted that here the testing is not done according to the requirement.

v What is Exploratory Testing?
Exploratory testing involves understanding the application while testing it: explore the application to identify all possible scenarios, record those scenarios, and test the application by referring to the documented scenarios.

v Differentiate between Smoke Testing, Ad hoc Testing and Exploratory Testing.

Smoke Testing vs Ad hoc Testing vs Exploratory Testing:
·        What it is: Smoke testing tests the basic features of an application before thorough testing; ad hoc testing tests the application randomly; exploratory testing means understanding the application, identifying the scenarios and then testing it.
·        Why it is done: Smoke testing ensures the product is testable; ad hoc testing is done because an end user can use the application randomly and detect defects; exploratory testing is done whenever there is no requirement, no time to refer to the requirement, or the requirement is difficult to understand.
·        When it is done: Smoke testing at the beginning of every new test cycle, whenever a new build comes; ad hoc testing whenever we come up with creative scenarios and have time; exploratory testing when there is no requirement.
·        Requirement: Smoke testing is done according to the requirement; in ad hoc testing the requirement is not followed; in exploratory testing the requirement is not followed either.
·        Nature: Smoke testing is positive testing; ad hoc testing is negative testing; exploratory testing can be positive and/or negative testing.



v What is Globalization Testing? What are its types?
Testing software that is developed in multiple languages, for users of multiple nationalities and different cultural backgrounds, is what we call Globalization Testing.
Types of Globalization Testing are:
1.      Internationalization Testing [I18N Testing]
2.      Localization Testing [L10N Testing]

v What do you mean by I18N Testing?
Internationalization Testing, also called I18N Testing, is one of the types of Globalization Testing. The main objective of this testing is to ensure that the right contents are displayed in the right language, and at the right place.

v What do you mean by L10N Testing?(Note: It is not lion testing, it is  L TEN N Testing!)
Localization Testing, also called L10N Testing, is one of the types of Globalization Testing. The main objective of L10N Testing is to test the localized features like the country's currency format, date format, time format, pin code format etc.
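A small L10N check for date formats can be sketched as below. The locale codes and expected patterns are assumptions chosen for illustration, not taken from the text.

```python
from datetime import date

# Hypothetical per-locale expectations for an L10N date-format check.
expected_date_format = {
    "en_US": "%m/%d/%Y",   # United States: month first
    "en_GB": "%d/%m/%Y",   # United Kingdom: day first
    "de_DE": "%d.%m.%Y",   # Germany: day first, dot-separated
}

# Verify the application would render the same date differently per locale.
d = date(2024, 12, 31)
assert d.strftime(expected_date_format["en_US"]) == "12/31/2024"
assert d.strftime(expected_date_format["en_GB"]) == "31/12/2024"
assert d.strftime(expected_date_format["de_DE"]) == "31.12.2024"
```

Real L10N suites keep such per-locale expectations in a table and compare them against what the localized build actually displays.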

v What is Usability Testing? Mention the Usability factors.
Testing the ease with which the application can be used by the end users is called Usability Testing.
          Important Usability factors:
ü Easy to understand the application
ü Easy to find the features quickly in the application
ü Rich set of features
ü Look and feel
ü Easy to navigate
ü Use of simple language

v What do you mean by Accessibility Testing?
Testing the application from a physically challenged person's point of view is known as Accessibility Testing.
It is also called ADA (Americans with Disabilities Act) Testing or 508 Testing. ADA – 508 is a compliance policy with a set of rules based on which the software should be developed and tested from a physically challenged person's point of view, like:
§  Background of the web pages should not be vibrant colours like red or green.
§  All the images should have alt tag or tool tip.
§  All the objects in the application should be accessible by keyboard.

v How is Accessibility Testing performed?
§  Based on the ADA – 508 compliance testing format, test whether all the features are developed and working accordingly.
§  Accessibility testing can be done either manually or automatically.
§  It is done automatically by using tools like Aprompt, Infocus, Bobby and Wave.

v What is Compatibility Testing?
Testing the behaviour of an application in different software and hardware environments is called Compatibility Testing.

v Why is Compatibility Testing done?
§  To check the compatibility of an application on different platforms or operating systems (OS).
§  An application developed on one OS (say XP) may not be compatible with another OS (say Vista).
§  If the application is launched without performing compatibility testing, it may not work for those users using Vista.
§  As a result the business will be hit; to avoid this, Compatibility Testing should be done.

    v What are the different types of Compatibility Testing?
The different types of Compatibility Testing are:
1.      Software Compatibility Testing
2.     Hardware Compatibility Testing

v What are Compatibility issues and Functionality issues?
Compatibility issues – Features not working properly on a specific platform.
Functionality issues – Features not working on almost all platforms.

v Mention some of the defects that can be found while performing Compatibility Testing?
§  Change in look and feel
§  Alignment issues
§  Scattered contents
§  Broken tables
§  Disabled objects
§  Object overlapping
§  Change in font size, colour and style
§  Images of different formats not supported (GIF, Bitmap, JPEG)
§  Broken frames
§  Links displayed as plain text

v What is Regression Testing?
Testing the unchanged features to ensure that the application is unaffected by the changes. The changes could be addition of new features, modification of existing features, removal of old features or bug fixes.
                                  (OR)
Re-execution of the same test cases in different builds or releases, to ensure that the changes are not introducing defects in the unchanged features.

v What are the different types of Regression Testing?
Regression testing can be classified into 3 different categories:
1.      Unit Regression Testing – Only re-test the specific unit (that is, the modified/bug-fixed/removed/added feature), ignoring the other unrelated features.
2.      Regional Regression Testing – Testing the modifications/changes and their impact regions is called 'Regional Regression Testing'.
3.      Full Regression Testing – Testing the changes and all the remaining features is called 'Full Regression Testing'.
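Regression re-execution can be sketched with a stored suite run against every build. The discount feature and the cases below are invented for illustration; the idea of re-running the same cases on unchanged features comes from the text.

```python
# Toy feature under change: members now get 10% off (was, say, 5%).
def apply_discount(price, is_member):
    return round(price * (0.90 if is_member else 1.0), 2)

# The same test cases re-executed on every build: (price, is_member, expected).
regression_suite = [
    (100.0, True, 90.0),    # changed feature: re-testing the fix itself
    (100.0, False, 100.0),  # unchanged feature: regression check
    (0.0, False, 0.0),      # unchanged feature: regression check
]

results = [apply_discount(p, m) == exp for p, m, exp in regression_suite]
assert all(results)   # the change did not break the unchanged behaviour
```

The first case is re-testing (the changed feature); the rest are the regression part, guarding the features that were not supposed to change.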

v What is the difference between Re-testing and Regression Testing?
Re-testing : Testing the changed/modified feature once again is called Re-testing.
Regression Testing : Testing the unchanged features to make sure that the product is not broken because of the changes made to a particular feature.
v What do you mean by ‘Fixing the Bug’?
Fixing the Bug means:
§  Modifying the source code or the program to correct the defects (bugs) reported by the test engineer.
§  It is to be noted that the developers are the ones who fix the defects, while the testers identify and report them.

v Why are new Bugs created?
New bugs are created mainly because:
§  Fixing an old bug might result in a new bug.
§  Adding a new feature might introduce new bugs in other features.
§  Bugs missed in the previous test cycles show up as new bugs.

v When is the product released to the customer?
The product is said to be ready for release when:
§  All the critical/showstopper/blocker bugs are fixed.
§  All the features requested by the customer are ready.
§  All the end-to-end business scenarios are working fine.
§  Functionality and integration between all the features are stable.
§  The pending bug count is very minimal.

v Define Test Cycle?
The duration and the effort involved in testing the product is called a 'Test Cycle'.

v Explain Software Testing Life Cycle(STLC).
Software Testing Life Cycle (STLC) is a systematic and structured process which, when followed, results in effective testing of the software product or application, thus meeting the customer's expectations in terms of quality.
STLC has 8 prominent stages, which are depicted in the below figure:

STLC Stages:
1.      System Study:
Ø The first stage in the STLC; the requirement of the application that needs to be developed is the input to this stage.
Ø This is an important stage where the tester needs to dedicate around 25% of his time initially to understand the application or the product thoroughly.
Ø Testers should carefully understand the requirement and ensure that there are no doubts regarding the requirement and the product features.
Ø In case of any doubts, testers should never assume any requirement; instead, communicate with the right person who has good knowledge of the requirement.
Ø Testers should ensure that they have sufficient knowledge of the product or the application before moving on to the next stage, as this stage is said to be the foundation of the entire testing life cycle.
2.     Writing Test Plan:
Ø Once sufficient knowledge of the product is acquired, it is time to write the Test Plan.
Ø The Test Plan is a dynamic document that consists of all the activities that need to be performed during the actual testing of an application.
Ø The activities include deciding the number of testers needed for the project, work allocation, the testing techniques to be employed, the framework to be implemented, etc.
3.     Writing Test Cases:
Ø Once the Test Plan is prepared, it is time to write the Test Cases.
Ø Since the tester now has good knowledge about the product, it will be easy for him/her to derive the maximum possible test scenarios from the end user's perspective.
Ø Testers should convert all the identified scenarios into Test Cases by employing the Test Case Design Techniques.
Ø The Test Cases that are written need to be reviewed: the reviewer reviews the test cases and identifies defects in them, if any, and the tester should fix the review comments.
Ø The Test Cases are then sent for approval; once approved, they are finally stored in the Test Case Repository.
4.     Writing Traceability Matrix:
Ø The Traceability Matrix is also called the Requirement Traceability Matrix (RTM) or Cross Referential Matrix (CRM).
Ø It is a document that ensures that for every requirement there is a corresponding Test Case written.
Ø By writing the Traceability Matrix, the tester can ensure that every requirement has a corresponding Test Case.
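The RTM idea can be sketched as a requirement-to-test-case mapping with a coverage check. The requirement and test-case IDs below are invented for the example.

```python
# Hypothetical requirement and test-case IDs (assumptions for illustration).
requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = {
    "TC-01": ["REQ-1"],             # each test case lists what it covers
    "TC-02": ["REQ-1", "REQ-2"],
}

# Build the requirement -> test cases matrix.
rtm = {req: [] for req in requirements}
for tc, covered in test_cases.items():
    for req in covered:
        rtm[req].append(tc)

# The whole point of the RTM: flag requirements with no test case.
uncovered = [req for req, tcs in rtm.items() if not tcs]
print("RTM:", rtm)
print("Requirements without a test case:", uncovered)
```

Here REQ-3 would be flagged as uncovered, which is exactly the gap an RTM exists to expose before test execution starts.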
5.     Test Execution:
Ø Test Execution is the stage where the actual testing process (dynamic testing) begins.
Ø This is the stage where the test engineer will be most productive and proactive.
Ø In this stage the testers employ suitable testing methodologies and test the application based on the test cases which are already written.
Ø This is a very important stage, the heart of the entire test cycle, where the testers identify the defects or bugs.
6.     Defect Tracking:
Ø As the name indicates, this is the stage in STLC where the testers track the defects they have reported to the developers.
Ø Testers ensure that the defects which have been reported are fixed by the developers.
Ø Once a defect is fixed, testers should re-test the application to verify whether the reported defect has been correctly fixed.
7.     Prepare Test Execution Report:
Ø The Test Execution Report is also called the Test Summary Report or the Test Report.
Ø It is a document that consists of a summary of the Test Execution, like the total number of test cases executed, the total test cases passed/failed, the percentage pass/fail and so on.
Ø This Test Report is sent to the Testing Team (Test Lead), Development Team (Development Lead), Management and the Customer (if needed).
Ø It is to be noted that the test engineer prepares the Test Report at the end of each Test Cycle.
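The summary figures in such a report can be computed mechanically. A minimal sketch, where the case IDs and statuses are made-up sample data:

```python
# Made-up execution results for one test cycle.
execution = {"TC-01": "pass", "TC-02": "fail", "TC-03": "pass", "TC-04": "pass"}

# Derive the usual Test Summary Report figures.
total = len(execution)
passed = sum(1 for status in execution.values() if status == "pass")
failed = total - passed

report = {
    "total executed": total,
    "passed": passed,
    "failed": failed,
    "pass %": round(100 * passed / total, 1),
}
print(report)   # {'total executed': 4, 'passed': 3, 'failed': 1, 'pass %': 75.0}
```

In practice these numbers come straight out of the test management tool, but the arithmetic behind "percentage pass/fail" is exactly this.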
8.     Retrospect Meeting:
Ø This is the last stage of the entire STLC; it is also called the Post Mortem Meeting or the Project Closure Meeting.
Ø As the name indicates, it is done after the completion of the project or at the end of the release, and is mainly conducted by the Test Lead.
Ø The Test Lead prepares a document called the Retrospect Document, in which he/she enters information about the achievements and the mistakes made by each test engineer in the team.
Ø The Test Lead refers to this Retrospect Document while preparing the Test Plan for the next release of the same project or for a new project.
Ø The Test Lead ensures that mistakes made previously are not repeated. Thus the Retrospect Document plays an important role in fine tuning the testing activities, and hence helps in performing effective testing of an application.

v What is the difference between Quality Assurance(QA) and Quality Control(QC)?

Quality Assurance (QA) vs Quality Control (QC):
·        QA is process oriented; QC is product oriented.
·        QA is about preventing defects; QC is about detecting defects.
·        QA spans the entire SDLC; QC covers the testing part of the SDLC.
·        QA focuses on building in quality; QC focuses on testing for quality.
·        QA makes sure you are doing the right things, in the right way; QC makes sure that the actual result matches the expected result.
·        QA involves establishing the process; QC involves implementing the process.



v What is the difference between Verification and Validation?

Verification vs Validation:
·        Verification is a static testing procedure; validation is a dynamic testing procedure.
·        Verification involves verifying (reviewing) documents like the CRS, SRS, Test Plans and Test Cases; validation involves the actual testing process based on the reviewed documents.
·        Verification is a preventive procedure; validation is a corrective procedure.
·        Verification finds mistakes early, in the requirement and design phases themselves; validation finds defects only during the actual testing process.
·        Verification asks 'are we building the product right?'; validation asks 'are we building the right product?'