Sunday, October 26, 2008

Agile Software Development

A summary of Agile Software Development

What is it?
  • Work is performed in small increments over iterations with a short time frame (typically lasting 1 - 4 weeks).
  • Each iteration is worked through by a team across the full software development life cycle, including planning, requirements analysis, design, coding, unit testing, and acceptance testing, at which point a working product is demonstrated to stakeholders.

Objectives
To have a release available at the end of each iteration, with newly requested features added in each iteration.


Principles
  • Emphasize face-to-face communication over written documents.
  • Team size is typically small (5-9 people) to help make team communication and collaboration easier.
  • An agile team contains a customer representative. This person is appointed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions.
  • Customer satisfaction through rapid, continuous delivery of useful software.
  • Most agile methods share other iterative and incremental development methods' emphasis on building releasable software in short time periods (weeks rather than months).
  • Working software is delivered frequently.

Benefits
  • Minimise overall project risk
  • Allow project to adapt to changes more quickly
  • Documentation is produced as required by stakeholders
  • Minimal bugs
  • Agile methods usually produce less written documentation than other methods. In an agile project, documentation and other project artifacts all rank equally with the working product

Techniques
1. Test Driven Development (TDD)
2. Behaviour Driven Development (BDD)

Test Driven Development Best Practice

What is Test Driven Development (TDD)?
Testing methodology associated with Agile programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.
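The red-green-refactor cycle described above can be sketched in a few lines. This is a minimal illustration in Python (a .NET team would use NUnit instead); price_with_discount is a hypothetical example function, not from any particular project.

```python
# Step 1 (red): write the test first, before any production code exists.
# At this point the test fails because price_with_discount is undefined.
def test_discount_applied():
    assert price_with_discount(100.0, 10) == 90.0

# Step 2 (green): write just enough production code to make the test pass.
def price_with_discount(price, percent):
    return price * (100 - percent) / 100.0

# Step 3 (refactor): improve the code while keeping every test passing.
test_discount_applied()
```

The key discipline is the order: the test exists and fails before the implementation is written, so every chunk of production code is covered from the moment it appears.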

TestDriven.NET is a good testing tool for TDD.

Other TDD resources:
  1. Introduction to Mocks & Stubs
  2. Why and When to Use Mocks & Stubs
  3. Best & Worst Practices of Mocks
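The mock/stub distinction from the resources above can be sketched briefly. This example uses Python's unittest.mock (a .NET project would use a library such as a mocking framework for NUnit); OrderService and the payment gateway are hypothetical names for illustration.

```python
from unittest.mock import Mock

# Hypothetical class under test: it depends on an external payment gateway.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # Delegates to the collaborator we want to isolate in tests.
        return "paid" if self.gateway.charge(amount) else "declined"

# Used as a stub: a canned return value lets the test run without a real gateway.
gateway = Mock()
gateway.charge.return_value = True
result = OrderService(gateway).place_order(42)

# Used as a mock: verify that the expected interaction actually took place.
gateway.charge.assert_called_once_with(42)
```

Stubs supply state (canned answers) so the test can run; mocks additionally verify behaviour (which calls were made, with which arguments).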

Thursday, October 23, 2008

Software Testing Tools

Testing types / activities (see the Testing Types & Activities section below)

Testing techniques
  • Black Box / Gray Box / White Box
  • Ad Hoc / Exploratory testing
  • Scripted and automated testing

Automated testing tools
Functional and regression testing:
  • Mercury QuickTest Professional
  • Selenium
  • AutomatedQA TestComplete
  • Mercury WinRunner
  • Rational Robot
  • Rational Functional Tester
Load & stress testing of Web applications:
  • Mercury LoadRunner
  • Rational Performance Tester

Own automation tools
  • BugHuntress Test Suite (Palm applications testing)

Know-how, special testing technologies
  • Load testing techniques
  • GPS positioning testing approach
  • Method of fast testing environment configuration
  • Hardware emulation methods, etc.
  • NUnit
  • TestDriven.NET

Bug tracking tools
  • Mantis
  • Jira
  • Mercury TestDirector
  • Bugzilla
  • Test Track
  • Microsoft Visual Studio Team System 2008

Test planning and test case preparation tools
  • MS Project
  • Microsoft Office, Visio
  • Rational Rose
  • BPWin, ERWin

Testing Types & Activities

Main testing terms, types and activities
> Manual and automated testing

> White-box and black-box testing
> White-box testing activities
> Black-box testing activities
> Other activities

Manual and automated testing
Manual testing is still very important and widespread because some kinds of tests cannot be automated (e.g., usability testing can only be partially automated). Moreover, some complicated faults are found only by means of manual testing techniques. Automation is realized with the help of special testing tools developed by such manufacturers as IBM (Rational Robot, Rational Performance Tester), Mercury Interactive (QuickTest Professional, LoadRunner), Segue Software (SilkTest), AutomatedQA (TestComplete), and others (see also Software testing technologies, tools). Sometimes it is necessary to use specialized automation tools, for example, for unit or code style testing.


White-box and black-box testing
Black-box testing

Black-box testing implies that a tester doesn't know how an application is designed at the code level, i.e., it involves dynamic testing of compiled applications. The tester interacts with the software system via its interface and analyzes the application's reaction. Black-box testing is therefore one of the most popular testing types because: 1) it doesn't require access to the code, algorithms, and internal data structures (all of which can be a closely guarded trade secret of a software development company); 2) it gives an opportunity to test software products from the point of view of the end user.

White-box testing (also called glass-box or clear box testing)
In this case a tester knows the internal program structure and its code. As a result, the tester can execute each program statement and function; check each intended error handling, etc. This testing involves source code reviews, walkthroughs, as well as design and execution of tests based on the access to the program code. White-box testing requires deeper knowledge of programming languages and technologies than black-box testing.
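Because the white-box tester can see every statement and branch, tests are chosen so that each path through the code executes at least once. A minimal sketch in Python, with a hypothetical classify function:

```python
# Hypothetical function with three paths; white-box tests are chosen so
# that every statement and branch is executed at least once.
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One test per path gives full branch coverage of classify().
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```

A pure black-box tester might never think to exercise the n == 0 branch; knowing the code makes the required inputs obvious.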



White-box testing activities
Here are some of the white-box testing activities:
Code testing
The task of code testing is to test program classes, functions, modules as separate code units and check their interaction. As a rule, it is accompanied by development of special test classes, start functions, test data sets (including illegal/invalid data), design and execution of appropriate test cases.

Static testing
Static testing takes place without running an application or its modules. It can include source code reviews, code inspections, peer reviews and software walk throughs.

Code style testing
This type of testing checks the code for conformance with development coding standards and guidelines, i.e. rules for code comments; naming of variables, classes, and functions; maximum line length; ordering of separator symbols; placement of terms on a new line, etc. There are special tools for automating code style testing.
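One of those rules, maximum line length, is simple enough to sketch as a toy automated check. Real projects would use a dedicated style tool rather than a hand-rolled script like this; the 79-character limit here is just an illustrative choice:

```python
# Toy code-style check: report lines that exceed a maximum length.
MAX_LINE_LENGTH = 79

def too_long_lines(source):
    """Return the 1-based numbers of lines exceeding MAX_LINE_LENGTH."""
    return [i + 1 for i, line in enumerate(source.splitlines())
            if len(line) > MAX_LINE_LENGTH]

sample = "short line\n" + "x" * 100 + "\nanother short line"
assert too_long_lines(sample) == [2]
```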


Black-box testing activities
Functional testing

It is testing of application functionality and examination of its compliance with the software requirements specification (SRS).

Regression testing
The main aim of this type of testing is to make sure that the bugs revealed in previous tests are fixed properly and that no new bugs have appeared during the fixing. If possible, it's recommended to automate regression testing, as the number of software development / bug fixing iterations is usually large.
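The automation advice above usually takes the form of a test that pins each fixed bug. A sketch with a hypothetical bug (average of an empty list used to crash):

```python
# Regression-test sketch. Hypothetical bug: average([]) used to raise
# ZeroDivisionError; the fix and a test pinning it stay in the suite so
# the bug cannot silently reappear in a later iteration.
def average(values):
    if not values:                 # the fix for the empty-list bug
        return 0.0
    return sum(values) / len(values)

# Regression test: this call would have failed before the fix.
assert average([]) == 0.0

# Existing behaviour must still hold after the fix.
assert average([2, 4, 6]) == 4.0
```

Run on every iteration, such tests catch both a reintroduced bug and collateral damage from unrelated fixes.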

Performance testing
Performance testing checks application performance and its conformity to requirements. It is especially important for complex Web applications and mobile software. For example, graphics processing can be crucial on mobile devices, so it is necessary to check that the application works properly and, e.g., doesn't lead to display "freezing". Special tools allow gathering performance metrics. One subtype of this testing is benchmark testing.

Load testing
It tests how the system works under load. This type of testing is very important for client-server systems, including Web applications (e-Communities, e-Auctions, etc.), ERP, CRM and other business systems with numerous concurrent users.

Stress testing
Stress testing examines system behavior in unusual ("stress", or beyond the bounds of normal circumstances) situations. E.g., system behavior under heavy load, a system crash, or a lack of memory or hard disk space can be considered a stress situation. Fool-proof testing is another such case, useful for GUI systems, especially those aimed at a wide circle of users.

Boundary testing
It tests the correctness of the application when the boundary values of input data are entered, as well as the proper handling of over-boundary values. For example, for a percent entry field, boundary checks include 0% (which should be processed correctly) and -1% (which should not be allowed to be entered). Boundary values can be much more complex and demand much more complex boundary testing.
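The percent-field example above can be written directly as boundary tests. validate_percent is a hypothetical validator standing in for the field's input check:

```python
# Boundary-testing sketch for the percent entry field described above:
# test exactly on each boundary and just over each boundary.
def validate_percent(value):
    return 0 <= value <= 100

# On-boundary values must be accepted.
assert validate_percent(0) is True
assert validate_percent(100) is True

# Just-over-boundary values must be rejected.
assert validate_percent(-1) is False
assert validate_percent(101) is False
```

The pattern generalizes: for every boundary in the specification, test the value on the boundary and the first value beyond it.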

Usability testing
This is one of the most complex and interesting types of testing. It is very important for all sorts of applications, but it is most acute for Web, mobile/wireless and mobile internet systems (see more in the article Mobile usability testing: problems and solutions).

Configuration testing
It checks how an application works in different configuration environments (OSs, DBMS, peripherals, mobile carriers, network capacity, hardware, etc.). A typical example is a printing application: configuration testing would include test printing on all printers available in the market or the most popular ones.
Installation testing
One of the widespread problems with software products is the installation issue. You might have faced a situation where, after buying an application you liked at a friend's place, you had serious trouble with the installation. Installation testing is aimed at making the installation as simple as possible, so that the user understands what needs to be done without quitting the installation process. This testing is often combined with documentation testing.

Documentation testing
The aim of this testing is to help in preparation of the cover documentation (User guide, Installation guide, etc.) in as simple, precise and true way as possible.

Security testing
Security testing is conducted to examine an application from the point of view of its possible impact on user safety and/or the exposure of user data to third parties. This testing is especially important for payment systems and other applications that handle critical user data.

User Acceptance Testing

Involves running a suite of tests on the completed system. Each individual test, known as a test case, exercises a particular operating condition of the user's environment or feature of the system, and will result in a pass or fail outcome. The test environment is usually designed to be identical, or as close as possible, to the anticipated user's production environment. These test cases must each be accompanied by test case input data or a formal description of the operational activities (or both) to be performed—intended to thoroughly exercise the specific case—and a formal description of the expected results.
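A UAT test case as described above pairs input data (or an operational description) with a formally stated expected result and a pass/fail outcome. A minimal sketch; the field names and the run_withdrawal stand-in are illustrative, not part of any standard:

```python
# Sketch of a UAT test case record and its pass/fail evaluation.
test_case = {
    "id": "UAT-017",
    "description": "Customer withdraws cash within the daily limit",
    "input_data": {"account_balance": 500, "withdrawal": 200},
    "expected": {"new_balance": 300},
}

def run_withdrawal(balance, amount):
    # Stand-in for the completed system in its production-like environment.
    return balance - amount if amount <= balance else balance

actual = run_withdrawal(test_case["input_data"]["account_balance"],
                        test_case["input_data"]["withdrawal"])
outcome = "pass" if actual == test_case["expected"]["new_balance"] else "fail"
```

Keeping input data and expected results formal and explicit is what makes each case's pass/fail outcome unambiguous.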



Testing Best Practice

Overview
This white paper looks at defining some guidelines for best practice when planning the testing effort for any development project or when recruiting new testers. The guidelines covered here are:

1.1 The role of testing in an organisation or project
1.2 Planning the complete testing effort
1.3 Choosing the right people and resources for testing
1.4 Planning the test activities



1.1 The role of testing in an organization or project
1.1.1 The Test Team is not responsible for all Quality Assurance
Quality should be a project or organisational activity, driven by management and not just the Test Team. Everyone taking part in the development process is responsible for the quality of their work. When systems are developed from quality requirements, using quality development standards, testing is there to validate that quality rather than to ensure it.

1.1.2 The purpose of testing is to find software bugs
While this statement is correct to an extent, the purpose of testing is not just to find bugs but to ensure that the important bugs are found. It is often too easy to spend time testing less important functionality and logging superficial bugs while critical bugs are missed because they are not being tested for.

1.1.3 Usability issues are important and should be raised
Testers are often the only people in an organization who will use the system as heavily as an expert. Formal usability testing should be recognised as part of the testing effort and, where valid, usability issues should be addressed by the development team. Choosing to ignore usability issues may affect client satisfaction and lead to lost revenue.

1.1.4 Ensure that all bug reporting and metrics are put into context for management

When presenting bug reports all limitations to the data should be explained to eliminate the risk of management taking a “too optimistic” view towards the results.

1.1.5 Don’t leave testing to the end
An effective testing process involves extensive planning for the testing effort. Including testing earlier on in the development life cycle will promote a preventative approach to development with the test team providing a supporting role in the early stages with their input increasing as the project progresses. The test team will become familiar with the product and any issues identified can be resolved earlier rather than later.

Tests designed before the development team start coding can help to improve quality. The test team can inform the developer of the kinds of tests that they will run which in turn may help the developer during the design phases and in unit testing.


1.2 Planning the complete testing effort
1.2.1 Do not bias your testing effort to functional testing
This can be a danger, as functional testing usually tests features in isolation. While this will ensure that the requirements for the project are met, it means that critical bugs may be missed. Testers planning the testing effort need to be a little more insightful and identify test cases and scenarios that cover critical paths: tests that, while exercising a certain feature, also ensure that dependent or affected functionality is tested.

1.2.2 Do not underestimate the importance of configuration and integration testing
Often this type of testing is either forgotten or given less emphasis in the development process. These tests ensure that the developed product works on different hardware configurations and with different third party software.

While these tests are often forgotten because it is expensive to maintain test environments with the necessary hardware and software the use of virtual machines etc. will mean that most companies can cover a baseline for this type of testing.

1.2.3 Remember to factor in time for performance and stress testing
Often this type of testing is left to the end because, by then, major development has stopped, most of the testing is complete, and the system is the closest it has ever been to being a potential release candidate. While this is the best time to start intensive performance or stress testing, remember to factor in enough time for refactoring, optimisation and re-testing of the code in case your product just will not scale to the level required.

Developers and testers should keep in mind any performance requirements while they are designing their code, planning their tests and performing unit testing or functional testing. Obvious performance issues are quite easy to detect in the early stages of development and testing.

1.2.4 Remember to test the installation procedures
To avoid embarrassing mistakes where the product won’t install on first go make sure that each release provides detailed Release Notes and Installation Procedures that have been verified by both the development and test teams.

1.2.5 Remember to test the supporting functionality such as online help or documentation
Testing the documentation means checking that all the procedures and examples in the documentation work. Remember to factor in enough time so that any modifications can be made. As is often the case, the product may have changed from the original specifications, so procedures and screenshots will need to be updated.


1.2.6 Ensure you have sufficient testing coverage
Plan to get coverage of all critical functionality across all areas of the system early on. This may mean that you do not get to test all functionality on the first pass, but you should cover all the critical tests.

It is better to know the status of the system across all areas than to fall into the scenario where you don't start testing a new area until you have finished the last. You will get a general feel for the stability of the system, rather than having some areas tested to a high level while others are tested less thoroughly if shortage of time becomes a factor. You can also identify areas with high numbers of defects and work with the developers to mitigate this risk.

1.2.7 Correctly identify high risk areas
As part of test planning, all key stakeholders should work with the development and test teams to identify high risk areas in the product or system. These areas are typically high risk because they have high visibility to system users, or because failure could lead to loss of property or, in some cases, even life. Some businesses have extremely complex business rules that need detailed tests.

Another good way to identify high risk areas is to review historical system data. Old bug reports and metrics may give you insight into the areas that require thorough testing.

1.2.8 Realise that testing requirements will change and be prepared to accommodate for this
It is inherent with development projects that requirements and hence testing requirements will change. Therefore no matter how well you plan your testing efforts make sure that you keep up to date with development changes so that you can make changes to your test plans and cases as necessary.


1.3 Choosing the right people for testing
1.3.1 Don’t use testing as a transitional job for new programmers
If you have hired someone who wants to be a developer, don't put them into a testing role. Some may argue that spending a couple of months as a tester will help the new person understand the system and allow you to analyse how they grasp concepts before starting on the code, but this is not conducive to good quality. If a person has been hired as a programmer, then that should be their primary focus.

1.3.2 Be careful of recruiting failed programmers as testers
There are many good testers who are not good programmers, and there could be equally as many good programmers who are good testers. Some programmers genuinely need a change from programming and can make a real difference to the testing effort.

However there are situations where the reasons that made a person a bad programmer will also make them a bad tester. For example, a programmer who had poor quality code with a high bug count because they had low attention to detail will more than likely be a bad tester as they will miss bugs for the same reason.

1.3.3 Look for the following qualities when recruiting your testers
When interviewing, concentrate on the candidate's intelligence and thought process. A good tester should have the following qualities:

> Methodical and systematic
> Tactful and diplomatic but firm when necessary
> Good troubleshooting skills. Able to notice and pursue oddities in the system.
> Skeptical about any assumptions and be able to validate whether they are true or not
> Good written and verbal skills in order to explain bugs clearly and concisely
> A knack for anticipating what others are likely to misunderstand; this is useful for UAT and usability testing. If a tester has problems understanding something, how will the client feel?
> An ability to think outside of the box to experiment and consider all possible scenarios.

1.3.4 Try and recruit testers who are domain experts in the area you need
While this is hard, especially in a consultancy firm where domain expertise is varied, try to identify the areas where Change is most likely to focus its work effort. Testers who do not have domain expertise find it hard to distinguish between important and irrelevant or superficial bugs.



1.4 Planning the testing activities
1.4.1 Plan tests before executing them
Historically, poor test design, like poor code design, leads to poor quality. Planning testing activities allows the tester to analyse the scenarios and any special cases in advance, while jumping straight into testing activities may mean that these special scenarios are never identified and hence never tested.

1.4.2 Encourage peer review of test documentation
It is good practice to encourage reviews: a quick check of the testing approach and the defined test cases ensures that sufficient coverage has been achieved and that no critical tests have been missed.

1.4.3 Set guidelines and implement tools for good issue tracking and issue reporting
An integral part of testing is how to report and track the issues that result from the testing process. There are a number of free bug tracking tools that can help you manage this activity. Similarly, there are many off-the-shelf products that offer the same functionality and then some.

What needs to be kept in mind is that while tools can help, they need to be implemented and used properly. Guidelines need to be set so that testers and anyone else reporting issues provide the specific data that will help developers investigate and, if necessary, fix the issue. Workflow should also be set up so that issues don't get lost in the system and are routed to the correct individuals as needed. An administrator may need to be appointed to oversee the whole issue tracking process.