More Answers About the Myths of Agile Testing

Earlier this week we held a webinar - "7 Myths about Agile Testing - Busted!" - that generated a lot of good questions about how best to conduct testing in a fast-paced, fluid Agile environment. We didn't have time to get to all the questions, so the Aricent experts who ran the webinar, Gopinath Ramachandran, Gayatri Singla, and Srimanta Kumar Purohit, have gone through the unanswered questions and answered them below.

You can find out more about Aricent's testing services here.

Q: Iterative processes would also result in ever-changing requirements. How would Agile ensure that a project is not at risk in a situation where there may be 10 to 15 other interfacing applications, especially in the telecommunications industry?

When multiple teams are involved, scaling Scrum can be quite helpful. Depending on the type and level of interaction or sync-up required, a Scrum of Scrums (a representative from each Scrum team) or a Meta Scrum (the product owner from each Scrum team) can be considered.

On top of this, key members from each Scrum team can meet in a workshop to freeze or align requirements at the epic level (product functionality, architecture, and interfaces). This workshop can be held periodically, with each cycle spanning 2-3 sprints/iterations.

Where multiple interfacing applications are involved, the Product Owner plays a key role in deriving and prioritizing the requirements.

Q: With pair testing, it looks like Agile testing requires more resources. Your views?

The intent of pair testing is to achieve higher-quality testing, and hence a higher-quality product. It is not mandatory to do all testing in pairs. Pair testing is quite useful when the product under test is complex, or when knowledge sharing is needed to achieve better testing scope and focus. The underlying principle is that the testing being done should encourage lateral thinking.

Pair testing is quite useful for exploratory testing, as it brings together the different ideas, perspectives, and skill sets of the two individuals testing the same thing. It also builds in online (real-time) reviews. It is an effective way of sharing knowledge and providing on-the-job training for new team members.

We normally do pair testing with a charter (i.e., which area to focus on, what the prime scope is, what the intended output of the testing is, how much time to spend on it, etc.). While testing, the testers exchange ideas, views, and thinking, and try to track down hard-to-find defects in the product.

Q: Would you elaborate more on the process for an operational feature and an operational product?

The illustration in the webinar depicted that operational features are governed by user stories for a specific feature (using business requirements to derive a use case). But when we want a product ready for deployment, it needs to cater to additional aspects such as non-functional software requirements, feature interaction, reliability and performance, and usability and security. In such cases, the deployment requirements have to be considered along with the business requirements.

Q: In general, over-communication is always considered better than too little communication. So in the context of Agile, I think it just assumes higher significance. Your views?

Yes, we agree with the statement, since success in Agile is enhanced by effective communication and collaboration. That is why, in our presentation, we framed it as “Agile encourages communication and collaboration for better project execution rather than creating time and effort overhead.” Having said that, if the team does not approach communication and collaboration effectively, it may create chaos. For example, the Daily Scrum should last at most around 15 minutes, and the Scrum Master should not try to resolve impediments in that meeting itself, as it takes up every team member's time and the issue at hand may not be relevant to many of them. So the team should follow these basic rules:

  • All meetings should be strictly time-bound
  • Roles and responsibilities for each meeting should be clearly defined
  • The agenda should be pre-shared and well understood

Q: What is the maximum size sprint team that you have managed -- was it successful?

For a particular product line we had 12 Scrum teams, each with around 8 members. It was successful.

Q: What do you apply as 'done criteria' to determine whether a story has been completed or not within a Scrum?

“Done criteria” are purely dependent on the nature of the project and vary significantly from project to project.

The following is a sample Definition of Done (DoD) derived from one of our messaging-domain engagements (the major testing DoDs are in italics). The DoD checklist is reviewed by the Scrum Master and the team, and approved by the Product Owner.

  1. Test descriptions are written in QC and reviewed
  2. Code is produced (all 'to do' items in the code are completed)
  3. The build completes without errors and basic unit tests pass
  4. The build is deployed to the system test environment and passes system tests
  5. Related ATP tests are written and reviewed
  6. ATP tests are executed and their status is updated in QC

Q: Myth #3 was about Poor Code Coverage. When moving into an Agile development process, isn't "code coverage" an obsolete concept? We go with exploratory testing, and quality comes more from feature exploration than from code coverage (monkey checking). Your views?

We have seen that code coverage plays a vital role in Agile projects, since development is incremental. User story testing is generally ensured by the Scrum team, but it is still difficult to gauge code quality from user story testing alone. In a few projects, we have seen the customer put >95% code coverage into the DoD.

Besides code coverage, the project also has to address compatibility and integration issues; otherwise performance and reliability suffer during future integration.

Feature exploration ensures quality execution of the user story and finds hidden defects that are less likely to be discovered with pre-planned, strategized testing. But exploratory testing does not necessarily improve code coverage.
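As a hedged illustration (not taken from the webinar), the JUnit sketch below shows the kind of targeted unit test that drives branch coverage: it exercises both outcomes of a small validation method, which feature-level exploration might never force. The class and method names (MessageValidator, isRoutable) are invented for this example.

    // Hypothetical example: a tiny piece of production code and the unit
    // tests that cover both of its branches. Exploratory feature testing
    // might only ever submit routable addresses, leaving the error branch
    // uncovered and the coverage figure below the DoD threshold.
    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class MessageValidatorTest {

        // Minimal stand-in for the code under test.
        static class MessageValidator {
            boolean isRoutable(String address) {
                if (address == null || address.isEmpty()) {
                    return false;                 // error branch
                }
                return address.contains("@");     // happy-path branch
            }
        }

        @Test
        public void routableAddressIsAccepted() {
            assertTrue(new MessageValidator().isRoutable("user@example.com"));
        }

        @Test
        public void emptyAddressIsRejected() {
            // This test exists mainly to exercise the error branch, which is
            // what raises the branch coverage reported against the DoD gate.
            assertFalse(new MessageValidator().isRoutable(""));
        }
    }

A coverage tool run as part of CI then reports whether both branches executed, and the DoD threshold (e.g. >95%) is checked against that report.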

Q: Is it truly possible to consider Agile techniques for a large back-end test organization which covers non-functional requirements such as performance, load, and duration test objects?

Based on our experience, it is very much possible. We have success stories in similar engagements.

Q: Can formal methods for validation and verification be used with Agile?

Yes, formal methods, i.e. systematic techniques, are partially used in Agile. Use cases with testability criteria (user stories) are targeted through scripted tests and dedicated testing phases, and risk assessment is also used to define a systematic test scope.

Ideally, Agile leverages the benefits of all three testing techniques: systematic, exploratory, and automated.

Q: What do you think about sprint teams that use developers in tester roles -- do you see any success in that configuration?

Depending on the project requirements, we do have developers take care of testing needs, especially unit/feature testing. But in such projects we also include a phase where system testing is handled by a tester, or where user story testability is owned by a tester.

Experienced testers bring unique skills to the table, such as the ability to visualize what might break the system, expertise with test tools, knowledge of testing methodologies and processes, an understanding of deployment aspects, and the ability to manage end-user expectations.

Q: Should most of the defects in any iteration be detected during UT or ST?

The intent should be to catch most defects during the unit testing (UT) phase, as early defect detection reduces rework and the cost of fixing defects.

That’s why Agile focuses on engineering practices like test-driven development (TDD), refactoring, and continuous integration (CI), which help make the UT phase effective.

But you should also consider that white-box unit tests differ in granularity and objectives from downstream black-box testing, so you should still expect to find system-level defects during the system testing (ST) phase.
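To make the TDD point concrete, here is a minimal, hypothetical JUnit sketch in which the test is written before the implementation; the names (RetryPolicy, nextDelayMs) are invented for illustration. Pinning down the expected behaviour up front means a defect in the back-off arithmetic is caught at UT rather than surfacing later at ST.

    // Hypothetical TDD example: this test is written first and fails until
    // RetryPolicy is implemented; it documents the expected back-off behaviour.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RetryPolicyTest {

        // Minimal implementation, added only after the test below existed.
        static class RetryPolicy {
            private final long baseDelayMs;

            RetryPolicy(long baseDelayMs) {
                this.baseDelayMs = baseDelayMs;
            }

            // Exponential back-off: baseDelayMs * 2^(attempt - 1)
            long nextDelayMs(int attempt) {
                return baseDelayMs * (1L << (attempt - 1));
            }
        }

        @Test
        public void delayDoublesOnEachAttempt() {
            RetryPolicy policy = new RetryPolicy(100);
            assertEquals(100, policy.nextDelayMs(1));
            assertEquals(200, policy.nextDelayMs(2));
            assertEquals(400, policy.nextDelayMs(3));
        }
    }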

Q: What is the best automated test tool for wider test coverage within an Agile environment?

Test tools vary from project to project. We have used standard tools such as Hudson-based frameworks for much of our CI automation. But standard tools can be a challenge to customize, so we also use proprietary frameworks in Agile projects. Free tools like CppUnit, JUnit, FIT, and TTCN-3-based frameworks are also gaining traction.

Q: Do you use, or have you used, the notion of a validation sprint that is focused only on testing - one that would be defined as a customer acceptance test or alpha test?

Yes. We have also used a hardening sprint dedicated solely to testing.

Q: Can the Agile team be focused across one release, or is it specifically focused on one feature in a release?

We have seen both scenarios: an Agile team focusing on multiple features of a product release, or on only one feature of the release. The choice is predominantly determined by the feature complexity for that particular release.

Q: In the Scrum of Scrums, is this built up with only the Scrum Masters from the Scrum teams? Do you normally have a "master" Scrum Master for this team? How about the Project Owner for this team?

In some services projects, we have a master Scrum Master who coordinates with and assists the product owner.

For a Scrum of Scrums (SoS), the size should not be more than 8-9 people, so participation needs to be balanced. That said, any Scrum team member with the relevant skill set can be part of the SoS. The Project Owner can also be part of the team, but should be clear that he or she is there representing the Scrum team.

Q: What is the future of Agile testing, and how will it become more effective with widespread adoption?

Agile methodologies already come with a set of testing best practices. With wider adoption, Agile testing will mature further. We are already seeing techniques such as risk-based testing and writing testable requirements being applied in Agile.

Q: What is the best way to organize testing if there is one tester and 5 Scrum teams? Of course automated testing is the answer, but what is the role of this one tester?

We feel that in such a case the tester has to play the role of an analyst: ensuring that the testability aspects of the user stories are well defined and that defects are regularly reviewed. He or she can also optimize the testing by contributing effectively at retrospective meetings.

Q: Who is normally responsible for designing/running/analyzing/maintaining automated test cases? Is it the Scrum team, or do you normally have separate test automation Scrum teams?

It varies depending on the complexity and criticality of the project. We have projects where independent Scrum teams are responsible for maintaining the automation framework and the automated test suites across sprint cycles.

But when the project scope is customization and maintenance, this role is performed by the test Scrum teams.
