I was cleaning out a folder on my machine when I stumbled on an oldie but goodie PDF called “Classic Testing Mistakes” by Brian Marick.
In the paper he breaks the classic mistakes into 5 themes and explains how to resolve them:
1. The Role of Testing: who does the testing team serve, and how does it do that?
2. Planning the Testing Effort: how should the whole team’s work be organized?
3. Personnel Issues: who should test?
4. The Tester at Work: designing, writing, and maintaining individual tests.
5. Technology Rampant: quick technological fixes for hard problems.
Common mistakes that I see are:
- Communication: All team members need to be able to communicate with each other. A tester needs to know what to test, when to test it, and the importance of what is being tested.
- Ability to change: Schedules & resources change, so everyone on the team needs to understand that change is inevitable.
- Documentation: This is key, especially in test steps, defect reports, and findings reports. If you find a defect, you have to enable the next person to recreate it by providing detailed steps.
- Automation: Not everything can be automated, nor should it be. Pick the tests you find yourself running the most; smoke tests are great candidates for automation.
- Results: Don’t be fooled by the results. If a result shows that a test failed, investigate it before throwing up a flag.
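To make the automation bullet concrete, here is a minimal smoke-test sketch using Python's unittest. The `parse_price` function is a hypothetical stand-in for whatever frequently exercised piece of your application you would actually import.

```python
import unittest

# Hypothetical stand-in for the application under test; in practice you
# would import your real entry points instead.
def parse_price(text):
    """Parse a price string like '$19.99' into cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

class SmokeTests(unittest.TestCase):
    """Fast checks of the most-exercised paths: good automation candidates
    because they are stable and run on every build."""

    def test_basic_price(self):
        self.assertEqual(parse_price("$19.99"), 1999)

    def test_whole_dollars(self):
        self.assertEqual(parse_price("$5"), 500)

if __name__ == "__main__":
    unittest.main()
```

The point is that a smoke suite like this stays small and fast, so it can run on every build without anyone deciding whether there is time for it.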
Every project has mistakes of some kind, so I thought I’d see what other people felt were common mistakes as well. I would like to thank all the contributors on Twitter & LinkedIn for taking the time to respond to the question/tweet. Below are the responses I received, with my comments (in purple of course):
Response from @michael_d_kelly on Twitter:
A common mistake I make is to get tunnel vision on the specific changes being made in a release, ignoring other quality criteria. I agree; there are times when a person is so focused on one piece that they forget to look at the big picture.
Response from @aegeansys on Twitter:
Ignoring the fact that a certain bug fix is a feature, treating it as a normal bugfix and not adjusting the testing accordingly. When a bug/defect is marked as a feature, the requirements should also be updated. I have seen times where test cases are updated but the requirements are not.
Response from Vinodh Sen Ethirajulu on LinkedIn:
1. Functional test plan must be written by a tester and reviewed by a domain expert/business analyst. This must be withheld from the developer. This is intentional. Disagree; a test plan should be sent to all project team members so everyone knows what the testing group is planning to test.
2. Code checkin must be allowed only after all problem scenarios are tested and passed in unit test. I agree, but there will be times when an emergency fix needs to go in. Also, I don’t think you can say no checkins are allowed, since there are pieces of code that are not tied directly to others. I think the rule here should be that the code should not be deployed/delivered to the test group until testing has been completed.
3. There are tools that can anticipate runtime errors better than a human reader/tester. All modules under the current release must be subjected to such tools; code checkin must be allowed only after this. This would be more up to the company, since not all tools are allowed within an organization’s framework.
All three points are common sense, but on some projects they are not followed strictly due to time or resource constraints. An example of a tool as in point 3 is FindBugs for Eclipse; I am sure such tools exist for other platforms. Yes, there are tools out there, but they are not always allowed due to security.
Response from Geoff Feldman on LinkedIn:
1. Time and repetition are not relevant. Functions and boundary conditions on the functions, including preserved state as it affects other functions, are key. This occurs everywhere, and all teams are impacted by it.
2. Testers looking busy and unable to explain their coverage is a huge red flag. The bigger flag is when the defects keep showing up in production
3. Methodology is key. So is a testing process that begins with repeatable, maintained developer unit tests. Agree; the bigger issue I see is that not everyone agrees on which process/methodology is going to be followed. Everyone needs to agree, and if it needs to be changed, everyone should be made aware of the change.
Response from Rick Kiessig on LinkedIn:
1. Unit tests written by the same person who wrote the code, because they often use the same faulty logic in their tests as in the code itself. Solution: have some unit tests written by other developers or QA. Great solution.
2. Not testing for quality: including performance, scalability, and security. I think all applications should be tested for these.
3. Focusing on cases that are really “self-testing,” rather than on the much-less-visible corner cases. Agree
4. Failing to include code coverage measurements and targets in unit test guidelines. Agree
5. Not following a coherent testing strategy. Agree, consistency is key.
6. Testing low-level components only, and forgetting to test the system as a whole. This is not something a test team should forget; if it happens, I hope you bring it up right away to get it corrected.
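Point 3 above, about “self-testing” cases versus corner cases, is worth a small illustration. Here is a hedged sketch with a hypothetical `clamp` function: the happy-path check passes almost by inspection, while the boundary inputs are the ones where defects actually hide.

```python
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    return max(low, min(value, high))

# Obvious, nearly self-testing case: adds little confidence on its own.
assert clamp(5, 0, 10) == 5

# The less-visible corner cases are the ones worth explicit tests:
assert clamp(0, 0, 10) == 0      # exactly on the lower boundary
assert clamp(10, 0, 10) == 10    # exactly on the upper boundary
assert clamp(-1, 0, 10) == 0     # just below the range
assert clamp(11, 0, 10) == 10    # just above the range
```

A suite made only of cases like the first assertion looks green but tells you very little; the boundary assertions are where the coverage actually comes from.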
Response from Bill Rinko-Gay on LinkedIn:
1. Not involving the QA group at the requirements phase. QA should be reviewing and commenting on requirements and design. Agree
2. Writing tests to the implementation rather than the requirements. This not only leads to trash as the implementation changes, but validates the wrong thing. Developers validate the implementation. Testers should validate the requirements. Testers should validate that the implementation works by executing smoke tests and then running the test cases that are based on the requirements.
3. Automating only after the software is stable. The bulk of passed test cases should be automated. Disagree; you cannot automate everything, and just because a test case passed doesn’t mean it should be automated. What should happen is that when pieces of functionality are stable and being repeated in testing, they should be automated.
4. Automation experts should write code that can handle the instability of early builds. Automation experts should have development expertise, not just record/playback. Use humans for inventing new tests, not running old ones. Partially agree: yes, automation experts should have more than record & playback expertise, but if the code is not stable they cannot automate it. You automate tests whose expected results you know, not ones you think you know.
Response from Jonathan Ross on LinkedIn:
1. The tester should have an appreciation of the real world use cases of the deliverable being tested. Agree
2. If something feels wrong it is probably a bug of some nature. This is not always the case; there have been times when I thought something was wrong, but when I went back to the requirements it was correct and not a bug/defect. If it feels wrong, check the documentation before writing up the bug/defect.
3. Don’t wait until the end of the dev cycle to test. Small test cycles for small changes. Everyone on the project should agree to use an agile/iterative approach for this.
4. Not maintaining test documentation and scripts/tools. This is huge, especially since test teams are usually on multiple projects, so up-to-date documentation is very important for keeping track of what has been done, why, and what is left to do.
You can avoid these and other common errors with good planning, close work with R&D / Product, and a highly motivated QA team who are kept passionate about their work.
Response from Jim Hanlon on LinkedIn:
1. The most common mistake in the testing process is underestimating the number of test cases required to adequately test the Application Under Test. This could be due to requirements constantly changing and/or the test team not being involved in the requirements phase.
2. A corollary to the first is underestimating the time required to test the AUT.
3. No automated testing strategy: generation/execution/reporting. Agree
4. Some larger organizational issues: poor coordination with the requirements process; poor coordination with the debug/fixpack/build/release process. When this happens, someone should step up and raise it as an issue so it can be resolved.
Response from Mike Sax on LinkedIn:
1. Not finding bugs doesn’t mean they’re not there. It’s easy to start a testing effort seeing a small volume of defects and think “wow, it must be pretty clean!” Don’t wait until the midpoint of the effort to get on top of the testing process: ensure that testers are effectively executing scripts and workflows, and evaluate the scripts and process to ensure an effective test cycle. Agree.
2. Communication: Testers and developers who will be fixing bugs should be in close communication with each other. IM, phone, in person, rather than email that can get lost in the shuffle. Too often, bugs fall into a “researching” status for days while emails shuffle back and forth to answer a question that could have been addressed quickly in a face-to-face chat. Agree; touchpoint meetings would also help resolve this.
3. Effectively describing bugs: Especially for non-QA professionals participating in testing. They need to be educated, prior to beginning testing, on the expected standards for creating defect records to ensure that as much information is relayed to the developers as possible (detailed error messages, screenshots, steps to reproduce, etc.). Capturing all of this information initially cuts down on the need for back-and-forth communication between the developer and tester. Agree; all bugs should have detailed steps & screenshots so they can be recreated.
4. Managing scope through triage: Effective defect triage will help minimize enhancement requests (either intentional or inadvertent) that make it into a developer’s queue and impact the timeline for defect remediation, rather than flowing into the project’s change management process for new or changed functionality. Agree.
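Point 3, on effectively describing bugs, can be summed up as a checklist of fields every defect record should carry. Here is a hypothetical sketch of that checklist as a Python dataclass; real trackers like Jira or Bugzilla have their own schemas, so this is only an illustration of the information to capture, not any tool's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Hypothetical defect record: the fields a tester should fill in up
    front so the developer can recreate the bug without a round trip."""
    title: str
    steps_to_reproduce: list        # numbered, detailed steps
    expected_result: str
    actual_result: str
    error_message: str = ""         # exact text, not a paraphrase
    screenshots: list = field(default_factory=list)
    environment: str = ""           # OS, browser, build number

# Example of a filled-in record:
report = DefectReport(
    title="Checkout total miscalculated for discounted items",
    steps_to_reproduce=[
        "1. Add a discounted item to the cart",
        "2. Proceed to checkout",
        "3. Observe the order total",
    ],
    expected_result="Total reflects the discounted price",
    actual_result="Total shows the full price",
    environment="Windows 10, Chrome 120, build 4.2.1",
)
```

If every record arrives with these fields populated, most of the “researching” back-and-forth described in point 2 never needs to happen.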
Response from Walter Tyree on LinkedIn:
Testing on a hardware/software configuration that is not representative of what the users have. The computers that developers and testers have are quite often not what the user community will be running. Having a test lab of machines configured for all of the scenarios within an Enterprise (and forcing people to test in the lab) can save lots of time (Ghosting can really help here). Agree; testing should be completed on a variety of configurations to make sure end users can use the application being delivered.
What common testing mistakes do you see? How do you resolve these issues so they don’t keep happening? What are your thoughts on the responses noted in this blog? Do you agree/disagree?