
Curse of the QA

I have been working as a software tester for close to four years, all of them in India for American clients; I mention this to frame the kind of experience I have. In that time I have noticed a discernible gravitation toward work that involves coding or programming, and an equally discernible indifference toward work that involves less programming and more testing. Within testing itself, automation is a sought-after skill while manual testing is a taken-for-granted trait. This reflects a propensity, especially among software professionals, to value computer knowledge over knowledge of business functions.

Testing professionals are partly to blame; the rest of the blame can be attributed to institutes and companies. While I was studying Computer Science Engineering there was no emphasis on software quality as a concept. The usual knickknacks about ISO and SEI were covered, but nothing to dispel the notion that led almost 90 percent of engineering college graduates to say, "I do not want to be a software tester, I want to be a software developer." The training sessions at our first jobs did not do much to change this mindset; again, software testing was treated as a tertiary function. Somehow it was ingrained in our minds that writing code is the only actual component of software development (perhaps "software" and "code", and "writing" and "development", were being used interchangeably).

This has led to some characteristics that one often gets to see in projects.

1. Software testing is not considered cool, and if you are into manual software testing, well, you may as well be a crocodile: the cold-blooded reptile that has barely evolved in millions of years. To be sure, do not gloss over the fact that this also makes crocodiles one of the very few species to have thrived without evolving.

2. Many "code writers" are unable to visualize a software project as an enabler of a real-life function. This leads to questions like "Why would the user do something like that?" in response to queries like "Why doesn't your software support this particular scenario?" To be sure, the former is a valid response to the latter, but only if it is backed up by real data.

3. Automation efforts are abandoned midway. The most cited reason is that the automation effort was started but the application was changing continuously (it was not stable), and hence the effort was halted. Unsurprisingly, this reason is cited most often when the testing is done through the UI.

4. Voluntarily or involuntarily, software testers are unable to breach the ceiling when it comes to understanding the inner workings of their applications. They can follow the flow of the software and can even put themselves in the actual user's shoes, but they cannot understand the engine behind the shining car. In more complex applications, software testers are unable to conjure scenarios akin to real-world functions – for example, generating realistic mashup scenarios for an enterprise mashup platform. The result is thorough testing of existing functionality that falls short when it comes to finding missing or required functionality.

5. Some software testers who do breach the ceiling and demystify the inner workings are unable to keep their understanding of the functionality and of the implementation separate. This results in justifying missing parts of the software to themselves. This is what I call the problem of "understanding" the code. Software testers SHOULD be in a position to understand the code, but they should always remember that they are paid to "not understand it" – meaning they should not use their knowledge of the intricacies to justify the application's shortcomings to themselves. On the contrary, they should use that knowledge to find the shortcomings and bring them to notice.
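On point 3, one mitigation worth mentioning (my own addition, not something this post originally prescribed) is the page-object pattern: test scripts talk to a page class rather than to raw locators, so a UI redesign changes one file instead of every test. Here is a minimal sketch in Python, with a hypothetical `FakeDriver` standing in for a real browser driver such as Selenium WebDriver:

```python
# Page-object pattern sketch: tests depend on LoginPage, not on raw
# locators, so locator churn in a changing UI touches only LoginPage.

class FakeDriver:
    """Stand-in for a real browser driver (e.g. Selenium WebDriver)."""
    def __init__(self):
        self.typed = {}    # locator -> text typed into that element
        self.clicked = []  # locators clicked, in order

    def type(self, locator, text):
        self.typed[locator] = text

    def click(self, locator):
        self.clicked.append(locator)


class LoginPage:
    # All locators live here; when the UI changes, only these lines change.
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "css=button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# A test script written against the page object survives locator churn.
driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
assert driver.typed["id=username"] == "alice"
assert driver.clicked == ["css=button[type=submit]"]
```

This does not make an unstable application stable, but it lowers the maintenance cost enough that the automation effort has a chance of surviving the churn.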

Do let me know your thoughts on my thoughts.
