As software systems grow larger and larger, the interaction between engineers who work on different parts of a product has been getting more challenging. While this is not really a big problem for development engineers, the ones who primarily write code, it is indeed a growing problem for 'Test Engineers', the ones who test the software and ensure that it does what it is supposed to do for the end user.
Let me take a couple of examples to explain this better. In the first case, we will take 'MS Word', which is part of the 'MS Office' family of products. In the second case, we will take a much smaller product like 'Adobe Reader' (more popularly known by its former name, 'Adobe Acrobat Reader').
MS Word has tons of modules, components, or 'sections of code', each with its own small team. These may be File IO (Open, Save, Save As), Fonts, Print, Tables, Mail Merge, Formatting, Bookmarking, Help and so on. Some of these could be common across products, e.g. font handling, Help etc. Each of these small teams has its own test engineers, and most of them test 'their' piece within certain ideal 'entry and exit' boundary conditions. It is always assumed that whoever comes before this node in the value chain behaves well, and that whoever comes next in the chain behaves well too. That is wishful thinking.
Let's take an example:
Enter Text -> Save File -> Select Text -> Bold the text -> Undo Bold -> Save -> Close.
In this workflow (a logical sequence of actions performed for a task), let's take the case of the team testing the 'Undo and Redo' function.
There is a good chance that this team is writing some common code that other products could use as well. To keep it simple, let's assume that this 'component' expects the host application to 'fill' a stack with 'actions'. When someone calls 'Undo', this function simply 'pops' the top item and passes back a small XML payload containing the details filled in by the host app. A simple program like that may seem to need no testing at all. So when someone does an Undo, the tester on this team expects that they just need to pop the last value and supply the XML; it is for the host application to manage the rest. This makes testing much simpler, a linear problem to solve. In an ideal world this would be the best way to test our software, but in the real world it is an invitation to disaster, more so when the software is not mature and is still evolving.

There are countless situations in which 'Undo' can be called. For the Undo tester, it is impossible to think of all of them, because he is always working with that 'ideal entry and exit condition' mindset. What happens if someone testing 'Save As' does an 'Undo' before and after a 'Save As'? What happens if we do an 'Undo' after we close a document? What happens if we 'Undo' as the very first thing? In all these cases, the 'Undo' code might be doing the technically right thing, but it may not be the correct thing to do.

In the course of software development, a lot of code is written after bugs are reported. Imagine a scenario where something unwanted happens when we 'Undo' before any action has really happened. The 'App' coder expects the 'Undo' coder to handle this scenario, whereas the 'Undo' coder expects the 'App' coder to do it. While the bug gets fixed eventually, we might not know who fixed it. So the same bug might occur again in a different scenario if it was fixed only on the 'App' side.
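The contract described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `UndoStack` class and its XML payload format are my inventions, not any real product's API); the interesting part is the empty-stack case, which sits exactly in the gap between the component tester's and the app tester's assumptions.

```python
# Hypothetical sketch of an undo component that expects the host
# application to fill a stack with 'actions'. The class name and the
# XML payload format are illustrative assumptions only.

class UndoStack:
    def __init__(self):
        self._actions = []  # filled by the host application

    def push(self, action_xml):
        """Host app records an action, e.g. '<action type="bold"/>'."""
        self._actions.append(action_xml)

    def undo(self):
        """Pop the most recent action and hand its XML back to the host.

        Here is the ambiguity: if the stack is empty (Undo as the very
        first thing, or after the document is closed), should this
        component raise, return None, or should the host have guarded
        the call? The contract is unspecified, and neither tester may
        feel they own that question.
        """
        if not self._actions:
            return None  # or raise? Nobody decided.
        return self._actions.pop()


stack = UndoStack()
stack.push('<action type="bold" range="0-5"/>')
print(stack.undo())  # the XML the host pushed
print(stack.undo())  # None -- the empty-stack case nobody owns
```

Tested in isolation with a well-filled stack, this component looks trivially correct; the failure mode only appears in a workflow that reaches it with an empty stack.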
The only chance of catching this bug is to actually test both features in one go, viz. 'Save As' and 'Undo'.
Such testing, where we are not really testing one feature but a set of features, is usually called 'System Testing'. But then 'System Testing' is everything. This kind of testing is better known as 'Workflow Testing', since we exercise a complete workflow: opening a document, doing something, saving it, making a mistake, undoing it, saving again, closing.
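If your product has an automation layer, a workflow test can be scripted directly as the sequence of user actions. The sketch below uses a tiny in-memory `FakeEditor` as a stand-in (its methods are assumptions for illustration, not a real automation API); the point is that one test deliberately crosses the typing, saving, formatting, and undo feature boundaries.

```python
# Illustrative workflow test against a tiny fake editor. FakeEditor and
# its methods are invented stand-ins for a real automation API.

class FakeEditor:
    def __init__(self):
        self.text = ""
        self.bold = False
        self.saved = None
        self._undo = []  # stack of previous (text, bold) states

    def _record(self):
        self._undo.append((self.text, self.bold))

    def type_text(self, s):
        self._record()
        self.text += s

    def toggle_bold(self):
        self._record()
        self.bold = not self.bold

    def undo(self):
        if self._undo:
            self.text, self.bold = self._undo.pop()

    def save(self):
        self.saved = (self.text, self.bold)


def test_bold_undo_save_workflow():
    app = FakeEditor()
    app.type_text("hello world")  # Enter Text
    app.save()                    # Save File
    app.toggle_bold()             # Bold the text (the 'mistake')
    app.undo()                    # Undo Bold
    app.save()                    # Save again
    assert app.bold is False
    assert app.saved == ("hello world", False)


test_bold_undo_save_workflow()
print("workflow test passed")
```

A feature-focused suite would test `toggle_bold` and `undo` separately and pass; only the scripted sequence checks that Undo interacts correctly with a save that happened earlier in the workflow.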
To ensure that you find all these bugs, institutionalize regular sessions of 'Workflow Testing' in your test team. These can be held in a single place, say a 'Test Lab', which also helps build camaraderie in the team. Make it regular, say every Friday from 3 to 5 PM, and have a simple mechanism to measure success, say the number of bugs reported.
Let me talk briefly about the other example, and then I will try to give you a 10-point summary of how best to do 'Workflow Testing'.
In Adobe Reader, there is a feature called 'Print' and a feature called 'Open PDF'. Both testers would do extensive testing by trying all kinds of PDFs, but the 'Print' tester probably won't really focus on the 'Read' part, so he won't notice a bad glyph (which might print well), improper kerning, or encoding issues. The same goes for the 'Open PDF' tester. Now imagine a bug that reads "…can't print high-ASCII characters…". It may happen that the underlying library that encodes and renders is the same for both, and the bug gets fixed in only one of the workflows.
I hope you are getting my basic point: do more workflow-driven testing than feature-focused testing. Here are some tips on how best to do this.
1. Make it regular: fixed time, fixed place.
2. Do it in a single place. This builds team camaraderie, and it also makes the whole team more efficient and knowledgeable as they listen and contribute to different discussions.
3. Don't make the workflow testing ad hoc. Identify and write down some of the key workflows, and have the whole team exercise only those workflows.
4. Measure success through a simple mechanism, e.g. (total number of valid bugs reported / number of people) per session.
5. Make it fun 🙂
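The metric in tip 4 is simple enough to track in a few lines. The session numbers below are made up purely for illustration.

```python
# Illustrative calculation of the suggested session metric:
# (total valid bugs reported) / (number of participants), per session.
# The sample figures are invented, not real data.

sessions = [
    {"valid_bugs": 12, "people": 6},
    {"valid_bugs": 9,  "people": 5},
]

for i, s in enumerate(sessions, start=1):
    rate = s["valid_bugs"] / s["people"]
    print(f"Session {i}: {rate:.1f} valid bugs per tester")
```

Tracking this per session makes it easy to spot whether the Friday sessions are still paying off or the written workflows need refreshing.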
It's not 10 points; 5 is better. That's all about the workflow testing I have been up to. How many of you actually do it? Share your thoughts.