Working in the crowd is all about discovering defects – True
At the same time, the whole process is very different from standard QA procedures. Let me walk you through a typical crowd cycle. For now, I'll skip all the bottlenecks that could occur. For the sake of this example, the environment is working, the product can be downloaded, and a basic specification is available.
Beginning of the crowd test cycle
A few seconds before the test.
Let the testing begin.
The timing and the exact countdown literally matter. Quite a few testers go bug hunting as soon as the test starts. The first defects normally show up within the first minutes or sometimes even seconds (in case you wonder how that's possible – wait a bit, dark secrets are about to be disclosed).
All the most obvious defects will be discovered within the first hour or so.
- Unable to register a new account?
- The product can’t be deleted from the cart?
- Search doesn’t return the corresponding item?
Rest assured, those defects will be logged within minutes of the test opening. Not surprising – there is a bunch of defect-thirsty testers looking for their prey.
You get the idea. Competition is especially tough within the first hour. Duplicate defects are very common at this stage, as two or even more testers will discover the same bug and post it almost simultaneously. The time difference between logged defects is sometimes less than a minute – yet that is enough to decide the outcome. The payout is only granted to the earliest bug; all the others are rejected.
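The earliest-report rule can be sketched in a few lines of Python. The tester names and timestamps below are made up purely for illustration:

```python
from datetime import datetime

# Hypothetical duplicate reports of the same bug, as (tester, timestamp) pairs.
reports = [
    ("alice", datetime(2020, 5, 1, 10, 0, 40)),
    ("bob",   datetime(2020, 5, 1, 10, 0, 5)),
    ("carol", datetime(2020, 5, 1, 10, 1, 2)),
]

# "The winner takes it all": only the earliest report gets the payout.
winner = min(reports, key=lambda r: r[1])[0]
rejected = sorted(tester for tester, _ in reports if tester != winner)

print(winner)    # bob
print(rejected)  # ['alice', 'carol']
```

Seconds decide everything here, which is exactly why the first hour turns into a race.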
Yep, the crowd lives by “the winner takes it all” principle.
Frankly, I always disliked this part, as it's not what I would call testing. “Speed bug raising” seems more appropriate to me here. Another reason for my negative attitude is that this is the stage where the dark magic happens. Let me elaborate.
There are basically two ways for defects to show up abnormally quickly:
Placeholder defects. These are bugs logged with dummy data (e.g. random text in the steps or results, fake attachments) with the sole purpose of securing the fastest defect reporting time. Afterward, the issue is edited with the real input. This practice is prohibited by all platforms, but some folks do it hoping no one will spot the ruse. A team lead or bug reviewer may indeed overlook it, but other testers are also vigilant about such incidents and will report them if they notice. So in case you decide to play with fire, be ready to get burnt.
Defects known from previous cycles. Sure, many cycles come with a known defect list. But here is the trick: not all the defects reported by the crowd get transferred to that list. Hence, testers take advantage of it and report defects found in a previous cycle – either their own or other testers'. I've actually seen tests where most of the issues were like that. Boy, the customer will be thrilled reviewing these.
But once the crowd smells easy cash, it's quite hard to get in its way. Nevertheless, judging by the latest news, this practice is being eradicated.
Defect raising process
As for logging defects, the flow is quite standard:
- Find a defect.
- Check that the bug is not a duplicate.
- Check that it's in the scope of impact (related to a section in scope and matching the requested severity or type – functional/usability/GUI).
- Log the defect, sticking to the rules of the specific crowd testing platform.
- Repeat all over again.
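The loop above can be sketched roughly like this. All helper names, scope rules, and sample findings are hypothetical – real platforms run these checks through their own UI, not an API:

```python
# A minimal sketch of the defect-raising loop; everything here is made up
# for illustration and is not any crowd testing platform's real interface.

known_defects = ["Search returns no results for exact title"]
scope = {"sections": ["cart", "search"], "types": ["functional"]}

def is_duplicate(summary):
    # Naive check: is a similar summary already logged?
    return any(summary.lower() in d.lower() or d.lower() in summary.lower()
               for d in known_defects)

def in_scope(section, defect_type):
    # Only sections and defect types requested by the cycle count.
    return section in scope["sections"] and defect_type in scope["types"]

def log_defect(summary):
    known_defects.append(summary)
    return summary

logged = []
findings = [
    ("Unable to delete product from cart", "cart", "functional"),
    ("Unable to delete product from cart", "cart", "functional"),  # duplicate
    ("Logo slightly misaligned", "home", "GUI"),                   # out of scope
]
for summary, section, dtype in findings:
    if is_duplicate(summary) or not in_scope(section, dtype):
        continue
    logged.append(log_defect(summary))

print(logged)  # ['Unable to delete product from cart']
```

Out of three findings, only one survives the duplicate and scope filters – which mirrors how much of your raw output gets discarded in a real cycle.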
Given that the whole process is time- and effort-consuming, you'd better have some strategy in order to be efficient. Let me suggest a few hints on the following aspects.
Reading the instructions
Standard exploratory tests don't contain detailed scenarios, but they do include some information you need to be aware of. A common problem I've seen on crowd testing platforms is that the information is dispersed across different areas, so it's very easy to miss something important.
- Start by scanning the page, as some of the requirements are duplicated from cycle to cycle.
- Focus on the specification specific to this project that you haven't seen anywhere else.
- Check any conversation threads, comments, and chats.
This will save you time later, trust me. You don't want to chase irrelevant defects or have them rejected later.
Checking the duplicates
Given that bugs appear at a crazy speed, it may be quite challenging to follow them all. You will also need to check the known defects, which are sometimes located in more than one place – for example, a known-defects tab and a spreadsheet. Take a look everywhere, but remember that this is not where diligence pays off; don't spend too much time here. Search the existing/known defects with a relevant keyword, skim the list, and if nothing resembling your bug shows up, go ahead and post it. In some cases, another tester may log the same defect faster than you, which hurts, as your defect just became a duplicate. On some platforms you may request its removal (Bugfinders) or delete it yourself (Test.io). If you don't, your defect gets rejected, and that slightly affects your overall rating.
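The "search with a relevant keyword, skim, then post" routine amounts to a naive keyword scan. The defect titles and the helper below are hypothetical, not any platform's real search feature:

```python
# Hypothetical known-defect titles pulled from the cycle's tab and spreadsheet.
known = [
    "Search doesn't return the corresponding item",
    "Unable to register a new account",
]

def quick_duplicate_scan(candidate, keywords):
    # Skim: flag any known defect sharing a keyword with your candidate bug.
    kw = {k.lower() for k in keywords}
    return [d for d in known if kw & set(d.lower().split())]

# No keyword overlap -> nothing alike, go ahead and post it.
hits = quick_duplicate_scan("Cart item can't be deleted", ["cart", "deleted"])
print(hits)   # []

# Overlapping keywords -> skim these titles before posting.
hits2 = quick_duplicate_scan("Search shows wrong item", ["search", "item"])
print(hits2)  # ["Search doesn't return the corresponding item"]
```

A rough scan like this is all the time the exercise deserves – the payoff for extra diligence here is close to zero.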
Logging the defect
Each platform has its specific requirements. A bug with a generic expected result may be accepted by one crowdsourced testing company and rejected by another. Your goal is to figure out the minimally acceptable criteria in each case. Ideally, you should spend only a few minutes on reporting an issue. I learned this the hard way. Because of my perfectionism, I would meticulously craft each defect report in detail, regardless of whether it was actually required. I can even recall an example when it directly backfired on me. I had taken some extra time to investigate the problem and described the exact reason for the faulty behavior. The defect was eventually rejected because the TL considered it a usability issue, which was out of scope. To my surprise, exactly the same defect in another cycle, written in more generic words (not disclosing the real issue), got accepted without any problems. Draw your own conclusions.