Performance testing of asynchronous processes

Generally, the most complicated part of performance testing is getting the data shape right. Populating the system with thousands of users, records and so on is made easier by data generation tools, but it still requires a lot of thought. Often the testing simply involves making a request and asserting that the response time was within acceptable limits. Systems that validate the parameters in these requests, particularly time-based parameters, make it more complicated to automate the performance testing. Think of session tokens, or other pieces of data that are only valid for a short period of time.
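For the simple case the script itself is small. A minimal sketch of a single timed request, assuming Java 11's HttpClient; the endpoint and the 500ms limit are illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Time a single request; a real test would aggregate many
        // samples across many concurrent users.
        long start = System.nanoTime();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/customers")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Status " + response.statusCode() + " in " + elapsedMs + "ms");
        if (elapsedMs > 500) { // assumed acceptable limit
            throw new AssertionError("Response time exceeded limit: " + elapsedMs + "ms");
        }
    }
}
```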

When working on Social CRM we had a similar challenge. A number of processes are asynchronous and, to further complicate matters, involve email. The reset password process is a good example. It has three steps:

  1. Initiate reset password – A user specifies an email address and the system sends an email asking whether they really want to reset their password. This email contains a URL with a number of parameters to verify that only the person receiving the email can move to the next step.
  2. Confirm reset password – The user clicks on the URL in the email to confirm that they want to reset their password. The system verifies the parameters, generates a new temporary password and sends the user another email. The user cannot log in with this temporary password.
  3. Complete reset password – The user clicks on the URL in the second email and enters their temporary password and their new password in order to be able to log into the system.


A number of these parameters are not stored anywhere except in the email, although there are other system checks and balances to verify some of them. Therefore, when automating the performance testing, it is not possible to simply populate the database with a full set of valid data and run the script: access to the generated email content is required. This is where SubEthaSMTP and Wiser help out.
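A minimal sketch of the capture side, assuming the standard Wiser API from SubEthaSMTP (the port, output directory and file naming are illustrative choices):

```java
import org.subethamail.wiser.Wiser;
import org.subethamail.wiser.WiserMessage;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class EmailCapture {
    public static void main(String[] args) throws Exception {
        // In-memory SMTP server from SubEthaSMTP; the application under
        // test must be configured to deliver its mail to this host/port.
        Wiser wiser = new Wiser();
        wiser.setPort(2500); // assumed port, matching the application's SMTP settings
        wiser.start();

        // ... drive the load test that triggers the reset password emails ...

        // Write every captured email to disk so the performance testing
        // script can read the time-based parameters back out.
        Path outDir = Paths.get("captured-emails");
        Files.createDirectories(outDir);
        int i = 0;
        for (WiserMessage message : wiser.getMessages()) {
            Files.write(outDir.resolve("email-" + i++ + ".eml"), message.getData());
        }

        wiser.stop();
    }
}
```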

By having Wiser write the emails to a file, all of the time-based parameters that are not stored anywhere else are available for a subsequent performance testing script to refer to.
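The performance testing script can then pull the parameters back out of the captured email and drive the next step of the flow. A minimal sketch of the confirm step, where the shape of the confirmation URL is an illustrative assumption (real captured emails may also need quoted-printable decoding before the URL is usable):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfirmResetStep {
    public static void main(String[] args) throws Exception {
        // Read the email that the Wiser capture wrote to disk.
        String email = Files.readString(Paths.get("captured-emails/email-0.eml"));

        // Extract the confirmation URL, e.g.
        // https://example.com/resetPassword/confirm?user=jo&token=abc123
        // (an assumed shape; the token is one of the time-based parameters).
        Matcher m = Pattern.compile("https://\\S+/resetPassword/confirm\\?\\S+").matcher(email);
        if (!m.find()) {
            throw new IllegalStateException("No confirmation URL found in captured email");
        }

        // Replay the confirm step and time it, just like any other
        // performance test request.
        HttpClient client = HttpClient.newHttpClient();
        long start = System.nanoTime();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(m.group())).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Confirm step: " + response.statusCode() + " in " + elapsedMs + "ms");
    }
}
```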

When is automated testing required?

This question gets asked a lot, particularly as there is a cost to implementing automated tests. There is also a benefit, but it is harder to quantify. When a development team has a limited amount of time between releases, there is often a preference to implement new functionality rather than automate testing for existing functionality. With Agile Software Development methodologies the role of the tester is more complex, but automation of testing is clearly a responsibility.

It is easy to say that 100% of the system functionality should be tested automatically and that, to ensure regressions do not occur, this testing should be performed by developers before they check in changes. It is harder to justify the cost of doing so, as the number of defects that automated testing prevented rarely gets recorded.

The simplest place to start is to review the regression defects that have been raised in the past. This should highlight the functionality that tends to break the most when changes are made. Once automated regression testing is in place for those areas, the journey of growing the range of automated test suites begins. Assuming, of course, that developers run the automated tests before checking in their changes, one should also see a reduction in the regression defects raised in areas covered by automated testing.