Defining a strategy to manage test data is vital for the seamless and successful development and maintenance of automation scripts. In this article, we will discuss a few test data management best practices for automation. These practices are agnostic of automation design patterns, so you can implement them in whatever design pattern you follow.
1. Externalise test data
Automation test data should always be externalised to files such as .properties, .csv, etc. This matters from a maintenance perspective: data may change in the future, and that change should not touch our code; an automation tester should be able to update the data easily without navigating through code.
Let’s say we are testing PayPal on a test environment with a test credit card. There is a chance that this test card will change in due course. If we have stored that card in an external file as mentioned above, we only need to change it there, with no change to the actual programming code.
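As a minimal sketch of this idea, here is a small loader for a .properties-style key=value file. The file name and keys (`card.number`, `card.expiry`) are hypothetical, purely for illustration:

```python
from pathlib import Path

def load_properties(path):
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Demo: write a sample external data file, then read it back.
# In a real suite this file would live alongside the project, edited
# by testers without touching code.
Path("testdata.properties").write_text(
    "card.number=4111111111111111\ncard.expiry=12/30\n"
)
data = load_properties("testdata.properties")
```

When the test card changes, only `testdata.properties` is edited; the script itself stays untouched.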
2. Generate unique prerequisite data through automation for each automation run, wherever possible
Always try to generate prerequisite data within the automation itself, so that it is unique for each automation run. This way, you won’t face the issues that come with reusing the same data again and again. Also, if the application is based on a service-oriented architecture, consider generating prerequisite data through web services. This is fast and maintainable, as you don’t have to deal with UI changes and complexities.
Again, consider PayPal as an example. If we want to add a particular card, we first need to log in to a PayPal account. Here, instead of creating a user manually and reusing it across multiple automation runs, have the automation script register a fresh user every time before the actual test steps. This guarantees a unique user per run, and we can add the same card in multiple automation executions (each time with unique user credentials). Test cards are usually limited in number, so this approach overcomes that limitation too: since we use a unique user each time, the same card can be reused across those users.
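A simple sketch of generating unique-per-run credentials. The field names and the registration step are hypothetical; in a real suite you would POST these credentials to the application’s signup API (or drive its signup UI) before the actual test steps:

```python
import uuid

def new_unique_user():
    """Build credentials that are unique for every automation run,
    so no two runs ever share application state."""
    suffix = uuid.uuid4().hex[:8]  # short random tag per run
    return {
        "email": f"autouser_{suffix}@example.com",  # hypothetical field names
        "password": "Str0ngPass!1",
    }

user_a = new_unique_user()
user_b = new_unique_user()
# Two runs never collide, so the same limited test card
# can be re-added once per freshly registered user.
```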
3. Consider all test environments
Most projects have multiple test environments (e.g. Dev, Test, Sandbox, etc.), and it’s crucial to consider this while defining a test automation data strategy. So, how do we manage our automation data for multiple environments in a way that is flexible, scalable and easily maintainable? Let’s have a look.
Well, if there are multiple test environments, split data into two categories:
- Common for all environments: Some data would be common to all environments
- Specific to a particular environment: Data that changes from environment to environment
Once you split the data into these two categories, create a file (or files, depending on your project) for the common data; since it’s common to all environments, only one copy is needed. For environment-specific data, create one copy of each file per environment. For example, if you have 5 environment-specific files and 3 environments, you maintain 5 × 3 = 15 files, i.e. 5 files for each of the 3 environments. Put the common file(s) wherever suits your automation project structure, and put each environment-specific file into its own environment folder, so the resulting structure has one common folder plus one folder per environment.
The best example of environment-specific data is the URL of a web application: a URL is always environment specific and should be kept in the environment-specific folder. An example of common data is a shipping address, which can be kept the same across environments.
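The common-plus-environment split above can be sketched as a small loader that merges the common file with the chosen environment’s file. The folder layout, file names and keys (`app.url`, `shipping.address`) are hypothetical:

```python
from pathlib import Path

def read_props(path):
    """Parse simple key=value lines from a .properties-style file."""
    props = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Demo layout (hypothetical):
#   testdata/common/app.properties   data shared by all environments
#   testdata/<env>/app.properties    data that differs per environment
for folder, body in {
    "testdata/common": "shipping.address=221B Baker Street\n",
    "testdata/dev":    "app.url=https://dev.example.com\n",
    "testdata/test":   "app.url=https://test.example.com\n",
}.items():
    Path(folder).mkdir(parents=True, exist_ok=True)
    (Path(folder) / "app.properties").write_text(body)

def load_test_data(env):
    common = read_props("testdata/common/app.properties")
    specific = read_props(f"testdata/{env}/app.properties")
    return {**common, **specific}  # environment values win on key clashes

dev_data = load_test_data("dev")
```

Switching environments is then a one-argument change: `load_test_data("test")` picks up the Test URL while still sharing the common shipping address.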
4. Consider localisation
If the application is meant for a global audience, you are likely to test it for various locales. For those not familiar with the term, a locale is a language plus a country. For example, en-US means US English and en-GB means UK English; en-US and en-GB are known as locales.
Alright, so what are the challenges when we have multiple locales? There are many, but here we are only concerned with automation data management for an application with multiple locales. The obvious challenge is managing a large amount of locale-specific data: for example, with 100 data rows and 30 locales, the data set grows to 3,000 rows. That’s where we need to strategise our locale-specific data smartly.
So how do we approach this? It’s similar to what we did for environment-specific data, except that here we keep only one file per locale rather than multiple files. The reason is that we should carefully analyse which data is genuinely locale specific, keep only that data in the locale files instead of flooding them, and limit all data for one locale to a single file.
Card numbers, user credentials and, generally, data that a user enters are not locale specific, so we should NOT keep such data in locale files. Locale-specific files should be reserved for messages the application produces (e.g. error messages), labels, etc. If we carefully analyse and keep only locale-specific data in locale files, we can manage with one file per locale.
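A sketch of the one-file-per-locale idea, holding only application-produced strings. The file layout and the `error.card.invalid` key are illustrative, not from the original article:

```python
from pathlib import Path

# One file per locale, holding only locale-specific strings (labels,
# error messages). Card numbers, credentials, etc. stay in the common
# data files, so the locale files remain small.
Path("locales").mkdir(exist_ok=True)
Path("locales/en-US.properties").write_text(
    "error.card.invalid=Card number is invalid\n"
)
Path("locales/en-GB.properties").write_text(
    "error.card.invalid=Card number is not valid\n"
)

def load_locale(locale):
    """Load the single .properties file for one locale."""
    props = {}
    for line in Path(f"locales/{locale}.properties").read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

us = load_locale("en-US")
gb = load_locale("en-GB")
```

A test asserting an error message then reads the expected string from the active locale’s file instead of hard-coding 30 variants.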
5. Localisation + environment combined strategy
You may well have an application with both multiple test environments and multiple locales. Having covered the two strategies separately above, we can combine them in a single project with a small modification to the folder hierarchy: add a locale level inside each environment folder, so each environment folder holds its environment-specific data plus one locale file per supported locale.
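The combined strategy can be sketched as a layered merge: common data first, then environment data, then the locale file nested under that environment. The layout and all keys below are hypothetical:

```python
from pathlib import Path

def read_props(path):
    """Parse key=value lines; a missing layer simply contributes nothing."""
    props = {}
    p = Path(path)
    if not p.exists():
        return props
    for line in p.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Hypothetical combined layout:
#   data/common.properties                     shared everywhere
#   data/<env>/env.properties                  per environment
#   data/<env>/locales/<locale>.properties     per environment + locale
files = {
    "data/common.properties": "shipping.address=221B Baker Street\n",
    "data/test/env.properties": "app.url=https://test.example.com\n",
    "data/test/locales/en-GB.properties": "label.postcode=Postcode\n",
}
for name, body in files.items():
    Path(name).parent.mkdir(parents=True, exist_ok=True)
    Path(name).write_text(body)

def load(env, locale):
    # Later layers override earlier ones on key clashes.
    return {
        **read_props("data/common.properties"),
        **read_props(f"data/{env}/env.properties"),
        **read_props(f"data/{env}/locales/{locale}.properties"),
    }

bundle = load("test", "en-GB")
```

One call then yields everything a run needs for a given environment/locale pair, with common data defined exactly once.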
6. Never use Excel as a test data source for automation, unless it’s the only option
Using Excel as a data source should be avoided for the following reasons:
- Performance: Read/write operations on Excel files are considerably slower than on lightweight file formats like .properties, .csv, etc.
- Maintenance: If you are reading Excel files, you are most likely using an external API such as Apache POI or JExcel. Since these are external APIs, version incompatibilities may appear in the future: the current Excel file format may not be compatible with older versions of the APIs, and vice versa. This adds maintenance overhead.
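By contrast, lightweight formats need no third-party reader at all; for example, Python’s standard library handles .csv directly, so there is no external API whose version can drift. The file and column names here are illustrative:

```python
import csv
from pathlib import Path

# .csv needs only the standard library: no Apache POI / JExcel style
# dependency, and therefore no version-compatibility risk.
Path("cards.csv").write_text(
    "card_number,expiry\n"
    "4111111111111111,12/30\n"
    "5500000000000004,06/31\n"
)

with open("cards.csv", newline="") as f:
    cards = list(csv.DictReader(f))  # one dict per data row
```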
Having said this, sometimes Excel files are the right choice for reading data. One good example is when macros generate your test data; in that case you have no choice but to use Excel.
I hope this article helps you in deriving your automation test data strategy.
About Author : Mufaddal Munim
“I am a hardcore automation developer and programmer with over 9 years of experience in test automation. My work experience includes test automation and framework development/customisation for UI and web services, mostly for browser-based and service-oriented architectures. I am passionate about learning and adopting new technologies. I am also one of the contributors to the open-source framework serenity-demos.
I believe that a programming language is just a means to achieve automation, so I am always keen to learn new programming languages, but my favourites as of now are Java and Node.js.”
LinkedIn profile: https://www.linkedin.com/in/mufaddalmunim/