Python Functions
Declaration of functions in Python is nice and simple.

def my_function():
    print("Hello From My Function!")

The call is also uncomplicated:

my_function()

With arguments:

def sum_two_numbers(a, b):
    return a + b

sum_two_numbers(10, 8)
JMeter is an excellent tool for performance testing, especially given the price. It does, however, have a small number of limitations. One of those is the lack of a decent analysis, results and reporting engine – which I guess is why a lot of companies use BlazeMeter as their testing component.
Another limitation in my eyes is the lack of a flexible iteration pacing controller.
If we build a load model allowing 45 seconds for an iteration, but that iteration takes 15 seconds, there is no built-in method to tell JMeter to wait the remaining 30 seconds on this thread. This is compounded as threads are added: either they all wait an arbitrary length of think time, which minimises the concurrent activity, or they don’t wait and the system comes under higher load than the test specifies.
There is a solution presented here using Groovy.
In a pre-processor on the first action of an iteration – not necessarily the first action in the script, but the first action we loop upon – put a snippet to capture the time as iterationStart.
In a post-processor capture the time of iterationEnd.
In a transaction controller outside the loop but inside the thread group, add a Flow Control Action to pause for “${myDelay}”.
Attached to the flow control action, a pre-processor will calculate the value of the myDelay variable with the code shown here:
“iterationPacing” in this example comes from a user defined variable at the moment, but will eventually come from a YAML file via a Taurus implementation.
Loadrunner Parameters in Blocks not updating?
I’ve encountered an issue recently where, in a LoadRunner script, I wanted to execute a number of requests of one type, followed by a number of requests of a second type, then a third, and finally a fourth type.
The whole thing is four blocks looping in sequence inside an outer loop of iterations in LoadRunner VuGen.
It looks like this:
One of the requirements is that the data table for each request is common, so the first element used in block 0 is also the first element in block 1, 2 and 3.
Each action creates an XML file – the data between the files is common in some areas.
Originally the whole thing ran in sequence without blocks, but there is a pair of processes in the application under test which consume these files; they operate on a schedule, and their logic dictates that the files are sent in order. The issue here is that it is theoretically possible to find the gap between the schedules and to process a 900 request before a 600 request – resulting in data which is no longer usable in the system.
Inserting think time between each block solved that problem but also reduced throughput to an unacceptable level, requiring vUsers far in excess of what the license would allow.
My solution then was to create the block design above and that seemed fine until we tried to validate the output.
It turns out that the iterations counter in LoadRunner only applies to the outer loop of processing – and as a result the parameters will not update as expected.
My first thought was to make the data iterate on every occurrence, but the data is used multiple times throughout the XML requests, so that didn’t seem viable either.
If I created a temporary parameter to store the value whilst limiting the occurrences of the master parameter, then I couldn’t share the value appropriately through the blocks – I couldn’t keep the data in sync for block 0, block 1, block 2 and so on.
My second thought was to build an array on the fly in the first block, and to use independent counters in each block to access the data in that array in all the actions from there.
And that seems to work – but it looks crazy.
All parameters are set to use the same row as Order_ID_TEMP, which is set to Sequential, update Once – since we’re advancing it in code rather than letting VuGen handle it.
There are parts of this that aren’t strictly necessary – the entire first block creating temporary parameters could go; none of those lr_save_strings are needed if I reused the variables in the second section, where the array is built.
The array building is neat though and I re-used szParamName1 for all of that to reduce the variables in scope.
The buffer sprintfs create a variable with a counter as an index, and the lr_save_strings assign a value to that variable.
The final series of sprintfs and lr_save_strings consume from the array and assign each variable to a parameter which is named in the XML declaration.
i400 is the index used in this action.
Subsequent requests only require that last section to repopulate the parameters with the variable data based on a local iteration counter. Essentially each block is its own for loop (without actually declaring a for loop).
This is how I developed a method to send synchronised data between blocks iteratively. It may not be the best solution, but I’m really rather proud of it – not least because I came up with it in the bar of an Edinburgh hotel and spent maybe three hours on it (which in LoadRunner terms is a mere moment in time).
Centralised Testing Functions – Oversight and Governance
I’ve been working on a project over in Sweden, and have found it unlike anything I’ve ever seen before. Sadly, this is for all too familiar reasons.
I work for an organisation with a small performance testing function centralised across all the projects that can be conjured up. As a result, there is often a lack of resources available to conduct testing, or even to assess whether testing is necessary and appropriate. The knock-on effect of this is that large third-party consultants are drafted in to pick up the bigger projects, and the centralised testing function – the Test Center – takes a back seat.
In practice, that back seat is often in a different vehicle, and so the organisation as a whole loses ownership and governance of its own IT projects, ceding them to the consultants.
It is in the nature of the larger consultancies to be concerned with one thing only – Successful Delivery. It is a known side effect of this concern that the contracts will often be drawn up to include bonuses and interim payments based on successful delivery.
In many ways this is absolutely fine, but it requires an oversight position from the test center to conduct reviews upon such deliverables as:
the test approach
the test scripts
the test results
defects
the deployment decision
The question is: what happens when you have an immature IT organisation which has ceded control of the project to a large third-party supplier – focused on delivery at all costs – with no checks and balances in place to ensure the quality of that delivery?
We end up with an immature IT organisation and an obfuscating third-party vendor hiding meaningful results whilst ensuring that something (ANYTHING) is delivered, so that they can meet the project milestones and ensure payments are made.
Essentially, the organisation orders a ton of oranges, the third-party vendor delivers a ton of lemons and claims success.
Importantly, it is in no small way the fault of the purchaser for ceding that control in the first instance.
Let’s be clear: the larger third-party suppliers often rely primarily on low-cost offshore resources. These resources are typically low-cost due to a lack of experience and exposure to the wider business world around them. Additionally, they are used as a delivery factory, churning out scripts of varying need and quality according to a vague set of requirements. The requirements are generally vague due to the IT immaturity of the organisation.
The third-party vendor then adds a layer of project management around release and defect management, whose role seems to be to obfuscate clear communication and to actively remove any review process and oversight from the organisation. They effectively become closed shops of development and testing which produce results that mean very little to the overall organisation but ensure that project milestones are met.
Everything becomes deadline-based, and measured in time rather than overall quality.
All of which can be avoided by maintaining a level of governance and oversight over the project.
The delivery factory will continue to churn out scripts of varying quality and need but this need and quality will be assessed and reviewed by an employee of the test center. The same applies to the approach documentation, the test scenarios, the load and data models.
Extending Loadrunner Scripts with C – Function Library #1.2
I have now left the Cardiff JMeter role and returned to LoadRunner work, this time in Dublin.
With every new position I find myself re-invigorated to learn new things, update old code and write some articles for the site.
I have a lot of ideas in my head at the moment, but my shower thoughts this morning were to finally transform the writeToOutputFile function into exactly that, a function.
A new article in the dev diary describes how I’ve gone about creating a fully automated performance test suite using JMeter for my latest client. A further article will shortly appear here, although my concept of “shortly” may be somewhat different to most people’s.
In any case, I have plans to document the process of constructing JMeter test plans and fragments, then to document how to extend that into a full suite of tests and how to execute it without a lot of manual intervention – and potentially I may revisit running it without ANY manual intervention. So there’s that to look forward to in 2015, hopefully in Q1.
As a performance tester, most data arrives at me in a spreadsheet, or an enormous text file with little or no delimitation.
The previous article showed how to parse across columns and down rows to gather data.
Excel has a number of built-in functions to manipulate that data before outputting it for us.
Functions like:
MID
Pulls a child string from a parent string, specified by the position and number of characters.
LEFT / RIGHT
Similar to MID, but anchored to either the first or the last character in the string.
TRIM
Removes whitespace from both ends of a string.
These help immeasurably in cleaning up the data.
Tied into further functions like INSTR, we have the facility to parse through raw data, picking up the elements we need and outputting them in the form we require. Given that LoadRunner uses .dat or .csv files for test data input, this is hugely useful.
INSTR is specific to VBA; the other functions mentioned can also be used at the worksheet level and embedded into formulae.
I’ve often found, especially when starting at a new client, that real development tools aren’t provided to testers (even automation testers) as standard and have to be requested from the helpdesk. This can take a week or longer to sort out, depending on efficiency. It’s worse in financial institutions, which seem to think admin rights are gold dust, lock down online storage and USB ports, and scan and validate every email. That’s their prerogative, but if you’re going to pay me to do my job, maybe make it so I actually can, rather than putting security in the way. A lot of it is foolish: I ask for read access to a customer database in a test environment. As long as you’re not using live customer data, there’s virtually no risk in granting that, and it’s a five-minute job – so why does it always take a week and three managers to approve it?
In any case, my workaround for this is MS Excel.
There are a number of reasons for this:
Because of the structure of a workbook into worksheet tabs, rows and columns, it’s easy to visualise data structures like lists and arrays.
Because rows and columns provide an easy structure to navigate around in code
Because I have used Visual Basic for a long time, and Excel’s VBA is probably the easiest variant of this to use
Because every client I’ve ever been to has Excel.
Because, typically, you don’t need admin rights to run macros.