Performance Testing – 4 Questions (an approach)

I’ve been asked to give a talk to some of our off-shore developers. They’re sitting through a workshop on just about everything related to the client environment, and I’m talking for 15 minutes about performance testing with Loadrunner against web-based sales flows.

I thought about winging it, but the manager has asked for a PowerPoint deck, so I’m assembling that now. As I get to the first page, about entry criteria and intake activities, it occurs to me that I’ve never written this down before, although it’s always in my head and it’s a very common interview question and answer.

I had a chat with my colleague and explained that I use 4 questions to gather all the information I need to build and execute a performance test. He didn’t believe me and says he uses nine questions to get that information. So we discussed it some more and reached an understanding.

I’m talking about the 4 top-level questions to which I need an answer before I build anything. Other questions will be required as we move forwards with the project, and the answers to the 4 questions will evoke further questions and answers, but at the highest level, I walk into that first client meeting armed with my 4 questions.

  1. How is it built? – The solution under test
  2. What request(s)? – the script(s)
  3. How many? – requests per hour – the scenario
  4. How quickly? – do they need to go? – the requirement(s)

MM (my colleague) wants to ask about network topology, transactional drop-out rates, reporting structure, team contacts and so on. I argue that none of that is necessary in the first instance, though it’s useful to know as and when, and certainly before the first execution.

Simply, those 4 questions define the solution, the scripts, the scenario and the pass/fail criteria. From that I can build the test. Maybe it’s experience, maybe I’m overstating my abilities, or maybe I’m wrong, all I know is that for 15 years, I’ve been a performance tester and my refined approach begins with those 4 questions.

Loadrunner misses extra resources

Recently, I’ve encountered an issue with an AUT that crashes VuGen during the generation stage of a recording session. Now, I’m a big fan of record-edit-playback. I find the recording and generation logs hugely useful, and I find that recording is the easiest and quickest way to get a view of the application.
I call it record-edit-playback because the recording isn’t enough on its own to serve as a performance test. But as a foundation, it’s fine. And scripts rarely play back immediately in any case; session IDs are designed to prevent exactly that kind of thing.

The logs in particular are great for helping you find and construct parameter-capturing code for session ids, and identifying user variables that you may want to emulate.

My process for recording is very simple though I haven’t documented it here before so I will now:

  • Record the script once
  • Save it as Scriptname_DATE_RAW
  • Save it again as Scriptname_DATE_WIP (work in progress).

That means you can always roll back if you need to. The WIP copy is the working document until it’s finished, at which point it becomes Scriptname_DATE_FINAL. I then hive it off to the controller as well as my local repository. I don’t like cluttering up the controller with WIP versions, and I don’t like the controller pulling scripts across the network at run-time; I just think it’s poor practice.

But I digress.

As a workaround for the fact that I couldn’t record the script, I used Fiddler as a proxy to capture the URLs I visited while manually walking through the business process in Firefox. Over on the DevDiary there’s an article about this, but the point I wanted to make is this: Loadrunner doesn’t capture everything that’s on a page. Fiddler’s output was about 40 lines for the homepage, while a Loadrunner visit to just the homepage captured 10 lines of resources (I managed to get LR to do that much before it died again).

It seems that if a resource (for example a .css file) contains sub-resources, Fiddler will see them but Loadrunner won’t. I don’t know if that is by accident or design, and I don’t know if LR is implicitly fetching them but just not showing them in the results and the logs. I intend to find out in due course, but it makes me wonder how I’ve not seen this before in 15 years of performance testing. Maybe it’s specific to this project; I could believe that, as we are uniquely complicated from what I’ve seen. But what if it’s not? How many issues could have been avoided if I’d seen a bottleneck on one of those resources, an underperforming JavaScript file for example? It’s all academic now anyway, but it’s certainly something I’ll look out for in the future. And as an aside, maybe Loadrunner’s recording engine isn’t as good as I’ve always thought it to be? Interesting times. In Belgium…

Dev Diary

There’s a link to the dev diary over there on the right, currently amongst the links to hosts and wordpress themes. I’ll be adding more relevant content as I find it.

I’ve added a few days of diary regarding test stubs, vbs and vb6. It’s coming along nicely.

The dev diary will be much more informal than the main AS.Org site; this one is going to remain my more professional aspect.

UPDATE:

The first working version of the Loadrunner Batch Schedule (Excel version) is now available on Dev.

Extending Loadrunner Scripts with C – Function Library #1.1

Actually, this is more like 1.1, in as much as it ties into the previous post. I was blogging about building audit logs and data files via an “audit” script. That’s what I call them; I’m not sure if there’s a full-blown technical name, but I use them to verify, validate and build data to be used in actual test scripts.
So let’s suppose you have an array of data you’ve captured with web_reg_save_param("myParam", "LB=x", "RB=y", "Ord=All", LAST); — this is how to feed that captured data into an audit log.

vuser_init()
{
// write the file header once, before any iterations run
WriteToOutputFile(lr_eval_string("card,psn,status"));

return 0;
}
The function itself, as defined in the previous post:
int WriteToOutputFile(char *string)
{
// LR's script compiler is happy with a long holding the stream pointer
long file_streamer;
char *filename = "c:\\gemalto_audit.txt";

// open the file in append mode
if ((file_streamer = (long)fopen(filename, "a+")) == NULL)
{
lr_error_message("Cannot open %s", filename);
return -1;
}

fprintf(file_streamer, "%s\n", string);
fclose(file_streamer);
return 0;
}

And finally, the function in use…

Action()
{
char szParamName1[128];
char szParamName2[128];
char szParamName3[128];
char strToOutput[512];
int i, j, nCount;

// get the number of matches found by Ord=All
nCount = atoi(lr_eval_string("{available_cards_psn_count}"));

// "available_cards_count" = 22 - boundaries are insufficiently unique
// "available_cards_psn_count" = 11
// "available_cards_status_count" = 22 - boundaries are insufficiently unique

for (i = 1; i <= nCount; i++)
{
// every other element in the two over-captured arrays is noise, so step in twos
j = i * 2;
sprintf(szParamName1, "{available_cards_%d}", j);
sprintf(szParamName2, "{available_cards_psn_%d}", i);
sprintf(szParamName3, "{available_cards_status_%d}", j);

strcpy(strToOutput, lr_eval_string(szParamName1));
strcat(strToOutput, ",");
strcat(strToOutput, lr_eval_string(szParamName2));
strcat(strToOutput, ",");
strcat(strToOutput, lr_eval_string(szParamName3));

WriteToOutputFile(strToOutput);
}

return 0;
}

I find that, more often than anything else, capturing the data is easy enough, but getting at that data in a structured way, in order to use it effectively at a later point, can be painful. The above is a real-life example: the developers implemented the content management inconsistently, which meant there was nothing uniquely identifying 2 of the fields I needed. If I tightened the left boundary or the right boundary, elements were missed.
I’m not criticising developers per se; they can’t really be expected to think about a performance tester a year down the life-cycle of the project poring over the source-code structure.
The workable solution was to capture the 11 values I needed for one element and the 22 value-pairs for the other elements, then just skip every other element in 2 of the arrays. Inelegant perhaps, but it works, and I built it today, so it may become beautified over time.

And then there were 2…

I’ve just been asked by my boss to build a proof of concept for a scheduling system; not to execute performance tests, but to book time slots on the shared controller.
It’s not unusual for a controller to be shared by a number of testers. Very few projects require and use 24/7 access to Loadrunner, most scripting is done on an independent machine (VuGen can be installed and run independently of the Controller), and there are often multiple workstreams.

Now, I’ve worked in a million different places and seen some booking solutions over the years, and it is my opinion that most automated testers can’t be doing with them: they’re clunky, slow to complete and just another layer of irritating bureaucracy. Mine will no doubt be the same.

Looking at 2 possibilities off the top of my head:
1. Spreadsheet on a shared drive. This has the advantage of being simple to build, but sharing requires opening and closing the document to prevent locking, and there’s usually one hog in my experience.
2. Online booking system with a php-based calendar and a webform tied into a mysql database at the back-end. I can build that with resources acquired on the net and customise to fit but it will still take longer than the spreadsheet.

As a POC-request, I’ll end up doing both and asking the testers which they prefer, and the answer will almost certainly be “Neither, can’t we just get together and figure out who needs it when they need it.”
Yes, yes we can. Over a beer?

New Development Project – Loadrunner Scheduler

Amongst all the other tasks I have with performance testing and automating on client sites, and sleeping, I once developed an excel-based scheduler for Loadrunner.

It doesn’t rain but it pours…

Further to my last post, on the vague subject of a test-run database and results repository: there are, of course, additional features to add to that, especially if you wanted to provide it to clients at a cost.

It would need to be able to cross-match results on the fly, comparing like-for-like transactions/scripts/scenarios.

I’m not proposing to re-develop the Analysis tool for Loadrunner; that’s outside my scope at the moment (probably 😉 ).
But something comparing Run A vs Run B of the same test in pure numerical terms sounds like a job for Perl to me.
And yeah, maybe have it draw a graph; pretty sure that’s do-able on the fly.

So Working, Building and Researching. What’s new?

So what have you been doing?

It’s been a long time since I had the time and inclination to update the site. Partly because I’ve been busy working and partly because I was hideously aware that the next stage for the site was complicated and not exactly in my wheelhouse. I can code, clearly, since I automate everything and work as an automation expert internationally. But I’m not a business analyst, a technical architect, a data analyst or a developer. Not really, at least.

Extending Loadrunner Scripts with C – Function Library

So, I’m working at a new client, back doing the Loadrunner thing. One of the nice things about that is I get to re-use and refine code I’ve written previously for other clients. This article is going to contain some of these code snippets that I’ve used time and time again.

I’ve re-visited this code recently and found that a) it wasn’t very good, and b) I can do it better now. Presented below is the better version; there may be a further update improving the formatting and so on.
And there’s no guarantee this is perfect.

Output to Text file

int WriteToOutputFile(char *string)
{
// LR's script compiler is happy with a long holding the stream pointer
long file_streamer;
char *filename = "c:\\myfilename.txt";

// open the file in append mode
if ((file_streamer = (long)fopen(filename, "a+")) == NULL)
{
lr_error_message("Cannot open %s", filename);
return -1;
}

fprintf(file_streamer, "%s\n", string);
fclose(file_streamer);
return 0;
}

Called like this:

WriteToOutputFile(lr_eval_string("bban_count: {bbanNumber_count} blah {bbanNumber_count}"));

I just added the function beneath vuser_init rather than creating a header file. For multiple vusers, it’s a good idea to parameterise the filename, as they can’t all share one file. I recommend a vuser identification parameter, as that’s built in, or timestamps for uniqueness.

I also have a DOS script for joining them all back up again, since I tend to use this function for creating custom audit logs to track test-data states as the data moves through a scenario / test cycle.

A failed web_find is not always an error


I’ve been working in Lithuania for the last 3 months. It’s cold over here. To keep warm (and paid), I’ve been writing some Loadrunner scripts for a Scandinavian bank. One of them seeks to emulate a customer paying cash into their account. As you’d imagine, this is a high-priority, high-usage script, so I’ve spent the last 3 weeks building it in the web protocol against a custom-built CRM platform.
