Amongst all my other tasks (performance testing and automating on client sites, and sleeping), I once developed an Excel-based scheduler for LoadRunner.
There used to be (and as far as I am aware still is – I haven’t seen 11.5 yet, nor worked with Performance Center long enough to be sure) a limitation within LoadRunner that only one test could be scheduled at a time.
Now if your tests are reliable and you are testing, rather than tuning or building benchmarks for example, and if they are short enough that you could usefully run three, four or more overnight while everyone else is sleeping, that one-test limit has always felt like a wasted opportunity to me.
I’ve actually built a few variations of this, most of them a decade or so ago, and of course, I don’t have the source code. Hard drives fail and the cloud wasn’t so readily available back in the day. In any case, I’ve decided to create it once more.
There are a number of variants I’d like to build over the coming year(s) – judging that timeframe by my previous level of effort and availability:
- An Excel-based, VBA-equipped version. EVERYONE has Excel, so this is the easiest to roll out and, as I’ll discuss later, it has the facilities to do the job (see the first sketch after this list).
- A pure application version – probably built in VB as a standalone app.
- An Outlook equivalent – because leaving Outlook running and sending emails to it to fire tests sounds cool, impresses management and, whilst not strictly a scheduler, allows ad-hoc tests to be launched from anywhere with internet access (sketched after this list too).
- A web-based version – just to see if I can, mostly. Some sort of form with times and run names, and a background service to actually launch the thing.
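To show what I mean about Excel having the facilities to do the job, here’s a minimal sketch of the VBA approach. Everything specific in it is an assumption for illustration: the macro names, the hard-coded time and the LoadRunner install path. The real version will read its schedule from a worksheet.

```vba
' Minimal sketch: Application.OnTime fires a macro at a given time,
' and Shell launches Wlrun. Names, time and install path below are
' all placeholders.
Sub ScheduleTest()
    ' Fire LaunchTest at 2am; the real thing will read times and
    ' scenario paths from cells on a worksheet.
    Application.OnTime TimeValue("02:00:00"), "LaunchTest"
End Sub

Sub LaunchTest()
    Dim cmd As String
    cmd = """C:\Program Files\HP\LoadRunner\bin\Wlrun.exe""" & _
          " -TestPath C:\Temp\Scenario1.lrs" & _
          " -ResultLocation C:\Temp -ResultCleanName Res1 -Run"
    Shell cmd, vbNormalFocus
End Sub
```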
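The Outlook idea is equally sketchy at this stage: a handler in ThisOutlookSession that watches the inbox and fires a test when a message with a recognisable subject arrives. The subject keyword and the paths are, again, just assumptions, and a real version would want to check the sender as well.

```vba
' Rough sketch for ThisOutlookSession: launch a test when a mail with
' "RUN TEST" in its subject lands in the inbox. Keyword and paths are
' placeholders.
Private WithEvents InboxItems As Outlook.Items

Private Sub Application_Startup()
    Set InboxItems = Session.GetDefaultFolder(olFolderInbox).Items
End Sub

Private Sub InboxItems_ItemAdd(ByVal Item As Object)
    If TypeOf Item Is Outlook.MailItem Then
        If InStr(1, Item.Subject, "RUN TEST", vbTextCompare) > 0 Then
            Shell """C:\Program Files\HP\LoadRunner\bin\Wlrun.exe""" & _
                  " -TestPath C:\Temp\Scenario1.lrs" & _
                  " -ResultLocation C:\Temp -ResultCleanName Res1 -Run", _
                  vbNormalFocus
        End If
    End If
End Sub
```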
The whole solution is based on launching LoadRunner from the command line and having it run the test, collate the analysis and close. So all four solutions will need to be able to run this command somehow:
Wlrun.exe -TestPath C:\Temp\Scenario1.lrs -ResultLocation C:\Temp -ResultCleanName Res1 -Run
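Since only one test can run at a time, the scheduler’s real job is to wait for each Wlrun call to exit before starting the next. Here’s a sketch of how I expect to handle that in the Excel version, using WScript.Shell, whose Run method can block until the launched process finishes. The scenario list and result names are invented for the example.

```vba
' Queue scenarios back to back: Run's third argument (True) blocks
' until Wlrun exits, so each test finishes before the next begins.
Sub RunQueuedTests()
    Dim sh As Object
    Dim scenarios As Variant
    Dim i As Integer
    Set sh = CreateObject("WScript.Shell")
    scenarios = Array("C:\Temp\Scenario1.lrs", "C:\Temp\Scenario2.lrs")
    For i = LBound(scenarios) To UBound(scenarios)
        sh.Run """C:\Program Files\HP\LoadRunner\bin\Wlrun.exe""" & _
               " -TestPath " & scenarios(i) & _
               " -ResultLocation C:\Temp" & _
               " -ResultCleanName Res" & (i + 1) & " -Run", 1, True
    Next i
End Sub
```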
This will be the first pure development project I’ll be doing for AS.O, so further posts will be wedged into the development diary that I’ll be launching at the back end of next week.
Oh, and to clarify my point about tuning rather than testing, I use those terms in this way:
Testing – running a performance test to assess the behaviour of an AUT (application under test). Often a test is executed to construct a benchmark of application behaviour (which is my favoured approach).
Tuning – running a performance test to provoke the issues that are expected, in order to investigate what’s going wrong under the hood.
Tuning therefore requires more active monitoring and has a much more “change that setting, re-test, better or worse?” vibe to it. It can’t be reliably scheduled. Its ultimate aim is to make things better, often in the environment as well as in the AUT itself. Testing also finds defects, but it differs in that, while we’re still looking for defects and testing fixes, it’s used on more mature systems where the AUT is pretty reliable.
There are no doubt people who’d disagree, and better actual definitions on the net, but given that I work in an industry where people can’t reliably distinguish between a script and a scenario, I’m not arguing.
Scenario – a collection of scripts and runtime settings used to perform a test.
Script – a script that emulates user activity. Honestly, how hard is that?
So, I digress, but expect to see the informal dev diary soon. The test site and results database are also taking shape; I’ll post about that over the weekend. It’s nowhere near finished, of course, but then that’s half the fun.