The Domino Translation Object reached its end of life in March 2002, and at that time I transitioned to the Learning Space team. My role on that team was, once again, to coordinate its automation efforts. The systems test team was responsible for stress testing the product, which involved simulating tens of thousands of users all pounding the server product at once. Because existing software in the marketplace was insufficient for the group's needs, a collection of utilities, "eTool", was produced.
eTool consisted of automation software serving three areas: test automation, process automation, and performance measurement. Additionally, there were a number of command-line utilities. These all shared the same "look and feel", allowing users to query them for their arguments and even supply those arguments interactively. There was even an "überHelp" command that would list all of the available commands and give help on each one, or on any parameter of any command.
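The self-describing utility idea can be sketched as a registry of commands that each expose their own name, help text, and parameters. This is a minimal illustration, not the actual eTool code; the `Command` interface, `PingCommand`, and `CommandRegistry` names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: each command describes itself, so a single
// "uberHelp"-style listing can be generated for the whole suite.
interface Command {
    String name();
    String description();
    Map<String, String> parameters(); // parameter name -> help text
}

class PingCommand implements Command {
    public String name() { return "ping"; }
    public String description() { return "Check that a server is responding."; }
    public Map<String, String> parameters() {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("host", "Host name of the server to contact");
        return p;
    }
}

class CommandRegistry {
    private final Map<String, Command> commands = new LinkedHashMap<>();

    void register(Command c) { commands.put(c.name(), c); }

    // List every command, its help text, and all of its parameters.
    String uberHelp() {
        StringBuilder sb = new StringBuilder();
        for (Command c : commands.values()) {
            sb.append(c.name()).append(" - ").append(c.description()).append('\n');
            for (Map.Entry<String, String> e : c.parameters().entrySet()) {
                sb.append("  ").append(e.getKey()).append(": ")
                  .append(e.getValue()).append('\n');
            }
        }
        return sb.toString();
    }
}
```

Because every command carries its own metadata, interactive prompting for missing arguments falls out of the same structure: the tool can walk the parameter map and ask for each value in turn.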
In the early stages of the project an eTool-based smoke test was used to rapidly verify basic functionality, and catch breakages, in builds as they were produced. This was plugged into a J2EE application that maintained a list of servers. It would periodically run against those servers and record the results, allowing group members to determine at a glance which servers were available and suitable for test. During the course of the project over 900,000 checks were run, saving approximately 340 work-hours.
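The monitoring loop can be sketched as follows, assuming a pluggable smoke check and a map of latest results; the `SmokeTestMonitor` class and its method names are hypothetical, not taken from the original application.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Hypothetical sketch: run a smoke check against each known server and
// keep only the latest pass/fail result for an at-a-glance status view.
class SmokeTestMonitor {
    private final Map<String, Boolean> latestResults = new ConcurrentHashMap<>();
    private final Predicate<String> smokeCheck; // true = server passed

    SmokeTestMonitor(Predicate<String> smokeCheck) {
        this.smokeCheck = smokeCheck;
    }

    void checkAll(Iterable<String> servers) {
        for (String server : servers) {
            boolean ok;
            try {
                ok = smokeCheck.test(server);
            } catch (RuntimeException e) {
                ok = false; // any failure marks the server unsuitable for test
            }
            latestResults.put(server, ok);
        }
    }

    boolean isAvailable(String server) {
        return latestResults.getOrDefault(server, false);
    }
}
```

In a real deployment `checkAll` would run on a timer (for example via `java.util.concurrent.ScheduledExecutorService`), which matches the periodic behavior described above.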
The group comprised many different skill sets. Some members wished to have a great deal of control over the product as it executed; others just wished to create simple scripts. The compromise reached was to develop a two-tier approach to the automation libraries. The lower tier consisted of a Java library of methods that could be called to exercise the product. A user could create a script by subclassing a Java class and filling in the calls and looping they wished to perform to conduct their test. Through consistent naming conventions, an upper tier was synthesized from this lower one using Java's introspection (reflection) facilities. With this, a user could write a "near-English" script that was interpreted line by line, each line mapped to a lower-level command. That way, every skill level was served.
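The two-tier idea can be illustrated with a small reflection-based interpreter. This is a minimal sketch, not the original library: the `LowerTier` methods and the "first words form the method name, last word is the argument" convention are invented here for illustration.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical lower tier: a plain Java library of test actions.
class LowerTier {
    final List<String> log = new ArrayList<>();
    public void loginUser(String name) { log.add("login " + name); }
    public void openCourse(String title) { log.add("open " + title); }
}

// Hypothetical upper tier: maps a "near-English" line onto a lower-tier
// method by a naming convention, located via reflection.
class ScriptInterpreter {
    private final LowerTier lib;
    ScriptInterpreter(LowerTier lib) { this.lib = lib; }

    // "login user alice" -> loginUser("alice"): all words but the last are
    // camel-cased into the method name; the last word is the argument.
    void run(String line) {
        String[] words = line.trim().split("\\s+");
        StringBuilder name = new StringBuilder(words[0]);
        for (int i = 1; i < words.length - 1; i++) {
            name.append(Character.toUpperCase(words[i].charAt(0)))
                .append(words[i].substring(1));
        }
        try {
            Method m = LowerTier.class.getMethod(name.toString(), String.class);
            m.invoke(lib, words[words.length - 1]);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("no lower-tier method for: " + line, e);
        }
    }
}
```

Because the upper tier is discovered rather than hand-written, any method added to the lower tier under the naming convention becomes scriptable with no further work, which is what makes the synthesis approach pay off.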
Because we were testing against some very high-powered servers, we needed the ability to generate very large amounts of load through a massively parallel system. A client-server architecture was developed that allowed a centralized machine to communicate with any number of other test machines. Tests could be queued up and launched, and their results collated together. Initial exploration was done to determine the work required to migrate this structure to the IBM internal Grid.
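The controller side of that queue-and-collate pattern can be sketched as below, with a thread pool standing in for the remote test machines; the `TestController` class and its API are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: queue tests on a central controller, fan them out
// to workers (simulated here by a thread pool), and collate the results.
class TestController {
    private final ExecutorService agents;
    private final List<Future<String>> pending = new ArrayList<>();

    TestController(int agentCount) {
        this.agents = Executors.newFixedThreadPool(agentCount);
    }

    void queue(Callable<String> test) {
        pending.add(agents.submit(test));
    }

    // Wait for every queued test and gather the results in one place.
    List<String> collate() {
        List<String> results = new ArrayList<>();
        try {
            for (Future<String> f : pending) results.add(f.get());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        agents.shutdown();
        return results;
    }
}
```

In the real system each `Callable` would instead be a network dispatch to a remote test machine, but the controller's bookkeeping is the same: submit, track the pending handles, then block on each to collate.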
On the process automation side, scripts were developed that populated a server with the appropriate users, courses, and enrollments. This saved an uncounted amount of configuration time.
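The shape of such a population script is simple: create the users, create the courses, then cross-enroll. The `Server` interface below is a hypothetical stand-in for the product's real administration API, which is not shown here.

```java
import java.util.List;

// Hypothetical stand-in for the product's administration API.
interface Server {
    void createUser(String name);
    void createCourse(String title);
    void enroll(String user, String course);
}

class Populator {
    // Create every user and course, then enroll each user in each course.
    static void populate(Server server, List<String> users, List<String> courses) {
        for (String u : users) server.createUser(u);
        for (String c : courses) server.createCourse(c);
        for (String u : users)
            for (String c : courses)
                server.enroll(u, c);
    }
}
```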
In the summer of 2002 I was given a short break from automation tool development to create a prototype implementing the then-current draft of the SCORM 1.3 Simple Sequencing specification. This work was scheduled for our product, but not in time for PlugFest6. I was able to produce the prototype, and it generated a great deal of media attention for IBM. I continued to participate in the standards body's work on the specification, contributing input from my practical implementation, and helped out a bit on the actual implementation in the product.
Since the product launched, I have given a series of lectures on Simple Sequencing, and how to use it in our product, to people throughout the engineering and sales teams.