How to run the tests (Moodle 4.5)

Version 5 (Emilio Penna, Thursday, 19 March 2026 10:12:58 -0300) → Version 6/9 (Emilio Penna, Thursday, 19 March 2026 10:17:39 -0300)


h1. Moodle Quiz Load Testing — Setup and Procedure (Moodle 4.5, 2026)

This page describes how to set up and run the quiz load tests.
For actual results and infrastructure sizing data, see: [[Load_test_results_moodle45_2026]]

Attached to this page:
* Moodle quiz backup (16-question quiz, 8 pages)
* Test users file (CSV, for bulk upload and enrollment)
* JMeter script (jmx)

---

h2. Setting up test data in Moodle

# Create a course for the test (or use an existing one).
# Restore the quiz backup (@.mbz@ file) into that course.
# Upload test users via _Site administration > Users > Accounts > Upload users_.
** The CSV file contains 5000 users. Edit the @course@ column to match your course shortname.
** Set "Force password change" to _None_.
# Enroll all users in the course (handled by the upload if the CSV includes the course column).

If the CSV upload through the web interface fails, the CLI script @admin/tool/uploaduser/cli/uploaduser.php@ can be used instead.
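
A sketch of the CLI route (run from the Moodle root as the web-server user; option names vary between Moodle versions, so check @--help@ first):

```shell
# List the options supported by your Moodle version first:
php admin/tool/uploaduser/cli/uploaduser.php --help

# Typical invocation (illustrative -- verify flag names against --help):
sudo -u www-data php admin/tool/uploaduser/cli/uploaduser.php \
    --file=/path/to/perftestusers5000.csv
```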

After the test, verify that attempts and answers were recorded correctly in the quiz results.

*Resetting between runs:* if a test run is interrupted before finishing, attempts remain "in progress".
Users cannot start a new attempt in that state. Reset with:

<pre>
UPDATE mdl_quiz_attempts SET state='finished' WHERE quiz=<quiz_id>;
</pre>
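
The same reset from the shell via the @mysql@ client (database name, credentials, and quiz id are placeholders to adjust for your installation):

```shell
# Count attempts per state first, to see how many are stuck "inprogress":
mysql -u moodle -p moodle -e \
  "SELECT state, COUNT(*) FROM mdl_quiz_attempts WHERE quiz=3 GROUP BY state;"

# Then apply the reset:
mysql -u moodle -p moodle -e \
  "UPDATE mdl_quiz_attempts SET state='finished' WHERE quiz=3;"
```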

---

h2. JMeter script configuration

The script requires the JMeter Plugins Manager. Download @plugins-manager.jar@ from
https://jmeter-plugins.org/install/Install/ and place it in @lib/ext/@ (restart JMeter).

Tested with JMeter 5.6.3 and Moodle 4.5.

h3. User-defined variables (Test Plan level)

All parameters are configured as user-defined variables at the top of the Test Plan:

|_.Variable|_.Description|_.Example|
|@curso@|Course ID (numeric)|@2@|
|@prueba@|Quiz module ID (numeric, the @cmid@)|@2@|
|@host@|Server hostname (no protocol)|@moodle-perf.example.edu.uy@|
|@servidor@|Server URL with protocol|@https://moodle-perf.example.edu.uy@|
|@port@|HTTPS port|@443@|
|@csvfile@|Absolute path to the users CSV file|@/home/user/perftestusers5000.csv@|
|@twaitmin@|Minimum think time in milliseconds|@30000@|
|@twaitmax@|Maximum think time in milliseconds|@90000@|
|@assertiontext1@|Text asserted on quiz pages (English)|@page@|
|@assertiontext2@|Text asserted on quiz pages (Spanish)|@página@|

The @assertiontext1@/@assertiontext2@ pair handles Moodle installations in either language.
They verify that quiz attempt pages loaded correctly after each @processattempt.php@ call.
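
A convenient JMeter pattern worth considering here (an option, not necessarily how this script is wired): define each variable as @${__P(name,default)}@ so it can be overridden per run from the command line without editing the @.jmx@:

```shell
# Works only if the Test Plan defines e.g. host as ${__P(host,<default>)}:
./jmeter -n -t moo4quiz.jmx -l out.jtl \
    -Jhost=other-host.example.edu.uy \
    -Jtwaitmin=20000 -Jtwaitmax=60000
```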

h3. Thread Group

Set the number of threads (VUs) and ramp-up time in the Thread Group. The script is configured
for a single iteration per thread (@LoopController.loops=1@): each virtual user logs in,
completes the quiz once, and exits. On error, the thread stops (@stopthread@).

We tested up to 3000 threads with a 180 s ramp-up.

Start with 1 thread to confirm the script runs without errors before scaling up.

h3. CSV Data Set Config

The script reads user credentials from the CSV file. Columns are: @user,n1,n2,mail,pwd@.
Set the @csvfile@ variable to the absolute path of your users file.

h3. Synchronizing Timer and Gaussian Random Timer

The script simulates students starting the exam at the same moment using two timers placed
inside the @startattempt@ transaction controller, just before the POST to @startattempt.php@:

* *Synchronizing Timer* (@groupSize=0@): holds all threads at this point until the entire
thread group has arrived, then releases them simultaneously. This models the worst-case
scenario of all students clicking "Start attempt" at exactly the same time.
* *Gaussian Random Timer* (deviation @30000@ ms, offset @5000@ ms): after release, each
thread waits a random delay before actually sending the startattempt request. With these
settings, approximately *66% of threads fire within the first 30 seconds*, and ~95% within
the first minute. This models the real-world observation that in large exams, 55–75% of
students tend to start in the first minute, making this a conservative worst-case scenario.
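
The percentages above can be sanity-checked with a quick Monte-Carlo run, assuming the Gaussian Random Timer computes @delay = |offset + N(0, deviation)|@ (that matches JMeter's implementation as far as I can tell, but verify against your JMeter version):

```shell
# Draw 200k delays with deviation 30 s and offset 5 s (Box-Muller normal),
# then report how many land inside the 30 s / 60 s windows:
awk 'BEGIN {
    srand(); n = 200000; dev = 30000; off = 5000
    for (i = 0; i < n; i++) {
        u1 = rand(); u2 = rand()
        z = sqrt(-2 * log(1 - u1)) * cos(6.283185307 * u2)   # N(0,1)
        d = off + z * dev; if (d < 0) d = -d                 # abs(), as JMeter does
        if (d <= 30000) a++
        if (d <= 60000) b++
    }
    printf "within 30 s: %.1f%%   within 60 s: %.1f%%\n", 100*a/n, 100*b/n
}'
# prints roughly: within 30 s: ~67%   within 60 s: ~95%
```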

h3. Think time

Each quiz page uses a @TestAction@ pause of @${twait}@ milliseconds, where @twait@ is
initialized per thread at the start of the test by a JSR223 sampler (Groovy) that picks a
random value uniformly between @twaitmin@ and @twaitmax@. This means each virtual user has
a *fixed* think time for the entire test, but different users have different values — a
reasonable model for students answering at different speeds.
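
The per-thread draw, rendered in shell for illustration (the script itself does it in Groovy inside the JSR223 sampler):

```shell
# One uniform value in [twaitmin, twaitmax], drawn once per virtual user
# and then reused for every page of that user's attempt:
twaitmin=30000
twaitmax=90000
twait=$(awk -v lo="$twaitmin" -v hi="$twaitmax" \
    'BEGIN { srand(); printf "%d", lo + rand() * (hi - lo) }')
echo "think time for this user: ${twait} ms"
```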

h3. Script coverage and limitations

The script covers the main HTTP interactions of the quiz flow: login, course page, quiz view,
exam start, answering all 16 questions across 8 pages (via @processattempt.php@), and
finishing the attempt. Quiz attempts and answers are saved correctly in Moodle — this has
been verified after each test run by reviewing the quiz results in the admin interface.

The script does not replicate every browser request. In particular, it omits most AJAX calls
(e.g. autosave, flag updates, analytics beacons) and does not fetch all embedded static
resources on each page. A real browser session generates significantly more requests per page.
However, the requests that drive server-side PHP processing — which is the actual bottleneck —
are all included. The "get static resources" controller in the script is disabled; static
asset delivery is better handled by @mod_cache@ or a dedicated CDN, and better evaluated
separately rather than through JMeter.

The @sesskey@ token (Moodle's CSRF protection) is extracted dynamically via regex after login
and after the quiz view page, and included in all subsequent POST requests. The @attempt@ ID
is similarly extracted after @startattempt.php@ and used throughout.
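
As an illustration of what the extractor matches, Moodle embeds the token in the page's @M.cfg@ JavaScript object; the equivalent extraction in shell (sample line fabricated, and the actual JMeter pattern may differ):

```shell
# Fabricated fragment of a Moodle page containing the token:
page='M.cfg = {"wwwroot":"https://moodle-perf.example.edu.uy","sesskey":"Ab12Cd34Ef"};'

# Pull out the value the same way a regex extractor would:
sesskey=$(printf '%s' "$page" | grep -o '"sesskey":"[A-Za-z0-9]*"' | cut -d'"' -f4)
echo "$sesskey"   # Ab12Cd34Ef
```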

---

h2. Running the tests

From the command line (recommended for load tests — GUI mode has overhead):

<pre>
HEAP="-Xms8g -Xmx8g" JVM_ARGS="-XX:+UseG1GC" \
  ./jmeter -n -t /path/to/moo4quiz.jmx -l /path/to/output.jtl
</pre>

Note: JMeter takes its JVM heap settings from the @HEAP@ environment variable read by the
@jmeter@ startup script, not from command-line flags.

*JMeter machine sizing:* use a dedicated machine. We used 32 GB RAM / 8 CPU.
With a single client we ran up to 2000 VUs reliably. For 3000 VUs we did not observe
client-side errors, but if you do, consider splitting the load across two JMeter instances
or using JMeter's distributed testing mode.
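
In JMeter's distributed mode the controller addresses the injectors with @-R@ (sketch; hostnames are placeholders, each injector must be running @jmeter-server@ and hold its own copy of the users CSV):

```shell
# Controller side:
./jmeter -n -t moo4quiz.jmx -l out.jtl \
    -R injector1.example.edu.uy,injector2.example.edu.uy
```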

---

h2. Server monitoring during tests

Capture server-side metrics during each run to correlate with JMeter results.
A simple shell script that samples the following every 10 seconds is sufficient:

* @uptime@ — load average (1, 5, 15 min)
* @free -m@ — RAM and swap used
* @ps aux | grep apache@ — Apache process count
* @curl -s 127.0.0.1/fpm-status@ — PHP-FPM pool status (total, busy, idle workers)
* @ss -s@ or @netstat@ — HTTPS connections (port 443), MySQL TCP connections

Write output as CSV (one line per interval) for easy post-processing.
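
A minimal sketch of such a sampler (field choices follow the list above; the grep patterns and the Apache process name are assumptions to adjust for your distribution):

```shell
# sample() prints one CSV line; loop it at your chosen interval.
sample() {
    load1=$(cut -d' ' -f1 /proc/loadavg)                  # 1-min load average
    mem_used=$(free -m | awk '/^Mem:/ { print $3 }')      # RAM used, MB
    apache=$(ps aux | grep -c '[a]pache2')                # [a] excludes grep itself
    https=$(ss -tn 2>/dev/null | grep -c ':443 ')         # HTTPS connections
    echo "$(date +%s),$load1,$mem_used,$apache,$https"
}

echo "epoch,load1,mem_used_mb,apache_procs,https_conns"
sample
# Production use: while true; do sample >> metrics.csv; sleep 10; done
```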

For the database server, @nmon -f -s 5@ provides CPU, memory, disk I/O, and network at
5-second granularity.

---

h2. Tips and lessons learned

* *Repeat each test at least twice.* On shared VMware infrastructure, hypervisor contention
from other VMs introduces variability. Two identical runs let you distinguish real behavior
from noise.
* *The startattempt spike is the critical moment.* Everything else in the quiz is easy for
the server. Focus your analysis on the synchronized exam start — that is where the system
will fail first.
* *mod_cache on the reverse proxy* improves static asset delivery (10–35% latency reduction)
but can worsen the startattempt spike by delivering pre-exam pages faster, clustering VUs
more tightly at the synchronization point.
* *CPU, not RAM, is the bottleneck* with PHP-FPM + OPcache. Don't over-provision RAM at the
expense of CPU cores. See [[Load_test_results_moodle45_2026]] for detailed data.
* *InnoDB buffer pool sizing matters.* If the working dataset fits in the buffer pool,
the database server will show near-zero read I/O during the test. Ensure
@innodb_buffer_pool_size@ is set appropriately (we used 16 GB RAM on the DB server).
* *Check OPcache hit rate* during tests (@opcache_get_status()@). With a warm cache,
hit rate should be above 99%. A cold OPcache at test start will produce artificially
poor results for the first few requests.
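
To check the buffer-pool claim during a run, compare the two counters below (credentials are placeholders). A ratio of @read_requests@ to @reads@ in the thousands or better means the working set is being served from memory:

```shell
# Logical reads (from the pool) vs. physical reads (from disk):
mysql -u root -p -e \
  "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
```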