How to run the tests (moodle 45)

Version 1 (Emilio Penna, Thursday, 19 March 2026 10:03:19 -0300) → Version 2/9 (Emilio Penna, Thursday, 19 March 2026 10:07:52 -0300)

h1. Moodle Quiz Load Testing — Setup and Procedure (Moodle 4.5, 2026)

This page describes how to set up and run the quiz load tests.
For actual results and infrastructure sizing data, see: [[Load_test_results_moodle45_2026]]

Attached to this page:
* Moodle quiz backup (16-question quiz, 8 pages)
* Test users file (CSV, for bulk upload and enrollment)
* JMeter script (jmx)

---

h2. Setting up test data in Moodle

# Create a course for the test (or use an existing one).
# Restore the quiz backup (@.mbz@ file) into that course.
# Upload test users via _Site administration > Users > Accounts > Upload users_.
#* The CSV file contains 5000 users. Edit the @course@ column to match your course shortname.
#* Set "Force password change" to _None_.
# Enroll all users in the course (handled by the upload if the CSV includes the course column).
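
For reference, Moodle's standard "Upload users" template uses header columns like the following, with @course1@ as the enrollment column (the sample row here is illustrative, not taken from the attached file):

<pre>
username,firstname,lastname,email,password,course1
perfuser0001,Perf,User0001,perfuser0001@example.com,Password.1,PERFCOURSE
</pre>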

After the test, verify that attempts and answers were recorded correctly in the quiz results.

*Resetting between runs:* if a test run is interrupted before finishing, attempts remain "in progress".
Users cannot start a new attempt in that state. Reset with:

<pre>
UPDATE mdl_quiz_attempts SET state='finished' WHERE quiz=<quiz_id>;
</pre>
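
Before and after the reset, you can check how many attempts are in each state with a query like this (a sketch assuming the default @mdl_@ table prefix; substitute your quiz id as above):

<pre>
SELECT state, COUNT(*)
FROM mdl_quiz_attempts
WHERE quiz = <quiz_id>
GROUP BY state;
</pre>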

---

h2. JMeter script configuration

The script requires the JMeter Plugins Manager. Download @plugins-manager.jar@ from
https://jmeter-plugins.org/install/Install/ and place it in @lib/ext/@ (restart JMeter).

Tested with JMeter 5.6.3 and Moodle 4.5.
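
The download can be scripted; a sketch assuming @$JMETER_HOME@ points at your JMeter directory (the @/get/@ URL is jmeter-plugins.org's latest-version download link):

<pre>
curl -L -o "$JMETER_HOME/lib/ext/plugins-manager.jar" \
  https://jmeter-plugins.org/get/
</pre>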

h3. User-defined variables (Test Plan level)

All parameters are configured as user-defined variables at the top of the Test Plan. Open the @.jmx@ file and adjust:

|_.Variable|_.Description|_.Example|
|@curso@|Course ID (numeric)|@2@|
|@prueba@|Quiz module ID (numeric, the @cmid@)|@2@|
|@host@|Server hostname (no protocol)|@eva-perf.seciu.edu.uy@|
|@servidor@|Server URL with protocol|@https://eva-perf.seciu.edu.uy@|
|@port@|HTTPS port|@443@|
|@csvfile@|Absolute path to the users CSV file|@/home/user/perftestusers5000.csv@|
|@twaitmin@|Minimum think time in milliseconds|@30000@|
|@twaitmax@|Maximum think time in milliseconds|@90000@|
|@assertiontext1@|Text asserted on quiz pages (English)|@page@|
|@assertiontext2@|Text asserted on quiz pages (Spanish)|@página@|

The @assertiontext1@/@assertiontext2@ pair handles Moodle installations in either language.
They verify that quiz attempt pages loaded correctly after each @processattempt.php@ call.

h3. Thread Group

Set the number of threads (VUs) and ramp-up time in the Thread Group. The script is configured for a single iteration per thread (@LoopController.loops=1@): each virtual user logs in, completes the quiz once, and exits. On error, the thread stops (@stopthread@).

We tested up to 3000 threads with a 180 s ramp-up.

* Start with 1 thread to confirm the script runs without errors before scaling up.
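
A smoke run can even be launched without editing the plan, if you parameterize the Thread Group (hypothetical setup: thread count as @${__P(threads,1)}@ and ramp-up as @${__P(rampup,1)}@) and override the properties on the command line with @-J@:

<pre>
./jmeter -n -t moo4quiz.jmx -l smoke.jtl -Jthreads=1 -Jrampup=1
</pre>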

h3. CSV Data Set Config

The script reads user credentials from the CSV file. Columns are: @user,n1,n2,mail,pwd@.

Set the @csvfile@ variable to the absolute path of your users file.
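
The file looks like this (rows are illustrative; there is no header row when the variable names are set in the CSV Data Set Config element itself):

<pre>
perfuser0001,Perf,User0001,perfuser0001@example.com,Password.1
perfuser0002,Perf,User0002,perfuser0002@example.com,Password.1
</pre>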

h3. Synchronizing Timer and Gaussian Random Timer

The script simulates students starting the exam at the same moment using two timers placed inside the @startattempt@ transaction controller, just before the POST to @startattempt.php@:

* *Synchronizing Timer* (@groupSize=0@): holds all threads at the exam start point until the entire thread group has arrived, then releases them simultaneously. This models the worst-case scenario of all students clicking "Start attempt" at exactly the same time.
* *Gaussian Random Timer* (deviation @30000@ ms, offset @5000@ ms): after release, each thread waits a random delay before actually sending the startattempt request. With these settings, approximately *66% of threads fire within the first 30 seconds*, and ~95% within the first minute. This models the real-world observation that in large exams, 55–75% of students tend to start in the first minute, making this a conservative worst-case scenario.
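
The quoted percentages can be sanity-checked with a quick Monte-Carlo sketch. It assumes the timer's delay is @|g * deviation + offset|@ with @g ~ N(0,1)@ — an approximation of JMeter's implementation, not a specification:

<pre>
# Monte-Carlo sanity check of the timer settings (deviation 30000 ms,
# offset 5000 ms). ASSUMPTION: delay = |g * deviation + offset|.
awk 'BEGIN {
  srand(7); n = 100000; within30s = 0
  for (i = 0; i < n; i++) {
    # Box-Muller transform: standard normal from two uniforms in (0,1]
    g = sqrt(-2 * log(1 - rand())) * cos(6.28318530718 * rand())
    d = g * 30000 + 5000
    if (d < 0) d = -d
    if (d <= 30000) within30s++
  }
  printf "%.3f\n", within30s / n > "/tmp/timer_within30.txt"
  printf "fraction of threads firing within 30 s: %.3f\n", within30s / n
}'
</pre>

Under this assumption the simulated fraction should come out close to the ~66% figure above.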

---

h2. Running the tests

From the command line (recommended for load tests — GUI mode has overhead):

<pre>
JVM_ARGS="-Xms8g -Xmx8g -XX:+UseG1GC" ./jmeter -n \
  -t /path/to/moo4quiz.jmx -l /path/to/output.jtl
</pre>

(The heap flags are JVM arguments, so they go in @JVM_ARGS@, which the @jmeter@ startup script picks up; alternatively edit @HEAP@ in that script.)

*JMeter machine sizing:* use a dedicated machine. We used 32 GB RAM / 8 CPU. With a single client we ran up to 2000 VUs reliably. For 3000 VUs we did not observe client-side errors, but if you do, consider splitting the load across two JMeter instances or using JMeter's distributed testing mode.
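
After the run, JMeter can generate its HTML report dashboard from the results file (the output directory must be empty or nonexistent):

<pre>
./jmeter -g /path/to/output.jtl -o /path/to/report_dir
</pre>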


h3. Think time

Each quiz page uses a @TestAction@ pause of @${twait}@ milliseconds, where @twait@ is initialized per thread at the start of the test by a JSR223 sampler (Groovy) that picks a random value uniformly between @twaitmin@ and @twaitmax@. Each virtual user therefore has a *fixed* think time for the entire test, but different users have different values — a reasonable model for students answering at different speeds.

---

h2. Server monitoring during tests

Capture server-side metrics during each run to correlate with JMeter results.

A simple shell script running at 10-second intervals, collecting the following, is sufficient:

* @uptime@ — load average (1, 5, 15 min)
* @free -m@ — RAM and swap used
* @ps aux | grep apache@ — Apache process count
* @curl -s 127.0.0.1/fpm-status@ — PHP-FPM pool status (total, busy, idle workers)
* @ss -s@ or @netstat@ — HTTPS connections (port 443), MySQL TCP connections

Write output as CSV (one line per interval) for easy post-processing.
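
As an illustration of the post-processing, a small awk sketch that pulls peak values out of such a CSV (the column layout here is hypothetical; adapt it to whatever your script records):

<pre>
# Build a tiny example metrics file, then report peak values.
cat > /tmp/metrics.csv <<'EOF'
ts,load1,busy_workers
1000,0.50,12
1010,4.20,180
1020,2.10,90
EOF
awk -F, 'NR > 1 {
  if ($2 > maxl) maxl = $2   # peak 1-min load average
  if ($3 > maxw) maxw = $3   # peak busy PHP-FPM workers
} END {
  printf "peak load1=%s peak_busy_workers=%s\n", maxl, maxw
}' /tmp/metrics.csv
</pre>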

For the database server, @nmon -f -s 5@ provides CPU, memory, disk I/O, and network at 5-second granularity.

h3. Script coverage and limitations

The script covers the main HTTP interactions of the quiz flow: login, course page, quiz view, exam start, answering all 16 questions across 8 pages (via @processattempt.php@), and finishing the attempt. Quiz attempts and answers are saved correctly in Moodle — this has been verified after each test run by reviewing the quiz results in the admin interface.

The script does not replicate every browser request. In particular, it omits most AJAX calls (e.g. autosave, flag updates, analytics beacons) and does not fetch all embedded static resources on each page. A real browser session generates significantly more requests per page. However, the requests that drive server-side PHP processing — which is where the actual bottleneck is — are all included. The "get static resources" controller in the script is disabled; static asset delivery is better tested via @mod_cache@ or a dedicated CDN rather than through JMeter.

The @sesskey@ token (Moodle's CSRF protection) is extracted dynamically via regex after login and after the quiz view page, and included in all subsequent POST requests. The @attempt@ ID is similarly extracted after @startattempt.php@ and used throughout.

---

h2. Tips and lessons learned

* *Repeat each test at least twice.* On shared VMware infrastructure, hypervisor contention from other VMs introduces variability. Two identical runs let you distinguish real behavior from noise.
* *The startattempt spike is the critical moment.* Everything else in the quiz is easy for the server. Focus your analysis on the synchronized exam start — that is where the system will fail first.
* *mod_cache on the reverse proxy* improves static asset delivery (10–35% latency reduction) but can worsen the startattempt spike by delivering pre-exam pages faster, clustering VUs more tightly at the synchronization point.
* *CPU, not RAM, is the bottleneck* with PHP-FPM + OPcache. Don't over-provision RAM at the expense of CPU cores. See [[Load_test_results_moodle45_2026]] for detailed data.
* *InnoDB buffer pool sizing matters.* If the working dataset fits in the buffer pool, the database server will show near-zero read I/O during the test. Ensure @innodb_buffer_pool_size@ is set appropriately (we used 16 GB RAM on the DB server).
* *Check OPcache hit rate* during tests (@opcache_get_status()@). With a warm cache,
hit rate should be above 99%. A cold OPcache at test start will produce artificially
poor results for the first few requests.
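
One way to read the hit rate during a run (a sketch; it assumes you can drop a file into the webroot, since @opcache_get_status()@ must run inside the web server's PHP, not the CLI — remove the file afterwards):

<pre>
cat > /var/www/html/opcache_check.php <<'EOF'
<?php
$s = opcache_get_status(false);
printf("opcache hit rate: %.2f%%\n",
       $s["opcache_statistics"]["opcache_hit_rate"]);
EOF
curl -s http://127.0.0.1/opcache_check.php
</pre>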