People say don't roll your own crypto but nobody ever warns you not to roll your own LMS (when you have minimal dev experience).
We’re a month into the launch of my course and, with around 140 paying students across red, purple, GRC, and blue team backgrounds, I’m genuinely really happy with how it’s going. I’ve had a lot of constructive feedback from the community that has formed around it, and that has been one of the best parts of the whole experience.
Thanks to every single one of you, I’ve been able to continue building this into something great. Many of you likely won’t know me in real life, but I’ve always been a massive advocate for helping build community. Because of that, I’ve taken some of the profits from the course and put them back into supporting the wider scene.
- Sponsoring Steelcon (https://www.steelcon.info/)
- BSides Leeds (https://bsidesleeds.com/) - Yet to be announced at time of writing
- Hack Glasgow (https://hackglasgow.live/) - Yet to be announced at time of writing
All three are UK conferences happening this year. If there are other cool conferences that need sponsorship and I can make it work, I'll certainly support them!
I’ve also ordered stickers and some T-shirts for the brand, and I’ll be getting a run of challenge coins made up as well, with some unique designs and other bits in the works. If you have ideas for swag or things you’d like to see built, let me know. I’m not only a hacker, I’m also a creative, and I enjoy making things.
TL;DR - Rolling your own LMS is absolutely not the easy option, but the functionality and control that come with it are incredibly rewarding. Building a platform while also building a community around it has been even more rewarding. Thank you, genuinely, to everyone who has supported the course so far. It really does make me smile hearing that people are enjoying the content I’ve written. You can read the public testimonials here:
MAE - Malwareless Adversary Emulation
Advanced red team training — 13 modules on adversary emulation without traditional malware. Learn the techniques that bypass modern defences.
Malwareless Adversary Emulation

If you're more of a visual learner here's a 17 minute video overview of the entire platform and available functions:
ZephrSec LMS March 2026 walkthrough
Anyways, if you want to get into the nitty-gritty details of the journey of rolling my own LMS, here it is!
Learning Management Systems are everywhere, and ahead of launching Malwareless Adversary Emulation (MAE) I spent a fair bit of time looking into what already existed. There are a lot of options, but many of them are either hilariously expensive for what they offer, or they lack features I considered important.
I got chatting to mr.d0x, one of the co-owners of MalDev Academy. The LMS they've built is custom and has some nice features I wanted to replicate for my course. After chatting through the issues they'd encountered and some considerations, I embarked upon writing my own thing (probably not the smartest decision I've ever made, but we move), using AWS serverless architecture and React to make things pretty, functional, and secure.
What exists out there?
Most LMS platforms are, in my opinion, not great. I’ve hated pretty much every single one I’ve used or tried. Before committing to building my own, I wanted to properly map out what was already available so I could avoid reinventing the wheel if possible.
The Decision Matrix
| Platform | Typical Cost (per year, at time of writing) | Source / Receipt | Notes |
|---|---|---|---|
| Moodle (self-hosted) | $0 software | Moodle is a free, open-source (GPL) LMS | Self-hosting cost varies by infrastructure |
| MoodleCloud (hosted) | $170–$2,110+ | Official MoodleCloud pricing plans | SaaS offering of Moodle with plan limits |
| Leanpub | $0 (grandfathered) | Leanpub free plan available | Allows hosting ebooks; not a full LMS (no official LMS pricing page) |
| LearnWorlds | $29/mo+ ($348+/yr) | LearnWorlds pricing plans | Entry tier ~$29/mo; higher tiers include more features |
| Teachable | $29–$39+/mo | Various pricing outlines | Starter ~$29/mo (billed annually) plus transaction fees |
| Thinkific | Free to ~$39+/mo | Thinkific pricing tiers | Has a free plan; paid from ~$39/mo |
| Kajabi | ~$149+/mo | Kajabi pricing tiers | Basic ~$149/mo; no free plan |
| Custom build (e.g., AWS) | Varies (hosting + dev) | Hosting cost guidance | Depends on chosen AWS services; no fixed platform fee |
My take on the major options was roughly this:
Moodle
Free is always tempting, but the UI looks like it is from 2005, and my late friend Paul Mason spent a lot of time hardening Moodle and pointing out its problems. Managing a LAMP stack, updates, and plugin compatibility issues was not worth the supposed savings. Video handling also felt mediocre at best.
LearnWorlds
Nice video features, but once you get into the tiers that matter, the pricing becomes painful. Add transaction fees on top and the maths quickly becomes less attractive.
Teachable
Very similar story. Decent enough for many use cases, but limited customisation and the same sort of fee structure. I could not build the protections or workflows I wanted, and you are ultimately locked into someone else’s platform decisions.
Thinkific
A bit more reasonable, but still generic in the areas I cared about. Limited flexibility, generic branding, and API constraints that would have ruled out some of the custom functionality I wanted.
Kajabi
A premium all-in-one option, but expensive enough that it did not make sense for a first course launch. Great if you are running a larger operation, less compelling when you want deep control and are still building out the model.
In the end, the maths was simple enough. A custom build meant complete control, lower long-term platform costs, better security, and far more room to expand because I know the codebase inside out.
How It Started
Originally I built a very simple MVP, which is usually how I approach scripts and projects. It worked, but it lacked a lot of the features I really wanted in a platform. After that, I sat down and started properly planning what I wanted to build, what features mattered, and why.
I also asked the community what they would want to see, and a few people came back with very solid suggestions. Translations and accessibility stood out immediately because they are not things many available platforms do particularly well. Another suggestion was to have badges because the logo I'd created for my course would sit nicely as a badge.

Architecture and Design
Once I had decided that rolling my own thing was what I was going to do, the next step was to throw up some designs for the user interface (UI), user experience (UX), front end and back end, plus an admin UI.

I also needed to work out what stack I was going to use for a balance of usability, visuals, and security, and to ensure that all of the designs I was putting together would stand up against... the internet. After all, I was writing a security course for offensive security professionals, and I'd like it to be a secure implementation and something I'd actually like to use.
The Stack
Frontend:
- React with Vite (very fast builds)
- Tailwind CSS (utility-first styling)
- React Router (client-side routing)
Backend:
- AWS Lambda (Node.js runtime, I've used it previously for pentesting and was familiar with how it works)
- API Gateway (RESTful endpoints, similar to Lambda in that I know how it works and was easier to adopt with APIs)
- DynamoDB (NoSQL database)
- S3 + CloudFront (content delivery)
- Cognito (user authentication, feature rich options and easier to implement with the existing stack)
Why Serverless?
Traditional servers are expensive and require constant maintenance. With serverless:
- Pay only for actual usage (not idle time)
- Auto-scaling built-in (handles traffic spikes)
- No server management (AWS handles it and thus securing it is slightly easier)
- Global CDN with CloudFront (fast content delivery worldwide)
Architecture Diagram
┌─────────────────────────────────────────────────────────────────┐
│ User Interface │
│ React SPA hosted on S3, delivered via CloudFront CDN │
└────────────────────┬────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ API Gateway │
│ RESTful API with Cognito Authoriser │
└────────────────────┬────────────────────────────────────────────┘
│
┌────────────┼────────────┐
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Lambda: │ │ Lambda: │ │ Lambda: │
│ Auth │ │ Content │ │ Progress │
│ Handling │ │ Delivery │ │ Tracking │
└──────┬───────┘ └──────┬───────┘ └──────┬───────┘
│ │ │
└────────────────┼────────────────┘
│
┌───────────────┼───────────────┐
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Cognito │ │ DynamoDB │ │ S3 │
│ User Pool │ │ (Data) │ │ (Videos) │
└──────────────┘ └──────────────┘ └──────────────┘
The Nested Stack Problem
AWS CloudFormation has a hard limit of 500 resources per stack. At first that sounded like plenty, but because I'm not the most efficient coder (nor actually a developer), I hit that limit fairly quickly on my first few attempts at building and deploying a stack!
My initial monolithic stack had 527 resources (13 Lambda functions, each with its own IAM roles, security groups, and DynamoDB tables: way over the limit).
Solution: Nested stack architecture
```javascript
// Parent stack: lms-stack-nested.js
const { App, Stack, NestedStack } = require('aws-cdk-lib');

const app = new App();
const parentStack = new Stack(app, 'MaeLmsStackNested');

// Nested stacks for logical separation: each gets its own 500-resource budget
const apiStack = new NestedStack(parentStack, 'ApiNestedStack');   // ~150 resources
const userStack = new NestedStack(parentStack, 'UserNestedStack'); // ~120 resources

// Parent stack keeps shared resources (~180 resources)
// Total: well under 500 per stack, scales beautifully
```
This architectural decision saved the entire deployment and made future scaling trivial. Moving to a nested approach also makes life easier when it comes to expanding functionality.
Challenge 1: Dual Authentication System
Most LMS platforms use a single auth system. I wanted two, for separation of privilege and a little extra obscurity on top:
- Students: Standard email/password (Cognito) + MFA
- Admins: Magic link authentication (no passwords to steal) + ZeroTrust via an external auth provider + MFA and other stacked security
Why Magic Links for Admins?
Traditional admin panels are prime targets for credential stuffing. Magic links provide:
- No password database to breach or target
- Time-limited access (5-minute expiration)
- One-time use tokens
- HMAC signature validation (prevents tampering)
- Anti user-enumeration (same response for valid/invalid emails)
Security Benefits:
- No credential database
- Phishing-resistant (links expire in 5 minutes and can only come from domains on the approved list; there are additional anti-spoofing checks in place too)
- Audit trail (every login is logged, and anomaly detection raises alerts when things look off: geographic logins, odd times, the usual fun stuff)
- No password reuse attacks, because there is no password to start with
Challenge 2: Serverless Cold Starts
Problem: Lambda functions "cold start" when inactive, causing 1-3 second delays.
Solution: Multi-pronged approach
- Provisioned Concurrency (for critical functions)
- Keep-Alive Pings (EventBridge scheduled every 5 minutes)
- Optimise Bundle Size (reduce initialisation time)
Results:
- Cold start: 2000ms → 400ms
- User-facing impact: negligible
Challenge 3: Video Content Delivery at Scale
Problem: Storing and streaming high-quality videos for 100+ students is expensive and slow.
Initial Approach (Failed):
- Direct S3 URLs → slow, expensive bandwidth, no anti-piracy security
- Cost: $200/month for 100 students
Final Solution:
- S3 (storage) + CloudFront (CDN) + HLS streaming
- Cost: $40/month for same usage (85% reduction)
Architecture:
- Upload: Admin uploads MP4 to S3
- Processing: MediaConvert generates HLS segments
- Delivery: CloudFront serves with caching
- Security: Signed URLs (expire after viewing session)
Benefits:
- Global CDN (fast delivery worldwide)
- Automatic caching (repeat views are free)
- Adaptive bitrate (HLS adjusts to connection speed)
- Signed URLs (prevent hotlinking)
Challenge 4: Closed Captions, Transcription, and Translation at Scale
Problem: The course covers advanced red teaming concepts with heavy technical terminology (tool names, attack technique names, niche jargon), and a chunk of the audience isn't native English speakers. Accessibility matters, and so does reach.
The Pipeline:
1. Transcription with Whisper.cpp (local, offline)
Rather than paying for a cloud transcription service, I ran Whisper.cpp locally against every video. It's a C++ port of OpenAI's Whisper model, runs on CPU, and produces WebVTT subtitle files directly. Quality is surprisingly good for plain English, but cybersecurity content is a different beast.
2. Manual QA for Technical Terms
This step was non-negotiable. Whisper confidently transcribes things like "Mimikatz" as "me me cats" and "LSASS" as "lasses". Every generated VTT file got a manual pass to catch and fix technical terms, tool names, and acronyms before they went anywhere near a translation engine. Rubbish in, rubbish out: feeding bad transcriptions into AWS Translate would have produced unusable results in 8 languages simultaneously and cost me even more.
3. Translation via AWS Translate
Once the English captions were QA'd, a Lambda function processes them through AWS Translate to produce subtitles in 8 languages:
| Language | Code |
|---|---|
| Chinese (Simplified) | zh |
| Japanese | ja |
| Korean | ko |
| Spanish | es |
| Portuguese (Brazil) | pt-BR |
| Arabic | ar |
| German | de |
| French | fr |
The translated VTT files are stored alongside the English originals, and the HLS player serves whichever language the student selects from the CC menu. That meant no extra front-end work, since the player already iterates available subtitle tracks dynamically, and it also made adding a transcription button underneath the video easy from a UX perspective.
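The key property of the translation step is that only cue text goes through the translator while timestamps and cue structure pass through untouched. A sketch of that transform (the real pipeline would call AWS Translate's TranslateText API where the injected function sits):

```javascript
// Translate a WebVTT file line by line, preserving the header, cue numbers,
// and "start --> end" timestamp lines exactly as-is.
function translateVtt(vtt, translateLine) {
  return vtt.split('\n').map((line) => {
    const isTimestamp = line.includes('-->');
    const isStructural =
      line.startsWith('WEBVTT') || line.trim() === '' || /^\d+$/.test(line.trim());
    return isTimestamp || isStructural ? line : translateLine(line);
  }).join('\n');
}
```

Running this once per target language against the QA'd English file gives you the 8 translated VTT tracks.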
The Cost Reality:
There are cheaper ways of doing this, for sure: running AWS Translate across 52 videos' worth of subtitles in 8 languages adds up really fast. But it's short-term pain in the wallet for long-term gain in students' experience of the course. I never set out to make tonnes of money from this course; much the same as with my books, the goal has always been to pay it forward, make it as accessible to as many people as I can, give people the best experience possible, and fix bugs as and when they arise as quickly and efficiently as possible.
But the reasoning is straightforward: the course content is highly technical and deeply niche, and if someone in Japan or Brazil wants to learn this material, bad subtitles are worse than no subtitles. I'd rather eat the translation cost once than have someone struggle through the content or give up entirely.
Lessons:
- Whisper.cpp is excellent for a first pass but always needs a human review for domain-specific content
- Manual QA before translation is critical in English, as errors multiply across every language if not fixed in the Whisper run (which I found out with one of the module 0 videos as a test run; it cost me a few $, but better that than chaos at scale)
- The upfront translation cost is a one-time hit; the benefit compounds across every student who needs it, plus the transcriptions are available in written form to follow along too.
Challenge 5: AWS SDK v2 to v3 Migration Trap
Bug: Lambda Node.js 24 runtime doesn't include AWS SDK v2
```javascript
// This FAILS in production (Runtime.ImportModuleError):
// the Node.js 24 runtime no longer bundles AWS SDK v2
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();
```

```javascript
// This WORKS: SDK v3 modular clients ship with the runtime
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');

const client = new DynamoDBClient({});
const dynamodb = DynamoDBDocumentClient.from(client);

// inside an async handler:
const result = await dynamodb.send(new GetCommand({ TableName, Key }));
```
Lesson: Always test with the correct runtime. Local development was fine (SDK v2 installed); production crashed hard. I learnt the hard way that local testing only goes so far, and that deploying a test harness makes life 10x easier for debugging. This is where using Claude and other local LLMs came into their own, writing a full test suite for the LMS to evaluate things before pushing to live and to check my own work!
Challenge 6: N+1 Query Problem in Admin Panel
Problem: Loading 100 users = 200+ API calls (classic N+1 query)
Lesson: Denormalisation (yes, it's a real word; also spelt "denormalization" in American English) in NoSQL is your friend. DynamoDB isn't SQL: design for access patterns, not normalisation. Doing this trimmed down the number of requests being made significantly.
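The other half of killing the N+1 pattern is batching: instead of one GetItem per user, collect the keys and issue DynamoDB BatchGetItem requests, which accept at most 100 keys each. A small sketch of the chunking step (the surrounding BatchGetCommand calls are omitted):

```javascript
// Split a list of DynamoDB keys into batches of <=100, the BatchGetItem limit.
// Each batch then becomes one BatchGetCommand instead of up to 100 GetCommands.
function chunkKeysForBatchGet(keys, maxPerBatch = 100) {
  const batches = [];
  for (let i = 0; i < keys.length; i += maxPerBatch) {
    batches.push(keys.slice(i, i + maxPerBatch));
  }
  return batches;
}
```

For 100 users this turns 200+ round trips into two or three.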
Security and Features
Building a course aimed at offensive security professionals means your audience will try to break it. Some out of curiosity, some because that's just what we do. I designed the security with that very much in mind and also put as many protections in place as possible.
The stack has several controls in place to prevent FAFO, but also to maintain usability.
Web Application Firewall (WAF)
CloudFront sits in front of everything, and WAF rules filter traffic before it hits the API layer. This covers rate limiting at the edge, geographic restrictions (the course isn't available in countries sanctioned by the UK, which is both legally required and practically sensible), and custom rules that catch common automated attack patterns. The WAF is something I revisit frequently to ensure it steps up aggressively (too aggressively in a recent update, but that's since been fixed).
Rate Limiting and CSRF Protection
Every Lambda endpoint has rate limiting applied. Brute force attempts, credential stuffing, and API abuse all get caught before they cause damage. CSRF protection is enforced on all state-changing endpoints, not because Lambda is inherently vulnerable, but because defence in depth is the whole point, and having popped many apps in my time that lacked it, I know it's an important control. There's also rate limiting in place if you decide to click through the modules too quickly, to prevent scraping.
IDOR Protection
One of the earlier security reviews I ran on myself caught several Insecure Direct Object Reference vulnerabilities where a user could theoretically access another user's progress data or certificates by guessing an ID. Multiple rounds of fixes went in to ensure every data access endpoint validates that the authenticated user is authorised to access the requested resource.
MFA Enforcement
Students can enable multi-factor authentication via TOTP. It's not forced, but it's available and I wanted to give everyone options as there's few platforms out there that offer it from a LMS perspective. Given that course content has real-world operational value, I didn't want accounts to be trivially compromised by password reuse (I also contemplated integrating HIBP's database as a lookup at password creation but the API overheads were a little too heavy with the current setup, I may review this in the future).

Cookies and CAPTCHA
Session tokens are stored in HttpOnly cookies rather than localStorage, which prevents any JavaScript-based exfiltration even if an XSS vulnerability somehow existed. Cloudflare Turnstile and hCaptcha handle bot detection on the signup and login flows, which has dramatically reduced fake account registrations. I opted for a two-CAPTCHA system: if one fails, it rolls over to the second, and it steps up aggressively if bots are detected. There are some additional anti-bot protections in place too, but I'm not going to tip my entire hand in this post.

Content Protection
There's a layer of copy protection on written course content: text selection restrictions, right-click blocking, and keyboard shortcut interception on sensitive pages. I'm not naive about this: someone determined enough will find a way around it. The goal isn't to make it impossible; it's to raise the cost of casual copying to the point where it's not worth the effort compared to just... buying the course. Those who know me have heard me rant and rave about the stuff I've designed on this front. I probably spent a large chunk of platform development and planning just on content protection and platform protections, because my blog and my books are already out there, but the course has a lot more content in it and it's at a premium (£349 for lifetime access for individuals at the time of writing!).
Audit Trail and Logs
I've spent enough time around Incident Response (IR) folks to realise that good logging is something I wanted from the outset, both from a security perspective and a debugging perspective; more is certainly better than less.
Every admin login, every admin action, and every significant event gets logged with a timestamp and IP. I also factored in emailing users when they perform actions, as a backup early-warning system in case their account had been compromised (through password reuse or similar). Geographic anomaly detection raises alerts: if an admin account attempts to log in from a country it's never been used from before, that gets flagged. The audit trail has already caught a couple of suspicious access patterns during the beta period, and I stepped up admin access to sit behind allowlists and zero trust networking that actually works (I tried to check something from a recent hack thursday event and was blocked by the WAF over 5G, so I've added additional controls to allow access from fingerprinted devices).
The Battle with CORS and CSP
I want to give this its own section because it was genuinely one of the most time-consuming and painful parts of the entire build. CORS and Content Security Policy are two of those things that seem straightforward until you're staring at a browser console full of errors at 4am wondering why an endpoint that worked yesterday has stopped working.
The architecture has multiple API Gateways: the student API, an admin auth API, an admin content API, and an admin management API, each on its own subdomain and in some instances a failover domain. CORS needs to be configured correctly on every single one, and the rules for each are slightly different depending on which origins are allowed to call them.
The first major mistake was starting with Access-Control-Allow-Origin: * across the board. It worked, everything loaded, and I moved on. From a security perspective this is obviously a terrible decision, because a wildcard origin on authenticated endpoints is not acceptable. Replacing it with explicit origin validation across 14 Lambda functions and multiple API Gateway stacks took a full evening and introduced a new class of bug: missing origins.
The pattern that kept biting me was adding a new Lambda function, wiring it to API Gateway, and forgetting to add SITE_URL to its environment variables (security != usability strikes again). Without it, the Lambda had no origin to return in the CORS response header and would either return nothing or fall back to a hardcoded value that was wrong for one of the two domains the platform runs on. Every time this happened, it manifested as an inexplicable CORS error in the browser that looked identical to a dozen other CORS errors I'd already fixed.
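The per-Lambda origin check that replaced the wildcard can be sketched like this (the `FAILOVER_URL` name is my assumption; the platform's env var naming may differ):

```javascript
// Echo the caller's Origin back only if it's on the allowlist; never '*'
// on authenticated endpoints. Unknown origins fall back to the primary site.
function corsHeaders(requestOrigin, allowed = [process.env.SITE_URL, process.env.FAILOVER_URL].filter(Boolean)) {
  const origin = allowed.includes(requestOrigin) ? requestOrigin : allowed[0];
  return {
    'Access-Control-Allow-Origin': origin || '',
    'Access-Control-Allow-Credentials': 'true',
    Vary: 'Origin', // caches must key on Origin when the allowed origin varies
  };
}
```

Forgetting to populate the allowlist env vars is exactly the failure mode described above: the function returns an empty or wrong origin and the browser reports a generic CORS error.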
A related problem: the frontend routes API calls by matching the endpoint path prefix against a list to decide which API base URL to use. Add a new endpoint and forget to register it in that list, and it silently routes to the wrong API entirely, which is a different domain, which CORS blocks. This happened at least three times with different features (ratings, testimonials, piracy stats) before I learnt to check the routing list every single time.
API Gateway has a binaryMediaTypes setting for handling binary responses like PDFs and images. At some point `*/*` ended up in that list; I added it for testing, probably copied from a Stack Overflow answer. This silently broke every single CORS preflight request across the entire platform. API Gateway handles CORS OPTIONS preflights using a MOCK integration, a built-in response with no Lambda involved. When `*/*` is in binaryMediaTypes, API Gateway tries to base64-encode the mock response body, which corrupts it, and every preflight returns a 500. Every API call from the browser started failing with a CORS error even though the CORS configuration was completely correct.
It took about two days to track down, because the error message was "CORS error", not "your binary media type configuration is mangling mock integration responses". Another thing I learnt from this entire ordeal: browser error messages aren't all that helpful at times.
Next up, CDK's CORS Preflight Bug
While I'm listing CORS horrors: CDK's defaultCorsPreflightOptions with allowCredentials: true has a bug (at least as of early 2026) where it sets the OPTIONS integration response to return status 204 but leaves the request template mapped to status 200. The mismatch means the integration response never matches and preflights fail. The fix requires going into the API Gateway console or CLI after every deployment and manually updating the OPTIONS method request template.
CSP Woes:
Content Security Policy headers are set in two places: the CloudFront response headers policy (applied at the CDN edge) and in Lambda middleware (applied to API responses). These need to stay in sync, and I found this out the hard way many times. They did not always stay in sync, due to code changes, quick fixes, and many other factors at play. The Lambda middleware CSP is necessary because API responses need their own headers for certain browser security checks. The CloudFront one is what the browser actually sees for HTML pages. When I updated one and forgot the other, things broke in interesting ways: sometimes immediately visible, sometimes only triggered by a specific browser or feature.
The directive that caused the most pain: 'unsafe-inline' in script-src. I started with it because various third-party scripts needed it. A pentest finding (correctly) told me to remove it. Removing it broke several things that had been relying on inline scripts without me realising, including some Stripe elements and parts of the HLS player initialisation. Getting everything to work without 'unsafe-inline' required a proper nonce-based approach and several iterations.

Cloudflare Turnstile (CAPTCHA) - Turnstile renders its challenge inside an iframe and uses about:blank frames internally for script execution sandboxing. Without about:blank in frame-src, browsers block the internal script execution and the CAPTCHA never loads. The error message "Blocked script execution in 'about:blank' because the document's frame is sandboxed" is not immediately obvious as a CSP problem if you're not looking for it; what it meant for me was that logins would silently fail with 'failed captcha' and not tell me much more.
HLS video player - hls.js creates blob URLs for video segments. The media-src directive needs blob: or the browser refuses to load the video. Found this one in production after a deploy, when every video broke simultaneously, which is why the original publish date was pushed back.

Cross-Origin-Embedder-Policy - I wanted to set require-corp for maximum isolation, but the platform embeds Stripe, Cloudflare Turnstile, and various other third-party content that doesn't set the required cross-origin headers. require-corp would have broken all of them. I ended up with unsafe-none, the unhappy medium between overly paranoid security and usability. It felt like a defeat, but alas, we move.
Stripe's Permissions-Policy - Early in development I had a Permissions-Policy header that used wildcard syntax (https://*.stripe.com) which is not valid in that header. Browsers silently ignored it rather than erroring, which meant I didn't notice for weeks until I started doing prod testing of Stripe payments.
One more CORS ballache: when a Lambda returns an error (4XX, 5XX) through API Gateway, the response goes through a different code path than a successful response. By default, API Gateway's error responses don't include CORS headers. So a 401 Unauthorized response would arrive at the browser without `Access-Control-Allow-Origin`, which the browser treated as a CORS error rather than an authentication error. The actual error was invisible; the browser just said "CORS." This took an embarrassingly long time to diagnose. The fix for this one was enabling API Gateway Responses with CORS headers on all error codes, across every nested stack.
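The gateway-side fix was enabling API Gateway's Gateway Responses with CORS headers; the complementary Lambda-side pattern, which I'll sketch here as an illustration (the wrapper name is mine), is making sure even thrown errors produce a response that carries CORS headers:

```javascript
// Wrap a handler so both success and failure responses carry CORS headers.
// Without them, the browser reports a 401/500 as a bare "CORS error".
function withCors(handler, origin) {
  return async (event) => {
    const cors = {
      'Access-Control-Allow-Origin': origin,
      'Access-Control-Allow-Credentials': 'true',
    };
    try {
      const res = await handler(event);
      return { ...res, headers: { ...cors, ...(res.headers || {}) } };
    } catch (err) {
      // error responses need CORS headers too, or the real status is invisible
      return { statusCode: 500, headers: cors, body: JSON.stringify({ error: 'internal' }) };
    }
  };
}
```

With both halves in place, a 401 arrives at the browser as a 401 rather than an opaque CORS failure.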
What I'd Do Differently
Set up a browser-based integration test that hits every API endpoint from the actual frontend origin on every deploy and checks for CORS errors. I have this now, but I added it reactively using Playwright. If I'd had it from the start, I would have caught most of these issues within minutes rather than hours, but that's down to me not being a dev.
For CSP, use a reporting endpoint from day one (report-uri or report-to). Rather than hunting for CSP violations in the browser console, you get them reported automatically. I added this late and immediately saw a handful of violations I hadn't known about.
The Build in Numbers
| Metric | Details |
|---|---|
| 1,698 commits | September 2025 to present (~5.5 months of development) |
| 342 Lambda functions | Covering auth, content delivery, progress, admin, anti-piracy, and more |
| 443 API endpoints | 182 GET, 159 POST, 37 DELETE, 30 PUT, 2 PATCH |
| 32 DynamoDB tables | From user sessions to leak detections |
| 298 React components | Plus 56 custom hooks and 12 context providers |
| 217 test files | Unit tests + 43 Playwright E2E specs |
| 9 CDK constructs | WAF, monitoring, MFA, admin auth, secrets, and more |
| 21 WAF rules | Including geo-blocking for 24 sanctioned countries |
| 13 CloudWatch alarms | Automated monitoring and alerting |
| 25 languages supported | 3,825 translation keys per language |
| 20 accessibility settings | Colourblind modes, dyslexia fonts, reading tools, motion controls |
| ~650,000 lines of code | Across JS, JSX, CSS, and infrastructure-as-code |
| 962 hours of music streamed | Listened to pretty much everything and anything on repeat throughout dev |
| 150 cans of Monster Ultra (aka White Monster) | Consumed to keep me sane |
| 1 Shoulder surgery | Not related to dev of platform but certainly a hurdle in the dev |
Watermarking and Friends
This is probably the most over-engineered part of the platform, and I have zero regrets: I enjoyed building it, and it drew on a lot of learning from a prior life of watermarking tooling and other things. I hope to do a proper talk about it at a conference at some point this year! It's worth noting that not everything is documented here; I'm not going to show you my whole hand.
Every video stream displays a dynamic visible watermark containing the student's details. It's subtle enough not to be distracting but prominent enough to make piracy obviously traceable. The watermark is generated server-side and composited into the stream rather than being a simple CSS overlay, which makes it significantly harder to remove cleanly. Sure, it won't stop some people, but if I can deter people enough, it stands as a nice challenge. On the topic of watermarking, here's a brief diagram detailing how A|B marking works:

Forensic Image Watermarking
I've spent countless hours playing around with steganography in my career, and I wanted to get this into my own platform for proper hardcore watermarking. All course images (diagrams, screenshots, attack path illustrations) pass through a watermarking Lambda before being served. Each image gets a user-specific invisible watermark embedded in the pixel data. If a watermarked image shows up on a paste site or forum, I can extract the watermark and identify which account it came from. This has already deterred a few people who I caught sharing content in private chats; turns out if you politely tell people "please don't FAFO", they learn from their actions.
Content Fingerprinting
Module API responses include a hidden fingerprint in the response metadata: an HMAC-based signature tied to the requesting user. If anyone publishes the raw API response content, the fingerprint identifies the source account. This was inspired by canary token techniques that red teamers will already be familiar with.
Leak Detection Dashboard
The admin panel has a forensic lookup tool and a leak detections tab. A daily automated crawler checks for content fingerprints appearing on paste sites and known file-sharing platforms and writes any matches to a detections table. When a match comes in, I can run a forensic lookup that correlates the fingerprint against user records and gives me the source account. It also correlates against download tracking records to build a timeline of when content was accessed and from where.
GeoIP Correlation and Escalation
When a piracy event is detected, the system correlates the requesting IP's geolocation, cross-references it with the student's known access locations, and generates an escalation alert if it looks like account sharing or credential leaking. This feeds into an admin alerts feed automatically and sends me push notifications when a match is identified. Here's an example of a user racking up strikes for certain actions:

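Stripped back, the escalation check is a distance comparison: how far is this new access from anywhere we've seen this student before? Here's a rough sketch using the haversine formula; the 500 km threshold and the data shapes are my own illustrative assumptions, not the platform's actual heuristics.

```javascript
// Great-circle distance between two { lat, lon } points, in km.
function haversineKm(a, b) {
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

// Flag an access if it lands far from every known location for the account.
function shouldEscalate(knownLocations, newLocation, thresholdKm = 500) {
  if (knownLocations.length === 0) return false; // nothing to compare yet
  const nearest = Math.min(
    ...knownLocations.map((loc) => haversineKm(loc, newLocation))
  );
  return nearest > thresholdKm;
}
```

A real implementation would also factor in timing (impossible-travel detection) and VPN egress ranges, which is why this feeds an alert queue for human review rather than acting automatically.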
User Experience and Interface
While having robust security controls is important, this is a course about technical offensive security. The audience is experienced practitioners: people who are used to terrible developer tooling, clunky interfaces, and documentation written by engineers for engineers. I wanted to do better than that and do what I do best: bring every person along for the ride and make sure there's no friction in using the platform.
The platform tracks progress at the section level, not just the module level. If you're halfway through a long module and close the browser, when you come back the Continue button takes you exactly where you left off within that section. Scroll position is restored on return, and reading progress within sections persists across sessions. This sounds trivial, but it required multiple iterations to get right; the first few implementations broke in various creative ways depending on browser, connection speed, and how aggressively the browser was caching state.
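The stable version boils down to storing which section you were in plus how far through it you'd scrolled as a ratio, then converting that back to a pixel offset on return. This is a simplified sketch of the idea, with illustrative names; the real implementation has to fight the browser quirks mentioned above.

```javascript
// Record the reader's position: section plus a 0..1 scroll ratio, so the
// restore works even if the section's rendered height changes.
function saveProgress(moduleId, sectionId, scrollRatio) {
  return {
    moduleId,
    sectionId,
    scrollRatio: Math.min(1, Math.max(0, scrollRatio)), // clamp to 0..1
    savedAt: Date.now(),
  };
}

// Turn a stored ratio back into a pixel offset for the restored section.
function restoreOffset(progress, sectionHeightPx) {
  return Math.round(progress.scrollRatio * sectionHeightPx);
}
```

Storing a ratio rather than a raw pixel offset is the detail that makes this survive font-size changes, window resizes, and content edits between visits.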

As the course is made up of written content and video (plus deployable labs in your own environment), I wanted a robust video player setup. I contemplated hosting on an external site, but that opened the door to piracy risks, so I decided: F it, I'm rolling my own!
Beyond the HLS streaming and closed captions already covered:
- Playback speed persistence - if you prefer 1.5x, it stays at 1.5x across every video.
- Interactive transcript - every video has a scrollable transcript panel below it. Click any line to jump directly to that point in the video. The transcript auto-scrolls to follow playback position. This is particularly useful for the technical content where you want to quickly reference a specific command or concept without scrubbing through video.

- Videos playlist tab - modules with multiple videos have a playlist sidebar so you can jump between them without losing your place in the written content. I did this for people who prefer video learning and don't want to scroll through everything (there will be more videos coming in later months).

- iOS/Safari fullscreen - getting fullscreen to actually work consistently across iOS browsers took an embarrassing amount of time and several dedicated commits. Speaking of commits: I thought I was done at 100 commits when I built the MVP back in October, but we've since gone 15x that and more. I now have a full CI/CD pipeline and unit tests (thanks to Claude for writing the unit tests; it's made it a lot easier to debug and push fixes).

- Closed captions, 1080p, rendering and all sorts of fun - originally the videos had captions hard-coded into the stream, but once the course was live I had several pieces of feedback along the lines of "can you turn these off, they're distracting" and "I can understand you, I don't need captions!" So I opted to go down the VTT route and make separate subtitle tracks. In addition, when I originally shipped the course all videos were in 720p, but many users requested 1080p, so I did the decent thing and re-rendered them in 1080p for better consumption.
Accessibility Settings
For those who read my post from last year, you'll be aware I operated with one arm for a period of time, which made me far more aware of functions worth considering when delivering content, especially when it comes to making things accessible. I'm also dyslexic, which adds a layer of complexity to everything I do in my daily life for my $dayjob, but having functions to help with that is exactly what I wanted to build into my platform to make it as accessible as possible. Here's the section of the full platform walkthrough that steps through the accessibility settings (3:57-
Dark Mode, High Contrast & Colourblind Modes
One of the first things I built into the platform was proper theme support. You can toggle between light and dark modes, and if your OS is already set to one or the other, the platform will respect that automatically via prefers-color-scheme. But I didn't want to stop there. There's also a dedicated high contrast mode that targets WCAG AAA compliance; it picks up your OS-level prefers-contrast setting too, so if you've already told your system you need more contrast, the LMS will honour that out of the box. Borders get sharper, text gets bolder, and the distinction between interactive elements and background becomes much more pronounced.
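The layering logic is worth spelling out: an explicit user choice beats the OS preference, and "system" falls through to whatever the media queries report. In the browser the system values come from window.matchMedia; in this sketch they're plain inputs so the resolution logic stands on its own. The function and field names are illustrative assumptions.

```javascript
// Resolve the effective theme from the user's saved choice plus the OS
// preferences (prefers-color-scheme / prefers-contrast).
function resolveTheme(userChoice, system) {
  // An explicit high-contrast choice, or an OS contrast preference while
  // on "system", takes priority over light/dark.
  if (
    userChoice === "high-contrast" ||
    (userChoice === "system" && system.prefersContrast)
  ) {
    return "high-contrast";
  }
  if (userChoice === "light" || userChoice === "dark") return userChoice;
  return system.prefersDark ? "dark" : "light"; // "system" fallthrough
}
```

In the real UI the `system` object would be built from `window.matchMedia("(prefers-color-scheme: dark)").matches` and the equivalent `prefers-contrast` query.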
Beyond contrast, I added four colourblind modes: Protanopia and Deuteranopia for red-green colour blindness (which affects roughly 8% of men), Tritanopia for blue-yellow colour blindness, and a full Monochromacy mode for those with complete colour vision deficiency. The default colour palette throughout the platform was also designed with a colourblind-friendly set of blues, oranges, teals, and ambers so that even without a specific mode enabled, the UI avoids relying purely on colour to convey meaning.

Dyslexia-Friendly Fonts & Text Controls
This one is personal. Being dyslexic, I know how much of a difference the right font makes. The platform ships with three font options: the default Inter for general use, OpenDyslexic, which uses weighted letter bottoms to reduce the visual "flipping" that many dyslexic readers experience, and Lexie Readable, a high-readability sans-serif designed for extended reading. You can switch between them instantly from the accessibility settings, and the change applies across the entire platform: module content, quizzes, navigation, everything.

Alongside font selection, there are controls for text scaling (100% up to 200%), line spacing, letter spacing, word spacing, and content width. These map directly to WCAG 2.1 Success Criterion 1.4.12, which specifically addresses text spacing for readability. If you find that tightly packed text makes it harder to track lines or distinguish individual words, you can loosen everything up to a level that works for you. The content width control is useful too: some people find narrower columns easier to read, while others prefer the text to stretch wider so there's less vertical scrolling.
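One tidy way to wire controls like these up is to map the settings onto CSS custom properties and let the stylesheet consume them. The defaults in this sketch mirror SC 1.4.12's minimums (line height 1.5x font size, letter spacing 0.12em, word spacing 0.16em); the property names and the 70ch width default are my own assumptions, not the platform's.

```javascript
// Map text/reading settings onto CSS custom properties, which a stylesheet
// can consume via var(--line-height) etc.
function textSpacingVars({
  scalePct = 100,
  lineHeight = 1.5, // WCAG 2.1 SC 1.4.12 minimum
  letterEm = 0.12,
  wordEm = 0.16,
  widthCh = 70,
} = {}) {
  return {
    "--font-scale": `${scalePct / 100}`,
    "--line-height": `${lineHeight}`,
    "--letter-spacing": `${letterEm}em`,
    "--word-spacing": `${wordEm}em`,
    "--content-width": `${widthCh}ch`,
  };
}
```

Applying the result is a loop over `document.documentElement.style.setProperty(name, value)`, which keeps the styling logic in CSS where it belongs.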
Reading Support Tools
These are probably my favourite features on the platform and the ones I use most myself to help better consume content. There are four reading support tools and they're all designed to help you keep your place and focus on what you're actually reading.
Paragraph Highlight lights up the paragraph you're hovering over, giving you a clear visual indicator of where you are on the page. When you're working through a dense module on adversarial tradecraft, it's easy to lose your place; this solves that. You can toggle it with Alt+H or via the accessibility menu.

Focus Mode takes that a step further, as I wanted a clean view for reading. When enabled, everything on the page except the paragraph you're currently hovering over gets dimmed. It's like putting blinkers on: your attention is drawn to the one block of text that matters right now. Toggle it with Alt+Shift+F.

Reading Guide places a horizontal highlight line that follows your mouse vertically across the page. If you've ever used a physical ruler or strip of paper to track lines while reading a book, this is the digital equivalent. It's subtle but genuinely useful for long-form content.
There's a full set of keyboard shortcuts too. Press ? from anywhere and a help modal pops up showing everything available. In module views you've got N and P for next/previous module, H to go home to the dashboard, B to bookmark, and T to toggle the table of contents. Quizzes support arrow key navigation between options. The accessibility toggles themselves have dedicated shortcuts: Alt+H, Alt+Shift+F, and Alt+R, as mentioned above. For anyone who needs it, there's also an enhanced focus indicators option that makes the focus rings larger and more visible so you can always see exactly where keyboard focus is.
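A shortcut system like this is usually just a normaliser plus a lookup table: turn the key event into a string like "alt+shift+f" and map it to an action name. This sketch uses the bindings described above, but the dispatch mechanics and action names are illustrative, not the platform's code.

```javascript
// Bindings described in the text; action names are illustrative.
const bindings = {
  "n": "next-module",
  "p": "previous-module",
  "h": "go-home",
  "b": "bookmark",
  "t": "toggle-toc",
  "?": "show-help",
  "alt+h": "toggle-paragraph-highlight",
  "alt+shift+f": "toggle-focus-mode",
  "alt+r": "toggle-reading-guide",
};

// Normalise a KeyboardEvent-like object into a lookup string.
function normaliseKey(event) {
  const parts = [];
  if (event.altKey) parts.push("alt");
  // Plain Shift is already baked into event.key (e.g. "?"), so only
  // record it as a modifier when combined with Alt.
  if (event.altKey && event.shiftKey) parts.push("shift");
  parts.push(event.key.toLowerCase());
  return parts.join("+");
}

function dispatch(event) {
  return bindings[normaliseKey(event)] ?? null;
}
```

In the page you'd call `dispatch` from a `keydown` listener, skipping events whose target is a text input so typing never triggers navigation.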

Motion & Wellbeing
Not everyone wants animations. Some people find them distracting, others experience motion sickness from certain transitions. The reduced motion setting has three options:
- System (follows your OS prefers-reduced-motion setting)
- Reduce (always disables animations)
- Normal (always allows them).
When reduced motion is active, the platform skips staggered list animations, transition effects, and the confetti celebration that normally plays when you complete a module.
Speaking of which, there's a separate "disable celebrations" toggle if you want the animations elsewhere but don't want confetti going off every time you finish a quiz. And because staring at a screen for hours isn't great for anyone, I added break reminders. You can set them at 20-, 30-, 45-, or 60-minute intervals and the platform will pop up a gentle reminder to take a break, stretch, and look away from the screen for a bit. The messages rotate so it doesn't feel repetitive.
Video Accessibility
All video content on the platform supports captions and subtitles through WebVTT tracks, with automatic detection of embedded CEA-608/708 captions in the HLS stream. You can toggle captions on and off and select your preferred language. Alongside the video player, there's a transcript panel that parses the WebVTT file and displays it as clickable timestamped text; click any line and the video jumps to that point. This is useful both as an accessibility feature and as a way to quickly navigate longer videos to find the section you need.
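The transcript panel is essentially a WebVTT parser plus a seek call. Here's a deliberately minimal sketch of the parsing side, which ignores cue settings, styling, and multi-line edge cases that a full parser would handle; the names are mine, not the platform's.

```javascript
// Convert "hh:mm:ss.mmm" or "mm:ss.mmm" into seconds.
function parseTimestamp(ts) {
  const parts = ts.split(":").map(Number);
  return parts.reduce((total, part) => total * 60 + part, 0);
}

// Parse WebVTT text into { start, end, text } cues (simplified).
function parseVtt(vtt) {
  const cues = [];
  for (const block of vtt.split(/\n\n+/)) {
    const lines = block.trim().split("\n");
    const timingIdx = lines.findIndex((l) => l.includes("-->"));
    if (timingIdx === -1) continue; // header or NOTE block
    const [start, end] = lines[timingIdx]
      .split("-->")
      .map((s) => parseTimestamp(s.trim()));
    cues.push({ start, end, text: lines.slice(timingIdx + 1).join(" ") });
  }
  return cues;
}
```

With cues in hand, clicking a transcript line is just `video.currentTime = cue.start`, and auto-scroll is a matter of finding the cue whose range contains the current playback time.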
The video player itself is fully keyboard accessible: Space or K to play and pause, arrow keys for seeking and volume, M to mute, and F for fullscreen. Playback speed is adjustable from 0.5x to 2x, which is helpful if you need content delivered more slowly or want to skim through something you've already watched.
Settings Persistence
One thing I wanted to get right was that your accessibility settings shouldn't disappear when you switch devices or clear your browser. All 20+ settings are persisted server-side in your user profile, with a localStorage fallback for immediate responsiveness. Change your font to OpenDyslexic on your laptop and it'll be waiting for you when you log in on your tablet. The settings sync is debounced so it doesn't hammer the API on every toggle; it waits for you to stop making changes, then saves everything in one go.
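The interesting part of that layering is reconciliation: on load you have a server copy and possibly a local copy, and the freshest one should win. Here's a small sketch of that logic; the `{ values, updatedAt }` shape is an assumption for illustration.

```javascript
// Reconcile the server-side settings with the localStorage fallback:
// whichever copy was written most recently wins.
function mergeSettings(serverCopy, localCopy) {
  if (!serverCopy) return localCopy ?? null; // first run, or offline save
  if (!localCopy) return serverCopy;
  return localCopy.updatedAt > serverCopy.updatedAt ? localCopy : serverCopy;
}
```

The debounced save then pushes the merged copy back to the API once the user stops toggling, so local and server state converge without a request per click.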
The entire accessibility interface is also translated across 25+ languages, with over 80 accessibility-specific translation keys, so the settings panel itself is accessible to non-English speakers. The translations were generated through an i18n API with AI assistance, so they won't be 100% accurate, but they at least give a good baseline.
The Accessibility Settings Panel
All of these features live in a dedicated Accessibility tab in your profile, organised into three clear sections: Visual Settings, Text & Reading, and Navigation & Interaction. There's also a floating accessibility button on every page for quick access to the most common toggles (theme and colourblind mode), and a full modal version of the settings panel if you want to make changes without navigating away from what you're reading.

What surprised me after launch
There are a few things that stood out very quickly once real students started using the platform.
The first is that people notice the small things more than you expect. As someone building it, I naturally obsessed over the architecture, the security model, the infrastructure, the admin tooling, and all the hidden bits that make everything work. Students, quite reasonably, cared more about whether the platform felt good to use. Could they pick up exactly where they left off? Could they read for long periods without the interface fighting them? Could they turn captions off if they did not want them? Could they watch videos in better quality? Could they move through content without friction? Those details matter more than a clever backend ever will, so I built and added things as we went, and continue to.
The second is that feedback arrives fast and it is usually right. Within days of launch I had people asking for things like higher video quality, more control over subtitles, and small usability improvements that only become obvious once dozens of people with different devices, habits, accessibility needs, and learning preferences all start using the same system. Some of those changes were minor on paper, but they made a real difference to the overall experience.
The third is that launching is not the end of building. If anything, it is where the real work starts. Until people are actively using something at scale, a lot of your assumptions are still just assumptions. The month since launch has involved plenty of fixes, refinements, and iterations, but that is not a bad thing. It is the reality of building something properly and continuing to improve it in response to the people it is actually for.
One of the unexpected outcomes of launching the course and running a Discord server has been the number of new connections, friendships, and shared interests that have come out of it. The Discord community continues to grow and it has been great to see discussions forming organically across a wide range of topics. What has made me smile the most is how many people genuinely want to help and see the project succeed. There have been feature requests, bug reports, and people volunteering to help beta test new features before they are pushed out to everyone else. Thanks to that community involvement, the platform already has more functionality than it did at launch and continues to improve week by week.
The obvious metric people look at is student numbers, and I am genuinely grateful to have around 140 paying students in the first month. But the more meaningful part has been everything around that. People have been giving constructive feedback, sharing what they enjoy, pointing out issues, suggesting improvements, and helping shape what the platform becomes next. That matters to me far more than anything else.
Community has always been important to me in this industry. A huge amount of what I have learned over the years came from people sharing knowledge, writing things down, giving talks, running events, and creating spaces where others could learn. I wanted this course to contribute back into that same ecosystem rather than just take from it.
That is one of the reasons I have put some of the course profits back into sponsoring UK community events this year, including SteelCon, BSides Leeds, and Hack Glasgow. Conferences and community events like these are incredibly important. They are where people give their first talks, meet peers, learn something new, and often feel like they belong in the industry for the first time. If I can help support that in even a small way, I want to.
What I got wrong
It would be very easy to write this post as if everything went to plan. It did not.
I underestimated how much time would disappear into edge cases, browser quirks, API plumbing, deployment gotchas, and all the tiny bits of glue that sit between “it works on my machine” and “this is stable enough for paying students”. I knew writing a custom LMS would be a lot of work, but I still do not think I fully appreciated just how many moving parts I was signing up for.
I also got a few product decisions wrong on the first pass. Some things I shipped made sense to me as the builder but did not fit how students actually wanted to consume the content. Captions are a good example. I originally hard-burned them into the videos, thinking that would be a net positive, only to have people immediately ask for the ability to turn them off. Likewise with video quality. I launched in 720p, then quickly realised people wanted 1080p and that they were absolutely right to ask for it.
On the engineering side, I should have invested earlier in more aggressive integration testing around frontend behaviour, CORS, and deployment validation. A lot of pain would have been caught sooner if I had put those checks in place from day one rather than reactively after being bitten by them.
Still, I do not see that as failure. One of the most important things in this job is learning from failings and improving on them; it is, after all, just part of building something real. You do not get every call right first time. The important bit is listening, fixing, and improving.
Why students should care
From the outside, a custom LMS can sound like needless overengineering. For students, though, the point is actually very simple: the platform exists to make the course better to use.
Because it was built specifically for this course, I have control over the things that matter. That means I can improve the learning experience directly rather than waiting for a third-party vendor to maybe add a feature six months later. It means accessibility settings can be treated as first-class features rather than afterthoughts. It means I can respond quickly to feedback, adjust how content is delivered, improve progress tracking, refine the video experience, and build features around how technical people actually learn.
Most course platforms are built to serve the broadest possible market. This one was built for practitioners working through dense, technical content and wanting a platform that gets out of the way.
What comes next
The first month has been about getting the course out into the world, listening carefully, and refining the platform based on how people are actually using it. The next phase is about continuing to improve both the course and the surrounding experience without losing sight of why I built it this way in the first place.
That means iterating on usability, expanding accessibility features, improving the video and written learning experience, and adding more downloadable resources where they make sense. It also means continuing to harden the platform, smoothing out rough edges, and making sure the experience keeps evolving rather than stagnating after launch.
Beyond the platform itself, I want to keep investing in the community side of things as well. That includes supporting conferences where I can, getting more creative with swag, and hopefully turning some of the slightly weirder ideas I have for challenge coins and other bits into something tangible. If there are good events that need sponsorship, or things people would genuinely like to see built, I am always open to hearing about them.
Most of all, I want this to feel alive. Not a one-and-done launch, but something that keeps growing, improving, and giving back. I will also keep writing blog posts and doing the other things I have always done, the course launch is just one part of the wider work I enjoy doing.
The platform will continue to evolve as time goes on. I want to keep improving the user experience and how people interact with the content. I already have a running list of things I would like to add, including:
- more videos to add across modules
- continued UX and accessibility improvements
- more downloadable resource content
- further platform improvements
- more community and swag ideas on top of stickers and challenge coins
- future conference talks about lessons learned from building it (have submitted already to a UK con to hopefully talk about it later in the year).
Final thanks
A month in, I am honestly just grateful.
Grateful to everyone who took a chance on the course early. Grateful to the people who sent kind messages, reported bugs, suggested improvements, shared feedback, and helped shape the platform into something better than it would have been if I had built it in isolation. Grateful as well to the wider community that has supported my work for years, because a lot of this would not exist without that encouragement.
Rolling my own LMS has, at times, been a complete pain in the arse. It has been more work, more debugging, more late nights, and more learning than I probably needed to sign myself up for. But even with all of that, I am still proud I did it. Building something custom gave me the freedom to shape the platform around the course and the students rather than the other way around, and that has made the effort worth it.
So thank you, genuinely, to everyone who has supported MAE so far. It means a lot.
And if you have ideas, whether that is feature suggestions, accessibility improvements, swag ideas, community initiatives, sponsorship opportunities, or things you would love to see built next, send them my way. A lot of the best parts of this platform came from people speaking up, and I would like that to continue.
If you've gotten this far and read all of this, thank you. Go check out the course, share it, and help it grow to support the broader community: