It began, as an engineer’s attempt to fix a nagging problem often does, with irritation.
Each night, automated test pipelines ran across an expanding surface area of the Aembit Workload IAM Platform, validating that core components behave as expected across environments.
By morning, the results existed, but they were scattered across interfaces and notifications that took patience to reconstruct into a coherent picture. The TestOps platform Qase.io stored the data, and Slack delivered partial summaries from individual repositories and pipelines, but neither answered the question engineers faced at the start of each day: Is everything actually working as it should?
Sebastian Ostrowski, Aembit’s lead test automation engineer, decided to build a dashboard to bring those results into one place.
Almost immediately, the work drifted into familiar territory: The application would run inside Kubernetes and depend on app-to-service connections (interactions that have become routine as non-human workloads take on more responsibility).
It would need access to Qase.io and Slack every night. But rather than introducing environment variables and long-lived tokens, Sebastian chose to use the Aembit Workload IAM Platform itself to handle access for the dashboard.
At the time, this decision did not feel especially consequential. It was simply the cleanest option available. The dashboard would live in Kubernetes, it would be deployed through Argo CD, and it would need to authenticate itself repeatedly to external services. Using Aembit meant those credentials could be injected at runtime, defined centrally by policy, and enforced through the platform rather than embedded in application code or configuration files.
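A minimal sketch of what that difference looks like from inside the application, using an illustrative Qase.io endpoint and header; the assumption here is that Aembit attaches the credential to the outbound request in transit, so no token appears in code:

```python
import requests

# Common pattern (shown only for contrast): the application reads a
# long-lived API token from an environment variable or secret file and
# attaches it to every request itself, e.g.
#   headers = {"Token": qase_api_token}

# With policy-based credential injection, the application sends a plain
# request; the platform is assumed to add the credential in transit.
resp = requests.get("https://api.qase.io/v1/project", timeout=30)
resp.raise_for_status()
print(resp.json())
```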
The implications of that choice became clearer as the work progressed.
Sebastian built the dashboard as a Python Flask application with a Vue.js front end, backed by MongoDB. The service pulled test automation results from Qase.io on a regular cadence, stored them locally, and rendered them in a format that made nightly runs easier to interpret. It also posted summarized results into Slack, providing the team with a single, consistent signal each morning.
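A compact sketch of that flow follows, with hypothetical names, a placeholder project code and channel, and an illustrative response shape; as above, the assumption is that credentials are injected in transit rather than handled in code:

```python
import requests
from pymongo import MongoClient

# Hypothetical names; the real dashboard's endpoints and schema may differ.
QASE_RESULTS_URL = "https://api.qase.io/v1/result/DEMO"  # "DEMO" is a placeholder project code
SLACK_POST_URL = "https://slack.com/api/chat.postMessage"
SLACK_CHANNEL = "#test-automation"                        # placeholder channel

def pull_and_report() -> None:
    # Pull the latest nightly results; no API key is handled in the application.
    payload = requests.get(QASE_RESULTS_URL, timeout=30).json()
    runs = payload.get("result", {}).get("entities", [])  # illustrative response shape

    # Store the raw results locally so the Flask/Vue front end can render them.
    db = MongoClient("mongodb://mongo:27017")["dashboard"]
    db.nightly_runs.insert_one({"results": runs})

    # Post a short summary to Slack; again, no token appears in the code.
    passed = sum(1 for r in runs if r.get("status") == "passed")
    text = f"Nightly automation: {passed}/{len(runs)} results passed"
    requests.post(SLACK_POST_URL, json={"channel": SLACK_CHANNEL, "text": text}, timeout=30)

if __name__ == "__main__":
    pull_and_report()
```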
Throughout that process, Sebastian never handled a Qase.io API key or a Slack token. He did not copy credentials between environments or keep temporary secrets on his machine, avoiding practices that remain common in many engineering workflows. Developers often end up handling credentials and writing authorization logic themselves, a habit tolerated because it is familiar and expedient despite the risk and operational inefficiency it carries.
In this case, it simply never entered the picture.
“As a developer, I didn’t have to worry about secrets,” Sebastian said. “I just built the dashboard.”
The dashboard Sebastian built addressed one kind of failure: understanding test outcomes. Each morning, he could see what ran overnight, which components passed, and where tests failed. That clarity made regressions easier to spot and reduced the time spent reconstructing what happened across multiple systems.
A different class of failure sat beneath the test results themselves. When a run failed because a service could not be reached or an API call was rejected, Sebastian could inspect the Aembit tenant to determine whether access had been granted as expected. He did not need to log in to machines or trace environment variables across repositories. The access layer was visible, inspectable, and separate from application logic.
That separation mattered in practice. Sebastian did not begin the project as a Kubernetes specialist, and part of the work involved learning how an application fits into a real deployment workflow where responsibilities are divided and access is treated as shared infrastructure rather than developer-owned configuration.
Using Aembit internally turned the dashboard into a practical test of that separation.
The distinction grows more important as software takes on more autonomous behavior. Non-human workloads already outnumber human users by orders of magnitude in most environments, and agentic AI systems amplify that imbalance and introduce greater liability.
Access needs to be scoped, short-lived, and enforced through identity-based policy rather than static secrets.
Sebastian experienced that reality firsthand. Even though the dashboard was not an AI system, it behaved like one in the ways that mattered operationally. It ran on a schedule, acted without human intervention, and required trusted access to external services.
Now, each day starts with the team checking the dashboard and the clean Slack summary it produces. Typically everything is green. But if something does break, the signal is immediate and clear.
“It sure made mornings easier,” Sebastian said.
Get started in minutes, with no sales calls required. Our free-forever tier is just a click away.