API gateways act as a kind of reverse proxy, mediating communication between clients and services. These clients may be internal, within the network or the perimeter network, or external, accessing the services remotely. Instead of connecting directly to services, clients go through the API gateway as an intermediary.
But how does it function? Clients don’t connect directly to services because the API gateway handles many functions that we don’t want to burden individual services with.
Previously, in a one-to-one client-server architecture, a server hosted services and multiple clients connected directly to have their requests served. But each service had to implement additional functionality beyond its business logic, such as protocol translation, authentication and security.
If someone tried to hack into the services, each service had to handle that independently. This is where the API gateway introduces a middle tier between clients and services, hosting cross-cutting concerns outside the business logic: protocol translation, security, request routing, and encryption and decryption. The real value of an API gateway is that when a client calls a business service, the request first goes through the gateway, which authenticates and authorizes it, translates network protocols, encrypts or decrypts data and forwards the request to the appropriate service.
With these responsibilities offloaded to it, the API gateway becomes a single point of contact for external integrations. Services no longer need to worry about implementing security protocols, handling external requests or managing complex integrations. That is the essence of API gateways.
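To make that flow concrete, here is a minimal Python sketch of the steps described above. The service registry, paths and helper logic are invented for illustration and not taken from any particular gateway product.

```python
# A minimal sketch of the gateway request flow: authenticate, authorize,
# then forward to the right backend. Everything here is hypothetical.

SERVICE_REGISTRY = {
    "/orders": "http://orders.internal:8080",     # assumed upstream services
    "/payments": "http://payments.internal:8080",
}

def handle_request(request: dict) -> dict:
    """Route a client request through the gateway's cross-cutting steps."""
    # 1. Authenticate: reject requests without a verifiable identity.
    if not request.get("auth_token"):
        return {"status": 401, "body": "missing credentials"}

    # 2. Authorize / route: check the caller is hitting a known service.
    upstream = SERVICE_REGISTRY.get(request["path"])
    if upstream is None:
        return {"status": 404, "body": "unknown service"}

    # 3. Translate protocols / decrypt as needed, then forward.
    #    (Forwarding is stubbed out; a real gateway proxies over HTTP or gRPC.)
    return {"status": 200, "body": f"forwarded to {upstream}{request['path']}"}

print(handle_request({"path": "/orders", "auth_token": "abc123"}))
```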
But because “APIs inherently expose application logic and sensitive information, including Personally Identifiable Information (PII), they are increasingly attractive targets for malicious actors,” says Steve Rodda, CEO of Ambassador, which provides a suite of tools designed to empower API developers and simplify the development lifecycle. “API security is a growing trend in the market, dominating the executive mindshare,” he tells SecurityBoulevard.
And in today’s sprawling, AI-driven threat landscape, API security built on a zero-trust strategy is no longer optional. “Failing to address the imperatives,” Rodda reminds us, “can lead to severe repercussions, from costly data breaches and financial losses to damage to reputation and regulatory penalties.”
Zero-trust. The name itself says it: There’s no trust. It works on the principle of “never trust anybody, always verify.” Before COVID, in traditional office culture, people worked within a single network, either on the office premises or connected through a VPN. If an employee requested services from within the internal network, they could connect directly and were considered verified, because the model was based on perimeter security. Inside the perimeter, no additional verification was required: the model trusted the person calling the APIs, or the client invoking the business logic, because they were part of the same perimeter.
But that’s no longer the case. “The flaws are now apparent with remote work and hybrid cloud environments,” says Karan Ratra, a seasoned technology leader and a senior software engineering leader currently at Walmart. “We don’t know who the actors are or where the call is coming from. We can’t simply trust them, even if we know they’re part of our VPN or authenticated network, because we don’t know if they’re using hybrid cloud integrations in between,” he reasons. “And calls can be routed through these environments, creating many vulnerabilities in the API world.”
“That’s where the zero-trust security model comes into play,” Ratra articulates. “It says that even if the call is coming from a known network or perimeter network, it still has to be verified. Every call, every request, must be verified and authenticated.” This means that even though everyone wants API security and generally implements basic authentication — adding SAML, JWT tokens, or using mutual TLS (two-way authentication between a client and a gateway) — this common practice is not enough.
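As an illustration of “every call, every request, must be verified,” the Python sketch below validates a JWT on each incoming request using the PyJWT library. The secret, audience and required claims are assumptions chosen for the example; production deployments typically use asymmetric keys (RS256/ES256) fetched from an identity provider.

```python
# A minimal sketch of per-request token verification at the gateway,
# using PyJWT (pip install pyjwt). Values below are placeholders.
import jwt  # PyJWT

SECRET = "replace-with-a-real-key"  # hypothetical shared secret

def verify_request(token: str) -> dict:
    """Validate the token on every call, never just at session start."""
    try:
        claims = jwt.decode(
            token,
            SECRET,
            algorithms=["HS256"],                 # pin the algorithm explicitly
            audience="api-gateway",               # assumed audience claim
            options={"require": ["exp", "aud"]},  # expiry and audience must be present
        )
    except jwt.PyJWTError as exc:
        raise PermissionError(f"request rejected: {exc}")
    return claims  # caller identity and scopes, handed on to authorization
```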
Akash Agarwal, who leads Engineering & DevSecOps at LambdaTest, a cross-browser cloud testing platform, shares insights on the evolving API security landscape. On the surface, implementing basic authentication might seem sufficient, but in reality, API security with zero-trust is hard work.
To wit, “This might mean using single sign-on, OAuth, or OpenID Connect. And instead of just TLS, mutual TLS should be used, where the client has access to their private and public keys for authentication, so they can manage and change these keys as they want,” he illustrates.
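As a rough illustration of the mutual TLS setup Agarwal describes, this Python ssl sketch configures a listener that refuses any client unable to present a certificate signed by a trusted CA. The certificate paths are placeholders; in practice the client manages and rotates its own key pair.

```python
# A minimal sketch of mutual TLS at the gateway edge using the standard
# library. File paths and the port are assumptions for illustration.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="gateway.crt", keyfile="gateway.key")  # gateway's identity
context.load_verify_locations(cafile="clients-ca.crt")  # CA that signed client certs
context.verify_mode = ssl.CERT_REQUIRED  # reject any client without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # handshake fails unless the client presents a cert
        print("verified client:", conn.getpeercert().get("subject"))
        conn.close()
```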
These measures include enforcing strict authentication and authorization, continuously monitoring API traffic and applying least privilege management to minimize potential attack surfaces. “A zero-trust approach implies least privilege management, where roles are assigned judiciously,” Agarwal highlights. “And network segregation is exceedingly important, separating networks for different authentication needs,” he expounds. “This granular control over access ensures that even if a malicious actor gains access to one part of the system, their lateral movement is severely restricted.”
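A least-privilege policy can be as simple as an explicit allow-list per role, as in the hypothetical Python sketch below. The roles and routes are invented; the point is only that access is denied by default, which is what restricts lateral movement.

```python
# A minimal sketch of least-privilege route authorization with deny-by-default.
# Roles and routes are illustrative, not from any real system.

ROUTE_POLICY = {
    "/billing/export": {"finance-admin"},          # narrowly scoped access
    "/orders/read": {"support", "finance-admin"},  # read-only access for support
}

def is_allowed(role: str, route: str) -> bool:
    """Allow only roles explicitly granted the route; deny everything else."""
    return role in ROUTE_POLICY.get(route, set())

assert is_allowed("support", "/orders/read")
assert not is_allowed("support", "/billing/export")  # lateral movement blocked
```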
“It’s about minimizing the blast radius of any potential breach,” Agarwal elaborates. “By isolating sensitive resources and data, network segmentation further limits the impact of a compromised account or vulnerability. Development, testing and production environments should also be strictly segregated, each with its own set of authentication and authorization policies.” However, this can introduce challenges when balancing security with developer productivity (DevProd). And so, API development platforms (like Blackbird) allow developers to focus on what they do best, writing code, while abstracting away the complexities of managing development environments.
Continuous authentication monitoring is also important. “This goes beyond initial login and involves actively monitoring user behavior and API traffic for suspicious activity,” Agarwal details. “Unusual access patterns, sudden spikes in requests, or access attempts from unfamiliar locations can all be red flags that warrant investigation. And real-time monitoring and alerting systems are very useful for detecting and responding to threats quickly.”
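As a simplified illustration of that kind of monitoring, the Python sketch below flags a client whose per-minute request rate jumps well above its own recent baseline. The window size and threshold are arbitrary examples; real deployments feed gateway logs into a dedicated monitoring and alerting pipeline.

```python
# A minimal sketch of spotting unusual spikes in per-client API traffic.
# The spike factor and history length are assumptions for illustration.
from collections import defaultdict, deque

SPIKE_FACTOR = 5  # alert if the current minute is 5x the recent average
history = defaultdict(lambda: deque(maxlen=10))  # per-client requests-per-minute history

def record_minute(client_id: str, requests_this_minute: int) -> None:
    window = history[client_id]
    if window and requests_this_minute > SPIKE_FACTOR * (sum(window) / len(window)):
        print(f"ALERT: unusual spike from {client_id}: {requests_this_minute} req/min")
    window.append(requests_this_minute)

record_minute("client-a", 40)
record_minute("client-a", 45)
record_minute("client-a", 900)  # triggers an alert for investigation
```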
A key aspect often overlooked with API gateways, Agarwal pointed out during our Zoom interview, is rate limiting to prevent DDoS attacks and API abuse. Through his DevSecOps engineering purview at LambdaTest and his focus on hardening security for AI-driven security automation, he explains that rate limiting lets you control the number of requests a user or IP address can make within a given timeframe. This helps prevent malicious actors from overwhelming your APIs with traffic and disrupting your services.
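A basic version of that control might look like the fixed-window Python sketch below. The 100-requests-per-minute quota is an arbitrary example, and production gateways usually back such counters with a shared store (for example Redis) so the limit holds across gateway replicas.

```python
# A minimal sketch of per-client rate limiting with a fixed one-minute window.
# The quota and in-memory counters are assumptions for illustration.
import time
from collections import defaultdict

LIMIT_PER_MINUTE = 100
_counters = defaultdict(lambda: [0, 0.0])  # client_id -> [count, window_start]

def allow_request(client_id: str) -> bool:
    count, window_start = _counters[client_id]
    now = time.monotonic()
    if now - window_start >= 60:     # new window: reset the counter
        _counters[client_id] = [1, now]
        return True
    if count < LIMIT_PER_MINUTE:     # still within quota
        _counters[client_id][0] += 1
        return True
    return False                     # over quota: respond with HTTP 429
```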
Input validation and threat protection are also necessary. API gateways should come with tools to inspect incoming requests for malicious code or patterns, such as SQL injection or cross-site scripting attacks. This helps to prevent attackers from exploiting vulnerabilities in your backend services. And, as Agarwal mentioned earlier, with mutual TLS, the complete communication should be encrypted, ensuring that data in transit is protected from eavesdropping.
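A crude form of that inspection can be sketched in a few lines of Python, as below. The patterns shown are illustrative only; real gateways lean on maintained WAF rule sets rather than a handful of hand-written checks.

```python
# A minimal sketch of inspecting request payloads for obviously malicious
# patterns. The regexes are illustrative and far from exhaustive.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),  # crude SQL injection signature
    re.compile(r"(?i)<script\b"),           # crude cross-site scripting signature
]

def looks_malicious(payload: str) -> bool:
    return any(p.search(payload) for p in SUSPICIOUS_PATTERNS)

print(looks_malicious("name=alice"))                            # False
print(looks_malicious("q=1 UNION SELECT password FROM users"))  # True
```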
This multi-layered approach to API security — incorporating zero-trust principles, multiple layers of defense and continuous monitoring — is an absolute necessity for protecting sensitive data and maintaining the integrity of your systems in today’s complex threat landscape.
Best practices for ensuring zero-trust for API Gateways include standard authentication, rate limiting and authorization. Beyond these, “one of the most important is how the API gateway hosts the zero-trust security policy while maintaining sensitive data protection,” Ratra stresses. “Since each call is routed through the API gateway and subject to zero-trust policies, it’s critical to avoid unnecessary exposure of personal information like credit card or Social Security numbers. How we handle sensitive data within the zero-trust policy is very important,” he emphasizes.
“One best practice involves data identification,” Ratra shares, drawing from his extensive engineering leadership experience at companies like Walmart and PayPal. “What personal information needs to be masked before applying the zero-trust policy? We need to identify PII data. Conversely, we must also know when data doesn’t need to be checked,” he cautions. “Because if we start checking everything, we risk interfering with personal data, which could lead to leaks and a loss of customer trust.”
Another involves rules configuration. Rodda (of Ambassador) outlines the specific rules to consider. He describes, “In some scenarios, we might have to publish everything to external clients, like in the case of a credit bureau. And for banks, not everything needs to be masked; we have to send the complete data. Using secure data transport protocols like TLS and SSL ensures that the data isn’t intercepted. But when sending updates to banks or other clients, we must determine who should receive the complete data and for whom the data should be masked.”
In addition to these, protecting sensitive information is another key best practice that anyone hosting an API gateway should ensure. This includes encrypting sensitive data and safeguarding it. Many regulations require this at the API gateway level, including GDPR in Europe, the CCPA (California Consumer Privacy Act) and HIPAA in the US for healthcare records. “This ensures that everything is masked before being transmitted to the internal or external network. And the ability to configure sensitive data protection is another best practice when hosting an API gateway with a zero-trust policy,” Rodda underlines.
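As a simplified illustration of that kind of masking policy, the Python sketch below redacts assumed PII fields unless the recipient is explicitly entitled to the full data. The field names and masking rules are invented for the example; deciding which fields count as PII and who may receive them unmasked is exactly the policy work Ratra and Rodda describe.

```python
# A minimal sketch of masking sensitive fields before a payload leaves the
# gateway. Field names and the "last four" rule are illustrative assumptions.
PII_FIELDS = {"ssn", "credit_card"}

def mask_payload(payload: dict, recipient_may_see_pii: bool) -> dict:
    if recipient_may_see_pii:  # e.g., a credit bureau or bank integration
        return payload
    masked = dict(payload)
    for field in PII_FIELDS & masked.keys():
        value = str(masked[field])
        masked[field] = "*" * max(len(value) - 4, 0) + value[-4:]  # keep last 4 characters
    return masked

print(mask_payload({"name": "Alice", "ssn": "123-45-6789"}, recipient_may_see_pii=False))
```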
When selecting an API gateway, Agarwal (of LambdaTest) suggests that the best approach depends on your specific needs and infrastructure. There are many options available, but key factors in this decision also relate to your deployment model. For instance, how you deploy your microservices in the cloud is one aspect you should evaluate before choosing the right gateway for any architecture.
Ratra, during our interview call, brought up the performance and scalability of the gateway on offer: specifically, how fast it can route requests and how many requests it can handle simultaneously, that is, its concurrency. If 1,000 requests arrive, can the gateway serve 1,000 requests per second, or can it handle 2,000?
Security and authentication are the most important factors, Rodda asserted in our email interview. When choosing any gateway on the market, you should always check: does it ship with the latest protocols, such as OAuth, JWT or key-based authentication, ready to configure? Does it support the common market standards?
Rate limiting is a best practice, but it is also a key differentiator to consider, all three leaders [Steve Rodda, Akash Agarwal, Karan Ratra] affirm. How does the gateway handle rate limiting? Does it efficiently manage the quotas allotted to services, raise alerts and surface metrics so you can troubleshoot what is causing a bottleneck in your microservices and why they are slow? Can it give you that rate-limiting and traffic-management data in real time? These are the major things anyone should look into before choosing an API gateway.