@DanLebrero.


Per user rate limiting with OpenID connect and Istio in Kubernetes

How to do rate limiting with Istio on Kubernetes on a per-user basis, using OpenID Connect to identify the user.

Image attribution: Photo of The Queues by Mark Walley.

This article originally appeared on Akvo’s blog

To make sure that each of our partners is able to use Akvo’s API, we need to ensure that nobody can abuse it: every partner should get a fair share of the servers’ resources.

In the case of HTTP APIs, this usually means limiting the rate at which partners can make requests. A system that performs rate limiting needs to:

  • Identify who is making the HTTP request.
  • Count how many requests each user has made.
  • Reject any user request once that user has depleted their allotment.
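Conceptually, the three steps above can be sketched as a small in-process token bucket keyed by user. This is purely illustrative — it is not how Istio implements rate limiting, and the rate and burst numbers are made up:

```python
# Illustrative per-user token bucket: identify, count, reject.
RATE = 2.0    # tokens added per second (hypothetical)
BURST = 20.0  # bucket capacity (hypothetical)

buckets: dict[str, tuple[float, float]] = {}  # user -> (tokens left, last seen)

def allow(user: str, now: float) -> bool:
    """Return True if `user` may make a request at time `now`."""
    tokens, last = buckets.get(user, (BURST, now))     # 1. identify the caller
    tokens = min(BURST, tokens + (now - last) * RATE)  # 2. account for elapsed time
    if tokens < 1.0:
        buckets[user] = (tokens, now)
        return False                                   # 3. allotment depleted: reject
    buckets[user] = (tokens - 1.0, now)
    return True
```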

There are plenty of open source products and libraries out there that you can choose from, but we decided to give Istio a try.

For such a task, Istio is a little bit heavy-handed. However, since Istio is a service mesh, it also provides routing, load balancing, blue/green deployment, canary releases, traffic forking, circuit breakers, timeouts, network fault injection and telemetry. What’s more, it also offers internal TLS encryption and role-based access control, which is very important for us given our commitment to the upcoming GDPR legislation.

Identifying the user

Akvo’s API already uses the OpenID Connect standard, and Istio comes with a handy JWT-auth filter, so we just need to configure the filter to point to our OpenID provider:

apiVersion: config.istio.io/v1alpha2
kind: EndUserAuthenticationPolicySpec
metadata:
  name: flow-api-auth-policy
  namespace: default
spec:
  jwts:
    - issuer: https://kc.akvotest.org/auth/realms/akvo
      jwks_uri: https://kc.akvotest.org/auth/realms/akvo/protocol/openid-connect/certs
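For reference, a JWT is just three base64url-encoded parts (header, payload, signature). The filter verifies the signature against the keys published at the jwks_uri above and then reads claims such as iss and sub from the payload. A minimal sketch of the payload-decoding half — signature verification deliberately omitted, so never use this alone for authentication:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload of a JWT without verifying it.
    Istio's filter additionally checks the signature against jwks_uri."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```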

And then we need to tell Istio to apply the authentication spec to our backend service:

apiVersion: config.istio.io/v1alpha2
kind: EndUserAuthenticationPolicySpecBinding
metadata:
  name: flow-api-auth-policy-binding
  namespace: default
spec:
  policies:
    - name: flow-api-auth-policy
      namespace: default
  services:
    - name: flow-api
      namespace: default

With this, if a JWT access token is present in the request, Istio will validate it and add the principal to the request; but if there is no token, the request will still go through.

Enforcing a user

Given that any access to the API must be done with an access token, we can add a policy rule to enforce it. To configure a policy we will need:

A handler, which in this particular case is a Denier adapter that will return a 401 (status code 16 is the gRPC UNAUTHENTICATED code, which maps to HTTP 401):

apiVersion: "config.istio.io/v1alpha2"
kind: denier
metadata:
  name: flow-api-handler
  namespace: default
spec:
  status:
    code: 16
    message: You are not authorized to access the service

An instance, which in this case is a Check Nothing template, as the handler requires no data:

apiVersion: "config.istio.io/v1alpha2"
kind: checknothing
metadata:
  name: flow-api-denyrequest
  namespace: default
spec:

And a rule that invokes the denier for any request to the backend that carries no authenticated principal:

apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: flow-api-deny
  namespace: default
spec:
  match: destination.labels["run"] == "flow-api" && (request.auth.principal|"unauthorized") == "unauthorized"
  actions:
  - handler: flow-api-handler.denier.default
    instances: [flow-api-denyrequest.checknothing.default]

See the Istio documentation if you are not familiar with the handler, instance or rule concepts.
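The `|` in the match expression is Mixer’s fallback operator: it evaluates to the first operand that is defined, so an absent principal falls back to the literal "unauthorized". In Python terms, the match roughly behaves like this (the attribute names are the ones used in the rule; the dictionary-based evaluation is just an illustration):

```python
def should_deny(attrs: dict) -> bool:
    """Rough Python equivalent of the rule's match expression:
    deny when the request targets flow-api and carries no principal."""
    # (request.auth.principal | "unauthorized") == "unauthorized"
    principal = attrs.get("request.auth.principal", "unauthorized")
    return attrs.get('destination.labels["run"]') == "flow-api" and principal == "unauthorized"
```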

Counting usage

First, we need to define what we want to count:

apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    destination: destination.labels["run"] | destination.service | "unknown"
    user: request.auth.principal|"unauthorized"

We are using two dimensions, the user and the destination service, so that we can set different limits for different backend services.
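In other words, every request is counted under a (destination, user) key, and each `|` chain falls back to the next attribute when the previous one is undefined. A rough sketch of how that key is derived (dictionary-based, for illustration only):

```python
def quota_key(attrs: dict) -> tuple[str, str]:
    """Derive the (destination, user) dimensions of the requestcount quota,
    mirroring the fallback chains in the quota instance above."""
    destination = (attrs.get('destination.labels["run"]')
                   or attrs.get("destination.service")
                   or "unknown")
    user = attrs.get("request.auth.principal") or "unauthorized"
    return destination, user
```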

To do the actual counting:

apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: flow-api-quota
  namespace: default
spec:
  rules:
    - quotas:
        - quota: requestcount.quota.istio-system
          charge: 1

Istio rate limiting gives you the flexibility to “charge” more for requests that could be more expensive to execute, but in our case, we’ve decided to treat all the requests the same.

And last, we need to wire the counting with the backend service:

apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: flow-api-quota-binding
  namespace: default
spec:
  quotaSpecs:
    - name: flow-api-quota
      namespace: default
  services:
    - name: flow-api
      namespace: default

Enforcing usage quotas

Now that we know who the user is and how to count their requests, we need to define what reasonable usage is. We do this through a Memory Quota adapter:

apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 60
    validDuration: 10s
    overrides:
    - dimensions:
        destination: flow-api
      maxAmount: 20
      validDuration: 10s

So we allow each user up to 60 requests every 10 seconds (6 requests per second), except for requests to the Flow API, where each user is allowed up to 20 requests every 10 seconds (2 requests per second).
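Behaviourally, the memquota adapter works like a counter per (destination, user) pair that resets every validDuration, with per-dimension overrides. A hand-rolled sketch — not Istio’s actual implementation — which also shows where a charge greater than 1 would plug in:

```python
from collections import defaultdict

DEFAULT_MAX = 60               # maxAmount per validDuration
OVERRIDES = {"flow-api": 20}   # lower limit for the Flow API destination
WINDOW = 10                    # validDuration, in seconds

used: dict[tuple, int] = defaultdict(int)

def try_charge(destination: str, user: str, now: float, amount: int = 1) -> bool:
    """Charge `amount` against the quota; False means rate limited."""
    window = int(now // WINDOW)  # counters implicitly reset each window
    used[(destination, user, window)] += amount
    return used[(destination, user, window)] <= OVERRIDES.get(destination, DEFAULT_MAX)
```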

Note that in production you will want to use a Redis Quota adapter instead of the Memory Quota one, as the Memory Quota is ephemeral and local to each Mixer instance.

Finally, we create a policy rule to wire up the quota with the counters:

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: handler.memquota
    instances:
    - requestcount.quota

Testing

Now, we can check that everything is working as expected and that no user is able to abuse the system. For the testing, we changed the quota to one request every three seconds. Here is the result:

You can find a version of the test script here and all the code above here.

Are we done?

Istio allows us to ensure that all of our partners get a fair share of the resources, with a little bit of configuration and without having to modify any of our existing code, which is a big plus.

But rate limiting is just one part of making Akvo’s platforms more stable. Istio also comes with a lot more goodies to add to that stability, and to make it more secure, which we will certainly investigate in the near future.


Did you enjoy it? Share it!

Tagged in : Architecture Kubernetes