
The Split Token Approach


You might have read before about the Phantom Token Approach, which is a privacy-preserving token usage pattern for securing APIs and microservices that combines the security of opaque tokens with the convenience of JWTs.

The Phantom Token approach moves the burden of token introspection from the API microservice to the API gateway. There are still setups, though, where even with this approach the network traffic between the API Gateway and the Token Service can be substantial, for example when the API Gateway is spread across many instances worldwide while the Token Service is not. There are also APIs where latency is an important factor. In such situations, the additional request from the API Gateway to the Token Service might be a problem.

This is where the Split Token approach comes into play - a modern API security pattern designed to balance performance, scalability, and confidentiality in distributed systems. It offers an alternative to token introspection that reduces latency and infrastructure load in geographically diverse API ecosystems.

What is the Split Token Approach, and How Does it Work?

The Split Token approach is based on the same principles as the Phantom Token approach: the client still gets an opaque token, and the API gets a JWT. In this approach, however, the API Gateway does not need to exchange the opaque token for a JWT. What is more, the JWT is not simply cached in its entirety in the API Gateway, which further improves security.

When the Token Service issues a token for the client, it splits the JWT into two parts:

  1. The signature of the JWT
  2. The head and body of the JWT

Then, the Token Service sends the signature part of the JWT back to the client, to be used as the opaque token. At the same time, it hashes the signature part and sends the hash together with the second part of the JWT (head and body) to the API Gateway. The gateway then caches the token data using the hashed signature as the cache key. The entry is kept in the cache for as long as the token is valid.
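
To make the splitting step concrete, here is a minimal sketch in Python, assuming a compact-serialized JWT and SHA-256 as the hashing algorithm; the function and variable names are illustrative only and not taken from any particular product:

```python
import hashlib

def split_token(jwt: str) -> tuple[str, str, str]:
    """Split a compact-serialized JWT into the parts used by the Split Token pattern."""
    head, body, signature = jwt.split(".")
    # The signature alone goes back to the client as the opaque token.
    opaque_token = signature
    # The SHA-256 hash of the signature becomes the cache key at the API Gateway.
    cache_key = hashlib.sha256(signature.encode("ascii")).hexdigest()
    # The head and body are stored by the gateway under that key.
    head_and_body = f"{head}.{body}"
    return opaque_token, cache_key, head_and_body
```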

When the client sends a request, the API Gateway takes the signature part sent by the client, hashes it, and looks it up in its cache. It can then glue the token back together (head, body and signature) and forward it to the API service handling the request. Thus, the API gets a whole JWT, ready to be deserialized and used as needed.

  1. The Token Service sends the client the signature part of the token.
  2. At the same time, the Token Service sends the API Gateway a hashed signature and the head and body parts of the token.
  3. The API Gateway caches the token parts.
  4. The client uses the signature as an opaque access token when sending requests to the API.
  5. The API Gateway hashes the signature and looks up the token in the cache.
  6. The head, body and signature of the JWT are glued together and forwarded to the API service.
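
On the gateway side, steps 5 and 6 could look roughly like the sketch below, with an in-memory dictionary standing in for whatever cache your gateway actually uses:

```python
import hashlib

# Illustrative cache: SHA-256 hash of the signature -> "head.body"
token_cache: dict[str, str] = {}

def reassemble_jwt(opaque_token: str) -> str | None:
    """Rebuild the full JWT from the signature presented by the client."""
    cache_key = hashlib.sha256(opaque_token.encode("ascii")).hexdigest()
    head_and_body = token_cache.get(cache_key)
    if head_and_body is None:
        # Unknown, expired or not-yet-cached token: reject the request
        # (or fall back to introspection, as discussed under Considerations).
        return None
    # Glue head, body and signature back together into a complete JWT.
    return f"{head_and_body}.{opaque_token}"
```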

Benefits of the Split Token Approach

Strengthens Token Security

The Split Token approach further improves the security of your tokens. Neither the client nor the API Gateway's cache holds the full information required to assemble a signed JWT usable with the API. Even if someone manages to break into the API Gateway's cache database, the information stored there will not be useful without the original signature part, which is only available to the client. If whole JWTs were cached by the gateway, such a data breach would pose a far greater danger to your users.

Optimized Token Handling

Furthermore, this approach eliminates the need to ask the Token Service for a JWT in exchange for the opaque token, so the API Gateway avoids the additional overhead of calling a remote service. This can be especially beneficial in setups where the API Gateway operates on numerous instances spread across the world while the Token Service is deployed on just a few.

Improves Multi-Region Access Security

This token fragmentation strategy is especially useful in high-availability, multi-region deployments, where low latency and decentralized access control are essential. The Split Token approach aligns well with zero-trust architectures, reducing the attack surface and maintaining token confidentiality without relying on continuous introspection. Less network traffic between the API Gateway and the Token Service also means that fewer resources are needed to operate the Token Service.

OAuth 2.0 Compliant

The Split Token Approach is compliant with the OAuth 2.0 standard. Neither the client nor the APIs have to implement any proprietary solution for this pattern. This makes the pattern vendor-neutral and applicable to any OAuth 2.0 ecosystem.

Considerations

As with many architectural approaches, some considerations should be taken into account when applying the Split Token approach.

Note on Hash Collisions

The pattern described in this article uses a hashing function and then uses the hashed value as the key to the cache. This may raise concerns about possible hash collisions. Remember, though, that each JWT contains a random ID (the jti claim), which means that no two access tokens have the same payload. This fact, together with using a strong hashing algorithm like SHA-256, makes the probability of a hash collision close to zero.

Safeguarding from Cache Poisoning

Since the Split Token pattern will usually be used together with CDNs serving as the API Gateway, the Gateway and its cache will very often sit outside your infrastructure, e.g. provided by a third party. This means you should treat the contents of this cache with caution, as you will not be in charge of safeguarding it from poisoning. To maintain a high level of security, it is good practice to whitelist the issuer of your JWTs (the value of the iss claim) and the algorithm used to sign the JWT (the value of the alg claim found in the JWT's head). Thanks to this, even if someone manages to swap the contents of the JWT value kept in cache, you will still use the proper data to verify the token's integrity.
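
As a minimal sketch of such a check, run on the reassembled JWT before its signature is verified, the snippet below decodes the head and body and compares the iss and alg claims against whitelists; the whitelisted values are placeholders, not recommendations:

```python
import base64
import json

ALLOWED_ISSUERS = {"https://login.example.com/oauth"}  # placeholder issuer
ALLOWED_ALGORITHMS = {"RS256"}                         # placeholder algorithm

def _decode_part(part: str) -> dict:
    """Decode one base64url-encoded JWT part into a dictionary."""
    padded = part + "=" * (-len(part) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

def is_token_acceptable(jwt: str) -> bool:
    """Reject cached token data whose issuer (iss) or algorithm (alg) is not whitelisted."""
    head, body, _signature = jwt.split(".")
    return (
        _decode_part(head).get("alg") in ALLOWED_ALGORITHMS
        and _decode_part(body).get("iss") in ALLOWED_ISSUERS
    )
```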

Another concern might be the data that is kept together with the token in the cache. If your access tokens contain any sensitive data or Personally Identifiable Information, you might want to consider using encrypted tokens so that the data in the cache remains safe even if the cache itself is breached.

Working with Caches

The cache used by the API Gateway might need to be invalidated if an access token is revoked before its expiration time. If it is not invalidated, the API Gateway may still reconstruct a JWT and forward it to the API, which will believe the token is still valid.

Another thing that should be considered is cache population. Especially in a global setup, populating the cache may take some time, and the client might not be able to use a newly issued token straight away. If that is a concern, you should consider mechanisms that fall back to the classic Phantom Token approach, as sketched below.
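
One way to express that fallback is sketched below; introspect_token is a hypothetical stand-in for a call to your Token Service's introspection endpoint, which this pattern does not prescribe:

```python
import hashlib

def introspect_token(opaque_token: str) -> str | None:
    """Hypothetical Phantom Token exchange: ask the Token Service for the JWT
    that corresponds to the opaque token (details depend on your deployment)."""
    raise NotImplementedError("call your Token Service's introspection endpoint here")

def resolve_jwt(opaque_token: str, token_cache: dict[str, str]) -> str | None:
    """Prefer the Split Token cache; fall back to introspection on a cache miss."""
    cache_key = hashlib.sha256(opaque_token.encode("ascii")).hexdigest()
    head_and_body = token_cache.get(cache_key)
    if head_and_body is not None:
        return f"{head_and_body}.{opaque_token}"
    # Cache not yet populated in this region: fall back to the classic
    # Phantom Token flow and exchange the opaque token for a JWT remotely.
    return introspect_token(opaque_token)
```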

Key Takeaways: Split Token Approach in Modern API Security

  • Ensures token data confidentiality outside the API infrastructure.
  • Removes reliance on token introspection.
  • Ideal for decentralized, high-throughput environments that require low latency.
  • Works within existing OAuth 2.0 standards, preserving interoperability and vendor neutrality.

Further Reading

  • Cloudflare
  • Apigee
  • AWS API Gateway


Michal Trojanowski

Product Marketing Engineer at Curity
