Ory Oathkeeper as a Reverse Proxy in Front of Third-Party APIs: Secure Credential Injection with Time-Limited JWTs

READER BEWARE: THE FOLLOWING WAS WRITTEN ENTIRELY BY AI WITHOUT HUMAN EDITING.

Introduction

Modern organizations rely on dozens of third-party SaaS APIs—Jira for project tracking, Okta for identity management, Datadog for observability, Salesforce for CRM, and many more. Every one of those APIs ships with its own credentials: long-lived API tokens, OAuth client secrets, or service account keys. Distributing these secrets to every application, script, and developer that needs them creates a large and difficult-to-audit secret sprawl. A leaked token can be used to exfiltrate data silently for months before anyone notices.

Ory Oathkeeper is an open-source Identity and Access Proxy (IAP) that sits as a reverse proxy between your callers and backend APIs. It can:

  1. Authenticate inbound requests using short-lived, time-limited JWTs.
  2. Mutate forwarded requests—adding the real third-party credentials to the outgoing request headers—so callers never see the underlying API keys.
  3. Apply fine-grained authorization rules per route.

This post walks through the architecture, explains how to issue time-limited JWTs, shows concrete use cases with Jira, Okta, and Datadog, provides a local proof-of-concept setup, and finishes with a production-grade deployment on AWS EKS.


How Ory Oathkeeper Works

Oathkeeper treats every proxied route as an access rule. Each rule defines three pipeline stages:

Stage         | Purpose
--------------|--------
Authenticator | Verifies who is making the request (e.g., validates a JWT)
Authorizer    | Decides whether the caller may proceed (e.g., checks scopes or OPA policies)
Mutator       | Transforms the request before forwarding (e.g., strips the caller’s JWT and injects a real API key)

Caller ──JWT──► Oathkeeper ──API key injected──► Jira / Okta / Datadog
                    │
                    └── JWT validated, API key never exposed to caller

This architecture cleanly separates two concerns:

  • Callers prove their identity with a short-lived JWT they obtained from your internal token issuer.
  • Backend credentials live exclusively in Oathkeeper’s configuration (or a secrets store). Callers never receive them.

Issuing Time-Limited JWTs

Why Time-Limited JWTs?

A long-lived static API token leaked from a client can be used indefinitely. A JWT with a short expiry (exp claim set to, say, 15 minutes or 1 hour) drastically limits the blast radius of a token leak. Combined with a rotation strategy, you can build a credential lifecycle that matches your security policy.

Minimal JWT Structure

{
  "iss": "https://token.internal.example.com",
  "sub": "ci-pipeline-staging",
  "aud": "oathkeeper-proxy",
  "iat": 1740528000,
  "exp": 1740531600,
  "scope": "jira:read datadog:metrics:write"
}

Key claims:

  • iss — your internal token issuer URL (must match Oathkeeper’s jwks_urls configuration).
  • sub — identity of the caller (service account, pipeline, or user).
  • aud — the proxy audience; Oathkeeper will reject tokens intended for another audience.
  • exp — expiry, expressed as a Unix timestamp. Keep this short (≤1 hour for automated systems; ≤15 minutes for interactive use).
  • scope — optional custom claim that your authorizer can enforce.
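
To make these checks concrete, here is a small PyJWT sketch of the validation Oathkeeper's jwt authenticator performs (signature, aud, iss, exp). For brevity it uses an HS256 shared secret rather than the RS256 key pair used elsewhere in this post:

```python
import datetime
import jwt  # pip install PyJWT

# A symmetric demo secret stands in for the issuer's RSA key pair in this sketch.
SECRET = "demo-only-secret"

def validate(token: str) -> dict:
    # Mirrors the checks the jwt authenticator performs:
    # signature, audience (aud), issuer (iss), and expiry (exp).
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],
        audience="oathkeeper-proxy",
        issuer="https://token.internal.example.com",
    )

now = datetime.datetime.now(datetime.timezone.utc)

fresh = jwt.encode(
    {
        "iss": "https://token.internal.example.com",
        "sub": "ci-pipeline-staging",
        "aud": "oathkeeper-proxy",
        "iat": now,
        "exp": now + datetime.timedelta(minutes=15),
    },
    SECRET,
    algorithm="HS256",
)
claims = validate(fresh)  # accepted: all claims check out

stale = jwt.encode(
    {
        "iss": "https://token.internal.example.com",
        "sub": "ci-pipeline-staging",
        "aud": "oathkeeper-proxy",
        "iat": now - datetime.timedelta(hours=2),
        "exp": now - datetime.timedelta(hours=1),
    },
    SECRET,
    algorithm="HS256",
)
try:
    validate(stale)
    rejected = False
except jwt.ExpiredSignatureError:  # expired tokens are refused outright
    rejected = True
```

A token that fails any one of these checks never reaches the mutator stage.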

Simple Token Issuer with Python + PyJWT

import datetime
import jwt  # pip install PyJWT cryptography

# Load your RSA private key (store in AWS Secrets Manager or Vault in production)
with open("issuer_private_key.pem", "rb") as f:
    private_key = f.read()

def issue_token(subject: str, scopes: list[str], ttl_seconds: int = 900) -> str:
    now = datetime.datetime.now(datetime.UTC)  # datetime.UTC requires Python 3.11+
    payload = {
        "iss": "https://token.internal.example.com",
        "sub": subject,
        "aud": "oathkeeper-proxy",
        "iat": now,
        "exp": now + datetime.timedelta(seconds=ttl_seconds),
        "scope": " ".join(scopes),
    }
    return jwt.encode(payload, private_key, algorithm="RS256")

# Example: CI pipeline requests a 15-minute token for Jira read access
token = issue_token("ci-pipeline-staging", ["jira:read"], ttl_seconds=900)
print(token)

The corresponding public key is exposed at a JWKS endpoint (e.g., served by your IdP or a lightweight FastAPI service) that Oathkeeper polls to validate tokens.

JWKS Endpoint (FastAPI Example)

from fastapi import FastAPI
from cryptography.hazmat.primitives.serialization import load_pem_public_key
from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey
import base64

app = FastAPI()

with open("issuer_public_key.pem", "rb") as f:
    pub_key: RSAPublicKey = load_pem_public_key(f.read())

pub_numbers = pub_key.public_numbers()

def _b64url(n: int, length: int) -> str:
    return base64.urlsafe_b64encode(
        n.to_bytes(length, "big")
    ).rstrip(b"=").decode()

@app.get("/.well-known/jwks.json")
def jwks():
    return {
        "keys": [{
            "kty": "RSA",
            "use": "sig",
            "alg": "RS256",
            "kid": "internal-issuer-v1",
            "n": _b64url(pub_numbers.n, 256),
            "e": _b64url(pub_numbers.e, 3),
        }]
    }
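
A quick sanity check for the n/e encoding above is to round-trip it: rebuild the public key from the JWK fields and compare, which is effectively what any JWKS consumer (including Oathkeeper) does. A sketch using the cryptography package:

```python
import base64
from cryptography.hazmat.primitives.asymmetric import rsa

def _b64url(n: int, length: int) -> str:
    return base64.urlsafe_b64encode(n.to_bytes(length, "big")).rstrip(b"=").decode()

def _from_b64url(s: str) -> int:
    # Restore the stripped '=' padding before decoding.
    return int.from_bytes(base64.urlsafe_b64decode(s + "=" * (-len(s) % 4)), "big")

# Generate a throwaway 2048-bit key and publish it the way the endpoint above does ...
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
nums = key.public_key().public_numbers()
jwk = {"kty": "RSA", "n": _b64url(nums.n, 256), "e": _b64url(nums.e, 3)}

# ... then rebuild the public key from the JWK fields, as a JWKS consumer would.
rebuilt = rsa.RSAPublicNumbers(_from_b64url(jwk["e"]), _from_b64url(jwk["n"])).public_key()
round_trip_ok = rebuilt.public_numbers() == nums
```

If the round trip fails, token validation at the proxy will fail too, so this is worth a unit test in your issuer's codebase.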

Oathkeeper Configuration Deep Dive

Global Configuration (oathkeeper.yaml)

serve:
  proxy:
    port: 4455          # Port callers connect to
  api:
    port: 4456          # Admin/health endpoint

access_rules:
  matching_strategy: glob
  repositories:
    - file:///etc/oathkeeper/rules.yaml   # Local file; swap for S3 or HTTP in production

authenticators:
  jwt:
    enabled: true
    config:
      jwks_urls:
        - https://token.internal.example.com/.well-known/jwks.json
      scope_strategy: exact
      target_audience:
        - oathkeeper-proxy
      token_from:
        header: Authorization   # Bearer <token>

authorizers:
  allow:
    enabled: true

mutators:
  noop:
    enabled: true
  id_token:
    enabled: false   # Not used here; we use header injection instead
  header:
    enabled: true
    config: {}       # Per-rule config overrides this

errors:
  handlers:
    json:
      enabled: true
      config:
        when:
          - error:
              - unauthorized
              - forbidden

Use Cases and Access Rules

Use Case 1 — Read-Only Jira Access for CI Pipelines

A CI pipeline needs to query Jira for ticket status. The pipeline receives a short-lived JWT (scope: jira:read). Oathkeeper strips the JWT and injects a service-account Jira API token.

Rule:

- id: jira-read-only
  match:
    url: "http://oathkeeper:4455/jira/<**>"
    methods: [GET]
  authenticators:
    - handler: jwt
      config:
        required_scope: [jira:read]
  authorizers:
    - handler: allow
  mutators:
    - handler: header
      config:
        headers:
          Authorization: "Basic {{ env \"JIRA_BASIC_AUTH\" }}"
          X-Atlassian-Token: "no-check"
  upstream:
    url: "https://your-org.atlassian.net"
    strip_path: /jira
    preserve_host: false

What happens:

  1. Pipeline calls GET http://oathkeeper:4455/jira/rest/api/3/issue/PROJ-123 with Authorization: Bearer <jwt>.
  2. Oathkeeper validates the JWT signature, audience, and jira:read scope.
  3. Oathkeeper replaces the Authorization header with a Jira-specific Basic Auth credential stored in the environment variable JIRA_BASIC_AUTH.
  4. The real request goes to https://your-org.atlassian.net/rest/api/3/issue/PROJ-123.
  5. The pipeline receives the Jira response but never learns the Jira API credentials.
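
Step 3 is the core of the pattern. As a conceptual sketch in plain Python (this models the effect of the header mutator, not Oathkeeper's internals; the header values are made up):

```python
def mutate_headers(inbound: dict, injected: dict) -> dict:
    """Model of the header mutation for this rule: drop the caller's
    Authorization header (the JWT), then add the real upstream credential."""
    forwarded = {k: v for k, v in inbound.items() if k.lower() != "authorization"}
    forwarded.update(injected)
    return forwarded

caller_headers = {"Authorization": "Bearer eyJhbGciOi...", "Accept": "application/json"}
upstream_headers = mutate_headers(
    caller_headers,
    {"Authorization": "Basic c2VydmljZTpzZWNyZXQ=", "X-Atlassian-Token": "no-check"},
)
# upstream_headers carries the Jira credential; the caller's JWT is gone.
```

The caller's other headers (Accept, request body, etc.) pass through untouched; only the credential is swapped.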

Use Case 2 — Okta Admin API for User Provisioning

An internal user-provisioning service needs to create and update Okta users. The service holds a JWT with scope okta:users:write. Oathkeeper injects the Okta API token.

- id: okta-user-provisioning
  match:
    url: "http://oathkeeper:4455/okta/api/v1/users<**>"
    methods: [GET, POST, PUT]
  authenticators:
    - handler: jwt
      config:
        required_scope: [okta:users:write]
  authorizers:
    - handler: allow
  mutators:
    - handler: header
      config:
        headers:
          Authorization: "SSWS {{ env \"OKTA_API_TOKEN\" }}"
          Accept: "application/json"
          Content-Type: "application/json"
  upstream:
    url: "https://your-org.okta.com"
    strip_path: /okta
    preserve_host: false

Security note: The Okta API token in OKTA_API_TOKEN lives only in Oathkeeper’s environment. Services calling through the proxy never access it.


Use Case 3 — Datadog Metrics Submission from Edge Services

Edge services running in restricted network segments need to push custom metrics to Datadog. They receive a JWT with scope datadog:metrics:write.

- id: datadog-metrics-ingest
  match:
    url: "http://oathkeeper:4455/datadog/api/v2/series"
    methods: [POST]
  authenticators:
    - handler: jwt
      config:
        required_scope: [datadog:metrics:write]
  authorizers:
    - handler: allow
  mutators:
    - handler: header
      config:
        headers:
          DD-API-KEY: "{{ env \"DATADOG_API_KEY\" }}"
  upstream:
    url: "https://api.datadoghq.com"
    strip_path: /datadog
    preserve_host: false

Use Case 4 — Read-Only Salesforce Access for Analytics Pipelines

- id: salesforce-analytics-read
  match:
    url: "http://oathkeeper:4455/salesforce/services/data/<**>"
    methods: [GET]
  authenticators:
    - handler: jwt
      config:
        required_scope: [salesforce:read]
  authorizers:
    - handler: allow
  mutators:
    - handler: header
      config:
        headers:
          Authorization: "Bearer {{ env \"SALESFORCE_ACCESS_TOKEN\" }}"
  upstream:
    url: "https://your-instance.salesforce.com"
    strip_path: /salesforce
    preserve_host: false

Proof-of-Concept Setup (Docker Compose)

The following Docker Compose setup lets you run Oathkeeper locally in minutes. It uses a mock JWKS endpoint and mocks Jira with a simple echo server so you can see credential injection in action without a real Jira account.

Directory Layout

oathkeeper-poc/
├── docker-compose.yaml
├── oathkeeper/
│   ├── oathkeeper.yaml
│   └── rules.yaml
├── token-issuer/
│   ├── Dockerfile
│   ├── main.py            # FastAPI JWKS + token endpoint
│   ├── issuer_private_key.pem
│   └── issuer_public_key.pem
└── scripts/
    └── get_token.py

Generating RSA Keys

openssl genrsa -out token-issuer/issuer_private_key.pem 2048
openssl rsa -in token-issuer/issuer_private_key.pem \
    -pubout -out token-issuer/issuer_public_key.pem

docker-compose.yaml

version: "3.9"

services:
  token-issuer:
    build: ./token-issuer
    ports:
      - "8080:8080"
    environment:
      PRIVATE_KEY_PATH: /app/issuer_private_key.pem
      PUBLIC_KEY_PATH: /app/issuer_public_key.pem

  oathkeeper:
    image: oryd/oathkeeper:v0.40.6
    command: serve --config /etc/oathkeeper/oathkeeper.yaml
    ports:
      - "4455:4455"   # Proxy
      - "4456:4456"   # Admin
    volumes:
      - ./oathkeeper:/etc/oathkeeper
    environment:
      JIRA_BASIC_AUTH: "ci-service-account@example.com:ATATT3xFfGF0..."
      OKTA_API_TOKEN: "00AbCdEfGhIjKlMnOpQrStUvWxYz"
      DATADOG_API_KEY: "abc123def456..."
    depends_on:
      - token-issuer
      - echo-server

  echo-server:
    image: ealen/echo-server:latest
    ports:
      - "3000:80"

oathkeeper/oathkeeper.yaml

serve:
  proxy:
    port: 4455
  api:
    port: 4456

access_rules:
  matching_strategy: glob
  repositories:
    - file:///etc/oathkeeper/rules.yaml

authenticators:
  jwt:
    enabled: true
    config:
      jwks_urls:
        - http://token-issuer:8080/.well-known/jwks.json
      target_audience:
        - oathkeeper-proxy
      token_from:
        header: Authorization

authorizers:
  allow:
    enabled: true

mutators:
  header:
    enabled: true

errors:
  handlers:
    json:
      enabled: true

oathkeeper/rules.yaml

- id: echo-jira-mock
  match:
    url: "http://<**>/jira/<**>"   # matches localhost:4455 (from the host) and oathkeeper:4455 (in-network)
    methods: [GET, POST]
  authenticators:
    - handler: jwt
      config:
        required_scope: [jira:read]
  authorizers:
    - handler: allow
  mutators:
    - handler: header
      config:
        headers:
          Authorization: "Basic {{ env \"JIRA_BASIC_AUTH\" }}"
  upstream:
    url: "http://echo-server"
    strip_path: /jira
    preserve_host: false

Token Issuer (token-issuer/main.py)

import os, datetime, base64
import jwt
from fastapi import FastAPI, Query
from cryptography.hazmat.primitives.serialization import load_pem_private_key, load_pem_public_key

app = FastAPI()

_priv = load_pem_private_key(open(os.environ["PRIVATE_KEY_PATH"], "rb").read(), password=None)
_pub  = load_pem_public_key(open(os.environ["PUBLIC_KEY_PATH"], "rb").read())
_pub_numbers = _pub.public_numbers()

def _b64url(n: int, length: int) -> str:
    return base64.urlsafe_b64encode(n.to_bytes(length, "big")).rstrip(b"=").decode()

@app.get("/.well-known/jwks.json")
def jwks():
    return {"keys": [{"kty":"RSA","use":"sig","alg":"RS256","kid":"v1",
                      "n":_b64url(_pub_numbers.n, 256),
                      "e":_b64url(_pub_numbers.e, 3)}]}

@app.post("/token")
def token(sub: str = Query(...), scope: str = Query("jira:read"), ttl: int = Query(900)):
    now = datetime.datetime.now(datetime.UTC)
    payload = {
        "iss": "http://token-issuer:8080",
        "sub": sub, "aud": "oathkeeper-proxy",
        "iat": now, "exp": now + datetime.timedelta(seconds=ttl),
        "scope": scope,
    }
    return {"access_token": jwt.encode(payload, _priv, algorithm="RS256"), "expires_in": ttl}

Running the PoC

cd oathkeeper-poc
docker compose up -d

# 1. Get a time-limited JWT
TOKEN=$(curl -s -X POST \
  "http://localhost:8080/token?sub=ci-pipeline&scope=jira:read&ttl=900" \
  | jq -r .access_token)

# 2. Call Jira through the proxy (echo server mirrors request headers)
curl -s -H "Authorization: Bearer $TOKEN" \
  http://localhost:4455/jira/rest/api/3/issue/PROJ-1 | jq .

# 3. Verify the echo server received the Jira Basic Auth header (not the JWT)
# Look for "Authorization: Basic ..." in the echo response headers

You should see the Authorization: Basic ... header in the upstream echo response, confirming that Oathkeeper injected the Jira credential while stripping the caller’s JWT.
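
The directory layout above lists scripts/get_token.py without showing it. A plausible stdlib-only version, assuming the PoC issuer's /token endpoint and query parameters:

```python
# scripts/get_token.py -- a minimal sketch; endpoint and parameter names
# assume the PoC token issuer defined earlier in this post.
import json
import sys
import urllib.parse
import urllib.request

def token_url(base: str, sub: str, scope: str, ttl: int) -> str:
    # Build the issuer URL with properly encoded query parameters.
    query = urllib.parse.urlencode({"sub": sub, "scope": scope, "ttl": ttl})
    return f"{base}/token?{query}"

def get_token(base: str = "http://localhost:8080", sub: str = "ci-pipeline",
              scope: str = "jira:read", ttl: int = 900) -> str:
    request = urllib.request.Request(token_url(base, sub, scope, ttl), method="POST")
    with urllib.request.urlopen(request) as response:
        return json.load(response)["access_token"]

if __name__ == "__main__":
    print(get_token(*sys.argv[1:]))
```

With the stack running, `python scripts/get_token.py` prints a JWT you can export as `TOKEN` for the curl calls above.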


Production Deployment on AWS EKS

Architecture Overview

Internet / Internal VPC
        │
        ▼
AWS Network Load Balancer  (port 443, TLS termination via ACM)
        │
        ▼
Kubernetes Service (oathkeeper-proxy, port 4455)
        │
        ▼
Oathkeeper Pods (Deployment, HPA)
        │
        ├── Reads secrets from AWS Secrets Manager (via External Secrets Operator)
        └── Reads access rules from S3 bucket (polling every 60s)
        │
        ▼
Third-party APIs (Jira, Okta, Datadog, Salesforce …)

1. Store Credentials in AWS Secrets Manager

# One secret per third-party integration
aws secretsmanager create-secret \
  --name oathkeeper/jira-basic-auth \
  --secret-string "ci-service-account@example.com:ATATT3xFf..."

aws secretsmanager create-secret \
  --name oathkeeper/okta-api-token \
  --secret-string "00AbCdEfGhIjKlMnOpQrStUvWxYz"

aws secretsmanager create-secret \
  --name oathkeeper/datadog-api-key \
  --secret-string "abc123def456..."

2. External Secrets Operator

External Secrets Operator (ESO) syncs AWS Secrets Manager values into Kubernetes Secret objects.

# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: oathkeeper-api-credentials
  namespace: oathkeeper
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: oathkeeper-api-credentials
    creationPolicy: Owner
  data:
    - secretKey: JIRA_BASIC_AUTH
      remoteRef:
        key: oathkeeper/jira-basic-auth
    - secretKey: OKTA_API_TOKEN
      remoteRef:
        key: oathkeeper/okta-api-token
    - secretKey: DATADOG_API_KEY
      remoteRef:
        key: oathkeeper/datadog-api-key

3. Access Rules in S3

Store rules.yaml in a versioned S3 bucket. Oathkeeper polls it on startup and periodically:

# oathkeeper.yaml (production)
access_rules:
  matching_strategy: glob
  repositories:
    - s3://your-company-oathkeeper-rules/rules.yaml

Use S3 bucket policies and IAM roles (via IRSA—IAM Roles for Service Accounts) so only the Oathkeeper pods can read the rules bucket.

4. Helm Chart Deployment

The community maintains an Ory Helm chart:

helm repo add ory https://k8s.ory.sh/helm/charts
helm repo update

helm upgrade --install oathkeeper ory/oathkeeper \
  --namespace oathkeeper --create-namespace \
  --values values.yaml

values.yaml:

oathkeeper:
  config:
    serve:
      proxy:
        port: 4455
      api:
        port: 4456
    access_rules:
      matching_strategy: glob
      repositories:
        - s3://your-company-oathkeeper-rules/rules.yaml
    authenticators:
      jwt:
        enabled: true
        config:
          jwks_urls:
            - https://token.internal.example.com/.well-known/jwks.json
          target_audience:
            - oathkeeper-proxy
          token_from:
            header: Authorization
    authorizers:
      allow:
        enabled: true
    mutators:
      header:
        enabled: true
    errors:
      handlers:
        json:
          enabled: true

deployment:
  replicaCount: 3
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi
  envFrom:
    - secretRef:
        name: oathkeeper-api-credentials   # injected by ESO

service:
  proxy:
    type: ClusterIP
    port: 4455
  api:
    type: ClusterIP
    port: 4456

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

5. Network Load Balancer and TLS

# nlb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: oathkeeper-nlb
  namespace: oathkeeper
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/..."
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: oathkeeper
  ports:
    - name: proxy-tls
      port: 443
      targetPort: 4455

6. IRSA for S3 and Secrets Manager Access

# Create IAM policy
aws iam create-policy \
  --policy-name OathkeeperPolicy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::your-company-oathkeeper-rules/*"
      },
      {
        "Effect": "Allow",
        "Action": ["secretsmanager:GetSecretValue"],
        "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:oathkeeper/*"
      }
    ]
  }'

# Associate IAM role with Kubernetes service account (IRSA)
eksctl create iamserviceaccount \
  --name oathkeeper \
  --namespace oathkeeper \
  --cluster your-eks-cluster \
  --attach-policy-arn arn:aws:iam::123456789012:policy/OathkeeperPolicy \
  --approve

7. Observability

Oathkeeper exposes Prometheus metrics on the admin port. Add a ServiceMonitor for Prometheus Operator:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: oathkeeper
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: oathkeeper
  namespaceSelector:
    matchNames: [oathkeeper]
  endpoints:
    - port: api
      path: /metrics
      interval: 30s

Key metrics to alert on:

  • ory_oathkeeper_requests_total{outcome="error"} — authentication/authorization failures
  • ory_oathkeeper_request_duration_seconds_bucket — proxy latency
  • Pod restarts and HPA scaling events

Security Considerations

Token Lifetime Strategy

Caller Type           | Recommended TTL | Notes
----------------------|-----------------|------
CI/CD pipeline step   | 15 minutes      | Issued at step start; pipeline fails fast on expiry
Background worker     | 1 hour          | Refresh before expiry using a sidecar or init container
Interactive developer | 8 hours         | Suitable for a full workday; revocable via key rotation
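
The table can be encoded as a small lookup in your token issuer, falling back to the shortest lifetime for unknown callers (the caller-type names here are illustrative):

```python
# Illustrative TTL policy (seconds) implementing the table above.
TTL_POLICY = {
    "ci-pipeline-step": 15 * 60,
    "background-worker": 60 * 60,
    "interactive-developer": 8 * 60 * 60,
}

def ttl_for(caller_type: str) -> int:
    # Unknown caller types fall back to the shortest, safest TTL.
    return TTL_POLICY.get(caller_type, 15 * 60)
```

The issuer's /token endpoint can then clamp any caller-requested ttl to this policy.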

Credential Rotation

Because callers never hold the real API credentials, rotating a third-party API token requires only:

  1. Update the secret in AWS Secrets Manager.
  2. Wait for ESO to sync the new value (up to 1 hour with the default refresh interval, or trigger a manual sync).
  3. Oathkeeper picks up the new environment variable on the next pod restart or rolling update.

No changes to callers. No re-issuance of JWTs.

Scope Enforcement

Define a closed set of scopes that map 1:1 to backend API operations. Avoid broad scopes like admin. Examples:

  • jira:read → only GET on /rest/api/3/**
  • jira:write → only POST/PUT on /rest/api/3/issue/**
  • datadog:metrics:write → only POST on /api/v2/series
  • okta:users:read → only GET on /api/v1/users/**
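
Such a scope table can be enforced by a custom authorizer (for example, a small policy service called via Oathkeeper's remote authorizer). A toy Python sketch of the check itself (the table and patterns are illustrative; fnmatch's * crosses / boundaries, so it stands in for **):

```python
import fnmatch

# Hypothetical scope -> (method, path pattern) table mirroring the bullets above.
SCOPE_RULES = {
    "jira:read": [("GET", "/rest/api/3/*")],
    "jira:write": [("POST", "/rest/api/3/issue/*"), ("PUT", "/rest/api/3/issue/*")],
    "datadog:metrics:write": [("POST", "/api/v2/series")],
    "okta:users:read": [("GET", "/api/v1/users/*")],
}

def allowed(scopes: set, method: str, path: str) -> bool:
    # A request is allowed if any granted scope permits this method + path.
    return any(
        m == method and fnmatch.fnmatch(path, pattern)
        for scope in scopes
        for m, pattern in SCOPE_RULES.get(scope, [])
    )
```

Keeping the table closed (no wildcard admin scope) means every new operation requires an explicit entry, which doubles as documentation of what each scope can do.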

Audit Logging

Enable structured JSON logging in Oathkeeper and ship logs to your SIEM. Each request log includes:

  • subject (the JWT sub claim)
  • rule_id (which access rule matched)
  • outcome (allowed or denied)
  • upstream_url

This gives you a full audit trail of who accessed which third-party API, and when, without ever logging the underlying credentials.
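
A log line with these fields can be sketched as follows (the field names follow the bullets above; the JSON envelope is illustrative, not Oathkeeper's exact output format):

```python
import datetime
import json

def audit_record(subject: str, rule_id: str, outcome: str, upstream_url: str) -> str:
    # One structured JSON line per proxied request; no credentials ever appear here.
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject": subject,
        "rule_id": rule_id,
        "outcome": outcome,
        "upstream_url": upstream_url,
    })

line = audit_record("ci-pipeline-staging", "jira-read-only", "allowed",
                    "https://your-org.atlassian.net/rest/api/3/issue/PROJ-123")
```

Indexing on subject and rule_id in your SIEM makes "who touched Jira last Tuesday" a one-query question.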

Network Isolation

In EKS, use Kubernetes NetworkPolicy (or Cilium network policies) to:

  • Allow only approved namespaces to reach the Oathkeeper proxy service.
  • Block direct egress from application pods to third-party API endpoints (force traffic through Oathkeeper).

# Deny direct access to third-party APIs from app namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-direct-external-api
  namespace: my-app
spec:
  podSelector: {}
  policyTypes: [Egress]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: oathkeeper

Summary

Ory Oathkeeper provides a clean, auditable pattern for granting applications time-limited access to third-party APIs without ever distributing long-lived credentials. The key benefits are:

  • Credential isolation: API keys live in one place (Oathkeeper’s environment, sourced from AWS Secrets Manager). Rotation affects only that one place.
  • Short-lived access: Callers hold JWTs that expire quickly, dramatically reducing the impact of a stolen token.
  • Scope-based authorization: Each caller’s JWT encodes exactly which operations it may perform—Oathkeeper enforces this before proxying.
  • Full audit trail: Every proxied request is logged with caller identity and matched rule, without logging secrets.
  • Scalable on EKS: The Helm chart, HPA, and NLB setup supports production traffic while remaining operationally simple.

For organizations already running on AWS EKS, this pattern integrates naturally with the existing IRSA, External Secrets Operator, and Prometheus Operator toolchain, keeping the operational overhead low while significantly raising the security bar for third-party API access.