Building a Proxy for AWS Private API Gateways
TL;DR
- AWS Private API Gateways are only reachable from within the VPC, making local testing difficult
- A kubectl-inspired proxy solution bridges this gap using a local Go server and Lambda function
- The tool enables testing complete request flows that span both public and private endpoints
Working with AWS Lambda and API Gateway in a typical setup means dealing with two separate environments: public endpoints for external access and private endpoints restricted to VPC-internal traffic. While this separation makes sense from a security perspective, it creates a practical problem when testing complete request flows that span both worlds.
The problem: isolated networks
My web service ran in two separate Lambda functions fronted by their respective API Gateways. The public one was secured with an authorizer and accessible from anywhere. The private one remained locked inside the VPC, only reachable from resources within that same network. For most operations, this worked fine. But when I needed to test specific request flows in Bruno that required calling both gateways, I hit a wall. My local machine simply couldn't reach the private API Gateway.
Inspiration from kubectl proxy
The solution came from remembering how Kubernetes handles a similar challenge. The kubectl proxy command allows authenticated users to access cluster-internal services as if they were running on localhost. It's elegant in its simplicity: one command, and suddenly you can reach dashboards and services that would otherwise be completely isolated within the cluster.
If Kubernetes could solve this with a proxy, why couldn't AWS?
Architecture overview
The solution consists of two components working together to bridge the network boundary:

Local Proxy Server
Written in Go, this server listens on http://localhost:8001 and acts as the entry point. It uses the AWS SDK to invoke the proxy Lambda, translating incoming HTTP requests into Lambda invocation payloads. When the proxy Lambda responds, it converts the result back into an HTTP response for the client.
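To make that translation step concrete, here is a minimal sketch of how the local proxy might package an incoming request as an invocation payload. The struct fields and the helper name are assumptions for illustration; the real payload shape used by awsctl-proxy may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// proxyPayload is an assumed shape for the event the local proxy sends
// to the proxy Lambda; the actual field names may differ.
type proxyPayload struct {
	Method  string            `json:"method"`
	URL     string            `json:"url"`
	Headers map[string]string `json:"headers"`
	Body    string            `json:"body"`
}

// buildPayload serializes a captured HTTP request into the JSON blob
// that would then be passed to the Lambda Invoke call via the AWS SDK.
func buildPayload(method, target string, headers map[string]string, body string) ([]byte, error) {
	return json.Marshal(proxyPayload{
		Method:  method,
		URL:     target,
		Headers: headers,
		Body:    body,
	})
}

func main() {
	b, err := buildPayload("GET", "https://internal.example.com/prod/health", nil, "")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

Keeping the payload a plain JSON document means the Lambda side can deserialize it without sharing code with the proxy binary.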
The local proxy is started with awsctl proxy, following the kubectl pattern. Configuration happens through command-line flags, including the Lambda function name, AWS region, profile, port, and verbosity settings.
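As a usage sketch, starting the proxy could look like the following. The flag names here are assumptions based on the settings listed above, not the tool's documented interface:

```shell
# Start the local proxy on port 8001 with a named AWS profile.
# Flag names are illustrative; see the project README for the real ones.
awsctl proxy \
  --function my-proxy-lambda \
  --region eu-west-1 \
  --profile dev \
  --port 8001 \
  --verbose
```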
Requests follow a structured URL pattern: http://localhost:8001/api_url/<url-encoded-internal-api-url>/proxy/<path>. The internal API URL is URL-encoded and passed as a path parameter, with the target path appended after /proxy/.
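A small Go sketch shows how a client might assemble such a URL; buildProxyURL is a hypothetical helper for illustration, not part of the tool:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildProxyURL assembles the proxy's URL pattern:
// http://localhost:8001/api_url/<url-encoded-internal-api-url>/proxy/<path>
// url.PathEscape encodes the slashes in the internal API URL so it fits
// into a single path segment.
func buildProxyURL(internalAPIURL, path string) string {
	return fmt.Sprintf("http://localhost:8001/api_url/%s/proxy/%s",
		url.PathEscape(internalAPIURL), path)
}

func main() {
	fmt.Println(buildProxyURL("https://internal.example.com/prod", "orders/42"))
	// → http://localhost:8001/api_url/https:%2F%2Finternal.example.com%2Fprod/proxy/orders/42
}
```

Encoding the target URL into the path keeps the request self-describing: the proxy can serve many internal APIs without any per-target configuration.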
AWS Proxy Lambda
This Lambda function receives the invocation payload from the local proxy, reconstructs it as an HTTP request, and forwards it to the specified target URL with all headers, query parameters, and body intact. It then returns the target's response as its invocation result. The function is a central component, deployed once per VPC that needs proxy access.
Real-world usage
The tool is now a regular part of my testing workflow. I can execute complete request flows in Bruno that span both public and private API Gateways without changing my setup or resorting to manual workarounds. The private endpoints remain secure and isolated in the VPC, and when I need to reach them for testing, the proxy provides controlled access through my authenticated AWS session.
The project is available on GitHub: awsctl-proxy