Why Docker ignores your “system proxy” even when Clash is on
Most Clash-compatible GUIs expose an HTTP or mixed port (often 7890) and flip the operating system proxy so browsers and many CLI tools follow suit. Docker is different: the daemon is a separate long-running service. On Linux it is usually supervised by systemd; on macOS and Windows it runs inside a lightweight VM (Docker Desktop). That daemon process does not read your interactive shell’s export http_proxy=... unless you explicitly inject those variables into its environment or configure the engine through supported files.
Meanwhile, docker pull initiated from your terminal is a client talking to the daemon over a local socket. Registry traffic may still fail if the daemon cannot reach registry-1.docker.io or a mirror, or if BuildKit builders run in isolated contexts without proxy variables. The symptom cluster is predictable: slow handshakes, abrupt TLS failures, or “i/o timeout” mid-layer download—exactly the pain points users blame on “bad Wi‑Fi” when the real issue is which process lacked proxy settings.
This article assumes you operate a normal developer machine: Clash listens on localhost, or on the LAN with Allow LAN enabled when you need cross-namespace access. For subscription hygiene and split-routing concepts that complement proxy wiring, keep our subscription auto-update checklist in mind—your free subscription URL must remain fetchable over HTTPS before you chase Docker-specific layers.
Pick the right Clash listener: HTTP vs SOCKS and Allow LAN
HTTP-aware tools—including Docker’s proxy integration—expect an HTTP proxy URL such as http://127.0.0.1:7890 when your Clash mixed port speaks HTTP CONNECT on that endpoint. If you only expose SOCKS5, many Docker paths still work when you set ALL_PROXY=socks5://..., but mixed stacks are easier to reason about when you standardize on the mixed port shown in your client’s UI.
When traffic originates from a bridge network container, 127.0.0.1 inside the container points to that container, not your host. You either target a host-reachable address—classically host.docker.internal on Docker Desktop—or use the bridge gateway IP on Linux (commonly the first address on docker0). If your proxy only binds to loopback, LAN-origin containers cannot connect; enable Allow LAN (or bind explicitly) when your threat model permits it, then point HTTP_PROXY at http://192.168.x.x:7890 from the container’s perspective.
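As a concrete sketch, assume Clash’s mixed port is 7890 and the Linux bridge gateway is 172.17.0.1 (both values vary per machine; check your client UI and docker0). The triplet a bridge-network container would export then looks like this:

```shell
# Assumed values: Clash mixed port 7890, default docker0 gateway 172.17.0.1.
# Inside a bridge-network container, 127.0.0.1 points at the container itself,
# so the proxy host must be the bridge gateway (or host.docker.internal on Desktop).
export HTTP_PROXY=http://172.17.0.1:7890
export HTTPS_PROXY=http://172.17.0.1:7890
export NO_PROXY=localhost,127.0.0.1
echo "$HTTPS_PROXY"
```

Swap 172.17.0.1 for host.docker.internal on Docker Desktop; the scheme and port stay the same.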
Document the triplet you actually use: scheme (http vs socks5), host (loopback vs gateway), and port. Inconsistent copies of that triplet across systemd, Compose, and shell profiles are the root cause of “it worked in one terminal” bugs.
Daemon-level proxy: make docker pull and push reliable
To affect operations the daemon performs—image pulls to local storage, pushes, and some builder flows—you configure the engine itself. On systemd Linux, create a drop-in directory for docker.service and set Environment=HTTP_PROXY=..., HTTPS_PROXY=..., and NO_PROXY=..., then systemctl daemon-reload and restart Docker. The variables must be valid for the daemon’s network namespace, which is the host’s—not a container’s.
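A minimal drop-in sketch follows; the file path is the conventional location, and the addresses and internal registry name are assumptions to adapt to your own listener:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1,registry.internal.example.com"
```

After writing the file, run systemctl daemon-reload and systemctl restart docker, then confirm the variables landed with systemctl show --property=Environment docker.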
On Docker Desktop, the GUI exposes proxy settings that rewrite the internal LinuxKit/VM configuration; that is the sanctioned path when you do not want to hand-edit JSON inside the VM. After changes, verify with a deliberate pull of a small public image to confirm latency drops and TLS errors disappear.
Always pair proxies with a thoughtful NO_PROXY list: local registries, localhost, 127.0.0.1, the docker0 subnet, and corporate artifact hosts you must reach directly. Sending internal names through Clash can produce misleading “TLS error” screens when the real failure is a wrong exit path.
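One way to keep that list consistent across systemd, Compose, and shell profiles is to assemble it once in a sourced file; the hostnames below are placeholders:

```shell
# Placeholder entries; substitute your own registries and subnets.
NO_PROXY="localhost,127.0.0.1,host.docker.internal"
NO_PROXY="$NO_PROXY,registry.internal.example.com"   # internal hosts that must go direct
export NO_PROXY no_proxy="$NO_PROXY"                 # some tools only read the lowercase form
echo "$NO_PROXY"
```

Note that CIDR entries in NO_PROXY are honored by some runtimes and ignored by others, so test subnet exclusions per tool rather than assuming they work everywhere.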
~/.docker/config.json: client behavior and registry auth
The Docker CLI reads your user-level configuration for credentials, CLI plugins, and an optional proxies block. Some teams add a proxies section so CLI operations consistently attach proxy settings without touching systemd. Keep this file out of screenshots if it includes auth tokens; treat it like a secrets-adjacent artifact.
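A sketch of that proxies section (addresses assumed; merge it into your existing ~/.docker/config.json rather than replacing the file, since credentials live alongside it):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://127.0.0.1:7890",
      "httpsProxy": "http://127.0.0.1:7890",
      "noProxy": "localhost,127.0.0.1,registry.internal.example.com"
    }
  }
}
```

With this block in place, docker run and docker build inject the corresponding environment variables into new containers and builds; it does not configure the daemon itself, which is why the systemd or Desktop settings above remain necessary for pulls.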
Remember that config.json does not magically fix a daemon that still cannot route—think of it as aligning the client’s expectations with how you want requests formed. When debugging, compare behavior between docker pull (client + daemon interplay) and curl https://registry-1.docker.io/v2/ through the same proxy to isolate TLS or DNS issues.
If you rely on mirror registries in China or other regions, mirror endpoints still need to be reachable from the daemon with the same proxy policy; a mirror does not remove the need for consistent HTTP_PROXY when the upstream path is congested.
Docker Compose: service-level environment vs top-level inheritance
Docker Compose shines when each service declares its own environment: map. Application containers that fetch models, call external HTTPS APIs, or run package installers during entrypoint scripts need HTTP_PROXY, HTTPS_PROXY, and NO_PROXY spelled explicitly unless you use extension fields or YAML anchors to deduplicate. A backend that only talks to another service on the compose network might need no outbound proxy at all—avoid carpet-bombing every service with the same env block.
For developer ergonomics, define a common anchor at the bottom of your compose file for the proxy triplet and merge it into services that truly egress. Pair that with a documented host address: on Docker Desktop, host.docker.internal is the portable choice; on Linux, some installations require extra_hosts to define that hostname or you use the explicit gateway IP from docker network inspect bridge.
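A sketch of that anchor pattern (service names, image tags, and the proxy address are illustrative):

```yaml
# Proxy triplet defined once, merged only into services that actually egress.
x-proxy-env: &proxy-env
  HTTP_PROXY: http://host.docker.internal:7890
  HTTPS_PROXY: http://host.docker.internal:7890
  NO_PROXY: localhost,127.0.0.1,backend

services:
  fetcher:                 # hypothetical service that calls external HTTPS APIs
    image: alpine:3.20
    environment:
      <<: *proxy-env
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the Desktop-style name on Linux
  backend:                 # internal-only service: deliberately no proxy env
    image: alpine:3.20
```

The host-gateway magic value resolves to the host from inside the container on modern engines, which keeps one hostname portable across Desktop and Linux.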
When a service still fails, exec into the container and print the environment: missing lowercase versus uppercase keys matter for some libraries. Also confirm the process you care about is the one that reads those variables—Node, Python, and Go each have slightly different precedence rules.
Bridge versus host networking: where 127.0.0.1 points
Containers on the default bridge network get a private IP. Outbound connections to the wider internet traverse NAT through the host, but connections to “the host’s Clash port” are special: you target the host bridge gateway or host.docker.internal, not loopback. If you mistakenly set HTTP_PROXY=http://127.0.0.1:7890 inside the container, you are asking the container to speak to itself—proxy failures follow immediately.
Host network mode removes that isolation: the container shares the host’s interfaces, so 127.0.0.1:7890 can work when Clash binds to loopback. The trade-off is port collision risk and weaker multi-tenant separation. Reserve host networking for diagnostics or for software that insists on raw sockets, not for routine microservice development.
Advanced setups with user-defined networks still follow the same mental model: identify the host-facing gateway from inside the attached network namespace before you bake addresses into images—prefer DNS names and compose variables over hard-coded IPv4 literals that change when Docker updates bridge addressing.
buildx and BuildKit: build-time proxies and multi-stage caches
docker buildx drives BuildKit, which executes each Dockerfile RUN in a fresh environment unless you pass build arguments. Exporting HTTP_PROXY in your shell does not automatically propagate into those layers. Use docker buildx build --build-arg HTTP_PROXY=... --build-arg HTTPS_PROXY=... or configure BuildKit builders with equivalent defaults.
For reproducibility, some teams add ARG HTTP_PROXY lines near the top of Dockerfiles and forward them into ENV only for the stages that need outbound access, then unset before runtime to avoid leaking proxy assumptions into production images. That pattern keeps CI and laptop builds aligned when corporate proxies differ.
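A Dockerfile sketch of that stage-scoped pattern (stage names and packages are illustrative). HTTP_PROXY, HTTPS_PROXY, and NO_PROXY are among Docker’s predefined build args, so --build-arg HTTP_PROXY=... already reaches RUN steps without declarations; spelling out the ARG lines simply documents the dependency:

```dockerfile
# Build stage: proxy args available to RUN, never baked into the final image.
FROM alpine:3.20 AS build
ARG HTTP_PROXY
ARG HTTPS_PROXY
ARG NO_PROXY
RUN apk add --no-cache curl   # goes through the proxy when the args are set

# Final stage declares no proxy ARG or ENV, so the runtime image ships clean.
FROM alpine:3.20
COPY --from=build /usr/bin/curl /usr/bin/curl
```

Because the final stage copies artifacts instead of inheriting environment, no unset dance is needed and production images carry no proxy assumptions.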
When pulls happen as part of a build (FROM lines, package managers), the daemon still participates—ensure both daemon proxy and build-arg layers agree. A common failure mode is a successful base pull followed by apt-get timeouts inside RUN because only the outer pull inherited proxy settings.
If you enable inline cache or remote cache backends, verify those endpoints appear in NO_PROXY when they are private, otherwise BuildKit may attempt to reach them through Clash unnecessarily.
TLS handshake failures: proxies, MITM, and corporate roots
Clash itself is not a TLS man-in-the-middle for arbitrary HTTPS in standard setups, but corporate proxies and antivirus scanners sometimes are. When Docker reports certificate errors while pulling, capture whether the failure happens through the proxy path or on direct connects. You may need to add a custom CA to the Docker trust store or use insecure-registries only as a last resort for lab registries—never as a blanket workaround.
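On Linux engines, per-registry trust lives under /etc/docker/certs.d, keyed by registry hostname (and port, if non-standard). A sketch of the layout, with the registry name assumed:

```
/etc/docker/certs.d/
└── registry.internal.example.com/
    └── ca.crt        # corporate root CA, PEM format
```

This scopes the extra trust to one registry instead of loosening TLS globally, which is why it beats insecure-registries for anything outside a throwaway lab.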
When experimenting with TUN mode in Clash to capture stubborn desktop traffic, revisit our TUN troubleshooting guide for differences between system proxy and IP-layer capture. Docker daemons rarely care about TUN unless the host routing table steers their packets through the tunnel—usually HTTP_PROXY is clearer for registry traffic.
WSL2, Linux VMs, and the Windows Clash split
Many developers run Docker Desktop with the WSL2 backend. In that world, your proxy story intersects with two networks: the Windows host where Clash runs and the Linux distro where your code lives. Environment exports that work in PowerShell do not automatically appear inside WSL unless you set shell profiles or use /etc/environment equivalents.
For apt, git, and CLI tools inside WSL2—not Docker alone—our dedicated walkthrough on WSL2 apt and Git through Windows Clash covers mirrored networking, resolver changes, and ALL_PROXY patterns. Combine that with the Docker-specific daemon configuration here when both ecosystems coexist on one machine.
Headless Linux hosts: pair this guide with a stable Clash service
If Docker runs on a server where Clash is deployed headless, you want the same operational discipline we describe for Clash Meta on Linux headless: systemd units, journald logs, and explicit listeners. The Docker daemon proxy variables belong in the Docker unit; the Clash unit stands separately—avoid merging unrelated ExecStart lines into one service.
Developer stacks: npm, pnpm, and IDEs alongside Docker
Containers are only half of a modern dev environment. When your editor, package manager, and test runners also need split routing, align hostname lists with the patterns in Cursor and npm split routing so you do not send private registry traffic through an exit node by accident. The same NO_PROXY discipline applies: internal npm registries and Docker pulls to private Harbor instances should bypass Clash when policy demands direct LAN access.
Ordered troubleshooting checklist
1. Confirm Clash egress. On the host, curl a public HTTPS endpoint through the proxy URL you intend to reuse. If this step fails, Docker changes will not help.
2. Decide scope. Determine whether the failing operation is daemon-driven (pull/push), build-time (RUN), or runtime (application inside a running container). Apply proxy settings at that layer.
3. Fix addresses. Replace loopback mistakes in bridge containers with host.docker.internal or the docker gateway IP; re-test with a minimal compose service that runs wget through the proxy.
4. Tighten NO_PROXY. Add internal domains and the loopback trio; retest registry operations that should bypass Clash.
5. Inspect logs. Use docker system events and daemon logs for pull failures; for BuildKit, enable debug logging temporarily rather than guessing.
6. Roll back experiments. Remove redundant proxy layers—sometimes systemd plus config.json plus compose triple-stacks create conflicting URLs.
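Steps 1 through 3 can be exercised with a throwaway Compose service; the image tag, proxy address, and target URL below are assumptions:

```yaml
# Disposable check: can a bridge container reach the internet via the host proxy?
services:
  proxy-check:
    image: curlimages/curl:8.8.0        # entrypoint is curl, so command holds only args
    environment:
      HTTPS_PROXY: http://host.docker.internal:7890
      NO_PROXY: localhost,127.0.0.1
    extra_hosts:
      - "host.docker.internal:host-gateway"
    command: ["-sS", "-o", "/dev/null", "-w", "%{http_code}\n", "https://www.example.com/"]
```

Run it with docker compose run --rm proxy-check; a 200 on stdout suggests the scheme, host, and port triplet is correct from inside a bridge network, while a connection refusal points back at Allow LAN or the gateway address.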
Quick Q&A
Does Docker Desktop inherit macOS proxy settings? Not reliably for the engine; use the Docker Desktop proxy panel or daemon JSON patterns recommended for your version.
Should CI pipelines copy my laptop’s proxy? Only if CI egress requires it; many pipelines use allow-listed registries without Clash at all. Mirror images into your registry instead of depending on a personal proxy.
What about Podman or nerdctl? The same environment-variable mental model applies; service files differ but the bridge/host distinction does not.
Closing thoughts
Docker networking is explicit: bridges, gateways, daemons, and build stages each need the right HTTP_PROXY story. Once you treat Clash on the host as a well-known endpoint—port, LAN visibility, and NO_PROXY lists included—docker pull, Compose services, and buildx builds stop fighting you and return to being the boring infrastructure they should be. Compared with opaque “just turn VPN on” advice, this layered approach survives upgrades to Docker Desktop, WSL2 kernels, and new Compose specifications because you can always trace which process owned the failing socket.
When your rulesets grow beyond a single exit, keep policy groups maintainable with the YAML routing guide, and refresh nodes through the site’s download flow so your subscription URL stays in sync with the client you actually run.
Browse the full tech column for more routing tutorials, and revisit Meta core upgrade notes when you align desktop clients with the proxy core concepts referenced here.