Azure Container Registry geo-replication enables a single registry and control plane across the global footprint of Azure. You can opt into the regions where you’d like your registry to provide a network-close, local experience.
However, there are some important aspects to consider.
ACR geo-replication is multi-master: you can push to any region, and ACR will replicate the manifests and layers to all the other regions you’ve opted into. However, replication takes time, proportional to the size of the image layers. Without an understanding of that latency, the delay can be confusing and look like a failure. We also recently realized the errors we report can add to the confusion.
Let’s assume you have a registry that replicates to:
- West US
- East US
- Canada Central
- West Europe
You have developers in both West US and West Europe, with production deployments in all regions. ACR uses Azure Traffic Manager to find the registry closest to the location of the pull. The registry URL remains the same, which means it’s important to understand the semantics.
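For reference, a registry replicated like this can be set up with the Azure CLI. This is a minimal sketch, not the exact commands behind the example: the resource group name and the RUN_ACR_SETUP guard are my own, and geo-replication requires the Premium SKU.

```shell
REGISTRY=stevelasdemos
RESOURCE_GROUP=acr-demo-rg   # hypothetical resource group

if command -v az >/dev/null 2>&1 && [ "${RUN_ACR_SETUP:-0}" = "1" ]; then
  # Geo-replication requires the Premium SKU; home region is West US
  az acr create -g "$RESOURCE_GROUP" -n "$REGISTRY" --sku Premium -l westus
  # Add one replica per additional region
  for region in eastus canadacentral westeurope; do
    az acr replication create -g "$RESOURCE_GROUP" -r "$REGISTRY" -l "$region"
  done
else
  echo "set RUN_ACR_SETUP=1 with the az CLI installed and logged in to run this"
fi
```

Each `az acr replication create` adds one replica; pushes and pulls in that region are then served locally.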
From West US, I push a new tag:
stevelasdemos.azurecr.io/samples/helloworld:v1. I can immediately pull that tag from West US, as that’s the local registry I pushed to.
However, if I immediately attempt to pull the same tag from West Europe, the pull will fail, as the tag doesn’t yet exist there. The pull will fail even if I issue the docker pull from West US… Hmmm, why? Docker is a client/server model, and the pull is executed by the server.
Say I work in West US, but I’ve connected my Docker client to a Docker host running in West Europe. When I issue a docker pull, my client is just proxying the request to the host, which is running in West Europe. When the pull executes, Traffic Manager routes it to the closest registry, which is in West Europe.
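One way to see the client/server split is Docker’s remote-host support. In this sketch the West Europe VM name is hypothetical, the pull is gated behind a RUN_REMOTE flag, and `DOCKER_HOST` over ssh assumes Docker 18.09 or later:

```shell
# Point my local Docker client at a daemon running in West Europe (hypothetical host)
export DOCKER_HOST=ssh://deploy@westeurope-vm.example.com

if command -v docker >/dev/null 2>&1 && [ "${RUN_REMOTE:-0}" = "1" ]; then
  # The pull executes on the West Europe host, so Traffic Manager routes it
  # to the West Europe replica -- not the replica near my laptop.
  docker pull stevelasdemos.azurecr.io/samples/helloworld:v1
else
  echo "set RUN_REMOTE=1 with docker installed to try this against a real host"
fi
```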
If I remote/ssh into a box in West Europe, or execute a helm deploy from West US to West Europe, the same flow happens: the pull is initiated from West Europe. That’s actually what we want: a local, fast, reliable, cheap pull. But how do I know when I can pull from West Europe?
ACR supports webhook notifications that fire when images are pushed or deleted. Webhooks are regionalized, so you can create a webhook for each region.
In a production deployment, you’ll likely want to listen to these webhooks and deploy as the images arrive.
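As a sketch, creating one push webhook per replicated region with the Azure CLI could look like this. The listener endpoint is hypothetical, and the commands are gated behind a RUN_ACR_SETUP guard:

```shell
REGISTRY=stevelasdemos
ENDPOINT=https://deploy.example.com/acr-events   # hypothetical listener

if command -v az >/dev/null 2>&1 && [ "${RUN_ACR_SETUP:-0}" = "1" ]; then
  # One webhook per region; each fires when its local replica receives the image
  for region in westus eastus canadacentral westeurope; do
    az acr webhook create -r "$REGISTRY" -n "push${region}" \
      --actions push --uri "$ENDPOINT" --location "$region"
  done
else
  echo "set RUN_ACR_SETUP=1 with the az CLI installed and logged in to run this"
fi
```

When the West Europe webhook fires, you know the image is pullable from West Europe.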
Stable Tags vs. Unique Tags
This is a more interesting conversation, and deserves a dedicated post on tagging schemes. Basically, base image owners should re-use tags as they service core versions. When a patch to the 2.0 image is made available, the 2.0 tag should point at the new version.
However, a production app should not use stable tags. Every new image to be deployed should use a unique tag. If you have 10 nodes deployed and one of those nodes fails, the cluster or PaaS service will re-pull the tag it knows should be deployed.
If you re-use tags in production deployments, the new node will pull the most recent version of that tag, while the rest of the nodes continue running an older version. If you always use unique tags for deployments, the new node will pull the same, consistent tag that’s running on the other nodes.
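A minimal sketch of one unique-tagging scheme: combine the base version with the build date and the source-commit ID. The helper name and format here are my own, not a prescribed convention:

```shell
# unique_tag BASE COMMIT -> BASE-YYYYMMDD-COMMIT (date varies per build)
unique_tag() {
  base=$1
  commit=$2
  printf '%s-%s-%s\n' "$base" "$(date +%Y%m%d)" "$commit"
}

TAG=$(unique_tag 2.0 a1b2c3d)
echo "stevelasdemos.azurecr.io/samples/helloworld:${TAG}"
```

Any scheme works, as long as no two deployed builds ever share a tag.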
ACR geo-replication will replicate tag updates. But if you’re not aware of the flow, you can get “magical” results.
Using the example above,
stevelasdemos.azurecr.io/samples/helloworld:v1 has been pushed to and replicated across all regions, and is running in all regions.
If I push an update to :v1 from West US and issue a docker run/helm deploy to all regions, my local West US deployment will get the new version of :v1. However, the other regions will not pull the new version, as they already have an image with that tag running.
If you push :v2 from West US and immediately attempt to deploy :v2 in the other regions, you’ll get an unknown manifest error until the tag has replicated.
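Until there’s a queued-deployment experience, one workaround is to retry the region-local check until the tag is visible. Here’s a small, generic retry helper; the az command in the comment is just one possible existence check, and the names are illustrative:

```shell
# wait_for_tag RETRIES CHECK-COMMAND...
# Example check: wait_for_tag 30 az acr repository show \
#   -n stevelasdemos --image samples/helloworld:v2
wait_for_tag() {
  retries=$1; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0      # tag is visible; safe to deploy in this region
    fi
    i=$((i + 1))
    sleep 1         # replication time is size-dependent; tune the interval
  done
  return 1          # tag never appeared within the window
}
```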
Geo-replication and Webhook Notifications
When we initially designed the geo-replication feature, we realized this gap and added the regionalized webhooks. We realize this could be easier still; ideally, a deployment could be queued until the replicas have arrived. We have some more work to do here, and would encourage feedback on what you’d like to see and what your scenarios are.
A customer recently raised a ticket with us, as they were receiving “blob unknown” errors. After digging into the issue, we realized we had a problem with how we cache manifest requests and the order in which we sync and cache the blobs and manifests.
A while back, we added manifest caching. Customers were using OSS tools that continually polled the registry, asking for updates to a given tag. They were pinging the registry every second, from hundreds of machines. Each request went into the ACR storage infrastructure to recalculate the manifest, only to return the same result. Caching those manifest requests sped them up and improved overall ACR performance, across all customers. While a docker push invalidates the one-hour cache TTL, we realized a tag update replicated across a registry wasn’t invalidating the cache. We missed this because we consider it a best practice to never re-use a tag, but we also know to never say never. So, we’re fixing cache invalidation on replication.
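For context, the polling those tools do amounts to repeatedly asking the registry for a tag’s manifest digest. A hedged sketch against the registry v2 API, with auth elided and execution gated behind a RUN_POLL flag:

```shell
REGISTRY=stevelasdemos.azurecr.io
IMAGE=samples/helloworld
TAG=v1

if command -v curl >/dev/null 2>&1 && [ "${RUN_POLL:-0}" = "1" ]; then
  # A HEAD request returns the Docker-Content-Digest header, identifying the
  # manifest without downloading it; pollers compare this to the last digest seen.
  curl -sI \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    "https://${REGISTRY}/v2/${IMAGE}/manifests/${TAG}" \
    | grep -i docker-content-digest
else
  echo "set RUN_POLL=1 to try this (requires auth against a real registry)"
fi
```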
We will also avoid reporting “blob unknown” errors when it’s really a “manifest unknown” error. In the customer-reported problem, we hadn’t yet replicated all the content when they did a pull. Because the cache wasn’t invalidated, we found the updated manifest, but the blobs hadn’t yet replicated. We will make the replication more idempotent, fix the cache TTL, and more accurately report “manifest unknown” as opposed to “blob unknown”.
Preview -> GA
These sorts of quirks are why we have a preview phase for a service. We can’t possibly foresee all the potential issues without really long private bake times, which can miss the market. Customers get early access to bits that hopefully help them understand what’s coming, so they can focus elsewhere while we improve the service.
I just want to end with a thank you to those who help us improve our services, on your behalf. We value every bit of feedback.
Steve and the rest of the ACR team.