<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Infrastructure on Szymon Kocur</title><link>https://szymonkocur.com/categories/infrastructure/</link><description>Recent content in Infrastructure on Szymon Kocur</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 02 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://szymonkocur.com/categories/infrastructure/index.xml" rel="self" type="application/rss+xml"/><item><title>Why Your Kubernetes Pods Get 502s During Deployments (and How preStop Hooks Fix It)</title><link>https://szymonkocur.com/posts/kubernetes-prestop-hooks-zero-downtime/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0000</pubDate><guid>https://szymonkocur.com/posts/kubernetes-prestop-hooks-zero-downtime/</guid><description>If you&amp;rsquo;ve ever deployed to Kubernetes and seen users briefly hit a blank page or a 502 Bad Gateway error, you&amp;rsquo;re not alone. I ran into this exact issue on a production GKE cluster running a Next.js frontend with multiple replicas behind a Google Cloud Load Balancer.
The symptom: during a rolling update, for a few seconds, some users would see a blank screen with a &amp;ldquo;failed upstream&amp;rdquo; message. Then it would resolve on its own.</description></item><item><title>Three HPA Pitfalls I Hit on GKE with Flux and Linkerd</title><link>https://szymonkocur.com/posts/three-hpa-pitfalls-gke-flux-linkerd/</link><pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate><guid>https://szymonkocur.com/posts/three-hpa-pitfalls-gke-flux-linkerd/</guid><description>I added Horizontal Pod Autoscalers to a production GKE cluster running Flux CD for GitOps and Linkerd as a service mesh. The HPA configuration itself was straightforward — the problems came from how these three systems interact.
Here are three pitfalls I hit, each requiring a separate fix.
1. Flux keeps resetting the replica count
The first HPA I deployed looked like this:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 5
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
Within minutes, the HPA scaled the frontend from 5 to 7 replicas based on CPU load.</description></item><item><title>Flux ignoreDifferences: Stop GitOps From Fighting Your HPA</title><link>https://szymonkocur.com/posts/flux-ignore-differences-hpa-replica-count/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://szymonkocur.com/posts/flux-ignore-differences-hpa-replica-count/</guid><description>I added a Horizontal Pod Autoscaler to a Flux-managed Deployment and watched the replica count bounce between what the HPA wanted and what Git said. Every 5 minutes, Flux would reconcile, see that spec.replicas had drifted from the Git state, and reset it. The HPA would immediately scale it back. Repeat forever.
This is a fundamental tension in GitOps: Flux&amp;rsquo;s job is to make the cluster match Git. The HPA&amp;rsquo;s job is to set the replica count based on metrics.</description></item></channel></rss>