NGINX/PHP-FPM graceful shutdown and 502 errors

We have a PHP application running on Kubernetes, in pods with two dedicated containers: NGINX and PHP-FPM.

The problem is that during downscaling clients get 502 errors: when a pod is being stopped, its containers cannot correctly close existing connections.

So, in this post, we will take a closer look at the pods’ termination process in general, and NGINX and PHP-FPM containers in particular.

Testing will be performed on AWS Elastic Kubernetes Service with the Yandex.Tank utility.

An Ingress resource will create an AWS Application Load Balancer via the AWS ALB Ingress Controller.

Docker is used as the container runtime on the Kubernetes WorkerNodes.

Pod Lifecycle — Termination of Pods

So, let’s take an overview of the pods’ stopping and termination process.

Basically, a pod is a set of processes running on a Kubernetes WorkerNode, which are stopped by standard IPC (Inter-Process Communication) signals.

To give the pod the ability to finish all its operations, the container runtime at first tries to stop it softly (graceful shutdown) by sending a SIGTERM signal to PID 1 in each container of this pod (see docker stop). Also, the cluster starts counting down a grace period before force-killing this pod with a SIGKILL signal.

The SIGTERM can be overridden by using the STOPSIGNAL instruction in the image used to spin up a container.

Thus, the whole flow of the pod's deletion is as follows (actually, the part below is kind of a copy of the official documentation):

  1. a user issues a kubectl delete pod or kubectl scale deployment command which triggers the flow, and the cluster starts the countdown of the grace period with the default value of 30 seconds
  2. the API server of the cluster updates the pod's status: from the Running state it becomes Terminating (see Container states). The kubelet on the WorkerNode where this pod is running receives this status update and starts the pod's termination process:
  3. if a container in the pod has a preStop hook, the kubelet will run it. If the hook is still running after the default 30 seconds of the grace period, another 2 seconds will be added. The grace period can be set with the terminationGracePeriodSeconds parameter
  4. when the preStop hook is finished, the kubelet sends a notification to the Docker runtime to stop the containers related to the pod. The Docker daemon sends the SIGTERM signal to the process with PID 1 in each container. The containers will get the signal in random order.
  5. simultaneously with the beginning of the graceful shutdown, the Kubernetes Control Plane (its kube-controller-manager) removes the pod from the endpoints (see Kubernetes – Endpoints), and the corresponding Service stops sending traffic to this pod
  6. after the grace period countdown is finished, the kubelet starts the force shutdown: Docker sends the SIGKILL signal to all remaining processes in all containers of the pod; this signal cannot be ignored, and those processes are terminated immediately, without a chance to correctly finish their operations
  7. the kubelet triggers the deletion of the pod from the API server
  8. the API server deletes the record about this pod from etcd
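The SIGTERM-then-SIGKILL sequence above is easy to reproduce outside of Kubernetes with any process that traps the signal. A minimal shell sketch (the "draining" message is made up for illustration, it is not something NGINX prints):

```shell
# A toy "container entrypoint": on SIGTERM it drains and exits cleanly,
# instead of being killed mid-work.
sh -c '
  trap "echo draining; exit 0" TERM
  echo ready
  while :; do sleep 1; done
' &
pid=$!

sleep 1             # give the "server" time to install its trap
kill -TERM "$pid"   # what the kubelet/Docker sends first
wait "$pid"
rc=$?
echo "exit code: $rc"
```

With the trap installed, wait reports exit code 0; without it, an unhandled TERM would end the process with the default non-zero status.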

Actually, there are two issues here:

  1. both NGINX and PHP-FPM perceive the SIGTERM signal as a "brutal murder" and finish their processes immediately, without any concern about existing connections (see Controlling nginx and the php-fpm(8) Linux man page)
  2. sending the SIGTERM and removing the pod from the endpoints (steps 4 and 5 above) are performed at the same time. Still, an Ingress Service updates its data about the endpoints not instantly, and the pod can be killed before the Ingress stops sending traffic to it, causing 502 errors for clients, as the pod can no longer accept new connections

E.g. if we have a connection to an NGINX server, the NGINX master process during the fast shutdown will just drop this connection, and our client will receive a 502 error; see Avoiding dropped connections in nginx containers with “STOPSIGNAL SIGQUIT”.


Okay, now that we have some understanding of how it all works, let's try to reproduce the first issue with NGINX.

The example below is taken from the post above and will be deployed to a Kubernetes cluster.

Prepare a Dockerfile:

FROM nginx

# the upstream below is just an example of a slow backend with a 10-second delay
RUN echo 'server {\n\
    listen 80 default_server;\n\
    location / {\n\
        proxy_pass http://httpbin.org/delay/10;\n\
    }\n\
}' > /etc/nginx/conf.d/default.conf

CMD ["nginx", "-g", "daemon off;"]

Here, NGINX will proxy_pass requests to a backend that responds with a 10-second delay, to emulate a PHP backend.

Build an image and push it to a repository:

$ docker build -t setevoy/nginx-sigterm .
$ docker push setevoy/nginx-sigterm

Now, add a Deployment manifest to spin up 10 pods from this image.

Here is the full file with a Namespace, Service, and Ingress; in the following parts of this post, only the updated parts of the manifest will be shown:

apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  namespace: test-namespace
  labels:
    app: test
spec:
  replicas: 10
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: web
        image: setevoy/nginx-sigterm
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        readinessProbe:
          tcpSocket:
            port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test-namespace
spec:
  type: NodePort
  selector:
    app: test
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test-namespace
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: test-svc
          servicePort: 80

Deploy it:

$ kubectl apply -f test-deployment.yaml
namespace/test-namespace created
deployment.apps/test-deployment created
service/test-svc created
ingress.extensions/test-ingress created

Check the Ingress:

$ curl -I aadca942-testnamespace-tes-5874–
HTTP/1.1 200 OK

And we have 10 pods running:

$ kubectl -n test-namespace get pod
NAME                              READY   STATUS    RESTARTS   AGE
test-deployment-ccb7ff8b6-2d6gn   1/1     Running   0          26s
test-deployment-ccb7ff8b6-4scxc   1/1     Running   0          35s
test-deployment-ccb7ff8b6-8b2cj   1/1     Running   0          35s
test-deployment-ccb7ff8b6-bvzgz   1/1     Running   0          35s
test-deployment-ccb7ff8b6-db6jj   1/1     Running   0          35s
test-deployment-ccb7ff8b6-h9zsm   1/1     Running   0          20s
test-deployment-ccb7ff8b6-n5rhz   1/1     Running   0          23s
test-deployment-ccb7ff8b6-smpjd   1/1     Running   0          23s
test-deployment-ccb7ff8b6-x5dc2   1/1     Running   0          35s
test-deployment-ccb7ff8b6-zlqxs   1/1     Running   0          25s

Prepare a load.yaml for the Yandex.Tank:

phantom:
  header_http: "1.1"
  headers:
    - "[Host:]"
  uris:
    - /
  load_profile:
    load_type: rps
    schedule: const(100,30m)
  ssl: false
console:
  enabled: true
telegraf:
  enabled: false
  package: yandextank.plugins.Telegraf
  config: monitoring.xml

Here, we will perform 100 requests per second during 30 minutes to the pods behind our Ingress.
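As a quick sanity check of the profile, const(100,30m) means a constant rate of 100 RPS for 30 minutes:

```shell
# total requests produced by the const(100,30m) schedule
rps=100
seconds=$((30 * 60))
total=$((rps * seconds))
echo "total requests: $total"   # 180000
```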

Run tests:


All good so far.

Now, scale down the Deployment to only one pod:

$ kubectl -n test-namespace scale deploy test-deployment --replicas=1
deployment.apps/test-deployment scaled

Pods became Terminating:

$ kubectl -n test-namespace get pod
NAME                              READY   STATUS        RESTARTS   AGE
test-deployment-647ddf455-67gv8   1/1     Terminating   0          4m15s
test-deployment-647ddf455-6wmcq   1/1     Terminating   0          4m15s
test-deployment-647ddf455-cjvj6   1/1     Terminating   0          4m15s
test-deployment-647ddf455-dh7pc   1/1     Terminating   0          4m15s
test-deployment-647ddf455-dvh7g   1/1     Terminating   0          4m15s
test-deployment-647ddf455-gpwc6   1/1     Terminating   0          4m15s
test-deployment-647ddf455-nbgkn   1/1     Terminating   0          4m15s
test-deployment-647ddf455-tm27p   1/1     Running       0          26m

And we got our 502 errors:


Next, update the Dockerfile — add the STOPSIGNAL SIGQUIT:

FROM nginx

RUN echo 'server {\n\
    listen 80 default_server;\n\
    location / {\n\
        proxy_pass http://httpbin.org/delay/10;\n\
    }\n\
}' > /etc/nginx/conf.d/default.conf

STOPSIGNAL SIGQUIT

CMD ["nginx", "-g", "daemon off;"]

Build, push:

$ docker build -t setevoy/nginx-sigquit .
$ docker push setevoy/nginx-sigquit

Update the Deployment with the new image:

      containers:
      - name: web
        image: setevoy/nginx-sigquit
        ports:
        - containerPort: 80

Redeploy, and check again.

Run tests:


Scale down the deployment again:

$ kubectl -n test-namespace scale deploy test-deployment --replicas=1
deployment.apps/test-deployment scaled

And no errors this time:



Traffic, preStop, and sleep

But still, if we repeat the tests a few times, we can still get some 502 errors:


This time, most likely, we are facing the second issue: the endpoints update is performed at the same time as the SIGTERM is sent.

Let's add a preStop hook with a sleep to give some time for the endpoints and our Ingress to be updated, so that after the cluster receives a request to stop a pod, the kubelet on the WorkerNode will wait for 5 seconds before sending the SIGTERM:

      containers:
      - name: web
        image: setevoy/nginx-sigquit
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep","5"]
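The timing logic here can be sketched in plain shell. The 2-second propagation figure below is an assumption for illustration only, not a measured value:

```shell
# If the preStop sleep outlasts endpoint propagation, SIGTERM arrives
# only after the load balancer has stopped routing traffic to the pod.
ENDPOINT_PROPAGATION=2   # assumed seconds until the Ingress/ALB stops sending traffic
PRESTOP_SLEEP=5          # the preStop hook value

if [ "$PRESTOP_SLEEP" -gt "$ENDPOINT_PROPAGATION" ]; then
  result="SIGTERM arrives after traffic is drained: no 502s"
else
  result="SIGTERM races the endpoint removal: possible 502s"
fi
echo "$result"
```

In a real cluster the propagation time depends on the controller and load balancer, so the sleep value may need tuning.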

Repeat the tests, and now everything is fine.

Our PHP-FPM had no such issue as its image was initially built with the STOPSIGNAL SIGQUIT.

Other possible solutions

And of course, during debugging, I've tried some other approaches to mitigate the issue.

See the links at the end of this post; here I'll describe them in short.

preStop and nginx -s quit

One of the solutions was to add a preStop hook which will send QUIT to NGINX:

        lifecycle:
          preStop:
            exec:
              command:
              - /usr/sbin/nginx
              - -s
              - quit
But it didn't help. Not sure why, as the idea seems to be correct: instead of waiting for the TERM from Kubernetes/Docker, we gracefully stop the NGINX master process by sending it QUIT.

You can also run the strace utility to check which signal is really received by NGINX.

NGINX + PHP-FPM, supervisord, and stopsignal

Our application is running in two containers in one pod, but during the debugging I've also tried a single container with both NGINX and PHP-FPM, for example, trafex/alpine-nginx-php7.

There, I've tried to add the stopsignal option to the supervisord configuration for both NGINX and PHP-FPM with the QUIT value, but this also didn't help, although the idea again seems to be correct.

Still, one can try this way.

PHP-FPM, and process_control_timeout

In the Graceful shutdown in Kubernetes is not always trivial post, and in the Nginx / PHP FPM graceful stop (SIGQUIT): not so graceful question on StackOverflow, there is a note that FPM's master process can be killed before its children, which can lead to 502 errors as well.

It is not our current case, but pay attention to the process_control_timeout parameter.
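As a sketch, the directive lives in the global section of php-fpm.conf; the 10s value below is an arbitrary example, not a recommendation:

```ini
; php-fpm.conf (global section)
; how long the master process waits for child processes
; to react to signals before proceeding with the shutdown
process_control_timeout = 10s
```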

NGINX, HTTP, and keep-alive session

Also, it can be a good idea to send the Connection: close header: then the client will close its connection right after the request is finished, and this can decrease the number of 502 errors.

But anyway, they will persist if NGINX gets the SIGTERM while processing a request.

See the HTTP persistent connection.
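In NGINX, one way to achieve this is to disable keep-alive: with a zero keepalive_timeout, NGINX returns the Connection: close header with each response. A minimal fragment in the spirit of the test config used above:

```nginx
server {
    listen 80 default_server;

    # a zero value disables keep-alive client connections:
    # NGINX will send "Connection: close" with each response
    keepalive_timeout 0;

    location / {
    }
}
```

The trade-off is that every request pays the cost of a new TCP connection, so this is more of a mitigation than a fix.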

Difficult days

"Today was a difficult day," said Pooh.
There was a pause.
"Do you want to talk about it?" asked Piglet.
"No," said Pooh after a while. "No, I don't think so."
"Alright," said Piglet, and he sat down next to his friend.
"What are you doing?" asked Pooh.
"Nothing," said Piglet. "It's just that I know what difficult days are like. Quite often I don't feel like talking either, on my difficult days."
"But..." Piglet went on, "difficult days are much easier when you know you have someone there for you. And I will always be here for you, Pooh."
And as Pooh sat there, mulling over the difficulties of the day, while Piglet sat silently beside him, swinging his little legs... he thought that his best friend was quite right.


Deploying Kubernetes on bare metal with Rancher 2.0


  • Install Rancher server
  • Create a Kubernetes cluster
  • Add Kubernetes nodes
  • Install StorageOS as the Kubernetes storage class
  • Understand Nginx Ingress in Rancher

Install Rancher

Create a VM with Docker and Docker Compose installed, and install Rancher 2.0 with docker-compose:

  • Rancher docker-compose file: docker-compose.yaml
  • Run these commands to install Rancher with docker compose:
    • git clone
    • cd rancher-docker-compose
    • docker-compose up -d

Create your Kubernetes cluster with Rancher

Install a custom Kubernetes cluster with Rancher by using the 'Custom' cluster type.


Add Kubernetes nodes and join the Kubernetes cluster

Run the following commands on all the VMs that your Kubernetes cluster will run on. The final docker command will have the VM join the new Kubernetes cluster.

Replace the --server and --token values with your Rancher server and cluster token.


#sudo apt update
#sudo apt -y dist-upgrade

#Ubuntu (Docker install)
#sudo apt -y install

sudo apt -y install linux-image-extra-$(uname -r)

#Debian 9 (Docker install)
#sudo apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
#curl -fsSL | sudo apt-key add -
#sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
#sudo apt update
#sudo apt -y install docker-ce

sudo mkdir -p /etc/systemd/system/docker.service.d/
# sudo does not apply to a shell redirection, so use tee;
# StorageOS requires shared mount propagation for Docker
sudo tee /etc/systemd/system/docker.service.d/mount_propagation_flags.conf <<EOF
[Service]
MountFlags=shared
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker.service

#This is dependent on your Rancher server
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.0-rc9 --server --token rb8k8kkqw55jqnqbbf4ssdjqtw6hndhfxxcghgv8257kx4p6qsqq55 --ca-checksum 641b2888ce3f1091d20149a495d10457154428f440475b42291b6af1b6c0dd06 --etcd --controlplane --worker

Download the kubeconfig file for the cluster


After you download the kubeconfig file, you can use it by running this command:

export KUBECONFIG=$HOME/.kube/rancher-config

Install Helm on the cluster

git clone

cd set-up-tiller

chmod u+x


helm init --service-account tiller

Install StorageOS Helm Chart

helm repo add storageos
helm install --name storageos --namespace storageos-operator --version 1.1.3 storageos/storageoscluster-operator

Add the StorageOS Secret

apiVersion: v1
kind: Secret
metadata:
  name: storageos-api
  namespace: default
  labels:
    app: storageos
data:
  # echo -n '<secret>' | base64
  apiUsername: c3RvcmFnZW9z
  apiPassword: c3RvcmFnZW9z

Add the StorageOSCluster

kind: StorageOSCluster
metadata:
  name: example-storageos
  namespace: default
spec:
  secretRefName: storageos-api
  secretRefNamespace: default
  csi:
    enable: true

Set StorageOS as the default storage class

kubectl patch storageclass fast -p '{"metadata": {"annotations":{"":"true"}}}'

Using the default Nginx Ingress

Rancher automatically installs the nginx ingress controller on all the nodes in the cluster.
If you are able to expose one of the VMs in the cluster to the outside world with a public IP then you can connect to the ingress based services on ports 80 and 443.

Any app you want to be accessible through the default nginx ingress must be added to the ‘default’ project in Rancher.
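As a sketch of what that looks like (all names below are hypothetical), an app added to the 'default' project can then be exposed through the built-in controller with a plain Ingress:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app              # hypothetical
  namespace: default
spec:
  rules:
  - host: app.example.com   # must resolve to a node's public IP
    http:
      paths:
      - backend:
          serviceName: my-app-svc   # hypothetical Service listening on port 80
          servicePort: 80
```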

Step back

if from a love
you must defend yourself
step back

retreat once more
into the trenches of your heart
raise a white flag
and stain it with sky
let the wind of melancholy
blow hard
for losing
hurts
but staying
only to keep leaving each other
is every day
a taste of death

bury the rifles
where you will not be able to forget
clean the blades
of the blood of promises
wipe the dust from your eyes
and aim at the sun
abandon the battlefield
and go back
to tending
your wheat

cry hard
to wash off the dirt
sink without fear
into your own sea
to disinfect the cuts
aim for new horizons
and when you think you are drowning
find your handhold
to get back
your breath
but resist
the desire to return
to those shores
where you lost
your smile

if from a love
you must defend yourself

defeat
at times
can be
the most beautiful victory.

-Gian Marco Manzo


Yesterday my little doll, who never stops growing, asked me to take her to the square.
In the square was her friend, with whom by now she spends hours and hours of conversations and video calls on WhatsApp.

So, to make her happy, even though I had zero desire to go, I took her.

I sit down, I watch her play and smile, with a confidence I never had. They chase each other and laugh together... how sweet.

Well then, as a dad I feel happy, I'm not jealous, and deep down I have never been a jealous person.
Only in one period of my life, when I was turned into a jealous paranoiac.
It wasn't my fault.

Everything starts to take shape in my head as I watch her play and smile, and, helped along by the music, of course I think. I create my own introspective moment.
I think about how to handle the situations of the future, about my own path, about what I would like to advise her; deep down I would like to protect her from certain experiences: impossible.

She will have to face them all, all those gut feelings, those absolutes that adolescence will present her with.

How will I behave at her first heartbreak? Will she ask for me? And if I were so lucky, what would I tell her? Me, of all people... me, who gets emotional over a falling leaf 😀

A great writer, Steinbeck, found the right words for his fourteen-year-old son Thom, who was suffering for love.

"If you are in love, it's a good thing. Don't let anyone underestimate or belittle it. It may happen that what you feel is not returned, for one reason or another, but that won't make your feelings any less true and beautiful. And don't be afraid of losing. If it must happen, it will happen. The most important thing is not to rush. Beautiful things don't run away" — the simple, universal ending of the letter to his son is wonderful! But would those words still be enough today?

You see, dear Luigi? Once again you think that the best solution, deep down, is not to face things, yet you know that's not true.

To make her understand that suffering for love is often even beautiful, that it is in any case a feeling that makes you grow (though only someone who isn't suffering at that moment can say so), and that by expressing one's feelings something good might come out.

Whatever it is, I would just like to communicate.
How awful it is not to communicate.

Like cherry trees with spring

You're wrong, it wasn't a coincidence.. each of us is where we are because of the choices we made! That we were in the same class was not chance.. nor was it fate.. To write "Living with Death" I chose that notebook.. you, who love books, got curious and picked it up.. and then you accepted my wish.. All the choices you have made so far.. all the choices I have made so far.. each of our choices piled up on the others and made us meet! That's why we are here now..