“It is the end of the story, and you don’t know it. He is there, standing in front of the window, and you can’t forgive him for blocking the light. It isn’t him you see, but the day he keeps from coming in. It begins like this. He is there, and his presence bothers you… Of course, you feel tenderness for him. That, it seems, is what you call it when you no longer love. But then, the more tenderness you feel, the less you love? Who can say what the difference is? Tenderness is when there is no more desire. A caress on the cheek before falling asleep. Like brother and sister.”
Author: Luigi Molinaro
Wonderful
You just walked out of here and now I know what I will die for
It’s really hard to stay afloat when you got no reason (to do it for) it’s Wonderful
So who’ll forget your face?
Who’ll forget your warm embrace?
I won’t
Who will fix this broken door?
Who will bring the lights back home?
You won’t
Like the end, you are slowly fading
Let Go
“Let things break, stop trying to hold them together.
Let people get angry.
Let them criticize you; their reaction is not your problem.
Let everything fall apart and don’t worry afterwards.
Where will I go? What am I going to do?
No one has ever been lost along the way, no one has been left without shelter.
What is meant to go will go anyway.
What is meant to stay will stay.
Too much effort is never a good sign; too much effort is a sign of conflict with the universe.
Relationships
Jobs
Home
Friends and great loves …
Give everything to the creator, water when you can, pray and dance, but then let it bloom and let the dry leaves fall away on their own.
What goes always leaves room for something new: these are the universal laws.
And never think there is nothing good left for you; you just have to stop holding on to what you need to let go.
Only when your journey is over will the possibilities end, but until then, let everything fall to pieces, let go, let it be.”
I’m Not Scared Anymore (Non ho più paura)
Dream Theater
I used to think death was the end
But that was before
I’m not scared anymore
Where did we come from?
Why are we here?
Where do we go when we die?
What lies beyond
And what lay before?
Is anything certain in life?
They say, life is too short
The here and the now
And you're only given one shot
But could there be more
Have I lived before
Or could this be all that we've got?
If I die tomorrow
I'd be all right
Because I believe
That after we're gone
The spirit carries on
I used to be frightened of dying
I used to think death was the end
But that was before
I'm not scared anymore
I know that my soul will transcend
I may never find all the answers
I may never understand why
I may never prove
What I know to be true
But I know that I still have to try
If I die tomorrow
I'd be alright
Because I believe
That after we're gone
The spirit carries on
Move on, be brave
Don't weep at my grave
Because I am no longer here
But please never let
Your memory of me disappear
Safe in the light that surrounds me
Free of the fear and the pain
My questioning mind
Has helped me to find
The meaning in my life again
Victoria's real
I finally feel
At peace with the girl in my dreams
And now that I'm here
It's perfectly clear
I found out what all of this means
If I die tomorrow
I'd be alright
Because I believe
That after we're gone
The spirit carries on
Out of the Chorus
Out of the chorus, I say
that the greatest battle of our age is the one we fight against the ego.
We always have this desperate need to feed it,
ignoring that it is precisely the ego that is the main cause of our troubles.
And social media does nothing but amplify all of this.
Receiving a compliment on your picture on the web adds no value to the person (whose qualities remain unknown to most of those who give you the longed-for like).
The word ego means “I”. It represents the awareness of who we are. It is what we do that defines who we are. Feeding the ego with what we do is positive.
This positive ego is constantly undermined by the need for virtual approval.
Normally a person does not go around stopping strangers to ask whether they are still good-looking at 50, or whether people like their new tight t-shirt.
It would be strange… and useless anyway. The men and women we find beautiful (even when they are not, aesthetically) are recognized almost chemically. Sometimes it’s a simple smile.
Self-esteem is therefore constantly undermined on the web when we present ourselves through something as superficial as an image,
feeding the need to publish more and more pictures. We feed the need to feel like “influencers” who, between filters, false eyelashes, strategic poses and wide-angle lenses, are not even ourselves but a projection of ourselves, at times a caricature.
Men and women have needs. A need arises from a lack.
If a person posts five or six photos of themselves in a row asking for approval… it’s because they need it… so something is lacking. Self-esteem, precisely. And they need consensus.
Hence the selfies in every pose, the tags at places never actually visited, the photos with friends suggesting a glittering social life even when they are the loneliest people in the world: the true ego feeds on truth. However imperfect and unpleasant we may be… how beautiful the truth is!
😘😘
But that is another story…
Linux Namespaces
Namespaces in Linux are heavily used by many applications, e.g. LXC, Docker and OpenStack.
Question: how do you find all existing namespaces in a Linux system?
The answer is quite difficult, because it is easy to hide a namespace, or more exactly to make it difficult to find.
Exploring the system
In the basic/default setup Ubuntu 12.04 and higher provide namespaces for
- ipc for IPC objects and POSIX message queues
- mnt for filesystem mountpoints
- net for network abstraction (VRF)
- pid to provide a separated, isolated process ID number space
- uts to isolate two system identifiers, nodename and domainname, to be used by uname
These namespaces are shown for every process in the system. If you run the following as root:
ls -lai /proc/1/ns
60073292 dr-x--x--x 2 root root 0 Dec 15 18:23 .
10395 dr-xr-xr-x 9 root root 0 Dec 4 11:07 ..
60073293 lrwxrwxrwx 1 root root 0 Dec 15 18:23 ipc -> ipc:[4026531839]
60073294 lrwxrwxrwx 1 root root 0 Dec 15 18:23 mnt -> mnt:[4026531840]
60073295 lrwxrwxrwx 1 root root 0 Dec 15 18:23 net -> net:[4026531968]
60073296 lrwxrwxrwx 1 root root 0 Dec 15 18:23 pid -> pid:[4026531836]
60073297 lrwxrwxrwx 1 root root 0 Dec 15 18:23 uts -> uts:[4026531838]
you get the list of namespaces attached to the init process (PID=1). Even this process has attached namespaces. These are the default namespaces for ipc, mnt, net, pid and uts. For example, the default net namespace is using the ID net:[4026531968]. The number in the brackets is an inode number.
In order to find other namespaces with attached processes in the system, we use these entries of PID=1 as a reference. Any process or thread in the system whose namespace ID differs from that of PID=1 does not belong to the default namespace.
Additionally, you find the namespaces created by "ip netns add <NAME>" in /var/run/netns/ by default.
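The comparison described above can be sketched in a few lines of Python 3 (a minimal standalone sketch, separate from the full script below): each entry in /proc/<PID>/ns/ is a symlink whose target names the namespace type and inode, and two processes share a namespace exactly when the link targets match.

```python
import os

def ns_ids(pid):
    # Each entry in /proc/<PID>/ns/ is a symlink whose target,
    # e.g. net:[4026531968], encodes the namespace type and inode.
    nsdir = '/proc/{}/ns'.format(pid)
    return {name: os.readlink(os.path.join(nsdir, name))
            for name in os.listdir(nsdir)}

# 'self' is readable without root; reading other PIDs may need privileges.
for name, ident in sorted(ns_ids('self').items()):
    print('{:<16} {}'.format(name, ident))

# Two processes share a namespace exactly when the link targets match:
#   ns_ids(pid_a)['net'] == ns_ids(pid_b)['net']
```

On modern kernels you will see more namespace types (user, cgroup, time) than the five discussed in this article.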
The Python code
The Python code below lists all non-default namespaces in a system. The program flow is:
- Get the reference namespaces from the init process (PID=1). Assumption: PID=1 is assigned to the default namespaces supported by the system
- Loop through /var/run/netns/ and add the entries to the list
- Loop through /proc/ over all PIDs and look for entries in /proc/<PID>/ns/ which are not the same as for PID=1, and add them to the list
- Print the result
List all non-default namespaces in a system (Python):
#!/usr/bin/python
#
# List all Namespaces (works for Ubuntu 12.04 and higher)
#
# (C) Ralf Trezeciak    2013-2014
#
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import os
import fnmatch

if os.geteuid() != 0:
    print "This script must be run as root\nBye"
    exit(1)

def getinode( pid , type):
    link = '/proc/' + pid + '/ns/' + type
    ret = ''
    try:
        ret = os.readlink( link )
    except OSError as e:
        ret = ''
        pass
    return ret

#
# get the running command
def getcmd( p ):
    try:
        cmd = open(os.path.join('/proc', p, 'cmdline'), 'rb').read()
        if cmd == '':
            cmd = open(os.path.join('/proc', p, 'comm'), 'rb').read()
        cmd = cmd.replace('\x00' , ' ')
        cmd = cmd.replace('\n' , ' ')
        return cmd
    except:
        return ''

#
# look for docker parents
def getpcmd( p ):
    try:
        f = '/proc/' + p + '/stat'
        arr = open( f, 'rb').read().split()
        cmd = getcmd( arr[3] )
        if cmd.startswith( '/usr/bin/docker' ):
            return 'docker'
    except:
        pass
    return ''

#
# get the namespaces of PID=1
# assumption: these are the namespaces supported by the system
#
nslist = os.listdir('/proc/1/ns/')
if len(nslist) == 0:
    print 'No Namespaces found for PID=1'
    exit(1)
#print nslist

#
# get the inodes used for PID=1
#
baseinode = []
for x in nslist:
    baseinode.append( getinode( '1' , x ) )
#print "Default namespaces: " , baseinode

err = 0
ns = []
ipnlist = []

#
# loop over the network namespaces created using "ip"
#
try:
    netns = os.listdir('/var/run/netns/')
    for p in netns:
        fd = os.open( '/var/run/netns/' + p, os.O_RDONLY )
        info = os.fstat(fd)
        os.close( fd )
        ns.append( '-- net:[' + str(info.st_ino) + '] created by ip netns add ' + p )
        ipnlist.append( 'net:[' + str(info.st_ino) + ']' )
except:
    # might fail if no network namespaces are existing
    pass

#
# walk through all pids and list diffs
#
pidlist = fnmatch.filter(os.listdir('/proc/'), '[0123456789]*')
#print pidlist
for p in pidlist:
    try:
        pnslist = os.listdir('/proc/' + p + '/ns/')
        for x in pnslist:
            i = getinode( p , x )
            if i != '' and i not in baseinode:
                cmd = getcmd( p )
                pcmd = getpcmd( p )
                if pcmd != '':
                    cmd = '[' + pcmd + '] ' + cmd
                tag = ''
                if i in ipnlist:
                    tag = '**'
                ns.append( p + ' ' + i + tag + ' ' + cmd )
    except:
        # might happen if a pid is destroyed during list processing
        pass

#
# print the stuff
#
print '{0:>10} {1:20} {2}'.format('PID','Namespace','Thread/Command')
for e in ns:
    x = e.split( ' ' , 2 )
    print '{0:>10} {1:20} {2}'.format(x[0],x[1],x[2][:60])
#
Copy the script to your system as listns.py, and run it as root: python listns.py
PID Namespace Thread/Command
-- net:[4026533172] created by ip netns add qrouter-c33ffc14-dbc2-4730-b787-4747
-- net:[4026533112] created by ip netns add qrouter-5a691ed3-f6d3-4346-891a-3b59
-- net:[4026533050] created by ip netns add qdhcp-02e848cb-72d0-49df-8592-2f7a03
-- net:[4026532992] created by ip netns add qdhcp-47cfcdef-2b34-43b8-a504-6720e5
297 mnt:[4026531856] kdevtmpfs
3429 net:[4026533050]** dnsmasq --no-hosts --no-resolv --strict-order --bind-interfa
3429 mnt:[4026533108] dnsmasq --no-hosts --no-resolv --strict-order --bind-interfa
3446 net:[4026532992]** dnsmasq --no-hosts --no-resolv --strict-order --bind-interfa
3446 mnt:[4026533109] dnsmasq --no-hosts --no-resolv --strict-order --bind-interfa
3486 net:[4026533050]** /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
3486 mnt:[4026533107] /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
3499 net:[4026532992]** /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
3499 mnt:[4026533110] /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
4117 net:[4026533112]** /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
4117 mnt:[4026533169] /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
41998 net:[4026533172]** /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
41998 mnt:[4026533229] /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_fil
The example above is from an OpenStack network node. The first four entries were created using the ip command. The entry PID=297 is a kernel thread, not a user process. All other processes listed were started by OpenStack agents. These processes use network and mount namespaces. PID entries marked with '**' have a corresponding entry created with the ip command.
When a docker command is started, the output is:
PID Namespace Thread/Command
-- net:[4026532676] created by ip netns add test
35 mnt:[4026531856] kdevtmpfs
6189 net:[4026532585] [docker] /bin/bash
6189 uts:[4026532581] [docker] /bin/bash
6189 ipc:[4026532582] [docker] /bin/bash
6189 pid:[4026532583] [docker] /bin/bash
6189 mnt:[4026532580] [docker] /bin/bash
The docker child running in the namespaces is marked using [docker].
On a node running Mininet and a simple network setup, the output looks like:
PID Namespace Thread/Command
14 mnt:[4026531856] kdevtmpfs
1198 net:[4026532150] bash -ms mininet:h1
1199 net:[4026532201] bash -ms mininet:h2
1202 net:[4026532252] bash -ms mininet:h3
1203 net:[4026532303] bash -ms mininet:h4
Google's Chrome Browser
Google's Chrome browser makes extensive use of Linux namespaces. Start Chrome and run the Python script. The output looks like:
PID Namespace Thread/Command
63 mnt:[4026531856] kdevtmpfs
30747 net:[4026532344] /opt/google/chrome/chrome --type=zygote
30747 pid:[4026532337] /opt/google/chrome/chrome --type=zygote
30753 net:[4026532344] /opt/google/chrome/nacl_helper
30753 pid:[4026532337] /opt/google/chrome/nacl_helper
30754 net:[4026532344] /opt/google/chrome/chrome --type=zygote
30754 pid:[4026532337] /opt/google/chrome/chrome --type=zygote
30801 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30801 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30807 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30807 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30813 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30813 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30820 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30820 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30829 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30829 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30835 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30835 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30841 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30841 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30887 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30887 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30893 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30893 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30901 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30901 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30910 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30910 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30915 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30915 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30923 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30923 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30933 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30933 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30938 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30938 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30944 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
30944 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
31271 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
31271 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
31538 net:[4026532344] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
31538 pid:[4026532337] /opt/google/chrome/chrome --type=renderer --lang=en-US --for
Chrome makes use of pid and network namespaces to restrict the access of subcomponents. The network namespace does not have a link in /var/run/netns/.
Conclusion
It is quite hard to explore Linux namespaces. There is a lot of documentation floating around, but I did not find any simple program to look for namespaces in a system. So I wrote one.
The script cannot find a network namespace that has no process attached to it AND no reference in /var/run/netns/. If root creates the reference inode somewhere else in the filesystem, you may only detect network ports (an ovs port, or one side of a veth pair) that are not attached to a known network namespace --> an unknown guest might be on your system using a "hidden" (not so easy to find) network namespace.
And remember: Linux namespaces can be stacked.
For you (for me)
“Whatever flower you are, when your time comes, you will bloom. Before then, a long, cold night may pass. Even from the dreams of that night you will draw strength and nourishment. So be patient with what happens to you, and care for yourself and love yourself without comparing yourself to others or wishing to be a different flower, for there is no better flower than the one that opens in the fullness of what it is. And when that happens, you will discover that you had been dreaming all along of being a flower that was yet to bloom.” — Daisaku Ikeda
Tutorial: Creating a Cluster with a Fargate Task Using the Amazon ECS CLI
This tutorial shows you how to set up a cluster and deploy a service with tasks that use the Fargate launch type.
Prerequisites
Complete the following prerequisites:
- Set up an AWS account.
- Install the Amazon ECS CLI. For more information, see Installing the Amazon ECS CLI.
- Install and configure the AWS CLI. For more information, see the AWS Command Line Interface section.
Step 1: Create the task execution IAM role
The Amazon ECS container agent makes calls to the AWS API on your behalf, so it requires an IAM policy and role that let the service know that the agent belongs to you. This IAM role is referred to as a task execution IAM role. If you already have a task execution role ready to use, you can skip this step. For more information, see Amazon ECS Task Execution IAM Role.
To create the task execution IAM role using the AWS CLI
- Create a file named task-execution-assume-role.json with the following contents:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
- Create the task execution role:
aws iam --region us-west-2 create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://task-execution-assume-role.json
- Attach the task execution role policy:
aws iam --region us-west-2 attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
Step 2: Configure the Amazon ECS CLI
To make API requests on your behalf, the Amazon ECS CLI needs credentials, which it can pull from environment variables, an AWS profile, or an Amazon ECS profile. For more information, see Amazon ECS CLI Configuration.
To create an Amazon ECS CLI configuration
- Create a cluster configuration, which defines the AWS region, resource creation prefixes, and the cluster name to use with the Amazon ECS CLI:
ecs-cli configure --cluster tutorial --default-launch-type FARGATE --config-name tutorial --region us-west-2
- Create a CLI profile using your access key ID and secret key:
ecs-cli configure profile --access-key AWS_ACCESS_KEY_ID --secret-key AWS_SECRET_ACCESS_KEY --profile-name tutorial-profile
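As an alternative to a named profile, the same credentials can be supplied through environment variables, which the ECS CLI also reads. A minimal sketch; the key values below are the well-known placeholder examples from the AWS documentation, not real credentials:

```shell
# Placeholder credentials: replace with your own values before use.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
```

With these set in the shell, subsequent ecs-cli commands should be able to authenticate without the --ecs-profile flag.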
Step 3: Create a cluster and configure the security group
To create an ECS cluster and a security group
- Create an Amazon ECS cluster with the ecs-cli up command. Because you specified Fargate as the default launch type in the cluster configuration, this command creates an empty cluster and a VPC configured with two public subnets.
ecs-cli up --cluster-config tutorial --ecs-profile tutorial-profile
It may take a few minutes for the resources to be created and the command to complete. The output of this command contains the VPC and subnet IDs that are created. Take note of these IDs, as they are used later.
- Using the AWS CLI, retrieve the default security group ID for the VPC. Use the VPC ID from the previous output:
aws ec2 describe-security-groups --filters Name=vpc-id,Values=VPC_ID --region us-west-2
The output of this command contains your security group ID, which is used in the next step.
- Using the AWS CLI, add a security group rule to allow inbound access on port 80:
aws ec2 authorize-security-group-ingress --group-id security_group_id --protocol tcp --port 80 --cidr 0.0.0.0/0 --region us-west-2
Step 4: Create a Compose file
In this step, you generate a simple Docker Compose file that creates a PHP web application. Currently, the Amazon ECS CLI supports Docker Compose file syntax versions 1, 2, and 3. This tutorial uses Docker Compose v3.
Here is the Compose file, which you can name docker-compose.yml. The web container exposes port 80 for inbound traffic to the web server, and also configures container logs to go to the CloudWatch log group created earlier. This is the best practice for Fargate tasks.
version: '3'
services:
web:
image: amazon/amazon-ecs-sample
ports:
- "80:80"
logging:
driver: awslogs
options:
awslogs-group: tutorial
awslogs-region: us-west-2
awslogs-stream-prefix: web
Note
If your account already contains a CloudWatch Logs log group named tutorial in the us-west-2 region, choose a unique name so that the ECS CLI creates a new log group for this tutorial.
In addition to the Docker Compose information, you must specify some Amazon ECS specific parameters needed for the service. Using the VPC, subnet, and security group IDs from the previous step, create a file named ecs-params.yml with the following content:
version: 1
task_definition:
task_execution_role: ecsTaskExecutionRole
ecs_network_mode: awsvpc
task_size:
mem_limit: 0.5GB
cpu_limit: 256
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- "subnet ID 1"
- "subnet ID 2"
security_groups:
- "security group ID"
assign_public_ip: ENABLED
Step 5: Deploy the Compose file to a cluster
After you create the Compose file, you can deploy it to your cluster with the ecs-cli compose service up command. By default, the command looks for files named docker-compose.yml and ecs-params.yml in the current directory; you can specify a different Docker Compose file with the --file option, and a different ECS Params file with the --ecs-params option. By default, the resources created by this command have the current directory in their titles, but you can override that with the --project-name option. The --create-log-groups option creates the CloudWatch log groups for the container logs.
ecs-cli compose --project-name tutorial service up --create-log-groups --cluster-config tutorial --ecs-profile tutorial-profile
Step 6: View the running containers on a cluster
After you deploy the Compose file, you can view the containers that are running in the service with the ecs-cli compose service ps command.
ecs-cli compose --project-name tutorial service ps --cluster-config tutorial --ecs-profile tutorial-profile
Output:
Name                                           State    Ports                     TaskDefinition  Health
tutorial/0c2862e6e39e4eff92ca3e4f843c5b9a/web  RUNNING  34.222.202.55:80->80/tcp  tutorial:1      UNKNOWN
In the example above, you can see the web container from your Compose file, and also the IP address and port of the web server. If you point a web browser at that address, you see the PHP web application. Also in the output is the task-id value of the container. Copy the task ID, as you need it in the next step.
Step 7: View the container logs
View the logs for the task:
ecs-cli logs --task-id 0c2862e6e39e4eff92ca3e4f843c5b9a --follow --cluster-config tutorial --ecs-profile tutorial-profile
Note
The --follow option tells the Amazon ECS CLI to continuously poll for logs.
Step 8: Scale the tasks on the cluster
With the ecs-cli compose service scale command, you can scale up your task count to increase the number of instances of your application. In this example, the running count of the application is increased to two.
ecs-cli compose --project-name tutorial service scale 2 --cluster-config tutorial --ecs-profile tutorial-profile
Now you have two more containers running in the cluster:
ecs-cli compose --project-name tutorial service ps --cluster-config tutorial --ecs-profile tutorial-profile
Output:
Name                                           State    Ports                      TaskDefinition  Health
tutorial/0c2862e6e39e4eff92ca3e4f843c5b9a/web  RUNNING  34.222.202.55:80->80/tcp   tutorial:1      UNKNOWN
tutorial/d9fbbc931d2e47ae928fcf433041648f/web  RUNNING  34.220.230.191:80->80/tcp  tutorial:1      UNKNOWN
Step 9: View your web application
Enter the IP address for the task in your web browser, and you see a webpage that displays the Simple PHP App web application.
Step 10: Clean up
When you are done with this tutorial, clean up your resources so that they do not incur further charges. First, delete the service: this stops the existing containers and prevents additional tasks from being run.
ecs-cli compose --project-name tutorial service down --cluster-config tutorial --ecs-profile tutorial-profile
Now, take down the cluster to clean up the resources that you created earlier with ecs-cli up.
ecs-cli down --force --cluster-config tutorial --ecs-profile tutorial-profile
Ready-to-use commands and tips for kubectl
Kubectl is the most important Kubernetes command-line tool that allows you to run commands against clusters. We at Flant internally share our knowledge of using it via formal wiki-like instructions as well as Slack messages (we also have a handy and smart search engine in place — but that’s a whole different story…). Over the years, we have accumulated a large number of various kubectl tips and tricks. Now, we’ve decided to share some of our cheat sheets with a wider community.
I am sure our readers might be familiar with many of them. But still, I hope you will learn something new and, thereby, improve your productivity.
NB: While some of the commands & techniques listed below were compiled by our engineers, others were found on the Web. In the latter case, we checked them thoroughly and found them useful.
Well, let’s get started!
Getting lists of pods and nodes
1. I guess you are all aware of how to get a list of pods across all Kubernetes namespaces using the --all-namespaces flag. Many people are so used to it that they have not noticed the emergence of its shorter version, -A (it has existed since at least Kubernetes 1.15).
2. How do you find all non-running pods (i.e., with a state other than Running)?
kubectl get pods -A --field-selector=status.phase!=Running | grep -v Complete
By the way, examining the --field-selector flag more closely (see the relevant documentation) might be a good general recommendation.
3. Here is how you can get the list of nodes and their memory size:
kubectl get no -o json | \
jq -r '.items | sort_by(.status.capacity.memory)[]|[.metadata.name,.status.capacity.memory]| @tsv'
4. Getting the list of nodes and the number of pods running on them:
kubectl get po -o json --all-namespaces | \
jq '.items | group_by(.spec.nodeName) | map({"nodeName": .[0].spec.nodeName, "count": length}) | sort_by(.count)'
5. Sometimes, DaemonSet does not schedule a pod on a node for whatever reason. Manually searching for them is a tedious task, so here is a mini-script to get a list of such nodes:
ns=my-namespace
pod_template=my-pod
kubectl get node | grep -v \"$(kubectl -n ${ns} get pod --all-namespaces -o wide | fgrep ${pod_template} | awk '{print $8}' | xargs -n 1 echo -n "\|" | sed 's/[[:space:]]*//g')\"
6. This is how you can use kubectl top to get a list of pods that eat up CPU and memory resources:
# cpu
kubectl top pods -A | sort --reverse --key 3 --numeric
# memory
kubectl top pods -A | sort --reverse --key 4 --numeric
7. Sorting the list of pods (in this case, by the number of restarts):
kubectl get pods --sort-by=.status.containerStatuses[0].restartCount
Of course, you can sort them by other fields, too (see PodStatus and ContainerStatus for details).
Getting other data
1. When tuning the Ingress resource, we inevitably go down to the service itself and then search for pods based on its selector. I used to look for this selector in the service manifest, but later switched to the -o wide flag:
kubectl -n jaeger get svc -o wide
NAME              TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE  SELECTOR
jaeger-cassandra  ClusterIP  None        <none>       9042/TCP  77d  app=cassandracluster,cassandracluster=jaeger-cassandra,cluster=jaeger-cassandra
As you can see, in this case, we get the selector used by our service to find the appropriate pods.
2. Here is how you can easily print limits and requests of each pod:
kubectl get pods -n my-namespace -o=custom-columns='NAME:spec.containers[*].name,MEMREQ:spec.containers[*].resources.requests.memory,MEMLIM:spec.containers[*].resources.limits.memory,CPUREQ:spec.containers[*].resources.requests.cpu,CPULIM:spec.containers[*].resources.limits.cpu'
3. The kubectl run command (as well as create, apply, patch) has a great feature that allows you to see the expected changes without actually applying them: the --dry-run flag. When it is used with -o yaml, this command outputs the manifest of the required object. For example:
kubectl run test --image=grafana/grafana --dry-run -o yaml

apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: test
name: test
spec:
replicas: 1
selector:
matchLabels:
run: test
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: test
spec:
containers:
- image: grafana/grafana
name: test
resources: {}
status: {}
All you have to do now is to save it to a file, delete a couple of system/unnecessary fields, et voila.
NB: Please note that the kubectl run behavior changed in Kubernetes v1.18 (now it generates Pods instead of Deployments). You can find a great summary of this issue here.
4. Getting a description of the manifest of a given resource:
kubectl explain hpa

KIND:     HorizontalPodAutoscaler
VERSION:  autoscaling/v1

DESCRIPTION:
     configuration of a horizontal pod autoscaler.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec <Object>
     behaviour of autoscaler. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status       <Object>
     current information about the autoscaler.
Well, that is a piece of extensive and very helpful information, I must say.
Networking
1. Here is how you can get internal IP addresses of cluster nodes:
kubectl get nodes -o json | \
jq -r '.items[].status.addresses[]? | select (.type == "InternalIP") | .address' | \
paste -sd "\n" -
2. And this way, you can print all services and their respective nodePorts:
kubectl get --all-namespaces svc -o json | \
jq -r '.items[] | [.metadata.name,([.spec.ports[].nodePort | tostring ] | join("|"))]| @tsv'
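Note that plain ClusterIP services have no nodePort, so the expression above prints null for them. A variant that keeps only NodePort and LoadBalancer services can be built by adding a select step. A sketch, in which an echoed sample JSON stands in for the real `kubectl get --all-namespaces svc -o json` output:

```shell
# Sample JSON stands in for: kubectl get --all-namespaces svc -o json
echo '{"items":[
  {"metadata":{"namespace":"default","name":"web"},
   "spec":{"type":"NodePort","ports":[{"nodePort":30080}]}},
  {"metadata":{"namespace":"default","name":"db"},
   "spec":{"type":"ClusterIP","ports":[{"port":5432}]}}]}' \
  | jq -r '.items[]
      | select(.spec.type == "NodePort" or .spec.type == "LoadBalancer")
      | [.metadata.namespace, .metadata.name,
         ([.spec.ports[].nodePort | tostring] | join("|"))]
      | @tsv'
```

Only the NodePort service survives the filter, and the namespace column makes the output usable with --all-namespaces.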
3. In situations where there are problems with the CNI (for example, with Flannel), you have to check the routes to identify the problem pod. Pod subnets that are used in the cluster can be very helpful in this task:
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' | tr " " "\n"
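When chasing a routing problem it also helps to know which node owns which subnet. A jq variant can print the node name next to its podCIDR; here a sample JSON stands in for the live `kubectl get nodes -o json` output:

```shell
# Sample JSON stands in for: kubectl get nodes -o json
echo '{"items":[
  {"metadata":{"name":"node-1"},"spec":{"podCIDR":"10.244.0.0/24"}},
  {"metadata":{"name":"node-2"},"spec":{"podCIDR":"10.244.1.0/24"}}]}' \
  | jq -r '.items[] | "\(.metadata.name)\t\(.spec.podCIDR)"'
```

With the node-to-subnet mapping in hand, a missing or wrong route on a node is much easier to spot.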
Logs
1. Print logs with a human-readable timestamp (if it is not set):
kubectl -n my-namespace logs -f my-pod --timestamps

2020-07-08T14:01:59.581788788Z fail: Microsoft.EntityFrameworkCore.Query[10100]
Logs look so much better now, don’t they?
2. You do not have to wait until the entire log of the pod’s container is printed out; just use --tail:
kubectl -n my-namespace logs -f my-pod --tail=50
3. Here is how you can print all the logs from all containers of a pod:
kubectl -n my-namespace logs -f my-pod --all-containers
4. Getting logs from all pods using a label to filter:
kubectl -n my-namespace logs -f -l app=nginx
5. Getting logs of the “previous” container (for example, if it has crashed):
kubectl -n my-namespace logs my-pod --previous
Other quick actions
1. Here is how you can quickly copy secrets from one namespace to another:
kubectl get secrets -o json --namespace namespace-old | \
  jq '.items[].metadata.namespace = "namespace-new"' | \
  kubectl create -f -
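The exported secrets still carry cluster-assigned metadata (uid, resourceVersion, creationTimestamp), which can make the create step fail. It is safer to strip those fields in the same jq pass. A sketch, with an echoed sample JSON standing in for the real secret list:

```shell
# Sample JSON stands in for: kubectl get secrets -o json --namespace namespace-old
echo '{"items":[{"metadata":{"name":"db-pass","namespace":"namespace-old",
  "uid":"1234","resourceVersion":"42","creationTimestamp":"2020-01-01T00:00:00Z"}}]}' \
  | jq '.items[].metadata |= (.namespace = "namespace-new"
        | del(.uid, .resourceVersion, .creationTimestamp))'
# Pipe the result to: kubectl create -f -
```

The |= update assignment rewrites each metadata object in place: the namespace is swapped and the cluster-specific fields are deleted.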
2. Run these two commands to create a self-signed certificate for testing:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=grafana.mysite.ru/O=MyOrganization"
kubectl -n myapp create secret tls selfsecret --key tls.key --cert tls.crt
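Before pointing an Ingress at the new secret, it is worth double-checking what the certificate actually contains. openssl can print the subject and expiry date back from the generated tls.crt (same generation command as above, repeated so the sketch is self-contained):

```shell
# Generate the throwaway self-signed certificate (same command as above)...
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=grafana.mysite.ru/O=MyOrganization"
# ...then inspect it before loading it into the cluster:
openssl x509 -in tls.crt -noout -subject -enddate
```

A quick check like this catches a mistyped CN before the certificate ends up behind an Ingress.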
Helpful links on the topic
In lieu of conclusion, here is a small list of similar publications and cheat sheet collections we’ve found online:
- The official cheatsheet from the Kubernetes documentation;
- A short practical introduction plus a detailed 2-page table by Linux Academy. It provides novice engineers with all the basic kubectl commands at a glance;
- An exhaustive list of commands by Blue Matador divided into sections;
- A gist compilation of links to kubectl cheatsheets, articles on the topic, as well as some commands;
- The Kubernetes-Cheat-Sheet GitHub repository by another enthusiast containing kubectl commands categorized by topics;
- The kubectl-aliases GitHub repository which is a real paradise for alias lovers.
Deploying WordPress and MySQL with Persistent Volumes in Kubernetes
This tutorial shows you how to deploy a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
A PersistentVolume (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a StorageClass. A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a PV. PersistentVolumes and PersistentVolumeClaims are independent of Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.
Warning: This deployment is not suitable for production use cases, as it uses single-instance WordPress and MySQL Pods. Consider using the WordPress Helm Chart to deploy WordPress in production.
Note: The files provided in this tutorial use GA Deployment APIs and are specific to Kubernetes version 1.9 and later. If you wish to use this tutorial with an earlier version of Kubernetes, please update the API version appropriately, or reference earlier versions of this tutorial.
Objectives
- Create PersistentVolumeClaims and PersistentVolumes
- Create a kustomization.yaml with
  - a Secret generator
  - MySQL resource configs
  - WordPress resource configs
- Apply the kustomization directory by kubectl apply -k ./
- Clean up
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version. The example shown on this page works with kubectl 1.14 and above.
Download the following configuration files:
Create PersistentVolumeClaims and PersistentVolumes
MySQL and WordPress each require a PersistentVolume to store data. Their PersistentVolumeClaims will be created at the deployment step.
Many cluster environments have a default StorageClass installed. When a StorageClass is not specified in the PersistentVolumeClaim, the cluster’s default StorageClass is used instead.
When a PersistentVolumeClaim is created, a PersistentVolume is dynamically provisioned based on the StorageClass configuration.
Warning: In local clusters, the default StorageClass uses the hostPath provisioner. hostPath volumes are only suitable for development and testing. With hostPath volumes, your data lives in /tmp on the node the Pod is scheduled onto and does not move between nodes. If a Pod dies and gets scheduled to another node in the cluster, or the node is rebooted, the data is lost.
Note: If you are bringing up a cluster that needs to use the hostPath provisioner, the --enable-hostpath-provisioner flag must be set in the controller-manager component.
Note: If you have a Kubernetes cluster running on Google Kubernetes Engine, please follow this guide.
Create a kustomization.yaml
Add a Secret generator
A Secret is an object that stores a piece of sensitive data like a password or key. Since 1.14, kubectl supports the management of Kubernetes objects using a kustomization file. You can create a Secret by generators in kustomization.yaml.
Add a Secret generator in kustomization.yaml with the following command. You will need to replace YOUR_PASSWORD with the password you want to use.
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
EOF
Add resource configs for MySQL and WordPress
The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret.
application/wordpress/mysql-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment variable sets the name of the MySQL Service defined above, and WordPress will access the database by Service. The WORDPRESS_DB_PASSWORD environment variable sets the database password from the Secret kustomize generated.
application/wordpress/wordpress-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
- Download the MySQL deployment configuration file.
curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
- Download the WordPress configuration file.
curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
- Add them to the kustomization.yaml file.
cat <<EOF >>./kustomization.yaml
resources:
- mysql-deployment.yaml
- wordpress-deployment.yaml
EOF
Apply and Verify
The kustomization.yaml contains all the resources for deploying a WordPress site and a MySQL database. You can apply the directory by running:
kubectl apply -k ./
Now you can verify that all objects exist.
- Verify that the Secret exists by running the following command:
kubectl get secrets
The response should be like this:
NAME                    TYPE     DATA   AGE
mysql-pass-c57bb4t7mf   Opaque   1      9s
- Verify that a PersistentVolume got dynamically provisioned.
kubectl get pvc
Note: It can take up to a few minutes for the PVs to be provisioned and bound.
The response should be like this:
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard       77s
wp-pv-claim      Bound    pvc-8cd0df54-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard       77s
- Verify that the Pod is running by running the following command:
kubectl get pods
Note: It can take up to a few minutes for the Pod’s Status to be RUNNING.
The response should be like this:
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-mysql-1894417608-x5dzt   1/1     Running   0          40s
- Verify that the Service is running by running the following command:
kubectl get services wordpress
The response should be like this:
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
wordpress   LoadBalancer   10.0.0.89    <pending>     80:32406/TCP   4m
Note: Minikube can only expose Services through NodePort. The EXTERNAL-IP is always pending.
- Run the following command to get the IP Address for the WordPress Service:
minikube service wordpress --url
The response should be like this:
http://1.2.3.4:32406
- Copy the IP address, and load the page in your browser to view your site. You should see the WordPress setup page similar to the following screenshot.
Warning: Do not leave your WordPress installation on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content.
Either install WordPress by creating a username and password or delete your instance.
Cleaning up
- Run the following command to delete your Secret, Deployments, Services and PersistentVolumeClaims:
kubectl delete -k ./
What’s next
- Learn more about Introspection and Debugging
- Learn more about Jobs
- Learn more about Port Forwarding
- Learn how to Get a Shell to a Container