Learning GIT using McDonald’s

The most taken-for-granted & ignored skill: Git ⚙️

Git process

When you go to McDonald’s🍔 & order a “Double Quarter Pounder with a whole-wheat bun, extra cheese, fewer tomato slices, extra mayonnaise & a large Coke”🍔🍶
(By the way, that’s a huge burger, man 😂)

The moment you place Order:

🍔 The manager will shout out your order within the team🗣️

🍔 2–3 people will start working separately on your order: one on the chicken patty, one on the wheat bun, one on the veggies, one on your soft drink & so on🕴🏻🕴🏻

🍔 They will all first “pull” the latest stock from their respective inventory: bread, veggies, chicken patty & so on.

🍔 The moment all 3–4 of them are ready with their individual tasks, they come to a single place with their results🤌🏻😎

🍔 They will add & merge all their results into a wrapper

🍔 Then they push it into a box & boom, it is handed over to you🥳

And this is exactly how the Git concept works✅🎉

The moment your Team gets a Project like “Create a Calculator”

👨🏻‍💻Your Team Lead will shout out the task within the Team➕✖️➗

👨🏻‍💻 2–3 developers will start working separately on different features of the Calculator: one on Addition➕, one on Division➗ & so on.

👨🏻‍💻They will first “pull” the latest code of your project from GitHub/Bitbucket (or wherever your project is stored) onto their local machines💻💻

👨🏻‍💻 Once all 2–3 developers are ready with their respective tasks, they will “push” their code to a single place📥

👨🏻‍💻They will “merge” all their work & boom, the Calculator is ready✅🎉

Let’s now dig a bit deeper into the practical and processing side of Git🧙🏼‍♂️⚙️

Note: Be patient for the next 3 minutes and read calmly. Let’s start😎

The moment you are asked to code & add a new feature in your existing project code repo, you will first

🔥Clone the code repo with the command: git clone

Your company will store its code in a repo on GitHub or Bitbucket and will create an account for you there, so log in to that account, go to the code repo and use the command

🧞‍♂️ “git clone https://[email protected]/sample_repo.git -b develop”
The “-b develop” flag clones the develop branch out of the many other branches present in your team’s main code repo.

🔥Now, once you have the project’s develop branch code, you will create your own branch from it (like your own working space)

🧞‍♂️git checkout -b mohit/feature_calculator_addition
“mohit/feature_calculator_addition” is your private space, separated from the main team project; it helps keep any issue on your side out of the team’s main code.

🔥Start making the changes or code additions you want, and once done run the commands below (a consolidated sketch follows this list):

🧞‍♂️git status: It will show all the scripts you have made changes to.

🧞‍♂️git add a.py b.py: This will add the changed scripts to the staging area, where they are made ready to be committed.

🧞‍♂️git commit -m “added addition feature”: This will be your message to the team for the code you have worked on.

🧞‍♂️git push origin mohit/feature_calculator_addition: This will push your entire code to the branch you have created.
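Putting it all together, here is a minimal sketch of the whole local workflow (the repo URL, branch name, and file names are only placeholders):

git clone https://bitbucket.org/your_team/sample_repo.git -b develop   # get the team's develop branch
cd sample_repo
git checkout -b mohit/feature_calculator_addition                      # your private working branch
# ...edit a.py and b.py...
git status                                                             # see which scripts changed
git add a.py b.py                                                      # stage the changed scripts
git commit -m "added addition feature"                                 # record the change with a message
git push origin mohit/feature_calculator_addition                      # push the branch to the remote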

🔥So far you have pulled the team’s code, created your own branch, worked on it, and pushed the code to “your” branch.

But all this is still at “your” level, so to get it into the team’s code repo you will raise a “Pull Request” to your Team Lead.

🔥Log in to your Bitbucket/GitHub console -> go to the Pull Request tab,
select from where to where you want to move your code.
From: mohit/feature_calculator_addition To: develop

🔥Once you raise the PR, adding your team lead’s name, they will be notified & can see your changes in all the scripts you have modified.

🔥If the lead feels it’s proper, they will accept your code & merge it into the team’s develop branch; otherwise they will reject it with a message, you will be notified why it was rejected, and you will make the necessary changes & raise a new PR.

PR raised to Lead Emma from Your branch to Team Branch

🔥Also, when you raise a PR there can be a scenario where you and another team member have changed the exact same line in the exact same script; this will cause a “Merge Conflict” when the code is added to the main branch🤕

Merge conflicts happen when you merge two branches that have competing commits, and Git needs your help to decide whose changes to incorporate in the final merge.

🔥You have to resolve this merge conflict, either by moving the code or by deciding which one of you keeps their code on the particular line causing the conflict (see the small example below)🥶

🥳Once resolved, the final code will be added to the project’s code repo🥳
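To make this concrete, here is a rough sketch of what a conflict looks like and one common way to resolve it locally, by pulling the team’s develop branch into your own branch first (the file name and the conflicting lines are just examples):

git checkout mohit/feature_calculator_addition
git pull origin develop                    # bring the latest team code into your branch
# Auto-merging calculator.py
# CONFLICT (content): Merge conflict in calculator.py
# Automatic merge failed; fix conflicts and then commit the result.

# Inside calculator.py Git marks the two competing versions:
# <<<<<<< HEAD
# result = a + b            # your change
# =======
# result = add(a, b)        # your teammate's change
# >>>>>>> develop

# Edit the file, keep the version you both agree on, remove the markers, then:
git add calculator.py
git commit -m "resolve merge conflict in calculator.py"
git push origin mohit/feature_calculator_addition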

🤷🏻‍♂️Now the most asked question: what are Git and GitHub?🙄

See, the Internet is a concept on top of which we build and use WhatsApp, Facebook and Gmail🤖

In the same way, Git is the concept, the approach, on top of which GitHub & Bitbucket have opened their shops, giving developers a space where we can save our code & version it, so that if we ever have an issue in our production code we can simply roll back to the previous release and publish that for the customer🥳
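For instance, one rough way such a rollback might look (the commit hash and tag name here are made up):

git log --oneline                   # find the commit or release tag to go back to
git revert <bad_commit_hash>        # create a new commit that undoes the faulty change
# or, to rebuild from the last good release:
git checkout v1.2.0                 # hypothetical tag of the previous release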

There are many more, deeper things in Git, but this article’s aim was just to give you a brief idea of the concept, so that later on it will be easy for you to connect the dots about Git with any tutorial you like🤌🏻😎

And thank you for being here with me and for your love🫶🏻🙌🏻
It makes me so happy 😃 seeing a clap 👏🏻 or a comment✍🏻 or the article shared by someone. It makes all the effort worth it 🙏🏻

So that’s all for now; hope you have enjoyed the Git burger 🍔😂 as well as the concept.
Wish you all a happy learning and an awesome year ahead🥳

Critical Thinking

Tip: Critical thinking skills can be really valuable for software engineers, product folks, and many other walks of life. It’s about approaching new information with a mix of humble curiosity and doubt.

Think independently and ask good questions that help make thoughtful decisions.

In broad strokes, some of the questions I like to ask based on critical thinking are:

➡️ How do we know we’re solving the right problem?
➡️ How do we know we’re solving the problem in the right way? (i.e. balancing rigor and efficiency, given our understanding of the problem and constraints)
➡️ If we don’t know the sources of our problem, how can we determine the root cause?
➡️ How can we break the key question down into smaller questions that we can analyze further?
➡️ Once we have one or more hypotheses, how do we structure work to evaluate them?
➡️ What shortcuts might we take if we’re under constraints (time pressure) without unduly compromising our analytical rigor around the question?
➡️ Does the evidence sufficiently support the conclusions?
➡️ How do we know when we are done? When is the solution “good enough”?
➡️ How do I communicate the solution clearly and logically to all stakeholders?

I’ve found these questions often help. Sometimes we’ll address the symptom of a problem, only to discover there are other symptoms that pop up. At other times, we might quickly ship a solution that creates more problems later down the road.

With a lens on critical thinking, we might challenge assumptions, look closer at the risk/benefit, seek out contradictory evidence, evaluate credibility and look for more data to build confidence we are doing the right thing.

Being in engineering or product, we can sometimes rush to solve a problem right away so that it feels like we’re making progress or looks like we’re being responsive to stakeholders. This can introduce risks if we aren’t asking the right questions first and fully considering causes and consequences. Put another way, critical thinking is thinking on purpose and forming your own conclusions. This goal-directed thinking helps you focus on root-cause issues and avoid the future problems that arise from not keeping causes and consequences in mind.

Critical thinkers:
➡️ Raise mindful questions, formulating them clearly and precisely
➡️ Collect and assess relevant information, validating how they might answer the question
➡️ Arrive at well-reasoned conclusions and solutions, testing them against relevant criteria and standards
➡️ Think open-mindedly within alternative systems of thought, recognizing and assessing, as need be, their assumptions, implications, and practical consequences
➡️ Communicate effectively with others in figuring out solutions to complex problems

Why I switched from Docker Desktop to Colima

DDEV is an open source tool that makes it simple to get local PHP development environments up and running within minutes. It’s powerful and flexible as a result of its per-project environment configurations, which can be extended, version controlled, and shared. In short, DDEV aims to allow development teams to use containers in their workflow without the complexities of bespoke configuration.

DDEV replaces more traditional AMP stack solutions (WAMP, MAMP, XAMPP, and so on) with a flexible, modern, container-based solution. Because it uses containers, DDEV allows each project to use any set of applications, versions of web servers, database servers, search index servers, and other types of software.

In March 2022, the DDEV team announced support for Colima, an open source Docker Desktop replacement for macOS and Linux. Colima is open source, and by all reports it’s got performance gains over its alternative, so using Colima seems like a no-brainer.

Migrating to Colima

First off, Colima is almost a drop-in replacement for Docker Desktop. I say almost because some reconfiguration is required when using it for an existing DDEV project. Specifically, databases must be reimported. The fix is to first export your database, then start Colima, then import it. Easy.
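A rough sketch of that export/import dance, assuming a DDEV project (the dump path is arbitrary, and the flag names can differ between DDEV versions; older releases use --src instead of --file for imports):

ddev export-db --file=/tmp/my-project-db.sql.gz   # dump the database before switching
ddev poweroff
colima start
ddev start
ddev import-db --file=/tmp/my-project-db.sql.gz   # load the dump into the Colima-backed project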

Colima requires that either the Docker or Podman command is installed. On Linux, it also requires Lima.

Docker is installed by default with Docker Desktop for macOS, but it’s also available as a stand-alone command. If you want to go 100% pure Colima, you can uninstall Docker Desktop for macOS, and install and configure the Docker client independently. Full installation instructions can be found on the DDEV docs site.

An image of the container technology stack.

(Mike Anello, CC BY-SA 4.0)

If you choose to keep using both Colima and Docker Desktop, then when issuing docker commands from the command line, you must first specify which container you want to work with. More on this in the next section.


Install Colima on macOS

I currently have some local projects using Docker, and some using Colima. Once I understood the basics, it’s not too difficult to switch between them.

  1. To get started, install Colima using Homebrew: brew install colima
  2. Run ddev poweroff (just to be safe)
  3. Next, start Colima with colima start --cpu 4 --memory 4. The --cpu and --memory options only have to be given once; after the first time, only colima start is necessary.
  4. If you’re a DDEV user like me, you can then spin up a fresh Drupal 9 site with the usual ddev commands (ddev config, ddev start, and so on). It’s recommended to enable DDEV’s Mutagen functionality to maximize performance. The full sequence is collected in the sketch after this list.
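Collected in one place, the whole setup might look like this (project-specific ddev config answers omitted):

brew install colima                # install Colima via Homebrew
ddev poweroff                      # just to be safe
colima start --cpu 4 --memory 4    # first start; afterwards plain "colima start" is enough
ddev config                        # configure the project (DDEV users)
ddev start                         # spin up the site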

Switching between Colima and Docker Desktop

If you’re not ready to switch to Colima wholesale yet, it’s possible to have both Colima and Docker Desktop installed.

  1. First, power off DDEV: ddev poweroff
  2. Then stop Colima: colima stop
  3. Now run docker context use default to tell the Docker client which container provider you want to work with. The name default refers to Docker Desktop for Mac. When colima start is run, it automatically switches Docker to the colima context.
  4. To continue with the default (Docker Desktop) context, use the ddev start command. (The whole switch is collected in the sketch after this list.)
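Put together, switching back to Docker Desktop looks roughly like this:

ddev poweroff                 # always power off DDEV before switching contexts
colima stop                   # stop the Colima VM
docker context use default    # point the Docker client at Docker Desktop for Mac
docker context show           # optional: verify which context is active
ddev start                    # bring the project back up on Docker Desktop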

Technically, starting and stopping Colima isn’t necessary, but the ddev poweroff command when switching between two contexts is.

Recent versions of Colima revert the Docker context back to default when Colima is stopped, so the docker context use default command is no longer necessary. Regardless, I still use docker context show to verify that either the default (Docker Desktop for Mac) or colima context is in use. Basically, the term context refers to which container provider the Docker client routes commands to.

Try Colima

Overall, I’m liking what I see so far. I haven’t run into any issues, and Colima-based sites seem a bit snappier (especially when DDEV’s Mutagen functionality is enabled). I definitely foresee myself migrating project sites to Colima over the next few weeks.

Is Luigi at work?

If, on one of the days I work remotely, someone comes into the office asking for me, please never say “No, Luigi isn’t in today, he’s smart working”; answer instead “Yes, yes, Luigi is in today, he’s working remotely”.

That, in a nutshell, is smart working, “lavoro agile” (agile work) in Italian. Because work can also be done this way, not necessarily sitting at the desk in your own office. Those of us who weren’t already doing it learned that during these two years of pandemic. You can organize yourself better, you save time, and both productivity and personal life benefit from it. A win-win, to stick with the English. And besides, to answer the sceptics: people who don’t work from home generally don’t work from the office either.

From September 1st, and we’re almost there, the individual agreement, suspended for the last two years, becomes mandatory again. What is it? It is the cornerstone of smart working (which, remember, is not an employment contract but a way of carrying one out).

Employer and employee have to agree on how to organize the work, partly on company premises and partly outside them, without strict constraints on working hours or place of work. It is an agreement, which means that if one of the parties does not, in fact, agree, nothing happens. And it is individual, not collective (even though company policies or collective agreements can often provide guidance).

The agreement must be drawn up in writing, the rules say, “for the purposes of administrative regularity and proof”, and must be kept by the employer for five years from its signing. There is no explicit penalty if the agreement is missing, but there are consequences tied to the purposes it is meant to serve. How do you prove the contents of the agreement? What are the consequences in case of a workplace injury?

News from the last few days: unlike in the pre-pandemic period, the individual agreement no longer has to be sent to the Ministry of Labour, which only needs to know (besides the list of the workers involved) the date the agreement was signed and its duration. This information (which the Ministry will then forward to INAIL) must be communicated within five days of signing the agreement, under penalty of a fine of 100 to 500 euros. For the first transitional phase the deadline is set at November 1st.

Good, phase two of smart working begins: ordinary, no longer emergency-driven!

And remember what answer to give when, not finding me in the office, someone asks you “Is Luigi at work today?”.

Migrate databases to Kubernetes using Konveyor


Kubernetes Database Operator is useful for building scalable database servers as a database (DB) cluster. But because you have to create new artifacts expressed as YAML files, migrating existing databases to Kubernetes requires a lot of manual effort. This article introduces a new open source tool named Konveyor Tackle-DiVA-DOA (Data-intensive Validity Analyzer-Database Operator Adaptation). It automatically generates deployment-ready artifacts for database operator migration. And it does that through datacentric code analysis.

What is Tackle-DiVA-DOA?

Tackle-DiVA-DOA (DOA, for short) is an open source datacentric database configuration analytics tool in Konveyor Tackle. It imports target database configuration files (such as SQL and XML) and generates a set of Kubernetes artifacts for database migration to operators such as Zalando Postgres Operator.

A flowchart shows a database cluster with three virtual machines and SQL and XML files transformed by going through Tackle-DiVA-DOA into a Kubernetes Database Operator structure and a YAML file

DOA finds and analyzes the settings of an existing system that uses a database management system (DBMS). Then it generates manifests (YAML files) of Kubernetes and the Postgres operator for deploying an equivalent DB cluster.

A flowchart shows the four elements of an existing system (as described in the text below), the manifests generated by them, and those that transfer to a PostgreSQL cluster

Database settings of an application consist of DBMS configurations, SQL files, DB initialization scripts, and program code that accesses the DB.

  • DBMS configurations include parameters of DBMS, cluster configuration, and credentials. DOA stores the configuration to postgres.yaml and secrets to secret-db.yaml if you need custom credentials.
     
  • SQL files are used to define and initialize tables, views, and other entities in the database. These are stored in the Kubernetes ConfigMap definition cm-sqls.yaml.
     
  • Database initialization scripts typically create databases and schema and grant users access to the DB entities so that SQL files work correctly. DOA tries to find initialization requirements from scripts and documents or guesses if it can’t. The result will also be stored in a ConfigMap named cm-init-db.yaml.
     
  • Details for accessing the database, such as the host and database name, are in some cases embedded in program code. These are rewritten to work with the migrated DB cluster.

Tutorial

DOA is expected to run within a container and comes with a script to build its image. Make sure Docker and Bash are installed on your environment, and then run the build script as follows:

cd /tmp
git clone https://github.com/konveyor/tackle-diva.git
cd tackle-diva/doa
bash util/build.sh

docker image ls diva-doa
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
diva-doa     2.2.0     5f9dd8f9f0eb   14 hours ago   1.27GB
diva-doa     latest    5f9dd8f9f0eb   14 hours ago   1.27GB

This builds DOA and packages it as a container image. Now DOA is ready to use.

The next step executes the bundled run-doa.sh wrapper script, which runs the DOA container. Specify the Git repository of the target database application. This example uses a Postgres database in the TradeApp application. You can use the -o option to set the location of the output files and the -i option to give the name of the database initialization script:

cd /tmp/tackle-diva/doa
bash run-doa.sh -o /tmp/out -i start_up.sh \
      https://github.com/saud-aslam/trading-app
[OK] successfully completed.

The /tmp/out/ directory and /tmp/out/trading-app, a directory with the target application name, are created. In this example, the application name is trading-app, which is the GitHub repository name. The generated artifacts (the YAML files) are placed under the application-name directory:

ls -FR /tmp/out/trading-app/
/tmp/out/trading-app/:
cm-init-db.yaml  cm-sqls.yaml  create.sh*  delete.sh*  job-init.yaml  postgres.yaml  test/

/tmp/out/trading-app/test:
pod-test.yaml

The prefix of each YAML file denotes the kind of resource that the file defines. For instance, each cm-*.yaml file defines a ConfigMap, and job-init.yaml defines a Job resource. At this point, secret-db.yaml is not created, and DOA uses credentials that the Postgres operator automatically generates.

Now you have the resource definitions required to deploy a PostgreSQL cluster on a Kubernetes instance. You can deploy them using the utility script create.sh. Alternatively, you can use the kubectl create command:

cd /tmp/out/trading-app
bash create.sh  # or simply “kubectl apply -f .”

configmap/trading-app-cm-init-db created
configmap/trading-app-cm-sqls created
job.batch/trading-app-init created
postgresql.acid.zalan.do/diva-trading-app-db created

The Kubernetes resources are created, including postgresql (a resource of the database cluster created by the Postgres operator), service, rs, pod, job, cm, secret, pv, and pvc. For example, you can see four database pods named trading-app-*, because the number of database instances is defined as four in postgres.yaml.

$ kubectl get all,postgresql,cm,secret,pv,pvc
NAME                   READY   STATUS    RESTARTS   AGE
pod/trading-app-db-0   1/1     Running   0          7m11s
pod/trading-app-db-1   1/1     Running   0          5m
pod/trading-app-db-2   1/1     Running   0          4m14s
pod/trading-app-db-3   1/1     Running   0          4m

NAME                                      TEAM          VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
postgresql.acid.zalan.do/trading-app-db   trading-app   13        4      1Gi                                     15m   Running

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/trading-app-db        ClusterIP   10.97.59.252    <none>        5432/TCP   15m
service/trading-app-db-repl   ClusterIP   10.108.49.133   <none>        5432/TCP   15m

NAME                         COMPLETIONS   DURATION   AGE
job.batch/trading-app-init   1/1           2m39s      15m

Note that the Postgres operator comes with a user interface (UI). You can find the created cluster on the UI. You need to export the endpoint URL to open the UI on a browser. If you use minikube, do as follows:

$ minikube service postgres-operator-ui

Then a browser window automatically opens that shows the UI.

Screenshot of the UI showing the Cluster YAML definition on the left with the Cluster UID underneath it. On the right of the screen a header reads "Checking status of cluster," and items in green under that heading show successful creation of manifests and other elements

(Yasuharu Katsuno and Shin Saito, CC BY-SA 4.0)

Now you can get access to the database instances using a test pod. DOA also generated a pod definition for testing.

$ kubectl apply -f /tmp/out/trading-app/test/pod-test.yaml # creates a test Pod
pod/trading-app-test created
$ kubectl exec trading-app-test -it -- bash # login to the pod

The database hostname and the credential to access the DB are injected into the pod, so you can access the database using them. Execute the psql metacommand to show all tables and views (in a database):

# printenv DB_HOST; printenv PGPASSWORD
(values of the variables are shown)

# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dt'
             List of relations
 Schema |      Name      | Type  |  Owner
--------+----------------+-------+----------
 public | account        | table | postgres
 public | quote          | table | postgres
 public | security_order | table | postgres
 public | trader         | table | postgres
(4 rows)

# psql -h ${DB_HOST} -U postgres -d jrvstrading -c '\dv'
                List of relations
 Schema |         Name          | Type |  Owner
--------+-----------------------+------+----------
 public | pg_stat_kcache        | view | postgres
 public | pg_stat_kcache_detail | view | postgres
 public | pg_stat_statements    | view | postgres
 public | position              | view | postgres
(4 rows)

After the test is done, log out from the pod and remove the test pod:

# exit
$ kubectl delete -f /tmp/out/trading-app/test/pod-test.yaml

Finally, delete the created cluster using a script:

$ bash delete.sh