Linux in Cloud, DevOps & Production: How Linux Powers Modern Infrastructure (Hands‑On Labs)

How Linux Is Used in the Real World

Over the past decade, Linux has moved from “a useful skill” to the foundation of modern infrastructure. Whether you’re deploying microservices, running CI/CD pipelines, or operating production workloads at scale, Linux is everywhere.

It powers:

  • Cloud virtual machines across AWS, Azure, and GCP

  • Container runtimes and Kubernetes worker nodes

  • CI/CD build servers and automation pipelines

  • Production servers that run continuously at global scale

In other words, cloud, DevOps, and production environments are built on Linux, even when it’s abstracted away.

In the earlier parts of this series, you built the core Linux foundations required to operate these systems professionally. Those pillars form the baseline.

This final part answers the most important question:

How is Linux actually used in real production environments today, and how can you practice Linux the way cloud and DevOps professionals do?

This post is tutorial-driven: each topic comes with a hands‑on lab so you can apply concepts the way a real operator does, from connecting to cloud VMs with SSH, automating configuration, building and running containers, and wiring up CI/CD pipelines, to adopting the production mindset of repeatable, observable, recoverable systems.

Linux in the Cloud (Foundation of Modern IT)

Almost all cloud platforms rely on Linux under the hood. Even when you’re “not touching servers,” the services hosting your data, queues, load balancers, and APIs are usually Linux‑based.

Why Linux?

  • Lightweight & stable for high density and uptime
  • Scriptable via shells and APIs
  • Automation‑friendly for DevOps and IaC (Infrastructure as Code)

Cloud providers run Linux across:

  • Virtual machines (IaaS)
  • Containers & Kubernetes (orchestration)
  • Managed services (databases, message brokers, observability backends)
  • Networking planes (load balancers, ingress, NAT, routing appliances)

Lab: Launch & Connect to a Cloud Linux VM

Goal: Create a small Linux VM in your preferred cloud and connect via SSH.

  1. Provision a VM (e.g., Ubuntu LTS):
    • Choose the smallest cost‑effective size (e.g., 1 vCPU, 1–2 GB RAM).
    • Add your SSH public key during creation.
  2. Open SSH port (22) in your VM’s security group/firewall.
  3. Connect from your terminal:
    ssh -i ~/.ssh/id_rsa ubuntu@<vm-public-ip>
  4. Verify basic details:
    uname -a
    lsb_release -a 2>/dev/null || cat /etc/os-release
    whoami && hostname -f

    Success criteria: You can SSH into the VM and see the OS details and hostname.

Linux Virtual Machines (Cloud Servers)

In the cloud, Linux typically runs as virtual machines (VMs) you access remotely. Key differences from your local laptop:

• No physical access → everything is done via SSH and automation
• Infrastructure is disposable → you recreate servers instead of nursing them back to health
• Configuration is declarative → infrastructure is described in code

    Lab: Treat a Server as Disposable

    Goal: Practice “cattle, not pets.”

    1. Create a simple setup script you can rerun on any VM:
      cat > bootstrap.sh <<'EOF'
      #!/usr/bin/env bash
      set -euo pipefail
      sudo apt-get update -y
      sudo apt-get install -y git curl jq htop
      echo "Bootstrap complete on $(hostname) at $(date -Iseconds)"
      EOF
      chmod +x bootstrap.sh
    2. Run it, terminate the VM, recreate it, and run it again.
      This mindset (rebuild, don’t repair) is core to production reliability.

Linux & Automation (Why DevOps Exists)

Manual Linux administration does not scale. DevOps practices exist to automate infrastructure, configuration, and deployments.

• Shell scripting for quick tasks
• Configuration management (e.g., Ansible)
• Immutable infrastructure (golden images, Packer, cloud‑init)
• Infrastructure as Code (Terraform, Bicep, CloudFormation)

Linux is perfect for this because everything is text and scriptable.
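Because configuration lives in plain text files, even raw shell can be made idempotent, which is the property every automation tool above is built around. A minimal sketch (the config file name and setting are made up for illustration):

```shell
# Idempotent config change: append a line only if it is not already there.
# demo.conf and max_connections are hypothetical, for illustration only.
CONF=demo.conf
LINE='max_connections=200'

touch "$CONF"
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"   # first run: appends
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"   # second run: no-op

grep -cx "$LINE" "$CONF"   # prints 1 -- rerunning never duplicates the line
```

This grep-guard pattern is the essence of what tools like Ansible formalize: describe the desired state, apply it as many times as you like, and converge to the same result.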

    Lab: Automated Server Provisioning with cloud‑init (No Clicks)

    Goal: Provision a VM that configures itself at first boot.

    1. Create a cloud‑init file (user‑data):

      #cloud-config
      packages:
        - nginx
        - fail2ban
      runcmd:
        - systemctl enable --now nginx
        - ufw allow 'Nginx Full' || true
        - ufw --force enable || true
        - echo "Hello from cloud-init on $(hostname)" > /var/www/html/index.html
    2. Pass this user‑data when creating the VM (varies by cloud).
    3. Verify after boot:
      curl -I http://<vm-public-ip>/
      sudo systemctl status nginx --no-pager
      sudo ufw status

      Success criteria: Nginx is running automatically on first boot, the firewall is enabled, and the index page is set.

Linux & Containers (Modern Applications)

Containers aren’t magic; they are built on Linux features:

• Namespaces → isolate processes
• cgroups → limit resources
• Overlay filesystems → efficient images
• Capabilities → fine‑grained privileges

Docker, containerd, and Kubernetes make these primitives easier to use at scale.
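You can see these primitives without installing anything: every Linux process already belongs to a set of namespaces and a cgroup, both exposed under /proc. A quick look, no container runtime required:

```shell
# Each symlink under /proc/<pid>/ns names a namespace this shell belongs to
# (mnt, uts, pid, net, ...). Container runtimes create fresh ones of these.
ls -l /proc/$$/ns

# The cgroup(s) this shell belongs to -- where CPU/memory limits attach.
cat /proc/$$/cgroup
```

A container is “just” a process whose entries here point at different namespaces and a different cgroup than your shell’s.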

      Lab: Build & Run a Simple Container

      Goal: Package and run an app on Linux using Docker or Podman.

        1. Install Docker (or Podman) on your VM.
        2. Create a minimal web app:
          mkdir hello && cd hello
          cat > app.py <<'EOF'
          from http.server import SimpleHTTPRequestHandler, HTTPServer
          import socket
          class Handler(SimpleHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  self.end_headers()
                  self.wfile.write(f"Hello from {socket.gethostname()}".encode())
          if __name__ == "__main__":
              HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
          EOF
        3. Add a Dockerfile:

          FROM python:3.11-slim
          WORKDIR /app
          COPY app.py .
          EXPOSE 8080
          CMD ["python", "app.py"]
        4. Build & run:
          docker build -t hello:latest .
          docker run -d -p 8080:8080 --name hello hello:latest
          curl http://localhost:8080
        5. Harden basics: run as non‑root inside the container:

          RUN useradd -m appuser
          USER appuser

          Rebuild and rerun.

        Success criteria: You can access the container app and confirm it’s running as a non‑root user.

Linux in CI/CD Pipelines

CI/CD runners are almost always Linux servers, often ephemeral. They build, test, and deploy your code and even apply infrastructure changes.

• Builds → compile, package artifacts
• Tests → unit/integration
• Deployments → push to servers/containers
• Infra changes → Terraform/Ansible runs

        Lab: Create a Minimal CI Pipeline (Local “Runner” Feel)

        Goal: Simulate a pipeline on a Linux box as if it were a CI runner.

        1. Create a repo structure:
          mkdir -p demo/.pipeline && cd demo
          git init
        2. Add a build script:
          cat > .pipeline/build.sh <<'EOF'
          #!/usr/bin/env bash
          set -euo pipefail
          echo "[BUILD] Packaging app..."
          tar -czf artifact.tgz ../hello
          echo "[BUILD] Done."
          EOF
          chmod +x .pipeline/build.sh
        3. Add a test script:
          cat > .pipeline/test.sh <<'EOF'
          #!/usr/bin/env bash
          set -euo pipefail
          echo "[TEST] Running basic checks..."
          test -f ../hello/Dockerfile
          test -f ../hello/app.py
          echo "[TEST] OK."
          EOF
          chmod +x .pipeline/test.sh
        4. Add a deploy script (container deploy):
          cat > .pipeline/deploy.sh <<'EOF'
          #!/usr/bin/env bash
          set -euo pipefail
          echo "[DEPLOY] Deploying container..."
          docker rm -f hello || true
          docker build -t hello:ci ../hello
          docker run -d -p 8080:8080 --name hello hello:ci
          curl -fsS http://localhost:8080 >/dev/null && echo "[DEPLOY] OK."
          EOF
          chmod +x .pipeline/deploy.sh
        5. Execute the “pipeline”:
          .pipeline/test.sh && .pipeline/build.sh && .pipeline/deploy.sh

          Success criteria: A single command sequence builds, tests, and deploys your app into a container on a Linux “runner”.

          Next step: Port these scripts to a real CI system (GitHub Actions, GitLab CI, Azure Pipelines) where the runner is a Linux VM/container.
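As a sketch of that next step, here is roughly how the three scripts could map onto a GitHub Actions workflow. The file path, job name, and step names below are illustrative assumptions, not a verified drop-in config:

```yaml
# .github/workflows/ci.yml -- illustrative sketch only
name: ci
on: [push]
jobs:
  pipeline:
    runs-on: ubuntu-latest        # the hosted runner is itself a Linux VM
    steps:
      - uses: actions/checkout@v4
      - name: Test
        run: .pipeline/test.sh
      - name: Build
        run: .pipeline/build.sh
      - name: Deploy
        run: .pipeline/deploy.sh
```

The structure is the same as the local lab: the runner checks out the repo and executes the same versioned shell scripts, which is why keeping pipeline logic in scripts ports so cleanly between CI systems.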

Production Linux Mindset (Most Important)

Production Linux is about repeatability, automation, observability, and recovery. Professionals ask:

• Can this be rebuilt? (golden images, cloud‑init, IaC)
• Can this be automated? (no snowflakes)
• Can this fail safely? (graceful degradation, rollbacks, canaries)

          Lab: Make Your Service Observable

          Goal: Add basic health checks and logs to behave like a production service.

              1. Add a health endpoint to app.py, at the top of the do_GET method:

                if self.path == "/health":
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(b"ok")
                    return
              2. Add a systemd unit to manage your containerized service:
            sudo tee /etc/systemd/system/hello.service >/dev/null <<'EOF'
            [Unit]
            Description=Hello container service
            After=network-online.target docker.service
            Wants=network-online.target
            Requires=docker.service

            [Service]
            Restart=always
            ExecStartPre=-/usr/bin/docker rm -f hello
            ExecStart=/usr/bin/docker run --rm -p 8080:8080 --name hello hello:latest
            ExecStop=/usr/bin/docker stop hello

            [Install]
            WantedBy=multi-user.target
            EOF
            sudo systemctl daemon-reload
            sudo systemctl enable --now hello
            3. Verify logging & health:
              curl -fsS http://localhost:8080/health && echo "Health OK"
              journalctl -u hello -f

              Success criteria: The service starts automatically on boot, exposes a health endpoint, and logs are accessible via journalctl.

Bonus Labs: “Real‑World” Touches

Lab: Immutable Pattern with Rebuild Script

Create a quick rebuild script to nuke and redeploy:

              cat > rebuild.sh <<'EOF'
              #!/usr/bin/env bash
              set -euo pipefail
              docker rm -f hello || true
              docker build -t hello:latest ./hello
              sudo systemctl restart hello || true
              curl -fsS http://localhost:8080/health && echo "Healthy after rebuild"
              EOF
              chmod +x rebuild.sh
              ./rebuild.sh

              Lab: Basic Rollback

              Keep the previous image and roll back if needed:

              docker tag hello:latest hello:previous || true
              # Build new
              docker build -t hello:latest ./hello || { echo "Build failed"; exit 1; }
              # Try deploy
              if ! ./rebuild.sh; then
                echo "Rolling back to previous image..."
                docker tag hello:previous hello:latest
                docker rm -f hello || true
                sudo systemctl restart hello
                curl -fsS http://localhost:8080/health && echo "Healthy after rollback"
              fi

              Lab: Minimal Monitoring Probe

              Schedule a simple cron probe (or systemd timer) to alert on failure:

              ( crontab -l 2>/dev/null; echo '*/5 * * * * curl -fsS http://localhost:8080/health || echo "[ALERT] hello health check failed" | systemd-cat -t hello-health' ) | crontab -

Key Takeaway

Linux is not just an operating system: it’s the backbone of cloud and DevOps. When you understand Linux deeply, every modern platform becomes easier:

• Cloud VMs become scripts and templates
• Containers become Linux features with a friendly UI
• CI/CD becomes predictable, repeatable workflows
• Production becomes rebuildable, observable, recoverable

Keep practicing the labs above until they feel natural. That’s how Linux stops being a “skill” and becomes infrastructure.

Environment & Prerequisites (for All Labs)

• A Linux VM (local or cloud): Ubuntu 22.04+ / Debian 12+ / Fedora / Rocky/Alma 9+
• A non‑root user with sudo
• SSH keypair configured
• Docker or Podman installed (for container labs)
• Basic firewall rules allowing SSH and the chosen app port (8080 in examples)
• Snapshot/backup before changes in production‑like environments
