🌱 Building a CI/CD Pipeline and Learning How Systems Really Behave

Lessons from building a Jenkins and Docker pipeline under real constraints

When I set out to build this Jenkins and Docker CI/CD pipeline, I thought the value would come from getting it to work. What I didn’t expect was how much of the learning would come from the moments where it didn’t.

This project quietly reshaped how I think about version control, pipelines, networking, and deployment safety. Not in a dramatic way, but in the kind of way that sticks.

📁 Starting Calmly with Version Control

Early on, I had a moment of doubt about the state of my repository. The directory structure didn’t look quite right, and it would have been easy to assume something had gone wrong.

Instead of guessing, I checked the repository state properly. It was a reminder of a lesson I had learned before but clearly needed again: Git does not tell its story through visible folders alone. The real truth lives in its metadata.
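Checking "properly" here means asking Git itself rather than eyeballing folders. A minimal sketch, using a throwaway repo whose contents are purely illustrative:

```shell
# Ask Git for the repository's actual state instead of judging by what the
# directory looks like. (The throwaway repo below is illustrative only.)
work=$(mktemp -d)
cd "$work"
git init -q
echo "print('hello')" > app.py
git add app.py
git -c user.name=demo -c user.email=demo@example.com commit -qm "Initial commit"

git status --short                   # empty output means a clean working tree
git log --oneline                    # the history Git actually recorded
state=$(git rev-parse --is-inside-work-tree)
echo "inside work tree: $state"      # "true": the .git metadata is intact
```

Even when the working directory looks strange, `git status` and `git log` reflect the metadata, which is the only record that matters.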

From that point on, I trusted the tools more than my instincts when something looked unfamiliar.

🧩 Being Deliberate with How I Commit

I made a conscious decision to structure the project locally first on my Ubuntu machine. The repository started clean. Once the application, container setup, and pipeline structure were ready, I committed them together as a foundation.

That first commit wasn’t about speed. It was about intent. From there, every change was small and purposeful. Each commit told a clear story.

When something broke later, I could trace it back confidently instead of digging through noisy history.
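As a sketch of what that traceability looks like in practice (the commits and file contents here are invented, not the project's real history):

```shell
# Two small, single-purpose commits in a throwaway repo (contents invented):
work=$(mktemp -d); cd "$work"
git init -q
git config user.name demo; git config user.email demo@example.com
echo "FROM python:3.12-slim" > Dockerfile
git add Dockerfile && git commit -qm "Add base Dockerfile"
echo "EXPOSE 5000" >> Dockerfile
git add Dockerfile && git commit -qm "Expose the application port"

# When something breaks, a tidy one-line history narrows the search fast:
git log --oneline
# ...and each commit shows exactly one focused change:
git show --stat --oneline HEAD
```

With one change per commit, `git log --oneline` reads like a table of contents, and `git show` turns any suspect commit into a small, reviewable diff.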

⚙️ Jenkins Taught Me to Respect Boundaries

Jenkins was where most of the real learning happened. Before touching pipeline logic, I learned to always check that the Jenkins service itself was healthy.

One lesson that really settled in was the difference between internal and external access. Jenkins can safely use localhost for services running on the same machine. The moment another host or container is involved, that assumption breaks down.

Once I started thinking consistently in terms of network boundaries, many issues made immediate sense.
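One way to see that boundary concretely is to bind a stand-in service to the loopback interface. Here python3's built-in server plays the role of Jenkins, and the port is arbitrary:

```shell
# A service bound to 127.0.0.1 is reachable via "localhost" only from the
# same machine. python3's built-in server stands in for Jenkins here.
python3 -m http.server 8321 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# From this machine, localhost resolves to the loopback interface, so this works:
code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8321/)
echo "local status: $code"
# From another host, or from inside a container, "localhost" names *that*
# environment instead, so the very same URL would fail there.
kill $srv
```

The URL is identical in both cases; what changes is which machine the word "localhost" refers to. That is the boundary worth respecting.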

🧪 Why Tests Come Before Containers

I designed the pipeline to run tests before building Docker images. The pipeline creates a Python virtual environment, installs dependencies, and runs tests before any containerization happens.

This keeps feedback fast and failures cheap. There’s no reason to package an application that’s already failing.
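The gate can be sketched in plain shell. The test file and the commented-out build command are stand-ins, not the project's actual pipeline:

```shell
# Tests run first, in an isolated virtual environment; the image build is
# only reached when they pass. (File names and the image tag are invented.)
work=$(mktemp -d); cd "$work"
python3 -m venv .venv
. .venv/bin/activate
cat > test_app.py <<'EOF'
import unittest

class TestApp(unittest.TestCase):
    def test_sanity(self):
        self.assertEqual(1 + 1, 2)
EOF
python -m unittest -q test_app.py || exit 1   # cheap failure: no image is built
echo "tests green"
# docker build -t myapp:latest .              # only reached on success
```

The ordering is the whole point: a failing test costs seconds, while a failed deployment of a broken image costs much more.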

🐳 Deployment on a Shared Machine

Deployment was where the project started to feel real. Port 5000 was already in use on the host by another container that had been running for a long time.

Instead of stopping it, I adjusted the pipeline to expose the new container on port 5001. This preserved an existing service and allowed both applications to coexist safely.

The deployment logic always replaces the running container and validates success using an application-level health check.
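A hedged sketch of that deploy-and-verify shape. The docker commands appear as comments because the container and image names are invented; python3's built-in server stands in for the freshly deployed app so the health loop below actually runs:

```shell
# The real pipeline would replace the container and then probe it, roughly:
#   docker rm -f myapp 2>/dev/null || true
#   docker run -d --name myapp -p 5001:5000 myapp:latest
# Here a stand-in server plays the deployed app so the check is runnable:
python3 -m http.server 8654 --bind 127.0.0.1 >/dev/null 2>&1 &
app=$!

healthy=no
for attempt in 1 2 3 4 5; do            # retry: the app may still be starting up
  if curl -fs -o /dev/null http://localhost:8654/; then
    healthy=yes
    break
  fi
  sleep 1
done
kill $app
echo "health: $healthy"                 # the build only goes green on "yes"
```

Because the replace step is idempotent and the check hits the application itself rather than just the container runtime, the deployment stays safe to repeat.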

🚦 When the Pipeline Failed Before It Even Started

One of the most educational moments came when Jenkins refused to load the pipeline from source control. The same pipeline worked perfectly when pasted directly into Jenkins but failed when configured to load from GitHub.

The inline pipeline worked because Jenkins didn't need to fetch anything. The SCM-based pipeline failed because Jenkins couldn't locate the Jenkinsfile at the configured path. The pipeline itself was never the problem; the job configuration and the repository simply disagreed about where the file lived.
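A quick way to reproduce that failure mode locally is to compare the job's configured script path against what the repository actually contains. The layout and file contents here are invented:

```shell
# A repo whose Jenkinsfile lives in a subdirectory (layout invented):
work=$(mktemp -d); cd "$work"
git init -q
git config user.name demo; git config user.email demo@example.com
mkdir -p ci
printf 'pipeline { agent any }\n' > ci/Jenkinsfile
git add . && git commit -qm "Add pipeline definition"

# What Jenkins will actually find at checkout time:
git ls-tree -r --name-only HEAD
# If the job's Script Path says "Jenkinsfile" but the file is at
# "ci/Jenkinsfile", the SCM-based load fails before any stage runs.
```

Listing the tracked paths with `git ls-tree` answers the only question that matters here: does the file exist exactly where the job configuration claims it does?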

🌤️ What I’m Taking Forward

By the end of this project, the pipeline only turned green when the application was actually reachable and healthy.

More importantly, it reinforced principles I’ll carry forward. Fail early. Validate assumptions. Respect network boundaries. Make deployments safe to repeat.

This project wasn’t just about Jenkins or Docker. It was about learning to build systems that behave predictably under real-world constraints.