~/blog/5

$ cat post-5.log

Published

#Azure DevOps #CI/CD #DevOps #YAML

The Pipeline That Took Forever

When I inherited the Azure DevOps pipeline for our modernization project, builds took 45 minutes. Developers would push code, go grab coffee, attend a meeting, and come back to see if their build passed. The feedback loop was broken. Worse, the pipeline was configured through the UI—no version control, no code review, just clicking through forms and hoping you didn't break something.

The first thing we did was migrate to YAML pipelines. Pipeline-as-code meant our build definitions lived in the repository, went through pull requests, and had the same review process as application code. If the pipeline broke, we could see exactly what changed and who changed it.

Structure: Templates and Reusability

Our initial YAML file was a 500-line monstrosity. Every project duplicated the same steps with slight variations. When we needed to update the build process, we had to modify multiple pipelines and hope we didn't miss one.

We introduced template files. Common steps like restoring NuGet packages, running tests, or publishing artifacts became reusable templates stored in a shared repository. Individual projects imported these templates and provided project-specific parameters. One change to the template propagated to all pipelines automatically.

# azure-pipelines.yml
trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

resources:
  repositories:
    - repository: shared          # alias for the shared templates repo (name below is illustrative)
      type: git
      name: pipeline-templates

extends:
  template: templates/dotnet-build.yml@shared
  parameters:
    projectPath: 'src/MyProject.csproj'
    runTests: true
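
For reference, the template side might look something like this. It's a sketch rather than our exact template: the parameters match the pipeline above, but the step details and file layout are illustrative.

# templates/dotnet-build.yml in the shared repository (illustrative sketch)
parameters:
  - name: projectPath
    type: string
  - name: runTests
    type: boolean
    default: true

steps:
  - script: dotnet restore ${{ parameters.projectPath }}
    displayName: Restore packages

  - script: dotnet build ${{ parameters.projectPath }} --configuration Release --no-restore
    displayName: Build

  # inserted at template expansion time only when runTests is true
  - ${{ if eq(parameters.runTests, true) }}:
      - script: dotnet test ${{ parameters.projectPath }} --configuration Release --no-build
        displayName: Run tests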

Speed: Parallelization and Caching

The 45-minute build time was killing productivity. We analyzed where time was spent: package restoration took 8 minutes, builds took 12 minutes, tests took 20 minutes, and deployment steps took 5 minutes.

First optimization: package caching. Azure DevOps can cache NuGet packages between builds. If dependencies haven't changed, restore happens in seconds instead of minutes. We went from 8 minutes to under 30 seconds for package restoration.
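
The step itself is small. A minimal sketch, assuming the projects commit packages.lock.json files so the cache key changes exactly when dependencies change:

variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages   # redirect the global packages folder so it can be cached

steps:
  # restore the package folder from cache when the key matches a previous run
  - task: Cache@2
    displayName: Cache NuGet packages
    inputs:
      key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
      restoreKeys: |
        nuget | "$(Agent.OS)"
      path: $(NUGET_PACKAGES)

  - script: dotnet restore --locked-mode
    displayName: Restore packages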

Second optimization: parallel jobs. Tests for different projects didn't need to run sequentially. We split them into parallel jobs using a matrix strategy. Multiple agents ran tests simultaneously. Testing time dropped from 20 minutes to 7 minutes.

jobs:
  - job: Test
    strategy:
      matrix:
        unit_tests:
          testProject: 'tests/UnitTests'
        integration_tests:
          testProject: 'tests/IntegrationTests'
        e2e_tests:
          testProject: 'tests/E2ETests'
    steps:
      # each matrix entry becomes its own job, running on its own agent
      - script: dotnet test $(testProject)

Result: Build time went from 45 minutes to under 15 minutes. Developers got faster feedback, caught issues earlier, and stayed in flow.

Multi-Environment Deployments

We had three environments: Development, Staging, and Production. Initially, we had three separate pipelines. When we wanted to change the deployment process, we had to update all three. Mistakes were common.

We consolidated to a single pipeline with environment-specific stages. Each environment had its own stage with approval gates. Development deployed automatically on successful build. Staging required manual approval. Production required approval from two team leads. The approvals themselves live on the environments in Azure DevOps (under Approvals and checks) rather than in the YAML, which is why they don't appear in the stage definitions below.

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        # build steps

  - stage: DeployDev
    dependsOn: Build
    jobs:
      - deployment: DeployToDev
        environment: development
        strategy:
          runOnce:
            deploy:
              steps:
                # deployment steps for development

  - stage: DeployStaging
    dependsOn: DeployDev
    jobs:
      - deployment: DeployToStaging
        environment: staging
        strategy:
          runOnce:
            deploy:
              steps:
                # deployment steps for staging

  - stage: DeployProd
    dependsOn: DeployStaging
    jobs:
      - deployment: DeployToProd
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                # deployment steps for production

Quality Gates and Automated Checks

Catching issues in production is expensive. We integrated quality gates directly into the pipeline. Code coverage thresholds, static code analysis, security scanning—all happened automatically before deployment.

  • Unit test coverage: Builds failed if coverage dropped below 80%
  • SonarQube analysis: Code quality and security vulnerabilities checked on every PR
  • Dependency scanning: Automated checks for vulnerable packages
  • Integration tests: Verified interactions between services before deploying

These gates shifted quality left. Instead of discovering issues in staging or production, we caught them during the build. Fixing a bug before it deploys is cheaper than fixing it in production.
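
To make the coverage gate from the list above concrete: one way to fail a build on a coverage threshold is coverlet.msbuild. A minimal sketch, assuming the test projects reference that package:

steps:
  # coverlet fails the test run if line coverage drops below 80%
  - script: >
      dotnet test tests/UnitTests
      /p:CollectCoverage=true
      /p:Threshold=80
      /p:ThresholdType=line
    displayName: Unit tests with coverage gate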

Secrets Management

Connection strings, API keys, and certificates can't be hardcoded in YAML files that live in git. Azure DevOps provides variable groups and Azure Key Vault integration for managing secrets securely.

We created environment-specific variable groups in Azure DevOps, linked to Key Vault. Pipelines referenced these variables, and Azure DevOps injected them at runtime. Secrets never appeared in logs or code repositories.
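
In the YAML, that reference is just a variable group plus an explicit mapping wherever a script needs a secret (group and variable names here are illustrative). Secret variables are not exposed to scripts automatically, so they are mapped in only where needed:

variables:
  - group: myapp-staging-secrets   # variable group linked to Azure Key Vault

steps:
  - script: ./deploy.sh
    displayName: Deploy
    env:
      DB_CONNECTION_STRING: $(DbConnectionString)   # secret injected at runtime, masked in logs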

Lessons Learned

  • Pipeline as Code: YAML pipelines in version control enable code review and auditability
  • Templates are your friend: Reusable templates reduce duplication and make updates easier
  • Optimize the critical path: Identify bottlenecks, parallelize what you can, cache what's expensive
  • Fast feedback matters: Developers lose context when waiting. Aim for sub-10-minute builds
  • Automate quality checks: Don't rely on humans to remember to run linters and tests
  • Use environments and approvals: Protect production with appropriate gates
  • Monitor pipeline health: Failed builds block everyone. Make pipeline maintenance a priority

The Impact

Today, our pipeline is a well-oiled machine. Developers push code and get feedback in under 15 minutes. Deployments to development happen automatically dozens of times per day. Staging and production deployments are routine, low-stress events with automated rollback capabilities.

The CI/CD pipeline isn't just infrastructure—it's a force multiplier. It gives developers confidence to refactor, enables rapid iteration, and ensures quality before code reaches users. Investing in pipeline optimization pays dividends every single day.

← Back to blog overview