Setting up a development environment with Cluster API using Kind

By Alexander Hughes on 14/04/2020

Airship is a collection of loosely coupled, but interoperable open source tools that declaratively automates cloud provisioning. Airship is designed to make your cloud deployments simple, repeatable, and resilient.

The primary motivation for Airship 2.0 is the continued evolution of the control plane; by aligning with maturing CNCF projects, we can improve Airship by making 2.0:

  • More capable
  • More secure
  • More resilient
  • Easier to operate

One such project is Cluster API, a Kubernetes project that brings declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster.
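To give a concrete flavor of those declarative, Kubernetes-style APIs, a Cluster object in the v1alpha3 API looks roughly like this. This is only a sketch: the name and pod CIDR are illustrative, and the Docker provider steps below generate complete manifests for you via clusterctl.

```yaml
# Illustrative only -- clusterctl generates the real manifests in the steps below.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: example-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:          # delegates infrastructure details to a provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerCluster
    name: example-cluster
```

You declare the cluster you want as objects like this, and the Cluster API controllers reconcile the real infrastructure to match.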

In a previous blog post, Alan Meadows and Rodolfo Pacheco discussed the evolution of Airship 1.0 to Airship 2.0 and the relationship between Drydock and Cluster API. It's an interesting read, looking at how Cluster API will be used by Airship 2.0.

Today I will provide the documentation and my tested, step-by-step directions for creating a Cluster API development environment using Kind. This development environment will allow you to deploy virtual nodes as Docker containers in Kind, test out changes to the Cluster API codebase, and gain a better understanding of how Airship works at the component level to deploy Kubernetes clusters. These steps have all been tested in a virtual machine with the following configuration:

  • Hypervisor: VirtualBox 6.1
  • Operating System: Ubuntu 18.04 Desktop
  • Memory: 8 GB
  • Processor: 6 CPUs
  • Networking: NAT
  • Proxy: N/A

To begin, create a new virtual machine with the above configuration.

Next, we will be working with the Cluster API Quickstart documentation using the Docker Provider and leveraging Kind to create clusters. What follows is a consolidated set of instructions from these resources.

  1. Update package manager and install common packages

    sudo apt-get update && sudo apt-get dist-upgrade -y
    sudo apt-get install -y gcc python git make
  2. Install golang (Documentation)

    curl -LO https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf go1.14.1.linux-amd64.tar.gz
    rm go1.14.1.linux-amd64.tar.gz
  3. Install docker (Documentation)

    sudo apt-get remove docker docker-engine containerd runc
    sudo apt-get update
    sudo apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg-agent \
        software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo apt-key fingerprint 0EBFCD88
    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"
    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io
    sudo groupadd docker
    sudo usermod -aG docker $USER
  4. Update /etc/profile with necessary environment variables

    sudo bash -c 'cat <<EOF >> /etc/profile
    export PATH=\$PATH:/usr/local/go/bin
    export DOCKER_POD_CIDRS=   # fill in your desired pod CIDR, e.g. 192.168.0.0/16
    export DOCKER_SERVICE_DOMAIN=cluster.local
    EOF'
  5. Logout and log back in, or reboot your machine, for the user group and profile changes to take effect

    sudo reboot now
  6. Install kustomize (Documentation)

    git clone https://github.com/kubernetes-sigs/kustomize.git
    cd kustomize/kustomize
    go install .
    sudo mv ~/go/bin/kustomize /usr/local/bin/
    cd ~
  7. Install kind (Documentation)

    curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64   # v0.7.0 was current at the time of writing
    chmod +x ./kind
    sudo mv ./kind /usr/local/bin/kind
  8. Install kubectl (Documentation)

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin/kubectl
  9. Install clusterctl (Documentation)

    curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.0/clusterctl-linux-amd64 -o clusterctl
    chmod +x ./clusterctl
    sudo mv ./clusterctl /usr/local/bin/clusterctl
  10. Set up cluster api using docker provider (Documentation)

    git clone https://github.com/kubernetes-sigs/cluster-api.git
    cd cluster-api
    cat > clusterctl-settings.json <<EOF
    {
      "providers": ["cluster-api","bootstrap-kubeadm","control-plane-kubeadm", "infrastructure-docker"],
      "provider_repos": []
    }
    EOF
    make -C test/infrastructure/docker docker-build
    make -C test/infrastructure/docker generate-manifests
    cat > ~/.cluster-api/clusterctl.yaml <<EOF
    providers:
      - name: docker
        url: $HOME/.cluster-api/overrides/infrastructure-docker/latest/infrastructure-components.yaml
        type: InfrastructureProvider
    EOF
    cat > kind-cluster-with-extramounts.yaml <<EOF
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        extraMounts:
          - hostPath: /var/run/docker.sock
            containerPath: /var/run/docker.sock
    EOF
    cp cmd/clusterctl/test/testdata/docker/v0.3.0/cluster-template.yaml ~/.cluster-api/overrides/infrastructure-docker/v0.3.0/
    kind create cluster --config ./kind-cluster-with-extramounts.yaml --name clusterapi
    kind load docker-image --name clusterapi   # supply the CAPD manager image tag produced by the docker-build step above
    clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure docker:v0.3.0
    clusterctl config cluster work-cluster --kubernetes-version 1.17.0 > work-cluster.yaml
    kubectl apply -f work-cluster.yaml
    kubectl --namespace=default get secret/work-cluster-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./work-cluster.kubeconfig
    kubectl --kubeconfig=./work-cluster.kubeconfig apply -f https://docs.projectcalico.org/manifests/calico.yaml   # deploy a CNI; Calico shown as an example
  11. Interact with your cluster

    kubectl --kubeconfig=./work-cluster.kubeconfig get nodes
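If anything misbehaves, a small script like this (my own sketch, not part of the official docs) is a quick way to confirm that every tool installed in the steps above actually made it onto your PATH:

```shell
# Sanity check: report whether each tool from the steps above is installed.
for tool in git make docker go kustomize kind kubectl clusterctl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Any tool reported as missing points you back to the step that installs it.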

That's all there is to it! If you made it this far, you should have a working CAPD environment to develop in.

I'd like to thank Michael McCune and the rest of the Cluster API community for helping me troubleshoot my setup so that I could share these steps with you. The Cluster API community is available on Slack in the #cluster-api channel.