This post is part of a series about the Container Network Interface (CNI); you can find the other posts below.
[Container Network Interface] Bridge Network In Docker
[Container Network Interface] Write a CNI Plugin By Golang
In this post, I will introduce the concept of the Container Network Interface (CNI): why we need it, how it works, and what it does.
If you are not familiar with Linux network namespaces or with how Docker handles networking for its containers, you should read [CNI] Bridge Network In Docker first; those concepts will be helpful for this tutorial.
In the previous post, we learned the procedure for setting up a basic bridge network in Docker:
- Create a Linux Bridge
- Create a Network Namespace
- Create a Veth Pair
- Connect the bridge and network namespace with veth pair
- Set up an IP address inside the network namespace
- Set up the iptables rules for exposing services (optional)
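The steps above can be sketched with the ip command. The names br-demo, ns-demo, veth-host/veth-ns and the 10.10.0.0/24 subnet are arbitrary choices for this example, and the commands need root privileges:

```shell
# 1. Create a Linux bridge and bring it up
ip link add br-demo type bridge
ip link set br-demo up

# 2. Create a network namespace
ip netns add ns-demo

# 3. Create a veth pair
ip link add veth-host type veth peer name veth-ns

# 4. Connect the bridge and the namespace with the veth pair
ip link set veth-host master br-demo
ip link set veth-host up
ip link set veth-ns netns ns-demo

# 5. Set up an IP address inside the network namespace
ip netns exec ns-demo ip link set veth-ns up
ip netns exec ns-demo ip addr add 10.10.0.2/24 dev veth-ns
ip addr add 10.10.0.1/24 dev br-demo

# 6. (optional) iptables rules to expose services, e.g.:
#    iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -j MASQUERADE
```

After these commands, the namespace can reach the host through the bridge at 10.10.0.1.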
However, that's a bridge network, and it only provides Layer 2 forwarding. For some use cases that's not enough: there are more and more requirements, such as Layer 3 routing, overlay networks, high performance, Open vSwitch, and so on.
From Docker's point of view, it's impossible to implement and maintain all of the above requirements by itself. The better solution is to open up an interface so that everyone can write their own network service, and that's how Docker networking works. As a result, there are many plugins for Docker networking now, and everyone can choose what kind of network they want.
Unfortunately, Docker isn't the only container technology; there are other competitors, such as rkt. Besides, there are more and more container cluster orchestrators, such as Kubernetes.
Take the bridge network as an example: do we need to implement the bridge network for every container orchestrator/solution? Do we need to write a lot of duplicate code because there is no unified interface between orchestrators?
That's why we need the Container Network Interface (CNI). The Container Network Interface (CNI) is a Cloud Native Computing Foundation project; you can see more information here. With CNI, we have a unified interface for network services, and we only need to implement our network plugin once; it should work everywhere that supports the CNI.
According to the official website, the following container runtime and orchestration solutions all support the CNI:
- rkt - container engine
- Kubernetes - a system to simplify container operations
- OpenShift - Kubernetes with additional enterprise features
- Cloud Foundry - a platform for cloud applications
- Apache Mesos - a distributed systems kernel
- Amazon ECS - a highly scalable, high performance container management service
The Container Network Interface is a specification which defines what kind of interface you should implement.
In order to make it easy for developers to develop their own CNI plugins, the Container Network Interface project also provides libraries for plugin development, all of them written in golang.
In the CNI specification, there are three methods we need to implement for our own plugin.
ADD will be invoked when the container has been created. The plugin should prepare the resources and make sure the container has network connectivity.
DELETE will be invoked when the container has been destroyed. The plugin should remove all allocated resources.
VERSION shows the version of this CNI plugin.
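As a sketch of how these methods surface to a plugin: the runtime invokes the plugin binary with the operation name in the CNI_COMMAND environment variable (note that the spec actually spells the delete operation DEL). The minimal Go skeleton below, with hypothetical output strings and without the official libcni helper libraries, dispatches on that variable:

```go
package main

import (
	"fmt"
	"os"
)

// dispatch handles one CNI invocation. The runtime passes the operation
// name (ADD, DEL, or VERSION) via CNI_COMMAND, the namespace path via
// CNI_NETNS, the interface name via CNI_IFNAME, and the network config
// as JSON on stdin. The output strings here are placeholders.
func dispatch(command, netns, ifname string, stdin []byte) (string, error) {
	switch command {
	case "ADD":
		// Set up connectivity for the container here (bridge, veth,
		// IP address) and report the result.
		return fmt.Sprintf("ADD netns=%s ifname=%s conflen=%d",
			netns, ifname, len(stdin)), nil
	case "DEL":
		// Remove every resource allocated by the matching ADD.
		return "DEL done", nil
	case "VERSION":
		// Report which CNI spec versions this plugin supports.
		return `{"cniVersion":"0.4.0","supportedVersions":["0.4.0"]}`, nil
	default:
		return "", fmt.Errorf("unknown CNI_COMMAND %q", command)
	}
}

func main() {
	out, err := dispatch(os.Getenv("CNI_COMMAND"),
		os.Getenv("CNI_NETNS"), os.Getenv("CNI_IFNAME"), nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(out)
}
```

A real plugin would return a structured JSON result on stdout for ADD, as the spec requires; this skeleton only shows the dispatching shape.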
For each method, the CNI interface will pass the following information into your plugin. I will explain those fields in detail in the next tutorial; here, we just need to know that the CNI plugin should use the Network Namespace path, the Interface Name, and the StdinData to give the container network connectivity.
Take the previous bridge network as an example: the network namespace will be created by the orchestrator, which passes the path of that network namespace to the CNI plugin via the variable netns. After we create the veth pair and connect it to the network namespace, we should rename the interface inside the namespace to the given Interface Name.
For IPAM (IP Address Management), we can get the information from the StdinData and calculate which IP address we should use in the CNI plugin.
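As an illustration of this idea, the sketch below parses a simplified StdinData payload and picks the n-th host address in the configured subnet. The netConf struct only models the ipam.subnet field, and the allocator is a toy one; a real plugin must track which addresses are already in use (the bundled host-local plugin persists that state on disk):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// netConf models just the IPAM part of the JSON a CNI plugin receives
// on stdin; this is a simplified illustration, not the full spec.
type netConf struct {
	IPAM struct {
		Subnet string `json:"subnet"`
	} `json:"ipam"`
}

// pickAddr parses StdinData and returns the n-th host address in the
// configured subnet, in CIDR notation.
func pickAddr(stdinData []byte, n int) (string, error) {
	var conf netConf
	if err := json.Unmarshal(stdinData, &conf); err != nil {
		return "", err
	}
	ip, ipnet, err := net.ParseCIDR(conf.IPAM.Subnet)
	if err != nil {
		return "", err
	}
	ip = ip.Mask(ipnet.Mask) // start from the network address
	for i := 0; i < n; i++ { // increment the address n times
		for j := len(ip) - 1; j >= 0; j-- {
			ip[j]++
			if ip[j] != 0 {
				break
			}
		}
	}
	if !ipnet.Contains(ip) {
		return "", fmt.Errorf("subnet %s exhausted", conf.IPAM.Subnet)
	}
	ones, _ := ipnet.Mask.Size()
	return fmt.Sprintf("%s/%d", ip, ones), nil
}

func main() {
	addr, _ := pickAddr([]byte(`{"ipam":{"subnet":"10.244.1.0/24"}}`), 2)
	fmt.Println(addr) // prints "10.244.1.2/24"
}
```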
Now we will see how Kubernetes uses CNI to set up networking for Pods.
In order to use CNI, we need to configure the kubelet to use it. There are three arguments we need to take care of:
- cni-bin-dir: the directory of CNI binaries.
- cni-conf-dir: the directory of CNI config files; common CNI plugins (flannel, calico, etc.) will install their configs here.
- network-plugin: the type of network-plugin for Pods.
In my Kubernetes cluster (installed by kubeadm):
ps axuw | grep cni
root 1864 4.9 2.1 569172 110108 ? Ssl 15:18 3:06 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
You can see the arguments --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni passed to the kubelet.
Now, let's see the files under cni-bin-dir. It contains all the CNI binaries, and those binaries can be written in any language, as long as they follow the CNI interface.
bridge dhcp flannel host-local ipvlan loopback macvlan portmap ptp rainier sample tuning vlan
Under cni-conf-dir, we should put the CNI config files, and kubernetes will use them for your Pods. In my kubernetes cluster, I installed the flannel CNI, and flannel installed its config here.
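For reference, a classic flannel config file in cni-conf-dir looks roughly like the following (the exact file name and contents depend on the flannel version); the delegate section tells flannel which settings to pass to the plugin it wraps:

```json
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
```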
When the kubelet receives a request to create a Pod on the node, it will first search the cni-conf-dir in alphabetical order and inspect the config files. Take 10-flannel.conf as an example: once the kubelet knows the plugin type is flannel, it will look for the flannel binary in the cni-bin-dir and invoke it (in this case, /opt/cni/bin/flannel).
When the kubelet creates the Pod, it will create a pause container first, and then follow the CNI steps to set up the network for that pause container (assuming we use network-plugin=cni).
Now the pause container is running and has network connectivity.
Then the kubelet will create the containers described in the YAML file and attach them to the pause container (with the docker command, we can use --net=container:$containerID to do the same thing). With this procedure, we can make sure all containers share the same network stack, and any container crash won't destroy the network stack, since the network stack is held by the pause container. The combination of the pause container and the user containers is called a Pod.
You can try running docker ps on your kubernetes node to see how many pause containers there are.
The Container Network Interface (CNI) makes it easier for network-service developers to develop their own network plugins. They don't need to write duplicate code for different systems/orchestrators: just write once and run everywhere.
The CNI consists of a specification and many useful libraries for developers. The CNI only cares about the ADD and DELETE events: the CNI plugin should make sure the container has network connectivity when the ADD event is triggered, and remove all allocated resources when the DELETE event is triggered.
In the next tutorial, I will show how to write a simple bridge CNI plugin in golang.