I’ve recently started getting more involved with Arista deployments, and EVE-NG is my go-to for lab simulation of customer environments. EVE supports Arista vEOS out of the box, which is a full KVM/QEMU image of Arista’s EOS switch operating system. A co-worker has a requirement to lab up a very large Arista deployment, 25-30 switches’ worth. He gave me a screenshot of a 128-core, 256 GB RAM server that was almost 100% full on CPU and had consumed half the memory of the system.

I know Arista supports containerised EOS (cEOS) for lab purposes, so I figured I’d give it a whirl to see how the same sized lab would run on containers.

Containerlab

I first started with a VM running containerlab. Their built-in instructions for cEOS worked flawlessly. I was able to spin up a 2-spine, 8-leaf demo environment and use Ansible to deploy it.

As a side note, I will write up my experience with Ansible + Arista another time. TL;DR: the implementation in Ansible isn’t complete, and it has some quirky workarounds.

Once I was confident I could get cEOS to run, I hit up my friend for his EVE-NG UNL lab file so I could replicate it as closely as possible in containerlab. This is when I discovered his topology in EVE included some other vendors. Some of these vendors work really well in EVE with KVM/QEMU, and I don’t want to reinvent the wheel where I don’t need to.

EVE-NG and Docker

First, I had to figure out how Docker fits into EVE. I’m a paying EVE-NG customer, so I have the Pro version. Adding Docker was pretty easy: run a few apt commands to install their built-in packages. This is the official documentation, and Knox Hutchinson also has this video about setting it all up.

I should preface this by saying I’ve messed with Docker in the past, but at a very high level. I understand the workflow of containers and mostly how the networking works. That said, the EVE-NG documentation is very lacking. If you want to run their pre-built containers it’s spot on, but when it comes to custom containers (which cEOS is, in that environment) you’re pretty much on your own. Here’s how I built mine.

Everything that follows is done via the CLI, so please ensure you have SSH access and sudo set up. Another big tip: the ‘docker’ command in EVE doesn’t work. They have an alias called dc that you’ll see used below. This is because EVE has Docker listening only on a loopback, which the dc command references.
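If you’re curious what dc actually does on your box, you can inspect the alias from your SSH session. A quick sketch (the exact expansion will vary by EVE-NG version; the point is just that dc wraps docker pointed at that loopback listener):

```shell
# Show what the 'dc' alias expands to on this EVE box
type dc

# From here on, every Docker subcommand goes through dc, e.g.:
dc ps -a      # list all containers, running or stopped
dc images     # list imported images
```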

Import cEOS and build a container

I started by manually importing cEOS and building a running Docker container within EVE. First step: go download cEOS. I’m part of an organization that is an Arista partner, so YMMV in getting the cEOS images; I see them in my downloads.

Once you have that, you need to copy it to the EVE-NG box. I’m not going to detail how you do that; just get it somewhere you can reach from SSH. Once SSH’d in, run this command to import the container image into Docker:

dc import /path/to/your/download.cEOS-lab.tar.xz TAG:VERSION

Please put in a proper TAG and VERSION; in my case it was ceos:04.29.02F. This is how you’ll reference the image in the next steps.
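As a concrete sketch, here’s what that looked like for me; the download path is hypothetical, so adjust it for wherever you copied the tarball:

```shell
# Import the cEOS tarball as an image tagged ceos:04.29.02F
# (path and tag are examples -- use your own)
dc import /tmp/cEOS-lab.tar.xz ceos:04.29.02F

# Sanity check: the image should now be listed
dc images | grep ceos
```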

Now we build a container. As part of your cEOS download you will see a readme file with Arista’s official instructions for running cEOS on Docker. I can’t link it here, but here’s what they run to build a container:

dc create --name=ceos1 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=docker -i -t TAG:VERSION /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker

Again, fill in TAG and VERSION for your environment.

This builds a container, but does not start it. Arista’s documentation then shows how to attach networks and NICs to the container, and then start it. EVE-NG will actually handle that for you, so you can probably skip that part.

Also, one gotcha: in the above command they do NOT define a management interface. This means Management0 will not be created in the container, nor bound to eth0 in Docker. This is important if you plan on using Management for anything (like binding Management to an outside Cloud network in EVE). A modified command would be:

dc create --name=ceos1 --privileged -e INTFTYPE=eth -e ETBA=1 -e SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 -e CEOS=1 -e EOS_PLATFORM=ceoslab -e container=docker -e MGMT_INTF=eth0 -i -t TAG:VERSION /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker systemd.setenv=MGMT_INTF=eth0

Once you get things built, we can start the container:

dc start ceos1

This will take a few minutes to fully start, depending on your system speed, but once it’s up you can get to the CLI with:

dc exec -it ceos1 Cli

I did not mistype Cli; it is case-sensitive. You should be greeted with an Arista EOS CLI.
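If the Cli doesn’t come up right away, it’s usually just EOS agents still initializing. A couple of quick checks I find handy (using the ceos1 name from the create step above):

```shell
# Confirm the container is actually running
dc ps --filter name=ceos1

# Watch the boot output while EOS comes up
dc logs -f ceos1
```

Once you’re in the Cli, show version is a good first smoke test.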

Create an EVE-compatible Docker image

Now that you’ve built the running container and you can see the Cli, it’s time to create an image EVE can boot and see in the GUI. First we stop the container:

dc stop ceos1

Next we will “commit” the container. This effectively makes a copy of the running/built container to be used as a template for other containers.

dc commit ceos1 eve-ceos:VERSION

Again, fill in your own VERSION. Mine is latest, so my image is tagged eve-ceos:latest. I read that you should prefix all EVE-native container images with eve, so I’ve done that here.
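To double-check the commit before heading to the GUI, list the images; you should see your new eve-ceos tag next to the original import:

```shell
# Verify the committed template image exists
dc images | grep eve-ceos

# Optional: the scratch container is no longer needed once committed,
# since EVE boots from the committed image, not this container
dc rm ceos1
```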

Once that’s done, you are ready to spin up the container(s) in EVE.

Create a container in the EVE lab

Going back to this official EVE video, you can see how he brings up an EVE-compatible Docker image after installation. The only changes I make for cEOS are:

  • First, the image is the one you created in the previous step; in my case it’s eve-ceos:latest
  • Second, the Console is Telnet
  • Third, ensure you enable DHCP on Eth0 if you want the Management0 interface in EOS to get DHCP. Again, mine is bound to the Management(Cloud0) network in EVE, which is in turn bound to EVE’s main management interface. This gives me access to the container outside EVE if I want. I’ll explain why at the end.
  • Fourth, choose the number of additional NICs you want on your container. These are the data interfaces, so pick as many as you need in your topology.
  • Fifth, I use 2 CPUs and 1 GB of RAM for my nodes; YMMV.

That’s it. You can now drag the connections to appropriate Clouds, Networks, Nodes, etc. Start the container and again after a few minutes click to get console.

The only thing I’ve noticed: the console does not drop you straight into the Cli. Instead it lands in bash (because Arista DOES give you full root access). To get to the EOS CLI, just run Cli from the bash prompt.

Caveats

Here is a list of things I’ve noticed, good or bad, about doing this. I’ll try to update as I do more.

  • You cannot export configurations through EVE’s normal mechanism. This can probably be overcome with a script; I just haven’t looked into it.
  • To work around the config export issue, attach to a real management network and SCP the configurations off from EOS
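As a sketch of that SCP workaround, assuming your node picked up a reachable management IP (192.0.2.10 here is a placeholder) and has a local user configured:

```shell
# On the EOS node first, save your work:
#   switch# copy running-config startup-config
# Then, from a machine on the management network, pull the file off.
# EOS keeps the startup config on flash:
scp admin@192.0.2.10:/mnt/flash/startup-config ./ceos1-startup-config
```

You could loop this over all your nodes with a few lines of shell, or let Ansible gather the configs instead.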

Performance

At the beginning I mentioned the lab running 25+ vEOS nodes. I was able to replicate most of that lab, with 25 nodes running as containers, on an 8-core, 32 GB RAM Intel NUC. It ran SLOW, and took about 20 minutes to fully boot. I didn’t configure any spine/leaf or features, so after things settled it was sitting at about 10% CPU load and 16 GB of memory. I did restrict the memory on the containers to 1 GB each, which seems like plenty.