Background Image

Automated ThousandEyes Raspberry Pi image customization

If you haven’t caught this yet, I am a huge fan of ThousandEyes! I was working on a project where we were planning to deploy ThousandEyes agents to hundreds of sites, and to keep costs down we were going to use the Raspberry Pi 4 Model B. Having to image hundreds of SD cards was going to be a nightmare, so I wanted to automate the ThousandEyes Raspberry Pi agent image customization process to simplify the deployment.

Full disclosure: I started this project just as the supply chain issues were hitting the Pi market, and I left that job a few months later. I was never able to run this at scale, but I was able to use this for the few units I was able to get my hands on.

I also have a series on getting ThousandEyes deployed in a lab environment if you want to read more on getting ThousandEyes running:

Hardware/Software required

Configure Pi for imaging

I used a Raspberry Pi as the system for building the images. I found it makes the process much easier, but this isn’t a hard requirement. The main thing is you will need to be able to mount a Linux file system, and it seemed to work much better using a native Ubuntu instance instead of trying to map an SD card to a VM or WSL instance.

  1. Using your preferred tool, flash the Ubuntu image onto one of the microSD cards – these are both good options:
  2. Boot a Pi using the newly flashed Ubuntu image and complete the setup

Automated and Manual Processes

I have an automated process available at my GitHub page:
The automation uses a JSON file with the device-specific information. You could automate the process of filling out the hostname and IP info if you have that info in an IPAM or similar platform.

How to use the vars.json file:

  • Token Value – Group token from ThousandEyes portal
  • Image name – Shouldn’t need to do anything with this, unless ThousandEyes changes the image file name
  • SSH key – Leave blank if you aren’t adding a key, otherwise paste in the contents of the public key file
  • Devices – Duplicate the device objects as needed
    • Hostname – The name of the specific agent
    • IP – Assign a static IP, or leave it blank (“”) for DHCP
    • Subnet mask and gateway – Self-explanatory
    • DNS – Also self-explanatory. If you’re only using one DNS server, leave DNS2 blank (“”)
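Putting those fields together, a filled-out vars.json might look like the sketch below. The key names here are illustrative, based on the field descriptions above – check the sample file in the repo for the exact schema before using it.

```shell
# Hypothetical vars.json shape (key names are assumptions; verify against
# the sample file in the repo). The second device is left blank for DHCP.
cat > /tmp/vars.json <<'EOF'
{
  "token": "<account-group-token>",
  "image": "thousandeyes-appliance.rpi4.img",
  "ssh_key": "",
  "devices": [
    {
      "hostname": "te-site01",
      "ip": "10.10.20.50",
      "mask": "255.255.255.0",
      "gateway": "10.10.20.1",
      "dns1": "10.10.20.5",
      "dns2": ""
    },
    {
      "hostname": "te-site02",
      "ip": "",
      "mask": "",
      "gateway": "",
      "dns1": "",
      "dns2": ""
    }
  ]
}
EOF
```

The example IPs and hostnames are made up; duplicate the device object once per agent you plan to image.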

Below is the manual process. It follows the same overall steps as the automated script, just with less automated goodness. If you want to understand how the automation works, reading through the manual process is a good place to start.

Mount the ThousandEyes image

Before we can customize the image we need to mount it. Run the following steps from a terminal on the imaging Pi created previously.

  1. Download the ThousandEyes image
  2. Decompress the image
    unxz -k thousandeyes-appliance.rpi4.img.xz
  3. Create a mount directory
    mkdir /tmp/temount
  4. Mount the image
    sudo mount -o loop,offset=269484032 thousandeyes-appliance.rpi4.img /tmp/temount
    • Note: The image contains two partitions, and the root filesystem lives on the second one, so it’s mounted using a byte offset
      • If this doesn’t work, the partition layout may have changed – use these steps to find the correct offset
        fdisk -l thousandeyes-appliance.rpi4.img
      • Here’s an example output:
        Disk thousandeyes-appliance.rpi4.img: 4.07 GiB, 4367262720 bytes, 8529810 sectors
        Units: sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disklabel type: dos
        Disk identifier: 0x354db354

        Device Boot Start End Sectors Size Id Type
        thousandeyes-appliance.rpi4.img1 * 2048 526335 524288 256M c W95 FAT32 (LBA)
        thousandeyes-appliance.rpi4.img2 526336 8529809 8003474 3.8G 83 Linux
      • Multiply the sector size (512) by the start sector (526336) and use that value as the offset.
  5. Verify the image mounted
    ls /tmp/temount/
    • There should be several folders listed. If not, check the offset mentioned in step 4.
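The offset arithmetic from step 4 can also be scripted so you don’t have to do it by hand. A small sketch, using the start sector from the example fdisk output above (substitute the start sector from your own output):

```shell
# Mount offset = sector size * start sector of the second (Linux) partition.
# 526336 is taken from the example fdisk output above.
SECTOR_SIZE=512
START_SECTOR=526336
OFFSET=$((SECTOR_SIZE * START_SECTOR))
echo "$OFFSET"   # 269484032
# then: sudo mount -o loop,offset=$OFFSET thousandeyes-appliance.rpi4.img /tmp/temount
```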

Customize the image

There are three files that need to be modified to get the image ready.

  1. Apply the ThousandEyes Account Token
    sudo sed -i 's/<account-token>/$TOKEN/g' /tmp/temount/etc/te-agent.cfg
    • Note: replace $TOKEN with your token value.
    • Follow these steps to get your account token:
      • Open a web browser and navigate to
      • Log into your account
      • Click the Hamburger icon in the top left
      • Expand Cloud & Enterprise Agents
      • Click Agent Settings
      • Click the Add New Enterprise Agent button
      • Click the eye button to show the token, or the copy button to store it on the clipboard
  2. Set the hostname
    sudo sed -i 's/tepi/$HOSTNAME/g' /tmp/temount/etc/hostname
    • Replace $HOSTNAME with your desired hostname
  3. Configure a static IP (Optional – by default the appliance will use DHCP)
    sudo sed -i 's/dhcp/static/g' /tmp/temount/etc/network/interfaces
    echo "address $IP" | sudo tee -a /tmp/temount/etc/network/interfaces
    echo "netmask $MASK" | sudo tee -a /tmp/temount/etc/network/interfaces
    echo "broadcast $BROADCAST" | sudo tee -a /tmp/temount/etc/network/interfaces
    echo "gateway $GW" | sudo tee -a /tmp/temount/etc/network/interfaces
    sudo sed -i 's/#DNS=/DNS=$DNS/g' /tmp/temount/etc/systemd/resolved.conf
    • Note: sudo echo foo >> file fails on root-owned files because the shell performs the redirection before sudo runs, which is why tee -a is used here
    • These values will need to be updated accordingly: $IP $MASK $BROADCAST $GW $DNS
    • A second DNS server can be added by simply adding both addresses with a space between them
  4. Add an SSH key (Optional)
    echo "$SSH" | sudo tee -a /tmp/temount/etc/ssh/keys/thousandeyes/authorized_keys
    • You can generate an SSH key with the following command: ssh-keygen -b 2048 -t rsa
    • Copy the contents of the key file and put it in place of $SSH
  5. Unmount the image
    sudo umount /tmp/temount
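If you want to sanity-check what the static IP edits in step 3 produce before touching a real image, you can run the same edits against a scratch file. This sketch uses made-up values and a minimal stand-in for the shipped interfaces file:

```shell
# Preview the static-IP edits on a scratch copy (values are examples only;
# on the real image the target is /tmp/temount/etc/network/interfaces).
f=/tmp/interfaces.preview
printf 'auto eth0\niface eth0 inet dhcp\n' > "$f"   # stand-in for the shipped file
sed -i 's/dhcp/static/' "$f"
{
  echo "address 10.10.20.50"
  echo "netmask 255.255.255.0"
  echo "broadcast 10.10.20.255"
  echo "gateway 10.10.20.1"
} >> "$f"
cat "$f"
```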
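For step 4, the key generation and the capture of the public key can be combined. A sketch that writes the key pair to /tmp (on the real image, the public half would then be appended to the authorized_keys path used above):

```shell
# Generate a dedicated 2048-bit RSA key pair non-interactively, then capture
# the public half. On the real image, append it with:
#   echo "$SSH" | sudo tee -a /tmp/temount/etc/ssh/keys/thousandeyes/authorized_keys
rm -f /tmp/te_agent_key /tmp/te_agent_key.pub
ssh-keygen -q -b 2048 -t rsa -N "" -f /tmp/te_agent_key
SSH=$(cat /tmp/te_agent_key.pub)
echo "$SSH"
```

Keep the private key somewhere safe – it’s what you’ll use to log in to the agent later.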

Flash the image to a microSD card

This is the easiest part, but also the most time consuming. After plugging in the SD card, find the device path for it by running this:

sudo fdisk -l | grep sd

If the SD card already has partitions on it, the output will show each one – /dev/sda1 and /dev/sda2, for example, are partitions on /dev/sda. The image file contains its own partition table, so it needs to be written directly to the card, not into an existing partition. Ignore the partition numbers and use the base device as the destination, e.g. /dev/sda.

To start writing the SD card run this command with the correct destination location.

sudo dd if=thousandeyes-appliance.rpi4.img of=/dev/sda bs=4M status=progress

Make sure the correct destination is selected, otherwise you might overwrite something you don’t want to lose. It usually takes about 10 minutes to write the SD card. When it’s finished, try booting a Pi using the SD card and it should come up with the correct hostname, IP, and account token.

All set!

After booting the Pi you’ll want to log in and change the local admin username and password.

Default username: admin
Default password: welcome

After that, you can double check the setup wizard to make sure everything is good to go. The ThousandEyes agent will reach out to their registration servers and it will use the account token to get assigned to your account. The agent should be online in the portal within a few minutes.


I’m not going to go through all the possible code examples of what you can do with this, but I thought I’d throw out a few things to think about.

  • Make API calls into an IPAM to get the site name and IP info, then populate the vars.json file automatically with that data.
  • Flip the script – this process bakes the site-specific data into the image. Another option would be to use a generic image (still embedding the token and SSH key) with DHCP addressing, then query the ThousandEyes API for new agents named “tepi” and remotely update the IP and hostname from there
  • A unique SSH key can be added to each image by moving the SSH key value under the devices in vars.json, moving the SSH section of the script into the per-device section, and updating the variable to reference the new location.
  • Similarly, you could also create new, unique SSH keys by including the keygen process in the script. Just make sure you keep a copy of the keys somewhere safe!

That’s it! Overall, the process is pretty easy, and far easier than finding Raspberry Pi 4s in stock anywhere… If you use this process to automate the ThousandEyes Raspberry Pi agent image customization, I’d love to hear how it worked for you and what, if any, changes you made to the process. You can add a comment here.

ThousandEyes Walkthrough 4.3.1 Scenario 1 – Enterprise agent to agent test configuration

This post will go over the first scenario for the ThousandEyes lab. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:


Scenario 1

The objective is to monitor the connection between all agents and the two client sites.  Both client sites have agents installed, which means an agent-to-agent test would be the best fit.  An agent-to-server test could be used, but that depends on a specific service running, or an ICMP response.  Additionally, the agent-to-server test can only initiate from the agent side, while an agent-to-agent test can perform bidirectional monitoring.

Create an Enterprise agent-to-agent test

  1. Log in to ThousandEyes (I presume this skill has been mastered by now)
  2. On the left side, expand the menu, then click on Cloud and Enterprise Agents to expand that list, and then click Test Settings
  3. Click Add New Test.
  4. This will be an agent-to-agent test.  For the layer select Network, and then under Test Type select Agent to Agent.
    1. Add a name for the test
  5. Under Basic Configuration there are a few things to set here.
    1. Click in the Target Agent field
    2. On the right side, click Enterprise to filter the list down to only our Enterprise agents
      1. NOTE: the agents listed here are Cloud agents, and those can be used to test public services from locations all over the world
    3.  Select the CS1-2 agent
    4. The interval is how often these tests are performed.  Since this is a lab we’ll back this off to a 30-minute interval to reduce the number of tests being run.
    5. In the Agents field, we’ll select the source agents.  Again, filter based on Enterprise agents, and then select all agents except CS1-2
      1. NOTE: Selecting the North America group (or your local region) will include the CS1-2 agent, and that won’t allow the test to be created because a source and target are the same.
    6. Under Direction, select Both Directions
    7. Leave the Protocol option set to TCP
    8. Check the box to Enable Throughput monitoring, then leave the duration with the default 10s time.
    9. Leave Path Trace Mode unchecked
    10. Uncheck the Enabled box next to Alerts.  We’ll cover alerts later on in this series
    11. When everything is completed it should look like this:
  6. Click Create New Test
That will get the first test created.  Now we’ll want to create the same test, but use CS2-2 as the target.  You could manually create the new test, or duplicate the existing test and make a few changes.
  1. Click the ellipsis (…) to the right of the test, and then in the menu click Duplicate
  2. Correct the test name
  3. Change the Target Agent to CS2-2
  4. In the Agents field uncheck CS2-2 and check CS1-2
    1. NOTE: If you can’t uncheck CS2-2 then click the box next to the region, and that should unselect all agents.  Then reselect all agents except CS2-2
  5. When complete it should look like this:
  6. Click Create New Test again
That gets the two tests required for Scenario 1.  It should automatically start running the tests.  While the tests are running they will consume test units.  If you want to conserve test units you can disable the tests from the Test Settings screen.  Simply uncheck the Enabled boxes to disable them, and when you want to enable the tests again just check the boxes.

The test results will be reviewed in a later post in this series.

ThousandEyes Walkthrough 4.2 Scenarios and Test Types

This post will go over the scenarios that will be used for the ThousandEyes lab, as well as the different test types available in ThousandEyes. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents


Here are business use case scenarios that will be the basis for the ThousandEyes lab testing.  The scenarios are intentionally vague, but should provide a relatable situation that any network team might encounter.  Then, these business cases will be translated into technical requirements that will be used to build out each solution.  Keep in mind, that this is to illustrate the business and technical use cases for ThousandEyes.  In a real-world situation, there would be multiple facets to these scenarios, but we’re focused on one piece of this solution.

Business Use Cases

  1. Both of the corporate campuses (client site 1 and client site 2) house resources critical to business processes.  It is important that these resources are accessible and performance meets the business requirements.
  2. It’s known that critical applications are dependent on other network services, but there is a concern that the underlying services aren’t able to support the applications.  
  3. An externally managed web application is frequently used by employees, and if it were unavailable or slow that would negatively impact the employees’ productivity.
  4. Two internal web applications are used by customer-facing staff, and if these applications are unavailable or poorly performing for any location that would negatively impact the customer experience.
  5. A critical business process relies on a third-party web service.  If there are delays in any stage of this process it can cause a significant impact.  

Translation into Technical Requirements

  1. Network performance from all enterprise locations should be monitored.  
  2. DNS has been identified as a critical service that other applications are dependent on.  The CML.LAB domain must be monitored for availability and performance.
  3. Connection performance and availability to these HTTP servers must be monitored.  The application vendor should be monitoring the application performance
    1. NOTE: There’s potential value in monitoring application performance both to keep track of SLA metrics, and also to speed the identification of an issue, but for this example, we’ll take the simplest approach.
  4. In addition to monitoring the HTTP connections, the time required to access these applications should be monitored
  5. Multiple web pages need to be monitored, and it needs to happen in a sequence similar to how a user would interact with the application.

Test Types

ThousandEyes has a total of 12 Enterprise test types split across 5 categories.
  • Routing
    • BGP
      • This test looks at BGP peering (the lab environment isn’t peered out to the internet, and setting up private peering is going to be out of scope for the lab)
  • Network
    • Agent-to-Server
      • This creates either a TCP or ICMP connection to the target address and monitors loss, latency, and jitter
    • Agent-to-Agent
      • Creates a connection between two ThousandEyes agents, and allows bidirectional TCP or UDP testing, monitoring loss, latency, jitter, and optionally throughput.
  • DNS
    • DNS Server
      • This is pretty straightforward, a DNS record and DNS server are entered, and the test checks for resolution of that DNS record.
    • DNS Trace
      • A DNS trace queries the top-level DNS servers and works down through the chain of name servers to show the path used to resolve a DNS record.
    • DNSSec
      • With this test, the validity of a DNSSEC entry can be verified
  • Web
    • HTTP Server
      • An HTTP(S) connection is made to a web server, effectively this is like using cURL to check if a connection can be established.
    • Page Load
      • Building on the HTTP test, the page load actually loads the HTML and related objects and tracks the time for each step in the process.
    • Transaction
      • Continuing to build on the Page Load test, a transaction test uses a Selenium browser and a script to simulate user interaction with a web application.
    • FTP
      • Establishes an FTP (or SFTP/FTPS)  connection, and attempts to download, upload, or list files.
  • Voice
    • SIP Server
      • This test checks SIP access to a server (by default TCP 5060), and under the advanced options, it can be configured to attempt to register a device with the SIP server.
    • RTP Stream
      • This is similar to an agent-to-agent test, but since it specifically focuses on RTP it includes MOS and PDV metrics in addition to the loss, latency, and jitter provided by normal agent tests.
There’s a lot more detail around tests and some of the advanced options available for each.  More info can be found here:
The Endpoint tests have two different methods to choose from.  The first is a scheduled test, which functions in a similar way to the Enterprise tests.  Endpoints can either do agent-to-server network tests or HTTP web tests.  The other option, which is unique to Endpoint agents, is the Browser Session test.  A Browser Session test uses a browser plugin (installed as part of the Endpoint agent installation) that collects data from the browser based on real-time interactions with web applications.

What’s next?

The next few posts will go over each scenario in detail. It will cover the creation of various tests to meet the requirements outlined in each scenario. After all the tests are created then we’ll look at the results and review how that information is useful.

ThousandEyes Walkthrough Part 4.1 – SNMP Monitoring

This post will go over enabling and using SNMP monitoring in ThousandEyes. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

With the lab environment built, and the agents installed and online, now it’s time to start actually getting monitoring data through ThousandEyes!

If you haven’t followed along with the previous posts in this series you can find the lab build here: and the agent installation here:

This lab requires version 1.1 of the lab build.  Verify the lab you are using is 1.1 or newer.  If it’s not, look at the CHANGELOG section near the bottom of the lab build post:

SNMP Configuration

The SNMP configuration will allow basic SNMP monitoring but is not intended to replace existing SNMP monitoring solutions.  Within ThousandEyes the value of SNMP monitoring is to provide more contextual data and visibility, and some capabilities to alert on different conditions.

  1. Open a web browser and navigate to
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Devices
  5. Click on Device Settings
  6. There might be a Get Started with Devices splash screen, or it will take you directly to the Devices page.
    1. Splash screen –
      1. Click Start Discovery
    2. Devices Page
      1. Click Find New Devices
  7. On the right in the Basic Configuration enter the scan details
    1. In Targets enter the following subnet:
    2. In the Monitoring Agent drop-down select CS1-1
    3. Under Credentials click “Create new credentials”
      1. In the Add New Credentials pane enter a name, and for the community string enter: TE
    4. If the Credentials don’t auto-populate then click the dropdown and select the TE-SNMP that was just created
    5. Occasionally devices may not be picked up on the first discovery, but if the box is checked to “Save as a scheduled discovery” it will retry every hour
    6. Click Start Discovery
  8. Wait for the discovery process to complete – this might take a few minutes
    1. NOTE: There seems to be a bug in the UI where it displays a “No devices found” error, even though all the devices were discovered.
  9. Click back to the main section of the page and the Add Devices panel will disappear
  10. Click the Select All checkbox on the top left of the device list, then click Monitor at the bottom of the page.
  11. Wait a few minutes for the devices to show Green under the Last Contact column

SNMP Topology

ThousandEyes includes a cool topology builder based on the data collected from the SNMP monitors.  It’s able to determine device adjacency, but not necessarily the best placement for our interpretation.  The good news is the devices can be moved to better align with what we’d like to see.
  1. Hover over the menu icon in the top left, then under Devices click on Views
  2. The Device Views will show some metric data on the top, and the topology on the bottom
  3. Click on Edit Topology Layout
  4. Devices in the topology view can be moved (drag-and-drop) to better represent the actual topology. Click Done Editing when the device positions match the lab topology.
As usual, if there were any issues you can add a comment to this post, or reach me on Twitter @Ipswitch


The SNMP monitoring in ThousandEyes is now configured.  One important note here is that this is a lab build.  In a production environment steps should be taken to secure SNMP access.  Restricting access to SNMP via ACL is always a good idea, as well as using SNMP v3 for authentication and encryption.
Later in this series the SNMP configuration will be revisited.  When data is flowing on the lab network the SNMP views will be useful in getting more information on traffic flows.  The SNMP data can also be used to help troubleshoot issues and to create alarms depending on network conditions.

What’s next

The next task is to define some scenarios to identify what needs to be monitored.  The scenarios will be generic but should be relatable for any IT professional out there.  After the scenarios are defined then the ThousandEyes tests can be built for each unique scenario.

ThousandEyes Walkthrough Part 3 – Enterprise and Endpoint Agent Installs

This post will go over installing the ThousandEyes agents in the lab. To see all the posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

There are going to be a number of agent deployments in the lab that was covered in the previous post:

  • 4x Linux Enterprise Agent installs on the CML Ubuntu instances
    • CS1-1, CS1-2, CS2-1, and CS2-2
  • 2x Docker Enterprise Agent container deployments on the Ubuntu Docker host
    • These two agents will be added to a cluster
  • 1x Raspberry Pi Enterprise agent (optional)
  • 1x Windows Endpoint Agent install on the Windows VM


The lab needs to be built out.  Details on that process can be found here:
Before we can start with the agent installs some ThousandEyes licenses are required.  It’s possible you already have some ThousandEyes licenses.  Cisco has bundled Enterprise Agents with the purchase of DNA Advantage or Premier licensing on the Catalyst 9300 and 9400 switches.

If existing licenses are unavailable a 15-day trial license can be requested here:

Additional hardware and software

As a side note – if you plan to work a lot with the Raspberry Pi I strongly recommend getting the USB 3 adapter.  It’s a significant improvement in performance over the USB 2 adapters that are typically bundled with Raspberry Pi kits.  ThousandEyes recommends specific SD cards because of their performance.  Other cards can be used, but there may be a negative impact on performance.


Account Group Token

Before getting started with the installs it is important to get your Account Group Token.  This is an ID that is used to associate the agents to the correct account.  When deploying agents it will often require the token to be specified.
There are multiple ways to find the token, but I think the easiest is to just pull it from the Enterprise Agent deployment panel:
  1. Open a web browser and navigate to
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click the Add New Enterprise Agent button
  7. Click the eye button to show the token, or the copy button to store it on the clipboard
    1. In a production environment you would want to keep this token safe.  It provides devices access to your ThousandEyes account, so it should not be made public
  8. Store the token in a safe, convenient location.  It will be used to add agents to the ThousandEyes account throughout this process.

Linux Enterprise Agent install

  1. Open a web browser and navigate to
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click the Add New Enterprise Agent button
  7. Click the option for Linux Package
  8. Copy the commands displayed
    1. curl -Os
      chmod +x
      sudo ./ -b <--Your Token goes here-->
  9. Perform the following steps for CS1-1, CS1-2, CS2-1, and CS2-2 in CML
    1. In CML open the terminal session and log in
    2. Paste the commands into the terminal and press Enter
    3. It may take some time, but eventually there will be a prompt that says:

      The default log path is /var/log. Do you want to change it [y/N]?

    4. Press Enter to accept the default log location
    5. It might take 10 minutes or it could be over an hour for the process to complete and the agent to come online.  When it returns to the user prompt the service should be started.
  10. When the installs are complete they should be listed in the ThousandEyes portal under Enterprise Agents
    1. If the agent status is yellow it likely means an agent update is required, and it should automatically update within a few minutes

Docker Enterprise Agent install

    1. Open a web browser and navigate to
    2. Log into your account
    3. Click the Hamburger icon in the top left
    4. Expand Cloud & Enterprise Agents
    5. Click Agent Settings
    6. Click the Add New Enterprise Agent button
    7. Click the option for Docker
    8. Scroll down to the sections with the commands
    9. Copy the section to configure seccomp and apparmor profile
      1. curl -Os
        chmod +x
        sudo ./
    10. Log in to the Ubuntu node that is the Docker host and paste in the commands:
      1. Add listening IPs for the Docker containers
        1. sudo ip add add dev ens33
          sudo ip add add dev ens33
      2. Pull the TE Docker image
        1. docker pull thousandeyes/enterprise-agent > /dev/null 2>&1
      3. Update these commands by putting in your ThousandEyes token and changing the IPs if needed, then run them to create two ThousandEyes agents.

NOTE: These commands have been updated to include DNS and IP settings that aren’t available on the ThousandEyes Enterprise Agent page. If you use the commands from ThousandEyes the DNS and Published ports will need to be updated.

      1. docker run \
          -e TEAGENT_ACCOUNT_TOKEN=<--Your Token goes here--> \
          -e TEAGENT_INET=4 \
          -v '/etc/thousandeyes/TE-Docker1/te-agent':/var/lib/te-agent \
          -v '/etc/thousandeyes/TE-Docker1/te-browserbot':/var/lib/te-browserbot \
          -v '/etc/thousandeyes/TE-Docker1/log/':/var/log/agent \
          --name 'TE-Docker1' \
          --security-opt apparmor=docker_sandbox \
          --security-opt seccomp=/var/docker/configs/te-seccomp.json \
          thousandeyes/enterprise-agent /sbin/my_init
      2. docker run \
          -e TEAGENT_ACCOUNT_TOKEN=<--Your Token goes here--> \
          -e TEAGENT_INET=4 \
          -v '/etc/thousandeyes/TE-Docker2/te-agent':/var/lib/te-agent \
          -v '/etc/thousandeyes/TE-Docker2/te-browserbot':/var/lib/te-browserbot \
          -v '/etc/thousandeyes/TE-Docker2/log/':/var/log/agent \
          --name 'TE-Docker2' \
          --security-opt apparmor=docker_sandbox \
          --security-opt seccomp=/var/docker/configs/te-seccomp.json \
          thousandeyes/enterprise-agent /sbin/my_init
  1. When the installs are complete they should be listed in the ThousandEyes portal under Enterprise Agents
    1. If the agent status is yellow it likely means an agent update is required, and it should automatically update within a few minutes

Docker Enterprise Agent configuration

There are two configuration tasks that will be performed on the Docker agents.  The IP setting in ThousandEyes will be updated to use the host IPs that are tied to the Docker agents instead of the private Docker IPs, and the two agents will be added to a ThousandEyes Cluster.
  1. Open a web browser and navigate to
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click on the Agent
  7. In the right panel click on Advanced Settings
  8. Update the IP address with the address assigned to that instance
  9. Click the Save Changes button on the bottom right
  10. Repeat this process for the other container agent
  11. At the Enterprise Agents page select both Docker agents
  12. Click the Edit button
  13. Select Edit Cluster
  14. On the right select Add to a new cluster
    1. In the name field type Docker
  15. Click Save Changes
    1. It will give a confirmation screen, click Save Changes again
  16. The agent icon will be updated to include the cluster icon, and under the Cluster tab it will display the new cluster
Wondering why those changes were made?
The first change to the IP address was because ThousandEyes learns the IP address of the agent from its local configuration.  Docker, by default, creates a bridged network that uses NAT to communicate with the rest of the network.  That means the addresses Docker assigns to containers aren’t accessible on the network.  The additional IPs were added to the Ubuntu host to allow static NAT entries to be created in Docker (the Publish lines), which redirect traffic sent to those IPs to the correct agent.  Since there are two containers using the same ports, we need two IP addresses to uniquely address each instance.  The change that was made to the agent settings in ThousandEyes forces other agents to use the routed LAN network instead of the unrouted Docker network.  This is only needed because we are going to build inbound tests into those agents.  If this were only outbound it wouldn’t matter.
As for the creation of the cluster, this was done for high availability.  Granted, in this scenario both instances are running on the same Docker host which defeats the purpose.  However, it still illustrates how to configure the cluster.  The purpose of the cluster is exactly what would be expected.  Both agents share a name, and are treated as a single agent.  If a test is assigned to a cluster then either instance could run it.  In addition to high availability, this also can provide some load balancing between the agents, and it can simplify test creation.  Instead of managing tests to multiple instances in one location we can use the cluster agent to distribute those tests.

Raspberry Pi Enterprise Agent install

I have an automated configuration process for the Raspberry Pi image:

  1. Open a web browser and navigate to
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click the Add New Enterprise Agent button
  7. The pane on the right should open to the Appliance tab; under Physical Appliance Installer find the Raspberry Pi 4, and to the right of that click Download – IMG
  8. Wait for the download to complete.  It’s nearly a 1GB file, so it might take a few minutes.
  9. Connect the SD card to the computer that will be doing the imaging
    1. This process erases the entire card.  Make sure you are using a blank card, or that any valuable data on the card is backed up elsewhere.
  10. Launch the Raspberry Pi Imager
  11. Under Operating System click Choose OS
  12. Scroll down to the bottom of the list and click Use custom
  13. Browse to the location of the downloaded image, select it, and click Open
  14. Under Storage click on Choose Storage (or Choose Stor…)
  15. Select the SD card in the window that pops up
    1. If the SD card does not show up try reseating the card
  16. Click Write
  17. Continuing this process will erase all data on the SD card.  If that's acceptable, click Yes
  18. A progress bar will be displayed, and after a few minutes the image copy should complete successfully.  Click Continue and close the Raspberry Pi Imager software
  19. Remove the SD card from the imaging PC and insert it in the Raspberry Pi.
  20. Boot the Raspberry Pi
    1. You’ll want a monitor connected to find the IP assigned, though this could also be done by looking at DHCP leases, scanning the network, or trying name resolution for the default hostname: tepi
    2. Make sure there’s a network cable plugged in and connected to the LAN (the ThousandEyes agent doesn’t support wireless connections)
  21. When the Pi finishes booting find the IP address displayed on the screen
  22. Use a web browser to connect to the IP of the Pi agent (using the name might work – https://tepi/)
  23. Likely the browser will display a security warning because the certificate is untrusted.  Go through the steps required to accept the security risk and access the site.
  24. At the login page enter the default credentials: admin / welcome
    1. After logging in there may be an error message that briefly appears in the lower right stating the Account Group Token needs to be set.  This will be resolved shortly, and the error can be ignored for now.
  25. The first page will prompt to change the password.  Enter the current password and create a new one, then click Change Password
    1. After the password change is saved click the Continue button at the bottom of the page
  26. The next page prompts for the Account Group Token.  Enter the token value that was collected earlier in this post and then click Continue
    1. Even though there is a button to enable Browserbot here, the Raspberry Pi agent does not support it.  Leave that field set to No.  You can decide if you want to leave the crash reports enabled.
  27. The agent will go through a check-in process and provide diagnostic data.  If everything looks good you can click Complete
  28. That completes the required agent setup.  It will then bring you to the network configuration page.  Scroll down to the DNS section, switch the Current DNS Resolver to Override, and enter the IP in the Primary DNS box
    1. For the purposes of this lab none of the other settings need to be changed.  A static IP can be configured and/or the hostname could be changed if desired
  29. The agent should now be listed in the ThousandEyes portal under Enterprise Agent
    1. If the agent status is yellow it likely means an agent update is required, and it should automatically update within a few minutes
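If a monitor isn't handy for finding the agent's IP, a couple of quick checks from another machine on the LAN can locate it.  The default hostname comes from the steps above; the subnet here is an assumption for illustration.

```shell
getent hosts tepi                        # try resolving the default hostname first
sudo nmap -p 443 --open 192.168.1.0/24   # or scan the LAN for hosts answering HTTPS
```

Checking the DHCP server's lease table for a new entry works just as well if you have access to it.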
That completes the Enterprise Agent installations for the lab.

Windows Endpoint Agent install

  1. Start the Windows VM and log in
  2. Open a web browser and navigate to
  3. Log into your account
  4. Click the Hamburger icon in the top left
  5. Expand the Endpoint Agents section
  6. Click on Agent Settings
  7. Either a splash screen with a Download button will appear, or there will be a button to Add New Endpoint Agent.  Click the button that shows up – both bring up the same pane
  8. Leave the Endpoint Agent radio button selected and click the button Download – Windows MSI
    1. The Mac installation isn't being covered here, but there are instructions on how to install it here:
  9. There will be two options for the processor architecture, select the x64 Windows MSI
  10. When the download completes run the MSI
  11. The installation is a typical MSI package, so I’m not going to include screenshots for every step
    1. Click Next to start the install
    2. Read the EULA and if you agree to the terms check the box to accept and click Next
    3. Click on the TCP Network Tests Support and select “Will be installed on local hard drive”
    4. Do the same for at least one browser extension.  Edge is the default browser on Windows 10, but if you want to install and use Chrome then get Chrome installed before continuing the Endpoint Agent installation.  Click Next when you have the browser selected.
    5. Click Install
    6. If there is a UAC prompt for the install, click Yes to continue
    7. Click Finish
  12. It might take a few minutes for the agent to check in, but eventually you should see the agent listed under Endpoint Agents in the portal


This was the first post actually working with ThousandEyes, and hopefully it illustrates how powerful this tool is.  As part of the lab there are four different types of agents installed, but there are many more available:
  • Bare metal install (Intel NUC or other hardware)
  • OVA (VMware ESX, Workstation, and Player; Microsoft Hyper-V; Oracle VirtualBox)
  • Application hosting on Cisco platforms (Catalyst 9300 and 9400, Nexus 9300 and 9500, Catalyst 8000, ISR, ASR)
  • AWS CloudFormation Template
  • Mac OS Endpoint Agents
  • Pulse Endpoint Agents for external entities
In addition to the breadth of agents available, the deployment can easily be automated.  I’ve written a script that writes the Raspberry Pi image to an SD card, then mounts it and applies customizations.  The MSI package can be used with the plethora of Windows software deployment tools, or a link can be given to end users to install on their own.  With DNA Center the image can be pushed to Catalyst switches in bulk.  The Docker images can be built with Dockerfiles.  If that’s not enough, there’s also all the automation tools – Ansible, Terraform…
Getting ThousandEyes deployed throughout an environment can be done with ease.
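The Raspberry Pi imaging script mentioned above boils down to something like this.  The device name, image file name, partition layout, and hostname are all assumptions for illustration; double-check the target device before running dd, because it overwrites the disk it points at.

```shell
IMG=thousandeyes-appliance-pi4.img   # image downloaded from the TE portal (name is an assumption)
DEV=/dev/sdX                         # target SD card -- replace with the real device!

# Write the image, then re-read the partition table
sudo dd if="$IMG" of="$DEV" bs=4M conv=fsync status=progress
sudo partprobe "$DEV"

# Mount the root partition and drop in per-device customizations
sudo mount "${DEV}2" /mnt
echo te-agent-site01 | sudo tee /mnt/etc/hostname   # hypothetical per-site hostname
sudo umount /mnt
```

Looping this over a list of hostnames and tokens pulled from an IPAM is what turns it into a bulk-deployment tool.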

What’s next?

That completes the agent installation.  The next installment in this series will cover some test scenarios, and walk through getting monitoring configured and tests created.

ThousandEyes Walkthrough Behind the Scenes – The Lab Build

This post will go over the planning of the ThousandEyes lab used in this series. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

If you are following the series, this post is strictly informational.  It won’t contain any steps that need to be performed in the lab.  The goal is to provide insight into why I made the design choices I did with the lab.

The details on the lab build can be found here:

And here’s an overview of the objective of this series:


  • There are plenty of similar tools (GNS3, EVE-NG, etc.) available, so why did I pick the paid tool?  The simple answer is licensing.  My understanding is CML is the only way to run virtual Cisco instances without running afoul of the EULA.  Yes, I could have used non-Cisco routers, but since Cisco is a major vendor it seemed reasonable to go with it.
  • CML comes in two personal flavors: Personal, which allows 20 active nodes, and Personal Plus, which allows 40.  I built the lab using 20 nodes because Personal Plus is an extra $150, and because the additional nodes would increase the resource requirements.  I wanted the lab to be as accessible as possible.  It could easily be extended to 40 nodes or higher, but 20 is enough to get basic testing done.
  • Even though the TE agents could be deployed to VMs, I wanted to use CML as a way to easily simulate scenarios where an engineer would need to do some troubleshooting.  Within CML links can be configured with bandwidth limits, latency, jitter, and loss.  The theory is that ThousandEyes should be able to detect and even alert on those conditions.
  • I am using version 2.2.3, even though version 2.3 is available.  The simple reason is that Cisco is still recommending version 2.2.3.  There are some known issues with 2.3, which is why I’m not running that.

IOSv Routers

  • Even though CML can run CSR 1000V and IOS-XR instances I decided to go with IOSv instances.  This was because of resource requirements.  The CSR 1000v and IOS-XR instances each require 3GB RAM, and with 14 routers that would consume an additional 35GB RAM over what the IOSv routers use.  For the purposes of the lab, the IOSv can do everything needed without the overhead.


  • I wanted to keep as much of the lab in CML as possible, and running Ubuntu in CML aligns with that goal.  Of the Linux flavors that are available out of the box in CML, Ubuntu is the only one supported by ThousandEyes.
  • With Ubuntu being used in the CML lab it seemed reasonable to use Ubuntu for the Docker host as well.


  • I’ll admit I spent a lot of time working through different topology options.  At one point I had switches and HSRP in the design, but I decided to back away from layer 2 technologies to focus on layer 3.  The primary use case for ThousandEyes is looking at WAN links, and with the node limit in CML, it made sense to drop the L2 configurations to make room for more L3 devices.
  • I wanted to maximize the number of BGP autonomous systems while maintaining multiple links, which is why there are 7 BGP AS networks.  By simply shutting down specific links, traffic can be steered through 6 of the 7 AS networks.  With some BGP reconfiguration that could be extended.
  • The two “Client” networks are intended to be what a network engineer would have in their environment.  Likely they’d have a lot more, but with the node limits having two networks is enough to test with.  Each of the client networks has two Ubuntu nodes that are running the TE Enterprise agent.  One of the Ubuntu nodes is also running Apache.  (more on Apache shortly)
  • In the “Public” network I wanted to add another BGP path outside the redundant ISP paths, and I wanted a service that was accessible.  With this being treated as public I opted to not run a TE agent there.
  • Access outside of the CML environment is done via the “External” network.  ThousandEyes is a SaaS service, which means the agents all need to be able to connect to the TE portal.
  • Even though the entire network is built using RFC 1918 addresses, the design effectively treats them as public addresses throughout the lab.  The “Client” addresses are propagated through the ISP and public networks, which isn’t typical in IPv4 deployments.  This was mainly a choice of simplicity and efficiency.  If the client networks were masked, then something like a VPN would be required to link the two client networks.  Though that better aligns with the real world, for the functional purposes of the lab it makes no difference.  Both ends need IP reachability, and adding more NAT and VPN configuration work doesn’t provide a significant improvement in how the lab operates.

External Routing and NAT

  • On the external router, NAT is configured, which should allow internet access from the lab with no additional configuration needed.  The network is excluded from translation with the intent that devices on the LAN (Docker, Windows, and Raspberry Pi agents) would be able to connect directly to devices in the CML lab.
  • For the LAN devices to reach the CML lab routes need to be added either to the LAN router or as static routes to each of the devices.  Using the LAN router requires the fewest changes, and is the most extensible.
  • Unfortunately not every environment is identical.  I suspect that there may be some issues with getting the routing working properly.  I spent a lot of time trying to decide if this routing solution was better than just using DHCP on the external router and doing full outbound NAT.  I decided that having the external agents able to have full connectivity to the internal agents was worth the added complexity.
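For example, pointing a LAN device at the External router looks like this.  The lab prefix and next-hop address below are hypothetical; substitute the values from your environment.

```shell
# Linux (Docker host or Raspberry Pi): send lab-bound traffic to the External router
sudo ip route add 10.0.0.0/8 via 192.168.1.50

# Windows equivalent (elevated prompt), persistent across reboots:
# route -p add 10.0.0.0 mask 255.0.0.0 192.168.1.50
```

Adding the equivalent route once on the LAN router covers every device at the same time, which is why that approach is the most extensible.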


  • The Apache instances were set up just to create a simple webserver to establish HTTP connections.  For transaction tests, I will be using external websites.
  • Bind is deployed primarily for easy name resolution of the lab devices, and to have another service running inside the lab.  Since ThousandEyes can do DNS tests it made sense to include.

External Resources

  • The Docker, Windows, and Raspberry Pi agents are primarily just to provide the ability to test with those platforms.  The Docker and Pi agents are functionally similar to the Ubuntu agents running in the CML lab.  The Windows agent is an Endpoint agent, which brings a different set of functionality.  
  • I do expect better test performance from these agents than the ones in CML because there are fewer layers of abstraction.  An Ubuntu agent running on a minimum-spec VM inside KVM, which is itself running on the CML VM inside Workstation, is unlikely to be the most efficient.  Add in the software layers for the routers connecting those agents, and there’s only more potential performance impact.
  • As mentioned previously, internet access is required for ThousandEyes agents to reach the SaaS platform.  With that requirement in mind, it made sense to just use external websites for most of the testing instead of building elaborate web servers inside the lab.

Misc. Notes

  • Everyone has their preferred numbering scheme.  For this lab, I tried to come up with something that I could easily build on in a programmatic sense.  Yes, for the router links I could have used /30 or /31, but in a lab, I’m not worried about address consumption.  I built addresses based on the nodes being connected.
  • I’m sure someone somewhere will be upset that I don’t have passwords on the routers.  It’s a lab that I tear down frequently, and it’s inside a trusted network.  The risk of an attack is minimal, and worth it to not need to log in to each device.
  • The Ubuntu server version was the latest at the time of writing, and I went with Windows 10 to avoid some of the issues with getting Windows 11 deployed.
  • With the complexity of the build in CML, I decided it was easiest to just publish the YAML code.  Initially, I had intended to write up exactly how to build the lab, and provide configs for each device, but as I built it out it became clear that doing so would be quite cumbersome.  Using the YAML file should give more consistent deployments, with less manual work to get the lab running.
  • I’ve had several requests to incorporate AWS into this lab.  Currently, that’s outside the scope of the roadmap I have for this series.  The primary reason for that is because of the cost associated with AWS.  Once I get through the posts I have planned for this series I plan to investigate if I can leverage the AWS free tier to get useful data.
  • Despite most of the routers being in provider networks, each router has SNMP running.  The reason I did this was to show how ThousandEyes can use SNMP to add additional context to data, and in some cases, it can be used to trigger alarms.  In a real-world scenario you likely can’t get SNMP from provider networks, but you also likely have more than two network devices at a location.  The decrease in realism is more than made up for by not having to build out a complete LAN environment.
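As a tiny illustration of that node-based addressing idea, a link subnet can be derived from the two node numbers it connects.  The 10.x.y.0/24 pattern here is my assumption for the sketch, not necessarily the exact scheme used in the lab YAML.

```shell
# Derive a /24 for the link between two numbered nodes
link_subnet() {
  printf '10.%d.%d.0/24\n' "$1" "$2"
}

link_subnet 1 2   # -> 10.1.2.0/24
```

Encoding the node numbers into the address makes it trivial to tell which link an address belongs to, and easy to generate configs programmatically.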
I’m sure there are plenty of things that I forgot to include here, and likely some good ideas that I didn’t even think about.  If you have any questions on the lab design please leave a comment below, or you can reach me on Twitter – @Ipswitch

ThousandEyes Walkthrough Part 2 – Lab build

This post will go over getting a ThousandEyes lab built out. To see all the posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

Lab Build


I’ve built out this lab using VMware Workstation and Cisco Modeling Labs to simulate a network for ThousandEyes to monitor.

The lab is broken down into four types of sites: Client, ISP, Public, and External.  There are two Client sites, each consisting of two routers and two Ubuntu instances.  The routers are running BGP, and have SNMP enabled.  The first Ubuntu instance is only running a ThousandEyes agent, and the other instance is running a ThousandEyes agent and an Apache webserver.

The ISP networks are routers running BGP interconnecting all the other sites.  I have SNMP enabled on them just to show what ThousandEyes can do with SNMP monitoring.  Normally SNMP isn’t going to be accessible on ISP devices.

The Public zone is also running BGP and has an Ubuntu instance that is running DNS for the entire CML.LAB network.

The External site is used to bridge the lab environment to the network outside CML.  It has a static route out to the LAN gateway that is redistributed into BGP, and a static IP assigned on the LAN.  For traffic leaving the LAN, it has NAT configured.  This should reduce the configuration needs on the LAN side.  A static route can be added to the LAN gateway to send traffic to the External router, or static routes can be added to the individual devices that will connect to the lab network.

In addition to the CML lab, three additional devices will be deployed, an Ubuntu Server running Docker for ThousandEyes Enterprise Agents, a Windows 10 VM running the ThousandEyes Endpoint Agent, and a Raspberry Pi running the ThousandEyes Enterprise Agent.

This table breaks down the resources assigned to each node and the total amount of resources.  The CML VM will need to have enough assigned to it to allow the nodes inside it to run.

The Windows and Ubuntu Docker nodes will sit outside CML, as VMs in VMware Workstation.  There will also be a Raspberry Pi added to the environment.

Installation Prerequisites

Installation Process

The easiest way to get the lab up and running is to import a YAML file.  This file contains everything you need to get started, but some updates may be required.  The lab is configured for internet access, and there is a static IP and gateway assigned.  The LAN addressing might need to be updated to match your environment.

If you choose not to use the YAML import you can find the relevant node configurations in the YAML and then create and configure the nodes accordingly.
Expand each of the following sections for steps on how to build out the lab.

The YAML file can be downloaded from GitHub here:

Create a YAML file with this – Click to expand

Import YAML into CML – Click to expand

To import this into CML follow these steps:

  1. Copy the above YAML data into a new file
  2. Save the file as TE-Lab.yaml
  3. Log in to CML
  4. From the Dashboard Click Import
  5. Click in the File(s) to import area
  6. Browse to the location the YAML file was saved and select it
  7. Click Import
  8. It should import the lab successfully. Click the Go To Lab button

The entire simulation can be started at once, or the individual nodes can be started.  If they are being started manually, start with the external connection and work through all the routers first.  Then move on to PS3-1.  This node will take a few minutes to complete its startup.  The remaining CS nodes can be started after PS3-1 finishes.


Routers do not have a username or password to log in.  There is no enable password.
Ubuntu nodes: cisco/cisco

Verification tasks


  • show ip route
    • The route table should be populated, including a default route
  • ping
    • Should receive replies
    • If this fails verify the configuration of Gi0/0 matches the LAN requirements, and the CML VM NIC is configured for bridged access


  • systemctl status bind9
    • Should display active.  If this fails verify internet connectivity and then run these commands:
    • sudo apt-get update
    • sudo apt install -o Dpkg::Options::="--force-confold" bind9 -y
  • ping
    • Should receive replies

CS1-2 and CS2-2

  • systemctl status apache2
    • Should display active.  If this fails verify internet connectivity and then run these commands:
    • sudo apt-get update
    • sudo apt install apache2
  • ping
    • Should receive replies

Ubuntu Docker Host Deployment – Click to expand

  1. Open VMware Workstation and create a new VM by pressing Ctrl + N
  2. When the New Virtual Machine Wizard opens click Next
  3. Select the option for Installer disk image file (iso)
    1. Browse to the location of the Ubuntu Server ISO and click Next
  4. Enter a username and password, then click Next
  5. Enter a name for the VM, and verify the path, then click Next
  6. Set the virtual hard drive to 30GB, then click Next
  7. Click Customize Hardware
  8. Select the Network Adapter, change the Network Connection to Bridged, then click Close
  9. Click Finish
  10. Wait for the OS installation process to start
  11. Select your language and press Enter twice to select and confirm
  12. Use the arrow keys to select the NIC and press Enter
  13. Highlight Edit IPv4 and press Enter
  14. Press Enter to change the address assignment method and select Manual
  15. Use the arrow keys to move between fields filling out the IP address info, and then go to Save and press Enter when complete
    1. The default DNS server for the lab is, and the search domain is cml.lab
  16. Highlight Done, and press Enter
  17. Press Enter again to skip the Proxy config
  18. Press Enter again to use the default mirror location
  19. Use the arrow keys to highlight Done and press Enter to accept the default storage config
  20. Press Enter again to accept the file system config
  21. Highlight Continue and press Enter to confirm the storage settings
  22. Use the arrow keys to move between fields, fill out the Profile info, and then go to Done and press Enter when complete
  23. Press Enter again to skip Ubuntu Advantage
  24. Press Enter to enable SSH access, then highlight Done and press Enter
  25. Use the arrow keys to go down to highlight Done, and press Enter
  26. Wait for the installation to complete
  27. When the installation finishes highlight Reboot Now and press Enter
  28. When the server is back up log in
  29. Run the following commands to install Docker

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
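Once the packages are installed, a quick sanity check confirms the daemon is running before moving on to the agent containers:

```shell
sudo systemctl enable --now docker   # make sure the service is running and starts on boot
sudo docker run --rm hello-world     # pulls a test image and prints a confirmation message
```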

Windows 10 Endpoint Deployment – Click to expand

Create the Windows VM

  1. In VMware Workstation press CTRL+N to open the New Virtual Machine Wizard, and make sure Typical is selected, then click Next
  2. Select the option for Installer Disc Image File, and browse to the location you downloaded the Windows 10 ISO to then click Next
  3. Enter the name for the client and select the location
  4. Use the default hard drive size of 60GB, and click Next
  5. Click Customize Hardware
  6. Adjust the CPU and RAM as needed for your environment (2 vCPUs 4-8GB RAM would be recommended), and change the Network Adapter from NAT to Bridged
  7. Click Close, verify the box is checked for “Power on this virtual machine after creation”, and click Finish.

Deploy the Windows OS

NOTE: While in the VM you will need to press Ctrl+Alt to release the cursor to get to your desktop

  1. While the VM is booting you might see a prompt to press a key to boot from CD.  If that happens, click into the window and press a key.
  2. Select the language, and keyboard settings
  3. Click Install Now
  4. On the Activate Windows screen click “I don’t have a product key”
  5. Select Windows 10 Pro and click Next
  6. Read through all of the license terms, and if you accept them check the box to accept and click Next
  7. Select the Custom install option
  8. By default, it should already select Drive 0, which is the 60GB drive created initially.  Click Next.  The OS install will start, so just let that process run.

OS Initial Config

Windows 10 has several steps to go through to get the OS configured before actually loading to a desktop.

  1. Select your region and click Yes
  2. Select your keyboard layout
  3. Skip adding the additional keyboard
  4. Wait a moment for it to progress to the account creation screen, then select “Set up for personal use” and click Next
  5. Microsoft is going to try to link to an online account, but since this is for a temporary lab PC click on “Offline account” in the bottom left.
  6. Microsoft really tries to push the online account, so again look in the bottom left corner and select “Limited experience”
  7. Enter a username and click Next
  8. Create a password and click Next
    1. The next screen will ask to confirm the password.  Reenter the password and click Next
  9. When prompted for the three security questions I just select the first three options and enter random characters.  This is a lab, and if I happen to forget the password I can easily recreate the VM.  Click Next
    1. Repeat the process for the other two questions.
  10. For the privacy settings, this really doesn’t matter, as it’s a lab machine that won’t exist for long.  Everything can be left enabled by default, or it can be disabled.  After applying the settings click Accept.
  11. On the Customize Experience page just click Skip
  12. Cortana… Microsoft really wants people to enable all their stuff.  Click “Not now” to move on.
  13. Success! The post-install prompts are done.  Now, wait for the configuration to complete.

Client OS config

To configure the OS there are only two tasks that are going to be performed.

  • Install VMware Tools
  • Configure DNS

Install VMware Tools

  1. Log into the VM using the password set previously
  2. Right-click on the VM in the Library and select Install VMware Tools
  3. Autorun should prompt to run, but if not then navigate to the D: drive and double click it.  That should kick off the Autorun for the installer.
  4. Follow the defaults for the installation.  Next > Next > Install > Finish and then click Yes when prompted for a reboot.

Configure DNS

  1. Open Powershell as admin
    1. Press the Windows key and type powershell
    2. Press Ctrl+Shift+Enter to run as admin
  2. Run these commands:

Set-DnsClientServerAddress -InterfaceAlias Ethernet0 -ServerAddresses

Set-DnsClientGlobalSetting -SuffixSearchList cml.lab

There’s a lot to the lab build, but hopefully, it went smoothly.  If there were any issues you can add a comment to this post, or reach me on Twitter @Ipswitch.
As the lab build-out continues I may need to come back and edit the configuration here.  


  • CML Lab YAML data
    • Corrected IR2-2 Gi0/0 IP configuration and BGP peering
    • Corrected IP assignment on PR3-2 – config was moved from Gi0/3 to Gi0/4
    • Added loopback interfaces to all routers (will be used for SNMP connections)
    • Updated DNS records to use loopback addresses
  • Lab config

What’s Next?

The next entry in this series will cover getting the ThousandEyes agent deployed into the lab, and getting things ready to start building tests and collecting data.

ThousandEyes Walkthrough Part 1 – The What and the Why

This post will go over what ThousandEyes is, and why you should be interested in learning how to use it. To see all the posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

What is ThousandEyes?

I’m not in marketing, so I’m going to avoid all the “founded in” type stuff (if you want to read that, check out the ThousandEyes site).  Instead, let’s talk about what it means to IT professionals, and more specifically network engineers.  ThousandEyes is a monitoring tool (I know, one of many, but hear me out) that takes a different approach to monitoring.  We’re all familiar with SNMP monitoring.  Links go up, links go down.  The problem with this sort of monitoring is… well, it sucks for actual performance monitoring.  Sure, I can see the packet rate of a port.  I can use NetFlow to look at what type of traffic it is.  None of this actually tells me how that link, or more importantly the service that uses that link, is performing.  And most important of all, it tells me nothing about the end-user experience of that service.

I’ll get more into how ThousandEyes operates shortly, but before that let’s take a look at why we care about it.

Why ThousandEyes? 

“It’s slow” 

I think it’s safe to say those two words are possibly the most annoying words to hear as an engineer.  They are subjective, and often backed with little data.  I can’t look for “slow” in SNMP logs.  These types of issues typically result in spending hours looking at different interfaces, running tests, and often end with a shrug of the shoulders and either saying it’s a transient issue, or it’s on the other side.

“It’s a network problem”

There’s a phrase that can instantly raise the blood pressure of any network engineer.  Again, this statement is often followed with no useful information.  After that phrase is uttered the full weight of a Priority 1 outage is squarely focused on the network team, and now they shoulder the burden of proof before anything else happens.  I’ve had issues drag on for months because people believed, without evidence, there was a network problem, and no matter what I provided, it wasn’t enough.

I’ve often referred to the Internet as the Wild West.  Once traffic leaves the network I manage I lose visibility over it.  Tools like Netflow and SNMP no longer help.  I can’t leverage things like QoS to prioritize my traffic.  Instead, I leave it to the magic of TCP to make sure the traffic gets to the destination.  I’ve lost count of the number of calls where I’ve said “I see the traffic egress our perimeter, and it looked fine.” and similar statements.

I could go on, and on, and on.  I’d wager most network engineers have had similar experiences.

Enter ThousandEyes

With ThousandEyes we have a tool that helps quickly determine if something is slow, and where that might be occurring.  This moves the conversation from the realm of subjective user experience and wild accusations to objective, proactive detection of potential issues.  This is done through the use of Agents and Tests (more on those in a future post).  By running tests we can see hop-by-hop what is happening with that traffic, and most importantly, we can see it through networks that we don’t own.  

What’s the objective of this blog series?   

 The target audience is primarily network engineers, but application developers, server administrators, and countless other people in the IT field would benefit from knowing what this tool can do.

I’ll be building out a virtual lab topology and running ThousandEyes inside it to show what the tool is capable of.

My goal is to show that the tool is incredibly easy to use and powerful.  Over the years I’ve had plenty of vendors talk about how great their product is.  Every vendor thinks whatever their product is will be the greatest product ever.  I’ve watched sales reps move from vendor to vendor, and each new place happens to have the best widgets and gizmos.  No sales pitch here.  Just an IT guy that actually thinks this is an awesome tool, and it would be a great addition to most environments.

What’s Next?

In the next installment of this ThousandEyes Walkthrough series I’ll be detailing the lab environment that I will be using for testing.  Everything will be done using VMware Workstation, CML, Windows and Ubuntu guests, and a Raspberry Pi for fun.  I’ll provide full configs so you can build out a similar environment.  The lab will include BGP, DNS, and web servers to allow different types of ThousandEyes tests to be configured.
-Spoiler Alert-
Here’s what I’m working on for the lab build: