
Cisco Live 2023

I was lucky enough to win a trip to Cisco Live this year through the Cisco Insider Advocates program – https://insideradvocates.cisco.com/. The event was amazing and I wanted to share my experience.

The Numbers

In true geek fashion, I thought it’d be fun to share some of my Cisco Live stats.

  • 2,598 miles flown from MSP to LAS and back (yay for direct flights)
  • 6 days (Saturday-Thursday) in Las Vegas
  • 55,000 steps (roughly 27 miles)
  • 3 books signed – CCDE OCG signed by Zig Zsiga, The Art of Network Design signed by Denise Donohue, and DevNet Associate OCG signed by Jason Gooley
  • 30 selfies taken (with too many awesome people to list here!)
  • 1 exam taken
  • 1 exam failed (but I was close and now I know what I need to study)
  • 1 award received – Cisco Advocates Transformation Trailblazer
  • 3 content presentations – a presentation in the Content Corner, a session with Itential at the IP Fabric booth, and an interview with David Bombal
  • 2 MTE sessions – Jason Gooley and John Capobianco
  • 6 ribbons collected

The Event

The event itself was a great time! Plenty (ok, maybe too many) sessions to pick from, tons of vendor booths, and more activities than you could shake a stick at.

I wrote about Cisco Live 2022 previously (https://www.mytechgnome.com/2022/09/23/cisco-live-las-vegas-2022-recap-good/), so I won’t rewrite all the same stuff. Instead, I thought it’d be fun to review my previous post and see how much of it still applies.

The first big difference – last year I was a delegate of TFDx, and this year I wasn’t because I’ve crossed over to the dark side. I now work for a vendor, which means to avoid any conflict of interest or bias, I can’t participate as a delegate. However, since I work for a vendor that sponsored Cisco Live, I had the opportunity to hang out at our booth and talk to people.

The Good

The people. That part is absolutely 100% accurate. The people are by far the best part of Cisco Live! This year was even better because I had already met people at the last Cisco Live and had connected with even more through the Cisco Insider programs.

I did the CCDE techtorial again this year, and once again it was an awesome session!

The Bad

I tried to take my own advice this time. I didn’t attend some of the parties, and I was far more selective on the sessions I attended. I tried to focus my time on the things that were the most valuable to me, and that was mainly taking the time to talk to people. There was still a lot of walking, but so it goes.

I wasn’t planning to take an exam this year, but I was able to grab a spot Sunday afternoon, so I went for it. This year the test center was in Mandalay Bay, so at least there wasn’t the 20-minute walk to and from Luxor for the exam.

The Tech

This year I wasn’t blown away with the tech announcements. I will admit, I am extremely interested in the Full Stack Observability platform. I didn’t get a chance to look at it much, but it’s on my radar to dig into as I get time.

New Thoughts from 2023

There are a few things that are new for me this year that I wanted to dig into.

Cisco Insider program

Though I touched on the programs in my last post, the Insider team had a lot of new things happening this year. As I mentioned, I won the trip to Cisco Live through the Insider Advocates program. In addition to the trip, I was surprised to see my picture in the Insiders Rockstar Hall of Fame!

I was also shortlisted for two awards in the Cisco Global Advocates Awards, and I ended up winning the Transformation Trailblazer award! I was interviewed after winning the award, and I participated in a panel interview with David Bombal. For a guy that can talk a lot, I don’t have the words to convey how awesome that was!

Moving past the ego boost, the best part of the Insiders program is the community. Through the Advocates and Champions programs I had the opportunity to meet tons of amazing people. If you’re not already part of the community, I strongly urge you to join.

Booth Duty

Though I wasn’t officially on booth duty, I did stop by the IP Fabric booth a few times to hang out. Again, the best part of Cisco Live is the conversations, and I was able to have some really interesting conversations with people. I’ve found there are two topics that universally resonate with network engineers – STP issues and MTU issues. Any time I asked people if they’d had either issue (I know I’ve had my share) I could watch their soul die. Then I’d show the STP topology in IP Fabric, complete with root bridge and blocked ports, and I could watch the souls spring back to life. I’m not trying to make a sales pitch here, though I probably should. This is just another example of how important the conversations are.

Recommendations for Future Attendees

Yes – I just copied this entire section from my previous post because it’s still accurate.

  1. Wear good shoes. It’s a lot of walking! I think I calculated something like 30+ miles of walking during the week.
  2. Bring a water bottle and stay hydrated. With all the walking, the Vegas heat, and the overall dryness, it’s easy to get dehydrated. Add in air travel and perhaps some alcohol consumption, and that’s a recipe for disaster. There are plenty of water coolers, but sometimes it was challenging to find one that wasn’t empty. Bring a water bottle, fill it when you can, and make sure to drink enough water.

Now that the basic human needs are covered, on to the actual conference recommendations.

  1. Make time to talk to people. Sessions fill fast, making it feel like you need to register for as many as possible. Don’t fall into that trap. Sign up for the sessions you really want to attend, and then use the open time to talk to people. Most sessions are recorded, but the chance to talk to people isn’t.
  2. Don’t be afraid to talk to someone. See your favorite blogger, podcaster, and beard? Go ahead and say “hi” to me. And if I’m not your favorite, that’s fine. Say “hi” to me, and them too. Did you hear someone ask a question in a session, and it sounded like they might be in a similar position to you? Talk to them. Maybe they’ve solved a problem you’re working on. Maybe you have some advice you could give them.
  3. Don’t focus on the parties. Sure, they can be fun, but after an 8+ hour day of talking and tech sessions, if you need some downtime, take it. Maybe doing back-to-back-to-back 16-hour days for a week is something you can do, and if so, go for it. If not, that’s cool. The parties, swag, and all of that are great, but if you risk burning yourself out, make sure to pace yourself.
  4. Swag is great. Prizes are also great. Neither should be the focus of a trip to Cisco Live. The cost of the conference pass, hotel, and airfare far outweigh the value of the swag. There’s always joking about finding the vendors with the best swag, but really, look for the vendors that can help you. Talk to them. Talk to vendors you’ve never heard of. Maybe they have a product that can solve a problem you have, and you didn’t even know it existed. If it ends up not being a good fit, move on. There are plenty of vendors for a participant to talk to and plenty of participants for vendors to talk to, so if there’s no value, then it’s better for both of you to move on.
  5. If you need approval to make the trip, highlight how Cisco Live is a lot more than sales demos and swag. You have the opportunity to meet a lot of people and learn about what they are doing. The odds are pretty good that you can find people that have solved whatever challenge you might be facing or at least people that could provide useful information.
  6. (New recommendation) Book MTE sessions! I didn’t book any in 2022, and I have realized that was a huge mistake! I booked two sessions with people I’ve followed for years (Jason Gooley and John Capobianco) and I am very happy I did! They are both quite popular, and being able to have them alone in a room for 45 minutes… uh… in a non-creepy way… is fantastic! Again, the people and conversations are the best part of Cisco Live, and being able to get 1-on-1 time with people that you follow is massively beneficial! You can jump straight into whatever topic you want, and there’s no sales presentation. Just a couple geeks chatting about geek stuff.

TLDR: Go to Cisco Live. Talk to people. Enjoy.

Intro to Automation – Learning while having fun with Autonauts!

I have heard too many network engineers push back on learning how to automate things because they “don’t want to be a software developer.” There’s a slight problem with that argument, though. You don’t need to be a “software developer” to get into automation. Automation is surprisingly painless, and it can even be fun!

One recommendation I would give people looking for a fun and easy approach to learning automation is to look at games like Autonauts (https://www.humblebundle.com/store/autonauts) or, if you want to add in some combat, Autonauts vs. Piratebots (https://www.humblebundle.com/store/autonauts-vs-piratebots).

Both games are surprisingly fun to play. You start with some simple resources and need to craft and build your way to greatness. Luckily, you have bots at your disposal that you can train to do pretty much anything. Chop down trees. Collect materials. Craft tools. Build. Autonauts vs. Piratebots adds in attack and defense. The best thing that the bots can do… build more bots!

Bot script

This is an example of a relatively simple script for a bot. Bots are programmed through a drag-and-drop interface, and they can also learn tasks by recording the player character’s actions.

As the name suggests, this bot is a stone miner. To mine stone in both Autonauts games, you need a pick. During a game, hundreds, if not thousands, of units of stone are required. Why mine all of that manually when you can have a bot do it?

This script uses a Repeat loop with a condition of “forever!”, meaning the bot will always be running this task.

Inside that first Repeat loop is another Repeat loop, and this one has an exit condition when the bot’s hands are full. When the bot isn’t holding anything, it will perform the action of moving to the Crude Pick Storage location and then the action of removing a pick from storage.

Now that the bot is holding a pick, it exits the repeat loop and moves on to the next step in the script. The next Repeat loop will run continuously until the tool breaks and the bot’s hands are empty. During this loop, the bot will look for a stone deposit, move to it, and then use the pick to mine the stone.

The final script results in a bot that will mine until its pick breaks, and when that happens, it will get a replacement pick and return to mining. The bot will continue producing stone as long as there are picks in the storage location and open stone deposits.
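To make that loop structure concrete, here’s a minimal shell-style sketch of the same logic. The function names are hypothetical stand-ins for in-game actions, and the outer loop is bounded so the example actually finishes; in the game it would simply repeat forever.

    #!/usr/bin/env bash
    # Sketch of the stone-miner bot; the functions are placeholders for in-game actions.
    pick_durability=0

    grab_pick() { echo "move to Crude Pick Storage and take a pick"; pick_durability=3; }
    mine_stone() { echo "find a stone deposit, move to it, and swing the pick"; pick_durability=$((pick_durability - 1)); }

    for cycle in 1 2 3; do                        # the game uses "Repeat forever"; bounded here so the sketch ends
        until [ "$pick_durability" -gt 0 ]; do    # loop until holding a working pick (the game's "hands full" exit condition)
            grab_pick
        done
        while [ "$pick_durability" -gt 0 ]; do    # loop until the pick breaks, mining each pass
            mine_stone
        done
    done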

Of course, the work doesn’t stop with just this one bot. The crafting and storing of picks will require automation. All that stone that was mined will need to be moved to a storage location.

This script could also be improved upon. The bot can move tools to a backpack and retrieve them. The mining location could be tied to a spot indicated by a sign. The script can be saved and then linked to other bots.  

As the player progresses through the game, the tasks become far more complex. Each bot has limited memory in its “bot brain,” which limits each script’s length. A difficult task requires efficient scripting or multiple bots doing parts of the script.

Sure, the game doesn’t specifically cover anything about network automation. However, that’s where I think the value is. The best part of this game is how it makes you think of different ways to build out the various tasks. There are several mechanisms beyond the repeat loops that can be used. It shifts from a linear focus for each task to a more systematic approach. 

Traditional CLI configuration is linear. Log into a device, move through the different config layers, and apply the config change. Network automation can use that same linear methodology, but the real power of it is when logic is layered on. Using if/then/else statements, while and for loops, variables, functions, etc., can all increase the power of a script significantly.
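To make that concrete, here’s a minimal shell sketch of logic layered onto a device loop. It isn’t tied to any particular vendor or tool; the hostnames are placeholders borrowed from this lab, and the “push the change” step is left as an echo.

    #!/usr/bin/env bash
    # Loop over devices, branch on reachability, and only act on the ones that respond.
    devices="cs1-1 cs1-2 cs2-1 cs2-2"    # placeholder hostnames

    for device in $devices; do
        if ping -c 1 -W 2 "$device" > /dev/null 2>&1; then
            echo "$device is reachable - this is where the config change would be pushed"
        else
            echo "$device is unreachable - skipping and flagging for follow-up"
        fi
    done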

The beautiful part about network automation is that, much like a script in Autonauts, it can be improved iteratively. As new techniques are learned, they can be applied to increase the efficiency of the scripts. Those scripts can be reused later or modified to suit changing needs.

Cisco Live Las Vegas 2022 recap – The Good, the Bad, and the Tech

This was my first time attending Cisco Live in person, and it was a wild time. In addition to the normal Cisco Live event, I also had the opportunity to participate in a few extra events. I attended the CCDE Techtorial (detailed post on it coming soon), there were several Cisco Champion events, and I was a delegate at Tech Field Day Extra (TFDx) – https://techfieldday.com/event/clus22/

It was a jam-packed week with a ton of walking, countless conversations, and enough information to make my head hurt. I’ve attempted to distill down my thoughts on the event and provide recommendations from a first-timer to future first-timers.

Overview

If you aren’t familiar with Cisco Live, it’s a major IT industry event that Cisco puts on every year (virtually in 2020 and 2021), and there are versions of it run globally. The US-based event is the largest, but there are also events in Europe and Australia. I attended the event in Las Vegas at the Mandalay Bay Convention Center from June 12th through 16th. Hundreds of breakout sessions are available, covering the entire Cisco solution portfolio. The sessions can be very high-level, explaining the basics of a technology, or very deep and technically focused. People from all over the world attend, and I believe the 2022 attendance was near 15,000.

The Good

1. The people

I can’t say enough about how much of a factor the social side of Cisco Live was. I spent countless hours talking to people outside of sessions. I had the opportunity to meet so many people, made even more special after two years of quarantine. Many of those people were from other countries, and I likely wouldn’t have had the opportunity to meet them otherwise.

Just to name-drop a bit, I had the chance to talk to Cisco giant Peter Jones and fellow Champions Daren Fulwell, Mark Sibering, Sijbren Beukenkamp, David Peñaloza Seijas, Bill Burnam, Dustin Gabbett, Joe Houghes, Kenny Paula, Robb Boyd… this list could go on for a few dozen more people… plus the people that I was able to hang out with on the TFDx side (many are also Cisco Champions) – Ben Story, Micheline Murphy, Pieter-Jan Nefkens, Jody Lemoine… (huh, that list is all Champions)…  Again, the list could go on. I also had the chance to meet people I’d recorded Cisco Champions Radio episodes with, including Jason Gooley, JP Vasseur, Shai Silberman, and Carlos Pereira.  I’ve probably not even listed half of the people I should have.

There were definitely moments when I was in awe of the people I was talking to. I have books written by these people on my shelf. I’ve watched their training material. As a professional geek, it was awesome to just be in the same room, let alone actually talking to these people.

I had the chance to talk with a few people about ThousandEyes, and I could discuss how I’ve used it with peers and learn how they are using it. We discussed challenges and how we overcame them. I talked to people about some of the announcements Cisco made and learned from those different perspectives. I listened to questions people asked during sessions; sometimes it was a question I was wondering myself but hadn’t thought to actually ask.

The “live” aspect of Cisco Live was massively beneficial. The conversations were far more interesting and engaging than pre-recorded or virtual meetings. The opportunity to sit in one of the lounge areas and talk shop (or talk about anything else) with peers from many different backgrounds was truly awesome.

2. The people (yes, we just did this)

Told you I couldn’t say enough about the people! The Cisco Insider Champs team (Amilee San Jaun, Breana Jordan, Britney McDaniel, Danielle Carter, and Lauren Friedman) did an awesome job getting us into special sessions and giving us behind-the-scenes access. They truly made the experience better! Much, if not all, of the above was made possible because of the Cisco Insider Champions team. I can’t stress enough how much value came out of the community connections, and I am extremely thankful to be part of it.

Shameless plug: If you’re interested in joining the Cisco Insider Champions program (or the other Cisco Insider programs – Advocates, User Group, and User Research), you can find more info here: https://www.cisco.com/c/en/us/about/cisco-insider.html#champions. The Champions applications are typically available in the October-December timeframe.

The Advocates program can be joined at any time here: https://insideradvocates.cisco.com/join/ReferToInnovation

If you join either program, please let me know so I can follow you.

3. The techtorial session

The techtorial was a premium session held on Sunday, the day before Cisco Live officially kicked off. It was a 4-hour deep-dive session, and it was amazing! Having that much time really allowed the presenters to go into depth with the material, and there was enough time to have an interactive discussion. Those sessions are also very focused, which means there’s not really a level-setting/marketing part of the session. From a value-per-hour standpoint, I think I got a lot more out of the techtorial than I did the normal sessions.

4. Swag.

I have to mention the swag. Sure, I didn’t get sent home with the thousands of dollars of swag that are given out for the Oscars, but I was happy. A few shirts, socks, and a wide assortment of other random bits. I will say that I’m disappointed that the AMD booth wasn’t handing out Threadripper Pros to everyone that stopped by.

5. Did I mention the people?

Yup, back to the people. I had the opportunity to randomly belt out Metallica lyrics with Kenny Paula and the #metaldevops godfather, Jason Gooley. I ate many a stroopwafel that made the trip from the Netherlands. I watched a Dutch man hit the bell in one of those carnival strength games with the oversized mallet. I made solar battery charging kits with fellow Cisco Champions and the awesome Champs team. I got my selfie with Robb Boyd.

The Bad

1. Timing

I think Cisco Live could easily be three weeks long, and I still wouldn’t do everything I wanted to. There were a lot of sessions that I couldn’t make because they overlapped with other things. I only visited a few vendor booths. I didn’t do any labs, play capture the flag, or do activities in the DevNet zone. I would have loved to have had more time to engage with more people, attend more sessions, talk to different vendors, etc. Trying to fit everything in is a challenge, and as a first-timer, I found that overwhelming in many ways. I also found that I was spending time on things that weren’t a valuable use of my time, like the Cisco Live challenge game. I thought it would be cool to win a prize, so I did some of the activities and quizzes to get as many points as possible. Well, I didn’t win, and I probably wasted an hour or two on it. Looking at the leaderboard, I’d bet the people at the top spent a significant amount of time on it, and though the prizes were nice, I don’t think it would have been worth the time. Next time I’ll try to better prioritize what I want to do and limit time spent on anything else.

2. The walking

The walking itself wasn’t really the problem, but the time spent walking was. I stayed at the Luxor, about a 17-minute walk from the hotel lobby to the entrance of the conference center. I bought some stuff at the Cisco Store and wanted to drop it off in my hotel room before I went to my next session. It took 45 minutes to go from the store to my hotel room and back to the next session room. Next time I’ll try to get a room closer to the conference and better plan my trip to the store.

3. The on-site exam

When I registered for Cisco Live, I was really excited to get a free exam. There were a few problems, though. First, there wasn’t an exam I had been preparing for, so I just picked one I thought I’d have a chance to pass with minimal study. Second, when you think about it, an exam might be $400, but when you compare that to the cost of Cisco Live, you’re losing a few hours of the conference to take a test that you could take any time. If I could schedule my exam for a time that didn’t interfere with the conference, it might be a different story, but those spots go fast. Third, trying to get an exam done during the conference just adds extra stress that’s not needed. Next time I don’t plan to bother with the exam. I’ll probably take a look at the schedule, and if there’s a great spot open, I might go for it, but it would be quite low on my priority list.

4. The parties

I’ll admit, I’m not much of the party type. The appreciation event concert was Brittany Howard and the Dave Matthews Band. Though both might be great acts, they aren’t my style. The food at the celebration wasn’t great either. The CCIE party (I was a +1, I’m a long way from an IE) wasn’t a big hit for me either. That said, I know there were plenty of people stoked to see DMB, and they had a blast at the parties. They were a great opportunity to hang out with people, and many great conversations were had. Next time I’ll approach the parties as more of a networking event than a concert or similar event. Unless they manage to get a concert lineup with Jonathan Coulton (check out Code Monkey) and Psychostick (check out Blue Screen, which also happens to be one of the best music videos ever made).

And The Tech

Cisco Live is always buzzing with new product launches, announcements, and a massive amount of information. This year was no different. The biggest announcement was the ability to monitor Catalyst switches in the Meraki dashboard and even convert them to running Meraki code so they would be fully Meraki-managed. Plenty of other awesome stuff happened, but I want to focus on the event’s overall experience. I’ll save the tech details for another post.

Aside from product announcements and sessions, there’s also the World of Solutions. Essentially, that’s the show floor of the event. Cisco had huge spaces dedicated to different things like DevNet, Emerging Technologies and Incubation, WebEx, labs, etc. There were also dozens of other vendor booths, some even giving their own sessions on the show floor.

Recommendations for Future Attendees

  1. Wear good shoes. It’s a lot of walking! I think I calculated something like 30+ miles of walking during the week.
  2. Bring a water bottle and stay hydrated. With all the walking, the Vegas heat, and the overall dryness, it’s easy to get dehydrated. Add in air travel and perhaps some alcohol consumption, and that’s a recipe for disaster. There are plenty of water coolers, but sometimes it was challenging to find one that wasn’t empty. Bring a water bottle, fill it when you can, and make sure to drink enough water.

Now that the basic human needs are covered, on to the actual conference recommendations.

  1. Make time to talk to people. Sessions fill fast, making it feel like you need to register for as many as possible. Don’t fall into that trap. Sign up for the sessions you really want to attend, and then use the open time to talk to people. Most sessions are recorded, but the chance to talk to people isn’t.
  2. Don’t be afraid to talk to someone. See your favorite blogger, podcaster, and beard? Go ahead and say “hi” to me. And if I’m not your favorite, that’s fine. Say “hi” to me, and them too. Did you hear someone ask a question in a session, and it sounded like they might be in a similar position to you? Talk to them. Maybe they’ve solved a problem you’re working on. Maybe you have some advice you could give them.
  3. Don’t focus on the parties. Sure, they can be fun, but after an 8+ hour day of talking and tech sessions, if you need some downtime, take it. Maybe doing back-to-back-to-back 16-hour days for a week is something you can do, and if so, go for it. If not, that’s cool. The parties, swag, and all of that are great, but if you risk burning yourself out, make sure to pace yourself.
  4. Swag is great. Prizes are also great. Neither should be the focus of a trip to Cisco Live. The cost of the conference pass, hotel, and airfare far outweigh the value of the swag. There’s always joking about finding the vendors with the best swag, but really, look for the vendors that can help you. Talk to them. Talk to vendors you’ve never heard of. Maybe they have a product that can solve a problem you have, and you didn’t even know it existed. If it ends up not being a good fit, move on. There are plenty of vendors for a participant to talk to and plenty of participants for vendors to talk to, so if there’s no value, then it’s better for both of you to move on.
  5. If you need approval to make the trip, highlight how Cisco Live is a lot more than sales demos and swag. You have the opportunity to meet a lot of people and learn about what they are doing. The odds are pretty good that you can find people that have solved whatever challenge you might be facing or at least people that could provide useful information.

Final Thoughts

If you have the opportunity to go, do it! It was an awesome experience, and I can’t wait for my next chance to attend!

I can’t stress the value of the conversations enough. Speak up in sessions. Ask questions. Track down people if you need to (Twitter can be a great way to find people), but talk to people. You can get much more focused answers and better insight with direct conversations. Plus, you might find a friend.

Have questions? Have Cisco Live tips? Drop them in the comments, or reach out to me on Twitter @Ipswitch

I might have missed or misspelled some important names. If I did, I’m sorry. Let me know, and I’ll update this post. If you want me to add Twitter and/or LinkedIn links for you, send those over, and I’ll be sure to add them.

ThousandEyes Walkthrough Part 4.3.2 Scenario 2 – Enterprise DNS test configuration

This post will go over the second scenario for the ThousandEyes lab. To see all the posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

Scenario 2

Scenario: It’s known that critical applications are dependent on other network services, but there is a concern that the underlying services aren’t able to support the applications.
Technical requirements: DNS has been identified as a critical service that other applications are dependent on. The CML.LAB domain must be monitored for availability and performance.
All of the scenario information can be found in this post: https://www.mytechgnome.com/2022/08/thousandeyes-walkthrough-42-scenarios.html
This scenario provides a few options for tests. An agent-to-server test could be used, but those tests don’t give DNS-specific info. Also, DNS could use UDP, and agent-to-server tests do not support UDP. A DNS Server test would check for DNS server connectivity and identify if a change was made to a DNS record. That meets the objective of the scenario, so that’s the test we’ll set up.

Create an Enterprise DNS Server test

  1. Log in to ThousandEyes (I presume this skill has been mastered by now)
  2. On the left side, expand the menu, then click on Cloud and Enterprise Agents to expand that list, and then click Test Settings
  3. Click Add New Test.
  4. This will be a DNS Server test. Click DNS for the Layer, and then DNS Server for the Test Type
    1. Enter a name for the test
  5. Under Basic Configuration, in the Domain field enter cml.lab
    1. Leave the record options as default, IN and A
    2. NOTE: This is an internal domain that can’t be resolved by the ThousandEyes cloud. It will show a warning that it is unable to resolve the target. That warning can be ignored.
  6. As before, set the interval to 30 minutes to reduce the test load
  7. In the Agents field, select all the enterprise agents deployed
  8. Enter the lab DNS server IP in the DNS Servers field: 10.133.100.10
  9. Uncheck the Enable box for alerts
  10. When complete it should look like this:
  11. Click Create New Test
This will create the new test, and it will start running right away. Just like the agent-to-agent tests, this test can be disabled to save test units if it’s not being used.
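For a quick sanity check outside of ThousandEyes, the same lookup the test performs can be run by hand from one of the agent nodes. This assumes the dig utility is installed (the dnsutils package on Ubuntu), and remember that cml.lab only resolves against the lab DNS server, not public resolvers.

    # Query the lab DNS server directly for an A record in the cml.lab domain
    dig @10.133.100.10 cml.lab A +noall +answer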

ThousandEyes Walkthrough 4.3.1 Scenario 1 – Enterprise agent to agent test configuration

This post will go over the first scenario for the ThousandEyes lab. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

 

Scenario 1

The objective is to monitor the connection between all agents and the two client sites.  Both client sites have agents installed, which means an agent-to-agent test would be the best fit.  An agent to server test could be used, but that is dependent on a specific service running, or an ICMP response.  Additionally, the agent-to-server test can only initiate from the agent side, while an agent-to-agent test can perform bidirectional monitoring.

Create an Enterprise agent-to-agent test

  1. Log in to ThousandEyes (I presume this skill has been mastered by now)
  2. On the left side, expand the menu, then click on Cloud and Enterprise Agents to expand that list, and then click Test Settings
  3. Click Add New Test.
  4. This will be an agent-to-agent test.  For the layer select Network, and then under Test Type select Agent to Agent.
    1. Add a name for the test
  5. Under Basic Configuration there are a few things to set here.
    1. Click in the Target Agent field
    2. On the right side, click Enterprise to filter the list down to only our Enterprise agents
      1. NOTE: the agents listed here are Cloud agents, and those can be used to test public services from locations all over the world
    3.  Select the CS1-2 agent
    4. The interval is how often these tests are performed.  Since this is a lab we’ll back this off to a 30-minute interval to reduce the number of tests being run.
    5. In the Agents field, we’ll select the source agents.  Again, filter based on Enterprise agents, and then select all agents except CS1-2
      1. NOTE: Selecting the North America group (or your local region) will include the CS1-2 agent, and that won’t allow the test to be created because a source and target are the same.
    6. Under Direction, select Both Directions
    7. Leave the Protocol option set to TCP
    8. Check the box to Enable Throughput monitoring, then leave the duration with the default 10s time.
    9. Leave Path Trace Mode unchecked
    10. Uncheck the Enabled box next to Alerts.  We’ll cover alerts later on in this series
    11. When everything is completed it should look like this:
  6. Click Create New Test
That will get the first test created.  Now we’ll want to create the same test, but use CS2-2 as the target.  You could manually create the new test, or duplicate the existing test and make a few changes.
  1. Click the ellipsis (…) to the right of the test, and then in the menu click Duplicate
  2. Correct the test name
  3. Change the Target Agent to CS2-2
  4. In the Agents field uncheck CS2-2 and check CS1-2
    1. NOTE: If you can’t uncheck CS2-2 then click the box next to the region, and that should unselect all agents.  Then reselect all agents except CS2-2
  5. When complete it should look like this:
  6. Click Create New Test again
That gets the two tests required for Scenario 1.  It should automatically start running the tests.  While the tests are running they will consume test units.  If you want to conserve test units you can disable the tests from the Test Settings screen.  Simply uncheck the Enabled boxes to disable them, and when you want to enable the tests again just check the boxes.
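If a test stays in a failed state, a quick manual check can help separate an agent problem from a network problem. The sketch below assumes the default agent-to-agent port of TCP 49153 and the OpenBSD netcat that ships with Ubuntu, and it uses a documentation placeholder address for the target; substitute the actual CS1-2 agent IP from your lab.

    # From a source agent node: test TCP reachability toward the target agent (placeholder IP)
    nc -zv -w 3 192.0.2.10 49153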

The test results will be reviewed in a later post in this series.

ThousandEyes Walkthrough 4.2 Scenarios and Test Types

This post will go over the scenarios that will be used for the ThousandEyes lab, as well as the different test types available in ThousandEyes. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

 Scenarios

Here are the business use case scenarios that will be the basis for the ThousandEyes lab testing. The scenarios are intentionally vague but should provide relatable situations that any network team might encounter. These business cases will then be translated into technical requirements that will be used to build out each solution. Keep in mind that this is to illustrate the business and technical use cases for ThousandEyes. In a real-world situation, there would be multiple facets to these scenarios, but we’re focused on one piece of the solution.

Business Use Cases

  1. Both of the corporate campuses (client site 1 and client site 2) house resources critical to business processes.  It is important that these resources are accessible and performance meets the business requirements.
  2. It’s known that critical applications are dependent on other network services, but there is a concern that the underlying services aren’t able to support the applications.  
  3. An externally managed web application is frequently used by employees, and if it were unavailable or slow that would negatively impact the employees’ productivity.
  4. Two internal web applications are used by customer-facing staff, and if these applications are unavailable or poorly performing for any location that would negatively impact the customer experience.
  5. A critical business process relies on a third-party web service.  If there are delays in any stage of this process it can cause a significant impact.  

Translation into Technical Requirements

  1. Network performance from all enterprise locations should be monitored.  
  2. DNS has been identified as a critical service that other applications are dependent on.  The CML.LAB domain must be monitored for availability and performance.
  3. Connection performance and availability to these HTTP servers must be monitored.  The application vendor should be monitoring the application performance
    1. NOTE: There’s potential value in monitoring application performance both to keep track of SLA metrics, and also to speed the identification of an issue, but for this example, we’ll take the simplest approach.
  4. In addition to monitoring the HTTP connections, the time required to access these applications should be monitored
  5. Multiple web pages need to be monitored, and it needs to happen in a sequence similar to how a user would interact with the application.

Test Types

ThousandEyes has a total of 12 Enterprise test types split across 5 categories.
  • Routing
    • BGP
      • This test looks at BGP peering (the lab environment isn’t peered out to the internet, and setting up private peering is going to be out of scope for the lab)
  • Network
    • Agent-to-Server
      • This creates either a TCP or ICMP connection to the target address and monitors loss, latency, and jitter
    • Agent-to-Agent
      • Creates a connection between two ThousandEyes agents, and allows bidirectional TCP or UDP testing, monitoring loss, latency, jitter, and optionally throughput.
  • DNS
    • DNS Server
      • This is pretty straightforward: a DNS record and DNS server are entered, and the test checks for resolution of that DNS record.
    • DNS Trace
      • A DNS Trace queries the top-level DNS servers and then works down through the name servers to show the path used to resolve a DNS query.
    • DNSSec
      • With this test, the validity of a DNSSEC entry can be verified
  • Web
    • HTTP Server
      • An HTTP(S) connection is made to a web server; effectively, this is like using cURL to check whether a connection can be established (see the cURL example at the end of this section).
    • Page Load
      • Building on the HTTP test, the page load actually loads the HTML and related objects and tracks the time for each step in the process.
    • Transaction
      • Continuing to build on the Page Load test, a transaction test uses a Selenium browser and a script to simulate user interaction with a web application.
    • FTP
      • Establishes an FTP (or SFTP/FTPS)  connection, and attempts to download, upload, or list files.
  • Voice
    • SIP Server
      • This test checks SIP access to a server (by default TCP 5060), and under the advanced options, it can be configured to attempt to register a device with the SIP server.
    • RTP Stream
      • This is similar to an agent-to-agent test, but since it specifically focuses on RTP, it includes MOS and PDV metrics in addition to the loss, latency, and jitter provided by normal agent tests.
There’s a lot more detail around tests and some of the advanced options available for each.  More info can be found here: https://docs.thousandeyes.com/product-documentation/internet-and-wan-monitoring/tests
The Endpoint tests have two different methods to choose from.  The first is a scheduled test, which functions in a similar way to the Enterprise tests.  Endpoints can either do agent-to-server network tests or HTTP web tests.  The other option, which is unique to Endpoint agents, is the Browser Session test.  A Browser Session test uses a browser plugin (installed as part of the Endpoint agent installation) that collects data from the browser based on real-time interactions with web applications.
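For a rough feel of what the HTTP Server test measures, similar per-phase timings can be pulled from cURL on any machine. This is only a point-in-time check from wherever you run it, not a replacement for a scheduled test, and any reachable site can be used as the target.

    # One-off HTTP check with per-phase timings (DNS lookup, TCP connect, TLS handshake, total)
    curl -sS -o /dev/null -w 'dns:%{time_namelookup}s connect:%{time_connect}s tls:%{time_appconnect}s total:%{time_total}s\n' https://www.thousandeyes.com/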

What’s next?

The next few posts will go over each scenario in detail. They will cover the creation of the various tests needed to meet the requirements outlined in each scenario. After all the tests are created, we’ll look at the results and review how that information is useful.

Network Field Day 28 – Recap and Review

  I had the opportunity to participate as a delegate at Network Field Day 28, and I wanted to share my experience.

What is Network Field Day?

Network Field Day is one of the Tech Field Day events put on by Gestalt IT where sponsoring vendors present to a panel of delegates.  Network Field Day is specifically focused on networking solutions, and there are other events including Security Field Day and Storage Field Day with content that aligns to the respective categories.
There are usually about twelve delegates per event, and participation is invite-only.  Each delegate is independent (not employed by a vendor and not an industry analyst), is active in the community through things like blogs, podcasts, social media, etc., and could be considered a subject matter expert on the event topic.
There’s a lot of information on how TFD works at their “About” page – https://techfieldday.com/about/. I recommend checking out the infographic and reading through the FAQ to get a better understanding of what the event is about.
Want to find out more about the presenters or delegates?  Want to watch the recorded sessions?  Go to the NFD28 page to get all that and more!

Vendor Presentations

The event spanned three days, with 9 presenters, 13.5 hours of presentations, and 4.5 hours of off-camera conversations.  Plus there were plenty of conversations with other delegates throughout the event.  That said, this isn’t an exhaustive review of everything.  I’ll be working on putting more detailed posts together soon.

Day 1

Juniper https://www.juniper.net/

Juniper had two 1.5 hour sessions, so there was a lot of information to cover.  There were a couple specific areas that they talked about extensively – Marvis and Apstra.
Marvis is Juniper’s AI that is used to help improve network operations.  One of the use cases would be streamlined anomaly detection, even to the point of potentially predicting issues before they occur.  There was a lot of discussion around Full Stack AIOps, along with a demo.  Another use case they presented was around wireless performance.  By collecting wireless performance data Marvis can recommend adding or moving APs to improve coverage.
Apstra is a solution that allows network teams to build out templates for data center deployments.  The cool thing that Apstra does is it disassociates the template from the underlying devices.  A single template could be deployed against Juniper, Cisco, and Arista hardware (among others) without needing to make any changes.  It takes the concept of intent-based networking and applies it in a mostly vendor-agnostic way.  One of the use cases that was easy to see was environments that are being forced to look to different hardware vendors due to supply chain shortages.

ThousandEyes Walkthrough Part 4.1 – SNMP Monitoring

This post will go over enabling and using SNMP monitoring in ThousandEyes. To see past posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

With the lab environment built, and the agents installed and online, now it’s time to start actually getting monitoring data through ThousandEyes!

If you haven’t followed along with the previous posts in this series you can find the lab build here: https://www.mytechgnome.com/2022/04/thousandeyes-walkthrough-part-2-lab.html and the agent installation here: https://www.mytechgnome.com/2022/04/thousandeyes-walkthrough-part-3.html

This lab requires version 1.1 of the lab build.  Verify the lab you are using is 1.1 or newer.  If it’s not, look at the CHANGELOG section near the bottom of the lab build post: https://www.mytechgnome.com/2022/04/thousandeyes-walkthrough-part-2-lab.html

SNMP Configuration

The SNMP configuration will allow basic SNMP monitoring but is not intended to replace existing SNMP monitoring solutions.  Within ThousandEyes the value of SNMP monitoring is to provide more contextual data and visibility, and some capabilities to alert on different conditions.

  1. Open a web browser and navigate to https://www.thousandeyes.com/
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Devices
  5. Click on Device Settings
  6. There might be a Get Started with Devices splash screen, or it will take you directly to the Devices page.
    1. Splash screen –
      1. Click Start Discovery
    2. Devices Page
      1. Click Find New Devices
  7. On the right in the Basic Configuration enter the scan details
    1. In Targets enter the following subnet: 10.255.255.0/24
    2. In the Monitoring Agent drop-down select CS1-1
    3. Under Credentials click “Create new credentials”
      1. In the Add New Credentials pane enter a name (TE-SNMP), and for the community string enter: TE
    4. If the Credentials field doesn’t auto-populate, click the dropdown and select the TE-SNMP credentials that were just created
    5. Occasionally devices may not be picked up on the first discovery, but if the box is checked to “Save as a scheduled discovery” it will retry every hour
    6. Click Start Discovery
  8. Wait for the discovery process to complete – this might take a few minutes
    1. NOTE: There seems to be a bug in the UI where it displays a “No devices found” error, even though all the devices were discovered.
  9. Click back to the main section of the page and the Add Devices panel will disappear
  10. Click the Select All checkbox on the top left of the device list, then click Monitor at the bottom of the page.
  11. Wait a few minutes for the devices to show Green under the Last Contact column
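If devices don’t show up, SNMP reachability from the monitoring agent can be checked by hand before waiting on a rescan. This assumes the net-snmp tools are installed on CS1-1 (sudo apt install snmp) and uses a placeholder address from the 10.255.255.0/24 scan range; the OID is the standard MIB-II system group.

    # From CS1-1: walk the system group on one lab device using the TE community string
    snmpwalk -v2c -c TE 10.255.255.1 1.3.6.1.2.1.1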

SNMP Topology

ThousandEyes includes a cool topology builder based on the data collected from the SNMP monitors.  It’s able to determine device adjacency, but it doesn’t necessarily place devices the way we’d interpret the topology.  The good news is the devices can be moved to better align with what we’d like to see.
  1. Hover over the menu icon in the top left, then under Devices click on Views
  2. The Device Views will show some metric data on the top, and the topology on the bottom
  3. Click on Edit Topology Layout
  4. Devices in the topology view can be moved (drag-and-drop) to better represent the actual topology. Click Done Editing when the device positions match the lab topology.
As usual, if there were any issues you can add a comment to this post, or reach me on Twitter @Ipswitch

Conclusion

The SNMP monitoring in ThousandEyes is now configured.  One important note here is that this is a lab build.  In a production environment steps should be taken to secure SNMP access.  Restricting access to SNMP via ACL is always a good idea, as well as using SNMP v3 for authentication and encryption.
Later in this series the SNMP configuration will be revisited.  When data is flowing on the lab network the SNMP views will be useful in getting more information on traffic flows.  The SNMP data can also be used to help troubleshoot issues and to create alarms depending on network conditions.

What’s next

The next task is to define some scenarios to identify what needs to be monitored.  The scenarios will be generic but should be relatable for any IT professional out there.  After the scenarios are defined then the ThousandEyes tests can be built for each unique scenario.

ThousandEyes Walkthrough Part 3 – Enterprise and Endpoint Agent Installs

This post will go over installing the ThousandEyes agents in the lab. To see all the posts in this series expand the box below.

ThousandEyes Walkthrough Table of Contents

There are some behind-the-scenes posts that go into more detail on how and why I took the approach that I did. Those can be found here:

There are going to be a number of agent deployments in the lab that was covered in the previous post:

  • 4x Linux Enterprise Agent installs on the CML Ubuntu instances
    • CS1-1, CS1-2, CS2-1, and CS2-2
  • 2x Docker Enterprise Agent container deployments on the Ubuntu Docker host
    • These two agents will be added to a cluster
  • 1x Raspberry Pi Enterprise agent (optional)
  • 1x Windows Endpoint Agent install on the Windows VM

Prerequisites

The lab needs to be built out.  Details on that process can be found here: https://www.mytechgnome.com/2022/03/thousandeyes-walkthrough-part-2-lab.html
Before we can start with the agent installs some ThousandEyes licenses are required.  It’s possible you already have some ThousandEyes licenses.  Cisco has bundled Enterprise Agents with the purchase of DNA Advantage or Premier licensing on the Catalyst 9300 and 9400 switches.

If existing licenses are unavailable a 15-day trial license can be requested here: https://www.thousandeyes.com/signup/

Additional hardware and software

As a side note – if you plan to work a lot with the Raspberry Pi, I strongly recommend getting the USB 3 adapter. It performs significantly better than the USB 2 adapters that are typically bundled with Raspberry Pi kits. ThousandEyes recommends specific SD cards because of their performance. Other cards can be used, but there may be a negative impact on performance.

Installs

Account Group Token

Before getting started with the installs, it is important to get your Account Group Token.  This is an ID that is used to associate the agents with the correct account.  When deploying agents, you will often need to specify the token.
There are multiple ways to find the token, but I think the easiest is to just pull it from the Enterprise Agent deployment panel:
  1. Open a web browser and navigate to https://www.thousandeyes.com/
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click the Add New Enterprise Agent button
  7. Click the eye button to show the token, or the copy button to store it on the clipboard
    1. In a production environment you would want to keep this token safe.  It provides devices access to your ThousandEyes account, so it should not be made public
  8. Store the token in a safe, convenient location.  It will be used to add agents to the ThousandEyes account throughout this process.

Linux Enterprise Agent install

  1. Open a web browser and navigate to https://www.thousandeyes.com/
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click the Add New Enterprise Agent button
  7. Click the option for Linux Package
  8. Copy the commands displayed
    1. curl -Os https://downloads.thousandeyes.com/agent/install_thousandeyes.sh
      chmod +x install_thousandeyes.sh
      sudo ./install_thousandeyes.sh -b <--Your Token goes here-->
  9. Perform the following steps for CS1-1, CS1-2, CS2-1, and CS2-2 in CML
    1. In CML open the terminal session and log in
    2. Paste the commands into the terminal and press Enter
    3. It may take some time, but eventually there will be a prompt that says:

      The default log path is /var/log. Do you want to change it [y/N]?

    4. Press Enter to accept the default log location
    5. It might take 10 minutes, or it could be over an hour, for the process to complete and the agent to come online.  When it returns to the user prompt, the service should be started.
  10. When the installs are complete they should be listed in the ThousandEyes portal under Enterprise Agents
    1. If the agent status is yellow it likely means an agent update is required, and it should automatically update within a few minutes
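If an agent still hasn’t appeared after a reasonable wait, checking the service on the node is a good first step. The service name below (te-agent) is what the installer has used on Ubuntu in my experience; adjust if your install registers it differently.

    # On the node: check the agent service and its recent log output
    sudo systemctl status te-agent
    sudo journalctl -u te-agent --since "15 minutes ago"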

Docker Enterprise Agent install

    1. Open a web browser and navigate to https://www.thousandeyes.com/
    2. Log into your account
    3. Click the Hamburger icon in the top left
    4. Expand Cloud & Enterprise Agents
    5. Click Agent Settings
    6. Click the Add New Enterprise Agent button
    7. Click the option for Docker
    8. Scroll down to the sections with the commands
    9. Copy the section to configure seccomp and apparmor profile
      1. curl -Os https://downloads.thousandeyes.com/bbot/configure_docker.sh
        chmod +x configure_docker.sh
        sudo ./configure_docker.sh
    10. Log in to the Ubuntu node that is the Docker host and paste in the commands:
      1. Add listening IPs for the Docker containers
        1. sudo ip add add 192.168.1.51 dev ens33
          sudo ip add add 192.168.1.52 dev ens33
      2. Pull the TE Docker image
        1. docker pull thousandeyes/enterprise-agent > /dev/null 2>&1
      3. Update these commands by putting in your ThousandEyes token and changing the IPs if needed, then run them to create two ThousandEyes agents.

NOTE: These commands have been updated to include DNS and IP settings that aren’t available on the ThousandEyes Enterprise Agent page. If you use the commands from ThousandEyes the DNS and Published ports will need to be updated.

      1. docker run \
          --hostname='TE-Docker1' \
          --memory=2g \
          --memory-swap=2g \
          --detach=true \
          --tty=true \
          --shm-size=512M \
          -e TEAGENT_ACCOUNT_TOKEN=<--Your Token goes here--> \
          -e TEAGENT_INET=4 \
          -v '/etc/thousandeyes/TE-Docker1/te-agent':/var/lib/te-agent \
          -v '/etc/thousandeyes/TE-Docker1/te-browserbot':/var/lib/te-browserbot \
          -v '/etc/thousandeyes/TE-Docker1/log/':/var/log/agent \
          --cap-add=NET_ADMIN \
          --cap-add=SYS_ADMIN \
          --name 'TE-Docker1' \
          --restart=unless-stopped \
          --security-opt apparmor=docker_sandbox \
          --security-opt seccomp=/var/docker/configs/te-seccomp.json \
          --dns=10.133.100.10 \
          --dns-search=cml.lab \
          --publish=192.168.1.51:49152:49152/udp \
          --publish=192.168.1.51:49153:49153/udp \
          --publish=192.168.1.51:49153:49153/tcp \
          thousandeyes/enterprise-agent /sbin/my_init
      2. docker run \
          --hostname='TE-Docker2' \
          --memory=2g \
          --memory-swap=2g \
          --detach=true \
          --tty=true \
          --shm-size=512M \
          -e TEAGENT_ACCOUNT_TOKEN=<--Your Token goes here--> \
          -e TEAGENT_INET=4 \
          -v '/etc/thousandeyes/TE-Docker2/te-agent':/var/lib/te-agent \
          -v '/etc/thousandeyes/TE-Docker2/te-browserbot':/var/lib/te-browserbot \
          -v '/etc/thousandeyes/TE-Docker2/log/':/var/log/agent \
          --cap-add=NET_ADMIN \
          --cap-add=SYS_ADMIN \
          --name 'TE-Docker2' \
          --restart=unless-stopped \
          --security-opt apparmor=docker_sandbox \
          --security-opt seccomp=/var/docker/configs/te-seccomp.json \
          --dns=10.133.100.10 \
          --dns-search=cml.lab \
          --publish=192.168.1.52:49152:49152/udp \
          --publish=192.168.1.52:49153:49153/udp \
          --publish=192.168.1.52:49153:49153/tcp \
          thousandeyes/enterprise-agent /sbin/my_init
  1. When the installs are complete they should be listed in the ThousandEyes portal under Enterprise Agents
    1. If the agent status is yellow it likely means an agent update is required, and it should automatically update within a few minutes
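Before looking for the agents in the portal, it’s worth confirming the containers are actually up on the Docker host:

    # Confirm both agent containers are running, then spot-check the logs on one of them
    docker ps --filter name=TE-Docker
    docker logs --tail 20 TE-Docker1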

Docker Enterprise Agent configuration

There are two configuration tasks that will be performed on the Docker agents.  The IP setting in ThousandEyes will be updated to use the host IPs that are tied to the Docker agents instead of the private Docker IPs, and the two agents will be added to a ThousandEyes Cluster.
  1. Open a web browser and navigate to https://www.thousandeyes.com/
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click on the Agent
  7. In the right panel click on Advanced Settings
  8. Update the IP address with the address assigned to that instance
  9. Click the Save Changes button on the bottom right
  10. Repeat this process for the other container agent
  11. At the Enterprise Agents page select both Docker agents
  12. Click the Edit button
  13. Select Edit Cluster
  14. On the right select Add to a new cluster
    1. In the name field type Docker
  15. Click Save Changes
    1. It will give a confirmation screen, click Save Changes again
  16. The agent icon will be updated to include the cluster icon, and under the Cluster tab it will display the new cluster
Wondering why those changes were made?
The first change to the IP address was because ThousandEyes learns the IP address of the agent from its local configuration.  Docker, by default, creates a bridged network that uses NAT to communicate with the rest of the network.  That means the addresses Docker assigns to containers aren’t accessible on the network.  The additional IPs were added to the Ubuntu host to allow static NAT entries to be created in Docker (the Publish lines), which redirect traffic sent to those IPs to the correct agent.  Since there are two containers using the same ports, we need two IP addresses to uniquely address each instance.  The change that was made to the agent settings in ThousandEyes forces other agents to use the routed 192.168.1.0/24 LAN network instead of the unrouted 172.17.0.0/16 Docker network.  This is only needed because we are going to build inbound tests into those agents.  If the tests were only outbound, it wouldn’t matter.
As for the creation of the cluster, this was done for high availability.  Granted, in this scenario both instances are running on the same Docker host, which defeats the purpose.  However, it still illustrates how to configure the cluster.  The purpose of the cluster is exactly what would be expected.  Both agents share a name and are treated as a single agent.  If a test is assigned to a cluster, then either instance could run it.  In addition to high availability, this can also provide some load balancing between the agents, and it can simplify test creation.  Instead of assigning tests to multiple individual instances in one location, we can assign them to the cluster and let it distribute them.
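To see those pieces in place on the Docker host, the secondary addresses and the published ports can both be checked from the shell. If Docker’s userland proxy is enabled (the default), there should be listeners on 49152-49153 bound to 192.168.1.51 and 192.168.1.52 rather than to the Docker bridge.

    # On the Ubuntu Docker host: confirm the secondary IPs and the published agent ports
    ip addr show dev ens33
    sudo ss -lntup | grep 4915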

Raspberry Pi Enterprise Agent install

I have an automated configuration process for the Raspberry Pi image: https://www.mytechgnome.com/2023/06/15/automated-thousandeyes-raspberry-pi-image-customization/

  1. Open a web browser and navigate to https://www.thousandeyes.com/
  2. Log into your account
  3. Click the Hamburger icon in the top left
  4. Expand Cloud & Enterprise Agents
  5. Click Agent Settings
  6. Click the Add New Enterprise Agent button
  7. The pane on the right should open to the Appliance tab.  Under Physical Appliance Installer find the Raspberry Pi 4, and to the right of that click Download – IMG
  8. Wait for the download to complete.  It’s nearly a 1GB file, so it might take a few minutes.
  9. Connect the SD card to the computer that will be doing the imaging
    1. This process erases the entire card.  Make sure you are using a blank card, or that any valuable data on the card is backed up elsewhere.
  10. Launch the Raspberry Pi Imager
  11. Under Operating System click Choose OS
  12. Scroll down to the bottom of the list and click Use custom
  13. Browse to the location of the downloaded image, select it, and click Open
  14. Under Storage click on Choose Storage (or Choose Stor…)
  15. Select the SD card in the window that pops up
    1. If the SD card does not show up try reseating the card
  16. Click Write
  17. Continuing this process will erase all data on the SD card; if that’s acceptable, click Yes
  18. A progress bar will be displayed, and after a few minutes the image copy should complete successfully.  Click Continue and close the Raspberry Pi Imager software
  19. Remove the SD card from the imaging PC and insert it in the Raspberry Pi.
  20. Boot the Raspberry Pi
    1. You’ll want a monitor connected to find the assigned IP, though this could also be done by looking at DHCP leases, scanning the network, or trying name resolution for the default hostname: tepi (a few example commands are sketched after these steps)
    2. Make sure there’s a network cable plugged in and connected to the LAN (the ThousandEyes agent doesn’t support wireless connections)
  21. When the Pi finishes booting find the IP address displayed on the screen
  22. Use a web browser to connect to the IP of the Pi agent (using the name might work – https://tepi/)
  23. Likely the browser will display a security warning because the certificate is untrusted.  Go through the steps required to accept the security risk and access the site.
  24. At the login page enter the default credentials: admin / welcome
    1. After logging in there may be an error message that briefly appears in the lower right stating the Account Group Token needs to be set.  This will be resolved shortly, and the error can be ignored for now.
  25. The first page will prompt to change the password.  Enter the current password and create a new one, then click Change Password
    1. After the password change is saved click the Continue button at the bottom of the page
  26. The next page prompts for the Account Group Token.  Enter the token value that was collected earlier in this post and then click Continue
    1. Even though there is a button to enable Browserbot here, the Raspberry Pi agent does not support it.  Leave that field set to No.  You can decide if you want to leave the crash reports enabled.
  27. The agent will go through a check-in process and provide diagnostic data.  If everything looks good you can click Complete
  28. That completes the required agent setup, and the appliance will then bring you to the network configuration page.  Scroll down to the DNS section, switch the Current DNS Resolver to Override, and enter the IP 10.133.100.10 in the Primary DNS box
    1. For the purposes of this lab none of the other settings need to be changed.  A static IP can be configured and/or the hostname could be changed if desired
  29. The agent should now be listed in the ThousandEyes portal under Enterprise Agents
    1. If the agent status is yellow it likely means an agent update is required, and it should automatically update within a few minutes
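If you don’t have a spare monitor handy for the Raspberry Pi in step 20, a couple of quick commands will usually find the agent from another machine on the LAN.  This is a loose sketch that assumes the 192.168.1.0/24 network used elsewhere in this lab and that nmap is installed; adjust for your own environment.

    # Try the default hostname first
    ping -c 2 tepi

    # Or resolve the name explicitly
    getent hosts tepi

    # Or scan the LAN for hosts answering on HTTPS (the agent's web UI)
    nmap -p 443 --open 192.168.1.0/24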
That completes the Enterprise Agent installations for the lab.

Windows Endpoint Agent install

  1. Start the Windows VM and log in
  2. Open a web browser and navigate to https://www.thousandeyes.com/
  3. Log into your account
  4. Click the Hamburger icon in the top left
  5. Expand the Endpoint Agents section
  6. Click on Agent Settings
  7. Either a splash screen with a Download button will appear, or there will be a button to Add New Endpoint Agent.  Click the button that shows up – both bring up the same pane
  8. Leave the Endpoint Agent radio button selected and click the button Download – Windows MSI
    1. The Mac installation isn’t being covered here, but there are instructions on how to install it here: https://docs.thousandeyes.com/product-documentation/global-vantage-points/endpoint-agents/installing
  9. There will be two options for the processor architecture, select the x64 Windows MSI
  10. When the download completes run the MSI
  11. The installation is a typical MSI package, so I’m not going to include screenshots for every step
    1. Click Next to start the install
    2. Read the EULA and if you agree to the terms check the box to accept and click Next
    3. Click on the TCP Network Tests Support and select “Will be installed on local hard drive”
    4. Do the same for at least one browser extension.  Edge is the default browser on Windows 10, but if you want to install and use Chrome then get Chrome installed before continuing the Endpoint Agent installation.  Click Next when you have the browser selected.
    5. Click Install
    6. If there is a UAC prompt for the install, click Yes to continue
    7. Click Finish
  12. It might take a few minutes for the agent to check in, but eventually you should see the agent listed under Endpoint Agents in the portal

Conclusion

This was the first post in this series actually working with ThousandEyes, and hopefully it illustrates how powerful this tool is.  As part of the lab, four different types of agents are installed, but there are many more available:
  • Bare metal install (Intel NUC or other hardware)
  • OVA (VMware ESX, Workstation, and Player; Microsoft Hyper-V; Oracle VirtualBox)
  • Application hosting on Cisco platforms (Catalyst 9300 and 9400, Nexus 9300 and 9500, Catalyst 8000, ISR, ASR)
  • AWS CloudFormation Template
  • Mac OS Endpoint Agents
  • Pulse Endpoint Agents for external entities
In addition to the breadth of agents available, the deployment can easily be automated.  I’ve written a script that writes the Raspberry Pi image to an SD card, then mounts it and applies customizations.  The MSI package can be used with the plethora of Windows software deployment tools, or a link can be given to end users to install it on their own.  With DNA Center the image can be pushed to Catalyst switches in bulk.  The Docker images can be built with Dockerfiles.  If that’s not enough, there are also all the automation tools – Ansible, Terraform…
Getting ThousandEyes deployed throughout an environment can be done with ease.
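As a rough illustration of the Raspberry Pi scripting mentioned above (this is not the script from the linked post, just the general shape of it), writing and customizing the image from a Linux machine only takes a handful of commands.  The device name and image filename below are placeholders; double-check the device before running dd, since it will overwrite whatever is on it.

    # Placeholders: adjust IMG and DEV for your download and SD card reader.
    # On an mmcblk-style device the first partition is ${DEV}p1 instead of ${DEV}1.
    IMG=thousandeyes-rpi4.img
    DEV=/dev/sdX

    # Write the downloaded ThousandEyes image to the SD card
    sudo dd if="$IMG" of="$DEV" bs=4M status=progress conv=fsync

    # Re-read the partition table and mount a partition to apply customizations
    sudo partprobe "$DEV"
    sudo mkdir -p /mnt/tepi
    sudo mount "${DEV}1" /mnt/tepi

    # ...drop in any file-level customizations (hostname, network config, etc.)...

    sudo umount /mnt/tepi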

What’s next?

That completes the agent installation.  The next installment in this series will cover some test scenarios, and walk through getting monitoring configured and tests created.

ThousandEyes Walkthrough Behind the Scenes – The Lab Build

This post goes over the planning of the ThousandEyes lab used in this series.

If you are following the series, this post is strictly informational.  It won’t contain any steps that need to be performed in the lab.  The goal is to provide insight into why I made the design choices I did with the lab.

The details on the lab build can be found here: https://www.mytechgnome.com/2022/04/thousandeyes-walkthrough-part-2-lab.html

And here’s an overview of the objective of this series: https://www.mytechgnome.com/2022/03/thousandeyes-walkthrough-part-1-what.html

CML

  • There are plenty of similar tools (GNS3, EVE-NG, etc.) available, so why did I pick the paid one?  The simple answer is licensing.  My understanding is that CML is the only way to run virtual Cisco instances without running afoul of the EULA.  Yes, I could have used non-Cisco routers, but since Cisco is a major vendor it seemed reasonable to go with it.
  • CML Personal comes in two flavors: Personal, which allows 20 active nodes, and Personal Plus, which allows 40.  I built the lab using 20 nodes because Personal Plus costs an extra $150, and because the additional nodes would increase the resource requirements.  I wanted the lab to be as accessible as possible.  It could easily be extended to 40 nodes or more, but 20 is enough to get basic testing done.
  • Even though the TE agents could be deployed to VMs, I wanted to use CML as a way to easily simulate scenarios where an engineer would need to do some troubleshooting.  Within CML links can be configured with bandwidth limits, latency, jitter, and loss.  The theory is that ThousandEyes should be able to detect and even alert on those conditions.
  • I am using version 2.2.3, even though version 2.3 is available, simply because Cisco is still recommending 2.2.3.  There are some known issues with 2.3, which is why I’m not running it.

IOSv Routers

  • Even though CML can run CSR 1000v and IOS-XR instances, I decided to go with IOSv instances because of the resource requirements.  The CSR 1000v and IOS-XR instances each require 3GB of RAM, and with 14 routers that would consume an additional 35GB of RAM over what the IOSv routers use.  For the purposes of the lab, IOSv can do everything needed without the overhead.

Ubuntu

  • I wanted to keep as much of the lab in CML as possible, and running Ubuntu in CML aligns with that goal.  Of the Linux flavors that are available out of the box in CML, Ubuntu is the only one supported by ThousandEyes.
  • With Ubuntu being used in the CML lab it seemed reasonable to use Ubuntu for the Docker host as well.

Topology

  • I’ll admit I spent a lot of time working through different topology options.  At one point I had switches and HSRP in the design, but I decided to back away from layer 2 technologies to focus on layer 3.  The primary use case for ThousandEyes is looking at WAN links, and with the node limit in CML, it made sense to drop the L2 configurations to make room for more L3 devices.
  • I wanted to maximize the number of BGP AS configurations while maintaining multiple links, which is why there are 7 BGP AS configurations.  By simply shutting down specific links, traffic can be steered through 6 of the 7 AS networks, and with some BGP reconfiguration that could be extended.
  • The two “Client” networks are intended to be what a network engineer would have in their environment.  Likely they’d have a lot more, but with the node limits having two networks is enough to test with.  Each of the client networks has two Ubuntu nodes that are running the TE Enterprise agent.  One of the Ubuntu nodes is also running Apache.  (more on Apache shortly)
  • In the “Public” network I wanted to add another BGP path outside the redundant ISP paths, and I wanted a service that was accessible.  With this being treated as public I opted to not run a TE agent there.
  • Access outside of the CML environment is done via the “External” network.  ThousandEyes is a SaaS service, which means the agents all need to be able to connect to the TE portal.
  • Even though the entire network is built using RFC 1918 addresses, the design effectively uses public addressing throughout the lab.  The “Client” addresses are propagated through the ISP and public networks, which isn’t typical in IPv4 deployments.  This was mainly a choice of simplicity and efficiency.  If the client networks were hidden behind NAT, then something like a VPN would be required to link the two client networks.  Though that better aligns with the real world, for the functional purposes of the lab it makes no difference.  Both ends need IP reachability, and adding more NAT and VPN configuration work doesn’t provide a significant improvement in how the lab operates.

External Routing and NAT

  • On the external router, NAT is configured, which should allow internet access from the lab with no additional configuration needed.  The 192.168.1.0/24 network is excluded from translation with the intent that devices on the LAN (Docker, Windows, and Raspberry Pi agents) would be able to connect directly to devices in the CML lab.
  • For the LAN devices to reach the CML lab, routes need to be added either to the LAN router or as static routes on each of the devices.  Using the LAN router requires the fewest changes and is the most extensible.  (A sketch of the per-device option follows this list.)
  • Unfortunately, not every environment is identical, so I suspect some readers may run into issues getting the routing working properly.  I spent a lot of time deciding whether this routing solution was better than just using DHCP on the external router and doing full outbound NAT.  In the end, I decided that giving the external agents full connectivity to the internal agents was worth the added complexity.
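Here’s a hypothetical example of the per-device option on a Linux host such as the Docker host.  The 10.133.0.0/16 prefix and the 192.168.1.250 next hop are assumptions standing in for the lab prefix and the external router’s LAN address in your build; substitute your own values.

    # Temporary route (lost on reboot) pointing the assumed lab prefix at the external router
    sudo ip route add 10.133.0.0/16 via 192.168.1.250

    # Verify the path a lab address would take
    ip route get 10.133.100.10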

Services

  • The Apache instances were set up just to create a simple webserver to establish HTTP connections.  For transaction tests, I will be using external websites.
  • Bind is deployed primarily for easy name resolution of the lab devices, and to have another service running inside the lab.  Since ThousandEyes can do DNS tests it made sense to include.
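As a quick sanity check of the Bind piece, any agent or LAN host with connectivity should be able to query it directly.  The 10.133.100.10 address is the resolver configured during the Pi agent setup (presumably the lab’s Bind instance), and the record name below is a made-up example, so substitute a name your lab zone actually serves.

    # Query the lab DNS server directly (any response, even NXDOMAIN, proves it is answering)
    dig @10.133.100.10 webserver1.te.lab +short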

External Resources

  • The Docker, Windows, and Raspberry Pi agents are primarily just to provide the ability to test with those platforms.  The Docker and Pi agents are functionally similar to the Ubuntu agents running in the CML lab.  The Windows agent is an Endpoint agent, which brings a different set of functionality.  
  • I do expect that these agents will show better test performance than the ones in CML because there are fewer layers of abstraction.  I can’t imagine that an Ubuntu agent running on a minimum-spec VM inside KVM, which itself runs on the CML VM inside Workstation, is going to be the most efficient.  Add in the software layers for the routers connecting those agents, and the potential performance impact only grows.
  • As mentioned previously, internet access is required for ThousandEyes agents to reach the SaaS platform.  With that requirement in mind, it made sense to just use external websites for most of the testing instead of building elaborate web servers inside the lab.

Misc. Notes

  • Everyone has their preferred numbering scheme.  For this lab, I tried to come up with something that I could easily build on in a programmatic sense.  Yes, for the router links I could have used /30 or /31, but in a lab, I’m not worried about address consumption.  I built addresses based on the nodes being connected.
  • I’m sure someone somewhere will be upset that I don’t have passwords on the routers.  It’s a lab that I tear down frequently, and it sits inside a trusted network.  The risk of an attack is minimal, and the convenience of not having to log in to each device is worth it.
  • The Ubuntu server version was the latest at the time of writing, and I went with Windows 10 to avoid some of the issues with getting Windows 11 deployed.
  • With the complexity of the build in CML, I decided it was easiest to just publish the YAML code.  Initially, I had intended to write up exactly how to build the lab, and provide configs for each device, but as I built it out it became clear that doing so would be quite cumbersome.  Using the YAML file should give more consistent deployments, with less manual work to get the lab running.
  • I’ve had several requests to incorporate AWS into this lab.  Currently, that’s outside the scope of the roadmap I have for this series.  The primary reason for that is because of the cost associated with AWS.  Once I get through the posts I have planned for this series I plan to investigate if I can leverage the AWS free tier to get useful data.
  • Despite most of the routers being in provider networks, each router has SNMP running.  The reason I did this was to show how ThousandEyes can use SNMP to add additional context to data, and in some cases, it can be used to trigger alarms.  In a real-world scenario you likely can’t get SNMP from provider networks, but you also likely have more than two network devices at a location.  The decrease in realism is more than made up for by not having to build out a complete LAN environment.
I’m sure there are plenty of things that I forgot to include here, and likely some good ideas that I didn’t even think about.  If you have any questions on the lab design, please leave a comment below, or you can reach me on Twitter – @Ipswitch