Article: How to Efficiently Discover Network Resources

July 27, 2018

Thomas Stocking, co-founder and vice president of product strategy, recently wrote an article titled How to Efficiently Discover Network Resources, featured in The Data Center Journal. The article discusses network discovery tools and processes, and why it’s important to automate and standardize them. Many business processes (security management, service delivery and service support) depend on the administrator’s knowledge of the network details.

Networks are the backbone of IT. Unless they have a problem, they are (or should be) invisible. If they break, things fall over—sometimes big things that affect revenue. Networks are also organic in a sense: they can be expanded slowly over time, and bits and pieces can move from place to place, be repurposed, and be left in suboptimal states for months or years.

Just knowing where all the pieces are—what’s plugged in where—is a huge task if done manually. Fortunately, this area benefits from standardization. Data sets can be mined to acquire the network inventory and topology and to automatically keep it up to date with minimal effort.

Simple Network Management Protocol (SNMP) is still the main tool for interrogating network devices, but it’s by no means the only one. The link layer of networks has a wealth of information about what physical connections exist, and it can be interrogated using the Link Layer Discovery Protocol (LLDP). Cisco has a proprietary discovery protocol called Cisco Discovery Protocol (CDP) that’s robust and rich in its ability to find neighboring devices. And don’t forget the Address Resolution Protocol (ARP), which is typically cached and can be interrogated. This is really just the beginning: most administrators never need to go deeper than these protocols to do their jobs, but the network abounds with information about its routing, linking and segmentation of data.
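As a minimal sketch of mining one such data source, here is how the kernel’s cached ARP table can be turned into an IP-to-MAC inventory. The field layout follows Linux’s /proc/net/arp; the sample text below is illustrative, not taken from any real network.

```python
def parse_arp_table(text):
    """Parse a /proc/net/arp-style dump into an {ip: mac} mapping."""
    entries = {}
    lines = text.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            ip, _hw_type, _flags, mac = fields[:4]
            if mac != "00:00:00:00:00:00":  # skip incomplete entries
                entries[ip] = mac.lower()
    return entries

sample = """IP address       HW type     Flags       HW address            Mask     Device
192.168.1.1      0x1         0x2         AC:DE:48:00:11:22     *        eth0
192.168.1.50     0x1         0x0         00:00:00:00:00:00     *        eth0
"""
print(parse_arp_table(sample))  # {'192.168.1.1': 'ac:de:48:00:11:22'}
```

A real tool would read the live cache (or walk the equivalent SNMP tables) rather than a pasted dump, but the mining step is the same: normalize the raw records into a structured inventory.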

Many tools can help map our networks, and these tools range in price from free and open source to many hundreds of thousands of dollars. Do you get what you pay for? Usually, but there’s a sweet spot for network engineers who know enough to be able to use tools that are powerful and sophisticated, supported, and—often—open source.

To keep track of issues such as network-topology changes, to track devices and maintenance contracts, and to manipulate network switch and router configurations, administrators can use tools that take advantage of a variety of discovery technologies to locate infrastructure devices across the network, and then another set of technologies to probe and query those discovered devices for details. Typically the retrieved data is accessible through a web interface and/or an API. The web interface often displays information about those network devices and their relationships, giving users the capability to perform network-management tasks, find the suboptimal places and fix them, and receive alerts regarding network changes as they happen.
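The “devices and their relationships” data such tools collect boils down to a simple model: a record per device plus the adjacencies learned from LLDP or CDP. A minimal sketch, with field names that are illustrative rather than any particular tool’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """One discovered device and its link-layer neighbors."""
    ip: str
    hostname: str = ""
    neighbors: set = field(default_factory=set)  # hostnames seen via LLDP/CDP

def link(a, b):
    """Record a discovered physical adjacency in both directions."""
    a.neighbors.add(b.hostname)
    b.neighbors.add(a.hostname)

core = Device("10.0.0.1", "core-sw1")
edge = Device("10.0.0.2", "edge-sw7")
link(core, edge)
```

From an adjacency set like this, a topology map, a neighbor report, or a change alert are all straightforward queries.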

Clearly, no tool could do all this without using the SNMP, LLDP, CDP and ARP protocols, and more besides. Although administrators can employ commercial tools, such tools tend to be complex and lacking in transparency. Most network administrators know that the devil is in the details: you have to read the logs, test the functions and verify the connectivity at the lowest level. The best tools will do so and show you exactly what they’re doing.

Here are some important features to look for in discovery tools:

  • Intelligent topology awareness
  • MAC-address mapping/tracking
  • Auto-generated network maps and monitoring dashboards
  • Rogue- and missing-device detection
  • Extensive reporting (devices, modules, interfaces, assets, nodes)
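MAC-address mapping and tracking depends on one unglamorous detail: devices report MACs in different formats (colon-separated, hyphen-separated, Cisco’s dotted quads), so a tool must canonicalize them before it can correlate sightings. A sketch of that normalization step:

```python
import re

def normalize_mac(raw):
    """Canonicalize a MAC address to lowercase colon-separated form.
    Accepts colon, hyphen and Cisco dotted (xxxx.xxxx.xxxx) formats."""
    hexdigits = re.sub(r"[^0-9A-Fa-f]", "", raw)
    if len(hexdigits) != 12:
        raise ValueError(f"not a MAC address: {raw!r}")
    return ":".join(hexdigits[i:i + 2] for i in range(0, 12, 2)).lower()

print(normalize_mac("AC-DE-48-00-11-22"))  # ac:de:48:00:11:22
print(normalize_mac("acde.4800.1122"))     # ac:de:48:00:11:22
```

With every sighting reduced to one canonical string, tracking a device as it moves between switch ports becomes a simple lookup.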

How to Use Discovery Tools

Keeping things simple and contained eases automation. We always recommend setting up the discovery tools on secure servers that have good connectivity to the entire network and that use local storage (typically a database). Typically this means a Linux system with a particular technology stack, such as LAMP (Linux, Apache, MySQL and PHP), but there are many possible variants. The main things to remember are placement and security. You want to make this system hard to breach but able to reach everything.

Your discovered data should reside in an efficient database with an API in front of it to facilitate queries, and it should be kept up to date by subsequent discovery runs. The timing and scope of such runs should be transparent and use standard scheduling methods (e.g., cron), and the tool should support your choice of update versus replace operations.
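The update-versus-replace distinction is easy to see in code. A minimal sketch, assuming SQLite as the local store and an illustrative one-table schema: replace wipes the table and loads the new run wholesale, while update merges it, refreshing rows for re-discovered devices and keeping the rest.

```python
import sqlite3

def save_discovery(conn, devices, replace=False):
    """Persist one discovery run.

    replace=True  -> drop stale rows first, then load the run wholesale.
    replace=False -> merge: re-discovered devices update their existing rows.
    """
    cur = conn.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS devices (
        ip TEXT PRIMARY KEY, hostname TEXT, last_seen TEXT)""")
    if replace:
        cur.execute("DELETE FROM devices")
    cur.executemany(
        """INSERT OR REPLACE INTO devices (ip, hostname, last_seen)
           VALUES (:ip, :hostname, :last_seen)""",
        devices)
    conn.commit()

conn = sqlite3.connect(":memory:")
save_discovery(conn, [{"ip": "10.0.0.1", "hostname": "core-sw1",
                       "last_seen": "2018-07-27"}])
# A later run sees the same device again; update mode refreshes its row.
save_discovery(conn, [{"ip": "10.0.0.1", "hostname": "core-sw1a",
                       "last_seen": "2018-07-28"}])
```

Scheduling the runs themselves is then just a cron entry invoking the discovery script.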

You need a web UI that cleanly portrays the results and offers you easy ways to filter and sort through the data. Reports should be configurable as well as easy to access and export. Hooks to asset managers and software inventories are a major plus for those who need to see and track that detail. Interestingly, much of what network administrators do these days is about maintaining support contracts, so don’t miss this point.
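Easy export usually means a plain, line-oriented format that asset managers and spreadsheets can ingest. A sketch of a CSV export using only the standard library; the field names are illustrative:

```python
import csv
import io

def export_report(devices, fieldnames=("ip", "hostname", "model")):
    """Serialize discovered-device records as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(devices)
    return buf.getvalue()

report = export_report([{"ip": "10.0.0.1", "hostname": "core-sw1",
                         "model": "WS-C3850"}])
```

The `extrasaction="ignore"` choice lets the same records carry extra discovery detail (serial numbers, contract dates) without breaking a narrower report.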

Discovery Is a Process

Networks are constantly changing; there’s no such thing as a static network. Someone is always plugging something new in, moving things around and replacing broken or aging devices. Discovery must therefore occur frequently (how frequently is up to you) to be valuable. The automation of network management depends on clean, clear, accurate and frequent discovery. Many business processes (security management, service delivery and service support) depend on the administrator’s knowledge of the network details.

If you have a data center and manage networks, you need to look at them with discovery tools that employ standard protocols. How better to check on the function of a network than to ask it how it’s working, using the same protocols it uses to function? That’s our philosophy, and we use it for monitoring everything.