This post is part of a series on building my Ruckus home laboratory environment. Previous articles in this series include:
- Building a Ruckus Home/Office Laboratory on a Budget – Part 1
- Building a Ruckus Home/Office Laboratory on a Budget – Part 2
- My Ruckus Laboratory – Home Network Architecture & Limitations
- My Ruckus Laboratory – Physical Network
In this post I document the high-level network topology inside my Dell R610 server and share some of my thoughts on why it is designed the way it is.
Choosing a Topology
A Flat Network
The simplest possible network topology is a flat network with all of the virtual machines, APs and clients in the same subnet. Once you’re done building out the physical network, simply place your virtual machines inside the host on a single subnet behind a virtual switch and you’re good to go, right? In a way, you are: this is the simplest possible deployment and the quickest way to get your hands dirty with the various machines, but it is also the most limited approach.
A flat network does not allow you to test scenarios where a SmartZone controller is deployed using 3 separate interfaces for Management, Control and Clustering. It cannot test scenarios where APs and controllers communicate through NAT. It cannot test scenarios where clients require a tunneled connection for mobility between subnets. It prevents you from testing things like dynamic VLANs and policy control. It also removes the ability to learn about the many other entities and technologies that are at play in a network and how they interact with the Ruckus products, and that is really one of the fundamental reasons I am building this laboratory.
A Layer 3 Network
Overall, it makes the most sense to build a layer 3 network for the virtual network, but what should this network look like? I could simply slap down a virtual router and run my laboratory network with all of the entities connected to it. This is a great deal better than the flat network described above and will enable you to test many more features, but it still doesn’t quite hit the mark in my book.
Before I dive into the topology that I have chosen for my virtual network, let’s consider the three primary requirements of the virtual network topology.
First, the chosen network topology must allow testing of as many features and technologies as possible without requiring major architectural changes. I want this laboratory to be as productive as possible, and for that I need to be able to test a feature or a configuration whilst making only limited changes to the underlying network.
Second, if the laboratory is really to have any value, it must be roughly analogous to most customer network topologies in the field. That way, when I test a feature or a configuration, the result should map onto an as-yet-unknown production network as easily as possible. This is key for anyone reading about testing I have done who wishes to use the same approach in their own network.
Third, the network topology should act as a template that enables best-practice network design, segmentation and security. The motivation here is to reinforce network design best practices and address topics including network performance and security.
The shape I have chosen for the virtual network topology is that of a triangle; hardly an original choice, but there it is. The points at the base of the triangle represent two remote locations, separated from the network core at the apex of the triangle by Layer 2/Layer 3 connections.
Whether you are looking at an enterprise network with a head office and several remote offices/locations, a carrier network with multiple sites connected into a regional core network, or a campus network with multiple buildings, you will always be able to discern this shape. Even if we think of a scenario in which a hotel group chooses to manage multiple properties from a single controller in a datacenter reachable over the Internet, this shape persists in some form. Of course, the exact protocols that run between the sites will differ between the different scenarios. In a campus network you would be likely to encounter OSPF, or perhaps some campus fabric implementation; in an enterprise network you may encounter SD-WAN or MPLS. In our last example, you’d be likely to encounter connectivity directly over the Internet with NAT traversal on either side. Either way, the shape holds. The additional benefit of this shape is that it enables you to test communications from the edge to the core network and between two edge locations.
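The triangle can be sketched as a tiny site graph. The following is a minimal illustration (site names are my own invention, not anything from the lab configuration) showing that traffic between the two edge sites transits the core, exactly the property the shape is meant to exercise:

```python
from collections import deque

# Hypothetical site graph for the triangle topology: two edge sites at the
# base, the core at the apex. There is no direct edge-to-edge link.
links = {
    "edge-a": {"core"},
    "edge-b": {"core"},
    "core": {"edge-a", "edge-b"},
}

def shortest_path(src, dst):
    """Breadth-first search over the site graph."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in links[path[-1]] - seen:
            seen.add(neighbour)
            queue.append(path + [neighbour])
    return None

# Edge-to-edge traffic is forced through the core.
print(shortest_path("edge-a", "edge-b"))  # ['edge-a', 'core', 'edge-b']
```

Whatever actually carries the inter-site links (OSPF, MPLS, plain Internet with NAT), the reachability relationships stay the same, which is what makes the shape a useful template.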
Virtual Network Topology
The final virtual network topology showing the segmentation of the laboratory network according to services is shown below.
Abstraction & Simplification
In order to escape the ever-growing complexity of assigning and tracking subnets on the fly, I have subdivided my virtual network into functional groups. The functional group into which an entity is placed tells me what services the entity should be providing and which other entities it should be communicating with. For instance, I don’t want any of the client subnets to be able to reach the Core Network Services or OSS subnets that manage and run the network. I also only want subscribers to be able to use specific services/protocols in the Subscriber Core Services and Subscriber Services subnets. The only entities that should be able to manage the network are those in the Management Access group.
Organizing the network entities this way also gives me an idea of how many subnets I may need in each functional group, and lets me assign IP address ranges at the functional level in a predictable manner that allows for future customization and expansion.
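To make the idea concrete, here is a sketch of that kind of per-group addressing plan using Python's standard `ipaddress` module. The supernet, prefix lengths and block order are all my own illustrative assumptions, not the actual ranges used in the lab:

```python
import ipaddress

# Hypothetical plan: carve one private /16 into a /20 block per functional
# group, then hand out /24 subnets inside each block on demand.
LAB = ipaddress.ip_network("10.99.0.0/16")
GROUPS = ["Core Network Services", "OSS", "Management Access",
          "Subscriber Core Services", "Subscriber Services"]

# Each group gets the next /20 block, in a fixed, predictable order.
blocks = dict(zip(GROUPS, LAB.subnets(new_prefix=20)))

def subnet_for(group, index):
    """Return the Nth /24 inside the group's /20 block (16 per group)."""
    return list(blocks[group].subnets(new_prefix=24))[index]

print(blocks["OSS"])         # 10.99.16.0/20
print(subnet_for("OSS", 0))  # 10.99.16.0/24
```

Because the blocks are carved in a fixed order, you can tell which functional group an address belongs to at a glance, and each group has room to grow without renumbering its neighbours.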
Security & Role Based Design
Each entity within a functional group will also have a customized security profile based on its role. For instance, both Super Admin and Network Admin class users (on separate subnets) have connectivity to the OSS network and to the entities that provide services in the Subscriber Services and Subscriber Core Services subnets. An entity in the network attempting to relay API commands to the SmartZone controller would have to reside in one of the Management Access subnets and have access to the controller API with a valid username and password.
Only Super Admins have the ability even to reach the Core Network Services subnet to manage the entities there, let alone attempt a login. In addition, the only entities capable of receiving services from the Core Network Services subnet are network devices on their own management VLANs.
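The reachability rules above boil down to a small role-to-group matrix. The sketch below encodes them as plain data; the role names and the exact matrix are assumptions drawn from the description, standing in for whatever firewall or ACL rules actually enforce this:

```python
# Hypothetical role-based reachability matrix for the lab's functional groups.
# In practice this would live in firewall/ACL rules, not application code.
ALLOWED = {
    "super_admin":   {"OSS", "Core Network Services",
                      "Subscriber Services", "Subscriber Core Services"},
    "network_admin": {"OSS", "Subscriber Services",
                      "Subscriber Core Services"},
    "subscriber":    {"Subscriber Services", "Subscriber Core Services"},
}

def may_reach(role, destination_group):
    """True if the role's source subnet may reach the destination group."""
    return destination_group in ALLOWED.get(role, set())

print(may_reach("network_admin", "Core Network Services"))  # False
print(may_reach("super_admin", "Core Network Services"))    # True
```

Keeping the policy as a matrix like this makes it easy to audit: every role/group pair is either explicitly allowed or denied by default.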
OSS Network Subnets
The OSS network provides a management interface to the OSS infrastructure, including the SmartZone controller, SmartCell Insight analytics, SPoT location services and any other NMS (SNMP/Syslog, etc.) that I have chosen to learn about. It is useful to note that if the SmartZone controller is deployed with 3 separate interfaces, the Management subnet will be the one that interfaces with Management Access. The AP Control interface will be given its own policy.
Ruckus’ SmartZone platform provides a wide range of deployment options and features such as CALEA mirroring for lawful intercept and roaming of clients between Data Planes. I have placed the virtual SmartZone – Data Planes asymmetrically in the network so that I can quickly demonstrate the various deployment options and test the software’s features. One is placed in the core network, whilst the other is placed locally in Edge Network B.
That’s all for now!