In the previous post I covered the basic motivation for needing to build a laboratory for working with the growing range of Ruckus Wireless products and touched lightly on the use of NFV in the laboratory as a method of saving money. In this post I will give a basic overview of the hardware requirements of the laboratory.
The laboratory will consist of the following key entities:
- Physical network connectivity
- Virtual Machines
- x86 Server/s
- Virtual network connectivity
Physical Network Connectivity
NAT Router
The first and most obvious item you will need is a NAT router to get your lab network out to the Internet and to get remote access back into your lab. I am not going to be too prescriptive here; I trust you can choose something that meets your needs! If you are looking for tips, keep reading my blog, as I will be documenting how I built my network!
L2 / L3 Switch
You’re going to need something to connect your x86 servers to the physical network and you’re also going to need something to connect / power your APs. Physical network connectivity also gives you quick and easy access to the hypervisor host without having to worry about the state of the virtual environment. Enter the Ruckus ICX 7150-C12P.
The Ruckus ICX 7150-C12P is a 12 port switch that packs a punch. It can provide PoE+ (802.3at / 30 Watts) on any of its 12 ports (total power budget of 124 Watts) using a fan-less design for silent operation – a useful feature in a home lab!
It has two dedicated 10/100/1000 RJ-45 Ethernet ports for uplink connections and two additional 1/10 GbE SFP/SFP+ ports for uplinks or stacking. The 1/10 GbE ports allow you to stack up to 12 of the 7150 family switches together over distances of up to 10km. You can also stack different switch variants from the same family. Stacking two switches together can be accomplished using only a single 10 Gb/s link in a linear stack topology, leaving two 10 Gb/s links in the stack free for other connections.
If using the L3 firmware image you can use features like static routes and Routing Information Protocol (RIP). An additional license provides more advanced features including OSPF routing, Virtual Router Redundancy Protocol (VRRP), Equal Cost Multi-Path (ECMP) routing, Policy Based Routes (PBR) and Protocol Independent Multicast (PIM).
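To give a flavour of the L3 image, here is a minimal sketch of a lab configuration on the ICX 7150: two VLANs with virtual routing interfaces and a static default route toward the NAT router. All VLAN numbers, port ranges and addresses below are hypothetical – adjust them to your own addressing plan.

```
! Hypothetical lab addressing: a management VLAN for the hypervisor
! hosts and a separate VLAN for the APs
vlan 10 name Management by port
 untagged ethernet 1/1/1 to 1/1/4
 router-interface ve 10
vlan 20 name APs by port
 untagged ethernet 1/1/5 to 1/1/8
 router-interface ve 20
! Virtual routing interfaces for each VLAN
interface ve 10
 ip address 192.168.10.1 255.255.255.0
interface ve 20
 ip address 192.168.20.1 255.255.255.0
! Default route toward the (hypothetical) NAT router
ip route 0.0.0.0 0.0.0.0 192.168.10.254
```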
If you are planning on working with Ruckus ICX switches more often, this will be the right switch to start getting used to their basic features! Check out the Ruckus ICX 7150 product family brochure here.
High-End 802.11ac AP Power Requirements
Some of the latest generation 802.11ac APs available in the market today like the Ruckus R720 boast multiple Ethernet ports, onboard USB ports and support NBASE-T compatible 2.5Gb Ethernet connectivity. These APs and others like them can require more than even 802.3at PoE+ can deliver in order to use all of their peripheral hardware. Most AP products like the R720 are capable of operating at 802.3at or even 802.3af power levels, but disable some of their peripherals when operating in this mode.
If you are desperate to use the peripheral interfaces/features on your AP in the laboratory, hate power injectors, and/or absolutely need 2.5 Gb Ethernet connectivity, then it would be a good idea to look at using a more capable switch such as the Ruckus ICX 7150-48ZP which provides 16 x 2.5Gb Ethernet ports with full PoH power (100 Watts) per port. This in my opinion is overkill, but hey, some of you out there may need it!
Virtual Machines
The first thing we need to determine is the amount of resources we need! The laboratory should be able to host the following Ruckus products:
- Two virtual SmartZone (vSZ) Controllers
- Two virtual SmartZone Data Planes (vSZ-D)
- One SmartCell Insight (SCI) instance
- One Cloudpath Enrollment Server
- One SPoT Location Based Services instance
The laboratory should also have sufficient processing power and memory to host additional servers and virtual machines including:
- One / Two virtual routers
- LDAP / RADIUS
- TACACS/TACACS+ server
- NMS / Analytics
- Other virtual instances of products you wish to test
In order to avoid any copyright / confidentiality infringements, I can only provide the overall hardware requirements that I have calculated for the laboratory. Specific data about the hardware requirements of each virtual appliance is available on the Ruckus Support site, which you can access using your own support credentials. Adding up the minimum requirements for each Ruckus product, and factoring in roughly 30% growth in processing requirements, memory footprint and storage over time (we want this to last!), gives the following totals:
- CPU: 24 vCPUs / 12 Cores
- RAM: 96 GB
- Storage: 800 GB
We should also add in some extra resources for any other VMs and products we may want to play with, err, I mean, “test”. The final hardware requirements that I have defined for my laboratory are:
- CPU: 32 vCPUs / 16 Cores
- RAM: 128 GB
- Storage: 1 TB
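The headroom calculation above is simple enough to script. Here is a small Python sketch that sums per-appliance minimums and applies the growth factor; the per-appliance figures below are purely hypothetical placeholders – substitute the real minimums from each product's documentation on the Ruckus Support site.

```python
import math

def size_lab(appliances, headroom=0.30):
    """Sum per-appliance minimums and add growth headroom,
    rounding each total up to the next whole unit."""
    totals = {}
    for key in ("vcpu", "ram_gb", "disk_gb"):
        base = sum(a[key] for a in appliances)
        totals[key] = math.ceil(base * (1 + headroom))
    return totals

# Hypothetical minimums -- look up the real figures for each
# appliance before sizing your own server.
appliances = [
    {"vcpu": 4, "ram_gb": 13, "disk_gb": 100},  # vSZ #1
    {"vcpu": 4, "ram_gb": 13, "disk_gb": 100},  # vSZ #2
    {"vcpu": 3, "ram_gb": 6,  "disk_gb": 10},   # vSZ-D #1
    {"vcpu": 3, "ram_gb": 6,  "disk_gb": 10},   # vSZ-D #2
    {"vcpu": 4, "ram_gb": 16, "disk_gb": 250},  # SCI
    {"vcpu": 2, "ram_gb": 6,  "disk_gb": 40},   # Cloudpath
    {"vcpu": 4, "ram_gb": 16, "disk_gb": 100},  # SPoT
]

print(size_lab(appliances))
```

From there it is easy to experiment with different headroom values, or to add rows for the virtual routers and other peripheral VMs you plan to run.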
x86 Server/s
Selecting the right CPUs for your virtual environment is crucial. I strongly recommend using the Intel Core i7 or the Intel Xeon E5-2XXX series processors or newer. According to what I have found in the Ruckus deployment guides, CPUs must be Intel Xeon E55XX series or above, which I believe is part of a requirement to support the Intel DPDK and Intel Virtualization Technology for Direct I/O. Here is a list of Intel CPUs that support Intel Virtualization Technology, Virtualization Technology for Direct I/O, and Hyper-Threading Technology. When I installed VMware ESXi 6.5.0 I received a prompt warning me that the Intel Xeon X5650 CPUs may not be supported in future versions of ESXi. So try to get yourself something with the E5-26XX series or newer!
It is useful to note that you don’t need a huge number of network interfaces for this laboratory. Each physical machine should obviously have at least one 1 Gb/s Ethernet interface. Testing out a 10 Gb/s link on the vSZ-D may sound cool, but really it just makes the whole lab more expensive and doesn’t actually do anything except let you go “Ooooh!”. That said, if you can pick something up with 10 Gb/s interfaces and want to use them in anger, the 7150 switch will handle it just fine!
Intel NUC – Skull Canyon
The Intel NUC Skull Canyon is a brilliant small machine and highly suited to this kind of work. It contains an Intel Core i7-6770HQ quad-core processor, supports up to 32GB of RAM and can use very fast SSD storage. A unit with 32GB of RAM and 500GB of SSD storage would set you back about $1,000 on Amazon. If you simply want a lab that hosts a single virtual SmartZone Controller, virtual SmartZone Data Plane and a virtual router with some other peripheral software, then this would be a great bet! It also makes a fantastic option if you are a road warrior and find yourself needing to take your lab equipment with you on customer visits. However, to provide the CPU and memory resources for all of the products described above, you would need three or four of these machines, and that quickly becomes expensive and unwieldy to manage in comparison to other options.
I found a list of ten good home servers for 2017 on TechRadar. But after investigation (feel free to do your own) I found that the sufficiently powerful contenders cost about $4,000 or more for a server that contains the required CPU cores and RAM. If you’re happy to spend the money, this could be a path for you, but if you’re going to spend that much… why not just buy the Intel NUCs and repurpose them for a fun LAN gaming weekend every now and then?
If silence is a key requirement of your lab, then the right place to look is at the products available on EndPCNoise.com. They have a good selection of servers that are specifically built to be quiet, but like the home servers above, are quite expensive for our use case.
If you are not worried about buying refurbished servers (this is a home lab), don’t mind a little noise, don’t care about moving your lab around with you, and can accommodate a rack-mount solution, then a refurbished server could be the way to go! (EDIT – July 25th, 2017: As the folks over at CBT Nuggets have noted, the Dell R710 is a great server to grow into!)
Deep Discount Servers (DDS) purchases decommissioned servers in large volumes, allowing them to sell the refurbished equipment at surprisingly low cost. All equipment comes from qualified sources and is thoroughly tested. Most importantly: DDS equipment ships globally and has a solid returns policy. If you buy from the DDS website directly, you can customize your order by adding CPUs, RAM, storage and other peripherals. If you opt to purchase via the DDS eBay store, you could get a much better deal on a pre-built server that you can customize later. Having had a look around, I am confident you will find a server with the necessary resources for approximately $1,500 to $2,000.
Aventis Systems is another company that sells refurbished systems and is worth a comparative look when shopping for refurbished products. Aventis offers a wide array of customizations and additions, enabling you to build your server from the ground up as if it were a new system.
My Lab Hardware (Updated – July 25th 2017)
In my own lab I am using a refurbished Dell R610 from Deep Discount Servers (DDS) with the following specs:
| Component | Specification |
| --- | --- |
| CPU | 2 x Intel Xeon X5650, 6 core, 2.67 GHz (12 cores / 24 vCPUs total) |
| Memory | 128 GB DDR3-1333MHz (8 x 16GB) |
| Storage | 2 x Seagate 2TB, 2.5″, 7200RPM, 12 Gb/s SAS HDD |
| Networking | 4 x 1 Gb/s RJ-45 interfaces |
| Power | Dual redundant power supplies |
Since we are building a Ruckus laboratory, the hypervisor can be either VMware ESXi 5.5 or later, or KVM on CentOS 7.0 64-bit. Both of these hypervisors are available free of charge. I am running the free version of ESXi 6.5 (6.5.0a, available here) on my hardware as I know my way around this product better.
I have had a look at things like Docker and Canonical’s LXD, and it makes a lot of sense to learn about this technology. If you do decide to use containers in this environment, it will most likely be Docker inside a Linux VM on top of the ESXi hypervisor. This will give you the ability to spin up a multitude of small containers quickly and easily inside a single Linux OS, at a much higher density than ESXi can achieve with full virtual machines. That could be a real game changer when you’re trying to replicate scenarios or spin up services quickly and easily in a constrained environment. It will also simplify a lot of your networking efforts inside the hypervisor layer, as containers are hidden behind a NAT inside their Linux host.
Virtual Network Connectivity
The Virtual Router
You may be asking yourself: “Why do I need a virtual router if the Ruckus ICX 7150 is already performing layer 3 services?”
Here are five good reasons why having a virtual router (or even more than one) is a good idea.
Flexible Network Topologies
Many of the entities you are testing and learning about can be deployed in multiple different Layer 2 or Layer 3 network configurations in the real world, and this laboratory should be able to replicate those configurations inside the hypervisor. For instance, the vSZ can have a single interface and IP address for AP control, clustering and management, or it can have three separate interfaces on three separate subnets. The vSZ-D can communicate with the vSZ controller via a NAT or directly over a Layer 3 network. The APs too can be placed behind a NAT, or not. The other entities in the network may also need to be placed into separate network segments. Running a flat network will be simple, but isn’t going to give you the flexibility you need to implement, experiment with or learn about supported network topologies.
Minimizing Physical Link Utilization
Some of the virtual appliances require Layer 3 connectivity. You may only have a single physical NIC entering your server. Do you really want all that traffic between Layer 3 entities going back and forth to the L3 switch over the ONLY Ethernet link?
Compatibility with Other Network Environments
You may need to NAT out of the lab to get onto your home network, or into a customer’s network for the purpose of a trial or proof of concept. You can’t always change the network you have to plug into; NAT is a good way of taking all that pain away.
Separating the Virtual and Physical Network Environments
You can use the 7150 switch to provide Layer 3 services to locally connected APs and to the hypervisor hosts on separate subnets. The virtual routers in the virtual environment can manage connectivity to the virtual entities. This configuration allows you to keep the hypervisor host network out of scope in testing and customer trials.
Remote Access
The Layer 3 switch is not going to give you the ability to gain remote access to your lab environment. You also want to be able to limit remote access to the virtual environment only. This is useful when allowing someone else to work and play in the virtual lab. Something gone wrong? No worries, they can’t have touched the hypervisor or physical network – simply reset everything to the snapshot you took and carry on.
vRouter Firmware Options
Here are a couple of the options I am presently aware of for implementing a virtual router. If you have other options feel free to investigate those!
MikroTik RouterOS is a well-proven platform that can run inside an x86 server environment on top of the VMware ESXi hypervisor. It comes with a plethora of features and the ability to scale to very large networks. It also has a decent Web UI that you can use via a browser if that’s your thing. The RouterOS software also supports an API that closely mimics the CLI commands. For the purposes of this laboratory, the Level 4 license will prove more than sufficient. Each license costs only $45.00, which is great value considering the capabilities it gives you.
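As a taste of the RouterOS CLI, here is a minimal sketch of the NAT scenario described earlier: masquerading everything from a virtual lab subnet out of the interface facing the physical network. The interface names, subnet and gateway address are hypothetical – substitute your own.

```
# ether1 faces the physical lab network, ether2 the virtual lab subnet
/ip address add address=192.168.100.1/24 interface=ether2
# Masquerade (source NAT) all traffic leaving toward the physical network
/ip firewall nat add chain=srcnat out-interface=ether1 action=masquerade
# Default route toward the (hypothetical) upstream gateway
/ip route add dst-address=0.0.0.0/0 gateway=192.168.10.254
```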
If any of you out there have used the older Vyatta Core routers and were sad to see them meet their demise, the VyOS router is for you! VyOS is an open source project that continued from where the Vyatta Core project ended. This is a very capable router without the frills. If you haven’t had a peek at this, you really should.
That’s all for now!