How DreamHost Builds Its Cloud

This is the first in a series of posts about how DreamHost builds its Cloud products, written by Luke Odom straight from data center operations.

Clouds are amazing. Not the big floating buckets of water, but cloud computing, where users don't have to worry about the underlying hardware infrastructure. With a traditional hardware install, if you don't get a sufficiently powerful machine (or if your needs grow), you have to physically add RAM, change the processor, or add new drives. With a cloud, you can change the type, or "flavor," of your machine and suddenly your application's performance improves dramatically. You can add and remove RAM, processor cores, and drive space, and even change physical location as needed.
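To make that concrete, here is a minimal sketch of what a flavor change looks like on an OpenStack-based cloud like ours, using the openstacksdk Python client. The cloud name, server name, and flavor below are hypothetical placeholders, not actual DreamHost Cloud names.

# Minimal sketch: resizing a VM to a new flavor with openstacksdk.
# The cloud, server, and flavor names are hypothetical placeholders.
import openstack

# Credentials come from a clouds.yaml entry or OS_* environment variables.
conn = openstack.connect(cloud="dreamcompute")

server = conn.compute.find_server("my-app-server")
flavor = conn.compute.find_flavor("bigger-flavor")

# Ask the cloud to move the VM to the new flavor...
conn.compute.resize_server(server, flavor.id)

# ...then confirm once the server reaches VERIFY_RESIZE.
server = conn.compute.wait_for_server(server, status="VERIFY_RESIZE")
conn.compute.confirm_server_resize(server)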

The cloud gives you, the customer, an incredible amount of power and flexibility. To provide this flexibility, the DreamHost Cloud team needed to design the backend infrastructure in a way that could support the virtual machines our customers wanted to run and the software we wanted to run.

This series of blog posts gives you a look from the data center, showing you the exact hardware your VM runs on and explaining how we came to choose it. For each major component of DreamHost's OpenStack-based cloud architecture, we will describe the choices available and explain why we made each decision.

First OpenStack Implementation: DreamHost Cloud Beta cluster

Before we dive into what our new hardware looks like, let's take a look back at what our original hardware was. Four years ago, we built our first OpenStack cluster, which was uncharted territory for us. We lacked meaningful data, and it took significant brainstorming and guesswork to get through the hardware selection phase and get the original cluster operational. Once we allowed customers in, we collected lots of feedback and used it to help shape the new hardware.

The DreamCompute Beta cluster used AMD Opteron 6200 series processors. These processors allow an incredible 64 cores per machine, although each core is relatively slow. With so many cores per machine, we could put a large number of virtual machines on a single host. We also had the BIOS configured to throttle CPUs when they weren't in use as a power-saving measure. The specs looked great, showing off things like the ludicrous flavor: a 32-CPU, 64GB RAM virtual machine size. Customer feedback was not as exciting, though. We learned that having many cores was important to users, but having powerful cores was more important. While designing the new cluster, we prioritized a careful balance between powerful cores and plentiful cores.

Another problem that came with that much density was the necessity of two 1600W power supplies for the machines housing the hypervisors. Because of the number of processors and RAM sticks in each system, combined with the fact that each chassis housed two systems, the servers couldn't run off a single power supply. This meant that redundant power was impossible. Any time there was a power blip, maintenance, a PDU death, or a cord coming loose during a game of data center flag football, we would lose power to hypervisors. This problem was exacerbated by the fact that the power cables that came with the machines had C13 connectors slightly smaller than the sockets on the PDU, so the cables slipped out easily. We made power redundancy another top priority in our new cluster.
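To make the redundancy math concrete, here is a back-of-the-envelope sketch. The per-component wattages are illustrative assumptions, not measurements from the actual beta hardware; the point is that a dual-system chassis full of processors and DIMMs can easily exceed what one 1600W supply can deliver on its own.

# Back-of-the-envelope check on why the beta chassis needed both supplies.
# All wattages below are assumptions for illustration, not measured values.
PSU_CAPACITY_W = 1600            # each of the two power supplies
SYSTEMS_PER_CHASSIS = 2          # two systems shared one chassis

cpu_w = 4 * 115                  # four Opteron-class sockets, ~115 W each (assumed)
ram_w = 32 * 5                   # 32 DIMMs at ~5 W each (assumed)
other_w = 250                    # drives, fans, NICs, board overhead (assumed)

chassis_draw_w = SYSTEMS_PER_CHASSIS * (cpu_w + ram_w + other_w)
print(f"Estimated chassis draw: {chassis_draw_w} W")                     # ~1740 W
print(f"Fits on one 1600 W supply? {chassis_draw_w <= PSU_CAPACITY_W}")  # False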

Ceph is the DreamHost-developed distributed storage software that handles block storage for DreamCompute. When we designed the first DreamCompute cluster, the Ceph team was still working hard to push out their first stable release. The only large-scale Ceph deployment we could base the DreamCompute storage backend on was our own DreamObjects, which was also fairly new. For the storage machines dedicated to DreamCompute, we chose hardware identical to what was running in DreamObjects.

The beta period taught us that the requirements of a large object storage cluster and a cloud computing storage cluster are quite different. A large object storage cluster, like DreamObjects, is meant to hold a lot of data that is infrequently accessed. Since most of the input and output (IO) is writes and the percentage of reads is very low, you can get away with low-end processors, very little RAM, and a simple RAID card. The storage behind a cloud computing backend, however, needs to be much faster. The data backend of an OpenStack cloud must handle many concurrent processes, such as VMs spinning up and down and applications like MySQL doing IO operations. Our DreamCompute beta cluster had almost 10 times more storage than we needed, but its IO performance was subpar because the cluster had low-end processors, RAM, and RAID cards. The third tenet of the new cluster became super-fast IO, designed for immediate access!
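To give a feel for the kind of IO that matters here, below is a tiny, purely illustrative Python sketch that measures random 4KiB read IOPS against a scratch file. It is the latency of small random reads like these, not bulk throughput, that makes or breaks something like MySQL running inside a VM. The file path, sizes, and read count are arbitrary, and without bypassing the page cache the numbers will be flattering.

# Illustrative sketch: measure random 4 KiB read IOPS against a scratch file.
# Path, file size, and read count are arbitrary; results will be inflated by
# the page cache since this doesn't bypass it.
import os, random, time

PATH = "scratch.bin"
FILE_SIZE = 256 * 1024 * 1024        # 256 MiB scratch file
BLOCK = 4096                         # 4 KiB, typical of database IO
READS = 2000

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

fd = os.open(PATH, os.O_RDONLY)
start = time.perf_counter()
for _ in range(READS):
    offset = random.randrange(0, FILE_SIZE - BLOCK)
    os.pread(fd, BLOCK, offset)      # one small random read
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{READS / elapsed:.0f} random-read IOPS, "
      f"{elapsed / READS * 1e6:.0f} µs average latency")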

One place where we got a lot of good feedback in the beta cluster was networking! With Ceph, all IO traffic goes over the network, including the traffic from the compute machines to the storage cluster and Ceph's internal rebalancing. To make sure all our machines could keep up, we gave every hypervisor and Ceph storage machine two 10Gbps connections to the switches in its rack and 40Gbps connections from each switch to the other switches in its "pod." The beta cluster had three pods, each containing three racks, for a total of nine racks. With this setup, we had fast and redundant connectivity from any point in the cluster to any other point. The new cluster didn't need many changes on the networking side.

Things we can’t change

When designing the infrastructure for a new product, there are a lot of factors to consider; some decisions are ours to make, while others are fixed constraints. The racks we use in our data centers are 9 feet (58 rack units) tall. These racks are already installed, so we had to use them. Each rack has two 60-amp, 3-phase, 208-volt power strips installed. To give you an idea of how much power that is, we can redundantly power about 250 60W bulbs (or 3,000 LED bulbs) per rack if we really want a bright data center. We also don't have much control over the processor/RAM/disk usage ratio. We can change the combinations we offer or change prices, but the ratio is driven by our customers' demand and has stayed fairly constant over time. The operating systems would be Ubuntu Linux on the hosts and Cumulus Linux on the switches, so we were limited to hardware that supports those operating systems. All of the decisions we made had to fit within these physical and software limitations.
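For the curious, here is a rough sketch of the arithmetic behind that bulb count. The 80% continuous-load derating and the 5W-per-LED figure are assumptions for illustration, and "redundantly" means a single strip has to carry the whole rack on its own, so the budget is one strip's capacity rather than two.

# Rough math behind "about 250 60 W bulbs per rack, redundantly powered."
# The 80% derating and 5 W LED figure are assumptions for illustration.
import math

VOLTS = 208           # phase-to-phase voltage
AMPS = 60             # breaker rating per strip
DERATE = 0.8          # load a circuit to only 80% continuously (assumed)

usable_w = math.sqrt(3) * VOLTS * AMPS * DERATE   # 3-phase power formula
print(f"Usable power per strip: {usable_w / 1000:.1f} kW")    # ~17.3 kW

# Redundant power means one strip must carry the entire rack by itself.
print(f"60 W incandescent bulbs: {usable_w // 60:.0f}")        # ~288
print(f"5 W LED bulbs: {usable_w // 5:.0f}")                   # ~3458

That lands in the same ballpark as the figures in the post, which leave a bit of extra headroom.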

This is our starting point. The next post will look at the server processor market and how we chose the processor for the new DreamHost Cloud cluster, also known as US-East-2!

About the Author:

Luke is the Director of IT Operations. He is responsible for the teams that keep operations running smoothly... In his free time, he enjoys reading fantasy/sci-fi and hanging out with his wife and 4 kids. Connect with Luke on LinkedIn: https://www.linkedin.com/in/luke-odom-039986a/
