Canopy, the cloud services line of Atos, has introduced Canopy Cloud Fabric, a managed Platform-as-a-Service based on the Pivotal CF distribution of the Cloud Foundry open-source PaaS.
Colocation web hosting provider Data Foundry recently opened its newest data center in Austin, Texas. The company claims the facility is the first “purpose-built, carrier-neutral data center in the Central Texas region.” Preparations for the opening of the new data center began in May 2011 when the company held a large job fair in Austin to hire talented employees for the new location.
Location of the Data Center
The data center is located on a 40-acre data ranch campus in Austin. This opening marks the initial phase of the 250,000 square foot facility, and that phase is already fully functional; the first customers have already begun to move in. Data Foundry currently operates two other carrier-neutral data centers in the state: the ADC, also located in Austin, and the HDC in Houston.
Texas 1 Specifications
Aptly named “Texas 1”, the facility has diverse, independent power, water and network feeds, with no single point of failure in its cooling, power or network systems. The site’s power is supplied by two independent substations, and the underground power connection is enclosed in a concrete duct, an extremely rare feature at colocation facilities.
The facility uses chilled-water cooling, which allows for flexible cooling solutions including high-density configurations and single-cabinet deployments, so even the most complicated, high-performance computing environments can be cooled effectively at Texas 1. The location also offers more than 17 network carriers, giving customers a choice of providers.
Announcement from Data Foundry
According to representatives from the Data Foundry implementation team, Texas 1 is the result of 17 years of experience operating data centers. Over those years the team toured facilities around the globe, and the lessons learned shaped the design of Texas 1 and position the company to compete on a global level. The facility’s flexible, innovative design is intended to benefit customers for many years.
A great deal of planning went into the design, development and opening of the data center. The company even offers a full online tour of Texas 1 so customers can see the facility for themselves. No detail was overlooked, giving customers complete security and control over their equipment in this high-tech colocation facility.
This post is intended to be a general guide for configuring “stickied” load-balanced HTTP servers. Whether you are using F5 load balancers, Foundry load balancers or open-source load balancers (keepalived/LVS), the concepts are the same and can be carried across those platforms.
If you have a pair of Foundrys and are looking to configure sticky, load-balanced HTTP servers, hopefully this guide will provide some assistance.
Logging into the load balancer
Telnet to the box and ‘enable’ to allow admin access. The first thing you want to do is show the current configuration to view the existing setup for other working boxes:
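A rough sketch of the session, with a placeholder management IP and hostname prompt:

    telnet 10.1.1.2
    ServerIron> enable
    ServerIron# show running-config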
Real servers: defining the multiple load-balanced boxes
Show the existing configuration on the Foundry:
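The prompt below again assumes a placeholder hostname; the sections to look for in the output are the “server real” and “server virtual” blocks:

    ServerIron# show running-config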
Take a look at the configuration of two “real” servers, i.e. the two servers behind the load balancer that will receive the balanced, sticky connections:
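A minimal sketch of what those entries might look like; the server names and IP addresses are made up for illustration, and exact syntax can vary by ServerIron model and firmware:

    server real tomcat1 10.1.1.21
     port 8001
    !
    server real tomcat2 10.1.1.22
     port 8001
    !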
The above example balances TCP 8001 traffic, which here is for Tomcat. Here are entries for two servers handling plain HTTP traffic:
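Again a sketch with placeholder names and addresses; the health-check options are explained below:

    server real web1 10.1.1.11
     port default disable
     port http
     port http keepalive
     port http url "HEAD /"
    !
    server real web2 10.1.1.12
     port default disable
     port http
     port http keepalive
     port http url "HEAD /"
    !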
This example is similar to the Tomcat example, except several options are used. “port default disable” disables all other ports. “port http keepalive” and “port http url “HEAD /”” define the HTTP health checks that ensure Apache is running on that box; if it is not, the load balancer stops sending traffic to that box and fails over to the second one.
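For the connections to actually be balanced and stickied, the real servers also need to be bound to a virtual server (VIP). Here is a sketch, reusing the web1/web2 names above with a placeholder VIP name and address; the sticky keyword on the virtual port is what pins a client to the same real server:

    server virtual www-vip 10.1.1.100
     port http sticky
     bind http web1 http web2 http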
Incoming SSL connections are handled by the load balancer first, then passed off to the actual servers as regular HTTP / port 80 traffic. The internal box configuration is therefore the same as the HTTP examples above; only the virtual server side changes:
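A sketch of the SSL side, reusing the web1/web2 real servers above; the VIP address and key-pair name are placeholders, and the exact SSL-termination directive varies by ServerIron model and firmware, so treat this as an outline rather than copy-and-paste configuration:

    server virtual www-ssl 10.1.1.101
     port ssl sticky
     port ssl ssl-terminate mykeypair
     bind ssl web1 http web2 http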
Configuring the external IP to NAT to the internal virtual
Typically, you will have a firewall in front of the load balancer that actually holds the external IP addresses. Traffic is filtered by the firewall first, then NAT’d to the virtual IP (VIP) of the load balancer, which then handles balancing the traffic.
You will need to either establish a new external IP or use an existing one (for instance, if you are moving from one web server to two web servers and want to balance the traffic using the load balancer). Either way, you need to set up the external IP address and NAT it to the internal VIP.
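What this looks like depends entirely on the firewall in front of the load balancer. As one example, assuming a Linux/iptables firewall with a made-up external IP of 203.0.113.10 and the placeholder VIP of 10.1.1.100 from above, the DNAT rules might be:

    # Send inbound HTTP on the external IP to the load balancer VIP
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
        -j DNAT --to-destination 10.1.1.100:80
    # And HTTPS, if the VIP is terminating SSL
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 \
        -j DNAT --to-destination 10.1.1.100:443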
Verifying the configuration works
Once everything is set up properly and the external IP is being NAT’d to the load balancer, it is time to ensure the load balancer is seeing the connections. You can also do this before making the switchover on the firewall, just to confirm everything looks right first.
To see the active connections being load balanced, issue the following command, replacing the server name with whichever one you want to check.
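Assuming the real server was defined as web1, as in the sketches above:

    ServerIron# show server real web1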
The output shows the state and specific connection details for that single real server.
The same check can be done for the VIP / virtual server.
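Again using the placeholder VIP name from the earlier sketch:

    ServerIron# show server virtual www-vip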
The output for the virtual server includes a “ServerConn” counter; in this case it showed 46 active connections, confirming that traffic is being balanced across the real servers. That’s it!