Canopy, the cloud services line of Atos, has introduced Canopy Cloud Fabric, a managed Platform-as-a-Service based on the Pivotal CF distribution of the Cloud Foundry open-source PaaS.
Data Foundry Opens Data Center in Austin
Colocation web hosting provider Data Foundry recently opened its newest data center in Austin, Texas. The company claims the facility is the first “purpose-built, carrier-neutral data center in the Central Texas region.” Preparations for the opening of the new data center began in May 2011 when the company held a large job fair in Austin to hire talented employees for the new location.
Location of the Data Center
The data center is located on a 40-acre data ranch campus in Austin. The opening marks the first phase of what will eventually be a 250,000 square foot facility; the completed phase is fully functional, and the first customers have already begun to move in. Data Foundry currently operates two other carrier-neutral data centers in the state: the ADC, also located in Austin, and the HDC in Houston.
Texas 1 Specifications
Aptly named “Texas 1”, the facility boasts a diversity of independent power sources, water and network connections, with no single point of failure in its cooling, power and network systems. The location’s power is fed from two independent substations. There is also an underground power connection enclosed in a concrete duct, an extremely rare feature at colocation facilities.
Cooling Solutions
The chilled-water system allows for flexible cooling solutions, including high-density configurations and single-cabinet deployments, so even the most complicated, high-performance computing environments can be cooled effectively at Texas 1. Furthermore, the location boasts more than 17 network carriers, giving customers a choice of providers.
Announcement from Data Foundry
According to representatives from the Data Foundry implementation team, Texas 1 is the result of 17 years of experience operating data centers. Over those years the team toured many facilities across the globe, and the lessons learned shaped the design of Texas 1, which the company says will allow it to compete on a global level. The facility’s innovative, flexible design is intended to benefit customers for many years.
A remarkable amount of planning went into the design, development and opening of the data center. The company even offers a full online tour of Texas 1 so customers can see the location for themselves. No detail was missed, giving customers complete security and control over their equipment in this high-tech colocation facility.
Foundry Load Balancers HTTP sticky sessions
This post is intended as a general guide for configuring “sticky” load-balanced HTTP servers. Whether you are using F5 load balancers, Foundry load balancers or open-source load balancers (keepalived/LVS), the concepts are the same and can be carried across those platforms.
If you have a pair of Foundrys and are looking to configure sticky load-balanced HTTP servers, hopefully this guide will provide some assistance.
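Before diving into the Foundry syntax, the idea behind stickiness can be sketched in a few lines of Python: key on something stable about the client (here the source IP) so the same client always lands on the same backend. This is only a conceptual sketch — a real Foundry tracks sessions rather than hashing — and the backend addresses simply reuse the example IPs from the configs later in this post.

```python
# Conceptual sketch of source-IP stickiness: the same client IP always
# maps to the same backend. Backend addresses reuse examples from this post.
import hashlib

BACKENDS = ["192.168.1.143:8001", "192.168.1.196:8001"]

def pick_backend(client_ip, backends=BACKENDS):
    """Hash the client IP so repeat requests land on the same backend."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

Because the choice is a pure function of the client IP, no shared session table is needed, but any change to the backend list reshuffles clients — one reason real load balancers track sessions instead.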
-
Logging into the load balancer
Telnet to the box and run ‘enable’ to gain admin access. The first thing you want to do is show the current configuration to view the existing setup of the other working boxes:
Trying 192.x.x.x…
Connected to 10.x.x.x.
Escape character is ‘^]’.
User Access Verification
Please Enter Login Name: admin
Please Enter Password:
User login successful.
SLB-telnet@XXXX>enable
Enable Password:
Error – Incorrect username or password.
SLB-telnet@XXXX>enable
Enable Password:
SLB-telnet@XXXX#
-
Real servers : defining the multiple load balanced boxes
Show the existing configuration on the Foundry and take a look at the two “real” servers — the servers sitting behind the load balancer that will receive the balanced sticky connections:
port default disable
port 8001
!
!
server real serverposapp03-tomcat01 192.168.1.143
port default disable
port 8001
The above example is balancing TCP 8001 traffic, which is for Tomcat. Here are entries for two servers handling plain HTTP traffic:
port default disable
port http
port http keepalive
port http url "HEAD /"
!
server real serverapp02-vhost01 192.168.1.196
port default disable
port http
port http keepalive
port http url "HEAD /"
This example is similar to the Tomcat example, except you have several options. “port default disable” disables all other ports. “port http keepalive” and “port http url “HEAD /”” define the HTTP health checks that verify Apache is running on each box. If a check fails, the load balancer stops sending traffic to that box and fails over to the second one.
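Conceptually, the check that “port http url “HEAD /”” configures can be sketched in Python: issue a HEAD request to each backend and keep only the boxes that answer. This is an illustration of the idea, not what the Foundry runs internally, and the helper names are made up.

```python
# Sketch of an HTTP "HEAD /" health check with failover: backends that
# fail the check are dropped from the active pool. Helper names are made up.
from http.client import HTTPConnection

def head_check(host, port, timeout=2.0):
    """Return True if the server answers HEAD / with a non-5xx status."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("HEAD", "/")
        status = conn.getresponse().status
        conn.close()
        return status < 500
    except OSError:
        return False

def active_pool(servers, check=head_check):
    """Keep only the (host, port) backends that pass the health check."""
    return [(h, p) for (h, p) in servers if check(h, p)]
```

A real balancer runs this loop continuously and re-adds a backend once it starts passing checks again.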
-
SSL Connections
Incoming SSL connections are handled by the load balancer initially, then passed off to the actual server as regular HTTP / port 80 traffic. The internal configuration is similar to the examples above:
port default disable
port ssl sticky
port ssl ssl-terminate portal
bind ssl serverapp01-portal01 http
Notice how instead of "port http sticky", it's "port ssl sticky". First, the sticky option is only set in the "virtual" configuration directives. Second, the SSL traffic is bound to the real servers via HTTP in the last line of this example. It's fairly self-explanatory.
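Putting the fragments together, a complete SSL virtual-server block would look something like the following. The virtual-server name and IP are hypothetical; the directives themselves are the ones shown above, and “portal” is the SSL key profile referenced by the ssl-terminate line.

```
server virtual serverapp-portal01 192.168.1.105
 port default disable
 port ssl sticky
 port ssl ssl-terminate portal
 bind ssl serverapp01-portal01 http
```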
Regular HTTP Sticky Connections
If no SSL is being used on the site at all, then all you need is an HTTP virtual configuration:
server virtual serverapp-vhost01 192.168.1.106
port default disable
port http sticky
bind http serverapp02-vhost01 http
-
Configuring the external IP to NAT to the internal virtual
Typically, you will have a firewall in front of the load balancer that actually holds the external IP addresses. Traffic is filtered by the firewall first, then NAT’d to the virtual IP (VIP) of the load balancer, which then handles balancing the traffic.
You will need to either establish a new external IP or use an existing one (for instance, if you are moving from one web server to two and want to balance the traffic using the load balancer). Set up the external IP address and NAT it to the internal VIP.
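As a concrete illustration, assuming the firewall is a Linux box using iptables, the NAT rule might look like the following. The external address is a placeholder from the documentation range; the VIP is the HTTP virtual-server IP from the example above.

```
# DNAT: forward HTTP traffic arriving on the external IP (placeholder)
# to the load balancer's internal VIP from the earlier example.
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.1.106:80
```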
-
Verifying the configuration works
Once everything is set up properly and the external IP is being NAT’d to the load balancer, it is time to ensure the load balancer is seeing the connections. You can also do this before the switchover on the firewall, just to ensure everything looks right before actually cutting over.
To see the active connections being load balanced, issue “show server real <servername>” (replacing the server name with whichever one you want to check):
That should display information similar to this :
========================
State(St) - ACT:active, ENB:enabled, FAL:failed, TST:test, DIS:disabled,
UNK:unknown, UNB:unbind, AWU:await-unbind, AWD:await-delete
Name: serverapp02-vhost01 State: Active Cost: 0 IP:192.168.1.196: 1
Mac: 0012.7990.d06a Weight: 0 MaxConn: 2000000
SrcNAT: not-cfg, not-op DstNAT: not-cfg, not-op Serv-Rsts: 0
tcp conn rate:udp conn rate = 1:0, max tcp conn rate:max udp conn rate = 8:0
BP max local conn configured No: 0 0 0 0 0 0
BP max conn percentage configured No: 0 0 0 0 0 0
Use local conn : No
Port St Ms ServerConn TotConn Rx-pkts Tx-pkts Rx-octet Tx-octet Reas
----    --  --  ---------- -------  -------  -------  --------  --------  ----
default DIS 0 0 0 0 0 0 0 0
http ACT 0 104 13094 181671 150813 162364862 20325115 0
Server Total 104 13094 181671 150813 162364862 20325115 0
The above displays the specific connection details for a single real server. To check the VIP / virtual server, use “show server virtual <name>”, which will display the following:
Name: tomcat State: Enabled IP:192.168.1.101: 1
Pred: least-conn ACL-Id: 0 TotalConn: 149959
Port State Sticky Concur Proxy DSR ServerConn TotConn PeakConn
----    -----    ------ ------ ----- --- ---------- -------  --------
default disabled NO NO NO NO 0 0 0
ssl enabled YES NO NO NO 46 149959 443
You can see that “ServerConn” is displaying 46 connections. That’s it!