Planning Virtual Desktop Infrastructure? Be Careful With These Mistakes
Virtual Desktop Infrastructure (VDI) is a virtualization service that hosts user desktop environments (user states) and applications on remote servers. It enables IT administrators to deliver consistent, personalized and secure desktop environments, while users can access their desktops regardless of their location. Broadly speaking, it is as if you took a user’s entire desktop (applications, data, profile, everything), put it on a server, and told the user he can reach it from any device that supports VDI standards. For a detailed look, you can check Yung Chou’s excellent post on the TechNet blogs.
VDI offers tangible returns in the medium to long run if planned and executed correctly. Having worked for 8+ years in VDI environments as a user, an administrator and a consultant, I have seen the same mistakes repeated over and over in almost every company. These mistakes not only lower the return on investment (ROI) but also lead to a poor user experience.
The first mistake is planning VDI only for today’s requirements. When you have about 30 users working on the same server, resource usage will be high: users will heavily consume CPU, memory, disk reads/writes (I/O) and network bandwidth. Make sure that you start small with the VDI implementation, measure resource use before and after, determine the average resource use per user and plan accordingly. When measuring resources, do not forget to include various scenarios, from running an Excel macro to visiting a Java/Flash/Silverlight-intensive web site, from running multiple applications concurrently to drawing reports from ERP applications.
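As a rough sketch of the “measure before and after” approach, the snippet below derives average per-user consumption from a small pilot and projects how many users one host can support. All figures are hypothetical, and the 20% headroom is an arbitrary safety margin, not a vendor recommendation:

```python
def per_user_usage(baseline, loaded, users):
    """Average incremental resource use per user.

    baseline/loaded: dicts of measured totals (e.g. CPU %, RAM GB, IOPS)
    before and after `users` pilot users log on and work normally.
    """
    return {k: (loaded[k] - baseline[k]) / users for k in baseline}

def capacity(per_user, limits, headroom=0.2):
    """Max users one host supports, keeping `headroom` spare on every resource."""
    return int(min(limits[k] * (1 - headroom) / v for k, v in per_user.items()))

# Hypothetical pilot with 10 users: idle host vs. loaded host
baseline = {"cpu_pct": 5, "ram_gb": 8, "iops": 200}
loaded   = {"cpu_pct": 45, "ram_gb": 38, "iops": 1400}
per_user = per_user_usage(baseline, loaded, 10)
limits   = {"cpu_pct": 100, "ram_gb": 256, "iops": 5000}
print(capacity(per_user, limits))  # CPU is the bottleneck here: 20 users
```

Whichever resource runs out first sets the limit, which is exactly why you should measure all of them, not just memory.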
Next, be sure to use existing resources whenever possible. Evaluate your existing servers carefully: they may not be suitable for hosting full desktops in a VDI setup, but they may still be powerful enough for remote application delivery. The same goes for your existing desktops. Your consultant may be telling you to purchase shiny new thin clients, but that may simply not be necessary. You can install Windows Thin PC, a lightweight Linux distribution or another free/open source product to repurpose your aging PCs as thin clients. You can then use your newer desktops to run other workloads, or use them in branch offices where you don’t have a VDI implementation.
When these are in place, make sure that your administrators are well trained. VDI is not simply virtualizing a user’s desktop on the server with an image/restore. VDI is a detailed solution involving user profiles (the settings that define their wallpaper, resolution, location of the icons on the desktop, application preferences and the like) and/or folder redirection. A good understanding of both concepts and their implementation scenarios is crucial for the administrators. In addition, administrators need to know about VDI load balancing, VPN and mobile device access, and desktop support at the very least. On the other side, the users need to know what to expect. Folder redirection can cause a lot of headaches if users are left unsupervised and save their personal data (music, videos, non-work-related images) on the server. They have to be told that non-company data is strictly forbidden. They have to know how, and with which credentials, they can access their corporate desktops. Training for both the administrators and the users is essential.
Administrators also need to know how VDI works with various applications, such as collaboration tools, financial applications and in-house developed applications, if any. At this point, I recommend setting up a feedback mechanism, whether SharePoint, Google or a simple wiki, where users can log their experiences: what works and what does not, performance, and any other problems.
Then comes the importance of redundancy. You cannot put all your eggs in one basket and let the business halt when there is an outage. There will be an outage, and you have to be prepared for it. Take pen and paper and draw your entire VDI implementation. There you will be able to see what you have to do when a particular item fails: a switch, a server, a connection, whatever it may be. In my experience, VDI works best with blade center and storage implementations where servers are clustered and virtualized; this way, both hardware and software outages are largely mitigated. Do not worry if that solution is too expensive for you: with current operating systems, you can create clusters from any servers, for any resource.
Security is another issue to consider. You cannot simply install software on a server and consider the job done, nor assume that disabling a user’s access to his desktop with a few clicks is all it takes to lock that user out. First, the same principles that apply to your data center and your servers also apply to your virtual infrastructure. Second, choose, implement and monitor your antimalware application. Third, account for the problems introduced by the ubiquitous access that comes with smartphones, tablets etc. Make sure that your implementation covers these scenarios as well.
The last pitfall is expecting immediate cost savings. You will not benefit from immediate cost savings, nor will you have a shiny income statement at the end of the quarter. A VDI implementation is a medium to long-term investment. As you phase out desktops for thin clients, you will see lower replacement and maintenance costs, as well as less time lost waiting for repair or replacement of equipment. Next, you will see a reduction in administrative effort; since VDI is managed through central policy, you will not have desktop support staff running around fixing end-user issues. You will have a predictable, standard, controlled, yet customized and flexible working environment (Windows Server 2008 R2 SP1 VDI works very well even over a 1 Mbit ADSL connection). You will also be able to utilize your IT staff better, developing them and employing them in business scenarios. Just don’t bet on immediate savings.
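To see why the savings arrive later rather than sooner, a back-of-the-envelope comparison helps. The per-seat figures below are purely hypothetical assumptions, not real pricing; the point is that the thin-client option costs more up front and only wins after a few years of lower support costs:

```python
def cumulative_cost(capex, annual_opex, years):
    """Cumulative per-seat cost at the end of each year."""
    return [capex + annual_opex * y for y in range(1, years + 1)]

def breakeven_year(desktop, thin, years=10):
    """First year the thin-client rollout is cheaper per seat, or None."""
    d = cumulative_cost(*desktop, years)
    t = cumulative_cost(*thin, years)
    for year, (dc, tc) in enumerate(zip(d, t), start=1):
        if tc < dc:
            return year
    return None

# Hypothetical per-seat numbers: desktop $600 + $250/yr support,
# thin client plus its server share $900 + $100/yr support
print(breakeven_year((600, 250), (900, 100)))  # breaks even in year 3
```

A quarterly income statement will not show this; a three-to-five-year view will, which is why I call VDI a medium to long-term investment.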
This is not an exhaustive list of the traps, pitfalls and mistakes you will encounter on your VDI road. As with every project: plan, document, make mistakes, learn from them, document the outcomes and carry on. Hopefully, after you succeed in implementing VDI, you will wonder how you ever lived without it.
Big Data #2: Planning for the Big Data Analytics
Big data and business analytics are two intertwined areas in current business and IT infrastructure. For many enterprises, big data analytics is in the implementation or development stage, offering almost limitless opportunities to be exploited. But to exploit those opportunities fully, big data analytics has to be thought through carefully. In this article, I will discuss the biggest issues that enterprises need to address before implementing big data analytics.
First of all, the business and IT need to be tightly aligned on all aspects of big data. The business should clearly lay out its expectations of IT, and IT should lay out how it can meet those expectations. The expectations should also cover possible scenarios, such as a shift in consumer buying behavior or a new product or competitor launch, and ask IT how the analytics can be used to respond to such changes. Without speaking everything out and addressing the possible scenarios, big data analytics is doomed from the start.
Then, the business and IT have to discuss their know-how of big data analytics. From the IT perspective, big data comes with big changes. There should be high-performance clusters to crunch the data and specialized storage solutions to both store and serve it. The storage solution needs high-speed disks (SSDs or large caches) for frequently accessed data, lower-speed disks for less frequently accessed data, and perhaps even slower solutions to keep historic data for “just in case” scenarios. Storage vendors offer many solutions for these scenarios; it would be wise to ask them for specialized consultancy on prioritizing and accessing big data. IT managers should include their storage staff in the strategic meetings and keep their training up to date.
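The tiering logic described above can be sketched as a simple policy that maps how recently a dataset was accessed to a storage tier. The day thresholds and dataset names below are hypothetical assumptions for illustration; real thresholds should come from your own access measurements:

```python
def assign_tier(days_since_access, hot_days=7, warm_days=90):
    """Map a dataset's last-access age to a storage tier (hypothetical thresholds)."""
    if days_since_access <= hot_days:
        return "hot"    # SSD / large cache: frequently accessed data
    if days_since_access <= warm_days:
        return "warm"   # lower-speed disk: less frequently accessed data
    return "cold"       # slowest/cheapest storage: "just in case" historic data

# Hypothetical datasets and their last-access ages in days
datasets = {"orders_current": 1, "orders_q1": 45, "orders_2009": 1800}
print({name: assign_tier(age) for name, age in datasets.items()})
# {'orders_current': 'hot', 'orders_q1': 'warm', 'orders_2009': 'cold'}
```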
From the business perspective, big data analytics requires training. Many businesses fall into the trap of believing that once they can receive the reports they used to prepare manually, big data analytics is implemented. The request heard in many migrations, “we have been preparing these reports every month, can you make sure that we can receive them from the system with a few clicks?”, is one of the worst things you can do with big data. Big data analytics is not about automating reports; it is a tool for answering complex business questions. The consultants will also come with their preformatted report templates and tell you how easy big data analytics is to use; often those reports are not aligned with business requirements. To overcome these problems, send the end users to training so that they understand big data queries and how to answer business questions.
From the data center perspective, the load on the servers and their roles will also change. High-performance servers will be deployed, background data processing will be prioritized and storage I/O will increase. Comparing the data center’s current values with the required results, there will be a very different workload on the data center, which I will discuss extensively in the next article in the series.
The discussion about big data will eventually come to bringing in outside know-how, from big data vendors and/or consultants. In an area so new and so unknown, it is wise to do so. Unfortunately, many consultants (including my fellow colleagues) choose to apply predefined solutions to every company. I, on the other hand, believe that each company needs a different solution. To receive a tailor-made solution, companies need to state clearly what they want from their consultants.
As you may have already realized, I have not yet talked about budget planning. The gap between the business requirements and what the current data center offers will be the biggest item in determining the budget. In terms of big data and government requirements, the gap is more than a few extra servers and some more storage space.
Most Common Unrealized Data Center Mistakes: #1 Lack of Proper Planning
When I visit my clients’ data centers, I often see the same mistakes repeated over and over. When I find the time to speak with the IT managers, or the owners, I find out that those mistakes stem from three root causes:
- Lack of proper planning
- Lack of proper operations
- Lack of a proper business mindset
The planning phase starts with the projected growth of your business, which drives the IT, which in turn requires more resources to meet the needs of the business. Many businesses start with the “enough for today” decision, only to see it turn against them when the business begins to take off. Always think 5 to 10 years ahead, and always assume that you will grow at the projected rates, not the current rates.
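To make “projected, not current” concrete, here is a minimal sketch that estimates how many years of headroom a resource has under compound growth. The utilization and growth figures are hypothetical:

```python
def years_until_exhausted(current_load, capacity, annual_growth):
    """Whole years until projected load exceeds capacity, under compound growth."""
    years, load = 0, current_load
    while load <= capacity:
        load *= 1 + annual_growth
        years += 1
    return years

# Hypothetical: 40% of capacity used today, business projects 30% yearly growth
print(years_until_exhausted(40, 100, 0.30))  # capacity exceeded in year 4
```

Four years sounds comfortable until you remember that expanding floor space, power and cooling can itself take a year or more of planning and construction.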
The first thing you need to consider in terms of your data center is floor space. If you do not plan the floor space, it will prove much more costly to expand it when the time comes. This also takes power supply and cooling into consideration. When you are designing your data center, you need to account for a generator and an uninterruptible power supply (UPS) at the very minimum. The generator and the UPS also have to have the capacity to feed the cooling equipment in addition to the servers and the network infrastructure. Although one generator and one UPS may look like enough, there may be cases where you need to purchase a second generator for failover purposes and a more capable UPS, if you are planning a hotel’s data center, for example.
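Sizing the UPS to cover the cooling as well as the IT load comes down to simple arithmetic. The power figures, growth margin and power factor below are hypothetical assumptions, not vendor guidance; a real design should use nameplate and measured loads:

```python
def ups_kva_needed(server_kw, network_kw, cooling_kw, power_factor=0.9, margin=0.25):
    """Minimum UPS rating (kVA) covering IT load *and* cooling, with growth margin."""
    total_kw = (server_kw + network_kw + cooling_kw) * (1 + margin)
    return total_kw / power_factor

# Hypothetical room: 20 kW of servers, 2 kW of network gear, 10 kW of cooling
print(round(ups_kva_needed(20, 2, 10), 1))  # ~44.4 kVA
```

Note how the cooling load alone inflates the rating by roughly a third; a UPS sized only for the servers would leave the room overheating during an outage.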
Of course, “enough for today” also applies to server and network capacity. You can work with one or two servers for today, which is enough, but when you need to deploy additional servers, there will be a deployment lead time. This lead time is the sum of the purchasing decision, purchasing process, delivery time, data center implementation, installation, configuration and deployment times. If you are purchasing servers from well-known vendors, the whole process can take about eight weeks or more (no, this is not an exaggeration; vendors state delivery times of 4 to 6 weeks). That means that when your business begins to take off, you will not be able to provide it with the necessary capacity for about two months. The customers and business lost to the lack of IT resources will cost far more than the incremental investment you would have made up front to purchase a more powerful server.
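Adding up the stages makes the two-month figure plausible. The per-stage durations below are hypothetical, with only the vendor delivery window taken from the 4-to-6-week quote mentioned above:

```python
# Hypothetical per-stage lead times, in weeks, for one new server
stages = {
    "purchasing decision": 1,
    "purchasing process": 1,
    "vendor delivery": 5,               # vendors quote 4-6 weeks
    "data center implementation": 0.5,  # rack space, power, cabling
    "installation & configuration": 0.5,
    "deployment into production": 0.5,
}
total_weeks = sum(stages.values())
print(f"total lead time: {total_weeks} weeks")
```

Even with every internal stage compressed, the vendor delivery window dominates, which is why the capacity decision has to be made well before the capacity is needed.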
You also need to reflect the same planning inside your data center. Given that you have enough floor space (you just planned for that), place the racks so that you can service the equipment both from the front and the rear. Make sure that a tall person can service the bottom-most equipment with ease while holding a laptop connected to its management port. Also make sure that all cables and patch panels are correctly labeled and smartly connected (as a side note, from an IT manager’s perspective, I honestly believe that a correctly labeled and tidy data center should be one of the performance criteria for the IT staff).
In the planning phase, do not forget to plan for security and remote access. Often, business owners fail to accept the back-stabbing fact of the internal threat from within the organization. Once an incident happens, the shock and the remedies take quite some time. As a preventive measure, plan at least for physical security; these days you can purchase a biometric access control device and software for about USD 300 retail. Collect the access logs on a server or on a controlled computer that has additional or different security measures.
Security also applies to remote access. Although you need to provide yourself remote access to your data center, both on and off premises, that access must be secured as well. Even if you go with hosted solutions, make sure that you can reach your servers by methods other than those provided by the host; don’t rely on just a control-panel-type gateway.
Once you have planned your data center, you have to make sure that it is operated properly. Proper operations can multiply the value of your investment; the lack of them can trash it. I discuss this in full detail in the next article in the series.
Microsoft Planning Online Windows Azure Conf, Targeting Cloud Developers
November 5, 2012 — According to posts made this week, Microsoft is planning an online event, the November 14, 2012 Windows Azure Conf, which aims to provide cloud developers with case studies demonstrating how the Windows Azure cloud platform can successfully be used to develop apps for distribution via the cloud.