Posted By: Darren Bonawitz
I want to share the recent event calendar of summer training courses for SecureSkills, hosted by Fishnet Security. The first upcoming event in Kansas City is the National F5 Networks Course, May 17-21, 2010.
Posted By: Darren Bonawitz
By: Greg Elliott
Hi, I’m Greg Elliott with 1102 GRAND, Kansas City’s carrier hotel and colocation facility. Thanks for taking the time to join me for another podcast about what we’re seeing out there in the colocation industry. Today, I’m going to focus on why networking events are important to 1102 GRAND as a way to facilitate the growth of the IT and telecom community in the area. 1102 GRAND is a major hub for voice and data traffic in the Midwest, and we see a lot of deals come through the doors, so we offer to help in any way we can. Whether it’s a potential customer who needs cabinet or cage space, or one who just needs a recommendation on who can solve their problem, we always try to help.
For years, we have sponsored networking events to help bring people together. We have our annual golf tournament and our Boulevard Brewery event, along with other informal events throughout the year. We find that when you bring individuals together in a relaxed atmosphere, people share ideas, and in turn, deals happen. So we feel it’s our place in the community to be a hub for commerce, as well as a hub for the Internet. I invite you to come take a look at what we’re building at 1102 GRAND.
Click here to register for the April 1st networking event at Boulevard Brewery
By: Darren Bonawitz
Here are some ways to go green.
1. Start with an energy audit to determine your current carbon footprint and serve as a baseline for measuring future improvements
2. Install blanking panels to prevent air mixing between hot and cold aisles
3. Maintain proper under floor static pressure
4. Ensure the area under raised floors is as free from debris/congestion as possible
5. Replace older computer room air conditioners (CRACs) with newer and more energy efficient models
6. Implement hot aisle/cold aisle concepts including containment strategies
7. Utilize virtualization to reduce server footprint
8. Utilize low power servers when applicable
9. Convert CRAC units from three-way to two-way valves
10. Invest in a robust environmental monitoring and control system
11. Measure temperature at the front of the cabinets and base temperature control decisions on that data
12. Replace older networking gear with more energy efficient models
13. Utilize “free cooling” if the geographical environment makes it possible
14. Evaluate replacing metal-halide fixtures with T5HO fluorescent lighting
15. Determine reasonable goals and a realistic plan, then get going on a set date rather than always waiting
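Items 11 and 20 above boil down to one rule: act on what the equipment actually sees at its inlets, not on return-air readings. Here is a minimal sketch of that idea in Python; the cabinet names and readings are hypothetical, and the thresholds are the ASHRAE 2008 recommended inlet range of roughly 64.4-80.6 °F (18-27 °C).

```python
# Sketch: classify cabinet-inlet temperatures against the recommended envelope.
# Cabinet IDs and readings below are made-up illustrations.

ASHRAE_RECOMMENDED_F = (64.4, 80.6)  # 2008 recommended inlet range, 18-27 C

def inlet_status(temp_f, low=ASHRAE_RECOMMENDED_F[0], high=ASHRAE_RECOMMENDED_F[1]):
    """Classify a single cabinet-inlet reading (degrees Fahrenheit)."""
    if temp_f < low:
        return "overcooled"   # likely wasting CRAC energy
    if temp_f > high:
        return "too hot"      # check blanking panels / airflow first
    return "ok"

readings = {"cab-A01": 68.2, "cab-A02": 83.1, "cab-B01": 61.0}
for cabinet, temp in sorted(readings.items()):
    print(f"{cabinet}: {temp:.1f} F -> {inlet_status(temp)}")
```

An "overcooled" reading is often the cheapest win on this list: it usually means setpoints can be raised without any capital spend.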
Posted By: Darren Bonawitz
A few days ago, I was featured in an article written by Sixto Ortiz Jr. on processor.com focusing on energy savings in the data center. Thank you so much for the feature. The following is an excerpt where I’m featured. To read the entire article, click “read more” at the bottom of this post.
Energy Savings In The Data Center
Power and cooling in the data center go hand in hand. Servers need power to function but also need plenty of cooling so power dissipated as excess heat does not interfere with server functionality. So, energy savings can easily be captured by performing tasks that optimize power management and cooling.
Darren Bonawitz, principal owner of a Kansas City data center called 1102 GRAND (www.1102grand.com), says administrators should install blanking panels to prevent air mixing between hot and cold aisles, maintain proper under-floor static pressure, remove debris and congestion from the area under raised floors, replace older computer room air conditioners with newer and more energy-efficient models, and utilize low-power servers. (read more)
Welcome to Darren Bonawitz’s podcast.
By: Darren Bonawitz
Hi, this is Darren Bonawitz, co-owner at 1102 GRAND. If you’re not familiar with us, we’re one of the primary Internet hubs in the Midwest, as well as a colocation facility that offers cabinets, cages, private suites and raw space for customer build-outs.
In this particular podcast I wanted to talk about the announcement Microsoft made regarding their collaboration with the U.S. National Science Foundation, or the NSF. Essentially, researchers are going to be selected by the NSF, and those that are so lucky will then have free access to Microsoft’s Windows Azure cloud computing platform, which is a nice perk. The interesting thing to me is that the cloud computing platform itself is still in its infancy, but utilizing it for research in this manner is an interesting way to go about it.
By: Greg Elliott
Hi, I’m Greg Elliott with 1102 GRAND, Kansas City’s carrier hotel and colocation facility. Thanks for taking the time to join me for another podcast about what we’re seeing out there in the colocation industry. Today, I’m going to focus on healthcare IT, specifically the managed service companies that serve doctors’ offices and healthcare networks.
As healthcare entities become more and more engaged with health IT and electronic health records, 1102 GRAND is seeing a growing demand for space in our colocation facilities in a number of ways. One of the first things we’re seeing is that our current customers are increasing their colocation footprint to accommodate increased data storage backups. These managed service companies work with smaller healthcare offices and function as their IT department. This allows those offices to start complying with mandates set forth in the American Recovery and Reinvestment Act without having to hire a full-time IT department, or to supplement their current IT staff with the expertise they receive from these managed service companies.
Another thing we’re seeing: hospitals’ and healthcare networks’ traditional data centers may have access to only two or three different carriers or providers. But at 1102 GRAND, being a carrier hotel, we have 24 different carriers and providers, ranging from Tier 1 carriers to local wireless providers. So as a business grows and changes, it has the flexibility to choose which carriers or providers best fit its model. Plus, having all those carriers and providers in one place tends to keep pricing very competitive. Not to mention, the access or loop charge goes away in most cases, which can provide quite a bit of cost savings for the organization.
Finally, another benefit we’re seeing is healthcare networks and hospital data centers choosing an off-site data center facility, which frees up resources and space for them to focus on their core business, which of course is healthcare. They can take advantage of 1102 GRAND’s SAS 70 certification, and we just completed a PCI, ISO and HIPAA readiness audit (the HIPAA piece being most important to healthcare). We also have redundant power, redundant cooling systems, dual power grids, security and managed services. So instead of the hospitals or healthcare networks focusing on taking care of the data center, they leave that to us.
By Darren Bonawitz
I wanted to share some thoughts on an article written by Justin Lee from www.thewhir.com about the EPA finalizing Energy Star ratings for data centers.
I will be very interested in following how this plays out, and I think this will be adopted much more quickly by public data centers than private data centers. While many private corporate data centers will likely pursue the Energy Star data center label, adoption will largely be determined by company culture. For example, companies with senior management looking for ways to cut costs, and those led by progressive IT leaders, will be much more likely to pursue it. For public data centers, however, this will be about more than just energy efficiency. While these facilities will be equally interested in cost savings through efficiency, the real value will be from a marketing standpoint. If a particular colocation or hosting data center is Energy Star “certified,” it has a decided leg up on the competition that is not. The resulting competitive advantage will be far more valuable than the energy savings.
In addition, I would like to learn more about how the Energy Star program plans to consistently measure Power Usage Effectiveness (PUE), which is the ratio of total facility power to IT equipment power. In short, PUE = power as measured at the utility meter divided by the load associated with all of the IT equipment.
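That definition is simple enough to put in a few lines of code. Here is a minimal sketch of the calculation; the kilowatt figures are made-up illustrations, not measurements from any real facility.

```python
# Minimal PUE sketch using the definition above:
# PUE = total facility power (utility meter) / IT equipment load.
# The kW figures below are hypothetical examples.

def pue(total_facility_kw, it_load_kw):
    """Return Power Usage Effectiveness; 1.0 is the theoretical ideal."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# e.g. 1,500 kW at the meter supporting 900 kW of IT gear:
print(round(pue(1500, 900), 2))  # 1.67 -- lower (closer to 1.0) is better
```

The arithmetic is trivial; the hard part, as argued below, is agreeing on where the meter boundaries sit, which is exactly why the numerator and denominator need a consistent industry definition.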
For more information about PUE, check out this link:
Rather than going into a lot of details about my issues with PUE as it is defined today, I recommend checking out James Hamilton’s blog post at:
In short, the problem is that PUE itself needs to become a clear and consistent standard throughout the industry, or at least in Energy Star testing. Otherwise, companies will use the most favorable definition of PUE possible to improve their data center’s rating and chances of being Energy Star certified. In addition, I am curious who will be in charge of taking the measurements. Will there be neutral third parties that data center operators have to pay to come out and measure? Will the Energy Star program facilitate the measurements itself? Surely they won’t allow data centers to self-measure and submit results; there is far too much risk and incentive for miscalculation at best and fraud at worst. I guess I’ll have to wait and see the details as they are released.
To read the original article by Justin Lee, click here.
By Darren Bonawitz
Attention, data center relocation consultants and enterprise customers considering a new data center location: here is another reason to look at the Midwest, and specifically Kansas City.
Water Shortage to Put a Damper on Data Center Cooling?
From The Data Center Journal (datacenterjournal.com), Written by Jeffrey Clark
Recent droughts throughout the United States, along with a growing concern about water shortage (especially in the western US), could also be an increasing threat to data centers. To be sure, water is not the first topic that comes to mind when considering data center design and operation, but it is in many cases a paramount concern.
The vast majority of data centers rely to some extent on water as part of the cooling system that keeps equipment from overheating. Some estimates place water usage at a typical 15-megawatt data center at 360,000 gallons per day. A Microsoft data center facility in Northlake, Illinois, uses about eight million gallons per month—the equivalent of almost 270,000 gallons per day. (read more)
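The article's monthly-to-daily conversion is easy to verify. A quick sketch, assuming a 30-day month since the article doesn't specify one:

```python
# Checking the excerpt's arithmetic: 8,000,000 gal/month works out to
# roughly 270,000 gal/day, matching the figure quoted above.

gallons_per_month = 8_000_000
days_per_month = 30  # assumption; the article doesn't state the month length

gallons_per_day = gallons_per_month / days_per_month
print(f"{gallons_per_day:,.0f} gal/day")  # 266,667 gal/day
```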