Out-Of-Work Graduates Go To High-Tech Bootcamp In Search Of Jobs

The plight of unemployed graduates in the UK, particularly in London, its capital, has been highlighted in the press and on the national news.

Bizarrely, the UK’s major high-tech industry – data centres – has been having major difficulties finding suitable recruits to work in these ‘factories of the future’, which are often the size of five or six football pitches and packed with tens of thousands of computer servers.

Today sees the start of Data Centre Bootcamp, which aims to help out-of-work graduates and forces leavers find work in this exciting industry.

As well as tens of thousands of computer servers, Data Centres also contain massive electrical and mechanical installations, with generators as big as a ship’s engine and an amazing array of industrial-scale pipework.

A medium-sized data centre can use as much electricity as a small city – yet it can be twice as energy-efficient as a typical company server room.

A very wide range of skills is needed, from electrical and mechanical engineering to IT and sales.

Probably the fastest-growing area of the UK economy – and behind almost everything we do in today’s digital world – data centres are almost the only ‘factories’ remaining in the country.

And they’re absolutely critical: everything from airline booking systems and air traffic control to traffic-light phasing, Facebook status updates, tweets, e-mail, supermarket tills and stock control, Amazon and e-commerce depends on them. In fact, just about every business you can think of now relies upon data centres for its operation.

“Amazingly,” says Simon Campbell-Whyte, executive director of international industry body the Data Centre Alliance, “the average age of people in the data centre industry is fifty-something and there’s a major skills shortage coming in this vital industry.

“We’ve worked with our many Data Centre Operator members to come up with ‘Data Centre Bootcamp’ which started today with TV news coverage by ITN. We hope this Bootcamp will give many unemployed graduates, and some of the highly able people now being forced out of our armed forces, the extra skills they need to become credible interview candidates for data centre employers.”

Today’s first pilot of Data Centre Bootcamp was devised by the Data Centre Alliance and is being run at the University of East London’s Docklands campus.

Said Campbell-Whyte: “The Data Centre Bootcamp is free to the attendees thanks to the sponsorship of training company C-Net and of two of London’s biggest data centre employers: Telecity and Telehouse.

“Both Telecity and Telehouse run massive data centre complexes in Docklands and are hoping that at the end of the Bootcamp they will have some of their best interview candidates in years.”

The members of the Data Centre Alliance (which represents individual data centre professionals and equipment manufacturers as well as data centre operators) expect the pilot 10-day intensive to turn most of the 21 attendees into highly employable recruits.

If it proves as successful as expected, Data Centre Bootcamp will be run on a much larger scale in London, throughout the UK, and across Europe and the Far East.

The 21 ‘Bootcamp-ees’ on today’s pilot are mostly out-of-work Londoners, including graduates of UEL, Queen Mary and Middlesex universities. They are joined by three forces leavers and a PhD student from Leeds who sees the Data Centre Bootcamp as her best chance of getting into this exciting and challenging industry.

DCA0025

Brand-Rex Appoints Aldo Strawbridge to Top Regional Post for Middle East

Following recent growth in the Middle East, data networking solutions provider Brand-Rex is delighted to announce the appointment of Aldo Strawbridge as Regional Sales Director.

Taking up his post in August, Aldo will be based out of the Brand-Rex Regional HQ in Dubai. With partners in 12 countries throughout the region, he expects to spend much of his time visiting customers over the first few months to gain a detailed understanding of their needs and challenges.

Aldo Strawbridge said, “Information and communication technology is at the heart of every organisation’s activities. In the Middle East we are seeing a significant increase in demand. A major driver is the requirement for new infrastructure solutions resulting from the convergence of CCTV, building management, security and access control systems to Internet Protocol (IP). Instead of deploying five or six separate cabling networks through the entire building, just a single converged network is needed. This leads to significant cost savings and improved system integration. Brand-Rex has been developing IP/Ethernet structured cabling solutions for over two decades and is at the forefront of converged network solutions.”

Aldo joins Brand-Rex from NEC UK, where he held the role of Director of Sales and Service Operations. He has been involved in the telecoms industry for over 35 years and brings with him experience of the Middle East market, having previously worked for Lucent Technologies in the Kingdom of Saudi Arabia.

Announcing the appointment, Brand-Rex CEO Martin Hanchard said,
“I am delighted to welcome Aldo to the Brand-Rex team. His extensive business development and industry experience will be pivotal in helping to drive our plans to extend our reach throughout the Middle East and to continue to provide our customers with excellent support and service. Brand-Rex has been active in the region since the 1990s and has an impressive track record of reference projects, including Dubai’s Silicon Oasis HQ in the UAE, Bahrain Financial Harbour and Hamad Medical City in Qatar.”

Aldo Strawbridge welcomes contact from customers, prospective customers and partners. He can be contacted at astrawbridge@brand-rex.com or via the Dubai Regional HQ on +971 (4) 454 8644.

BRX0593

New: The Ideal Carrier-Grade Ethernet Tester For Saving Valuable Engineer Time on Site.

New from IDEAL INDUSTRIES NETWORKS is the UniPRO MGig1 carrier-grade handheld Ethernet tester including the latest Y.1564 NetSAM multiple concurrent service test capability.

It is designed for all engineers and subcontractors involved in carrier and metro Ethernet service turn-up, mobile Ethernet backhaul, and microwave and wireless-link Ethernet setup. It’s also ideal for enterprise users who want to check up on supplier SLA (service level agreement) performance.

With its comprehensive NetSAM Y.1564 multiple concurrent service-stream testing capability, it is also well suited to setup testing and troubleshooting in a wide range of other applications, including on-train/metro command, control and communications; trackside communications and signalling; electricity, gas and water distribution; and petrochemical plants and rigs, plus many other industrial Ethernet applications.

Launching the new rugged tester, which boasts IPv6 as well as IPv4 and the latest Y.1564 test program, Xing Ye, carrier-Ethernet product manager with IDEAL INDUSTRIES NETWORKS, said: “Most testers simply limit themselves to the prescribed tests, but in doing so they don’t help the field engineer to avoid the many hours of wasted and unproductive time on site sorting out network configuration and mis-patching issues.

“However, we have designed UniPRO MGig1 to be the engineer’s friend, adding in a suite of functions that go beyond the prescribed test and which can significantly speed up the traditional ‘trial and error’ troubleshooting of these pre-testing problems, meaning UniPRO MGig1 can save hours on site.”

For many years, technicians and engineers have had to use the rather long-winded RFC 2544 for service turn-up and acceptance testing. However, RFC 2544 was devised for lab-testing isolated pieces of network equipment and can only test one parameter at a time, so it takes a long time and is not representative of today’s multi-service-stream networks, where multiple VLANs and multiple QoS (Quality of Service) requirements often compete for limited bandwidth.

Ye continued, “The ITU Y.1564 test regime – implemented in the NetSAM software on UniPRO MGig1 – enables testing of up to eight services concurrently, including colour-aware and non-colour-aware networks with Q-in-Q; VLANs nested up to eight deep; and three levels of Label, Class and TTL on MPLS networks.

“Layer 3 QoS tags, ToS and DSCP are also encompassed. It brings Ethernet testing forward by leaps and bounds into the 21st century and finally makes it representative of real network requirements and SLAs.”
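To make the idea of concurrent multi-service testing more concrete, here is a minimal, purely illustrative sketch in Python of a Y.1564-style service profile and pass/fail assessment. The service names, rates and thresholds are invented for this example and are not taken from the UniPRO MGig1 interface or the ITU-T recommendation text.

    # Hypothetical sketch of a Y.1564-style multi-service test profile.
    # Each service stream carries its own SLA targets (committed rate, loss, delay)
    # and all streams are assessed while running together.

    services = [
        {"name": "voice", "vlan": 100, "cir_mbps": 10,  "max_loss_pct": 0.1, "max_delay_ms": 20},
        {"name": "video", "vlan": 200, "cir_mbps": 100, "max_loss_pct": 0.5, "max_delay_ms": 50},
        {"name": "data",  "vlan": 300, "cir_mbps": 500, "max_loss_pct": 1.0, "max_delay_ms": 100},
    ]

    def assess(measured):
        """Pass/fail each concurrent service stream against its own SLA targets."""
        results = {}
        for svc in services:
            m = measured[svc["name"]]
            ok = (m["throughput_mbps"] >= svc["cir_mbps"]
                  and m["loss_pct"] <= svc["max_loss_pct"]
                  and m["delay_ms"] <= svc["max_delay_ms"])
            results[svc["name"]] = "PASS" if ok else "FAIL"
        return results

    # Example measurement set (made-up numbers) for all streams running at once:
    measured = {
        "voice": {"throughput_mbps": 10,  "loss_pct": 0.0, "delay_ms": 12},
        "video": {"throughput_mbps": 98,  "loss_pct": 0.2, "delay_ms": 35},
        "data":  {"throughput_mbps": 510, "loss_pct": 0.4, "delay_ms": 60},
    }
    print(assess(measured))   # {'voice': 'PASS', 'video': 'FAIL', 'data': 'PASS'}

The point is simply that each stream is judged against its own targets while all streams compete for the link at the same time, rather than one parameter being measured at a time as with RFC 2544.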

UniPRO MGig1 is available in copper-only or copper-plus-fibre formats and can perform single-ended testing, pass-through testing and long-distance loopback testing (in combination with its low-cost companion, the UniPRO SEL1 remote-controlled active loop-back unit, or a second UniPRO MGig1). For bi-directional testing two UniPRO MGig1s are used, but the far-end unit can be remote controlled, removing the need for a second engineer.

UniPRO MGig1’s ‘Autotest’ button can be programmed to run a sequence of tests without further user intervention, saving even more engineer time by allowing the engineer to carry out other work without having to babysit the tester.

UniPRO MGig1 and its SEL1 companion will test copper Ethernet at 10Mb/s, 100Mb/s and 1Gb/s, with the fibre-enabled models also testing Gigabit Ethernet over a choice of 850nm multimode, 1310nm or 1550nm single-mode fibres using interchangeable SFP modules.

The UniPRO MGig1 price starts from £1200.68, and UniPRO SEL1 is £632.61 (excl. local taxes). The UniPRO MGig1/SEL1 series of test instruments is available through a comprehensive distributor network. To read the full product brochure and find distributors visit: http://turt.co/idn15

The UniPRO MGig1/SEL1 test capabilities include:
• Y.1564 NetSAM
• RFC 2544
• BERT (bit error ratio test)
• SLA-Tick single stream and multi service streams
• Separated target and service tests
• Top-ten bandwidth users
• Simultaneous IPv4 and IPv6
• PoE and PoE+ voltage and power check
• Hub-blink cable trace
• Copper cable check
• Optical receiver power
• Detect and warn on circuits with ISDN, PBX and unexpected voltages
• Additional network stress-test traffic generation on dual-port models

 

Queen Mary, University of London chooses Virtus Data Centres to host its business critical IT platform

London, 13th June 2013 – Virtus Data Centres (“Virtus”), a leading London provider of environmentally efficient, modern, interconnected carrier neutral data centres, announced today that it has completed implementation of a new IT platform in its LON1 data centre for Queen Mary, University of London. Queen Mary is one of the UK’s leading research-focused higher education institutions and Virtus’ LON1 data centre now supports all of its business critical IT applications.

With around 16,900 students, 3,800 staff and an annual turnover of £300m, Queen Mary is one of the biggest University of London colleges. It teaches and researches across a wide range of subjects including the humanities, social sciences, law, medicine, dentistry, science and engineering. Queen Mary is also a member of the prestigious Russell Group which represents 24 leading UK universities that are committed to maintaining the very best research, an outstanding teaching and learning experience and unrivalled links with businesses and the public sector.

Jonathan O’Regan, Queen Mary’s Assistant Director IT infrastructure, said: “Virtus’ facility is hosting the core infrastructure of our new IT platform supporting the University’s business critical applications. Queen Mary chose to partner with Virtus because of their highly pragmatic and professional approach to doing business whilst delivering ‘best in breed’ data centre facilities.”

Neil Cresswell, CEO at Virtus, commented: “We are delighted that such a high profile educational and research organisation has chosen our facility to host its business critical platforms and applications. At Virtus we have a 100% track record of service availability and are committed to delivering an agile, scalable infrastructure platform to support Queen Mary’s business requirements.”

http://www.virtusdatacentres.com/news/queen-mary-university-of-london-chooses-virtus-data-centres-to-host-its-business-critical-it-platform

– ENDS –

For more information please contact:
William Horley
T: +44 (0) 207 499 1300
E: wh@virtusdatacentres.com

About Virtus

Virtus Data Centres own, design, build and operate environmentally and economically efficient data centres around the heart of London’s cloud and digital data economy. Our customers can rely on our secure facilities and uptime record for their data and applications, enabling them to store, protect, connect, process and distribute data to customers, partners and networks within our data centres and the global digital economy. For more information please visit www.virtusdatacentres.com.

VTS0013

Brand-Rex takes centre stage at the Guildhall School of Music & Drama

Brand-Rex, the leading supplier of copper and fibre optic based network infrastructure solutions for enterprises, data centres and extreme environments, is proud to announce that its products and systems have been used extensively within the Guildhall School of Music & Drama’s new £90m Milton Court facility in London.

Offering aspiring musicians, actors, stage managers and theatre technicians a creative and stimulating environment in which to develop as artists and professionals, Milton Court is a state-of-the-art teaching and performance building and, as such, it requires the best possible network infrastructure for its staff, students and visitors.

The Guildhall School of Music & Drama’s Head of IT, Richard Antonel, was charged with overseeing the implementation of a network infrastructure that could provide the highest possible levels of performance and future proofing. He explained, ‘The scope was to install a high specification structured cabling system that would last in excess of 25 years. We decided to install 10Gbit/s to every outlet. After carrying out research I decided the Brand-Rex 10GPlus Category 6A system would offer the best combination of value, reliability and longevity.’

Chris Chandler, technical manager at Brand-Rex, who advised on the system design commented, ‘For this environment a shielded foil twisted pair (F/FTP) low smoke zero halogen (LSOH) cable was the most appropriate option due to the quantity of electrical services in the building that could cause electromagnetic interference (EMI). Over 150km of it was used to configure 2,000 10GBASE-T channels and, in addition, singlemode (OS2) and multimode (OM4) optical fibre from our FibrePlus range has been used in the campus wide area network (WAN) and internal backbones respectively.’

Milton Court is a truly converged multimedia environment: as well as running data, voice and video conferencing applications, its IT infrastructure has to handle the many radio and TV broadcasts that will be made from the new facilities and support the Guildhall’s sound and video recording studios. It will also facilitate a large digital signage system, as well as wireless networking to accommodate the increasing preference for ‘bring your own device’ (BYOD) functionality amongst students. It is also used to network a building management system (BMS) that operates access control, CCTV, and heating and ventilation.

As a third-party tested and approved cabling system, 10GPlus not only meets and exceeds the EIA/TIA performance requirements for Category 6A, but also meets and exceeds the ISO/IEC requirements for Class EA. The 10GPlus system therefore has unparalleled levels of certification and is manufactured in accordance with ISO 14001 and ISO 9001:2008.

Antonel is confident that Milton Court’s communications infrastructure will fully support the technologies of the future. He concluded, ‘Milton Court will open in September and I know that its network infrastructure will perfectly complement and enhance the outstanding quality of the Guildhall School of Music & Drama’s training and the success of its graduates.’

=ENDS=

Photo Available Here: http://turt.co/brx567
Caption: Guildhall Milton Court
[username: pics | password: pics]

About the Guildhall School
The Guildhall School is a vibrant, international community of young musicians, actors and theatre technicians in the heart of the City of London. Rated No.1 specialist institution in the UK by the Guardian University Guide 2013, the School is a global leader of creative and professional practice which promotes innovation, experiment and research, with nearly 900 students in higher education, drawn from nearly 60 countries around the world. It is also the UK’s leading provider of specialist music training at the under-18 level with nearly 2,500 students in Junior Guildhall and the Centre for Young Musicians. The School is widely recognised for the quality of its teaching and its graduates, and the new building, Milton Court, will offer state-of-the-art facilities to match the talent within its walls, ensuring that students enter their chosen profession at the highest level.

Milton Court will have a new 608-seat concert hall plus two theatres (227-seat and studio), three rehearsal rooms, a television studio suite, departments for costume, wigs and make-up, office space, tutorial rooms and public foyers. This will allow the School to offer facilities commensurate with those in the profession, helping to attract the most highly regarded teachers and best students internationally. The total cost of Milton Court is £89 million. The two foundation donors are Heron International and the City of London, and together they have generously provided a combined contribution of £75 million. The School is raising the remaining £13.5 million to fully equip the building to the highest professional standards.

With the opening of Milton Court, the cultural quarter in the heart of the City of London will stretch from the School’s two buildings and the Barbican on Silk Street up to the Barbican’s two new cinemas, and northwards to LSO St Luke’s on Old Street, offering a richly varied range of venues and performance spaces. The Milton Court Concert Hall will also be used by the Barbican’s two new Associate Ensembles – the Academy of Ancient Music and Britten Sinfonia – for part of their Barbican season performances, for the Barbican’s own classical and contemporary promotions and commercial hire, and for Creative Learning, the Barbican’s joint outreach division with the School.

ABOUT BRAND-REX
Brand-Rex is a global operation, designing, developing and manufacturing the most sophisticated, high performance copper and fibre cabling systems for communications and extreme environment applications. Headquartered in Scotland, the company is committed to being a trusted market leading provider of best-in-class communications infrastructure solutions. As well as developing products and systems of the highest quality, the company is entirely carbon neutral and offsets all the CO2 created by the manufacture and distribution of its products. For more information visit www.brand-rex.com

BRX0567

Second Only To Dell For Brand-Rex in Green IT Awards 2013

At the 2013 Green IT Awards in London, network infrastructure leader Brand-Rex was named runner-up to Dell in the Manufacturer of the Year category.

“Being named runner-up manufacturer of the year to a giant company with Dell’s resources is a massive achievement and firmly places Brand-Rex in the league of the world’s most environmentally advanced companies,” said Brand-Rex environmental development manager Kennedy Miller. “This places us far in advance of any other company in the copper and fibre optic network infrastructure sector.”

Brand-Rex, which also won a Green Apple award in September last year, became the first cabling company in the world to achieve Global Carbon Neutral status for its worldwide operations back in 2011 and in 2012 it was the first company in its sector to launch carbon-neutral products.

The Green IT Awards 2013, which were supported by the UK Government’s Department of Energy and Climate Change, set out to showcase and reward the organisations that have made the most significant contribution to improving the IT industry’s environmental performance over the preceding 12 months.

Amongst the judging criteria, winners were required to demonstrate ethical management practices – including showing that they went ‘beyond the call of duty’ and that their ‘green’ efforts are reflected in their advertising, marketing and sales – to help increase the visibility of the topic throughout the industry.

Said Miller, “Taking the environmental approach to business is not just an add-on; it requires a complete transformation in thinking across an organisation. Far too many companies have failed to grasp that once you’ve done a full environmental audit, it immediately shows you where you can cut costs and improve profits.

“To be judged in the same league as Dell is a massive honour and recognition of the foresight of the staff, management and the whole supply-chain at Brand-Rex. My profound thanks go to them all.”

=ends=

ABOUT BRAND-REX
Brand-Rex is a global operation, designing, developing and manufacturing the most sophisticated, high performance copper and fibre cabling systems for communications and extreme environment applications. Headquartered in Scotland, the company is committed to being a trusted market leading provider of best-in-class communications infrastructure solutions. As well as developing products and systems of the highest quality, the company is entirely carbon neutral and offsets all the CO2 created by the manufacture and distribution of its products. For more information visit www.brand-rex.com

BRX0573

Demand Set to Outstrip Supply in Latin American Colocation Data Center Market

Market Ripe for New Entrants, says DCD Intelligence

Demand for colocation data center facilities in the Latin American region looks likely to outstrip supply according to new research released by DCD Intelligence today.

The new report, ‘Latin American Colocation’ (http://turt.co/dcd46) concludes that there is room in the market for new entrants, especially in cities outside of the major hubs.

 

Most of the countries in Latin America are classed as emerging markets in terms of data center market growth and, in common with other emerging markets such as those in Asia Pacific, colocation is now seen as a viable option for businesses looking to outsource their data center requirements. In fact, a larger percentage of data center white space is outsourced in the LatAm region than in many western countries.

 

According to Nicola Hayes, managing director at DCD Intelligence, “What we see in the Latin American markets is that companies are far less reluctant to outsource data center requirements than has been the case at the same stage of market development in other regions. In previous years the colocation market in Latin America was hampered by a lack of suitable pure colocation space, but this has changed over the past two years, with a greater variety of stock now available.”

 

“2012 saw a significant amount of new build plus expansion of existing facilities in many of the countries researched and – whilst Brazil continues to offer the largest amount of colocation space and has the largest number of providers across the region – other countries are gaining momentum. For example, Colombia witnessed the highest number of new entrants to the market and Chile’s growth rate in terms of available space has overtaken that of Mexico,” Hayes reported.

 

Chart  here: http://turt.co/dcd46p2  [user: pics | pwd: pics]

Caption: New Colocation Space Latin American Markets

 

Although supply is increasing, there is still room for new entrants – particularly as demand is rising not only in the major cities where the majority of space is located but also in secondary locations.

 

The Latin American Colocation report also identified that providers with a regional rather than single country presence are for the most part international providers rather than native Latin American companies.

 

“There are indications that some of the larger country providers are looking to establish a presence in neighbouring countries in order to capitalize on opportunities outside of their own markets but at present ‘regional’ coverage is the domain of the large international players,” Hayes concluded.

 

For more information about the report Latin American Colocation please visit http://turt.co/dcd46.

 

About DCD Intelligence

DCD Intelligence is a specialist research and analysis company covering the Data Center, Telecoms and IT sectors. With the global reach of parent company DatacenterDynamics, it is uniquely placed to offer professionals in these sectors holistic yet statistically sound business intelligence, whether in the form of research reports, data tools or fully bespoke projects.

 

DCD Intelligence is committed to basing our analysis on stringent research techniques accepted by research communities worldwide. An understanding of the process we employ gives our client base confidence in the reports, studies and bespoke analysis produced by our team of respected researchers and analysts.

 

About DatacenterDynamics

DatacenterDynamics is a full service B2B information provider at the core of which is a unique series of events tailored specifically to deliver enhanced knowledge and networking opportunities to professionals that design, build and operate data centres.

 

With 50 established annual conferences in key business cities across the world, DatacenterDynamics is acknowledged as the definitive event where the leading experts in the field share their insights with the top level datacentre operators in each market.

 

In 2012 over 28,000 senior data center professionals attended a DatacenterDynamics event, creating the most powerful forum in the industry today.

 

DatacenterDynamics is renowned for its proximity to the data center market in all its event locations, seeking to establish one on one relationships with every professional with whom we engage. This personal touch coupled with a deep understanding of the market mechanics makes DatacenterDynamics much more than just a B2B media provider, which is highlighted by the number of long-term relationships that have been forged internationally with end-users, vendors, consultants, governments and NGOs alike.

 

‘Rain Umbrella’ For Computer Servers Wins US Patent – And Sees Orders Triple Post SuperStorm Sandy

Hundreds of thousands of data centers and server rooms are in multi-tenanted buildings across the US and elsewhere. Within them, critical computer servers and telecommunications equipment are regularly damaged by water dripping – and sometimes cascading – through ceilings from leaking or burst pipes on the floors above.

And worse: In storm-prone areas, it is only too frequent for roof-damage to allow water to come flooding in (even when the roof is several floors up) – causing widespread destruction of the server equipment and disruption to businesses.

An invention called the Turtle Shell server shield – effectively a massive umbrella for data center servers and telecoms racks – is already protecting thousands of servers around the US and as far afield as Norway and Pakistan.

Completely innovative, the Turtle Shell server shield has just been granted US patent number 8,413,385 in recognition of its uniqueness.

Glenn Mahoney, president at Turtle Shell Industries, and his team have been developing the product for four years – with considerable success.

Said Mahoney, “We’ve been called to many disaster sites where storms and pipe bursts have sent water cascading through the ceiling and right through $millions-worth of server and telecoms equipment – not only interrupting vital business operations but in most cases damaging the equipment beyond repair. It’s a highly distressing sight to see.”

“In one such situation – a major cable operator’s network center in New York – thousands of customers were offline because of the water damage. While the center was being rebuilt, the operator asked us to fit Turtle Shells as one of several new disaster precautions. Less than two years later torrential storms hit again and even the newly reinforced roof gave way, sending water cascading through once more. This time, however, the unique Turtle Shell ‘umbrellas’ kept the water out of the electronics and the equipment kept on working – with $millions-worth of equipment saved to carry on earning revenue.”

You can see amazing video of this NYC data center being struck by both the 2008 and 2010 storms in ‘Turtle Shell in action’ here: http://turt.co/dcme14.

Turtle Shells are made from a very strong polycarbonate and shaped like a sideways “(” extended over the full length of each suite of racks.

They can be installed over, under and around all manner of cables, conduits and support rods or brackets. Once installed, Turtle Shells are totally watertight. They can also be fitted with flexible curtains, which can be operated manually or automatically, to ensure that water doesn’t splash into the front and rear of racks.

“We’ve seen a 300 percent rise in sales since October,” said Mahoney. “As people on the East Coast recover from Superstorm Sandy they are thinking seriously about how to build in extra disaster protection. And Turtle Shells are proving to be the ideal solution not just for data centers, but for telecoms and cable operators, hospitals, schools, universities and government sites too.”

For further information on Turtle Shells, and advice on how to protect your sensitive equipment from damage by falling water and debris, visit turtleshellproducts.com.

(Note: Turtle Shell Products is a client of Turtle Consulting Group, but the companies are not related. GREAT NAME THOUGH!)

 

Data Centres Could Experience 30 Per Cent More Failures as Temperatures Increase

Many data centre operators have been increasing the operating temperature in their data centres to reduce the massive costs of cooling. But, warns Alan Beresford, technical director and managing director at EcoCooling, they run the risk of significantly more failures.

ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) is generally considered to set the global standards for data centre cooling. A few years ago it relaxed its recommended operating range for data servers from 20-25C (Celsius) to 18-27C.

“For decades,” said Beresford, “data centres have operated at a temperature of 20-21C. With the relaxation in the ASHRAE 2011 recommendation, plus the pressure to cut costs, data centres have begun to significantly increase the ‘cold aisle’ temperature to 24-25C and in some cases right up to 27C.

“But many of them have not taken into account the study of server reliability detailed in the ASHRAE 2011 Thermal Guidelines for Data Processing Environments, which predicts that if the cold aisle temperature is increased from 20C to 25C, the level of failures increases by a very significant 24 per cent. Upping the temperature to 27.5C increases the failure rate by a massive 34 per cent.”

Warns Beresford: “And if the air temperature going into the front of the servers is 27C it’s going to be very hot (34-37C) coming out of the rear. For blade servers it can be a blistering 42C at the rear!

“It’s not just the servers that can fail,” states Beresford. “At the rear of the servers are electric supply cables, power distribution units and communication cables. Most of these are simply not designed to work at such elevated temperatures and are liable to early mortality.”

Interestingly, again according to ASHRAE’s published figures, if the temperature is reduced to 17C, server reliability improves by 13 per cent compared to conventional 20C operation.
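As a rough, back-of-the-envelope illustration of how those percentages relate, the short Python sketch below encodes only the relative failure factors quoted in this article (taking 20C as the baseline of 1.00) and compares operating points; it is not a reproduction of the full ASHRAE tables.

    # Relative server failure factors quoted above, with 20C operation as the baseline (1.00).
    failure_factor = {17: 0.87, 20: 1.00, 25: 1.24, 27.5: 1.34}

    def change_in_failures(t_from, t_to):
        """Percentage change in expected failures when moving between cold-aisle temperatures."""
        return (failure_factor[t_to] / failure_factor[t_from] - 1) * 100

    print(round(change_in_failures(20, 25)))     # 24  - the 24 per cent increase quoted
    print(round(change_in_failures(20, 27.5)))   # 34  - the 34 per cent increase quoted
    print(round(change_in_failures(27.5, 17)))   # -35 - roughly a third fewer failures back at 17C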

“To cool the air to 17C would be completely uneconomic with conventional refrigeration cooling,” said Beresford. “Our modelling shows it would require over 500kW of electricity for every megawatt of IT equipment.

“However, with our evaporative direct-air-cooling CRECs (Computer Room Evaporative Coolers), this 17C operation would require less than 40kW – a saving of over 450kW compared to conventional refrigeration and a reduction in PUE (Power Usage Effectiveness) of 0.45.”

Given the choice between refrigeration cooling at 27C and evaporative cooling at 17C – with less than 10% of the energy use, around 40% fewer temperature-related server failures and a more stable environment for other components – it is clear why over 150 UK data centre operators have adopted this approach.
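A minimal sketch of how the quoted cooling loads translate into PUE, assuming for simplicity that cooling is the only overhead on top of the IT load (real facilities carry other overheads, so the absolute PUE values are illustrative; the point is the delta of roughly 0.45 highlighted above):

    # Cooling power per megawatt of IT load, using the figures quoted above.
    it_load_kw = 1000.0
    refrigeration_kw = 500.0   # conventional refrigeration to reach 17C ("over 500kW")
    evaporative_kw = 40.0      # evaporative CREC cooling ("less than 40kW")

    def pue(cooling_kw, it_kw=it_load_kw):
        """PUE = total facility power / IT power (all other overheads ignored in this sketch)."""
        return (it_kw + cooling_kw) / it_kw

    print(round(pue(refrigeration_kw), 2))                        # 1.5
    print(round(pue(evaporative_kw), 2))                          # 1.04
    print(round(pue(refrigeration_kw) - pue(evaporative_kw), 2))  # 0.46 - close to the 0.45 reduction quoted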

Alan Beresford has prepared an informative webinar explaining how evaporative cooling works and why it uses so little energy compared to conventional refrigeration. To watch it visit http://turt.co/dcme12

Beresford adds a final point: “When engineers and technicians are working on servers, it’s usually at the rear, where all the connections are. If they are going to have to work regularly in temperatures of 34C to 42C, there might be health issues to consider too. Keeping their working environment under 30C is a far more acceptable solution.”

To find out more about EcoCooling CRECs visit www.ecocooling.org

 

Fifth Year At The Top For Brand-Rex

For the fifth year running, Brand-Rex, the data networks solutions provider, has re-affirmed its leadership position in the UK market for volume of structured cabling systems sold.

The latest independent market research report, conducted by specialist building consultants BSRIA, shows Brand-Rex shipping a significant 15 per cent more ‘outlets’ than its closest rival. In product terms, ‘outlets’ represent complete system solutions, incorporating copper data cables, patch panels, patch cords and outlet sockets.

Martin Hanchard, CEO at Brand-Rex, commented, “It is very pleasing to note that although certain manufacturers sell more cable kilometres and others more component units, Brand-Rex remains the preferred UK supplier for fully integrated solutions.”

According to the research, Brand-Rex has 18.6 per cent of the UK market by volume with 1.33 million outlets sold in 2012 compared to the market total of 7.13 million.

A star performer for Brand-Rex last year was its Data Centre Zone Cable product. Thinner than competing 10Gigabit/s cables, this innovation has been a real success with installers. By reducing overcrowding in equipment racks and solving the problem of conventional cables blocking vital under-floor flows of refrigerated air, it lets end users and system integrators plan their data networks more efficiently.

“We have secured exciting new UK contracts including Manchester City FC’s new training ground; media companies MTV and Viacom; Odeon Cinemas; The Welsh Assembly; and the South Glasgow Super-Hospital. Brand-Rex has also supported many of the London Olympics venues,” said Hanchard.

Despite the challenging economic climate in the UK, Brand-Rex continues to invest heavily in new product development – evidenced by the launch of its Fibre Superset, a complete range of 10-to-40 Gigabit and 10-to-40-to-100 Gigabit migratable products. Unlike most products, these are designed for in-situ re-configuration to higher speeds, providing the much-sought-after future proofing required by network managers and specifiers.

“Brand-Rex’s environmental credentials continue to contribute substantially to the company’s success. Brand-Rex leads the industry,” said Hanchard. “We were the first in the world to be able to state that our complete global operations were carbon neutral, and we continue to challenge ourselves every day to further develop this competitive advantage.”

BRX0570

Brand-Rex Launches New Structured Cabling Systems Catalogue

Brand-Rex, the leading supplier of copper and fibre optic based network infrastructure solutions for enterprises, data centres and extreme environments, has just announced the completion of its latest catalogue, which details its entire range of structured cabling systems.

The 2013 Structured Cabling Systems catalogue is easy to navigate and features more than 7,500 products across 178 pages, including up-to-date information and technical data on the company’s range of copper, optical fibre, data centre and intelligent infrastructure management (IIM) systems. As well as including a comprehensive overview of the well-known 10GPlus, Cat6Plus, GigaPlus, FibrePlus, Blolite, MTConnect and SmartPatch ranges, it also profiles the latest innovations, such as the 10GPlus PCB Patch Panel Set and the MT Connect SuperSet pre-terminated fibre optic cabling range.

‘The new Structured Cabling Systems catalogue represents the culmination of over 40 years of design and manufacturing excellence,’ commented Audrey O’Brien, marketing communications manager at Brand-Rex. ‘With over 200 new products featured in its pages, it is an essential tool for those involved in the specification and installation of network infrastructures, and its design and layout makes finding information about our systems quicker and easier than ever before.’

Although previous editions have been published in Italian, Spanish and Portuguese, as well as English, this year’s Structured Cabling Systems catalogue has for the first time been produced in a French version due to demand from those countries where it is the mother tongue.

Also, in line with Brand-Rex’s ongoing commitment to sustainability, and as part of its corporate social responsibility (CSR) strategy, the catalogue will be available in an electronic format. Installers will be able to download a fully searchable PDF of the catalogue, while those that prefer a hard copy version will be able to obtain one that is printed on Forest Stewardship Council (FSC) approved paper.

The new Brand-Rex Structured Cabling Systems catalogue will be available from early April. A copy can be requested by emailing marketing@brand-rex.com or at www.brand-rex.com/newcatalogue

BRX0529

Retrofitting Cold-aisle Cocooning Does Not Mean Massive Disruption

Working round infrastructure that has evolved over time makes retrofitting hot/cold aisle containment a challenge. Multiple data and network cable runs, cooling pipes and mismatched cabinets mean many solutions will not work effectively.

Mark Hirst, T4 product manager at Cannon Technologies, looks at the options available to those who want containment but are not sure if their environment can handle it.

What is hot/cold aisle containment?
Hot/cold aisle containment is an approach that encloses either the input or output side of a row of cabinets in the data centre. The goal is to effectively control the air on that side of the cabinet to ensure optimal cooling performance.

With hot aisle containment, the exhaust air from the cabinet is contained and drawn away from the cabinets. Cold aisle containment regulates the air to be injected into the front of the hardware.

In both cases, the ultimate goal is to prevent different temperatures of air from mixing. This means that cooling of the data centre is effective and the power requirements to cool can, themselves, be contained and managed.

Challenges
Over time, all environments evolve. The most common changes in a data centre tend to be around cabling and pipework. What was once a controlled and well-ordered environment may now be a case of cable runs (power and network) being installed in an ad hoc way. In a well-run data centre, it is not unreasonable to assume this would be properly managed, but the longer it has been since the last major refit, the greater the likelihood of unmanaged cable chaos.

The introduction of high-density, heat-generating hardware such as blade systems has seen greater use of water-based cooling. This requires changes to the racks and the addition of water pipes. These make enclosing a rack difficult, as many solutions need to have pipework holes cut into them. The other challenge is that you cannot simply drill a hole: the retrofit will not normally include disconnecting and reconnecting pipes to run them through the holes.

These are not the only challenges. Just as the type of hardware in the cabinets has evolved, so have the cabinets themselves. What started out as a row of uniformly sized and straight racks may now be a mix of different depths, widths and heights. This is common in environments where there are large amounts of storage present as storage arrays are not always based on traditional rack sizes.

Cabinet design can also introduce other issues. If the cabinet has raised feet for levelling, something often seen with water-based solutions, there may be backwash of air under the cabinets. There may also be gaps in the cabinets, either down the sides or where there is missing equipment. These should already be covered by blanking plates. The reality in many data centres, however, is that there will be missing plates, allowing hot and cold air to mix.

The floor also needs attention. Structurally, there may be a need to make some changes to accommodate the weight of any containment system. This is not just the raised floor but the actual floor on which the data centre sits. The evolution of data centres and changes to equipment are rarely checked against floor loads, so adding more weight through a containment system is an opportunity to validate them.

Floor tiles degrade over time. They get broken, and replacements may not be the right size or have the right size of hole. No air containment system can be effective if there are areas where leaks can occur.

Prerequisites
It would be naïve to assume that retrofitting hot/cold aisle containment will not require some potential changes to the existing configuration. However, there are very few prerequisites to deal with.

1. Weights and floors as mentioned above.
2. Each enclosure should ideally line up height-wise with its counterpart across the aisle. Don’t worry about small gaps; we will deal with those later.
3. The height of each pair of enclosures should ideally be the same. However, there are ways around this, within reason. A height difference of a few centimetres can be managed easily. A difference of a metre or more is increasingly common in older environments and, whilst most containment solutions could not cope with this, we have designed our retrofit solution specifically for such “Manhattan skylines”, which are highly prevalent in many older data centres and where a cost-effective upgrade path to containment can significantly extend the useful life of the existing racks, data cabling and M&E infrastructure.
4. Normally, each row must line up to present an even face to the aisle that is being contained, in order to create an airtight seal.

The prerequisites may require a little planning inside the data centre and, in the most extreme case, a little moving of cabinets to get the best fit. Again, it is possible, as we have done with our own retrofit system, to design a solution for situations where it is not reasonable to move cabinets to create an even line along the containment aisle.

What equipment is required?
Once the prerequisites have been met, fitting aisle containment is a mix of installation and good practice cabinet management.

There are four steps to fitting containment.

1. Fix the ceiling eyebrow to the top of the cabinets. Where there are differences in cabinet heights, it will be necessary to fit blanking panels to create a uniform height both sides of the aisle. It may also be necessary to arrange for cables or pipes to be moved if they are too close to the edge of the cabinet.
2. Install the ceiling panels. These sit inside a framework which should be a uniform size. If the ceiling panels do not fit snugly the containment will be seriously compromised.
3. Fit air skirts under the cabinets to prevent any return flow of air. If the cabinets do not butt up against each other, fit skirts to cover the gaps between the cabinets.
4. Attach the doors at both ends of the aisle.

Tidying up
Tidying up the installation is about good data centre management. Here the typical steps include:

1. Fit blanking plates wherever there are gaps inside the cabinets.
2. Fit blanking plates to the sides of cabinets where cables run.
3. Replace broken or damaged floor tiles. If the containment is to the cold aisle, rebalance the cooling by changing the floor tiles for those with the right size of vent.
4. If the containment is for the hot aisle, check that the extraction and venting is evenly spread across the length of the aisle to prevent hot air zones being created.
5. Fit monitoring devices such as temperature and humidity sensors to ensure that there are no unexpected challenges caused by the containment.

None of these steps should cause problems for data centre facilities managers and will provide an opportunity to validate the benefits from the aisle containment.

Conclusion
There is a belief that retrofitting aisle containment to data centres is highly disruptive, requires a lot of time, is expensive, can have limited benefits and may not be suitable.

As can be seen, the process of retrofitting is not especially complex and builds upon good data centre housekeeping and management practices. Additionally, the time required to do the retrofit is easily manageable, and the work can be done without any impact on data centre operations.

Retrofitting aisle containment is one of the easy wins when it comes to recovering money spent on power by reducing excessive cooling while retaining, for a good few years more, the significant investment in racks, data cabling and M&E.

For more information go to: www.cannontech.co.uk

CAN0072

Why Grey Is The New Black

Captions: Cannon ServerSmart cabs and roof-mounted Raceway installed at Wi-Manx Heywood Data Centre; Cannon ServerSmart cabs installed within an Aisle Cocoon for cold aisle containment at Wi-Manx Heywood Data Centre; Cannon Technologies grey-white cabinets inside cold aisle

Power prices continue to rise, leaving data centre owners facing a double increase in power costs this winter. Some have already seen power costs increase by over 6% as contracts have renewed, and in 2013 there will be even more pressure from green taxes across the EU.

Although hardware is getting more efficient, what else can a data centre owner do? Mark Hirst, T4 product manager at Cannon Technologies Limited, looks at an unexpected saving that many data centre owners are ignoring.

Cutting power costs a daily task
Data centre owners spend as much time today looking for cost savings as they do running their facilities. Each generation of computer hardware is more energy efficient than the last and capable of reaching higher levels of utilisation. And while hardware replacement cycles have increased from an average of three years to five years, the most power-hungry hardware inside the data centre is still being changed at three years.

Another power-efficiency gain has been raising the temperature inside the data centre from 16C a decade ago to around 26C today. This has helped drive hardware replacement to ensure the operating envelopes of servers and storage are not exceeded. It has also reduced the cost of cooling.

Smaller measures such as enforcing blanking plates and keeping floors and plenums clear have also had a beneficial impact on reducing power bills.

Despite all this, power costs have continued to rise. This means that data centre owners need to look for new areas in which to make savings.

Lighting
One area that has been looked at before is lighting in the data centre. It may surprise some readers to discover that lighting can account for 3-5% of the power costs in a data centre. This has already led smart data centre owners to look at where savings can be made.

An increasingly common solution is to use intelligent lighting that only comes on when someone is in the data centre. These systems help to reduce cost but have to be properly designed. If someone is working inside a rack, they may not trigger the lighting system frequently enough to keep the lights on. This can leave them plunged into darkness and having to keep stopping to walk down an aisle to force the lights back on.

Bulbs and fittings
Another solution is to use longer-life, lower-power bulbs. This can result in several separate savings. Longer life means fewer replacements, which in turn means lower inventory costs. Fewer replacements also mean less time spent by maintenance crews changing light bulbs.

The type of light fitting also has an impact on the efficiency of the lighting. Direct light can be harsh, and too much reflectivity off surfaces is hard on the eyes. One solution is to use recessed light fittings so that the light source is not directly reflecting off a surface.

A second alternative for light fittings is to use a lighting channel. This works by creating a scatter effect on the light, giving it a more even distribution around the room.

Harsh light
Good examples of harsh light environments are hospitals and operating theatres. Bright white light and bright white walls may make them ideal for surgery, but they are also places where extended exposure can cause eye fatigue.

By comparison, most server halls have off-white walls at best, floors in a grey/blue colour and rows of black server cabinets. These colours have not been chosen to create a less harsh working environment; they have often been arrived at because of the lower cost of equipment and the ready availability of certain colours and types of paint.

Reflectivity
It may seem strange but it often takes more light to illuminate a server room than an operating theatre. The reason for this is the way light is absorbed by colour.

The light reflectance value (LRV) of a colour determines how much light the colour reflects rather than absorbs. Black reflects as little as 5% of the ambient light, while grey-white reflects up to 80% of the light.

This means that a server room filled with black cabinets, many of which are filled with black cased hardware, absorbs a lot of light. For this reason, it is not unusual to find engineers working at the back of cabinets wearing head torches to ensure that they have enough light to see by.

Grey the new black
Changing the colour of the server cabinets from black to grey or even white can make a significant difference to the amount of available light in a room and the cost of lighting that room.

Using the LRV of the server cabinet colour, it is not unrealistic to see a saving of around 30% on the lighting in a data centre. It may even be possible to achieve a greater saving, but one of the problems of making an environment too bright is that it becomes difficult to work in.

Effective lighting is a health and safety issue, and it may surprise some readers to know that the recommended light level in a data centre is 500 lux at 1m off the floor. How bright is that? Most building corridors are 100-150 lux, entrance halls around 200 lux and office spaces 300-400 lux. That means you need to put a lot of light into a data centre to deliver 500 lux – and if the surfaces are absorbing a lot of that light, you need to use a large amount of energy to reach the target.
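How reflectance feeds through to installed lighting can be estimated with the standard lumen method, where the luminous flux required is the lux target multiplied by the floor area and divided by a utilisation factor and a maintenance factor. The Python sketch below uses assumed, round-number utilisation and maintenance factors and an assumed hall size purely to illustrate the scale of the effect; they are not measured figures for any particular room or cabinet finish.

    # Lumen-method sketch: installed luminous flux needed to reach a lux target.
    # lumens_required = lux_target * floor_area / (utilisation_factor * maintenance_factor)

    lux_target = 500.0        # recommended level in the data hall, per the article
    floor_area_m2 = 500.0     # assumed hall size, illustrative only
    maintenance_factor = 0.8  # assumed allowance for lamp ageing and dirt

    def lumens_required(utilisation_factor):
        return lux_target * floor_area_m2 / (utilisation_factor * maintenance_factor)

    dark_hall = lumens_required(0.40)    # assumed utilisation factor with black cabinets and dark surfaces
    light_hall = lumens_required(0.57)   # assumed utilisation factor with grey-white cabinets and lighter surfaces

    saving = 1 - light_hall / dark_hall
    print(round(saving * 100))           # ~30 - broadly in line with the 30% saving suggested above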

Calculating the savings
A small 1MW data centre with 5% of its total power consumed by the lighting is spending 50kW just to light server rooms.

Reducing that by 30% delivers a saving of 15kW.

Assuming 10p per kWh, the savings work out at:

(15 x 10) / 100 = £1.50 per hour
1.5 x 24 = £36 per day
36 x 365 = £13,140 per year

While £13,140 might not seem an excessive amount, the average UK data centre draws around 5MW with the larger data centres drawing in excess of 20MW. Scale up the savings and the larger data centres could be saving over £240,000 per year.
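The same arithmetic, generalised to other facility sizes using the assumptions already stated above (lighting at 5% of total load, a 30% lighting saving and 10p per kWh), can be sketched in Python as:

    # Annual saving from a 30% reduction in lighting load, per the article's assumptions.
    lighting_share = 0.05    # lighting as a fraction of total facility power
    saving_fraction = 0.30   # reduction achieved with lighter-coloured cabinets and surfaces
    price_per_kwh = 0.10     # GBP per kWh, as assumed in the worked example above

    def annual_saving_gbp(facility_mw):
        saved_kw = facility_mw * 1000 * lighting_share * saving_fraction
        return saved_kw * price_per_kwh * 24 * 365

    for mw in (1, 5, 20):
        print(mw, round(annual_saving_gbp(mw)))
    # 1 MW  -> 13140   (matches the worked example above)
    # 5 MW  -> 65700   (the average UK facility size mentioned above)
    # 20 MW -> 262800  (consistent with "over £240,000 per year")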

The savings are not just on the power. The larger the data centre the more it will be paying in carbon taxes. This means additional savings are made through the reduction of power.

An additional benefit is that it creates a more engineer friendly environment. If the engineer does not need to use a head torch, then they can work more easily around the cabinet.

Conclusion
It is often easy to overlook the little things when saving money. The use of intelligent lighting systems has managed to cut costs in some data centres. Changing the type of lighting and light fittings can also deliver savings and improve the working environment.

The bigger savings, however, come from the colour of the cabinets and the equipment in the room. In a small data centre, it is possible to save the equivalent of a junior operator’s salary. In larger data centres, the savings could easily cover the cost of several highly trained systems and security engineers.

Whether the money saved is invested back into the business or simply taken off the bottom line, it is time to pay attention to lighting costs and the colour of the equipment in the data centre.

CAN0029

When Is A 19-Inch Rack Not?

Captions: Cannon award-winning sliding part panel; CannonGuard fitted in a ServerSmart cab; cold aisle with air management solutions installed, from Cannon Technologies

You’d probably think that a 19-inch rack is a fairly standard item. These days that couldn’t be further from the truth. Most data centres have their idiosyncrasies – and so does most “supposedly 19-inch” active equipment. In this article Mark Hirst, T4 product manager with Cannon Technologies, explains how to cope.

In the data centre industry, everyone seems to be doing their bit to ‘take the heat’ off the embattled managers, who need to save on their power bills.

Server manufacturers, for example, are creating new models that don’t need to be chilled quite so religiously. Other vendors are working out ways to stop fans running needlessly. Fans account for 15 per cent of a server’s power usage, so by reconfiguring the algorithms that control them, massive power savings can be made without any loss in processing power.

Facebook, for instance, has discovered that heat and humidity rarely coincide in its data centre environments, which means it can save power by using cheaper water-vapour cooling techniques.

Switches. A load of hot air?
One handbrake on progress, oddly enough, has been the lack of help from networking vendors. Yes, they are wonderful at communicating on almost every level of the seven-layer ISO model, from applications through protocols to the data link layer. But at the physical level something goes amiss in one tiny area: airflow is a case in point.

It seems odd that network equipment, for all its multiple features, has to be improved upon by its 19-inch rack. But sadly that is the case. This is only possible because a good 19-inch rack is now far more sophisticated than most people could imagine.

Take the problem of airflow and heat exchange. No two vendors ever seem to agree on the best way to make the air flow over a piece of networking equipment. In fact, no two systems seem to be alike, even when the models are made by the same company. So even if you standardise on networking kit from a single manufacturer, you might find it all has different air outlets and different mountings. Making it all fit into one standard rack could be a big pain in the assets. But, as we shall see, while the insertion of clashing kit may be a mounting problem, it is not an insurmountable one.

So let me tell you what a rack manufacturer should do to work around the limitations of the equipment – to make your data centre efficient and effective. Airflow isn’t the only consideration; there are also some rather fussy vendor specifications to worry about. For example, there is one vendor which specifies 1000mm-wide cabinets to house certain models of its switches. But this is not always necessary: I have known people who manage to house such kit in a 600mm cabinet, and we can certainly manage it properly with an 800mm-wide rack.

To accommodate all of the quirky, not-quite-so-standard 19-inch equipment, your rack manufacturer needs a full toolkit of components: ones that cater for all kinds of air inlets and outlets, police the cold aisle and the hot aisle, and keep their separate contents truly away from each other.

To achieve that, you ideally need to have all your boxes facing the same direction, with the fascia at the front and the cables coming out of the back, and still have a consistent flow of air. Of course, in reality they don’t – so your rack manufacturer needs to have rack components that divert airflow and cables so that both end up in the right place.

The major switch vendors all have seriously powerful switching platforms that would delight a network manager with their multiple fabric switches. But while the warp speed data flows they deliver on the backplane are astonishing, the lack of cohesion on airflow that they offer undermines some of their progress.

Many of these switches are magnificent pieces of engineering from a data comms perspective. But without rack air management the hot air they would channel back into the cold aisle of a data centre increases the workload of the cooling systems.
The money that these network vendors save for companies – by improving the flow of data and making them more productive – is counterbalanced by the money lost through inefficient heat management: users often have to spend more on electricity to maintain the cold aisle temperature.

So, your rack manufacturer needs to be able to do ‘magic’. They need to make air go in from the front and come out at the back even if that reverses the current situation. This requires specialist air diverters that will channel all the hot air from the various equipment into one harmonious airflow.

It’s draughty in here
Governments spend a lot of money trying to persuade home owners to insulate their houses. Part of the solution is to stop expensive heat leaking out and cold draughts coming in. That is another simple, but highly effective, regime that we can apply in the data centre. Properly implemented it can bring instant savings on a significant scale.

There are a few hurdles to overcome first. The key to maintaining good discipline is keeping everything tidy – which isn’t always easy when there are so many complex connections and the potential for a rapid turnover of moves and changes.

In an ideal world, all cables will be dressed to one side, all space neatly assigned and the full capacity of the rack efficiently used – the ‘Bobby Charlton’, as it’s often known.

But this does not always happen because busy cabling engineers do not always find the time to maintain that discipline. Sometimes it’s more important to get the connections in place and get the job done.
Patching-up
Air control can be one of the first casualties of moves and changes, because a new cable will effectively punch a hole in any carefully created barrier. The leakage of hot air through this hole, as explained earlier, can be very expensive. But plugging that gap needs to be far easier for the busy cabling engineer. We’ve created a simple solution that grants a cabling engineer freedom of movement and access for the cable, while blocking off the flow of air. It uses a brush system similar to those in letter boxes and (though not often enough) in under-rack floor cut-outs. This simple but highly effective device will save engineers valuable time and save the operations manager’s power bill too.

To make the job even easier we devised a way to manage these brush strips too. Our solution is a brush strip on a hinge. This means engineers can pull the strip out when they need access, insert new cables and patch them in to the relevant connections, then rotate the whole thing back into place once they have finished and, hey presto, perfect airflow control is restored.
This sort of ‘usability-based approach’ is particularly useful for data centres which have to make a lot of moves and changes but want to maintain their high standards of organizational discipline and power-saving practices.

BBC and QVC engineers, for example, often have to patch in audio visual feeds for broadcasting projects, then a short time later reconfigure them for the next event. In the circumstances they could be forgiven for forgetting to keep their closets tidy. But tools like ours that the top-tier rack manufacturers provide, within the framework of a high quality but versatile 19-inch rack, enable data centres to keep their shape and their discipline month after month and year after year.

Some in the industry argue that patching frames are an alternative to enclosed racks. Outside the heat-controlled data hall, perhaps they are. Inside it, they are only an alternative if you don’t value security. Or airflow. Or long-term cable management.

Security
There’s sadly not room in this article to tell you all about security – but when planning your new data centre or upgrade please think security. The value of data and service in your DC is immense and human error is the main cause of downtime. It’s now imperative to keep racks secure – so make sure your racks can accommodate a choice of key code, RFID, fingerprint and iris scanning access control. You may not need them now, but you sure need an upgrade path!

And they will probably need to be linked to NOC control so they can centrally allow or deny access; give timed access slots; or require that a supervisor also authenticates to watch over a vendor engineer.

Also ask yourself whether you might need the facility for automated photo or video capture on door-open. Accountability and traceability requirements are becoming far more onerous, and ‘standard’ racks have neither these capabilities nor the in-built management systems to control them and interface to NOC systems.

Cooling Capability
In-rack power density has gone up by 63% as a global average (source: DCD Intelligence 2012 Census), and standard racks simply don’t have the wherewithal to accept suited in-row coolers, let alone in-rack cooling solutions.

A rack is a long-term piece of infrastructure and very difficult to change out. Make sure it’s going to be capable of adapting over its life of ten years or so.
Stay in control
Measure everything! There’s a universal truth that ‘if you can’t measure it, you can’t manage it’, and nowhere is this more true than within the DC rack. Measurements should range from dynamic equipment power consumption to multiple in-rack temperature and humidity sensors, so that hot-spots and equipment problems can be identified instantly, before they cause outages or start fires, and so that cooling equipment can be closely controlled to consume the minimum power possible.

To consolidate all of this measurement information, make sure your rack vendor offers a rack management system or DCIM platform that can aggregate it and communicate with the NOC.
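As an illustration only – CannonGuard’s real interfaces are not documented here, so the record format and thresholds below are assumptions – a rack management layer essentially boils down to collecting in-rack readings and forwarding exceptions to the NOC:

# Hypothetical sketch of in-rack measurement consolidation; field names and limits are assumed.
from dataclasses import dataclass

@dataclass
class RackReading:
    rack_id: str
    temp_c: float        # in-rack temperature sensor
    humidity_pct: float  # in-rack relative humidity
    power_kw: float      # dynamic equipment power draw

def noc_alerts(readings, max_temp=32.0, max_rh=70.0, max_power=8.0):
    """Return the messages a rack controller might forward to the NOC."""
    msgs = []
    for r in readings:
        if r.temp_c > max_temp:
            msgs.append(f"{r.rack_id}: hot-spot at {r.temp_c:.1f} C")
        if r.humidity_pct > max_rh:
            msgs.append(f"{r.rack_id}: humidity {r.humidity_pct:.0f}% over limit")
        if r.power_kw > max_power:
            msgs.append(f"{r.rack_id}: power draw {r.power_kw:.1f} kW over plan")
    return msgs

print(noc_alerts([RackReading("R12", 33.5, 45.0, 6.2)]))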

The best way to stay efficient is to plan for every eventuality and keep everything in its place. The key to this is to be well ordered, and to have a plan for where everything goes and why. Never assume anything is OK – whether it’s a misplaced cable that seems to be harmless, or a rogue but invisible draught of hot air. Everything has a consequence and consequences need to be planned for.

The best way to plan is to keep everything well ordered and visible. There’s an old military saying “Assumption is the mother of all foul-ups.” We’ve learned over thirty years of designing racks to leave nothing to chance and make no assumptions.

With a top-tier 19-inch rack you get complete visibility, a range of tools and components to manage everything from cables to airflow – whatever your mix of legacy and new equipment vendors – and a well-ordered system of measurement.

The once humble rack has come a long way. But beware – not all racks are created equal!

CAN0025

Safehosts London City Data Centre Opens With A 1.06 PUE

What’s very light, hardly eats, sits outside and loves the heat? Phil Turtle went to Safehosts’ new 5MW co-location data centre in the City of London to find out.

To be honest, this was my third visit to this highly impressive new facility disguised as a 1970s office block in the vicinity of London’s Borough tube station. The first – a midnight tour already infamous in the data centre industry – cannot be described in these pages, save to say that a second visit during daylight hours was necessary to unscramble recollections of bright LED lighting, temperatures that were approaching arctic and Safehosts’ technical wizard Lee Gibbins telling the group that they’d got 100kW of IT load running and the cooling system was only drawing 400 watts. It was so cold we envisaged polar bears moving in sometime soon.

This third visit, however, was to meet with Safehosts’ CTO Lee Gibbins and Alan Beresford, md of EcoCooling, whose equipment Safehosts had installed – not only because it hardly ‘eats’ any energy, but for a host of other practical and business reasons: it ‘sits outside’ (koala-style, half-way up the wall, as it turns out) and doesn’t mind very hot days. Alan Beresford said, “The cost of energy and the capital cost of cooling equipment are by far the biggest non-productive costs of any data centre. For a new co-location site in central London, space is also at a premium, and any white space which has to be given over to CRAC (computer room air conditioning) units or even to in-row coolers uses up many racks-worth of space which could otherwise be used for operational racks to generate revenue.”

Safehosts’ Gibbins explained that their building used to be a five-storey commercial office block. “I initially selected it because I foresaw the opportunity to use the existing windows as air ducts without extensive building works, and hence to have a very fast project turnaround with low development costs. It also meant that we could deliver the project with very little impact on the visual appearance of the building and – most unusually – no discernible noise.”

However, there were limitations with the building that meant normal cooling would not have been possible. The top storey, being a lightweight upward extension, meant that the roof was essentially non-load-bearing and unsuitable for heavy refrigeration plant. The small yard at the rear was large enough only for the initial two Himoinsa 500kVA generator sets, the fuel store, the substation for one of the site’s dual 5MW main feeds and parking for four vehicles. So an innovative approach to cooling equipment space requirements was very high on Gibbins’ agenda.

Having used EcoCooling’s fresh air CREC (Computer Room Evaporative Cooling) system for over three years at Safehosts’ Cheltenham data centre, Gibbins had no qualms over using the same technology to achieve the PUE target of 1.06 that the business had set itself.

“And that’s 1.06 PUE from day one with an initially sparsely populated co-location facility, not a hopeful full-capacity prediction,” Gibbins said.

“Few data centres spend much of their life at, or even near, full capacity,” explained Beresford. “If we take one floor at Safehosts as an example, at around 1MW capacity, then using the conventional approach you would need to install three big, heavy 500kW coolers to provide N+1 redundancy – possibly deploying two initially.

“But whilst monsters such as these may have a 3:1 coefficient of performance (CoP) at full load – i.e. 100kW of electricity for 300kW of cooling output – at part load that quickly falls to, at worst, 1:1. So for 150kW of cooling it will still be consuming 150kW! This is why some partly populated data centres routinely have a PUE of 2.5 or worse.”
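Restating the arithmetic in Beresford’s example as a quick check (all figures are taken from the quote above):

# Figures from the quote above, restated.
full_load_cooling_kw = 300.0   # one big cooler delivering 300kW of cooling...
full_load_input_kw = 100.0     # ...for 100kW of electricity: CoP = 3.0
part_load_cooling_kw = 150.0   # at part load the CoP can fall to roughly 1:1...
part_load_input_kw = 150.0     # ...so 150kW of cooling still consumes 150kW

print(full_load_cooling_kw / full_load_input_kw)   # 3.0 - full-load CoP
print(part_load_cooling_kw / part_load_input_kw)   # 1.0 - part-load CoP

# At 1:1, cooling alone matches the IT load kilowatt for kilowatt, which is how a
# partly populated site drifts towards a PUE of 2.5 once other overheads are added.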

Direct fresh-air evaporative cooling, on the other hand, requires very little power, and the EcoCooling units come in much smaller ‘chunks’ – 30kW, 60kW or 90kW. So, as Gibbins explained, “we could have started by installing just two or three units initially, though in fact, as the CapEx is so much lower, we decided to start with six.”

Compared to the 50-100kW that conventional cooling consumes for every 100kW of IT load, this solution draws a maximum of 4kW per 100kW. “That’s not only a massive energy saving,” explained Gibbins, “it also means I’ve got an extra 1.3 MW of my 7 MW incoming supply available for revenue-generating IT load.

“I imagine everyone knows just how easy it is to max-out the utility power feeds these days – long before a data centre is full. So having an extra 1.3 MW for production kit is a major bonus.”
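A quick check of the figures quoted above, restated in a few lines (values as given in the article):

# Figures from the article, restated.
it_load_kw = 100.0
conventional_cooling_kw = (50.0, 100.0)   # quoted range for conventional cooling per 100kW of IT load
crec_cooling_kw = 4.0                     # quoted maximum for the EcoCooling CREC units

# Cooling-only contribution to PUE: roughly 1.5-2.0 conventionally versus about 1.04 here,
# consistent with the claimed 0.04 PUE impact and the 1.06 day-one figure.
print(1 + crec_cooling_kw / it_load_kw)                    # 1.04
print(1 + conventional_cooling_kw[0] / it_load_kw)         # 1.5
print(1 + conventional_cooling_kw[1] / it_load_kw)         # 2.0

# Applying a 46-96kW saving per 100kW of IT load across the site's multi-megawatt
# IT capacity is where the quoted 1.3MW of freed incoming supply comes from.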

Returning to the lack of space for cooling equipment at the Safehosts City of London site, the question had to be asked: How did Beresford’s team at EcoCooling solve the space problem? Sky hooks? Not far off as it transpired. “We like to throw away the ‘it’s always done like this’ approach – which frankly is all too prevalent in data centre design,” said Beresford, explaining that by applying a little lateral thinking, the matrix of window openings on the rear wall of this old office block was ideal to enable exterior wall-mounting of the small and lightweight EcoCoolers. “Each one only weighs around 90kg, well within the wall’s load bearing strength.”
This writer had noted, with some confusion, that the air-flow routing within the data hall was far from conventional. In the cold aisle, cold air fell down through filter panels in the ceiling rather than coming up through floor tiles. “Hot air rises, cold air falls,” explained Beresford with a wry smile. “Conventional setups push the cool air under floors and upwards through floor grilles, working against natural convection. We work with convection – since it’s free – and not against it.”

That answered one question, but why were there servo-operated louvered flaps between the hot aisles and the cold air inlet from the external cooling units? Strangely, it turns out that, whilst conventional data centre cooling arrangements go to great lengths to keep the expensive cold air from being contaminated by hot air leakage, in the evaporative cooling scenario the incoming air is frequently so cold that hot air needs to be re-circulated and mixed into it in order to keep the servers warm enough! “Many servers, if they get down to 10°C, will actually shut themselves down,” explained Beresford, “and we don’t want outages because the servers are seeing air which is too cold!”

Of course three of the big questions around direct air evaporative cooling are atmospheric contamination, relative humidity and ‘very hot’ days.

On the contamination front, the coolers are wrapped in a filter ‘blanket’ giving an EU4/G4-grade standard as a first line of defence.

Further filters to G4 standard are fitted instead of ceiling tiles to the cold aisles in the Safehosts installation – but these, it turns out, are a defence against particulates from both the re-circulated hot air and the incoming cold air. This gives the same filtration standard as a conventional CRAC installation.

“And using 600mm square ceiling filters saved me the cost of ceiling tiles,” quipped Safehosts’ Gibbins.

“One other misconception that needs to be explained,” said Beresford, “is that direct evaporative cooling cannot meet the relative humidity (RH) standards required in a data centre. The unique patented EcoCooling control system manages both temperature and RH. The temperature is stable and the RH never goes over the allowable limits – so, contrary to rumour, the incoming air is not over-laden with moisture.”
And ‘very hot’ days? Well of course, explained Beresford, in a temperate climate such as the UK, there aren’t actually that many – which is not so good for us humans – but great for data centres.

He went on to paint a very interesting picture. “Very hot days are actually quite short term events. We can always be sure, in the UK for example, that come night-time the temperature of the external air will fall below 20° C. So there is only a limited time when it is technically ‘very hot’.”

Refrigeration units become much less efficient as the external ambient temperature rises. Because the condenser units are out in the sun they get extremely hot – far hotter than ambient temperature. They also suffer from their hot air exhaust being sucked back into the inlet raising internal temperatures even higher and causing failures.

As readers will know, on very hot days conventional cooling often can’t cope and it’s quite common to see the data centre doors wide open and massive portable fans deployed to move lots more external air through the datacentre to try to keep things cool. And, to be honest, just getting more air through like that usually works.

Evaporative direct air cooling actually has two very significant advantages over refrigeration-based cooling on ‘very hot’ days, Beresford claims. Firstly, airflow is not restricted, because EcoCoolers have masses of airflow available. So as the ambient temperature increases, the system controller ramps up the fans, delivering far more cool air to the server rows than CRAC or DX (direct expansion) systems can – without having to open the data centre doors to the outside air.

“What’s more, the higher the temperature, the better the cooling: in the UK, the hotter the day, the lower the humidity, so the level of cooling actually increases. So on a hot day in the UK an EcoCooler can reduce the temperature by 12 degrees or more – the air coming off the cooler will never be above 22°C, whatever the outside temperature.”
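As a rough illustration only – this is not EcoCooling’s published control model – the supply temperature of a direct evaporative cooler can be estimated with the standard saturation-effectiveness relation. The effectiveness and weather figures below are assumed, chosen to reflect a hot, dry UK afternoon:

# Illustrative estimate only; effectiveness and weather values are assumptions.
def supply_temp_c(dry_bulb_c, wet_bulb_c, effectiveness=0.9):
    """Standard saturation-effectiveness estimate for a direct evaporative cooler."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Assumed: 33C dry-bulb with a 19C wet-bulb (a dry, very hot UK day).
print(supply_temp_c(33.0, 19.0))   # roughly 20.4C - a drop of over 12 degrees, still below 22C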

Although direct air evaporative cooling seems to have many advantages, Beresford is a realistic engineer. “Direct air evaporative cooling isn’t for everyone or everywhere. But in many countries and many operations it offers massive energy savings and significant data hall space saving – allowing better, revenue-earning, use of scarce power and sellable space – as Safehosts have demonstrated here.”
From Safehosts’ perspective Gibbins concludes, “Using evaporative direct air cooling with its zero rack space internal footprint, lightweight wall-mounted coolers and 0.04 effect on PUE, has allowed us to turn an unlikely building into a state of the art colocation centre right in the City of London and enables us to start with a 1.06 PUE from day one. I’m very happy with that.”

New On-demand Professional Press Release Writing Service For Small Companies In The Data Centre Industry

Many thousands of small specialist companies have great stories to tell but don’t get featured in the media because they can’t afford expensive PR agency fees.

A new service from industry directory portal DATACENTRE.ME solves this problem by providing an on-demand press release, feature article and case study writing and distribution service.

The DATACENTRE.ME PR service is staffed by technology journalists who are experts in the technology and business of data centres and of the vast array of manufacturers, suppliers and service companies that operate in this fast-growing sector.

Said Caroline Hitchins, founder of DATACENTRE.ME, “As one of the main ‘meet-me’ organisations in the technology market, companies large and small regularly ask me who I recommend to do the very specialist PR they need in this sector. But often their budgets are too small for it to be cost effective for them to take on a full service PR agency.

“So we’ve teamed up with the specialist PR agency we frequently recommend, DataCenterIndustryPR, to provide an expert on-demand PR service for smaller companies.”

DataCenterIndustryPR md and CTO Phil Turtle said, “I’ve always been passionate about helping small businesses and it had always upset me to meet business people with phenomenal products and services yet neither of us could afford to work together.”

The concept of an on-demand press-release writing and distribution service is, of course, not new. What is new is one specifically for a highly technical market which has on tap all the specialist knowledge and experience of the key boutique PR agency in the sector, backed by a distribution service with probably the industry’s most comprehensive global database of journalists who write about data centre industry topics.

The DATACENTRE.ME PR Service starts from as little as £280 (€330, $440) to edit a company’s own draft press release and distribute it worldwide or £480 (€560, $760) for one of DataCenterIndustryPR’s journalists to interview you, write and distribute a release.

The service also includes feature-article writing and case-study writing, plus a PR strategy and ideas brainstorm phone conference for just £180 (€220, $285).

For more information go to www.datacentre.me and choose the ‘Media’ tab, then ‘Our PR Service’.


EcoCooling New Approach To Co-Location Adds £millions To The Bottom Line

Co-location data centres are under increasing pressure to cut prices whilst increasing resilience, power density and power availability. A new approach can make space for 20 per cent more revenue-earning racks and cut the power bill by £1 million a year.

A new technique pioneered by EcoCooling – described by Amazon’s James Hamilton as “using the building as an air-handler” – when combined with the company’s patented CREC (computer room evaporative cooling) units, has been demonstrated to free up space for an additional 200 revenue-generating racks in a typical 1000-rack large data centre, whilst cutting the energy requirement for cooling from over 1,700kW to a mere 160kW.

“With increasing demand for co-location rack space, many operators have run out of space. The EcoCooling CREC cooling system can create an additional 20 per cent of rack capacity compared with either perimeter CRACs or in-row cooling because it is installed external to the white space,” explained EcoCooling md and technical director Alan Beresford.

As each co-location rack can generate potential revenue in excess of £1000 per month, this extra capacity has real significance. 200 extra racks in a 1000 rack data centre could generate over £2 million in additional annual revenue.

At the same time, cutting the cooling energy bill by over 10,000,000 kWh could mean a cost saving in the region of £1 million per year.
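Restating those revenue and energy figures as a quick check (the 10p/kWh price is an assumption, borrowed from the lighting article later in these pages):

# Figures from the article, restated.
extra_racks = 200
revenue_per_rack_per_month = 1000                       # pounds, "in excess of £1000 per month"
print(extra_racks * revenue_per_rack_per_month * 12)    # 2,400,000 -> "over £2 million" a year

cooling_saving_kw = 1700 - 160                          # quoted reduction in cooling power
print(cooling_saving_kw * 8760)                         # ~13.5 million kWh a year, consistent with "over 10,000,000 kWh"

price_per_kwh = 0.10                                    # assumed 10p/kWh
print(10_000_000 * price_per_kwh)                       # 1,000,000 -> "in the region of £1 million"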

“And that’s on top of significantly lower capital costs of EcoCoolers compared to standard refrigeration equipment,” said Beresford.

These are not merely theoretical calculations, either. Beresford explained, “With over 150 data centres now using the EcoCooling CREC evaporative cooling system, the figures above are based on real installations.

“At one such site, the combination of evaporative cooling and the ‘building as an air-handler’ design technique meant that the rack complement on each of that data centre’s four floors could be increased from 150 to 180. Over four floors that will add an extra 120 revenue-earning racks when the build-out is complete.”

A realistic engineer, Beresford points out, “Evaporative cooling, such as that used by our own EcoCoolers, is a relatively young technology and is not suitable for every data centre in every country.”

However, he pointed out, evaporative cooling has now come of age and can be used by a great many data centres throughout the temperate regions of the world.

“Considering the potentially massive increases in revenue and reductions in operating costs possible, data centre operators would be short-sighted not to include evaporative cooling in their due diligence at the planning stage,” Beresford said.

More information on the EcoCooling products and cooling solutions can be found at www.ecocooling.org

 

Grey Cabinets Help Cut Data Centre Lighting Costs By A Third

Data centres could save a third of their lighting costs by replacing black server cabinets with grey ones, believes Mark Hirst, T4 Product Manager, Cannon Technologies Limited. In fact, choosing to use grey or white cabinets during an equipment refresh can make bigger savings than intelligent lighting systems which use movement-sensitive lights.

Ambient light levels in a server room are crucial for the engineers who maintain the systems, but ironically it is the equipment itself that can make the most significant impact. The light reflective value (LRV) of black server racks can be as little as 5%, whereas grey or white will reflect up to 80% of the light. “Using the LRV of the server cabinet colour, it is not unrealistic to see a saving of around 30% on the lighting in a data centre,” says Hirst.

“It may also be surprising to some people that lighting can account for 3-5% of the power costs in a data centre,” continues Hirst. This comes at a time when data centre owners are facing a double increase in power costs. Not only have some seen power costs rise by over 6% as contracts have been renewed, but the carbon taxes introduced as part of the European Union’s green policies will further increase bills.

The savings that can be made are significant. A small 1MW data centre where 5% of the total power is consumed by lighting will use 50kW to light the server rooms. A 30% reduction would deliver a saving of 15kW. Assuming 10p per kWh, this data centre could save £36 per day, or £13,140 per year – the equivalent of a junior operator’s salary.

An average UK data centre, which would draw around 5MW, would save over £65,000 per year. On top of those power cost reductions there would also be the savings in carbon tax and all of this could be achieved by just choosing different colour cabinets. “Manufacturers have already taken this on board,” says Hirst. “HP’s kit is now in grey and Cisco’s new data centre in the USA used bright white cabinets. In the data centre, grey is definitely the new black.”
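Hirst’s arithmetic, restated as a quick check (all figures as quoted above):

# Figures from the article, restated.
facility_kw = 1000                       # a "small" 1MW data centre
lighting_kw = facility_kw * 0.05         # 5% of total power on lighting -> 50kW
saving_kw = lighting_kw * 0.30           # 30% reduction -> 15kW
price_per_kwh = 0.10                     # 10p per kWh, as assumed above

per_day = saving_kw * 24 * price_per_kwh
per_year = per_day * 365
print(per_day, per_year)                 # 36.0  13140.0

# The same sums for a 5MW site: 250kW of lighting, 75kW saved,
# roughly £65,700 a year - matching the "over £65,000" figure quoted above.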

Ref: CAN0044

Shaun Barnes Named As New MD UK And EMEA at IT And Critical Facilities Consultancy i3 Solutions Group

i3 Solutions Group, the vendor-neutral critical IT and facilities consultancy established last year by well-known industry veterans Ed Ansett and Mike Garibaldi, announces the appointment of banking-sector IT guru Shaun Barnes as its UK and EMEA managing director.

Announcing the appointment, i3 Solutions Group managing partner Ed Ansett said, “Because i3 Solutions Group is trailblazing the concept of fully integrating IT application architecture, technology infrastructure and critical facilities at the design phase of projects such as data centres and trading floors, it was imperative for us to get away from the conventional consultancy model of only one or two ‘names’ constantly on aeroplanes to meetings.

“With the appointment of Shaun Barnes as md for UK and EMEA, we are underlining our mission to have very senior consultants permanently on the ground in each territory – able to interface at ministerial and C-levels into governments and major enterprise – and to take all necessary business decisions. This will mirror the capabilities we already have in North America and AsiaPac.”

Barnes comes to i3 Solutions Group with a track record that includes critical facilities, IT, networking, M&E (mechanical and electrical) and corporate real-estate with Barclays Capital, ABN Amro, Royal Bank of Scotland and most recently consultancy firm ITPM where he ran the global data centre and corporate real-estate practices, delivering large programmes of work for ICAP and DBS Bank.

Barnes, who was based in Singapore for several years but is now operating out of London said, “I’m looking forward to re-establishing many UK contacts and making new contacts in the UK and EMEA. I will be attending many industry events in the coming months.”

One of i3 Solutions Group’s unique points is that they bring the normally siloed disciplines of IT applications, IT infrastructure, Facilities Management and M&E together in equal weight in one consultancy.

According to Ansett, though, “only a small percentage of CIOs and CEOs currently ‘get’ this concept — there are massive advantages to having IT, FM and M&E co-planning and co-designing critical facilities like data centres, trading floors and new corporate offices.”

“There is a lack of consistency across technology strategy, design, implementation and operation in the market. There is also inconsistency across the IT stack. The major consulting firms tend to focus on business process and applications consulting and less on technology infrastructure. Similarly systems integrators concentrate on technology hardware and have little ability in critical facilities,” said Ansett.

“Naturally the large OEMs have expertise in many areas, but are predisposed to their own hardware or software products. This continuation of ‘silos’ of expertise can often lead to £millions of additional downstream costs because the design thinking was not properly ‘joined-up’. As an independent consultancy with expertise in each field we engage at any point in a project.”

Barnes concluded, “I’m happy to explain the advantages of silo-removal and joined-up planning for critical facilities – be they data centres, trading floors or even corporate offices. It’s going to be several years before this thinking is adopted by the conservative mainstream – but we are here to help enlightened companies set new trends.”

Ref: i3SG0010

Effective Datacenter Cooling Means Big Savings

As the density and power usage in the datacenter continues to rise, so does the heat. When planning the design of the datacenter, it is important to understand the workload and the expected increase in power in order to effectively deploy cooling technologies. Without this, equipment and cooling are not properly aligned and money is wasted.

 Hot aisles, cold aisles, containment, the correct management of racks and when to deploy HVAC are all key considerations in heat management. The biggest challenge in cooling the datacenter is designing a solution that is flexible and which doesn’t create its own hot spots that cannot be easily cooled.

Cannon Technologies has decades of experience in managing complex cooling issues within the datacenter, and Mark Hirst, T4 product manager, explains how to plan and deploy effective cooling.

 The numbers tell the story

In late 2011, DatacenterDynamics released their annual energy usage survey for datacenters. It made for stark reading.

1. Global datacenter power usage in 2011 was 31GW. This is equivalent to the power consumption of several European nations.

2. 31GW is approximately 2% of global power usage in 2011.

3. Power requirements for 2012 were projected to grow by 19%.

4. 58% of racks consume up to 5kW, 28% consume 5-10kW and the rest consume more than 10kW per rack.

5. 40% of participants stated that increasing energy costs will have a strong impact on datacenter operations going forward.

If these numbers come as a shock, they should be considered against several other factors that will impact the cost of running datacenters.

1. Environmental or carbon taxes are on the increase and datacenters are seen as a prime target by regulators.

2. As a result of the Fukushima nuclear disaster, several European countries are planning on reducing and even eliminating nuclear as a power generation source. This will create a shortage in supply and drive up power costs.

3. Around 40% of all power used in the datacenter goes on removing heat and could be considered waste.

4. The move to cloud will only shift Capital Expenditure (CAPEX) out of the budget. Power is Operational Expenditure (OPEX) and will be added to the cost of using the cloud, thus driving OPEX up at a faster rate than CAPEX is likely to come down.

 Design for cool

Removing heat effectively is all about the design of the cooling systems. There are several parts to an effective cooling system:

1. The design of the datacenter.

2. Choosing the right technology.

3. Effective use of in rack equipment to monitor heat and Computational Fluid Dynamics (CFD) to predict future problems.

 Datacenter Design

A major part of any efficient design is the datacenter itself. The challenge is whether to build a new datacenter, refurbish existing premises or retrofit cooling solutions. Each can deliver savings, with a new build or refurbishment likely to deliver the greatest. Retrofitting can also deliver significant savings on OPEX, especially if reliability is part of the calculation.

1. Build

Building a new datacenter provides an opportunity to adopt the latest industry practices on cooling and take advantage of new approaches. Two of these approaches are free air cooling and splitting the datacenter into low, medium and high power rooms.

In 2010, HP opened a free air cooling datacenter in Wynyard, County Durham, UK. In 2011, HP claimed it had only run the chiller units for 15 days resulting in an unspecified saving on power.

2. Refurbish

Refurbishing an existing datacenter can deliver savings by allowing easy access to rerun power and change out all the existing cooling equipment.

In 2011 IBM undertook more than 200 energy-efficiency upgrades across its global datacenter estate. The types of upgrades included blocking cable and rack openings, rebalancing air flow and shutting down, upgrading and re-provisioning air flow from computer room air conditioning (CRAC) units. The bottom line was a reduction of energy use of more than 33,700MWh which translates into savings of approximately $3.8m.

3. Retrofit

Retrofitting a datacenter can be a tricky task. Moving equipment to create hot aisles and deploying containment equipment can have an immediate impact on costs.

Kingston University was experiencing problems in its datacenter. IT operations manager Bill Lowe admits: ‘As the University has grown, so too has the amount of equipment housed in its datacenter. As new racks and cabinets have been added, the amount of heat generated started to cause issues with reliability and we realised that the only way to deal with it was to install an effective cooling system. Using Cannon Technologies’ Aisle Cocoon solution means that we will make a return on investment in less than a year.’

 Choosing the right technology

Reducing the heat in the datacenter is not just about adding cooling. It needs to start with the choice of the equipment in the racks, how the environment is managed and then what cooling is appropriate.

1. Select energy efficient hardware.

Power supplies inside servers and storage arrays should all be at least 85% efficient when under 50% load. This will reduce the heat generated and save on basic power consumption.

2. Raise input temperature.

ASHRAE guidelines suggest 27°C as a sustainable temperature which is acceptable to vendors without risking warranties. Successive generations of hardware are capable of running at higher input temperatures, so consider raising this beyond 27°C where possible.

3. Workload awareness.

Understand the workload running on the hardware. High-energy workloads such as data analysis or high-performance computing (HPC) will generate more heat than email servers or file-and-print operations. Mixed workloads cause heat fluctuations across a rack, so balancing workload types will enable a consistent temperature to be maintained, making it easier to remove excess heat.

4. Liquid cooling.

Liquid cooling includes any rack that uses water or any gas in its liquid state. Industry standards body ASHRAE has recently begun to talk openly about the benefits of liquid cooling for racks where very high levels of heat are generated. This can be very hard to retrofit to existing environments due to the problems of bringing the liquid to the rack.

5. Hot/cold aisle containment.

This is the traditional way to remove heat from the datacenter. Missing blanking plates allow hot air to filter back into the cold aisle, reducing efficiency. Poorly fitted doors on racks and the containment zone allow hot and cold air to mix. Forced air brings other challenges: missing and broken tiles allow hot air into the floor, while too much pressure prevents air going up through the tiles.

6. Use of chimney vents

This can be easily retrofitted even in small environments. Using fans, the chimney pulls the hot air off the rack and vents it away reducing the need for additional cooling.

7. CRAC

Computer Room Air Conditioning (CRAC) has been the dominant way of cooling datacenters for decades. It can be extremely efficient although that depends on where you locate the units and how you architect the datacenter in order to take advantage of the airflow.

One danger with poorly placed CRAC units, as identified by The Green Grid, is multiple CRAC units fighting to control humidity when air is returned to them at different temperatures. The solution is to network the CRAC units and coordinate humidity control.

Effective placement of CRAC units is a challenge. When placed at right angles to the equipment, their efficiency drops away over time causing hot spots and driving the need for secondary cooling.

In high-density datacenters, ASHRAE and The Green Grid see within-row cooling (WIRC), which brings cooling right to the sources of heat, as imperative. WIRC also allows cooling to be ramped up and down to keep an even temperature across the hardware and balance cooling to workload.

If the problem is not multiple aisles but just a single row of racks, use open-door containment with WIRC or, alternatively, liquid-based cooling. This is where the doors between racks are open, allowing air to flow across the racks but not back out into the aisle. Place the cooling units in the middle of the row and then graduate the equipment in the racks, with the hottest equipment closest to the WIRC units.

For blade servers and HPC, consider in-rack cooling. This solution works best where there are workload-optimisation tools that provide accurate data about increases in power load, so that as the power load rises the cooling can be increased synchronously.

New approaches to CRAC are extending its life and improving efficiency. Dell uses a sub-floor pressure sensor to control how much air is delivered by the CRAC units. This is a very flexible and highly responsive way to deliver just the right amount of cold air and keep a balanced temperature.

Dell claims that it is also very power efficient. In tests, setting the sub-floor pressure to zero effectively eliminated leaks and, while it created a small increase in the power used by the server fans, it heavily reduced the power used by the CRAC units. Dell states that this was a 4:1 reduction. Dell has not yet published figures from the field to prove this saving, but it does look promising.

8. Workload driven heat zoning

Low, medium and high power-and-heat zones allow cooling to be effectively targeted based on compatible workloads. An example of this is the BladeRoom System, where the datacenter is partitioned by density and power load.
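The BladeRoom partitioning itself is not described in detail here, but the underlying idea can be sketched very simply: bin workloads into low, medium and high heat zones by their expected per-rack power draw, then target cooling per zone. The thresholds below are assumptions that simply echo the survey bands quoted earlier in this article.

# Hypothetical zoning sketch; thresholds mirror the <=5kW, 5-10kW and >10kW bands above.
def heat_zone(expected_kw_per_rack):
    if expected_kw_per_rack <= 5:
        return "low"
    if expected_kw_per_rack <= 10:
        return "medium"
    return "high"

# Assumed example workloads and per-rack power estimates.
workloads = {"file-and-print": 3.0, "email": 4.5, "virtualised apps": 8.0, "HPC cluster": 18.0}
print({name: heat_zone(kw) for name, kw in workloads.items()})
# {'file-and-print': 'low', 'email': 'low', 'virtualised apps': 'medium', 'HPC cluster': 'high'}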

 Effective Monitoring

Effective monitoring of the datacenter is critical. For many organisations, this is something that is split across multiple teams and that makes it hard to identify problems at source and deal with them early. When it comes to managing heat, early intervention is a major cost saving.

There are three elements here that need to be considered:

1. In rack monitoring.

2. Workload planning and monitoring.

3. Predictive technologies.

All three of these systems need to be properly integrated to reduce costs from cooling.

 In Rack Monitoring

This should be done by sensors at multiple locations in the rack: front, back and at four different heights. It will provide a three-dimensional view of input and output temperatures and quickly identify if heat layering or heat banding is occurring.

As well as heat sensors inside the rack, it is important to place sensors around the room where the equipment is located. This will show if there are any issues such as hot or cold spots occurring as a result of air leakage or where the air flow through the room has been compromised. This can often occur due to poor discipline in the datacenter where boxes are left on air tiles or where equipment has been moved without an understanding of the cooling flow inside the datacenter.
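As a minimal sketch of how that sensor layout can be used – the threshold is an assumption, not a vendor recommendation – heat layering can be flagged by comparing the bottom and top rear-of-rack readings:

# Illustrative check only; the 8C threshold is an assumed value.
def heat_layering(rear_temps_c, max_vertical_delta=8.0):
    """rear_temps_c: rack-rear readings ordered bottom to top (the four sensor heights)."""
    return (rear_temps_c[-1] - rear_temps_c[0]) > max_vertical_delta

print(heat_layering([30.0, 33.0, 36.0, 39.0]))   # True: a 9C bottom-to-top spread suggests layering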

Most datacenter management suites, such as CannonGuard, provide temperature sensors along with CCTV, door security, fire alarm and other environmental monitoring.

Workload Planning and Monitoring

Integration of the workload planning and monitoring into the cooling management solutions should be a priority for all datacenter managers. The increasing use of automation and virtualisation means that workloads are often being moved around the datacenter to maximise the utilisation of hardware.

VMware, HP and Microsoft have all begun to import data from Data Center Infrastructure Management (DCIM) tools into their systems management products. Using the DCIM data to drive the automation systems will help balance cooling and workload.
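None of those integrations is detailed here, so the following is only a hedged sketch of the principle: use DCIM-style rack data (power headroom and inlet temperature) to decide where an automation layer should place the next virtual machine. The data structure, threshold and figures are assumed for illustration.

# Hypothetical placement helper; 'racks' maps rack_id -> (power_capacity_kw, power_used_kw, inlet_temp_c).
def pick_rack(racks, vm_kw, max_inlet_c=27.0):
    """Choose the rack with the most power headroom whose inlet air is still within limits."""
    candidates = [
        (capacity - used, rack_id)
        for rack_id, (capacity, used, inlet) in racks.items()
        if capacity - used >= vm_kw and inlet < max_inlet_c
    ]
    return max(candidates)[1] if candidates else None

racks = {"A1": (8.0, 6.5, 24.0), "A2": (8.0, 3.0, 25.5), "B1": (8.0, 2.0, 28.0)}
print(pick_rack(racks, vm_kw=1.0))   # 'A2': most headroom with an acceptable inlet temperature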

Predictive Technologies

Computational Fluid Dynamics (CFD) and heat maps provide a way of understanding where the heat is in a datacenter and what will happen when workloads increase and more heat is generated. By mapping the flow of air it is possible to see where cooling could be compromised under given conditions.

Companies such as Digital Realty Trust use CFD not only in the design of a datacenter but as part of their daily management tools. This allows them to see how heat is flowing through the datacenter and to move hardware and workloads if required.

Conclusion

There is much that can be done to reduce the cost of cooling inside the datacenter. With power costs continuing to climb, those datacenters that reduce their power costs and are the most effective at taking heat out of the datacenter will enjoy a competitive advantage.

CAN0183