EcoCooling joins the Node Pole Alliance

EcoCooling, a leader in direct-air evaporative cooling, has joined the Node Pole Alliance, an active international network of over 80 world-leading knowledge partners coming together to build the data centres of the future.
The Node Pole region encompasses three municipalities in the far north of Sweden, just by the Arctic Circle, and has the potential to become a global hub for data traffic. This is mostly due to its reliable power infrastructure, an ample supply of low-cost renewable hydroelectric energy and low air temperatures ideal for natural cooling.
The Alliance members are companies from the technology and construction sectors who combine their knowledge and experience to build world-class data centres.
“We are very proud to have been able to join the Node Pole Alliance”, said Alan Beresford, MD at EcoCooling. “The direct-air evaporative cooling systems we have developed are ideal for the climate in the Node Pole region and make the most of the resources available.”
Air temperatures so close to the Arctic Circle are not only cool enough to make refrigeration in data centres redundant – they can even be too cold for the IT equipment. Some systems shut down if the temperature drops below 14 degrees Celsius. EcoCooling has designed patented control systems and attemperation processes to keep the cooling air within a tightly controlled temperature band – typically 18 to 21 degrees Celsius.
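For illustration, the attemperation idea reduces to a simple mixing calculation: a controller recirculates a fraction of the warm server exhaust back into the cold intake air until the blended supply air sits inside the target band. A minimal sketch in Python (hypothetical names and setpoints, not EcoCooling's patented control logic), assuming ideal mixing:

    def recirculation_fraction(outside_c, exhaust_c, target_c=19.5):
        """Fraction of warm exhaust air to blend back into cold outside air
        so the mixed supply air hits the target temperature.
        Assumes ideal mixing: T_mix = f * T_exhaust + (1 - f) * T_outside."""
        if exhaust_c <= outside_c:
            return 0.0  # outside air is already warm enough; no recirculation
        f = (target_c - outside_c) / (exhaust_c - outside_c)
        return min(max(f, 0.0), 1.0)  # clamp to a physically valid fraction

    # Example: -20C Arctic intake air, 35C server exhaust, 19.5C supply target
    print(f"{recirculation_fraction(-20.0, 35.0):.0%}")  # ~72% recirculated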
By joining the Node Pole Alliance, EcoCooling will work alongside some of the industry's most innovative companies, such as Hydro66, Vattenfall, Facebook, KnC Miner and ABB.

-ends-

About EcoCooling:
Established in 2002, EcoCooling is a UK based manufacturer of direct-air evaporative coolers.
http://www.ecocooling.co.uk/

High Levels of Connectivity Announced For London Gateway Data Centre

Unknown to the majority of the data centre community, the new Gateway Data Centre in West Thurrock, on the eastern edge of London, sits on top of a massive level of Internet connectivity.

The Thurrock area is thought by many in the industry to be served by only four carriers: BT, KPN, Cable & Wireless (now Vodafone) and Colt Telecom. However, the new 50MVA Gateway data centre sits a mere 20 metres from Fujitsu's data centre and its 6,000 km UK national fibre backbone.

The network gives direct connectivity through a diversely routed 'figure of 8' network through Birmingham and Leicester to Manchester, Southport and Leeds. This ensures that direct low-latency access can be immediately available (subject to contracts) to some 400 Tier-1 and Tier-2 carriers via London's Telecity (Harbour Exchange), Telecity (London East) plus Global Switch (London East). The network also gives direct access to peering exchanges LINX and LoNAP plus easy connection to AMSIX and NLix. High levels of network security are also available, with all Fujitsu routes classified for either IL3 or IL2 and suitable for a variety of financial, government and military uses.

Announcing the connectivity research, Charles Carden, a director of GVA Connect (the data centre specialist division of property agents GVA), said: "Not only are the connectivity options for the new Gateway data centre superb, we believe from our research that a number of further carriers are considering fibre digs into the area, which is earmarked to become the London East data centre hub, over the coming years.

"Diverse dark fibre routes are possible to the City of London, London's financial centre, with an estimated round-trip latency of just 0.19 to 0.2 milliseconds. The availability of BT, Colt, KPN and Cable & Wireless plus Fujitsu's IL3 and IL2 secure IP transit network gives access to some 400 possible carriers, and high-density computing capabilities (thanks to the 50MVA power potential) mean that the Gateway Data Centre is now demonstrated to be one of the most capable sites currently available."
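As a rough sanity check, round-trip figures like these follow directly from route length and the speed of light in glass (about two-thirds of its speed in a vacuum). A minimal sketch, assuming a hypothetical route length of around 20 km:

    C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum
    FIBRE_INDEX = 1.468               # typical refractive index of single-mode fibre

    def fibre_rtt_ms(route_km):
        """Round-trip time in milliseconds over a fibre route of the given length."""
        one_way_s = route_km * FIBRE_INDEX / C_VACUUM_KM_PER_S
        return 2.0 * one_way_s * 1000.0

    print(f"{fibre_rtt_ms(20.0):.2f} ms")  # ~0.20 ms, in line with the quoted 0.19-0.2 ms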

The Gateway facility is located close to the M25 London Orbital motorway, with easy physical and electronic access to the UK's financial centres in the City and London Docklands.

Gateway Data Centre consists of a 2.3 hectare site designed to have 8,000 square metres (86,000 sq. ft.) of white space with a gross internal floor area of 19,500 square metres (210,000 sq. ft.). With up to 50MVA of power potentially available, the Gateway Data Centre is ideal for high density as well as normal density computing uses and is only a few kilometres away from the New York Stock Exchange’s disaster recovery and European Hub data centre in Basildon.

The new Gateway Data Centre already has all necessary planning permissions and is a secure site within an existing trading estate. It can be rapidly delivered as 'Shell and Core' or 'Powered Shell', or it can be 'Fully Fitted' to customer requirements.
Full plans are available for the conversion of the existing building and these can be viewed by contacting Charles Carden at GVA Connect's London Stratton Street office, or by visiting www.gatewaydatacentre.co.uk

Photos

CGI illustration available at: http://turt.co/dcme31p [user: pics | pwd: pics]

Contacts for further editorial information
Charles Carden GVA
E: Charles.Carden@gva.co.uk
T: +44 20 7911 2529
or
Phil Turtle DataCenterIndustryPR
E: phil.turtle@turtleconsulting.com
T: +44 7867 780 676


UK Data Centre On The Market – Ideal For Overseas Entrant to UK

GVA Connect, the data centre specialist division of property agents GVA, has taken new instructions to market an existing 1.6 MVA data centre ‘Nottingham Portal’ with the potential for expansion to 18 MVA and 70,000 square feet of space – all of which is fully consented.

Announcing the availability, GVA Connect director Charles Carden said, “Nottingham Portal data centre is just one mile from Nottingham city centre and nine miles from J24 of the M1 motorway. The site is centrally located in the UK and, having been originally developed by British Rail in the 1980s for its own IT needs, is alongside what was the British Rail National Fibre Network and as a result is extremely well connected.”

Phase 1 of the data centre is partly operational with customers occupying around 3,000 sq. ft. and producing an existing income stream.
Three further data halls have already been created meaning that some 8,775 sq. ft. of additional technical space can be delivered with 1.6 MVA of reserved power via the site’s dedicated 2MVA, 11 kV transformer.
Expansion

Planning consent exists to allow the existing facility to be extended to some 70,000 sq. ft. of gross space, with Phase 1 having been altered to allow this work to go ahead with ease. Phase 2 of Nottingham Portal data centre has been designed as a high density Tier 3 facility providing an additional 35,000 sq. ft. of technical space and allowing power and cooling in N+1, N+N or 2N configurations to suit customer requirements.
Dual 18 MVA diverse power feeds have already been surveyed and would be available within 18 months. Additional land can also be made available adjacent to the data centre site if required. Said GVA Connect's Carden, "This site is ideally placed for an out-of-London UK data centre with readily available power, connectivity and land. It's in a low-risk area of a major city in the Midlands of England.

 

"With existing customers and an existing income stream, it would make an ideal low-risk, low-cost expansion option for either an existing UK data centre operator or an overseas entrant to the UK market." Full details are available for the existing data centre and the consented plans for the expansion. These can be viewed by contacting Charles Carden at GVA Connect's London Stratton Street office, or by visiting http://turt.co/dcme42

-Ends-

Photo:   CGI illustration of the Nottingham Portal data centre is available at: http://turt.co/dcme42p   [user: pics | pwd: pics]


 

Research shows Data Centre Space Take-Up Jumps 18 Per Cent

 

Gateway Data Centre Exterior

Research just released by GVA Connect, the data centre specialist division of property agents GVA, showed UK take-up of data centre space amounted to 830,000 square feet (gross rentable area) in calendar year 2013.

 

“This represented an 18 per cent jump in data centre space take-up over the previous year,” said GVA Connect director Charles Carden, adding that the company expected take-up to remain at this level during 2014.

 

Confirming London’s position as a leading international data centre hub, Carden revealed that over three quarters of the 2013 data centre take-up was within the so-called London Synchronous Locations (locations which deliver round-trip latency of under three milliseconds).  The second most popular location was Manchester with the remainder of take-up spread across the UK.

 

Said Carden, "Not only does this research indicate a very healthy increase in data centre activity, the average power requirement – at over 500kW – demonstrates that the majority of these new projects are of a very significant size."

 

GVA Connect sees these figures as a very sound indicator of continued occupier confidence, declaring that this follows on from the steep rise in both the number and size of enquiries it reported in Q4/13.

 

"We continue to see an increase in serious enquiries from US operators for space both in the UK and right across EMEA (Europe, Middle East and Africa)," said Carden.

 

Availability

Availability of data centre space in the London Zone is illustrated by new facilities such as Gateway in London's West Thurrock (gatewaydatacentre.co.uk). This has the ability to deliver up to 86,000 square feet of data halls, 50MVA of diverse power, excellent connectivity and 0.2ms round-trip latency to the City of London. Similar opportunities currently exist at both London's Bracknell and Perivale data centres (bracknellandperivaledc.com) which, like Gateway, are available as Shell and Core or Powered Shell, or can be delivered as Fully Fitted data centres to specific customer requirements.

 

Also currently available are self-contained opportunities within existing established London region data centres such as 30,000 to 50,000 sqft of space at Heathrow in West London with 41MVA of power and 50,000 sqft at Croydon in South London with dual 10MVA potential. (gvaconnect.com/heathrow   gvaconnect.com/croydon ).

 

In the regions too there are data centre opportunities such as the Portal Data Centre in Nottingham with excellent connectivity, existing clients on lease plus planning consents to expand to 70,000 sqft of gross space with power expandable to over 10MVA via diverse feeds.

 

Rising trend

GVA Connect also reports a further rising trend in interest for data centre space from corporate users, cloud service providers and media-related businesses – indicating a relaxation in IT budgets as we move into 2014.

 

Carden added, "We anticipate enquiry and activity levels remaining steady at this higher level throughout 2014, with transactional activity driven by USA operators in Q3/14 and Q4/14. With GVA Connect being the leading data centre property agency, we are ideally placed to assist with both lettings and acquisitions during what promises to be a high-activity year."

 

 

UK Government Breathes Life Back Into UK’s Essential Data Centre Industry

Steven Norris, President of Data Centre Alliance

The UK Government has finally recognised the economic importance of the data centre sector – one of the few industries where UK plc has a world lead – by including it for the first time ever in a Government announcement.

(Data centres are the giant computing factories that drive industry, commerce, the cloud and social media.)

 

Said Data Centre Alliance President and former Minister of Transport Steven Norris, “The Chancellor of the Exchequer finally recognised Data Centres in his Autumn Statement as the major industry and economic wealth creator that they are, by removing the punitive tax regime that had been making the UK non-competitive in this business and forcing companies to off-shore their data centres.

 

"Britain is a world leader in the Data Centre industry and the Climate Change Levy had been wrongly applied to it because the Government, and indeed the public, were totally unaware of its existence! That is something we at industry body the Data Centre Alliance (DCA) have been working hard this year to change. The Chancellor's Autumn Statement, plus the winning of the first ever European strategic research funding for data centres, now demonstrates that the DCA has succeeded in bringing this previously unseen industry onto the political agenda."

 

Norris went on to explain that data centres are buildings full of computer servers that power things like Facebook, Twitter, e-mail, the banks and all big businesses. Airline booking systems, air traffic control, the NHS, city traffic lights and in fact just about every part of our lives are controlled by these massive data centres.

 

"The UK was one of the very first countries to develop data centres out of our world-class telecommunications and mainframe computing industries," Norris said, "and we were at serious risk of losing what are effectively the UK's only remaining 'factories'."

 

We refer to them as 'factories' because they store raw data and turn it into information. They are massive – often the size of five or six football pitches – and can consume as much electricity as a small city. "What the Government is now starting to understand is that it is essential to protect and grow one of the few industries where UK plc still has a world lead," said Norris, "and we are delighted that DCA's efforts in representing the industry to them have, in such a short period of time, had such significant results."

 

As recently as June 2013, when the DCA, in conjunction with the UK's foremost political magazine the New Statesman, produced the UK's first mainstream exposé of the hitherto 'hidden world' of data centres, no Government Minister could be found to write a contribution, because they were unaware of this massive sector.

 

The publication of that Special Report and a personal briefing by the DCA to Greg Barker, Minister of State at the Department of Energy and Climate Change, brought about an instant interest. So much so that by the time of the second DCA/New Statesman Data Centre Special Report in August 2013, Minister of State for Universities and Science David Willetts wrote a leader article recognising the importance of the sector and noted that Government Ministers Vince Cable, Michael Fallon and Ed Vaizey had all now been to visit data centres!

In the New Statesman/DCA special report, Willetts wrote:

“Data centres are a crucial part of that infrastructure, and are an area that the government needs to understand better. They are the physical, tangible manifestation of the somewhat invisible and ethereal concept which is the internet. They are absolutely fundamental to a successful and vibrant information economy in the UK, supporting some of our biggest global companies, and our research institutions. London’s successful financial sector could not function without the state-of-the-art data centres in areas like Docklands, enabling computer-based and low-latency trading.


"Moreover, in the UK we are good at putting together data centres – and this is expertise we can export to the world at a time when global spending on data centres is predicted to reach $149bn next year."

Concluded the DCA's Executive Director Simon Campbell-Whyte, "We are delighted that the UK Government has now recognised the imperative to grow and support this sector. And, although as an organisation we represent the industry globally, we believe that this is a major milestone in the success of UK plc and are proud that we were able to achieve this on behalf of our members."

 

“It is of course only a first step, and we are standing by to assist UK Government and Governments worldwide as they come to terms with understanding and supporting the digital factories that now power the global information economy.”

Mayor of London congratulates Virtus Data Centres on commencement of works at LONDON2

Boris Johnson at City Hall

London, 5 November 2013 – Virtus Data Centres Ltd (Virtus), London's flexible and efficient data centre specialist, has announced its plan to expand its London data centre portfolio and the start of construction of its second site in West London, LONDON2.

Virtus' new flagship data centre is designed to meet the growing demand for scalable, reliable, on-demand colocation services and will be the first in London to deploy a groundbreaking new fresh-air, evaporative cooling technology that dramatically decreases energy consumption, bringing site power usage effectiveness (PUE) to below 1.2 and delivering substantial savings to Virtus' clients.

 

By using this evaporative cooling technology together with solar panels, ground water from its own well, chimney racks for heat extraction, highly efficient UPS systems and other innovative technologies, Virtus LONDON2 will be the most energy-efficient data centre in London. It will reduce its environmental impact even further by using 100% green power from renewable sources and heat pumps to recirculate heat generated by the IT equipment into communal areas.
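For readers unfamiliar with the metric, PUE is simply total facility power divided by the power reaching the IT equipment, so a PUE below 1.2 means under 0.2 W of cooling and other overhead for every watt of IT load. A minimal illustration with assumed figures:

    def pue(it_kw, overhead_kw):
        """Power usage effectiveness: total facility power over IT power."""
        return (it_kw + overhead_kw) / it_kw

    # Hypothetical 1 MW IT load
    print(pue(1000, 180))  # 1.18 -- meets a sub-1.2 target
    print(pue(1000, 500))  # 1.50 -- an assumed figure for a refrigeration-cooled site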

 

Mayor of London Boris Johnson welcomed the new investment in Virtus LONDON2 and commented “I’m delighted that Virtus has chosen to invest in London again. London is leading the global digital technology revolution and is the world’s leading technology hub with great British technology, creativity and innovation.  Up and coming companies like Virtus are at the heart of that whole explosion of talent in London and I’m delighted to see them using so many state of the art ways of saving energy and improving efficiency.”

 

Steve Norris, Chairman of Virtus Data Centres and himself a former Mayoral candidate, added: "Virtus is one of London's technology success stories, growing fast in the three years since we opened the doors of our first site in Enfield. That growth is fuelled by the boom in technology, media and mobile activity in London. We wanted our LONDON2 facility to be right at the cutting edge of environmental efficiency and we're proud of what we've started. We're also looking at other sites for LONDON3. Virtus' on-going commitment to innovate in line with the way businesses need data centre space ensures that our customers can rely on maximum flexibility, quality, service and value."

Virtus Data Centres sponsors IP Expo – The ONE place for ALL the big questions

Virtus, the provider of London's most flexible and efficient data centre solutions, announced today that it is sponsoring IP Expo at London's Earls Court on the 16th and 17th of October.

 

Neil Cresswell, CEO of Virtus, commented: "I am delighted that Virtus is sponsoring IP Expo, one of the major events bringing together all the players of the modern interconnected digital marketplace. Virtus understands the needs of the Cloud world and is committed to developing flexible colocation products to help start-up and established Cloud and IT service providers flourish easily. We see an interconnected marketplace as being essential in reducing costs for end users and believe flexibility, efficiency and agility are key in this fast-changing IT environment."

 

To address the modern requirements of Cloud and IT service providers, Virtus has developed CoLo-on-Demand and Connectivity-on-Demand products, designed to give customers the flexibility to purchase colocation services and bandwidth on an on-demand basis that matches their on-demand business requirements. For the first time, Cloud and IT service providers can flex up or down on space or power consumption by giving only a day's notice. They pay for what they actually use, so their costs are in line with their revenues.
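In effect, 'pay for what you actually use' means prorating the colocation charge by the days each reserved capacity level was held. A minimal sketch of that idea (hypothetical rates and data model, not Virtus' actual billing system):

    from datetime import date

    RATE_PER_KW_DAY = 1.50  # hypothetical price
    # (effective_date, kW reserved) -- each change made on at least a day's notice
    capacity_changes = [(date(2013, 10, 1), 40), (date(2013, 10, 16), 60)]

    def period_charge(changes, period_end):
        """Sum kW-days held at each reserved level across the billing period."""
        kw_days = 0
        boundaries = changes[1:] + [(period_end, None)]
        for (start, kw), (end, _) in zip(changes, boundaries):
            kw_days += kw * (end - start).days
        return kw_days * RATE_PER_KW_DAY

    print(period_charge(capacity_changes, date(2013, 11, 1)))  # 2340.0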

For more information on the CoLo-on-Demand and Connectivity-on-Demand products, or to talk about Virtus' exciting new flagship data centre in Hayes, come and see us on stand G31.

 

=ends=

About Virtus

Founded in 2008, Virtus is the provider of the most flexible, high-efficiency data centre solutions for cloud/managed service providers and corporate end users in London.

 

We exist to help our clients transform their businesses with flexible, modern and efficient data centre and connectivity solutions, which can be delivered in just a few weeks. Virtus’ unique ability to scale data centre services and connectivity bandwidth up or down on demand ensures clients benefit from the lowest TCOs available in London.

 

The high quality and resiliency of Virtus’ facilities, operations and connectivity services are second to none and ensure reliability, performance and security of your IT. Our environmentally efficient and connectivity-rich services are delivered from centres that we have built, on land that we own, located in prime but cost effective locations close to the centre of London.  Our solutions are available in innovative, flexible commercial packages. Our centres are becoming the heart of the power and connectivity driving the growing cloud and IT services community in London as well as financial, media and mobile businesses.
By owning and designing our data centres we maintain an emphasis on high power density and environmentally friendly cooling technology to deliver the lowest PUEs in outer London (less than 1.2), maximising efficiency and minimising cost and environmental impact for our clients. This efficiency, when combined with our unique locations and flexibility, drives significant TCO savings through reductions in power costs, optimisation of powered space, flexible contract terms such as 'pay as you use' or 'pay as you grow' models, and savings on connectivity due to geographical proximity to central London (less than 0.2 of a millisecond). Through these innovative offerings, Virtus is challenging the traditional London data centre landscape.

 

Our resilient and secure Tier 3 facilities have 100% uptime track records and SLAs, and are designed to be flexible for handling a broad range of requirements from a rack to a customised branded hall, high density, low density or a combination of both, for a day or a decade, for businesses of all sizes and in flexible and affordable packages.

 

Today we continue to innovate in-line with and beyond the way businesses of all types buy data centre and connectivity services, to ensure we exceed our clients’ expectations in quality, flexibility, service and value.

 

For more information please go to: www.virtusdatacentres.com or contact us on twitter: @VirtusDCs.

ABB Joins Data Centre Alliance – To Contribute To ‘The Good Of The Industry’

Simon Campbell-Whyte, Executive Director at Data Centre Alliance

ABB, a world leader in power and automation technologies, has joined industry representative body the Data Centre Alliance (DCA) at its highest level of membership – as a Platinum Partner.

 

Explaining why the company has joined, ABB's head of the data centre sector in Europe, Ciaran Flanagan, said, "We at ABB see the Data Centre Alliance as a trusted advisor to the datacentre industry in terms of standards, innovation and research for the future. It is also rapidly becoming 'the voice' of the industry. Membership represents a great opportunity for ABB to better understand end-user requirements and focus our research and development appropriately for our mutual benefit.

 

"We look forward to working with our peer vendors and end-users to discuss future technologies and trends – how these might help the industry better contribute to the common good, and how to develop the human capital required to address the challenge.

 

"The format and objective of the Data Centre Alliance, in our opinion, promotes the sharing and benchmarking of concepts and performance amongst the members."

Flanagan continued by explaining that ABB had decided to join at the Platinum Partner level because: "We clearly see and understand the value of the Data Centre Alliance to the industry and we want to participate at a significant level."

 

ABB’s major focus will be on contributing to the DCA, he said. “We intend to share ABB’s vision for the future of the sustainable data centre. We will support the DCA by sharing new understanding that results from our corporate R&D efforts and we will invite the DCA members to engage with us as we assess new opportunities for improving datacentre performance.

 

“Our global network of data centre professionals can be leveraged by the DCA to share experience, best practices and standards from our customers who are willing to engage,” said Flanagan.

 

Commenting on ABB's membership, Simon Campbell-Whyte, executive director of the Data Centre Alliance, said, "We are delighted to have ABB on board and that their focus is on contributing for the greater good of the industry. That exactly mirrors the aims and objectives of the Data Centre Alliance."

 

He continued, "I know that by 'giving' in this way they (and many other members who give their time and efforts so selflessly) will reap significant rewards in the long term, both through recognition in the industry and by being able to profitably align their R&D programmes to the true energy needs of the global customer base."

 

ABB’s success in world markets is attributed to its very strong focus on Research and Development with 17 corporate research centres and an unflinching determination to invest in R&D whatever the market conditions.

 

ABB is able to boast that many of the technologies underlying our modern society – from high voltage power transmission to revolutionary ship propeller systems – come from its R&D and commercialisation efforts.

 

The most recent such technology is a completely new DCIM (data centre infrastructure management) solution called Decathlon®. This has been deployed by Green Datacenter AG, initially at its Zurich West data centre, and will shortly be installed company-wide.

 

Concluding, ABB's Ciaran Flanagan said, "I very much hope that all of the major players in the sector who've not yet joined the DCA will take our example and join – to contribute to the greater good of the industry."

 

To find out more about the Data Centre Alliance, or to join, please visit www.datacentrealliance.org/join

Virtus Data Centres starts construction of its new flagship facility in London

Virtus LONDON2 Data Centre – Corner View

Virtus, the provider of London's most flexible and efficient data centre solutions, has announced that construction work has commenced on LONDON2, its new flagship data centre in West London, designed to meet the growing demand for scalable, reliable, on-demand colocation services. The data centre will deploy a groundbreaking new cooling technology which will dramatically decrease energy consumption, bringing site power usage effectiveness (PUE) to below 1.2 and delivering substantial TCO reductions to Virtus' clients.

Located in Hayes, just 17 minutes by train from Paddington and less than a mile from Junction 3 of the M4, the site will deliver 11.4MW of IT power and 65,000 square feet of net customer data centre space. In addition, the site has extensive premium client office space, board rooms and meeting facilities, particularly aimed at the growing community of Virtus' clients such as IT Managed Service Providers (MSPs), systems integrators, digital media, financial institutions and enterprises. With 20MVA of power infrastructure already delivered to the site, the facility is scheduled to be ready for client occupation in the summer of 2014.

By using the most advanced fresh-air, evaporative cooling technology, solar panels, ground water from its own well, chimney racks for heat extraction, highly efficient UPS systems and other innovative technologies, Virtus LONDON2 will be the most energy efficient data centre in London. The LONDON2 data centre further reduces environmental impact by using 100% green power from renewable sources and heat pumps to recirculate heat generated by the IT equipment into communal areas.

Neil Cresswell, Virtus’ CEO, commented: “I am delighted that after a thorough design and planning process we have started construction of one of the most advanced data centres in the UK. Virtus already leads the way in bringing to market the most flexible data centre solutions available, thanks to the quality of our sites and services. With the range of energy-saving technologies we are putting in place we will now lead in being able to deliver the most cost-efficient and environmentally friendly data centre solutions, offering significant TCO reductions to our clients in power, cooling, connectivity and services charges.”

The facility, designed to deliver high quality customer solutions, will have six data halls, all capable of being subdivided allowing clients to have anything from a cabinet in a shared space to their own suite or data hall with dedicated power and cooling.

“Another advantage of our new data centre is its flexibility and the cost efficiencies gained by deploying ultra-high power density cabinets of up to 40kW, which can be clustered or installed next to standard power density cabinets without clients having to invest in costly in-row cooling. Clients requiring such ultra-high power density installations will achieve significant savings at LONDON2,” said Cresswell.

The site will naturally be highly secure, away from main roads and behind 5-metre security fences, with 24/7 on-site security, technical support and monitoring centres to maintain Virtus' 100% uptime record. It will also incorporate innovative new online real-time monitoring dashboards and self-service tools, for clients to use either remotely or in one of the customer-friendly dedicated rooms or café areas that are being designed into the facility.

The LONDON2 data centre is located in a fibre-rich area and, when opened, will immediately offer a wide choice of carriers (more than 10, of which 4 are Tier 1) offering diverse connectivity services enabling low-cost, low-latency, high-speed and high-quality access to different networks, internet exchanges and customer locations globally. In addition, the high-speed, low-latency connections between Virtus' LONDON1 data centre in Enfield and its LONDON2 data centre in Hayes will allow them to operate as one common marketplace, making it easy and cost-efficient for end user customers, MSPs and network services providers to serve each other's needs while expanding their own businesses.


New Gambling Tax Legislation Signals Return To The UK For ‘Offshore’ Data Centres

Steven Norris, Chairman, Virtus Data Centres

The UK Government is introducing a new law which will remove the tax advantage from betting, gaming and online bingo companies operating their data centres offshore in so-called 'tax havens'.

"The effects will be more far reaching than these companies realise," said Steven Norris, former MP and chairman of Virtus Data Centres. "From 1st December 2014, not only will UK betting and gaming operators lose their £300 million a year tax advantage, they will soon realise that they are paying way over the odds for electricity, Internet bandwidth, other services and staff airfares for the running of their data centres."

 

Norris, who is also president of the Data Centre Alliance, went on to explain that with these extra costs – and the inconvenience of having their data centres in faraway places like Gibraltar, the Isle of Man and the British Virgin Islands – gaming companies will quickly feel the pinch. "These excessive costs will seriously bite into their bottom line once the tax advantage disappears.

"We forecast that they will soon wish they were enjoying the much cheaper energy and bandwidth costs (plus the convenience of being close to home) of having their data centre located in a state-of-the-art facility like Virtus' London data centres," he said.

 

The change to tax law is being introduced because the UK government has identified that it is losing some £300 million in tax revenues from UK gaming companies, many of which have relocated their data centres to low-tax economies.

 

Under the current law the sales transaction is deemed to take place in the data centre, hence the current tax advantage from having the data centre located in a low tax economy.

 

The new law will redefine the ‘legal place’ at which the transaction is deemed to take place. It will now be the address of the UK gamer or gambler’s ISP (Internet service provider), which means that it will be subject to UK taxation no matter where in the world the data centre is located.

 

Neil Cresswell, CEO of Virtus, commented, "All gaming company CEOs and CFOs will already be aware of the tax situation, but few yet realise the double hit from also needing to rapidly cut their excessive datacentre costs. Bringing gaming data centre operations back to London is the simple, quick option."

 

Virtus London Data Centres make an ideal location for gaming and gambling companies to onshore their data centre operations. The Virtus LON1 facility, for example, is a very high spec Tier 3 data centre with low PUE and a 100 per cent uptime record. It is only a few minutes off the M25 at Junction 25, making it highly accessible for IT staff.

 

LON1 offers cheap and fully ‘green’ electricity (thereby avoiding CCL – the climate change levy tax). Also, being carrier-neutral, it has a large number of highly competitive carriers present including Level(3), Virgin, Geo, Pacnet and C4L. Virtus LON1 also has fast-fibre connectivity to all corners of the UK via Openreach.

 

Cresswell concluded: "Gaming company CEOs and CFOs have a lot to think about in the coming year. We at Virtus are expert at migrating data centre operations into our data centres without interruption to clients' 24×7×365 online businesses and so are ideally poised to help.

"We've migrated dozens of customers and host a number of gaming companies in what we believe are the most modern, flexible data centres in London."

 

For more details please visit www.virtusdatacentres.com

Virtus Data Centres Goes Greener

Virtus LON1 Data Centre

Virtus Data Centres ('Virtus'), London's most modern and flexible data centre specialist, announced today that it has secured a new power contract with E.ON for its LON1 London data centre facility – using only energy generated from fully renewable sources.

As part of its £1bn renewable energy programme, E.ON will be supplying Virtus with energy from within its portfolio of renewable sources, including onshore and offshore wind farms, biomass power stations and wave energy projects.

This means that all of the colocation customers within the Virtus LON1 facility will benefit from ‘green’ energy and will not be liable to pay the Climate Change Levy (CCL).

"Securing a fully renewable energy contract represents excellent value for our tenants and reinforces Virtus' commitment to sustainability," commented Neil Cresswell, CEO at Virtus Data Centres. "Combined with our eco-engineering principles, the new contract further enhances the environmental credentials of our LON1 data centre. I am delighted that Virtus will now be providing customers with 100 per cent renewable energy. This further delivers on our commitment to providing sustainable and environmentally efficient solutions which not only reduce the carbon footprint of our customers but provide the lowest costs for customers seeking high quality, flexible, carrier-neutral data centre solutions in London."

The Virtus LON1 Tier 3 data centre features dual-resilient 8 megawatt 11kV diverse electricity feeds. The site is located in Enfield, North East London, and is perfectly positioned for synchronous data replication to inner London locations, with a 0.17 millisecond round-trip latency – making it technically and physically close to both the City of London and Canary Wharf.

The LON1 data centre is also host to the CoLo-on-Demand service recently launched by Virtus, which makes data centre colocation readily accessible and affordable for Cloud Service Providers.

The Climate Change Levy (CCL) is an energy tax to encourage business users to become more energy efficient, and to reduce their carbon dioxide emissions. Opting to use energy from a low CO2 source as Virtus has now done – such as wind, solar, geothermal, landfill gas or Good Quality CHP (Combined Heat and Power) – gives exemption from the levy to customers of Virtus’ already highly energy efficient LON1 data centre.


 

Out Of Work Graduates Go To High Tech Bootcamp In Search For Jobs

Data Centre Alliance Bootcamp

The plight of unemployed graduates in the UK, particularly in London, has been highlighted in the press and on the national news.

Bizarrely, the UK's major high tech industry – data centres – has been having major difficulties finding suitable recruits to work in these 'factories of the future', which are often the size of five or six football pitches and packed with tens of thousands of computer servers.

Today sees the start of Data Centre Bootcamp, which aims to help out-of-work graduates and forces-leavers find work in this exciting industry.

As well as tens of thousands of computer servers, data centres also contain massive electrical and mechanical installations, with generators as big as a ship's engine and an amazing array of industrial-scale pipework.

A medium-size data centre can use as much electricity as a small city – yet data centres can be twice as efficient as company server rooms.

A very wide range of skills is needed, from electrical and mechanical engineering to IT and sales.

Probably the fastest growing area of the UK economy – and behind almost everything we do in today's digital world – data centres are almost the only 'factories' remaining in the UK economy.

And they're absolutely critical: everything from airline booking systems and air traffic control to traffic-light phasing, Facebook status updates, tweets, e-mail, supermarket tills, stock control, Amazon and e-commerce runs on them. In fact, just about every business you can think of now relies upon data centres for its operation.

“Amazingly,” says Simon Campbell-Whyte, executive director of international industry body the Data Centre Alliance, “the average age of people in the data centre industry is fifty-something and there’s a major skills shortage coming in this vital industry.

“We’ve worked with our many Data Centre Operator members to come up with ‘Data Centre Bootcamp’ which started today with TV news coverage by ITN. We hope this Bootcamp will give many unemployed graduates, and some of the highly able people now being forced out of our armed forces, the extra skills they need to become credible interview candidates for data centre employers.”

Today's first pilot of Data Centre Bootcamp was devised by the Data Centre Alliance and is being run at the University of East London's Docklands campus.

Said Campbell-Whyte: "The Data Centre Bootcamp is free to the attendees thanks to the sponsorship of training company C-Net and of two of London's biggest data centre employers: Telecity and Telehouse.

“Both Telecity and Telehouse run massive data centre complexes in Docklands and are hoping that at the end of the Bootcamp they will have some of their best interview candidates in years.”

For the members of the Data Centre Alliance (which represents individual data centre professionals and equipment manufacturers as well as the data centre operators), the expectation is that the pilot 10-day intensive will turn most of the 21 attendees into highly employable recruits.

If it proves as successful as expected, Data Centre Bootcamp will be run on a much larger scale in London, throughout the UK, Europe and the Far East.

The 21 'Bootcamp-ees' on today's pilot are mostly out-of-work Londoners, including graduates of UEL, Queen Mary and Middlesex Universities. Three are forces leavers, and one is a PhD student from Leeds who sees the Data Centre Bootcamp as her best chance of getting into this exciting and challenging industry.


Savvis Earns Silver CEEDA Honor for Data Center Energy Efficiency

DCProfessional Development today announced that BCS, the Chartered Institute for IT, has awarded Savvis, a CenturyLink company and global leader in cloud infrastructure and hosted IT solutions, a silver Certified Energy Efficient Datacenter Award for its LO3 London Docklands data center.

Savvis is the seventh organization in the world to demonstrate its leadership in sustainability and data center energy efficiency by earning a CEEDA honor, which is administered by DCProfessional Development on behalf of BCS.

“We are proud to meet the comprehensive, rigorous standards of the CEEDA independent assessment program,” said Drew Leonard, vice president, colocation product management, at Savvis. “This recognition speaks to Savvis’ dedication to enhancing the energy efficiency of its global data centers and the commitments of our people, who have worked diligently to optimize our operations for providing sustainable, secure and agile IT infrastructure solutions to our clients around the world.”

CEEDA provides evaluated organizations with an independently assessed and audited appraisal of the extent to which their data centers adhere to best practices in energy efficiency. Renewed biennially, the assessment has been developed in line with the EU Code of Conduct for energy efficiency in data centers, based on globally recognized best practices in data center engineering, IT infrastructure, monitoring and management.

Under the program, DCProfessional Development assesses participating data centers for gold, silver or bronze certification. The assessment includes a comprehensive report combining a description of the performance of the best practices measured, a set of benchmarking tools and a roadmap for further improvement.

Savvis’ LO3 Docklands data center, which opened in June 2012, is a 1.5 megawatt facility with 11,000 square feet of raised floor space located in London’s prime financial district. It supports cloud services, hosted IT solutions and colocation for a range of businesses in the financial service, consumer brand, government and other sectors. It is one of more than 50 data centers Savvis operates worldwide.

DCProfessional Development-appointed assessor John Booth evaluated the facility against CEEDA energy-efficiency best practice criteria, commenting in his final appraisal report:

“The LO3 Docklands data center is an excellent example of a facility where the latest energy efficiency design measures and the on-going efforts by facilities and management to fully comply with the EU Code of Conduct and CEEDA – as well as go above and beyond – are to be applauded. Savvis’ commitment to energy efficiency and carbon reduction is clear to see, as well as its commitment to [its] customers’ sustainability and energy efficiency efforts, and it is refreshing that the performance of the data center is seen as important within senior management.”

In addition to Savvis, other organizations that have received a CEEDA include: ARM, Fujitsu, the Co-Operative Group, the University of St. Andrews, the Wellcome Trust Sanger Institute and Westpac Bank.

To find out more about CEEDA contact d.carter@dc-professional.com or visit www.dc-professional.com

=ends=

About DCProfessional Development

DCProfessional Development’s mission is to provide the global data center community with the best informed, most coherent and accessible learning, development and assessment services, thereby helping organizations reduce downtime, increase productivity and enhance energy efficiency.

For more information, visit www.dc-professional.com

About BCS

Our mission as BCS, The Chartered Institute for IT, is to enable the information society.  We promote wider social and economic progress through the advancement of information technology science and practice.  We bring together industry, academics, practitioners and government to share knowledge, promote new thinking, inform the design of new curricula, shape public policy and inform the public.

Our vision is to be a world-class organization for IT. Our 70,000 strong membership includes practitioners, businesses, academics and students in the UK and internationally. We deliver a range of professional development tools for practitioners and employees.  A leading IT qualification body, we offer a range of widely recognized qualifications.

For more information, visit www.bcs.org

 

Notes To Editors

For further information on the report series or about DCProfessional Development

Please contact:

Nick Morris, DC Professional Development

Tel:    +44 20 7426 4817

Email:  nick.morris@DatacenterDynamics.com

OR

Phil Turtle, DataCenterIndustryPR – Turtle Consulting

Email phil.turtle@turtleconsulting.com

Tel: +44 7867 780 676

Demand Set to Outstrip Supply in Latin American Colocation Data Center Market

Market Ripe for New Entrants, says DCD Intelligence

Demand for colocation data center facilities in the Latin American region looks likely to outstrip supply according to new research released by DCD Intelligence today.

The new report, ‘Latin American Colocation’ (http://turt.co/dcd46) concludes that there is room in the market for new entrants, especially in cities outside of the major hubs.

 

Most of the countries in Latin America are classed as emerging markets in terms of data center market growth, and, in common with other emerging markets such as those in Asia Pacific, colocation is now seen as a viable option for businesses looking to outsource their data center requirements. In fact, a larger percentage of data center white space is outsourced in the LatAm region than in many western countries.

 

According to Nicola Hayes, managing director at DCD Intelligence, "What we see in the Latin American markets is that companies are far less reluctant to outsource data center requirements than has been the case at the same stage of market development in other regions. In previous years the colocation market in Latin America was hampered by a lack of suitable pure colocation space, but this has changed over the past two years with a greater variety of stock now available."

 

"2012 saw a significant amount of new build plus expansion to existing facilities in many of the countries researched and – whilst Brazil continues to offer the largest amount of colocation space and has the largest number of providers across the region – other countries are gaining momentum. For example, Colombia witnessed the highest number of new entrants to the market and Chile's growth rate in terms of available space has overtaken that of Mexico," Hayes reported.

 

Chart here: http://turt.co/dcd46p2 [user: pics | pwd: pics]

Caption: New Colocation Space Latin American Markets

 

Although supply is increasing, there is still room for new entrants – particularly as demand is rising not only in the major cities where the majority of space is located but also in secondary locations.

 

The Latin American Colocation report also identified that providers with a regional rather than single country presence are for the most part international providers rather than native Latin American companies.

 

“There are indications that some of the larger country providers are looking to establish a presence in neighbouring countries in order to capitalize on opportunities outside of their own markets but at present ‘regional’ coverage is the domain of the large international players,” Hayes concluded.

 

For more information about the report Latin American Colocation please visit http://turt.co/dcd46.

 

About DCD Intelligence

DCD Intelligence is a specialist research and analysis company covering the Data Center, Telecoms and IT sectors. With the global reach of parent company DatacenterDynamics, DCD Intelligence is uniquely placed to offer professionals in these sectors holistic yet statistically sound business intelligence, whether in the form of research reports, data tools or fully bespoke projects.

 

DCD Intelligence is committed to basing its analysis on stringent research techniques accepted by research communities worldwide. An understanding of the process we employ gives our client base confidence in the reports, studies and bespoke analysis produced by our team of respected researchers and analysts.

 

About DatacenterDynamics

DatacenterDynamics is a full service B2B information provider at the core of which is a unique series of events tailored specifically to deliver enhanced knowledge and networking opportunities to professionals that design, build and operate data centres.

 

With 50 established annual conferences in key business cities across the world, DatacenterDynamics is acknowledged as the definitive event where the leading experts in the field share their insights with the top level datacentre operators in each market.

 

In 2012, over 28,000 senior data center professionals attended a DatacenterDynamics event, creating the most powerful forum in the industry today.

 

DatacenterDynamics is renowned for its proximity to the data center market in all its event locations, seeking to establish one on one relationships with every professional with whom we engage. This personal touch coupled with a deep understanding of the market mechanics makes DatacenterDynamics much more than just a B2B media provider, which is highlighted by the number of long-term relationships that have been forged internationally with end-users, vendors, consultants, governments and NGOs alike.

 

Data Centres Could Experience 30 Per Cent More Failures as Temperatures Increase

Alan Beresford, EcoCooling Managing Director

Many data centre operators have been increasing the operating temperature in their data centres to reduce the massive costs of cooling. But, warns Alan Beresford, technical director and managing director at EcoCooling, they run the risk of significantly more failures.

ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) is generally considered to set the standards globally for data centre cooling. A few years ago it relaxed its recommended operating range for data servers from 20-25C (Celsius) to 18-27C.

"For decades," said Beresford, "data centres have operated at 20-21C. With the relaxation in the ASHRAE 2011 recommendation, plus the pressure to cut costs, data centres have begun to significantly increase the 'cold aisle' temperature to 24-25C and in some cases right up to 27C.

"But many of them have not taken into account the study of server reliability detailed in the ASHRAE 2011 Thermal Guidelines for Data Processing Environments – which predicts that if the cold aisle temperature is increased from 20C to 25C, the level of failures increases by a very significant 24 per cent. Upping the temperature to 27.5C increases the failure rate by a massive 34 per cent."

Warns Beresford: “And if the air temperature going into the front of the servers is 27C it’s going to be very hot (34-37C) coming out of the rear. For blade servers it can be a blistering 42C at the rear!

“It’s not just the servers that can fail,” states Beresford, “at the rear of the servers are electric supply cables, power distribution units and communication cables. Most of these are simply not designed to work at such elevated temperatures and are liable to early mortality.”

Interestingly, again according to ASHRAE's published figures, if the temperature is reduced to 17C, server reliability is improved by 13 per cent compared to conventional 20C operations.
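Taken together, the ASHRAE-derived figures quoted above amount to a relative failure-rate curve against the conventional 20C baseline. A minimal sketch using only the percentages quoted here (the linear interpolation between the quoted points is our assumption):

    # Relative server failure rate vs the 20C baseline, per the figures quoted above
    X_FACTOR = {17.0: 0.87, 20.0: 1.00, 25.0: 1.24, 27.5: 1.34}

    def relative_failure_rate(cold_aisle_c):
        """Linear interpolation between the quoted points (an assumption)."""
        pts = sorted(X_FACTOR.items())
        for (t0, f0), (t1, f1) in zip(pts, pts[1:]):
            if t0 <= cold_aisle_c <= t1:
                return f0 + (f1 - f0) * (cold_aisle_c - t0) / (t1 - t0)
        raise ValueError("outside the quoted 17-27.5C range")

    print(f"{relative_failure_rate(24.0) - 1:+.0%} failures vs 20C")  # about +19%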

"To cool the air to 17C would be completely uneconomic with conventional refrigeration cooling," said Beresford. "Our modelling shows it would require over 500 kilowatts of electricity for every megawatt of IT equipment.

"However, with our evaporative direct air cooling CRECs (Computer Room Evaporative Coolers), this 17C operation would require less than 40 kilowatts – a saving of over 450kW compared to conventional refrigeration and a reduction in PUE (Power Usage Effectiveness) of 0.45."

Given the option of cooling a data centre with refrigeration at 27C, or with evaporative cooling at 17C at less than 10 per cent of the energy use, with 40 per cent fewer temperature-related server failures and a more stable environment for other components, it is clear why over 150 UK data centre operators have adopted this approach.
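The arithmetic behind those claims is straightforward: cooling power per megawatt of IT load maps directly onto the cooling component of PUE. A minimal check of the figures quoted above (assumed to hold at full load):

    IT_LOAD_KW = 1000          # one megawatt of IT equipment
    REFRIGERATION_KW = 500     # conventional refrigeration at 17C, as quoted
    EVAPORATIVE_KW = 40        # direct-air evaporative cooling at 17C, as quoted

    saving_kw = REFRIGERATION_KW - EVAPORATIVE_KW
    print(f"Saving: {saving_kw} kW per MW of IT load")        # 460 kW ('over 450kW')
    print(f"PUE reduction: {saving_kw / IT_LOAD_KW:.2f}")     # 0.46 (~0.45 quoted)
    print(f"Energy use: {EVAPORATIVE_KW / REFRIGERATION_KW:.0%} of refrigeration")  # 8%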

Alan Beresford has prepared an informative webinar explaining how evaporative cooling works and why it uses so little energy compared to conventional refrigeration. To watch it visit http://turt.co/dcme12

Beresford adds a final point: "When engineers and technicians are working on servers, it's usually at the rear where all the connections are. If they are going to have to work regularly in temperatures of 34C to 42C, there might be health issues to consider too. Keeping their working environment under 30C is a far more acceptable solution."

To find out more about EcoCooling CRECs visit www.ecocooling.org

 

ColdLogik Pours Cold Water On Data Centre Cooling – And Nets The Queen’s Award!

Michael Cook, USystems Managing Director and Founder

USystems, the innovator in data centre cooling, has won the coveted 2013 Queen's Award for Enterprise in the Innovation category for its multi-award-winning ColdLogik data centre cooling range.

The Queen’s Award for Enterprise is the UK’s most prestigious award for business performance and recognises and rewards outstanding achievement. The Innovation category is for continuous innovation and development, allied to commercial success, sustained over at least five years to levels that are outstanding for the size of a company’s operations.

Michael Cook, USystems’ MD and founder said, “We are extremely excited and proud to receive this award which helps to elevate our profile and endorse our success.

“The Queen’s Award assessors were impressed that USystems could make a radical difference to the approach of data centre cooling with a simple idea that was well thought through and offers a complete, long term solution to the challenges of reducing data centre energy consumption, real estate, carbon footprint and other associated costs.”

Founded in 2003 and based in Gamlingay, Cambridgeshire, England, USystems is a privately owned company with over 50 employees in its 35,000 square foot manufacturing facility. It launched the family of ColdLogik CL20 rear coolers – the heart of its data centre solution – in 2007 as the world’s first intelligent and fully integrated rear cooler system.
ColdLogik coolers have been deployed worldwide – in over 100 projects – all of which now boast massive on-going energy and cost savings, plus a lower carbon footprint.

In addition to reducing energy consumption, ColdLogik rear coolers remove hot spots and can be retrofitted to existing or new build OEM cabinets using an interface.

USystems continues to develop and expand the range to meet new demands. The company is currently working with a number of key data centre design and build specialists that are incorporating the ColdLogik solution into their on-going projects.

The technology

Data centre cooling is an issue with far-reaching effects on both cost and power consumption for many businesses throughout the world.

ColdLogik rear coolers replace the traditional approach to data centre cooling by removing both low and high density heat loads. Rear door cooling has the added benefit of reclaiming the real estate wasted by hot aisle/cold aisle, in-row cooling, CRAC and aisle-containment designs.

Waste heat generated by active equipment within a cabinet is removed at source by water cooling. Water is 3,500 times more effective than air by volume. Under normal circumstances, water and computing are not a good mix – however, by utilising a patented Leak Prevention System, the ColdLogik solution operates without fear of leakage.

Facility-wide solution

It will surprise many readers that, in order to gain optimum energy performance and control the room environment, ColdLogik coolers can be used to manage the entire computer room / data centre without additional air conditioning. The ColdLogik Management System intelligently maintains the room’s ambient temperature.

ColdLogik is now specified into many prestigious current and on-going projects where it offers superior control and unrivalled energy, power and financial savings to future-proof data centres for many years.

Other recent awards
USystems is no stranger to winning with its ColdLogik products – with three further awards under its belt last year:

• Winner – 2012 CEEDA Energy Efficient Gold Award
• Winner – 2012 CEEDA Energy Efficient Silver Award
• Winner – 2012 UK IT Industry Award

DCME0013

About USystems

USystems is the designer, manufacturer and distributor of an impressive range of world-class, high-quality modular and scalable cooling systems, soundproofing data cabinets and 19 inch racks, which are used throughout the world in computer rooms, data centres and data / telecoms projects.

Through clear and innovative thinking in our designs and using advanced manufacturing processes, our products are dramatically reducing noise, heat and energy costs in data centres and offices while improving space utilisation and reducing carbon footprints.

USystems is registered as meeting the quality manufacturing requirements of BS EN ISO 9001:2008 and products comply with CE, UL, SCC, CMC and RoHS standards – we recycle over 90% of our waste and are currently working towards ISO 14000.

Notes To Editors
For further information on ColdLogik, USystems or the Queen’s Award for Enterprise please contact:
Zillah Loewe, USystems Limited.
Email: zillah@usystems.co.uk
Tel: +44 1767 652 817
OR
Phil Turtle, DataCenterIndustryPR – Turtle Consulting
Email phil.turtle@turtleconsulting.com
Tel: +44 7867 780 676

Notes To Advertising Representatives
Please refer any requests for colour seps direct to client (details above) and NOT to the agency. Thanks, Phil Turtle

 

Retrofitting Cold-aisle Cocooning Does Not Mean Massive Disruption

Working round infrastructure that has evolved over time makes retrofitting hot/cold aisle containment a challenge. Multiple data and network cable runs, cooling pipes and mismatched cabinets mean many solutions will not work effectively.

Mark Hirst, T4 product manager at Cannon Technologies, looks at the options available to those who want containment but are not sure if their environment can handle it.

What is hot/cold aisle containment?
Hot/cold aisle containment is an approach that encloses either the input or output side of a row of cabinets in the data centre. The goal is to effectively control the air on that side of the cabinet to ensure optimal cooling performance.

With hot aisle containment, the exhaust air from the cabinet is contained and drawn away from the cabinets. Cold aisle containment regulates the air to be injected into the front of the hardware.

In both cases, the ultimate goal is to prevent different temperatures of air from mixing. This means that cooling of the data centre is effective and the power requirements to cool can, themselves, be contained and managed.

Challenges
Over time, all environments evolve. The most common changes in a data centre tend to be around cabling and pipework. What was once a controlled and well-ordered environment may now be a tangle of power and network cable runs installed in an ad hoc way. In a well-run data centre it is not unreasonable to assume this would be properly managed, but the longer it has been since the last major refit, the greater the likelihood of unmanaged cable chaos.

The introduction of high-density, heat-generating hardware such as blade systems has seen greater use of water-based cooling. This requires changes to the racks and the addition of water pipes. These make enclosing a rack difficult, as many solutions need pipework holes cut into them; you cannot simply drill a hole, because a retrofit will not normally include disconnecting and reconnecting the pipes to run them through it.

These are not the only challenges. Just as the type of hardware in the cabinets has evolved, so have the cabinets themselves. What started out as a row of uniformly sized and straight racks may now be a mix of different depths, widths and heights. This is common in environments where there are large amounts of storage present as storage arrays are not always based on traditional rack sizes.

Cabinet design can also introduce other issues. If the cabinet has raised feet for levelling, something often seen with water-based solutions, there may be backwash of air under the cabinets. There may be gaps in the cabinets, either down the sides or where there is missing equipment. These should already be covered by blanking plates; the reality in many data centres, however, is that plates will be missing, allowing hot and cold air to mix.

The floor also needs attention. Structurally, there may be a need to make some changes to accommodate the weight of any containment system. This is not just the raised floor but the actual floor on which the data centre sits. The evolution of data centres and changes to equipment are rarely checked against floor loads. Before adding more weight through a containment system, take the opportunity to validate the loads.

Floor tiles degrade over time. They get broken, replacements may not be the right size or have the right size of hole. No air containment system can be effective if there are areas where leaks can occur.

Prerequisites
It would be naïve to assume that retrofitting hot/cold aisle containment will not require some changes to the existing configuration. However, there are very few prerequisites to deal with.

1. Weights and floors as mentioned above.
2. Each enclosure should ideally line up with its counterpart across the aisle. Don’t worry about small gaps; we will deal with those later.
3. The height of each pair of enclosures should ideally be the same; there are ways around this, within reason. A height difference of a few cm can be managed easily, while a difference of a metre or more is increasingly common in older environments. Whilst most containment solutions cannot cope with this, we have designed our retrofit solution specifically for such “Manhattan Skylines”, which are highly prevalent in many older data centres and where a cost-effective upgrade path to containment can significantly extend the useful life of the existing racks, data cabling and M&E infrastructure.
4. Normally, each row must line up to present an even face to the aisle that is being contained, in order to create an air tight seal.

The prerequisites may require a little planning inside the data centre and in the most extreme case, require a little moving of cabinets to get the best fit. Again it is possible, as we have done with our own retrofit system, to design a solution for situations where it is not reasonable to move cabinets to create an even line to the containment aisle.
What equipment is required?
Once the prerequisites have been met, fitting aisle containment is a mix of installation and good practice cabinet management.

There are four steps to fitting containment.

1. Fix the ceiling eyebrow to the top of the cabinets. Where there are differences in cabinet heights, it will be necessary to fit blanking panels to create a uniform height both sides of the aisle. It may also be necessary to arrange for cables or pipes to be moved if they are too close to the edge of the cabinet.
2. Install the ceiling panels. These sit inside a framework which should be a uniform size. If the ceiling panels do not fit snugly the containment will be seriously compromised.
3. Fit air skirts under the cabinets to prevent any return flow of air. If the cabinets do not butt up against each other, fit skirts to cover the gaps between the cabinets.
4. Attach the doors at both ends of the aisle.
Tidying up
Tidying up the installation is about good data centre management. Here the typical steps include:

1. Fit blanking plates wherever there are gaps inside the cabinets.
2. Fit blanking plates to the sides of cabinets where cables run.
3. Replace broken or damaged floor tiles. If the containment is to the cold aisle, rebalance the cooling by changing the floor tiles for those with the right size of vent.
4. If the containment is for the hot aisle, check that the extraction and venting is evenly spread across the length of the aisle to prevent hot air zones being created.
5. Fit monitoring devices such as temperature and humidity sensors to ensure that there are no unexpected challenges caused by the containment.

None of these steps should cause problems for data centre facilities managers, and they provide an opportunity to validate the benefits of the aisle containment.

Conclusion
There is a belief that retrofitting aisle containment to data centres is highly disruptive, requires a lot of time, is expensive, can have limited benefits and may not be suitable for every environment.

As can be seen, the retrofitting process is not especially complex and builds on good data centre housekeeping and management practice. Additionally, the time required to do the retrofitting is easily manageable, and the work can be done without impact on data centre operations.

Retrofitting aisle containment is one of the easy wins when it comes to recovering money spent on power by reducing excessive cooling while retaining, for a good few years more, the significant investment in racks, data cabling and M&E.

For more information go to: www.cannontech.co.uk

CAN0072

Why Grey Is The New Black

[Images: Cannon ServerSmart cabinets and roof-mounted Raceway installed at Wi-Manx Heywood Data Centre; ServerSmart cabinets within an Aisle Cocoon for cold aisle containment at Wi-Manx Heywood; Cannon Technologies grey-white cabinets inside a cold aisle]

Power prices continue to rise leaving data centre owners facing a double increase in power costs this winter. Some have already seen power costs increase by over 6% as contracts have renewed and in 2013 there will be even more pressure from green taxes across the EU.

Although hardware is getting more efficient, what else can a data centre owner do? Mark Hirst, T4 Product Manager, Cannon Technologies Limited looks at an unexpected saving that many data centre owners are ignoring.

Cutting power costs a daily task
Data centre owners spend as much time today looking for cost savings as they do running their facilities. Each generation of computer hardware is more energy efficient than the last and capable of reaching higher levels of utilisation. While hardware replacement cycles have lengthened from an average of three years to five, the most power-hungry hardware inside the data centre is still being changed at three years.

Another power efficiency gain has been raising the temperature inside the data centre from 16C a decade ago to around 26C today. This has helped drive hardware replacement to ensure the operating envelope of servers and storage is not exceeded. It has also reduced the cost of cooling.

Smaller measures such as enforcing blanking plates and keeping floors and plenums clear have also had a beneficial impact on reducing power bills.

Despite all this, power costs have continued to rise. This means that data centre owners need to look for new areas in which to make savings.

Lighting
One area that has been looked at before is lighting in the data centre. It may surprise some readers to discover that lighting can account for 3-5% of the power costs in a data centre. This has already led smart data centre owners to look at where savings can be made.

An increasingly common solution is to use intelligent lighting systems that only come on when someone is in the data centre. These systems help to reduce cost but have to be properly designed. If someone is working inside a rack, they may not trigger the lighting system frequently enough to keep the lights on. The result is an engineer plunged into darkness, forced to keep stopping and walking down the aisle to bring the lights back on.

Bulbs and fittings
Another solution is to use longer-life, lower-power bulbs. This can result in several separate savings. Longer life means fewer replacements, which in turn means lower inventory costs. Fewer replacements also mean less time spent by maintenance crews changing light bulbs.

The type of light fitting also has an impact on the efficiency of the lighting. Direct light can be harsh, and too much reflectivity off surfaces is hard on the eyes. One solution is to use recessed light fittings so that the light source is not reflecting directly off a surface.

A second alternative for light fittings is to use a lighting channel. This works by creating a scatter effect on the light, giving it a more even distribution around the room.

Harsh light
Good examples of harsh light environments are hospitals and operating theatres. Bright white light and bright white walls may be ideal for surgery, but they make such places hard on the eyes under extended exposure.

By comparison, most server halls have off-white walls at best, floors in a grey/blue colour, and are filled with black server cabinets. These colours were not chosen to create a less harsh working environment; more often they reflect the lower cost of equipment and the ready availability of certain colours and types of paint.

Reflectivity
It may seem strange but it often takes more light to illuminate a server room than an operating theatre. The reason for this is the way light is absorbed by colour.

The light reflective value (LRV) of a colour determines how much light the colour reflects rather than absorbs. Black reflects as little as 5% of the ambient light while grey-white reflects up to 80% of the light.

This means that a server room filled with black cabinets, many of which are filled with black cased hardware, absorbs a lot of light. For this reason, it is not unusual to find engineers working at the back of cabinets wearing head torches to ensure that they have enough light to see by.

Grey the new black
Changing the colour of the server cabinets from black to grey or even white can make a significant difference to the amount of available light in a room and the cost of lighting that room.

Using the LRV of the server cabinet colour, it is not unrealistic to see a saving of around 30% on the lighting in a data centre. It may even be possible to achieve a greater saving, but one problem with making an environment too bright is that it becomes difficult to work in.

Effective lighting is a health and safety issue, and it may surprise some readers to know that the recommended light level in a data centre is 500 lux at 1m above the floor. How bright is that? Most building corridors are 100-150 lux, entrance halls around 200 lux and office spaces 300-400 lux. That means you need to put a lot of light into a data centre to deliver 500 lux. If the surfaces are absorbing most of that light, you need to use a large amount of energy to reach the target.

Calculating the savings
A small 1MW data centre with 5% of its total power consumed by the lighting is spending 50kW just to light server rooms.

Reducing that by 30% delivers a saving of 15kW.

Assuming 10p per kWh, the savings work out at:

(15 x 10) / 100 = £1.50 per hour
1.5 x 24 = £36 per day
36 x 365 = £13,140 per year
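
The same arithmetic generalises to any site. As a minimal Python sketch, using the figures above (5% lighting share, 30% saving, 10p per kWh) as illustrative defaults rather than fixed values:

# Lighting savings estimate, following the worked example above.
# The default share, saving fraction and tariff are illustrative inputs.

def annual_lighting_saving(total_load_kw,
                           lighting_share=0.05,
                           saving_fraction=0.30,
                           pence_per_kwh=10.0):
    lighting_kw = total_load_kw * lighting_share        # e.g. 1MW -> 50kW
    saved_kw = lighting_kw * saving_fraction            # e.g. 50kW -> 15kW
    pounds_per_hour = saved_kw * pence_per_kwh / 100.0  # 15 x 10p = £1.50
    return pounds_per_hour * 24 * 365                   # pounds per year

print(annual_lighting_saving(1000))   # £13,140  (small 1MW site)
print(annual_lighting_saving(5000))   # £65,700  (average 5MW site)
print(annual_lighting_saving(20000))  # £262,800 (20MW site, hence "over £240,000")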

While £13,140 might not seem a huge amount, the average UK data centre draws around 5MW, with the larger data centres drawing in excess of 20MW. Scale up the savings and the larger data centres could be saving over £240,000 per year.

The savings are not just on the power. The larger the data centre the more it will be paying in carbon taxes. This means additional savings are made through the reduction of power.

An additional benefit is that it creates a more engineer friendly environment. If the engineer does not need to use a head torch, then they can work more easily around the cabinet.

Conclusion
It is often easy to overlook the little things when saving money. The use of intelligent lighting systems has managed to cut costs in some data centres. Changing the type of lighting and light fittings can also deliver savings and improve the working environment.

The bigger saving, however, comes from the colour of the cabinets and the equipment in the room. In a small data centre, it is possible to save the equivalent of a junior operator’s salary. In larger data centres, the savings could easily cover the cost of several highly trained system and security engineers.

Whether the money saved is invested back into the business or simply taken off the bottom line, it is time to pay attention to lighting costs and the colour of the equipment in the data centre.

CAN0029

Grey Cabinets Help Cut Data Centre Lighting Costs By A Third

Data centres could save a third of their lighting costs by replacing black server cabinets with grey ones, believes Mark Hirst, T4 Product Manager, Cannon Technologies Limited. In fact, choosing to use grey or white cabinets during an equipment refresh can make bigger savings than intelligent lighting systems which use movement-sensitive lights.

Ambient light levels in a server room are crucial for the engineers who maintain the systems, but ironically it is the equipment itself that can make the most significant impact. The light reflective value (LRV) of black server racks can be as little as 5%, whereas grey or white will reflect up to 80% of the light. “Using the LRV of the server cabinet colour, it is not unrealistic to see a saving of around 30% on the lighting in a data centre,” says Hirst.

“It may also be surprising to some people that lighting can account for 3-5% of the power costs in a data centre,” continues Hirst. This comes at a time when data centre owners are facing a double increase in power costs. Not only have some seen power costs rise by over 6% as contracts have been renewed, but the carbon taxes introduced as part of the European Union’s green policies will further increase bills.

The savings that can be made are significant. A small 1MW data centre where 5% of the total power is consumed by lighting will use 50kW to light the server rooms. A 30% reduction would deliver a saving of 15kW. Assuming 10p per kWh this data centre could save £36 per day, or £13,140 per year, the equivalent of a junior operator’s salary.

An average UK data centre, which would draw around 5MW, would save over £65,000 per year. On top of those power cost reductions there would also be the savings in carbon tax and all of this could be achieved by just choosing different colour cabinets. “Manufacturers have already taken this on board,” says Hirst. “HP’s kit is now in grey and Cisco’s new data centre in the USA used bright white cabinets. In the data centre, grey is definitely the new black.”

Ref: CAN0044

Effective Datacenter Cooling Means Big Savings

As density and power usage in the datacenter continue to rise, so does the heat. When planning the design of the datacenter, it is important to understand the workload and the expected increase in power in order to deploy cooling technologies effectively. Without this, equipment and cooling are not properly aligned and money is wasted.

 Hot aisles, cold aisles, containment, the correct management of racks and when to deploy HVAC are all key considerations in heat management. The biggest challenge in cooling the datacenter is designing a solution that is flexible and which doesn’t create its own hot spots that cannot be easily cooled.

 Cannon Technologies has decades of experience in managing complex cooling issues within the datacenter and Mark Hirst, T4 product manager explains how to plan and deploy effective cooling.

 The numbers tell the story

In late 2011, DatacenterDynamics released their annual energy usage survey for datacenters. It made for stark reading.

1. Global datacenter power usage in 2011 was 31GW. This is equivalent to the power consumption of several European nations.

2. 31GW is approximately 2% of global power usage in 2011.

3. Power requirements for 2012 were projected to grow by 19%.

4. 58% of racks consume up to 5kW, 28% consume 5-10kW and the rest consume more than 10kW per rack.

5. 40% of participants stated that increasing energy costs will have a strong impact on datacenter operations going forward.

If these numbers come as a shock, they should be considered against several other factors that will impact the cost of running datacenters.

1. Environmental or carbon taxes are on the increase and datacenters are seen as a prime target by regulators.

2. As a result of the Fukushima nuclear disaster, several European countries are planning on reducing and even eliminating nuclear as a power generation source. This will create a shortage in supply and drive up power costs.

3. Around 40% of all power usage in the datacenter is to remove heat and could be considered waste.

4. The move to cloud will only shift Capital Expenditure (CAPEX) out of the budget. Power is Operational Expenditure (OPEX) and will be added to the cost of using the cloud, driving OPEX up at a faster rate than CAPEX is likely to come down.

 Design for cool

Removing heat effectively is all about the design of the cooling systems. There are several parts to an effective cooling system:

1. The design of the datacenter.

2. Choosing the right technology.

3. Effective use of in rack equipment to monitor heat and Computational Fluid Dynamics (CFD) to predict future problems.

 Datacenter Design

A major part of any efficient design is the datacenter itself. The challenge is whether to build a new datacenter, refurbish existing premises or retrofit cooling solutions. Each can deliver savings, with a new build or refurbishment likely to deliver the greatest. Retrofitting can also deliver significant savings on OPEX, especially if reliability is part of the calculation.

1. Build

Building a new datacenter provides an opportunity to adopt the latest industry practices on cooling and take advantage of new approaches. Two of these approaches are free air cooling and splitting the datacenter into low, medium and high power rooms.

In 2010, HP opened a free air cooling datacenter in Wynyard, County Durham, UK. In 2011, HP claimed it had only run the chiller units for 15 days resulting in an unspecified saving on power.

2. Refurbish

Refurbishing an existing datacenter can deliver savings by allowing easy access to rerun power and change out all the existing cooling equipment.

In 2011 IBM undertook more than 200 energy-efficiency upgrades across its global datacenter estate. The types of upgrades included blocking cable and rack openings, rebalancing air flow and shutting down, upgrading and re-provisioning air flow from computer room air conditioning (CRAC) units. The bottom line was a reduction of energy use of more than 33,700MWh which translates into savings of approximately $3.8m.

3. Retrofit

Retrofitting a datacenter can be a tricky task. Moving equipment to create hot aisles and deploying containment equipment can have an immediate impact on costs.

Kingston University was experiencing problems in its datacenter. IT operations manager Bill Lowe admits: ‘As the University has grown, so too has the amount of equipment housed in its datacenter. As new racks and cabinets have been added, the amount of heat generated started to cause issues with reliability and we realised that the only way to deal with it was to install an effective cooling system. Using Cannon Technologies’ Aisle Cocoon solution means that we will make a return on investment in less than a year.’

 Choosing the right technology

Reducing the heat in the datacenter is not just about adding cooling. It needs to start with the choice of the equipment in the racks, how the environment is managed and then what cooling is appropriate.

1. Select energy efficient hardware.

Power supplies inside servers and storage arrays should all be at least 85% efficient when under 50% load. This will reduce the heat generated and save on basic power consumption.

2. Raise input temperature.

ASHRAE guidelines suggest 27C as a sustainable temperature which is acceptable to vendors without risking warranties. Successive generations of hardware are capable of running at higher input temperatures, so consider raising this beyond 27C where possible.

3. Workload awareness.

Understand the workload running on the hardware. High energy workloads such as data analysis or high performance computing (HPC) will generate more heat than email servers or file and print operations. Mixed workloads cause heat fluctuations across a rack, so balancing workload types will enable a consistent temperature to be maintained, making it easier to remove excess heat (see the workload-balancing sketch after this list).

4. Liquid cooling.

Liquid cooling includes any rack that uses water, or a refrigerant in its liquid state. Industry standards body ASHRAE has recently begun to talk openly about the benefits of liquid cooling for racks where very high levels of heat are generated. This can be very hard to retrofit to existing environments due to the problems of bringing the liquid to the rack.

5. Hot/cold aisle containment.

This is the traditional way to remove heat from the datacenter. Missing blanking plates allow hot air to filter back into the cold aisle, reducing efficiency. Poorly fitted doors on racks and the containment zone allow hot and cold air to mix. Forced air brings other challenges: missing and broken tiles allow hot air into the floor, while too much pressure prevents air going up through the tiles.

6. Use of chimney vents

This can be easily retrofitted even in small environments. Using fans, the chimney pulls the hot air off the rack and vents it away reducing the need for additional cooling.

7. CRAC

Computer Room Air Conditioning (CRAC) has been the dominant way of cooling datacenters for decades. It can be extremely efficient although that depends on where you locate the units and how you architect the datacenter in order to take advantage of the airflow.

One danger with poorly placed CRAC units, as identified by The Green Grid, is multiple CRAC units fighting to control humidity when air is returned at different temperatures. The solution is to network the CRAC units and coordinate humidity control.

Effective placement of CRAC units is a challenge. When placed at right angles to the equipment, their efficiency drops away over time causing hot spots and driving the need for secondary cooling.

In high density datacenters, ASHRAE and The Green Grid see Within Row Cooling (WIRC), which takes the cooling right to the sources of the heat, as imperative. WIRC also allows cooling to be ramped up and down to keep an even temperature across the hardware and to balance cooling against workload.

If the problem is not multiple aisles, just a single row of racks, use open door containment with WIRC, or alternatively liquid based cooling. This is where the doors between racks are open, allowing air to flow across the racks but not back out into the aisle. Place the cooling units in the middle of the row and then graduate the equipment in the racks with the highest heat closest to the WIRC.

For blade servers and HPC, consider in rack cooling. This solution works best where there are workload optimisation tools that provide accurate data about increases in power load so that as the power load rises, the cooling can be increased synchronously.

New approaches to CRAC are extending its life and improving efficiency. Dell uses a sub floor pressure sensor to control how much air is delivered by the CRAC units. This is a very flexible and highly responsive way to deliver just the right amount of cold air and keep a balanced temperature.

Dell claims that it is also very power efficient. In tests, setting the subfloor pressure to zero effectively eliminated leaks; while it created a small increase in the power used by the server fans, it heavily reduced the power used by the CRAC units. Dell states that this was a 4:1 reduction. Dell has not yet delivered figures from the field to prove this saving, but it does look promising (a minimal control-loop sketch follows this list).

8. Workload driven heat zoning

Low, medium and high power and heat zones allow cooling to be effectively targeted based on compatible workloads. An example of this is the BladeRoom System where the datacenter is partitioned up by density and power load.
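
To make the workload-awareness point in item 3 concrete, here is a toy Python sketch of heat-balanced placement. The workload names, heat figures and greedy strategy are all illustrative assumptions, not a specific scheduler’s algorithm.

# Toy sketch: spread workloads across racks so per-rack heat stays even.
# All heat figures (kW) are invented for illustration.
workloads = [
    ("hpc-batch", 8.0), ("analytics", 6.5), ("email", 1.5),
    ("file-print", 1.0), ("web", 2.0), ("hpc-sim", 7.5),
]

racks = {"rack-a": 0.0, "rack-b": 0.0, "rack-c": 0.0}
placement = {}

# Greedy: place the hottest workloads first, each onto the coolest rack,
# so high-heat jobs do not concentrate in one spot.
for name, heat_kw in sorted(workloads, key=lambda w: -w[1]):
    coolest = min(racks, key=racks.get)
    placement[name] = coolest
    racks[coolest] += heat_kw

print(placement)
print(racks)  # per-rack heat ends up roughly even (9.0, 9.0, 8.5)

And to illustrate the pressure-driven CRAC control described in item 7: Dell has not published its control logic, so the loop below is a minimal sketch under stated assumptions, with read_subfloor_pressure() and set_crac_fan_speed() as hypothetical stand-ins for a site’s real sensor and actuator interfaces.

import time

TARGET_PA = 0.0   # target subfloor gauge pressure; the Dell tests used zero
GAIN = 5.0        # proportional gain, tuned per site (assumed value)
fan_speed = 50.0  # starting fan speed, percent

def read_subfloor_pressure():
    # Hypothetical sensor read; a real site would query its BMS here.
    return 0.2  # pascals, dummy value for illustration

def set_crac_fan_speed(percent):
    # Hypothetical actuator call standing in for the CRAC unit interface.
    print(f"CRAC fans set to {percent:.1f}%")

for _ in range(3):  # a real controller would run continuously
    error = TARGET_PA - read_subfloor_pressure()
    # Pressure above target means oversupply: slow the fans down; below
    # target: speed them up. Deliver only the air the floor actually needs.
    fan_speed = max(0.0, min(100.0, fan_speed + GAIN * error))
    set_crac_fan_speed(fan_speed)
    time.sleep(1)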

 Effective Monitoring

Effective monitoring of the datacenter is critical. For many organisations, this is something that is split across multiple teams and that makes it hard to identify problems at source and deal with them early. When it comes to managing heat, early intervention is a major cost saving.

There are three elements here that need to be considered:

1. In rack monitoring.

2. Workload planning and monitoring.

3. Predictive technologies.

All three of these systems need to be properly integrated to reduce costs from cooling.

 In Rack Monitoring

This should be done by sensors at multiple locations in the rack: front, back and at four different heights. It will provide a three-dimensional view of input and output temperatures and quickly identify if heat layering or heat banding is occurring.

As well as heat sensors inside the rack, it is important to place sensors around the room where the equipment is located. This will show if there are any issues such as hot or cold spots occurring as a result of air leakage or where the air flow through the room has been compromised. This can often occur due to poor discipline in the datacenter where boxes are left on air tiles or where equipment has been moved without an understanding of the cooling flow inside the datacenter.
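
As an illustration of that three-dimensional view, the short Python sketch below flags heat layering from front and back readings at four heights. The readings, layout and threshold are invented for illustration and do not reflect any particular product’s interface.

# Sketch: detect heat layering from in-rack sensors (front/back, four heights).
# Temperatures are dummy values; a real deployment would pull them from
# the monitoring suite.
rack_temps_c = {
    ("front", 10): 19.0, ("front", 20): 20.0, ("front", 30): 21.5, ("front", 40): 24.0,
    ("back", 10): 28.0, ("back", 20): 30.0, ("back", 30): 33.0, ("back", 40): 38.0,
}

LAYERING_THRESHOLD_C = 4.0  # assumed: inlet spread that suggests layering

front = [t for (pos, _), t in rack_temps_c.items() if pos == "front"]
spread = max(front) - min(front)
if spread > LAYERING_THRESHOLD_C:
    print(f"Possible heat layering: {spread:.1f}C spread across inlet heights")

for height in (10, 20, 30, 40):
    delta = rack_temps_c[("back", height)] - rack_temps_c[("front", height)]
    print(f"{height}U: front-to-back delta-T = {delta:.1f}C")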

Most datacenter management suites, such as CannonGuard, provide temperature sensors along with CCTV, door security, fire alarm and other environmental monitoring.

Workload Planning and Monitoring

Integration of the workload planning and monitoring into the cooling management solutions should be a priority for all datacenter managers. The increasing use of automation and virtualisation means that workloads are often being moved around the datacenter to maximise the utilisation of hardware.

VMware, HP and Microsoft have all begun to import data from Data Center Information Management (DCIM) tools into their systems management products. Using the DCIM data to drive the automation systems will help balance cooling and workload.

Predictive Technologies

Computational Fluid Dynamics (CFD) and heat maps provide a way of understanding where the heat is in a datacenter and what will happen when workloads increase and more heat is generated. By mapping the flow of air it is possible to see where cooling could be compromised under given conditions.

Companies such as Digital Realty Trust use CFD not only in the design of a datacenter, but also as part of their daily management tools. This allows them to see how heat is flowing through the datacenter and to move hardware and workloads if required.

Conclusion

There is much that can be done to reduce the cost of cooling inside the datacenter. With power costs continuing to climb, those datacenters that reduce their power costs and are the most effective at taking heat out of the datacenter will enjoy a competitive advantage.

CAN0183