Mayor of London congratulates Virtus Data Centres on commencement of works at LONDON2

London, 5 November 2013 – Virtus Data Centres Ltd (Virtus), London’s flexible and efficient data centre specialist, has announced plans to expand its London data centre portfolio and the start of construction of its second site in West London, LONDON2.

Virtus’ new flagship data centre is designed to meet the growing demand for scalable, reliable, on-demand colocation services. It will be the first in London to deploy a ground-breaking fresh-air, evaporative cooling technology that dramatically decreases energy consumption, bringing site power usage effectiveness (PUE) to below 1.2 and delivering substantial savings to Virtus’ clients.
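PUE is simply the ratio of total facility power to the power delivered to IT equipment, so the sub-1.2 claim can be sanity-checked with a few lines of arithmetic (the figures below are illustrative examples, not Virtus’ actual loads):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt drawn goes to IT kit; cooling, UPS
# losses and lighting push the ratio above 1.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return the power usage effectiveness ratio."""
    return total_facility_kw / it_load_kw

# Example: a 1,000 kW IT load with 180 kW of cooling and other overhead
print(round(pue(1180, 1000), 2))  # 1.18 -- under the 1.2 target quoted above
```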


By using this evaporative cooling technology together with solar panels, ground water from its own well, chimney racks for heat extraction, highly efficient UPS systems and other innovative technologies, Virtus LONDON2 will be the most energy-efficient data centre in London. It will reduce its environmental impact even further by using 100% green power from renewable sources and heat pumps to recirculate heat generated by the IT equipment into communal areas.


Mayor of London Boris Johnson welcomed the new investment in Virtus LONDON2 and commented: “I’m delighted that Virtus has chosen to invest in London again. London is leading the global digital technology revolution and is the world’s leading technology hub, with great British technology, creativity and innovation. Up-and-coming companies like Virtus are at the heart of that whole explosion of talent in London, and I’m delighted to see them using so many state-of-the-art ways of saving energy and improving efficiency.”


Steve Norris, Chairman of Virtus Data Centres and himself a former Mayoral candidate, added: “Virtus is one of London’s technology success stories, growing fast in the three years since we opened the doors of our first site in Enfield. That growth is fuelled by the boom in technology, media and mobile activity in London. We wanted our LONDON2 facility to be right at the cutting edge of environmental efficiency, and we’re proud of what we’ve started. We’re also looking at other sites for LONDON3. Virtus’ ongoing commitment to innovate in line with the way businesses need data centre space ensures that our customers can rely on maximum flexibility, quality, service and value.”

Virtus Data Centres Goes Greener

Virtus Data Centres (‘Virtus’), London’s most modern and flexible data centre specialist, announces today that it has secured a new power contract with E.ON for its LON1 London data centre facility – using only energy generated from fully renewable sources.

As part of its £1bn renewable energy programme, E.ON will be supplying Virtus with energy from its portfolio of renewable sources, including onshore and offshore wind farms, biomass power stations and wave energy projects.

This means that all of the colocation customers within the Virtus LON1 facility will benefit from ‘green’ energy and will not be liable to pay the Climate Change Levy (CCL).

“Securing a fully renewable energy contract represents excellent value for our tenants and reinforces Virtus’ commitment to sustainability,” commented Neil Cresswell, CEO at Virtus Data Centres. “Combined with our Eco-Engineering principles, the new contract further enhances the environmental credentials of our LON1 data centre. I am delighted that Virtus will now be providing customers with 100 per cent renewable energy. This delivers on our commitment to sustainable and environmentally efficient solutions that not only reduce our customers’ carbon footprint but also provide the lowest costs for those seeking high-quality, flexible, carrier-neutral data centre solutions in London.”

The Virtus LON1 Tier 3 data centre features dual-resilient 8 megawatt 11kV diverse electricity feeds. The site is located in Enfield, North East London, and is perfectly positioned for synchronous data replication to inner London locations, with a 0.17 millisecond round-trip latency – making it technically and physically close to both the City of London and Canary Wharf.
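The 0.17 millisecond round-trip figure is consistent with inner-London fibre distances: light propagates through optical fibre at roughly two-thirds of its vacuum speed, so a quick estimate (an approximation, not a measured route length) gives:

```python
# Convert a round-trip latency into an approximate one-way fibre distance.
# ~2e8 m/s is the usual rule of thumb for propagation speed in glass fibre.

SPEED_IN_FIBRE_M_S = 2.0e8

def one_way_distance_km(round_trip_ms: float) -> float:
    """Estimate one-way fibre distance from a round-trip time."""
    one_way_s = (round_trip_ms / 1000.0) / 2.0
    return one_way_s * SPEED_IN_FIBRE_M_S / 1000.0

print(one_way_distance_km(0.17))  # ~17 km -- roughly Enfield to central London
```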

The LON1 data centre also hosts Virtus’ newly launched CoLo-on-Demand service, which makes data centre colocation readily accessible and affordable for cloud service providers.

The Climate Change Levy (CCL) is an energy tax designed to encourage business users to become more energy efficient and to reduce their carbon dioxide emissions. Opting to use energy from a low-CO2 source – such as wind, solar, geothermal, landfill gas or Good Quality CHP (Combined Heat and Power) – as Virtus has now done exempts customers of Virtus’ already highly energy-efficient LON1 data centre from the levy.



After Google and Microsoft Outages – “We Need to Talk” says Data Centre Alliance


“Data centres need to talk to each other more to avoid the sort of outages that have hit Google and Microsoft in recent weeks.” So said Phil Turtle, CCO of the Data Centre Alliance, the not-for-profit industry body. “The data centre industry is relatively young, yet these outages demonstrate just how utterly dependent on it we all are for our business and personal lives.

“Currently it exhibits too many ‘knowledge silos’ and an unnecessary fear of working together with competitors to share ‘best-practice’ – something mature industries find highly beneficial.

Many data centres do not have the resources of a Google or a Microsoft, yet as we have seen, even with their massive technological resources these giants can have problems.

We encourage more data centre operators, service providers and individual data centre professionals to join the Data Centre Alliance and share experiences and expertise with their peers – to ensure that the entire industry can learn from these outage events and share the knowledge for the benefit of data centre customers and service-users globally.

Working together allows the pooling of resources to establish and codify best practice – not only to avoid outages but also to increase power efficiency, and provide reliable comparative measurements (a level playing field) to enable customers to properly compare data centres when they are searching. All of these are initiatives on which the Data Centre Alliance (DCA) is currently working.

“How reliable a data centre is should be a ‘given’ and not a competitive edge,” said Turtle. “There are many other factors on which to compete, and the industry needs to share knowledge in the same way that other critical industries, such as nuclear and air transport, do for the greater good.”

As part of PEDCA, the first-ever EU-funded research project into the training and in-depth needs of the industry, a free entry-level Data Centre Alliance membership is available at

“We call on the whole industry to work together to reduce major outages for the good of the industry and the customers who rely so totally on us,” said Turtle.

The DCA is also working on the skills shortage and recruitment difficulties faced by an industry largely staffed by professionals in their fifties. Funded by two major data centre operators, Telecity and Telehouse, plus training company CNet Training, the DCA is currently running a pilot ten-day intensive Boot Camp for graduates at the University of East London, to teach them the what, why and where of data centres and the all-important skill of ‘critical thinking’.

It is then planned to roll out these Boot Camps internationally with the help of the industry’s training community, for the benefit of the entire industry.

Queen Mary, University of London chooses Virtus Data Centres to host its business critical IT platform

London, 13th June 2013 – Virtus Data Centres (“Virtus”), a leading London provider of environmentally efficient, modern, interconnected carrier neutral data centres, announced today that it has completed implementation of a new IT platform in its LON1 data centre for Queen Mary, University of London. Queen Mary is one of the UK’s leading research-focused higher education institutions and Virtus’ LON1 data centre now supports all of its business critical IT applications.

With around 16,900 students, 3,800 staff and an annual turnover of £300m, Queen Mary is one of the biggest University of London colleges. It teaches and researches across a wide range of subjects including the humanities, social sciences, law, medicine, dentistry, science and engineering. Queen Mary is also a member of the prestigious Russell Group which represents 24 leading UK universities that are committed to maintaining the very best research, an outstanding teaching and learning experience and unrivalled links with businesses and the public sector.

Jonathan O’Regan, Queen Mary’s Assistant Director IT infrastructure, said: “Virtus’ facility is hosting the core infrastructure of our new IT platform supporting the University’s business critical applications. Queen Mary chose to partner with Virtus because of their highly pragmatic and professional approach to doing business whilst delivering ‘best in breed’ data centre facilities.”

Neil Cresswell, CEO at Virtus, commented: “We are delighted that such a high profile educational and research organisation has chosen our facility to host its business critical platforms and applications. At Virtus we have a 100% track record of service availability and are committed to delivering an agile, scalable infrastructure platform to support Queen Mary’s business requirements.”

– ENDS –

For more information please contact:
William Horley
T: +44 (0) 207 499 1300

About Virtus

Virtus Data Centres owns, designs, builds and operates environmentally and economically efficient data centres at the heart of London’s cloud and digital data economy. Our customers can rely on our secure facilities and uptime record for their data and applications, enabling them to store, protect, connect, process and distribute data to customers, partners and networks within our data centres and the global digital economy. For more information please visit


Safehosts London City Data Centre Opens With A 1.06 PUE

What’s very light, hardly eats, sits outside and loves the heat? Phil Turtle went to Safehosts’ new 5MW co-location data centre in the City of London to find out.

To be honest, this was my third visit to this highly impressive new facility, disguised as a 1970s office block in the vicinity of London’s Borough tube station. The first – a midnight tour already infamous in the data centre industry – cannot be described in these pages, save to say that a second visit during daylight hours was necessary to unscramble recollections of bright LED lighting, temperatures approaching the Arctic, and Safehosts’ technical wizard Lee Gibbins telling the group that they’d got 100kW of IT load running while the cooling system was drawing only 400 watts. It was so cold we envisaged polar bears moving in sometime soon.

This third visit, however, was to meet with Safehosts’ CTO Lee Gibbins and Alan Beresford, MD of EcoCooling, whose equipment Safehosts had installed – not only because it hardly ‘eats’ any energy, but for a host of other practical and business reasons: it ‘sits outside’ (koala-style, half-way up the wall, as it turns out) and doesn’t mind very hot days. Alan Beresford said, “The cost of energy and the capital cost of cooling equipment are by far the biggest non-productive costs of any data centre. For a new co-location site in central London, space is also at a premium, and any white space which has to be given over to CRAC (computer room air conditioning) units or even to in-row coolers uses up many racks-worth of space which could otherwise be used for operational racks to generate revenue.”

Safehosts’ Gibbins explained that their building used to be a five-storey commercial office block. “I initially selected it because I foresaw the opportunity to use the existing windows as air ducts without extensive building works, and hence to have a very fast project turnaround with low development costs. It also meant that we could deliver the project with very little impact on the visual appearance of the building and – most unusually – no discernible noise.”

However, there were limitations with the building that meant conventional cooling would not have been possible. The top storey, being a lightweight upward extension, meant that the roof was essentially non-load-bearing and unsuitable for heavy refrigeration plant. The small yard at the rear was large enough only for the initial two Himoinsa 500kVA generator sets, the fuel store, the substation for one of the site’s dual 5MW main feeds, and parking for four vehicles. So an innovative approach to cooling equipment space requirements was very high on Gibbins’ agenda.

Having used EcoCooling’s fresh air CREC (Computer Room Evaporative Cooling) system for over three years at Safehosts’ Cheltenham data centre, Gibbins had no qualms over using the same technology to achieve the PUE target of 1.06 that the business had set itself.

“And that’s 1.06 PUE from day one with an initially sparsely populated co-location facility, not a hopeful full-capacity prediction,” Gibbins said.

“Few data centres spend much of their life at, or even near, full capacity,” explained Beresford. “If we take one floor at Safehosts as an example; at around 1MW capacity then using the conventional approach you would need to install three big, heavy, 500kW coolers to provide n+1 redundancy – possibly deploying two initially.

“But whilst monsters such as these may achieve a 3:1 coefficient of performance (CoP) at full load – i.e. 100kW of electricity for 300kW of cooling output – at part load this quickly falls to, at worst, 1:1. So for 150kW of cooling the unit will still be consuming 150kW! This is why some partly populated data centres routinely have a PUE of 2.5 or worse.”
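Beresford’s arithmetic can be sketched as follows. The calculation counts only cooling overhead, so real-world PUE (which also includes UPS losses, lighting and so on) would be higher still:

```python
# Cooling-only PUE at a given coefficient of performance (CoP), assuming
# the cooling system must remove heat equal to the full IT load.
# Illustrative sketch of the part-load argument, not a vendor calculation.

def cooling_pue(it_load_kw: float, cop: float) -> float:
    """PUE counting only cooling overhead."""
    cooling_draw_kw = it_load_kw / cop  # electricity needed to remove the heat
    return (it_load_kw + cooling_draw_kw) / it_load_kw

print(round(cooling_pue(500, 3.0), 2))  # full load, 3:1 CoP -> 1.33
print(round(cooling_pue(150, 1.0), 2))  # part load, 1:1 CoP -> 2.0
```

Add the facility’s other overheads on top of that 2.0 and the “2.5 or worse” figure for lightly loaded sites follows directly.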

Direct fresh air evaporative cooling, on the other hand, requires very little power, and the EcoCooling units come in much smaller ‘chunks’ – 30kW, 60kW or 90kW. So, as Gibbins explained, “we could have started by installing just two or three units initially, though in fact, as the CapEx is so much lower, we decided to start with six.”

Compared to the 50-100kW that conventional cooling consumes for every 100kW of IT load, this solution draws a maximum of 4kW per 100kW. “That’s not only a massive energy saving,” explained Gibbins, “it also means I’ve got an extra 1.3 MW of my 7 MW incoming supply available for revenue-generating IT load.
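To see where a figure like the quoted 1.3 MW comes from, consider how much of a fixed utility feed remains for IT load at a given cooling overhead. The 30% conventional-cooling overhead below is an assumption chosen to illustrate the claim; the 4% matches the 4kW per 100kW quoted above:

```python
# Usable IT capacity on a fixed utility feed, given cooling overhead
# expressed as a fraction of IT load. Overhead figures are illustrative.

def usable_it_mw(feed_mw: float, cooling_overhead: float) -> float:
    """IT load an incoming feed can support once cooling draw is included."""
    return feed_mw / (1.0 + cooling_overhead)

evap = usable_it_mw(7.0, 0.04)          # evaporative: ~6.73 MW for IT
conventional = usable_it_mw(7.0, 0.30)  # conventional: ~5.38 MW for IT
print(round(evap - conventional, 2))    # ~1.35 MW freed for revenue-earning kit
```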

“I imagine everyone knows just how easy it is to max-out the utility power feeds these days – long before a data centre is full. So having an extra 1.3 MW for production kit is a major bonus.”

Returning to the lack of space for cooling equipment at the Safehosts City of London site, the question had to be asked: how did Beresford’s team at EcoCooling solve the space problem? Sky hooks? Not far off, as it transpired. “We like to throw away the ‘it’s always done like this’ approach – which frankly is all too prevalent in data centre design,” said Beresford, explaining that, by applying a little lateral thinking, the matrix of window openings on the rear wall of this old office block proved ideal for exterior wall-mounting of the small and lightweight EcoCoolers. “Each one only weighs around 90kg, well within the wall’s load-bearing strength.”

This writer had noted, with some confusion, that the air-flow routing within the data hall was far from conventional. In the cold aisle, cold air fell down through filter panels in the ceiling rather than coming up through floor tiles. “Hot air rises, cold air falls,” explained Beresford with a wry smile. “Conventional setups push the cool air under floors and upwards through floor grilles, working against natural convection. We work with convection – since it’s free – and not against it.”

That answered one question, but why were there servo-operated louvred flaps between the hot aisles and the cold air inlet from the external cooling units? Strangely, it turns out that whilst conventional data centre cooling goes to great lengths to keep the expensive cold air from being contaminated by hot air leakage, in the evaporative cooling scenario the incoming air is frequently so cold that hot air needs to be re-circulated and mixed into it to keep the servers warm enough! “Many servers, if they get down to 10°C, will actually shut themselves down,” explained Beresford, “and we don’t want outages because the servers are seeing air which is too cold!”

Of course three of the big questions around direct air evaporative cooling are atmospheric contamination, relative humidity and ‘very hot’ days.

On the contamination front, the coolers are wrapped in a filter ‘blanket’ giving an EU4/G4-grade standard as a first line of defence.

Further filters to G4 standard are fitted in place of ceiling tiles above the cold aisles in the Safehosts installation – and these, it turns out, are a defence against particulates from both the re-circulated hot air and the incoming cold air. This gives the same filtration standard as a conventional CRAC installation.

“And using 600mm square ceiling filters saved me the cost of ceiling tiles,” quipped Safehosts’ Gibbins.

“One other misconception that needs to be explained,” said Beresford, “is that direct evaporative cooling cannot meet the relative humidity (RH) standards required in a data centre. The unique patented EcoCooling control system manages both temperature and RH. The temperature is stable and the RH never exceeds the allowable limits – so, contrary to rumour, the incoming air is not over-laden with moisture.”

And ‘very hot’ days? Well, of course, explained Beresford, in a temperate climate such as the UK’s there aren’t actually that many – which is not so good for us humans, but great for data centres.

He went on to paint a very interesting picture. “Very hot days are actually quite short-term events. We can always be sure, in the UK for example, that come night-time the temperature of the external air will fall below 20°C. So there is only a limited time when it is technically ‘very hot’.”

Refrigeration units become much less efficient as the external ambient temperature rises. Because the condenser units are out in the sun they get extremely hot – far hotter than ambient. They also suffer from their hot exhaust air being sucked back into the inlet, raising internal temperatures even higher and causing failures.

As readers will know, on very hot days conventional cooling often can’t cope, and it’s quite common to see the data centre doors wide open and massive portable fans deployed to move lots more external air through the data centre to try to keep things cool. And, to be honest, just getting more air through like that usually works.

Evaporative direct air cooling actually has two very significant advantages over refrigeration-based cooling on ‘very hot’ days, Beresford claims. Firstly, airflow is not restricted, because EcoCoolers have masses of airflow available. So as the ambient temperature increases, the system controller ramps up the fans, delivering far more cool air to the server rows than CRACs or DX (direct expansion) systems can – without having to open the data centre doors to the outside air.

“What’s more, the higher the temperature the better the cooling, because in the UK the hotter the day the lower the humidity, so the level of cooling actually increases. So on a hot day in the UK an EcoCooler can reduce the temperature by 12 degrees or more – the air coming off the cooler will never be above 22°C, whatever the outside temperature.”
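The behaviour Beresford describes follows from the standard direct evaporative cooling relation (generic physics, not specific to EcoCooling’s equipment): outlet temperature equals the dry-bulb temperature minus the cooler’s effectiveness times the wet-bulb depression. A pad effectiveness of around 0.9 is a typical assumption:

```python
# Direct evaporative cooler outlet temperature:
#   T_out = T_dry_bulb - effectiveness * (T_dry_bulb - T_wet_bulb)
# Drier air means a lower wet-bulb temperature, hence more cooling --
# which is why hot, dry UK days suit this technology.

def evap_outlet_temp_c(dry_bulb_c: float, wet_bulb_c: float,
                       effectiveness: float = 0.9) -> float:
    """Estimate supply-air temperature from a direct evaporative cooler."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A hot, fairly dry UK day: 30C dry-bulb with a 19C wet-bulb
print(round(evap_outlet_temp_c(30.0, 19.0), 1))  # ~20.1C, a drop of ~10 degrees
```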

Although direct air evaporative cooling seems to have many advantages, Beresford is a realistic engineer. “Direct air evaporative cooling isn’t for everyone or everywhere. But in many countries and many operations it offers massive energy savings and significant data hall space saving – allowing better, revenue-earning, use of scarce power and sellable space – as Safehosts have demonstrated here.”

From Safehosts’ perspective, Gibbins concludes: “Using evaporative direct air cooling, with its zero rack-space internal footprint, lightweight wall-mounted coolers and 0.04 effect on PUE, has allowed us to turn an unlikely building into a state-of-the-art colocation centre right in the City of London, and enables us to start with a 1.06 PUE from day one. I’m very happy with that.”

Shaun Barnes Named As New MD UK And EMEA at IT And Critical Facilities Consultancy i3 Solutions Group

i3 Solutions Group, the vendor-neutral critical IT and facilities consultancy established by well-known industry veterans Ed Ansett and Mike Garibaldi last year, announces the appointment of banking-sector IT guru Shaun Barnes as its UK and EMEA managing director.

Announcing the appointment, i3 Solutions Group managing partner Ed Ansett said: “Because i3 Solutions Group is trailblazing the concept of fully integrating IT application architecture, technology infrastructure and critical facilities at the design phase of projects such as data centres and trading floors, it was imperative for us to get away from the conventional consultancy model of only one or two ‘names’ constantly on aeroplanes to meetings.

“With the appointment of Shaun Barnes as MD for UK and EMEA, we are underlining our mission to have very senior consultants permanently on the ground in each territory – able to interface at ministerial and C-level with governments and major enterprises – and to take all necessary business decisions. This will mirror the capabilities we already have in North America and AsiaPac.”

Barnes comes to i3 Solutions Group with a track record that includes critical facilities, IT, networking, M&E (mechanical and electrical) and corporate real-estate with Barclays Capital, ABN Amro, Royal Bank of Scotland and most recently consultancy firm ITPM where he ran the global data centre and corporate real-estate practices, delivering large programmes of work for ICAP and DBS Bank.

Barnes, who was based in Singapore for several years but is now operating out of London said, “I’m looking forward to re-establishing many UK contacts and making new contacts in the UK and EMEA. I will be attending many industry events in the coming months.”

One of i3 Solutions Group’s unique strengths is that it brings the normally siloed disciplines of IT applications, IT infrastructure, facilities management and M&E together, with equal weight, in one consultancy.

According to Ansett, though, “only a small percentage of CIOs and CEOs currently ‘get’ this concept – there are massive advantages to having IT, FM and M&E co-planning and co-designing critical facilities like data centres, trading floors and new corporate offices.”

“There is a lack of consistency across technology strategy, design, implementation and operation in the market. There is also inconsistency across the IT stack. The major consulting firms tend to focus on business process and applications consulting and less on technology infrastructure. Similarly systems integrators concentrate on technology hardware and have little ability in critical facilities,” said Ansett.

“Naturally the large OEMs have expertise in many areas, but are predisposed to their own hardware or software products. This continuation of ‘silos’ of expertise can often lead to millions of pounds of additional downstream costs because the design thinking was not properly ‘joined up’. As an independent consultancy with expertise in each field, we engage at any point in a project.”

Barnes concluded, “I’m happy to explain the advantages of silo-removal and joined-up planning for critical facilities – be they data centres, trading floors or even corporate offices. It’s going to be several years before this thinking is adopted by the conservative mainstream – but we are here to help enlightened companies set new trends.”

Ref: i3SG0010

Brady And Siemon Release LabelMark™ 5.4 Software Expansion Pack With Time-Saving Datacomm Wizard

Label design software offers label templates based on Siemon hardware manufacturer and model selections

Brady Corporation, a leading manufacturer of label printing systems, and Siemon, a globally recognised leader in network cabling solutions, have worked together to release the latest version of LabelMark™ Label Design Software. LabelMark 5.4 features several updates for electrical, lab and voice/data applications. The most noteworthy is a time-saving datacomm wizard that offers pre-made label templates based on Siemon’s most common network connectivity hardware.

“Before, network technicians were using Microsoft Word and Excel to set up page sizes and label formats in order to label their network hardware,” said Marlon Davis, Brady’s software product manager. “The process was often tedious and very time-consuming, because they’d need to make numerous adjustments and format modifications before they could even add the data to the labels and start printing.”

With the new datacomm wizard in LabelMark™ 5.4 Software, technicians can simply select Siemon as their network hardware manufacturer and hardware model, and the appropriate label template will be automatically generated.

After selecting their label template, users are guided to apply labelling legend data and save the cable-marker and label files they have created for patch panels, jacks and other hardware. Technicians can also save a complete job, which can be reused for quick label kitting at future on-site cable drops and rack installations.

“Combining Brady’s LabelMark Software and Siemon’s hardware models provides a time-saving cable and networking labelling solution for users around the globe,” says Bob Carlson, vice president of global marketing for Siemon.

For more information on LabelMark™ Label Design Software, visit