As previously stated, it seems to me that global data on the environmental footprint of digital technology, taken alone, ultimately perpetuates the discourse of dematerialisation and deterritorialisation. As an example, I would like to develop the case of data centers, which I mentioned earlier, to observe the shortcomings of the global vision and the stakes of the territorial approach.
According to estimates by Masanet et al. and the IEA (International Energy Agency), data centers account for 1% of global electricity consumption (205 TWh).1 This consumption is reportedly rather stable (+6% since 2010) despite steady growth in the capacity and load of these centers.
Seen through this global lens, data centers do not appear to be an important subject in the study of the overall environmental footprint of digital technology. However, the data presented in this way has many limitations. First, talking about electricity only means talking about the use phase of data centers and their electricity consumption during that phase; it is therefore only a very reduced view of the overall footprint. Secondly, the datasets used to arrive at these global estimates are heavily influenced by data from US data center fleets. If we keep the reductionist lens of electricity consumption, we see other trends in other geographical areas: the European Commission published a report in May 2020 announcing that the annual electricity consumption of data centers (DCs) in the EU28 had risen from 53.9 TWh/y in 2010 to 76.8 TWh/y in 2018, a 30% increase.2 The Commission estimates that annual DC consumption accounted for 2.7% of annual electricity consumption in the EU28 in 2018 and forecasts a further 21% increase by 2025. Again, this consumption is not evenly distributed across Europe: 82% of it takes place in Western and Northern Europe. Asian data on the subject remains hard to obtain and confirm. In Singapore, for example, the Ministry of Trade and Industry estimated that the 60 data centers on its territory would account for 7% of national electricity consumption in 2020 (about 3.8 TWh). In China, Greenpeace estimates that annual electricity consumption was 161 TWh in 2018 and predicts a very large increase,3 but these numbers seem high compared with other estimates.
One immediately sees gaps between the global estimate by Masanet et al. and the IEA and the figures reported independently in each region. Why such a discrepancy? The global estimates cited always rely on US data center power consumption data (Shehabi et al.4) and use a bottom-up method: start from the unit consumption of a device and multiply by the number of devices. US consumption has stabilised thanks to efficiency gains, new hardware and the growing size of data centers (the rise of hyperscale facilities), but also because cooling methods have shifted from air conditioning to chilled-water circuits. Furthermore, bottom-up approaches tend to produce low estimates (as opposed to top-down approaches, which tend to produce high ones). Generally, the low estimates come from studies using US datasets (Shehabi et al.), while regional estimates use other sources, such as the Borderstep Institute for the European study. Due to missing data, global estimates will always be difficult to model, so it is preferable to look at regional estimates that use more contextualised datasets.
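To make the bottom-up method concrete, here is a minimal sketch in Python. The device counts, power draws, utilisation factors and PUE below are hypothetical placeholders, not figures from Shehabi et al. or the IEA; only the method itself (unit consumption multiplied by device count, plus an overhead multiplier) follows the description above.

```python
# Minimal sketch of a bottom-up electricity estimate for a data center fleet.
# All fleet figures below are illustrative placeholders, not real data.

HOURS_PER_YEAR = 8760

def bottom_up_twh(device_classes):
    """Sum annual consumption (TWh) over device classes.

    Each class is a tuple: (count, average_power_watts, utilisation_factor).
    """
    total_wh = sum(count * watts * util * HOURS_PER_YEAR
                   for count, watts, util in device_classes)
    return total_wh / 1e12  # Wh -> TWh

# Hypothetical fleet: servers, storage arrays, network equipment.
fleet = [
    (10_000_000, 300, 0.5),  # servers
    (3_000_000, 150, 0.8),   # storage arrays
    (1_000_000, 200, 1.0),   # network equipment
]

it_load = bottom_up_twh(fleet)
# Cooling and power-distribution overhead is usually applied via a PUE factor.
pue = 1.6
total = it_load * pue
print(f"IT load: {it_load:.1f} TWh; with PUE {pue}: {total:.1f} TWh")
```

One can see from the sketch why the method is sensitive to its inputs: a small error in the assumed device count or utilisation propagates linearly into the global total, which is one reason US-centric datasets skew the worldwide estimate.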
So far we have only presented figures seen through a narrow lens: the use phase and electricity consumption. If we really want a better idea of the footprint of data centers, we need to add the manufacturing and end-of-life phases and other environmental factors: water consumption, consumption of “abiotic” (non-living) resources, waste production, primary energy consumption, not forgetting the greenhouse gas emissions linked to all of this. The first thing we have to face, however, is that such data are very scarce and not publicly available. To my knowledge there are only a few “recently” published and open LCA papers: Whitehead et al. in 2015,7 and Shah et al. from 2009 to 2014.5 In the data center sector, a 2015 publication is relatively old, as installation, manufacturing and management methods have changed considerably since then. This scientific literature estimates that the manufacturing phase (construction of the building plus manufacturing of the IT equipment) represents on average 15% of the energy and GHG footprint of a data center in a country with “medium”-carbon electricity (approx. 150–200 gCO2/kWh). This figure assumes that the building is new and will last 20 years and that the IT equipment is replaced every 4 to 5 years. Based on the GAFAMs’ Scope 3 data, a recent publication by researchers from Facebook, Harvard and Arizona State University6 estimated that the carbon impact of data centers related to IT equipment, construction and infrastructure was higher than imagined. There is therefore growing interest in better understanding these “omissions”.
Generally, the integration of impacts related to the construction of a data center is optional (ETSI 203 199 V.1.3.1, p. 39), as they are considered too low, i.e. around 1%. However, the increasing size of data centers and the extensive use of concrete and metal mean that construction should not be underestimated. This is all the more important as data center construction projects are multiplying: Microsoft has announced that it wants to build 50 to 100 new data centers per year, with no end date specified. Microsoft’s development projects are far from an exception; investments are piling up in the sector: $37 billion was spent in Q3 2020 alone. Similarly, the construction phase can have a much greater impact on other environmental factors such as fine-particle emissions (PM-10) and toxic releases. Finally, the purchase of power and cooling infrastructure (HVAC, CRAC units, generators, etc.) needs to be looked at closely, especially if it has a shorter-than-average lifespan.
While building construction supposedly accounts for little of the total energy/GHG footprint, the footprint associated with the manufacturing of computers, servers and other IT equipment is not to be underestimated. It generally represents the majority of the impacts outside the use phase (except for certain factors). This footprint concerns water consumption (related to extraction and processing) as much as resource consumption (metals, etc.) and related pollutants (arsenic, cadmium, etc.). It also matters because, in accounting terms, equipment is generally replaced every 4 to 5 years, so its impact is “amortised” over fewer years. In physical terms, environmental impacts are not “amortised”: they accumulate in specific regions.
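A toy calculation illustrates the accounting point: spreading a fixed embodied impact over a shorter replacement cycle raises the annual figure, even though the physical impact occurs once, at manufacture. All numbers here are hypothetical.

```python
# Illustrative sketch of how embodied ("manufacturing") impacts are amortised
# in carbon accounting. All figures are hypothetical, for illustration only.

def annualised_footprint(embodied_kgco2, use_kgco2_per_year, lifespan_years):
    """Annual accounting footprint: the embodied impact spread over the
    equipment's lifespan, plus one year of use-phase emissions."""
    return embodied_kgco2 / lifespan_years + use_kgco2_per_year

# Hypothetical server: 1,200 kgCO2e embodied, 500 kgCO2e/year in use.
short_cycle = annualised_footprint(1200, 500, 4)  # replaced every 4 years
long_cycle = annualised_footprint(1200, 500, 8)   # kept twice as long

print(short_cycle, long_cycle)  # 800.0 vs 650.0 kgCO2e/year
```

The contrast shows why short replacement cycles matter: doubling the lifespan cuts the annualised embodied share in half, while the cumulative physical impact on extraction regions is unchanged.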
The end-of-life (EoL) phase of data centers is not taken into account in these analyses, even though it can produce a lot of waste. Decommissioning a data center means dismantling the entire facility: removing the electrical and cooling infrastructure, pulling up the floor tiles, and so on. In addition, the policy of redundancy and security requires risk-free equipment, so operators prefer to replace early rather than risk failure. For example, generators with a lifespan of 35 to 40 years may be decommissioned after 15 years, having hardly ever been used. The end of life of IT equipment is also an issue, given its relatively short lifetimes.
While water consumption is significant during the manufacturing phase of equipment, another parameter must be taken into account: electricity generation. Generating electricity, via a steam turbine for example, uses water that is evaporated and therefore skips a step in the water cycle. The method of generating electricity must therefore be considered when assessing the water footprint of data centers. According to David Mytton, the United States has a water intensity of 2.18 L/kWh; France’s is around 4 L/kWh, the difference being explained by the higher share of nuclear and hydroelectric power in France. In addition, water consumption for data center cooling varies widely depending on the water recovery systems in place. It is important to focus on withdrawals from the municipal network to get a clear picture of the water burden of cooling. In a 2016 paper, the Uptime Institute estimated that a 1 MW data center consumes 25.5 million litres of water per year (25,500 m3),8 around 70 m3 per day. In a 2019 report for ADEME (the French Agency for Ecological Transition), Cécile Diguet and Fanny Lopez found that, according to one US operator, a 15 MW water-cooled data center could consume up to 1.6 million litres of water per day (1,600 m3).9 As we will see later, the water requirements of new data centers pose real territorial problems.
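The two components discussed here, indirect water embedded in electricity generation and direct cooling water, can be combined in a simple sketch. The 2.18 L/kWh intensity and the 25,500 m3/year direct figure come from the sources quoted above; the PUE and facility size are illustrative assumptions.

```python
# Sketch combining direct cooling water with the indirect water embedded in
# electricity generation. Water intensity (L/kWh) and the direct figure are
# those cited in the text; the facility parameters are assumptions.

HOURS_PER_YEAR = 8760

def annual_water_m3(it_power_mw, pue, grid_l_per_kwh, direct_m3_per_year):
    electricity_kwh = it_power_mw * 1000 * pue * HOURS_PER_YEAR
    indirect_m3 = electricity_kwh * grid_l_per_kwh / 1000  # litres -> m3
    return direct_m3_per_year + indirect_m3

# 1 MW facility, assumed PUE of 1.5, on a US-like grid (2.18 L/kWh),
# plus the Uptime Institute's 25,500 m3/year of direct cooling water.
total = annual_water_m3(1, 1.5, 2.18, 25_500)
print(f"{total:,.0f} m3/year")
```

Under these assumptions the indirect water (around 28,600 m3/year) exceeds the direct cooling draw, which is why the generation mix cannot be left out of a data center’s water footprint.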
The focus on the electricity consumption of data centers hides a subtlety. Most of the American giants announce that their data centers are 100% powered by renewable energy, but there is a big distinction to be made about this “100% renewable”. There are two approaches to calculating the carbon footprint of electricity: location-based and market-based. Location-based refers to the carbon intensity of the electricity physically consumed by the data center. Market-based refers to the carbon intensity of the electricity after accounting for the difference between the electricity consumed and the electricity purchased via various financial mechanisms. Among these mechanisms are Renewable Energy Certificates (RECs), generally considered to be of low quality as they cannot fully prove the origin of the electricity. As a result, Power Purchase Agreements (PPAs) have become popular with the digital giants. A PPA can take several forms. It can be direct: a buyer commits over several years to buy from a producer the electricity physically supplied to its facilities. This is the type of PPA that should be favoured if claims of renewable energy supply are to be credible. But there are also “virtual” or “financial” PPAs, which are not linked to the physical delivery of electricity: a buyer commits to investing in the construction of a renewable energy plant (solar/wind) and to buying its future electricity at a certain rate once the plant is up and running. These virtual PPAs allow buyers to subtract carbon from their balance sheets before the plants are even built. Today, the GAFAMs are the world’s largest buyers of PPAs (direct or virtual), and globally the ICT sector accounted for more than half of PPAs from 2009 to 2019.
Google seems to be in the best position to obtain or bring about direct PPAs. However, the carbon-neutrality claims for most of its data centers rest on RECs or virtual PPAs, and this is visible in Google’s own data. The US giant advertises that its data center in Eemshaven in the Netherlands has been 100% powered by renewable energy since its opening in 2016. Yet Google’s electricity supply matrices clearly show that only 69% of the supply came from renewables; the remaining 31% is offset by RECs or virtual PPAs. Google’s headline claim is therefore not factually correct.
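A back-of-the-envelope sketch makes the two accounting approaches explicit, using the Eemshaven shares quoted above (69% physical renewables, 31% covered by certificates); the annual consumption and grid carbon intensity are hypothetical.

```python
# Sketch of location-based vs market-based carbon accounting.
# Shares echo the Eemshaven example; consumption and grid intensity are
# hypothetical illustration values.

kwh = 100_000_000   # hypothetical annual consumption
grid = 400          # gCO2/kWh, hypothetical grid intensity
physical_re = 0.69  # share physically supplied by renewables

# Location-based: the 31% that is not physically renewable carries the
# carbon intensity of the local grid.
loc_tonnes = kwh * (1 - physical_re) * grid / 1e6  # gCO2 -> tonnes

# Market-based: that same 31% is "covered" by RECs or virtual PPAs, so it
# is reported as zero-carbon even though the physical supply is unchanged.
cert_coverage = 1.0
mkt_tonnes = kwh * (1 - physical_re) * (1 - cert_coverage) * grid / 1e6

print(loc_tonnes, mkt_tonnes)  # thousands of tonnes vs 0.0 on paper
```

The gap between the two numbers is exactly the carbon that certificate-based claims make disappear from the reported balance sheet.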
This brief summary of what is not counted in global estimates of the data center footprint shows that many pieces of the puzzle are missing. The lack of open, shared data creates a myopia that will become increasingly unsustainable as hundreds of data centers are built each year. Talking to specialists in digital life-cycle analysis, we arrive at roughly the same hypothesis: the manufacturing phase (building + hardware) would represent around 12–15% of the total footprint of a data center (though this varies with the country’s energy mix). This means that DC operators can integrate as much renewable energy as they want into their power consumption; there will remain a substantial share of carbon, water and resource impacts related to construction and manufacturing that they will find difficult to compress. Energy used in a mine, in freight, in the supply and production chain is much less likely to be renewable. Similarly, it is essential to remember all the other environmental factors that are completely forgotten: biodiversity (!), fine particles, ecotoxicity, and so on. Finally, electricity consumption is increasing, renewable or not, yet one of the main objectives of the transition is to reduce our energy consumption and our material footprint. On the one hand, the highly concentrated electricity demand of data centers constrains the territories and cities in which they are installed. On the other hand, using renewables to decarbonise growing consumption drastically increases the material footprint of the whole sector (popular renewables such as wind and solar have a large material footprint). Like every low-carbon energy source, renewables are most relevant when paired with energy reduction, not the other way around; otherwise we merely decarbonise energy growth without changing the carbon basis of our energy systems.
As a preliminary conclusion, this myopia on electricity and carbon puts the data center sector in a good light. The day we account for the full GHG footprint, the water footprint and the material footprint, the transfer of impacts and pollution will become clearly visible. Again, other factors need to be integrated: where are biodiversity, fine particles, and ecotoxicity for humans and living ecosystems? Carbon is only part of the problem, and we cannot hide the transfer of impacts forever. This short-sightedness is all the more glaring given the pace of data center construction in the years to come.
As shown above, the global vision is today largely incomplete and may well continue to perpetuate a dematerialised vision of the digital sector. Faced with this observation, my research method has changed to start from the territory where my object of study is located. In doing so, I am much more interested in the material conditions of the sector, i.e. the material and flow prerequisites for an installation to be built, deployed and operated. I had already laid the groundwork for this reversal in the Sciences du design article I co-authored with Nicolas Nova, in which we proposed to study the digital through three concentric circles: materialisation, territorialisation and terrestrialisation. Today I am committed to pursuing this path through ongoing work on topics such as semiconductor manufacturing in Taiwan and water supply. The present analysis is also part of this line of research, especially as the case of data centers is exemplary for such approaches.
What factors influence the location of a data center? To summarise: privileged access to the electricity grid (source substation) and the internet fibre network (backbone); low natural risks (eruptions, floods, etc.); access to cheap land; a low or preferential electricity rate; financial incentives; and potentially access to a water network. When an operator decides to set up in city X because it meets these criteria, it will typically apply for planning permission, carry out the required prior studies (environment, industrial risks, etc.), file civil-works requests for cabling, a request to reserve electrical power and, if necessary, a request for a water supply, among many other steps. A data center takes on average two years to build, or to renovate from an existing building. The major data center operators (Equinix, Interxion, GAFAM, etc.) are generally well received by local authorities because they convey an image of progress and innovation and bring the local authority tax revenue, albeit few jobs.
In city X, even if the arrival of the data center operator is welcomed, a first difficulty may arise: the electrical power reserved from the provider. The electricity network is distributed through cities via substations managed by the distributor. In France, Enedis manages this distribution up to a certain threshold, while RTE handles very high voltages and large power demands. A substation has a given capacity (in kVA or MW) that must be divided between residential, tertiary, industrial, transport and other uses. An industrial site will reserve power corresponding to its peak production and possibly to future developments of its production chain. Data center operators do the same, but in orders of magnitude that are surprisingly large relative to their operations. With power reservation and consumption opaque, it is very hard to understand why operators reserve so much: either they want to occupy the field to ward off potential competitors for electricity, or it reflects very large future developments. Furthermore, data center operators reserve the same power at two different substations, in order to have a back-up line should one of the two source stations fail. When power reservations exhaust a substation’s available capacity, the electricity distributor informs the city council, sometimes leading to the construction of a new substation (costing several million euros, with around ten years of work before commissioning).
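The substation arithmetic described above can be sketched as a simple capacity budget; reserved power is committed whether or not it is ever drawn. All values below are hypothetical.

```python
# Toy sketch of a substation capacity budget. Reserved power counts against
# the substation even if never consumed. All values are hypothetical.

def remaining_capacity_mw(substation_mw, reservations_mw):
    return substation_mw - sum(reservations_mw)

substation = 120                  # MW, hypothetical source substation
reservations = [40, 25, 30]       # residential, industry, data center (MW)

left = remaining_capacity_mw(substation, reservations)
print(f"{left} MW remaining")
```

With 25 MW left in this toy budget, a further 30 MW reservation, of the order a single large data center might request, would already exceed the substation and trigger the kind of costly, decade-long reinforcement described above.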
These conflicts over electricity use have already caused many problems for local authorities in Plaine Commune in Seine-Saint-Denis, Marseille, Dublin, Frankfurt, Amsterdam, Stockholm and Helsinki, to name only a few European examples. In 2013, for example, the Marseille city council, having forgotten to reserve electrical power for a new tramway line, had to negotiate with an operator (Interxion) to free up 6 MW of reserved capacity. In Plaine Commune, Clément Marquet has thoroughly documented the local conflicts related to data centers, in particular those linked to power reservation, rising electricity consumption and its incompatibility with local energy transition plans.10 The deployment of new, ever larger data centers (hyperscalers) is challenging the power distribution schemes of most host cities. The three European hubs, Dublin, Frankfurt and Amsterdam, have in recent years each ordered a year-long halt to new deployments while rethinking their power supply management. In 2020, data centers accounted for 11% of Ireland’s electricity consumption and, according to national grid operator EirGrid, this will rise to 27% by 2029 at current rates. EirGrid also raises the point that data centers could crowd out renewable energy capacity on the grid, slowing the country’s energy transition. Moratoriums are piling up for much the same reasons in the Netherlands and Singapore. From a territorial perspective, we can see how concentrated the electricity consumption of data centers can be and what real planning and development problems it can pose for cities. Every municipal team should therefore be particularly attentive when approached by a data center operator.
Data center operators who rely on water cooling must draw water either from the municipal water system or directly from groundwater if they have their own pumping station. How much water does a data center need to draw? As mentioned earlier, this depends on the location of the center and the climate. An average-sized data center in California can consume up to 1,600 m3 per day. Some of this water is evaporated and therefore does not return to the local cycle as it normally would, so it is said to be consumed. However, different types of water can be used (potable, non-potable, wastewater) and, most importantly, water can be recycled and used several times in the cycle, a process referred to as reclamation, recycling or conservation.
To fully understand actual water withdrawals, one must look at the water supply requests made to the city. For example, in Red Oak, Texas, Google asked for 5.5 billion litres of water per year for one of its data centers, almost 10% of the county’s annual consumption. In Mesa, Arizona, a giant data center project, supposedly for Facebook, made an initial request for 6.4 million litres of water per day, or 681 million litres per year, rising to 1.9 billion per year in the third phase of its development. Many data centers are already present in the same city, and Google is currently building a center there that will use 3.8 to 15 million litres of water per day. Most of this water comes from the Colorado River, whose level is falling and whose resources have been over-allocated, according to a report by the University of Arizona. However, the possible unsustainability of the new data center project was outweighed by an $800 million investment with various financial benefits for the community, and the construction project was approved 6-1 by the city council. Mesa’s water manager reported that the city consumed 105 billion litres of water in 2019 and estimates that it could reach (and support) 227 billion by 2040. This estimate seems quite optimistic given the state’s water supply, especially with the shortage at Lake Mead.
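To put the Mesa figures in perspective, a quick conversion turns the daily draws quoted for the Google site (3.8 to 15 million litres per day) into a share of the city’s reported 2019 consumption (105 billion litres).

```python
# Converting a data center's daily water draw into an annual share of city
# consumption, using the Mesa figures quoted in the text.

def annual_share(litres_per_day, city_litres_per_year):
    """Fraction of the city's annual consumption represented by a
    facility's daily draw, assuming year-round operation."""
    return litres_per_day * 365 / city_litres_per_year

CITY_2019 = 105e9  # litres consumed by Mesa in 2019
low = annual_share(3.8e6, CITY_2019)
high = annual_share(15e6, CITY_2019)
print(f"{low:.1%} to {high:.1%} of Mesa's 2019 water consumption")
```

A single facility thus represents on the order of 1% to 5% of the whole city’s consumption, which clarifies why such requests weigh on municipal water planning.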
One question remains: why put data centers in desert areas, and more generally in water-stressed areas? It seems that heat is easier to dissipate in a dry climate; deserts are also little exposed to natural hazards (earthquakes, etc.), offer abundant solar energy, and provide access to low-cost electricity and water, especially in the US states mentioned. However, it is hard to understand the long-term strategy of such facilities in the face of possible water supply difficulties, rising electricity or water prices, or simply rapidly increasing water stress, as during the droughts currently affecting the American West.
It is impossible for me to list one by one all the environmental factors to be taken into account at the territorial level, so I refer the reader to the excellent report by Cécile Diguet and Fanny Lopez for ADEME, “The spatial and energy impact of data centers on territories”, and to Lopez’s recent note “Data centers: Anticipating and planning digital storage”.11 We could mention the land take of hyperscalers, which exceed 10,000 m2. In Frankfurt alone, data centers occupy 640,000 m2 and a further 270,000 m2 are planned. It is this kind of aggressive development that prompted Frankfurt to introduce a moratorium to regulate the sector (on top of power consumption and the lack of interest in recovering waste heat). Digital infrastructure, especially that of the GAFAMs and giant operators, needs to be planned and regulated by cities, but has until now enjoyed many free passes. A data center operator will not always be able to get the land, electricity and water it wants; it will have to adapt to territorial constraints it would rather not deal with. For this to happen, local authorities must be prepared and equipped. An obligation to open up and centralise data on data centers’ land, electricity and water consumption would be a major step towards planning and regulating any new development. In this respect, it seems to me that the territorial approach would make it possible, if the data are available, to modify the exceptional regime the sector has enjoyed until now.
At the environmental level, the territorial approach makes it possible to move beyond the mystique of relative efficiency metrics and to set consumption, in absolute terms, against a local stock and a specific environment. One of the weak points of data center development is that data centers cannot be completely delocalised: they must be close to their “audience”. Their environmental impacts therefore remain visible to end users and can be addressed or mitigated directly, whether through citizen mobilisation or through legal action by the local authorities or the state concerned. Finally, the territorial approach shows that, beyond the rhetoric of CSR reports, digital infrastructure can collide head-on with the ecological transition objectives of the cities where it is deployed. If this is a zero-sum game, then the relative environmental (and above all accounting) embellishment of the major operators is paid for by the degradation of local transition plans.
We will have to keep following developments at the global level as best we can, but I do not think this approach can continue to stand alone without becoming potentially counterproductive. The territorial approach is already supported by many actors and should be methodologically strengthened to become a standard in the reporting of environmental data in the digital sector. It seems to me that we cannot make progress on these environmental issues without integrating geography, cartography, the humanities and social sciences, and local field studies; in short, without bringing in qualitative data and constrained perimeters. The next piece of the methodological puzzle will be connecting the global and territorial visions of a sector that is both concentrated and diffuse. Many years of research will be needed to make progress on this issue.
In 2020, the cloud giants invested $150 billion, half of it to build new data centers. The biggest investors are, in order, Amazon, Microsoft, Google, Facebook, Apple, Alibaba and Tencent. There were 541 hyperscale data centers in the world in June 2020; Synergy Research Group now estimates there are 625. How are all these new centers set up? What was the impact of their construction and of manufacturing their equipment? What are their local demands for electricity and water, and what usage conflicts do they entail? Are these developments compatible with a +2°C world? And what about telecommunication networks and user equipment? We have only touched on the environmental issue of one of the three “technical thirds” here, and as you can see, it is far from resolved.
As the material conditions for building and running this infrastructure become unsustainable, the GAFAMs and other giants will surely develop their own energy and water infrastructure; some are already busy designing their own IT equipment. This verticalisation will have the great flaw of making the real consumption of these infrastructures invisible. Today we can still retrieve some data from water and energy providers, but when Amazon builds its own substations, as in Santa Clara, or Google its own pumping stations, the black box will only grow. We will then understand less and less of the real environmental footprint of these giants at both global and territorial levels. This opacification will be reinforced by aggregated data publication and by accounting methods that already make it possible to hide part of the footprint and its many impact transfers.
Masanet et al., “Recalibrating global data center energy-use estimates”, Science, 28 February 2020; IEA, “Data Centres and Data Transmission Networks”, June 2020.
Montevecchi et al., “Energy-efficient Cloud Computing Technologies and Policies for an Eco-friendly Cloud Market – Final Study Report”, European Commission, 2020.
Greenpeace East Asia, “Powering the Cloud: How China’s Internet Industry Can Shift to Renewable Energy”, Greenpeace East Asia and the North China Electric Power University, 2019.
Shehabi et al., “United States data center energy usage report”, Lawrence Berkeley National Laboratory, 2016.
Shah et al., “Sources of Variability in Data Center Lifecycle Assessment”, 2012 IEEE International Symposium on Sustainable Systems and Technology, 2012.
Gupta et al., “Chasing Carbon: The Elusive Environmental Footprint of Computing”, IEEE International Symposium on High-Performance Computer Architecture, 2021.
Whitehead et al., “Assessing the environmental impact of data centres part 1: Background, energy use and metrics”, Building and Environment 82, 2014; Whitehead et al., “Assessing the environmental impact of data centres part 2: Building environmental assessment methods and life cycle assessment”, Building and Environment 93, 2015.
Heslin, “Ignore Data Center Water Consumption at Your Own Peril”, Uptime Institute, 2016.
Diguet and Lopez, “L’impact spatial et énergétique des data centers sur les territoires”, ADEME, 2019.
Marquet, “Binaire Béton : quand les infrastructures numériques aménagent la ville”, 2019.
Lopez et al., “Data centers: Anticipating and planning digital storage”, Institut Paris Région, 2021.