Cooling Choices That Lock You Into a Data Center Site for 10 Years

By Brian Bakerman

Designing a data center means making decisions that will echo for years. One of the most critical choices is how to keep all those servers cool. Your cooling strategy isn’t just an operational detail – it can determine where you build, how you build, and how flexible (or inflexible) your facility will be for the next decade. In fact, cooling infrastructure often consumes up to 40% of a data center’s energy (www.globaldatacenterhub.com) and comes with a service life of around ten years (www.sourceups.co.uk). That means the cooling choices you make today will lock you into certain site requirements and costs for at least the next ten years. This long-term impact is why architects, engineers, and BIM managers need to scrutinize cooling options early in the design process. In this post, we’ll explore common data center cooling methods, how they can tie you to a location or design, and ways to plan for flexibility. We’ll also see how integrated design platforms (like what ArchiLabs offers) can help navigate these high-stakes decisions by unifying data and automating planning across your entire tech stack.

The 10-Year Commitment of Data Center Cooling

Cooling is not a simple plug-and-play component; it’s a structural investment. Most manufacturers peg the service life of major cooling equipment (chillers, CRAH units, cooling towers, etc.) at about ten years (www.sourceups.co.uk). Once you commit to a cooling system, you’re essentially wedded to it for a decade – both financially and physically. The facility is built around it, and ripping it out for a different solution a few years later would be prohibitively expensive and disruptive. Over that time, the cooling plant has to reliably protect your IT equipment from overheating. If it falters, you're looking at rising server inlet temperatures, potential downtime, and mounting energy bills as efficiency drops. This is why many operators follow a “ten-year rule” of planning upgrades or replacements on that timeline to avoid diminishing returns in performance. The key point is that a cooling decision isn’t easily reversed; it’s a long-haul commitment. Choose the wrong system for your site or needs, and you could be stuck battling its limitations for years.

Beyond lifespan, cooling systems are deeply entwined with a site’s infrastructure. They involve heavy machinery, extensive piping or ductwork, and often external structures like cooling towers or condensers. For example, if you install large water-cooled chillers and a cooling tower yard, your data center is literally built around a water-based design. Switching later to an all-air or an immersion cooling setup would require major construction (if it’s even feasible within the building). This physical tie-in means your initial cooling choice can effectively anchor you to a specific site configuration and environment. It’s not like swapping out servers or upgrading software – cooling is baked into the building. So, from the outset, you need to ensure the chosen method aligns with both current requirements and future expectations for that location.

Site Selection and Cooling Strategy Go Hand-in-Hand

Every data center site has unique attributes – climate, water availability, power costs, and more – that influence which cooling options make sense. In fact, the local climate alone can dictate your cooling approach and long-term viability. In cooler climates, data center owners can leverage free air cooling (air-side economization) to reduce reliance on chillers (phoenixnap.com). Chilling with outside air, especially in places with long cold seasons, slashes energy usage and can yield industry-leading PUE ratios. Many hyperscale facilities in Scandinavia and the Pacific Northwest, for instance, take advantage of naturally cold, dry air to keep servers cool with minimal mechanical intervention. The flip side is that hot or humid climates severely limit free cooling potential. In a tropical or desert location, you’ll likely have to fall back on energy-intensive refrigeration or even explore liquid cooling to handle the heat (phoenixnap.com). In other words, your geography can lock you into certain cooling needs: a Phoenix data center can’t rely on frigid winter air, and a Helsinki data center doesn’t need massive chillers running year-round.
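To make that concrete, here’s a minimal sketch of how you might compare candidate sites by counting the hours per year that outside air alone could do the job. The 24°C dry-bulb and 15°C dew-point thresholds are illustrative assumptions (a real study would use full psychrometrics and your IT gear’s allowable envelope), and the toy temperature series stand in for real hourly weather data:

```python
# Minimal sketch: estimate how many hours per year a site could run on
# air-side economization, given hourly dry-bulb and dew-point readings
# (e.g. from a TMY weather file). Thresholds are simplifying assumptions,
# not an ASHRAE envelope.

def free_cooling_hours(hourly_temps_c, hourly_dewpoints_c,
                       max_drybulb_c=24.0, max_dewpoint_c=15.0):
    """Count hours where outside air alone could cool the data hall."""
    usable = 0
    for t, dp in zip(hourly_temps_c, hourly_dewpoints_c):
        if t <= max_drybulb_c and dp <= max_dewpoint_c:
            usable += 1
    return usable

# Toy comparison of two candidate sites (in practice: 8760 real readings each).
helsinki = free_cooling_hours([5.0] * 8760, [2.0] * 8760)    # cold, dry year
phoenix  = free_cooling_hours([30.0] * 8760, [10.0] * 8760)  # hot year
print(f"Helsinki-like site: {helsinki} economizer hours/yr")
print(f"Phoenix-like site:  {phoenix} economizer hours/yr")
```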

Even when free cooling is on the table, it comes with design commitments. Using outside air isn’t as simple as opening a window – you need large air-handling units, controllable louvers, and robust filtration and humidity control. Outside air brings dust, pollutants, and unregulated humidity that can wreak havoc on servers. Air-side economizer systems require extensive filters and sometimes even supplemental humidifiers to keep conditions in the safe range (quizgecko.com). Those additional systems add cost and complexity. Some experts note that what you spend on filtration and humidity control might end up rivaling the cost of just running traditional chilled cooling in the first place (quizgecko.com). Plus, designing for an economizer means your building layout and mechanical plant are configured around large airflow pathways. If down the road you decided to revert to closed-loop cooling (say, because outdoor pollution got worse or you needed more precise control), you’d be redesigning major chunks of the facility. Essentially, embracing free cooling makes you commit to that path – you’re betting on the local air quality and climate remaining favorable for the life of the data center.

Water availability is another site-dependent shackle introduced by certain cooling choices. Many data centers use water in their cooling systems – from cooling towers that evaporate water to reject heat, to direct liquid cooling loops that circulate water to server racks. The efficiency gains can be big, but so is the thirst. The average mid-size data center uses roughly 1.4 million liters of water every day for cooling needs (news.tuoitre.vn). If your cooling plan leans on evaporative cooling or chilled water loops, you’re locking yourself into a location with a sufficient water supply (and into whatever it costs to use that water). In water-scarce regions, this can be a ticking time bomb. A data center built in a semi-arid area might secure water rights or supplies now, but what happens in five or ten years if a drought hits or municipalities start clamping down on large water consumers? We’re already seeing concern over data centers’ “insatiable thirst” for water as the industry expands (news.tuoitre.vn). If your facility depends on thousands of cubic meters of water a day, you are inherently bound to that resource and vulnerable to its scarcity. Upgrading to a water-free cooling method later (like switching to dry coolers or refrigerant-based systems) would be a major retrofit, essentially a redesign of the cooling architecture. In short, choosing a water-reliant cooling design ties your fortunes to the local water supply and environmental policy for the next decade.
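If you want a rough sense of what that commitment looks like for your own load, a back-of-envelope estimate is easy to sketch. The 1.8 liters of water per kWh of heat rejected used below is a commonly quoted rule of thumb for evaporative heat rejection, not a design figure – actual consumption varies with climate, cycles of concentration, and how many hours you run on economizers:

```python
# Rough sketch: daily water draw for evaporative heat rejection at a given
# IT load. The 1.8 L per kWh of heat rejected covers evaporation plus
# blowdown and is a rule of thumb, not a design number.

LITERS_PER_KWH_REJECTED = 1.8   # assumption
COOLING_OVERHEAD = 1.1          # heat added by fans, pumps, UPS losses, etc.

def daily_water_use_liters(it_load_kw):
    heat_rejected_kwh_per_day = it_load_kw * COOLING_OVERHEAD * 24
    return heat_rejected_kwh_per_day * LITERS_PER_KWH_REJECTED

# At these assumptions a ~30 MW IT load works out to roughly 1.4 million
# liters/day -- the same order of magnitude as the figure cited above.
print(f"{daily_water_use_liters(30_000):,.0f} liters/day")
```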

Key Cooling Options and Their Long-Term Implications

Let’s break down some of the common data center cooling methods and how each can lock you into certain design and site parameters. Each approach has pros and cons – and each carries different baggage for your facility’s future flexibility.

Traditional Air Cooling (Chilled Air and CRAC Units)

Most legacy data centers use air-based cooling with CRAC/CRAH units and chillers. Cold air is circulated through server racks (often in a hot aisle/cold aisle containment setup), absorbs heat, and then is cooled again by chilled water coils or DX (direct expansion) refrigerant in Computer Room AC units. This approach is tried-and-true, and modern optimizations (like rear-door cooling units or close-coupled cooling) have improved efficiency. However, pure air cooling starts to struggle as rack densities climb. The equipment footprint is huge: you need room for chiller plants, pump rooms, and possibly cooling towers if you use water-cooled chillers. That’s a massive capital investment anchored to your building. Once you’ve poured the concrete for a chiller yard and routed miles of chilled water piping through the floors, you’re not going to rip that out on a whim. The site and building are effectively custom-built for that cooling topology.
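The physics explains why the footprint grows so fast. A quick worked example using the sensible-heat equation shows how much air a rack needs as density climbs (the air properties and the 11°C server temperature rise below are typical textbook assumptions, not vendor data):

```python
# Back-of-envelope: how much air a rack needs, from Q = m_dot * cp * delta_T.
# Air density, specific heat, and the 11 C delta-T are typical assumptions.

RHO_AIR = 1.2       # kg/m^3 at ~20 C
CP_AIR = 1005.0     # J/(kg*K)
M3S_TO_CFM = 2118.88

def airflow_cfm(rack_kw, delta_t_c=11.0):
    """Volumetric airflow needed to carry rack_kw of heat at a given delta-T."""
    m3_per_s = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * M3S_TO_CFM

for kw in (5, 15, 40):
    print(f"{kw:>3} kW rack -> ~{airflow_cfm(kw):,.0f} CFM")
# Roughly 160 CFM per kW: a 40 kW rack needs ~6,400 CFM, which is where
# containment, fan energy, and floor space start to strain pure air cooling.
```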

Another implication is energy usage. As mentioned, cooling eats a large chunk of the facility’s power (www.globaldatacenterhub.com), and traditional air cooling, while reliable, can be less energy-efficient in hot climates or at scale. If down the line your company’s sustainability goals tighten (say you need a lower PUE or carbon footprint), you might feel stuck with an inherently power-hungry system. Retrofitting an existing data hall designed for air cooling to support liquid cooling later is a major ordeal – one that might involve raised floor changes, structural modifications for heavy liquid distribution units, and new coolant leak mitigation measures. Thus, a conventional air-cooled design is something you double down on: it’s safest in terms of well-understood tech, but it locks you into a certain efficiency band and layout. Notably, industry trends suggest conventional air cooling may not cut it in the near future for high-performance computing needs – some are even calling high-density air cooling obsolete as servers get hotter (www.itpro.com). If your data center might need to host ultra-dense racks (think AI training clusters), a pure air system could become a liability well before that 10-year mark.
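To see what that efficiency band means in dollars, here’s an illustrative comparison of two locked-in PUE levels over the cooling plant’s ten-year life. The PUE values and the $0.08/kWh tariff are assumptions for the example, not benchmarks for any particular cooling technology:

```python
# Illustrative only: what a locked-in efficiency band costs over ten years.
# PUE values and the tariff are example assumptions.

def annual_facility_kwh(it_load_kw, pue):
    return it_load_kw * pue * 8760  # hours per year

IT_LOAD_KW = 5_000
PRICE_PER_KWH = 0.08

for label, pue in [("air-cooled, warm climate", 1.6),
                   ("economizer-heavy design", 1.2)]:
    kwh = annual_facility_kwh(IT_LOAD_KW, pue)
    print(f"{label:28s} PUE {pue}: {kwh:,.0f} kWh/yr, "
          f"${kwh * PRICE_PER_KWH:,.0f}/yr")
# The ~$1.4M/yr gap between these two scenarios compounds to roughly
# $14 million over a ten-year cooling lifespan at these assumptions.
```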

Free Cooling (Using Outside Air or Water)

As discussed, free cooling leverages the environment – using outside air or cold water sources – to dissipate heat with minimal mechanical work. Air-side economizers draw in outside air when conditions are cool enough, while water-side economizers use sources like adjacent lakes or cooling ponds to chill water. Free cooling can dramatically improve efficiency and reduce operating costs. For instance, a well-designed air economizer can let you turn off chillers for a good portion of the year in temperate climates. This is great for energy savings and can even become a selling point (a “green data center” angle). But the commitment is that you’ve now tied your cooling uptime to external conditions. If wildfire smoke fills the outside air one summer, or an unusual heatwave pushes temperatures beyond the norm, your facility might have to fall back on backup chillers or risk insufficient cooling. Designing for free cooling means betting on Mother Nature consistently supplying the right conditions.

There’s also maintenance and resiliency to consider. Systems that use outside air need regular filter replacements and careful monitoring of humidity and contaminants. If your data hall relies on, say, Arctic air for half the year, any climate shift or environmental event (smoke, dust storm, industrial pollution spike) can force a scramble to maintain safe temperatures. Many operators mitigate this with hybrid systems – essentially designing economizers with full chiller backups – but that increases initial cost and complexity, since you’re effectively building two cooling systems (one “free” and one traditional). Economizer designs lock you into a climate-dependent model, and while you can engineer backups, you’re still constrained by the local environment’s cooperation over the facility’s life. The savings are real, but they come with the acceptance of some climatic risk and a building uniquely configured for large airflow management.
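A hybrid plant ultimately comes down to mode-selection logic like the simplified sketch below – run free when the weather and air quality allow, trim or fall back to chillers when they don’t. The thresholds here are placeholders, not a real control specification:

```python
# Simplified sketch of hybrid-plant mode selection. Temperature, dew-point,
# and air-quality cutoffs are placeholders for illustration only.

def select_cooling_mode(outdoor_c, dewpoint_c, aqi,
                        econ_max_c=22.0, econ_max_dewpoint_c=15.0, aqi_max=100):
    if aqi > aqi_max:
        return "mechanical"          # smoke/dust: close dampers, run chillers
    if outdoor_c <= econ_max_c and dewpoint_c <= econ_max_dewpoint_c:
        return "economizer"          # full free cooling
    if outdoor_c <= econ_max_c + 6:
        return "partial-economizer"  # mix outside air with trim cooling
    return "mechanical"

print(select_cooling_mode(8.0, 3.0, aqi=40))    # economizer
print(select_cooling_mode(25.0, 14.0, aqi=40))  # partial-economizer
print(select_cooling_mode(12.0, 5.0, aqi=180))  # mechanical (wildfire smoke)
```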

Water Cooling and Evaporative Systems

Water is extremely effective at carrying heat, so it’s no surprise that many high-capacity data centers use water in their cooling loop. This might be via chilled water CRAH units (with cooling towers outside), or newer adiabatic and evaporative cooling units that pre-cool air using water evaporation. Some facilities even draw directly from nearby rivers or sea water for cooling (for example, a famous data center in Finland uses seawater from the Gulf of Finland to cool servers). The efficiency gains can be huge – evaporative cooling can reduce electricity use by leveraging water’s cooling potential. But, as highlighted earlier, the tradeoff is massive water consumption. Using hundreds of thousands of gallons (or millions of liters) of water daily (news.tuoitre.vn) not only ties you to a robust water source, it also opens you up to environmental scrutiny. Drought-prone regions today are increasingly wary of new data centers for this very reason. A design that seemed fine in 2015 might face community pushback by 2025 if water becomes scarcer or more expensive.

From a lock-in perspective, going with water-centric cooling pins your facility to both a resource and a design. You’ll have pump rooms, treatment systems (to manage water quality and prevent scale in your heat exchangers), and likely chemical handling for water treatment – all permanently installed. If water prices spike or regulations force lower consumption, you might have to retrofit with dry cooler add-ons or operate in a degraded mode. Some operators build in contingency by designing cooling towers for closed-loop mode as well (using them as giant radiators when water use is restricted), but again, that adds cost upfront. Additionally, large evaporative systems often require specific permits and environmental safeguards – your permission to operate at that site may hinge on conditions around water use and discharge. Changing course later could mean new permits or even relocating, which is unthinkable for a running data center. In short, water-heavy cooling designs are efficient and powerful, but they bind you tightly to local water infrastructure and policies. Over a 10-year horizon, that’s a risk that must be weighed carefully against the efficiency rewards.

Liquid Cooling at the Rack (Direct-to-Chip)

Direct-to-chip liquid cooling involves pumping coolant (often water with additives or a dielectric liquid) directly to cold plates or heat sinks on the servers. Instead of blowing air over hot components, you run liquid lines to absorb heat right at the source. This method has been used in supercomputers and HPC clusters for years, but it’s now gaining traction in mainstream data centers as server densities hit new highs. Companies like Microsoft and DataBank have started designing systems that bring cooling directly to the chips (www.itpro.com) because modern CPUs and GPUs are pushing past what air can do. The benefit is clear: liquid can remove heat far more effectively, enabling racks that dissipate tens of kW (or even 100+ kW) each without overheating. For context, at NVIDIA’s GTC conference in 2025, the company unveiled plans for data center racks supporting an astounding 600 kW of IT load by 2027 (www.globaldatacenterhub.com) – a figure that simply cannot be achieved with air cooling. That underscores why direct liquid cooling is moving from niche to necessity in cutting-edge facilities.

However, adopting direct-to-chip liquid cooling is a big design paradigm shift. It means integrating liquid distribution throughout your data hall – you’ll have manifolds, piping to each rack, leak detection systems, and maintenance procedures for liquid-cooled servers. If you didn’t plan for it from the start, retrofitting an existing air-cooled data center to support liquid at the rack is a massive undertaking. It’s not just swapping out server coolers; you need to add industrial plumbing in a space that wasn't built for it. Floors might need reinforcement (liquid-cooled racks and cooling distribution units are heavier), and redundancy plans need rethinking (pumps fail differently than fans). So, deciding on day one to use direct liquid cooling effectively locks in a certain class of facility design – one that is advanced and prepared for extreme densities, but also one that can’t easily revert to traditional methods. You’d likely double down on liquid cooling more and more as you grow. The upside is you get a future-proof cooling capability for high-performance gear; the downside is you’ve married the facility to that methodology. If in five years a new cooling tech appears (say, some exotic refrigerant or photonic cooling), integrating it would be as challenging as any big infrastructure change. Thus, direct liquid cooling is a forward-looking choice that trades some flexibility for raw performance handling.
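A quick worked example shows why the plumbing is worth it for dense racks: water carries roughly 3,500 times more heat per unit volume than air, so flow rates stay manageable even at extreme densities. The rack sizes and the 10°C loop temperature rise below are illustrative assumptions:

```python
# Why liquid wins: water's volumetric heat capacity is ~3,500x that of air.
# Properties are textbook values; rack sizes and delta-T are illustrative.

CP_WATER = 4186.0    # J/(kg*K)
RHO_WATER = 998.0    # kg/m^3

def coolant_flow_lpm(rack_kw, delta_t_c=10.0):
    """Liters per minute of water needed to absorb rack_kw at a given delta-T."""
    kg_per_s = (rack_kw * 1000.0) / (CP_WATER * delta_t_c)
    return kg_per_s / RHO_WATER * 1000.0 * 60.0

for kw in (40, 100, 600):
    print(f"{kw:>4} kW rack -> ~{coolant_flow_lpm(kw):,.0f} L/min of coolant")
# A 100 kW rack needs only ~140 L/min of water, versus roughly 17,500 CFM of
# air at the same delta-T -- but every one of those liters needs piping,
# pumps, and leak detection designed in from day one.
```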

Immersion Cooling

Immersion cooling takes liquid cooling to the next level by dunking entire servers or boards in baths of non-conductive fluid. The fluid (often a specialized dielectric oil or engineered fluid) absorbs heat directly from components, and heat exchangers then transfer that heat out of the bath. Immersion can achieve spectacular cooling efficiency and can handle extremely high power densities – it’s a favorite in some crypto-mining operations and cutting-edge HPC labs. With the rise of AI workloads, many in the industry believe immersion cooling is approaching a tipping point into mainstream adoption (www.datacenterfrontier.com). It’s no longer seen as an exotic niche for supercomputers; even some cloud data centers are experimenting with immersion for AI training pods. The promise is huge: better thermal management, quieter operation (no server fans), and potentially lower failure rates since components aren’t exposed to air or dust at all.

But if you thought direct liquid cooling requires design changes, immersion is an even bigger architectural commitment. An immersion-cooled data center might look radically different from a traditional one – instead of standard 19-inch rack rows, you have tank enclosures. The facility needs cranes or pulley systems to lift heavy server trays in and out of fluid. You need plumbing to each tank to cycle coolant to external dry coolers or water chillers. The floor plan might require a lower density of tanks to preserve human access, plus specialized fire suppression (since some fluids may be flammable, albeit with high flash points). Converting an air-cooled white space into an immersion setup later is more or less a full rebuild of that space. So choosing immersion cooling up front essentially locks your data center into a very specialized mode of operation. You’re also locking into specific IT hardware choices – not every server model is certified or warrantied for immersion, so you might constrain your hardware supply chain. On a 10-year timeline, that’s a bet that the benefits (and vendor ecosystem for immersion-rated hardware) will only grow, and that you won’t need to revert to traditional racks. Many believe that for ultra-dense computing, immersion is the future, but it certainly anchors your facility design around that concept from day one.

Planning for Change: How to Avoid Getting “Stuck”

Given how each cooling approach carries long-term baggage, what can data center designers do to maintain some flexibility? The goal should be to plan for change even if you commit to a particular strategy now. One key tactic is designing in modularity and headroom. For example, if you build an air-cooled data center today, consider allocating space and structural support for a liquid cooling loop in the future. Some operators install dry coolers or extra conduit space on the roof “just in case,” so that if higher density cooling is needed later, the hook-ups are there. Similarly, designing a generous overhead cable tray and piping infrastructure can make it easier to introduce new cooling distribution or sensors later. The idea is to avoid painting yourself into a corner. If you’re using economizers, ensure the building can still be sealed and cooled mechanically on tough days (even if that means slightly oversizing a backup chiller plant). If you’re going all-in on liquid cooling, you might still keep one hall air-capable for lower density or legacy gear, providing a hybrid option.

Another strategy is incremental deployment. Instead of equipping the entire data center with a bleeding-edge cooling tech on day one, some teams build out one module or section as a pilot. For instance, you could have one data hall with direct-to-chip liquid cooling and the rest air-cooled, and monitor performance for a generation of hardware before expanding it. This modular approach means if something isn’t working out (technically or economically), you haven’t bet the whole farm on it. It also allows partial modernization down the road: you could convert halls one by one to a new cooling system during refresh cycles, rather than a full-facility retrofit all at once. The trade-off is complexity – supporting two cooling paradigms in one site has its own challenges – but it can be a smart way to de-risk long commitments.

Crucially, comprehensive planning tools can help you simulate and foresee the impact of cooling choices. This is where advanced BIM (Building Information Modeling) and data center infrastructure management come into play. Modern data center design is a cross-disciplinary exercise – architects need to coordinate with mechanical engineers, IT hardware teams need to weigh in, sustainability officers care about water and power usage, and financial analysts project ROI over years. Bringing all that data together is tough if you’re just using siloed tools like separate Excel sheets, CAD drawings, and standalone thermal models. Enter platforms like ArchiLabs, which is building an AI-driven operating system for data center design that connects your entire tech stack into a single source of truth. With a solution like this, BIM managers, architects, and engineers can synchronize data across Excel cost models, DCIM capacity systems, Revit and other CAD drawings, CFD thermal analysis tools, databases, and even custom software scripts – all in one unified environment. By having every facet of the design connected, you can more easily run “what-if” scenarios for different cooling choices. For example, you could tweak a cooling design parameter (like switching from air cooling to liquid loops in one zone) and immediately see implications on floor plans, BOM costs, power draw, and even get automated feedback if certain thresholds or design rules are violated.
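As a toy illustration of what a single source of truth enables (this is not the ArchiLabs API – the fields and figures below are made-up placeholders), consider how a what-if comparison reads when each cooling scenario lives in one structured record instead of scattered spreadsheets:

```python
# Illustrative only -- not the ArchiLabs API. A toy record for a cooling
# scenario so what-if comparisons can run in one place. All figures are
# made-up placeholders.

from dataclasses import dataclass

@dataclass
class CoolingScenario:
    name: str
    design_pue: float
    water_liters_per_day: int
    max_rack_kw: float
    capex_usd: int

    def ten_year_energy_cost(self, it_load_kw, price_per_kwh=0.08):
        return it_load_kw * self.design_pue * 8760 * 10 * price_per_kwh

scenarios = [
    CoolingScenario("Chilled water + towers", 1.35, 900_000, 30, 42_000_000),
    CoolingScenario("Air economizer + DX trim", 1.20, 5_000, 20, 38_000_000),
    CoolingScenario("Direct-to-chip liquid", 1.15, 50_000, 120, 55_000_000),
]

IT_LOAD_KW = 8_000
for s in sorted(scenarios, key=lambda s: s.ten_year_energy_cost(IT_LOAD_KW)):
    print(f"{s.name:26s} PUE {s.design_pue}  "
          f"water {s.water_liters_per_day:>9,} L/day  "
          f"max rack {s.max_rack_kw:>3.0f} kW  "
          f"10-yr energy ${s.ten_year_energy_cost(IT_LOAD_KW):,.0f}")
```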

Using Integrated Platforms to Make Better Cooling Decisions

The advantage of a cross-stack platform like ArchiLabs lies in automation and consistency. Once all your systems talk to each other, you can let the AI and automation handle repetitive and complex tasks that would normally take weeks of coordination. Imagine adjusting your layout to accommodate a new row of liquid-cooled racks: traditionally, you’d have to manually update the CAD drawings, recalculate cooling loads in an analysis tool, adjust the power distribution in a DCIM, and ensure all those changes are reflected in spreadsheets and equipment databases. With an integrated approach, a lot of that grunt work can be automatically synchronized. ArchiLabs, for instance, enables teams to set up custom agents that learn your workflows end-to-end – whether it’s reading and writing data to CAD software like Revit, working with open BIM formats like IFC, pulling information from external databases or APIs, or pushing updates to other enterprise systems. This means when you decide on a new cooling distribution layout, an agent could trigger updating the 3D model, regenerate cable pathways that avoid the new cooling pipes, recalculate the weight load on the floor slab, and update the equipment inventory – all without manual intervention.
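As a purely hypothetical sketch of that kind of fan-out, the snippet below strings the steps together in order; every function is a stand-in you would replace with a real integration, not an actual Revit, IFC, or ArchiLabs call:

```python
# Hypothetical sketch of the multi-step sync described above. Every function
# is a placeholder for a real integration (BIM update, DCIM write, structural
# check); none of these are actual Revit, IFC, or ArchiLabs calls.

from dataclasses import dataclass, field

@dataclass
class LayoutChange:
    zone: str
    added_weight_kg: float
    new_equipment: list = field(default_factory=list)

# Placeholder integrations -- each would call out to a real system.
def update_bim_model(change):      return f"model updated for {change.zone}"
def regenerate_cable_trays(zone):  return f"trays rerouted around liquid piping in {zone}"
def verify_slab_load(weight_kg):   return weight_kg < 12_000  # placeholder limit
def update_inventory(items):       return f"{len(items)} items added to inventory"

def on_cooling_layout_change(change):
    """Fan one design change out to every downstream system, in order."""
    return {
        "bim":        update_bim_model(change),
        "cabling":    regenerate_cable_trays(change.zone),
        "structural": verify_slab_load(change.added_weight_kg),
        "inventory":  update_inventory(change.new_equipment),
    }

change = LayoutChange("Hall B, row 4", added_weight_kg=9_500,
                      new_equipment=["CDU-01", "CDU-02", "manifold kit"])
print(on_cooling_layout_change(change))
```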

Such automation isn’t just about convenience; it’s about making it feasible to iterate and adapt your design rapidly. In the context of cooling, that agility is gold. If a new cooling technology emerges or if your IT load forecast changes dramatically (imagine suddenly needing to host an AI cluster that doubles your watt/sqft density), an integrated system can help re-plan the data center quickly. BIM managers can teach the AI platform their specific rules and standards, so the system can, say, automatically lay out rack and row arrangements that comply with both hot aisle containment and the clearance needed for liquid cooling piping. Or it might auto-generate cable pathways that account for larger coolant distribution units now occupying what used to be cable tray space. By orchestrating these multi-step processes across the entire tool ecosystem, you avoid the scenario where a cooling change gets lost in translation between teams. Everyone – from the CAD model to the procurement sheet – stays on the same page.

Importantly, a platform like ArchiLabs treats each integration (Revit is one, but equally important are your spreadsheets, your DCIM, your monitoring systems) as part of a unified whole. This cross-stack philosophy means data center design becomes a living, always-in-sync model rather than a bunch of static documents. So when you’re faced with big decisions that normally lock you in (like “Should we design for air or liquid cooling?”), you can explore both options virtually with much less effort. The AI can even automate repetitive planning work to test each scenario: generating layouts, running cooling simulations, pulling in equipment specs, checking compliance against standards – and do so much faster than a human team shuffling files between departments. The result is you make a more informed choice upfront, reducing the risk that you’ll regret the decision half-way through the facility’s life. In essence, integrated design automation gives you an “undo” button for big design choices, or at least a crystal ball to see the outcome before you’re stuck with it in concrete and steel.

Conclusion: Design for a Decade (and Beyond)

Cooling might be the single most consequential aspect of data center design when it comes to long-term commitment. The choices you make on day one – air or liquid, water-dependent or dry, free cooling or fully contained – set the trajectory for your facility’s efficiency, sustainability, and adaptability. Each approach can, in its own way, lock you into a site’s characteristics: climate, resource availability, and infrastructure. A data center is not easily picked up and moved, nor is its cooling system easily swapped out, so it’s vital to align your cooling strategy with both your current needs and your best forecast of future requirements. Think about where technology is headed (e.g. higher density racks, new server form factors) and where the planet is headed (e.g. climate change, water scarcity, new regulations) over the next ten years. Designing with those in mind will help ensure you’re not stuck with a white elephant of a cooling system by 2030.

The good news is that today we have better tools and methodologies to plan these complex trade-offs. From modular designs that allow phased upgrades, to AI-powered design platforms like ArchiLabs that connect all your data and automate scenario planning, the industry is innovating not just in cooling tech but in how we plan for cooling. The goal for BIM managers, architects, and engineers should be to remain agile: create data centers that meet today’s demands but can evolve with tomorrow’s. By breaking down data silos and embracing cross-discipline collaboration (augmented by smart automation), you can avoid the worst kind of lock-in. In the end, the best cooling choice is one that balances efficiency with flexibility – keeping your data center humming optimally throughout its first decade of life, without painting you into a corner for the next one. With careful planning and the right tools, you can cool your servers effectively and keep your options open. The decisions are long-term, but they don’t have to be a shackle if made with foresight and integrated insight.