AIZ

We started by building
a thermal test rig.

We built this rig to establish, with certainty, whether our thermal models were correct before committing to production hardware. This reflects our engineering methodology at AIZ.

"We validated our thermal models by simulating sustained H200 GPU load using a thermal test rig. PUE targets are set through measured engineering validation, not vendor data sheets."
— AIZ Engineering
Thermal validation method
01 Model H200 SXM5 sustained thermal output (~700 W)
02 Replicate with a calibrated thermal validation apparatus
03 Validate heat transfer mathematics against model predictions
04 Commit to production firmware and enclosure design
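The energy-balance check at the heart of step 03 can be sketched as a steady-state calculation: the electrical power dissipated by the load must reappear as heat carried away by the airflow. The airflow figure, tolerance, and function names below are illustrative assumptions for an air-cooled case, not AIZ specifications or measured data.

```python
# Sketch of step 03: steady-state energy balance for an air-cooled
# load bank standing in for one H200 SXM5 (~700 W sustained).
# All numbers here are illustrative assumptions, not AIZ data.

AIR_DENSITY = 1.184   # kg/m^3, dry air at ~25 degC, sea level
AIR_CP = 1005.0       # J/(kg*K), specific heat of air

def predicted_outlet_rise(power_w: float, flow_m3_s: float) -> float:
    """Predicted air temperature rise across the enclosure, in kelvin.

    Steady state: Q = m_dot * cp * dT, with m_dot = rho * V_dot.
    """
    m_dot = AIR_DENSITY * flow_m3_s
    return power_w / (m_dot * AIR_CP)

def model_agrees(predicted_k: float, measured_k: float, tol: float = 0.10) -> bool:
    """Pass if the measured rise matches the model within a relative tolerance."""
    return abs(measured_k - predicted_k) / predicted_k <= tol

# Example: 700 W of load with 0.05 m^3/s of airflow predicts a rise
# of roughly 11.8 K; a measured rise of 12.5 K would pass a 10% check.
dT = predicted_outlet_rise(700.0, 0.05)
```

A rig measurement that falls outside the tolerance band indicates either an instrumentation problem or a wrong model assumption, and either way blocks the commit in step 04.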
The origin story

AIZ was founded in Christchurch, New Zealand, with one conviction: the data centre model needed to be redesigned from the ground up. Not incrementally improved. Rebuilt.

The compute demands of modern AI workloads do not fit the assumptions that traditional data centre design was built on. Legacy facilities were optimised for predictable, steady-state server loads — not for the extreme thermal density of dense GPU clusters, and not for the distributed edge deployments that latency-sensitive AI applications require.

We do not assume dependable AI compute always requires purpose-built landmark facilities. In selected deployment scenarios, clean and cool ambient conditions, disciplined thermal engineering, and reliable power and network interfaces may be sufficient. Our climate-adaptive approach and modular architecture are designed to support these warehouse-scale deployment environments where appropriate.

We approached the problem differently. Instead of adapting existing cooling paradigms, we derived our thermal models from physics. We set PUE targets that are difficult to achieve with legacy architecture. Then we built hardware to meet them.

The test rig was built to validate heat transfer mathematics before writing a single line of production firmware. That level of rigour defines everything we ship.

AIZ Air Node is in production. AIZ Thermal Node is under active R&D. Both are built in the same enclosure form factor — because modularity and simplicity aren't design compromises. They are the design.

Non-negotiable engineering standards.

Engineering Validation

We validate before we commit. Vendor data sheets are a starting point, not a specification. Every thermal model, PUE target, and cooling coefficient is verified through measured testing and engineering review.

PUE Performance Discipline

Every watt of overhead matters. Today, our validated PUE is below 1.14 for air-cooled deployments and below 1.08 for liquid-cooled deployments. We continue engineering toward stricter targets as validation progresses.
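PUE is a simple ratio: total facility power divided by IT equipment power, so every point above 1.0 is cooling and distribution overhead. A minimal worked example, with hypothetical load figures chosen only to illustrate the arithmetic:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean zero overhead; 1.14 means 14 kW of
    cooling and distribution overhead per 100 kW of IT load.
    """
    return total_facility_kw / it_load_kw

# Hypothetical example: a 100 kW IT load drawing 114 kW from the grid
# sits at PUE 1.14; the same load at 108 kW total sits at PUE 1.08.
air_cooled = pue(114.0, 100.0)     # 1.14
liquid_cooled = pue(108.0, 100.0)  # 1.08
```

At scale the difference compounds: moving a 1 MW IT load from PUE 1.14 to 1.08 saves roughly 60 kW of continuous overhead.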

Edge by Design

Latency is a physical constraint, not a software problem. Speed of light in fibre is fixed. Distance to compute matters. We build for edge deployment from the ground up — not as an afterthought bolted onto a centralised architecture.
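The physics floor on latency is easy to compute. Light in silica fibre travels at roughly c divided by the fibre's group index; the ~1.468 index below is a typical textbook value, used here as an assumption rather than an AIZ figure:

```python
C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s
FIBRE_INDEX = 1.468          # typical group index of silica fibre (assumed)

def fibre_rtt_ms(path_km: float) -> float:
    """Minimum round-trip time over a fibre path, in milliseconds.

    Ignores switching, queuing, and serialization delay: this is
    the physics floor that no software optimisation can beat.
    """
    v = C_VACUUM_KM_S / FIBRE_INDEX  # ~204,000 km/s in glass
    return 2.0 * path_km / v * 1000.0

# Rule of thumb: about 1 ms of round-trip latency per 100 km of
# fibre path, so compute 1,000 km away costs ~10 ms before any
# switch, queue, or server touches the packet.
```

This is why distance to compute is an architectural decision, not a tuning parameter.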

Tokens generated by AI compute represent a third form of electricity in service of humanity. We are committed to flexible, modular deployment of compute infrastructure.
— AIZ Engineering

Building something that needs serious infrastructure?

If you require dedicated GPU compute at the edge, a deployment partner for an AI workload, or an assessment of enclosure fit for your site, please contact our team.