This talk will look at the future energy cost associated with training and using generative AI (such as ChatGPT), which requires large data centers and edge nodes. The computational requirements demand efficient new, complex GPU/accelerator hardware with ever-increasing heat fluxes. Thermal management will outgrow air as a cooling medium and force data centers to employ liquid cooling close to the microelectronics. Questions arise as to what future heat fluxes are expected and whether immersion cooling, direct-to-chip cooling, or any derivative of liquid cooling can meet the needs.
Register here