News

There are definitely easier businesses to be in than operating a neocloud. For one thing, the makers of AI accelerators, ...
With AI being the biggest change in IT infrastructure since the Dot Com boom, it was no surprise that at the annual Cisco ...
The biggest challenge to AI initiatives is the data they rely on. More powerful computing and higher-capacity storage at ...
As we have said many times here at The Next Platform, the only way to predict the actual future is to live it. Despite that, ...
It has taken the better part of a year and a half and some wrangling with the US Department of Justice to get it done, but ...
Micron Technology has not just filled in a capacity shortfall for more high bandwidth stacked DRAM to feed GPU and XPU ...
Right or wrong, we still believe that we live in a world where traditional HPC simulation and modeling at high precision ...
There is not one Ethernet business, but several, and now, with the evolution of Ethernet switches for back-end AI cluster ...
The chart below was useful in allowing us to calculate the watts used by the more recent TPUs, which Google did not provide. Google likes to make comparisons to the TPU v2, which was the first of its ...
Compute engine makers can do all they want to bring the performance of their devices on par with or even reasonably close to that of Nvidia’s various GPU accelerators, but until they have something akin to ...
The E1 chip is etched in a 5 nanometer process from TSMC, like the X2 switch chip. Here’s the block diagram for the E1: With all 64 cores activated, the E1 DPU will run at about 90 watts and with 32 MB ...
When Oracle bought Sun Microsystems, it put out a five-year roadmap, and it largely stuck to it. When GPU-accelerated computing took off in 2010 and the GPU Technology Conference was new and the number ...