www.eetimes.eu, Nov. 21, 2024 –
With our society's insatiable appetite for consuming vast amounts of data, it is no surprise that the back-end data networks must constantly evolve. Most high-throughput data networks rely on optical transmission methods to transfer data, with two dominant modulation techniques: 4-level pulse amplitude modulation (PAM4) and coherent modulation. PAM4, where each amplitude level represents two bits, is a bandwidth-efficient modulation technique that suits short-range (<10 km) applications. Coherent modulation, which involves modulating a coherent light source's amplitude, phase and polarization, is better suited to high-speed, long-distance (>10 km) transmission. Both modulation techniques are in demand as the race is on to achieve transfer rates of up to 800 Gbps. There are signs that as each technique evolves, the differences in cost, simplicity and power consumption between them will narrow.
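As a rough illustration of how PAM4 packs two bits into each transmitted symbol, the minimal Python sketch below maps bit pairs onto four nominal amplitude levels. The Gray-coded mapping and the specific level values are illustrative assumptions, not details from the article.

```python
# Minimal PAM4 illustration: each symbol carries two bits, so a PAM4 link moves
# twice as many bits per symbol period as a two-level (NRZ) link.
# The Gray-coded mapping and nominal levels below are illustrative assumptions.

GRAY_PAM4 = {
    (0, 0): -3,  # lowest amplitude level
    (0, 1): -1,
    (1, 1): +1,
    (1, 0): +3,  # highest amplitude level
}

def pam4_encode(bits):
    """Map a flat bit sequence (even length) to PAM4 symbol levels."""
    if len(bits) % 2:
        raise ValueError("PAM4 needs an even number of bits")
    pairs = zip(bits[0::2], bits[1::2])
    return [GRAY_PAM4[pair] for pair in pairs]

if __name__ == "__main__":
    # 8 bits -> 4 PAM4 symbols: half the symbol rate of NRZ for the same bit rate.
    print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```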
EE Times Europe spoke with Tony Chan Carusone, chief technology officer of Alphawave Semi (London, U.K.), to discover the underlying applications driving bandwidth demand and to find out if PAM4 and coherent modulation techniques are ever likely to converge.
AI back-ends driving bandwidth demand
Chan Carusone highlighted that "AI back-ends are the leading application driving the next generation of technology evolution. This is an interesting perspective since it was traditionally Ethernet networks that evolved first to meet the next generation of speed grades. These back-end networks have some unique requirements in terms of latency, reach and reliability. Since they're leading the way to the next higher data rates, the initial communications we're having [with customers] about beyond 200 Gbps per lane are all predicated on these different requirements. This was illustrated at a recent Ethernet Alliance event I attended, the first public event explicitly discussing what's needed for rates higher than 200 Gbps."
It's clear that data centers hosting AI-based applications are a key target for next-generation optical networks. However, Chan Carusone observed, "Part of the challenge for data centers is that AI technology is still evolving and that there are different workloads that might give rise to other architectures. On the one hand, different workloads and different hardware architectures might dictate an alternative approach to the scale-up and scale-out of these centers. Another aspect is that many hyperscalers want to scale out massively and build a lot of infrastructure, which, in turn, puts pressure on standardization [initiatives] such as those of interoperability and ensuring a healthy supply chain."
Standardization is an evergreen issue for any technology and often includes support for backward compatibility. It implies a lowest-common-denominator approach, which usually carries some overhead in the technology stack. Chan Carusone suggested that the industry take a pragmatic approach, stating, "You will probably see customizations layered on top of a standard-based solution. The race has been so aggressive in the past couple of years that some hyperscalers could work with a limited set of partners to build a solution that was cutting edge as quickly as possible. However, that approach has to change now. If you're scaling things out to the level being talked about, you must involve more people."