www.eetimes.com, Feb. 18, 2025 –

When nascent technologies start to mature, nearing the point of adoption, the tech industry tends to focus on one key characteristic to help simplify discussions around the readiness of the technology for productive use cases and to compare competing solutions.
For example, when cellular technology first started to support data calls, the industry focused on bandwidth to determine if certain applications like video calls and video streaming could be enabled. This was a good proxy to show when those applications would become possible. However, a second metric, in conjunction with bandwidth, was needed to determine when it would become not just possible but usable, as well as which solutions would provide the best quality of service.
In that case, the key metric to unlocking actual usability was latency. Likewise, as the industry inexorably marches towards quantum computing, the vast majority of conversations thus far have centered on the number and type of qubits a quantum processor can bring to bear. While this is an extremely important measure of when quantum computing becomes viable, it is not the only metric needed to determine when quantum computing becomes usable.
This is where gates come in. They are essential to supporting the more complex workloads that will ultimately enable quantum computers to achieve "quantum advantage": performing practical tasks faster or cheaper than classical computing can.
Just as qubits are the quantum analogs of classical computing bits, quantum gates are the counterpart of classical gates. They are the fundamental functional operations that allow qubits to interact logically with each other, forming circuits designed to run complex calculations with a large number of variables, such as those used for optimization algorithms or large-scale molecular modeling.
An example of a commonly used quantum gate is the controlled-NOT (CNOT) gate. This gate takes two qubits, a control qubit and a target qubit, and its output depends on the state of the control qubit at the input: if the control qubit is a 0, the target qubit is left in its original state; if the control qubit is a 1, the target qubit is flipped.
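That truth table can be checked with a small state-vector calculation. The sketch below is illustrative only (qubit-ordering conventions vary between toolkits); it represents the CNOT as its standard 4x4 matrix and applies it to each two-qubit basis state:

```python
import numpy as np

# CNOT acting on two qubits written as |control, target>.
# Basis ordering: |00>, |01>, |10>, |11>.
CNOT = np.array([
    [1, 0, 0, 0],   # |00> -> |00>  (control is 0: target unchanged)
    [0, 1, 0, 0],   # |01> -> |01>
    [0, 0, 0, 1],   # |10> -> |11>  (control is 1: target flipped)
    [0, 0, 1, 0],   # |11> -> |10>
])

labels = ["|00>", "|01>", "|10>", "|11>"]
for i, label in enumerate(labels):
    state = np.zeros(4)
    state[i] = 1.0                  # prepare the basis state
    out = CNOT @ state              # apply the gate
    print(label, "->", labels[int(np.argmax(out))])
```

Note that applying CNOT twice returns every state to where it started; like all quantum gates, it is reversible.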
Quantum circuits are made up of cascaded layers of gates, such as the CNOT, where the next layer's input is the previous layer's output. As in classical computing, the more layers of gates a processor can cascade, the more complex the operations it can perform. The number of layers is called the depth of the circuit.
Unlike in classical computing, the factor currently limiting how deep a circuit a given processor can support is the processor's ability to maintain the coherence of the qubits' states as they pass through the gates, and how it handles the errors introduced with each successive cascade.
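A back-of-the-envelope model shows why those errors cap circuit depth. Assuming, purely for illustration, a uniform per-layer success probability, the chance of an error-free run decays exponentially with depth:

```python
# Toy model (an assumption for illustration, not any vendor's specification):
# if each gate layer succeeds with probability f, the probability that an
# entire circuit of a given depth runs without error is roughly f ** depth.
per_layer_fidelity = 0.999          # hypothetical 0.1% error per layer

for depth in (10, 100, 1000, 5000):
    fidelity = per_layer_fidelity ** depth
    print(f"depth {depth:5d}: ~{fidelity:.1%} chance of an error-free run")
```

Even at a 0.1% per-layer error rate, a 5,000-layer circuit completes cleanly less than 1% of the time, which is why longer coherence times, error mitigation and, eventually, error correction are prerequisites for greater depth.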
Just as bandwidth and latency had to work together in cellular technology to deliver novel applications like video calling and video streaming in a viable, usable way, advances in qubits must go hand in hand with the ability to run large numbers of gates quickly and without error. Together they are the key to enabling applications where quantum computing exceeds classical computing capabilities, otherwise known as quantum advantage.
Qubit and gate count together need to increase to deliver the required circuit depths, which will ultimately be realized with the advent of error correction and continued qubit design improvements. Until then, increases in gate count and circuit depth come through improved qubit design for longer coherence times and better error mitigation.
Processors from different quantum processor designers, such as (but not limited to) IBM, Quantinuum, Google, Microsoft, Amazon, Alice & Bob and Intel, support different circuit depths. IBM is by far leading the field with its Heron chip, which supports 5,000 two-qubit gates.
One method IBM uses to improve qubit quality is a process, employed after fabrication but before packaging, called Lasiq. Not to be confused with the laser eye surgery procedure Lasik, IBM's Lasiq process uses lasers to tune the qubits so that their frequencies (energies) better align with IBM's design parameters, which ultimately helps improve yield for processors of more than 100 qubits.