About a year and a half ago, quantum control startup Quantum Machines and Nvidia announced a deep partnership that would bring together Nvidia’s DGX Quantum computing platform and Quantum Machines’ advanced quantum control hardware. We didn’t hear much about the results of this partnership for a while, but it’s now starting to bear fruit and getting the industry one step closer to the holy grail of an error-corrected quantum computer.
In a presentation earlier this year, the two companies showed that they are able to use an off-the-shelf reinforcement learning model running on Nvidia’s DGX platform to better control the qubits in a Rigetti quantum chip by keeping the system calibrated.
Yonatan Cohen, the co-founder and CTO of Quantum Machines, noted how his company has long sought to use traditional classical compute engines to control quantum processors. Those compute engines were small and limited, but that’s not a problem with Nvidia’s extremely powerful DGX platform. The holy grail, he said, is to run quantum error correction. We’re not there yet. Instead, this collaboration focused on calibration, and specifically on calibrating the so-called “π pulses” that control the rotation of a qubit inside a quantum processor.
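For context (this is textbook single-qubit control, not anything specific to this collaboration): a π pulse is a drive that rotates the qubit’s state by an angle of π around an axis of the Bloch sphere, flipping |0⟩ to |1⟩ up to a global phase,

\[ R_x(\pi) = \exp\!\left(-i\,\tfrac{\pi}{2}\,\sigma_x\right) = -i\,\sigma_x . \]

If drift shifts the effective rotation angle to π + ε, the flip fails with probability roughly sin²(ε/2) ≈ ε²/4, and it’s this ε that continuous calibration tries to keep pinned near zero.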
At first glance, calibration may seem like a one-shot problem: You calibrate the processor before you start running the algorithm on it. But it’s not that simple. “If you look at the performance of quantum computers today, you get some high fidelity,” Cohen said. “But then, the users, when they use the computer, it’s typically not at the best fidelity. It drifts all the time. If we can frequently recalibrate it using these kinds of techniques and underlying hardware, then we can improve the performance and keep the fidelity [high] over a long time, which is what’s going to be needed in quantum error correction.”
Constantly adjusting those pulses in near real time is an extremely compute-intensive task, but since a quantum system is always slightly different, it is also a control problem that lends itself to being solved with the help of reinforcement learning.
“As quantum computers are scaling up and improving, there are all these problems that become bottlenecks, that become really compute-intensive,” said Sam Stanwyck, Nvidia’s group product manager for quantum computing. “Quantum error correction is really a huge one. This is necessary to unlock fault-tolerant quantum computing, but also how to apply exactly the right control pulses to get the most out of the qubits.”
Stanwyck also stressed that before DGX Quantum, no system enabled the kind of minimal latency necessary to perform these calculations.
As it turns out, even a small improvement in calibration can lead to massive improvements in error correction. “The return on investment in calibration in the context of quantum error correction is exponential,” explained Quantum Machines product manager Ramon Szmuk. “If you calibrate 10% better, that gives you an exponentially better logical error [performance] in the logical qubit that is composed of many physical qubits. So there’s a lot of motivation here to calibrate very well and fast.”
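A back-of-the-envelope way to see why the payoff is exponential (this uses the standard surface-code scaling law, which Szmuk did not spell out): for a distance-d code with physical error rate p below the code’s threshold p_th, the logical error rate falls off as

\[ p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor} . \]

Any fractional improvement in p from better calibration therefore gets raised to a power that grows with the code distance. At d = 25, for example, a 10% reduction in p cuts p_L by roughly a factor of four, since 0.9^13 ≈ 0.25.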
It’s worth stressing that this is just the start of this optimization process and collaboration. What the team actually did here was simply take a handful of off-the-shelf algorithms and look at which one worked best (TD3, in this case). All in all, the actual code for running the experiment was only about 150 lines long. Of course, this relies on all the work the two teams also did to integrate the various systems and build out the software stack. For developers, though, all of that complexity can be hidden away, and the two companies expect to create more and more open source libraries over time to take advantage of this larger platform.
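To make the shape of such an experiment concrete, here is a minimal sketch in the same spirit, using the off-the-shelf TD3 implementation from stable-baselines3. Everything quantum about it is invented for illustration: PulseCalibrationEnv and its cosine “fidelity” reward are toy stand-ins, not Quantum Machines’ actual setup, which the article does not describe.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class PulseCalibrationEnv(gym.Env):
    """Toy stand-in for a drifting single-qubit pi pulse (hypothetical model)."""

    MAX_STEPS = 200  # truncate episodes so training sees many drift scenarios

    def __init__(self):
        super().__init__()
        # Actions: small corrections to pulse amplitude error and detuning.
        self.action_space = spaces.Box(-0.1, 0.1, shape=(2,), dtype=np.float32)
        # Observations: the current (estimated) amplitude error and detuning.
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
        self.state = np.zeros(2, dtype=np.float32)
        self.steps = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Start miscalibrated, as if the hardware has drifted since the last tune-up.
        self.state = self.np_random.uniform(-0.5, 0.5, size=2).astype(np.float32)
        self.steps = 0
        return self.state, {}

    def step(self, action):
        self.steps += 1
        # Apply the agent's correction, then add a little random drift.
        self.state = self.state + action + self.np_random.normal(0.0, 0.005, size=2)
        self.state = np.clip(self.state, -1.0, 1.0).astype(np.float32)
        # Toy fidelity: 1.0 when both errors are zero; used directly as the reward.
        reward = float(np.cos(self.state[0]) ** 2 * np.cos(self.state[1]) ** 2)
        return self.state, reward, False, self.steps >= self.MAX_STEPS, {}


env = PulseCalibrationEnv()
agent = TD3("MlpPolicy", env, verbose=0)
agent.learn(total_timesteps=10_000)  # off-the-shelf training loop
```

The point mirrors the team’s: once the hardware integration and software stack exist, the learning loop itself is a few dozen lines of stock RL code, and swapping in a different off-the-shelf algorithm is a one-line change.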
Szmuk stressed that for this project, the team only worked with a very basic quantum circuit, but that the approach can be generalized to deep circuits as well. “If you can do this with one gate and one qubit, you can also do it with 100 qubits and 1,000 gates,” he said.
“I’d say the individual result is a small step, but it’s a small step towards solving the most important problems,” Stanwyck added. “Useful quantum computing is going to require the tight integration of accelerated supercomputing — and that may be the most difficult engineering challenge. So being able to do this for real on a quantum computer and tune up a pulse in a way that is not just optimized for a small quantum computer but is a scalable, modular platform, we think we’re really on the way to solving some of the most important problems in quantum computing with this.”
Stanwyck also said that the two companies plan to continue this collaboration and get these tools into the hands of more researchers. With Nvidia’s Blackwell chips becoming available next year, they’ll have an even more powerful computing platform for this project, too.