When Elon Musk wanted to start an automobile company, perhaps he heard well-meaning advisors say to him, “Elon, starting an automobile company is very difficult, and making it an electric platform from the start makes it even harder. Why don’t you build a gasoline-powered car first and then gradually move to electric later on?” Fortunately for all Tesla owners, he decided not to do that and now has an automobile company with a market value greater than General Motors, Ford, Toyota, Daimler AG, and Volkswagen combined. The moral of the story is that sometimes a technology leapfrog strategy works!
We recently interviewed Pete Shadbolt, Chief Scientific Officer at PsiQuantum, to understand the approach they are taking to develop a quantum computer. This article will cover some of the higher level architecture strategies that PsiQuantum is employing. At the qubit level, PsiQuantum is implementing something they call Fusion Based Quantum Computing (FBQC) to create the error corrected qubits. We will cover the architecture of this lower level in a future article.
PsiQuantum is also trying to implement a technology leapfrog strategy for quantum processors. They have eschewed NISQ and are going straight to implementing a fault tolerant processor. By doing this, they hope to reach that target faster than their competitors. It is their belief that a fault tolerant architecture will be required for all commercially viable applications. They feel that the work users are doing now with the current generation of NISQ machines will have limited value because those machines won’t be able to accomplish anything useful, and fighting their error levels will distract users from the real task of creating the right algorithms to solve their problems. Several other quantum companies have similar views, including Microsoft, Intel, and possibly Google, although these other companies may not be quite as absolute and may just say that most commercially relevant quantum applications, rather than all of them, will require full fault tolerance.
Another core belief held by PsiQuantum is that creating such a computer will require the best possible fabrication processes available from industrial semiconductor foundries. In the silicon photonic technology that PsiQuantum is using, precise process control is vital in order to minimize loss as well as noise in the circuit. As an example, when a photon travels through a channel in a piece of silicon, the walls of that channel should be as smooth as possible. Any roughness in the walls will result in signal loss, which can be quite a problem. PsiQuantum’s process leverages standard semiconductor processes without requiring unusual materials or atomic-scale fabrication steps that would make a Tier One fab reluctant to work with them. PsiQuantum has chosen GlobalFoundries as their foundry partner in order to access the most advanced processes and the most precise process control. They pointed out that when they moved from a 200mm foundry to the GlobalFoundries 300mm facility, they saw a five times improvement in pattern quality, and the intrinsic efficiency of their single photon detectors improved from 97% to 99.7%. Incidentally, we should mention that Intel is taking a similar strategy and using one of its advanced semiconductor fab facilities to process its spin qubit wafers.
Another important aspect of PsiQuantum’s approach is their timeline. They are saying that all the necessary manufacturing processes for building a 1 million physical qubit machine will be in place by the middle of this decade. It is important to understand that having all the manufacturing processes in place is much different from having a machine ready at that time. It will still take some time to build and integrate all the pieces together into a functioning machine. So the availability of a machine that users can apply to real-world problems will likely come in the latter part of this decade rather than the middle.
Part of the reason it may take a while is the modular architecture they have chosen. Modularity in optical quantum computing, coupled with the flexibility of optical connections, can enable many possible fault tolerance schemes. Their building block will be an individual module, but to provide a full 1 million qubit system they will interconnect many of these modules using optical fiber. Their choice of technology should make this easier for them than for other qubit technologies. Because they are already using photonics, they won’t need a transducer to convert a qubit to something that can be sent over a fiber optic cable. Other technologies, such as superconducting or ion trap qubits, would first need a transducer to convert the qubit to a photonic qubit in order to send it to another processor chip over a fiber optic cable.
Another issue with multi-processor implementations is the connectivity between modules. While a technology such as ion trap may have all-to-all connectivity within a module, that will not be the case when a circuit requires the use of two qubits in different modules. One can get around this issue by utilizing a circuit called a SWAP gate, which interchanges the states of two qubits. But SWAP gates are problematic if you are working with gates that have some level of infidelity: if there is a chance of error each time a qubit passes through a gate, you will end up with significant inaccuracies if your implementation requires too many of them. With an error corrected architecture like PsiQuantum’s, however, SWAP gates become much less of an issue, and you can use them to solve the intermodule connectivity problem. There may be more sophisticated methods that avoid the linear overhead of a chain of SWAP gates, but those methods could still present problems if the gates don’t have a zero error rate.
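To make the cost of SWAP-based routing concrete, here is a small numpy sketch (our own illustration, not PsiQuantum code) showing that a SWAP gate decomposes into three CNOTs, so a chain of k SWAPs built from CNOTs with fidelity f succeeds with probability of roughly f to the power 3k:

```python
import numpy as np

# CNOT with control on qubit 0 (basis order |00>, |01>, |10>, |11>).
CNOT_01 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
# CNOT with control on qubit 1.
CNOT_10 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])

# A SWAP gate decomposes into three alternating CNOTs.
SWAP = CNOT_01 @ CNOT_10 @ CNOT_01
expected_swap = np.array([[1, 0, 0, 0],
                          [0, 0, 1, 0],
                          [0, 1, 0, 0],
                          [0, 0, 0, 1]])
assert np.allclose(SWAP, expected_swap)

# With an assumed per-CNOT fidelity f, a chain of k SWAPs uses 3k CNOTs,
# so the overall success probability decays roughly as f**(3*k).
f = 0.99  # illustrative two-qubit gate fidelity, not a measured figure
for k in (1, 10, 50):
    print(k, round(f ** (3 * k), 3))
```

Even at 99% gate fidelity, a 50-SWAP routing chain succeeds only about a fifth of the time, which is exactly why SWAP-heavy routing is untenable without error correction.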
The result is that PsiQuantum’s quantum processor will be physically quite large. It won’t fit under your desk or even be the size of a kitchen refrigerator. The system will have a large number of modules and will look more like a classical computing data center cluster, housed in a room the size of a large data center.
When we first heard that PsiQuantum was using their Fusion Based Quantum Computing approach, we were worried about the software implications. Would all the quantum programs being developed now require a complete rewrite to take advantage of their architecture?
It turns out that the exact opposite is true. If anything, it should be easier to write a program for their machine than for one of the existing machines. Every processor has its own set of what are called native gates. For the PsiQuantum machine, the programmer interface will consist of what are called the Clifford gates (S, Hadamard, CNOT) plus the T gate. These are typically the gates taught in a beginner quantum computing class. Most of the machines that exist today don’t implement things quite as straightforwardly. They use native gates such as the Mølmer–Sørensen gate, the iSWAP gate, or other types of custom gates. Most programmers won’t need to worry about this because the compilers will translate a quantum program to use these native gates, but it is still a step that one won’t need to think about when programming the PsiQuantum machine.
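For readers who want to see why this gate set is “textbook,” here is a short numpy sketch (our own illustration) of the Clifford+T matrices and a couple of the classroom identities that relate them:

```python
import numpy as np

# The Clifford+T programmer interface as explicit unitaries.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
S = np.array([[1, 0], [0, 1j]])                       # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])   # T gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Textbook identities: two T gates make an S, and eight make the identity.
assert np.allclose(T @ T, S)
assert np.allclose(np.linalg.matrix_power(T, 8), np.eye(2))

# Hadamard conjugation turns Z into X, another standard classroom relation.
Z = S @ S
X = np.array([[0, 1], [1, 0]])
assert np.allclose(H @ Z @ H, X)
```

Any compiler targeting this interface only needs to express a circuit in terms of these four well-understood matrices, rather than hardware-specific pulses or exotic two-qubit interactions.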
For software companies, being “hardware agnostic” is an important consideration, and they strive to support as many hardware platforms as possible. It should be easy for these companies to write a backend that interfaces with the PsiQuantum architecture, so we think most of them will be able to support the PsiQuantum machine once it is available.
But, in addition, there are many things that the software companies will no longer need to worry about because they will be dealing with error corrected, textbook quantum gates. These include:
- Currently, much effort is being made in algorithm design to reduce the number of gate levels to the minimum possible in order to improve the solution quality. The more levels a program has, the more chance for an error in the calculation. However, if the gates have built-in error correction, the need to optimize the number of levels won’t be quite as important.
- There won’t be a need to work with pulse level control in order to find ways to mitigate errors. All the error correction will be performed inside the hardware and a programmer shouldn’t have a need to get involved with it.
- The gate fidelities and coherence times in many of the machines may differ from one qubit to the next. Sometimes, the compilers will implement calibration aware optimization routines to physically place the circuit to use the very best physical qubits in the machine for executing the algorithm. And this placement could vary from one calibration cycle to the next. This work would not be needed in an error corrected machine since all the logical qubits should have the same, near perfect fidelities.
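As a rough illustration of the first point above, the sketch below uses a generic code-distance scaling formula of the kind found in the error correction literature; all of the constants are our own assumptions, not PsiQuantum’s numbers:

```python
# Toy illustration: in many error-correcting codes the logical error rate
# per gate is suppressed roughly as p_L ~ A * (p / p_th) ** ((d + 1) // 2)
# for code distance d, physical error rate p, and threshold p_th.
# A, p, and p_th here are illustrative assumptions only.
A, p, p_th = 0.1, 1e-3, 1e-2

def logical_error_rate(d):
    return A * (p / p_th) ** ((d + 1) // 2)

# Deeper circuits just need a larger code distance, not a rewritten,
# depth-minimized algorithm:
for d in (3, 11, 25):
    print(d, logical_error_rate(d))
```

Each small increase in distance suppresses the logical error rate by orders of magnitude, which is why a programmer on an error corrected machine can largely stop worrying about circuit depth, calibration drift, and pulse-level mitigation.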
On the other hand, having an error corrected machine may also change the algorithms that programmers use. One example would be QAOA (Quantum Approximate Optimization Algorithm), which was specifically designed to provide low gate depths on NISQ level machines in order to reduce errors. QAOA works by shuttling the processing back and forth between the classical computer and the quantum computer multiple times to achieve a solution. On an error corrected machine, the low-depth constraint that motivated this design largely disappears.
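The hybrid loop can be sketched end to end on the smallest possible instance. The code below (our own illustration, simulated exactly with numpy) runs one-layer QAOA for MaxCut on a single edge, with a coarse grid search playing the role of the classical optimizer:

```python
import numpy as np
from itertools import product

# 2-qubit MaxCut on one edge: cost operator C = (I - Z0 Z1) / 2,
# which scores 1 whenever the two qubits disagree.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
C = (np.eye(4) - np.kron(Z, Z)) / 2

def qaoa_expectation(gamma, beta):
    # Start in |+>|+>, apply the cost phase, then the mixer on each qubit.
    state = np.full(4, 0.5, dtype=complex)
    state = np.exp(-1j * gamma * np.diag(C)) * state
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X   # e^{-i beta X}
    state = np.kron(rx, rx) @ state
    return float(np.real(state.conj() @ C @ state))

# The classical outer loop: a grid search standing in for the optimizer
# that would normally shuttle parameters to and from the quantum processor.
grid = np.linspace(0, np.pi, 41)
best = max(product(grid, grid), key=lambda gb: qaoa_expectation(*gb))
print(round(qaoa_expectation(*best), 3))   # reaches 1.0, the maximum cut
```

Every evaluation of `qaoa_expectation` stands in for one round trip to the quantum processor, which is exactly the back-and-forth shuttling described above.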
Another difference may occur in the way algorithms are optimized to run on a fault tolerant machine versus a NISQ machine. In a NISQ processor, one typically wants to minimize the number of CNOT gates in order to reduce error rates. However, in a fault tolerant algorithm the CNOT gates won’t have this error rate problem; instead, the algorithm should be optimized to reduce the number of T-gates, because T-gates can be expensive.
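A toy cost model makes this change of optimization target concrete. The gate costs below are illustrative assumptions of ours (reflecting the common premise that each T gate consumes an expensive distilled magic state), not PsiQuantum figures:

```python
from collections import Counter

# Assumed fault-tolerant cost per gate: Clifford gates are cheap, while
# each T gate consumes a distilled magic state at ~100x the cost.
# These numbers are illustrative only.
COST = {"H": 1, "S": 1, "CNOT": 1, "T": 100}

def ft_cost(circuit):
    """Total cost of a circuit given as a list of gate names."""
    counts = Counter(circuit)
    return sum(COST[g] * n for g, n in counts.items())

cnot_heavy = ["H", "CNOT", "CNOT", "CNOT", "T", "H"]   # NISQ worst case
t_heavy    = ["H", "CNOT", "T", "T", "T", "H"]         # FT worst case
print(ft_cost(cnot_heavy), ft_cost(t_heavy))           # 105 vs 303
```

Under this model the CNOT-heavy circuit, which a NISQ compiler would work hardest to shrink, is actually far cheaper than the T-heavy one, so a fault tolerant compiler spends its effort on T-count reduction instead.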
Although NISQ oriented algorithms could run on the PsiQuantum machine, there may be alternate algorithms that work much better on a fault tolerant machine due to these considerations.
You might think that since PsiQuantum’s hardware is not yet available, their engagement with customers must be limited. But this is not the case either. They are engaging with customers, but in a different way than most of the other hardware providers. PsiQuantum is working with customers to produce detailed paper analyses, at scale, of the problems their customers wish to solve with a quantum computer. They are calculating the exact number of qubits required, the exact number of gates, the time to solution, and the physical error rate required before error correction to run a program that can provide a solution to a real-world customer problem. In contrast, most customer engagements with other hardware providers today are for training or experimentation purposes and use what are sometimes called “toy models”. These certainly have value, but it is not certain that customers will be able to scale up from a “toy model” to a larger solution that has commercial value. Sometimes things change when you scale up, and you encounter new, unanticipated problems that you would have discovered earlier with a more detailed paper analysis.
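A back-of-the-envelope version of such a paper analysis might look like the sketch below; every number in it is an illustrative assumption of ours, not a figure from PsiQuantum’s customer studies:

```python
# Toy "paper analysis" of a hypothetical application. All inputs below
# are illustrative assumptions, not results from any real engagement.
logical_qubits   = 1_000         # assumed algorithm requirement
t_gate_count     = 1e10          # assumed dominant fault-tolerant cost
t_gates_per_sec  = 1e6           # assumed magic-state consumption rate
phys_per_logical = 1_000         # assumed error-correction overhead

physical_qubits = logical_qubits * phys_per_logical
runtime_hours   = t_gate_count / t_gates_per_sec / 3600
print(f"{physical_qubits:,} physical qubits, {runtime_hours:.1f} h")
```

Even this crude model surfaces the questions a “toy model” never asks: whether the machine has enough physical qubits for the error-correction overhead, and whether the time to solution is commercially acceptable.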
So PsiQuantum is certainly taking a different path from the other hardware companies, and it does entail some level of risk. There is still a large amount of work left for them to do to make this a reality. But PsiQuantum certainly has a very clear vision of how they intend to make their quantum machine a reality and the necessary steps they need to implement for it to happen. They are also very well-funded and have assembled a large, talented team to work on solving the remaining engineering issues. On the other hand, the other hardware companies are advancing their technology very rapidly too. So, it is still a race to see how PsiQuantum’s approach will stack up against the other quantum computer implementations and we will need to wait another 5-10 years to see how it turns out.
PsiQuantum has participated in a number of talks and webinars as well as publishing papers that describe their technology in more detail. Here are some of the references that you can check to learn more about their technology.
February 11, 2022