Scientists build the most precise quantum computing chip ever using a new silicon-based computing architecture

Physicists at Silicon Quantum Computing (SQC) have developed what they describe as the most precise quantum computing chip ever built, based on a new type of architecture.
Representatives of the Sydney-based startup say their silicon-based atomic quantum computing chips give them an advantage over other types of quantum processing units (QPUs). The chips are built on a new architecture, called “14/15”, which places phosphorus atoms inside silicon (the name refers to silicon and phosphorus being the 14th and 15th elements of the periodic table). The team presented its findings in a new study published December 17 in the journal Nature.
SQC achieved fidelity rates between 99.5% and 99.99% in a quantum computer with nine nuclear qubits and two atomic qubits, marking what the company calls the world’s first demonstration of silicon-based atomic quantum computing on discrete clusters.
Fidelity rates measure how accurately quantum operations are carried out, and they reflect how effective error correction and mitigation techniques are. Company officials say they have achieved record fidelity on their custom architecture.
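To put those percentages in perspective, fidelity and error rate are two sides of the same figure: an operation’s error probability is simply one minus its fidelity. The snippet below is a back-of-the-envelope illustration of that relationship (the numbers are generic, not calculations taken from the SQC study):

```python
# Rough illustration: an operation's error probability is one minus its fidelity.
# Generic figures for intuition only, not numbers taken from the SQC study.
for fidelity in (0.995, 0.9999):
    error_rate = 1 - fidelity
    ops_per_error = 1 / error_rate
    print(f"fidelity {fidelity:.2%} -> roughly 1 error every {ops_per_error:,.0f} operations")
```

At 99.99% fidelity, that works out to roughly one error in every 10,000 operations.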
It may not sound as exciting as quantum computers with thousands of qubits, but the 14/15 architecture is extremely scalable, the scientists said in the study. They added that demonstrating maximum fidelity across multiple clusters serves as a proof of concept for what, in theory, could lead to fault-tolerant QPUs with millions of working qubits.
The secret sauce is silicon (with a side of phosphorus)
Quantum computing rests on the same basic principle as binary computing: energy is used to carry out calculations. But instead of using electricity to flip switches, as traditional binary computers do, quantum computing involves creating and manipulating qubits, the quantum equivalent of the bits in a classical computer.
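To make the contrast with a classical bit concrete, here is a minimal single-qubit sketch, simulated with ordinary linear algebra (a toy illustration of superposition and measurement, not a model of SQC’s hardware):

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a unit vector over the basis
# states |0> and |1>, and can sit in a superposition of both at once.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Equal superposition: (|0> + |1>) / sqrt(2)
qubit = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes (the Born rule).
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5] -> a 50/50 chance of reading out 0 or 1

# Each measurement collapses the qubit to a definite classical bit.
rng = np.random.default_rng(0)
print(rng.choice([0, 1], size=10, p=probs))
```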
Qubits come in many forms. Scientists at Google and IBM are building systems around superconducting qubits made from tiny superconducting circuits, while some labs, like PsiQuantum, have developed photonic qubits, which are particles of light. Others, including IonQ, work with trapped ions: single charged atoms captured and held in place by a device called an ion trap.
The general idea is to use quantum mechanics to manipulate something very small in such a way that useful calculations can be made from its possible states. SQC representatives say their process for achieving this is unique, in that their QPUs are built using the 14/15 architecture.
They create each chip by placing phosphorus atoms into wafers of pure silicon.
“This is the smallest type of functionality in a silicon chip,” Michelle Simmons, CEO of SQC, told Live Science in an interview. “It’s 0.13 nanometers, and that’s basically the kind of bond length you have in the vertical direction. That’s two orders of magnitude below what TSMC usually does as a standard. It’s a pretty dramatic increase in accuracy.”
Scaling up the qubits of tomorrow
For scientists to advance in the field of quantum computing, each platform must overcome or mitigate various obstacles.
Quantum error correction (QEC) is a universal challenge for all quantum computing platforms. Quantum calculations take place in extremely fragile environments, with qubits sensitive to electromagnetic waves, temperature fluctuations and other disturbances. These can cause a qubit’s superposition to “collapse,” with quantum information lost mid-calculation.
To compensate, most quantum computing platforms dedicate a certain number of qubits to error correction and mitigation. These work much like parity bits or checksums in a classical network. But as the number of computational qubits grows, so does the number of qubits required for QEC.
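That classical analogy can be made concrete in a few lines. The toy code below implements a 3-bit repetition code with a majority vote (ordinary classical error correction, not QEC), which shows why redundancy costs extra bits, the same overhead trade-off quantum platforms face with their error-correction qubits:

```python
from collections import Counter

def encode(bit: int) -> list[int]:
    """Classical 3-bit repetition code: store the same bit three times."""
    return [bit, bit, bit]

def decode(bits: list[int]) -> int:
    """Majority vote recovers the original bit if at most one copy flipped."""
    return Counter(bits).most_common(1)[0][0]

codeword = encode(1)
codeword[0] ^= 1          # a single bit-flip error creeps in
print(decode(codeword))   # 1 -> the error is corrected, at the cost of two extra bits
```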
“We have long nuclear spin coherence times and we have very few of what we call ‘bit-flip errors.’ So our error-correcting codes themselves are much more efficient. We don’t have to correct for both bit-flip and phase errors,” Simmons said.
In other silicon-based quantum systems, bit-flip errors are more common because qubits tend to be less stable when they are fabricated and manipulated with coarser precision. Because SQC’s chips are built with atomic precision, they avoid some of the errors seen on other platforms.
“We really just need to correct these phase errors,” Williams added. “So the error-correction codes are much smaller, and all the overhead you carry for error correction is very, very reduced.”
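To see the difference between the two error types the researchers describe, here is a small single-qubit sketch (a generic textbook illustration, not SQC’s actual error model): a bit-flip error swaps a qubit’s |0> and |1> amplitudes, while a phase-flip error only changes a relative sign, which is the kind of error a smaller, phase-only code needs to catch.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit-flip (Pauli-X) error
Z = np.array([[1, 0], [0, -1]])  # phase-flip (Pauli-Z) error

state = np.array([1.0, 0.0])     # qubit prepared in |0>

print(X @ state)  # [0. 1.] -> the stored value flips from |0> to |1>
print(Z @ state)  # [1. 0.] -> |0> is untouched; the phase error only shows up
                  #            once the qubit is placed in superposition

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>) / sqrt(2)
print(Z @ plus)   # [ 0.707 -0.707] -> the relative sign flips, which is what a
                  #                    phase-only error-correcting code must detect
```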
The race to beat Grover’s algorithm
The standard test of fidelity in a quantum computing system is a routine called Grover’s algorithm. It was devised by computer scientist Lov Grover in 1996 to demonstrate that a quantum computer can gain an “advantage” over a classical computer in a specific search task.
Today it is used as a diagnostic tool to determine how reliably quantum systems operate. Essentially, if a lab can achieve fidelity rates of 99.0% or above, it is considered to have reached the territory needed for error correction and fault tolerance.
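For readers curious about what the test actually involves, the sketch below simulates the textbook two-qubit case of Grover’s algorithm with ordinary matrix algebra (a generic illustration of the algorithm, not SQC’s implementation). On an ideal, noise-free machine, a single oracle-plus-diffusion step finds the marked entry with certainty, which is why any shortfall from the theoretical maximum on real hardware is a direct read-out of fidelity.

```python
import numpy as np

n = 2                         # two qubits -> a search space of 4 entries
N = 2 ** n
marked = 3                    # index of the item the "oracle" recognises

# Start in the uniform superposition over all four basis states.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked entry's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: reflect every amplitude about the mean.
s = np.full(N, 1 / np.sqrt(N))
diffusion = 2 * np.outer(s, s) - np.eye(N)

# One Grover iteration is enough for a four-entry search.
state = diffusion @ (oracle @ state)

print(np.abs(state) ** 2)     # [0. 0. 0. 1.] -> the marked item is found with certainty
```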
In February 2025, SQC published a study in the journal Nature in which the team demonstrated a fidelity rate of 98.9% on Grover’s algorithm with its 14/15 architecture.
On that benchmark, SQC outperformed companies such as IBM and Google, although those firms posted competitive results with tens or even hundreds of qubits, compared with SQC’s four.
IBM, Google and other major projects are still testing and iterating on their respective roadmaps. However, as they increase their qubit counts, they are forced to adapt their error mitigation techniques. QEC has proven to be one of the most difficult bottlenecks to overcome.
But SQC scientists say their platform produces so few errors that it was able to set the Grover’s algorithm record without running any error correction on its qubits.
“If you look at the Grover result that we produced at the beginning of the year, we have the highest-fidelity Grover record at 98.87% of the theoretical maximum and, on that score, we’re not doing any error correction at all,” Simmons said.
Williams says the qubit “clusters” featured in the new 11-qubit system can be scaled up to millions of qubits, although infrastructure bottlenecks may still slow progress.
“Obviously, as we scale to larger systems, we’re going to do some error correction,” Simmons said. “Every company has to do it. But the number of qubits we need will be much smaller. Therefore, the physical system will be smaller. The energy requirements will be less.”




