
Erasure Error Correction Key To Quantum Computers

 

Overview of a fault-tolerant, erasure-converted neutral atom quantum computer. (a) Schematic of a neutral atom quantum computer: a plane of atoms sits beneath a microscope objective that images fluorescence and projects the trapping and control fields. (b) Individual 171Yb atoms serve as the physical qubits. The qubit states are encoded in the metastable 6s6p 3P0, F = 1/2 level (subspace Q). Two-qubit gates are performed via the Rydberg state |r⟩, reached by a single-photon transition (λ = 302 nm) with Rabi frequency Ω. The dominant errors during gates are decays from |r⟩, with total rate Γ = Γ_B + Γ_R + Γ_Q. Most decays are either blackbody (BBR) transitions to nearby Rydberg states (Γ_B/Γ ≈ 0.61) or radiative decays to the ground state 6s2 1S0 (Γ_R/Γ ≈ 0.34); only a small fraction (Γ_Q/Γ ≈ 0.05) return to the qubit subspace. These events can be detected and converted into erasure errors at the end of a gate, either by observing fluorescence from ground-state atoms (subspace R) or by autoionizing any remaining Rydberg population and observing fluorescence on the Yb+ transition (subspace B). (c) A section of the XZZX surface code studied in this work, showing data qubits, ancilla qubits, and stabilizer operations in the order indicated by the arrows. (d) A quantum circuit using ancilla A1, with interleaved erasure-conversion steps, implementing a stabilizer measurement on data qubits D1 through D4. Erasure detection is applied after every gate, and erased atoms are replaced as needed using a movable optical tweezer drawn from a reservoir. Although only the atom found to have left the subspace must be replaced, doing so also guards against undetected leakage on the second atom. Credit: Nature Communications (2022). DOI: 10.1038/s41467-022-32094-6


Why is "erasure" essential to creating useful quantum computers?

Researchers have discovered a new technique for correcting errors in quantum computations, potentially removing a major roadblock to a powerful new realm of computing.

Error correction in traditional computers is a well-established field. Every cellphone, for example, requires checks and fixes to send and receive data over crowded airwaves.

Quantum computers, which exploit extremely short-lived properties of subatomic particles, hold incredible promise for tackling certain hard problems that are intractable for traditional computers.

But those properties are so fleeting that even peeking into the computation to look for errors can bring the whole system crashing down.

A multidisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, together with collaborators Yue Wu and Shruti Puri of Yale University and Shimon Kolkowitz of the University of Wisconsin-Madison, demonstrated how to significantly increase a quantum computer's tolerance for faults while reducing the amount of redundant information needed.


The new method quadruples the tolerable error rate, from 1% to 4%, bringing it within reach of today's developing quantum computers.


The operations you want to perform on quantum computers are noisy, according to Thompson, meaning that computations are subject to a wide range of failure modes.


An error in a traditional computer can be as simple as a memory bit accidentally flipping from a 1 to a 0, or as messy as several wireless routers interfering with one another.

A popular strategy for handling these problems is to build in redundancy, so that each piece of data is checked against duplicate copies.

However, such a strategy requires more data, which creates more opportunities for errors. It therefore works only when the great majority of the information is already correct.

Otherwise, comparing wrong data to wrong data only entrenches the error.

According to Thompson, redundancy is a terrible strategy if your baseline error rate is too high. Getting below that threshold is the biggest obstacle.

Rather than focusing only on reducing the number of errors, Thompson's team essentially made the errors more visible.

The researchers studied the physical causes of error in detail and engineered their system so that the most common source of error effectively erases the damaged data rather than merely corrupting it.

According to Thompson, this behavior is an example of a particular kind of error known as an "erasure error," which is fundamentally easier to weed out than corrupted data that still looks like all the other data.


In a traditional computer, if a packet of supposedly identical information arrives as 11001, it can be risky to presume that the slightly more common 1s are correct and the 0s are wrong.

The case is much stronger, however, if the information arrives as 11XX1, where the corrupted bits are plainly marked.

Erasure errors are much simpler to fix because you know where they are, Thompson said. "They simply don't participate in the majority vote. That is a significant benefit."
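The 11001 versus 11XX1 contrast is easy to make concrete. Below is a minimal Python sketch (an illustration of the general principle, not code from the study) of majority-vote decoding of a repetition code, where erased symbols, marked 'X', simply sit out the vote:

```python
from collections import Counter

def decode_majority(copies):
    """Recover a repetition-coded bit by majority vote.
    'X' marks a known erasure and is excluded from the vote."""
    votes = Counter(c for c in copies if c != 'X')
    if not votes:
        return None  # every copy erased: fail openly, not silently
    return votes.most_common(1)[0][0]

# Corrupted copies with no location information: a risky 3-to-2 vote.
print(decode_majority("11001"))  # -> '1'

# The same two faults flagged as erasures: a unanimous 3-to-0 vote.
print(decode_majority("11XX1"))  # -> '1'
```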

Erasure errors are well understood in conventional computing, but researchers had not previously considered designing quantum computers to convert errors into erasures, according to Thompson.

Their system could, in fact, tolerate an error rate of 4.1%, which Thompson said is well within the realm of possibility for current quantum computers.

By contrast, the most advanced error correction in prior schemes required error rates below 1%, according to Thompson, a level beyond the reach of any existing quantum system with a large number of qubits.

The team's ability to produce erasure errors turned out to be an unexpected payoff of a decision Thompson made years ago.

His work examines "neutral atom qubits," in which a single atom is used to store a "qubit" of quantum information. 

His group pioneered this use of the element ytterbium. Unlike most other neutral atom qubits, which have just one electron in their outermost shell of electrons, ytterbium has two, Thompson said.

As an analogy, Thompson remarked, "I see it as a Swiss army knife, and ytterbium is the larger, fatter Swiss army knife. That additional little bit of complexity you get from having two electrons gives you a lot of new tools."

Eliminating errors turned out to be one use for those additional tools.

The group proposed boosting ytterbium's electrons from the stable "ground state" to excited levels called "metastable states," which can be long-lived under the right conditions but are inherently fragile.

Counterintuitively, the researchers proposed encoding the quantum information in these fragile states.

The electrons are like they're walking a tightrope, Thompson said. And the system is engineered so that the same factors that cause errors also cause the electrons to slip off the tightrope.

As an added bonus, electrons that fall back to the ground state scatter light very conspicuously, so a collection of ytterbium qubits can be illuminated and only the faulty ones light up.

Those that light up can be written off as errors.

The advance required combining expertise in the theory of quantum error correction with expertise in quantum computing hardware, drawing on the multidisciplinary character of the research team and their close collaboration.

Although the physics of this setup is specific to Thompson's ytterbium atoms, he said the idea of designing qubits to produce erasure errors could be a desirable goal for other systems, of which there are many in development around the world, and the group is continuing to work on it.


Other groups have already begun designing their systems to convert errors into erasures, according to Thompson.

"We view this research as setting out a type of architecture that might be utilized in many various ways," Thompson said. "We already have a lot of interest in discovering adaptations for this task," said the researcher.

As a next step, Thompson's team is now working to demonstrate the conversion of errors into erasures in a small working quantum computer combining several tens of qubits.

The article was published in Nature Communications on August 9 and is titled "Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays."


~ Jai Krishna Ponnappan

Find Jai on Twitter | LinkedIn | Instagram


You may also want to read more about Quantum Computing here.


References And Further Reading:


  • Hilder, J., Pijn, D., Onishchenko, O., Stahl, A., Orth, M., Lekitsch, B., Rodriguez-Blanco, A., Müller, M., Schmidt-Kaler, F. and Poschinger, U.G., 2022. Fault-tolerant parity readout on a shuttling-based trapped-ion quantum computer. Physical Review X, 12(1), p.011032.
  • Nakazato, T., Reyes, R., Imaike, N., Matsuda, K., Tsurumoto, K., Sekiguchi, Y. and Kosaka, H., 2022. Quantum error correction of spin quantum memories in diamond under a zero magnetic field. Communications Physics, 5(1), pp.1-7.
  • Krinner, S., Lacroix, N., Remm, A., Di Paolo, A., Genois, E., Leroux, C., Hellings, C., Lazar, S., Swiadek, F., Herrmann, J. and Norris, G.J., 2022. Realizing repeated quantum error correction in a distance-three surface code. Nature, 605(7911), pp.669-674.
  • Ajagekar, A. and You, F., 2022. New frontiers of quantum computing in chemical engineering. Korean Journal of Chemical Engineering, pp.1-10.



Quantum Computing Error Correction - Improving Encoding Redundancy Exponentially Drops Net Error Rate




Researchers at QuTech, a joint venture between TU Delft and TNO, have achieved a quantum error correction milestone. 

They've combined high-fidelity operations on encoded quantum data with a scalable data stabilization approach. 

The results are published in the December edition of Nature Physics. 


Physical quantum bits, or qubits, are prone to errors. Quantum decoherence, crosstalk, and imperfect calibration are among the causes of these errors.



Fortunately, quantum error correction theory suggests that it is possible to compute while simultaneously safeguarding quantum data from such defects. 


"An error corrected quantum computer will be distinguished from today's noisy intermediate-scale quantum (NISQ) processors by two characteristics," explains QuTech's Prof Leonardo DiCarlo. 


  • "To begin, it will handle quantum data stored in logical rather than physical qubits (each logical qubit consisting of many physical qubits). 

  • Second, quantum parity checks will be interspersed with computing stages to discover and fix defects in physical qubits, protecting the encoded information as it is processed." 


According to theory, if the rate of physical faults is below a threshold and the circuits for logical operations and stabilization are fault-tolerant, the logical error rate can be suppressed exponentially.

The essential principle is that as redundancy is increased and more qubits are used to encode the data, the net error rate drops.
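A common textbook model makes this concrete (an illustrative sketch, not the QuTech team's analysis): below threshold, a distance-d code has a logical error rate of roughly p_L ≈ A · (p/p_th)^((d+1)/2, rounded down), since about (d+1)/2 physical faults must coincide to cause a logical error. With assumed values for the prefactor A and threshold p_th:

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Standard below-threshold scaling model for a distance-d code:
    p_L ~ A * (p / p_th) ** ((d + 1) // 2), where (d + 1) // 2 is
    the number of physical faults needed to cause a logical error."""
    return A * (p / p_th) ** ((d + 1) // 2)

p = 0.001  # physical error rate, 10x below the assumed threshold
for d in (3, 5, 7):
    print(f"distance {d}: logical error ~ {logical_error_rate(p, d):.1e}")
# Each increase in code distance buys another factor of p / p_th = 0.1:
# the net error rate falls exponentially as redundancy grows.
```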


Researchers from TU Delft and TNO have recently achieved a crucial milestone toward this aim, producing a logical qubit made up of seven physical qubits (superconducting transmons). 


"We demonstrate that the encoded data may be used to perform all calculation operations. 

A important milestone in quantum error correction is the combination of high-fidelity logical operations with a scalable approach for repetitive stabilization " Prof. Barbara Terhal, also of QuTech, agrees. 


Jorge Marques, the first author and a Ph.D. candidate, adds:


"Researchers have encoded and stabilized till now. We've now shown that we can also calculate. 

This is what a fault-tolerant computer must finally do: handle data while also protecting it from faults. 

We do three sorts of logical-qubit operations: initializing it in any state, changing it using gates, and measuring it. We demonstrate that all operations may be performed directly on encoded data. 

We find that fault-tolerant versions perform better than non-fault-tolerant variants for each category.
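For reference, the logical qubit in this experiment is a distance-2 surface code: four data qubits checked by one four-body X-type parity and two two-body Z-type parities, which three ancilla qubits measure. The numpy sketch below uses this standard layout (the specific qubit indexing is my own assumption, not taken from the paper) to verify the algebra that makes it a logical qubit: all checks commute, and the logical X and Z anticommute just as they do for a single physical qubit.

```python
import numpy as np

# A Pauli operator on 4 data qubits in symplectic form (x | z):
# x[i] = 1 places an X on qubit i, z[i] = 1 places a Z on qubit i.
def pauli(xs=(), zs=(), n=4):
    x = np.zeros(n, dtype=int)
    z = np.zeros(n, dtype=int)
    x[list(xs)] = 1
    z[list(zs)] = 1
    return x, z

def commute(a, b):
    """Two Paulis commute iff their symplectic product is 0 mod 2."""
    (xa, za), (xb, zb) = a, b
    return (xa @ zb + za @ xb) % 2 == 0

stabilizers = [
    pauli(xs=(0, 1, 2, 3)),  # X1 X2 X3 X4: the weight-4 X check
    pauli(zs=(0, 1)),        # Z1 Z2: first weight-2 Z check
    pauli(zs=(2, 3)),        # Z3 Z4: second weight-2 Z check
]
logical_X = pauli(xs=(0, 1))  # X1 X2
logical_Z = pauli(zs=(0, 2))  # Z1 Z3

# The checks commute with each other and with both logical operators,
# so measuring them repeatedly never disturbs the encoded state...
assert all(commute(s, t) for s in stabilizers for t in stabilizers)
assert all(commute(s, L) for s in stabilizers
           for L in (logical_X, logical_Z))
# ...while the two logicals anticommute, exactly like a real qubit.
assert not commute(logical_X, logical_Z)
print("distance-2 surface code algebra verified")
```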



Fault-tolerant procedures are essential to keep physical-qubit errors from turning into logical-qubit errors.

DiCarlo underlines the work's interdisciplinary nature: "This is a collaboration between experimental physics, Barbara Terhal's theoretical physics group, and colleagues at TNO and external partners working on electronics."


IARPA and Intel Corporation are the primary funders of the project.


"Our ultimate aim is to demonstrate that as we improve encoding redundancy, the net error rate drops exponentially," DiCarlo says. 

"Our present concentration is on 17 physical qubits, and we'll move on to 49 in the near future. 

Our quantum computer's architecture was built from the ground up to allow for this scalability."
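Those numbers fit the standard rotated surface code budget, in which a distance-d patch uses d*d data qubits plus d*d - 1 parity-check ancillas. A quick sketch, assuming that layout:

```python
def surface_code_qubits(d):
    """Total qubits in a rotated distance-d surface code:
    d*d data qubits plus d*d - 1 parity-check ancillas."""
    return 2 * d * d - 1

for d in (2, 3, 5):
    print(f"distance {d}: {surface_code_qubits(d)} physical qubits")
# distance 2 -> 7 (the logical qubit demonstrated here),
# distance 3 -> 17 and distance 5 -> 49, matching the roadmap above.
```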


~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.



Further Reading:


J. F. Marques et al, Logical-qubit operations in an error-detecting surface code, Nature Physics (2021). DOI: 10.1038/s41567-021-01423-9


Abstract:

"Future fault-tolerant quantum computers will require storing and processing quantum data in logical qubits. 
Here we realize a suite of logical operations on a distance-2 surface code qubit built from seven physical qubits and stabilized using repeated error-detection cycles. 
Logical operations include initialization into arbitrary states, measurement in the cardinal bases of the Bloch sphere and a universal set of single-qubit gates. 
For each type of operation, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants, and quantify the difference. 
In particular, we demonstrate process tomography of logical gates, using the notion of a logical Pauli transfer matrix. 
This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a milestone on the road to quantum error correction with higher-distance superconducting surface codes."



Fault Tolerance For Quantum Computing Errors

 




Scientists working on quantum computers—dream machines that might solve problems that would exceed any supercomputer—are learning to identify and fix their errors like a kid learns arithmetic. 


In the most recent step, a team demonstrated a method for detecting errors in the setting of a quantum bit, or qubit, that is guaranteed not to make the problem worse. 


Such "fault tolerance" is a crucial step toward the ultimate aim of keeping fussy qubits alive long enough to be controlled. 




“It seems to be a genuine watershed moment,” says Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin who wasn't involved in the research. 

“We knew it was just a matter of time until someone did something like this.” 

However, John Martinis, an experimental physicist at the University of California, Santa Barbara, wonders whether the latest study's authors are exaggerating their findings. 

He describes it as a "really good step. But it's just a first step.” 

A traditional computer uses small electrical switches, or bits, that can be set to 0 or 1, while a quantum computer uses qubits that can be set to 0 and 1 at the same time. 


A qubit may be a single ion spinning one way, the other, or both directions at the same time, or a small circuit of superconducting metal with two distinct energy states. 


  • Thanks to such both-ways-at-once states, a quantum computer can encode all of the possible solutions to certain problems as quantum waves sloshing among the qubits. 
  • Interference cancels out the incorrect answers, allowing the correct solution to emerge. 
  • Such methods would allow a big quantum computer to rapidly factor enormous numbers, which is difficult for a regular computer to do, and therefore defeat encryption systems used to secure data on the internet. 


However, even the tiniest disturbance may destabilize a qubit's fragile condition. 


  • If a qubit were like a regular bit, researchers could simply copy it many times and take a majority vote to keep it in the right state. 
  • If a copy does flip, parity checks, which add up the values of various subsets of the bits, will reveal which one flipped (as sketched below). 
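As a concrete classical illustration (a sketch of the principle, not code from the research), two parity checks over three copies pinpoint any single flipped bit without consulting the data values directly:

```python
def syndrome(bits):
    """Parity checks for a 3-bit repetition code: (b0^b1, b1^b2).
    Each possible single-bit flip yields a distinct pattern."""
    b0, b1, b2 = bits
    return (b0 ^ b1, b1 ^ b2)

# Map each syndrome to the index of the flipped bit (None = clean).
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

received = [1, 0, 1]                  # the middle copy has flipped
flipped = LOOKUP[syndrome(received)]  # -> 1
if flipped is not None:
    received[flipped] ^= 1            # flip it back
print(received)                       # -> [1, 1, 1]
```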


Quantum theory, on the other hand, prohibits the copying of one qubit's state onto another. 


  • Worse, any attempt to measure a qubit to check whether it is in the proper state collapses it to one of two states, 0 or 1. 
  • Researchers circumvent these issues by using entanglement, a quantum link that enables them to distribute the state of an initial "logical" qubit—the object that will ultimately execute the required operation—across many physical qubits. 
  • A superposed 0-and-1 state of one qubit, for example, can be spread across three qubits in a state in which all three are 0 and, at the same time, all three are 1 (see the sketch after this list). 
  • Researchers can then entangle additional ancillary qubits with the group and measure those ancillas to identify faults in the main qubits, without ever touching them directly, in a quantum version of parity checks. 
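The quantum version of the steps in this list can be sketched with plain statevectors (an idealized numpy illustration, not tied to any hardware): an arbitrary state a|0⟩ + b|1⟩ is spread across three qubits as a|000⟩ + b|111⟩, and Z⊗Z parity checks then locate a bit flip without ever revealing the amplitudes a and b.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # bit flip
Z = np.diag([1, -1])            # parity readout operator

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Entangle, don't copy: spread a|0> + b|1> into a|000> + b|111>.
a, b = 0.6, 0.8
psi = np.zeros(8)
psi[0b000], psi[0b111] = a, b

# Quantum parity checks Z(x)Z on neighboring pairs; their outcomes
# depend only on whether the bits agree, never on a and b.
ZZ12, ZZ23 = kron(Z, Z, I), kron(I, Z, Z)

def parities(state):
    return (int(round(state @ ZZ12 @ state)),
            int(round(state @ ZZ23 @ state)))

print(parities(psi))           # (1, 1): no error detected

flipped = kron(X, I, I) @ psi  # a bit flip strikes qubit 1
print(parities(flipped))       # (-1, 1): syndrome points to qubit 1
```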


In fact, the method is considerably more complex since developers must avoid two kinds of errors: 


  1. bit flips 
  2. phase flips. 



Despite this, scientists have made progress. 




In June, Google researchers using superconducting qubits demonstrated that spreading a logical qubit over as many as 11 physical qubits with 10 ancillas could decrease the incidence of one kind of error, but not both kinds at the same time. 


Now, physicists Laird Egan and Christopher Monroe of the University of Maryland (UMD) in College Park, together with colleagues, have demonstrated a method that corrects both kinds of flips simultaneously, and therefore any error. 


Individual ytterbium ions are trapped in an electromagnetic field on the chip's surface to form qubits. 


  • The researchers used nine ions to encode a single logical qubit, plus four more ions to keep watch over the primary ones. 
  • Most importantly, in certain respects, the encoded logical qubit outperformed the physical ones on which it is based. 
  • The researchers, for example, were able to prepare either the logical 0 or logical 1 state 99.67 percent of the time, which is higher than the 99.54 percent for individual qubits (a difference made concrete below). 
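In error-rate terms, that comparison works out as follows (simple arithmetic on the figures quoted above):

```python
logical_fidelity, physical_fidelity = 0.9967, 0.9954
logical_error = 1 - logical_fidelity    # ~0.33% per preparation
physical_error = 1 - physical_fidelity  # ~0.46% per preparation
print(f"logical error rate:  {logical_error:.2%}")
print(f"physical error rate: {physical_error:.2%}")
print(f"relative reduction:  {1 - logical_error / physical_error:.0%}")
# The encoded qubit's preparation error is roughly 28% lower than
# that of the physical qubits it is built from.
```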

Monroe, a founder of IonQ, a company building ion-based quantum computers, says, "This is truly the first time where the quality of the [logical] qubit is greater than the components that encode it."

However, the encoded qubit did not outperform the individual ions in every way. 

Instead, the real breakthrough is the demonstration of fault tolerance, meaning that the error-correction machinery does not introduce more errors than it removes. 

Fault tolerance is the design principle that keeps errors from spreading.

Martinis, on the other hand, has reservations about the term's usage. 


To claim genuine fault-tolerant error correction, he says, researchers must accomplish two additional things. 


  • They must demonstrate that errors in a logical qubit become exponentially rarer as the number of physical qubits grows. 
  • They must also demonstrate that they can measure the ancillary qubits repeatedly to keep the logical qubit stable, he adds. 


Those are the apparent next steps for the UMD and IonQ teams.

He also points out that for the encoded logical qubit to outperform the underlying physical qubits in every way, the physical qubits must first have a sufficiently low error rate. 


~ Jai Krishna Ponnappan


You may also want to read more about Quantum Computing here.

