
QEC in Neutral Atom Quantum Computers

Harry Zhou (Harvard University & QuEra Computing) https://simons.berkeley.edu/talks/harry-zhou-harvard-university-quera-computing-2024-02-15 Advances in Quantum Coding Theory In this talk, I will give an overview of neutral atom quantum computers, an emerging platform for error-corrected quantum computation. The talk will start by introducing the basic building blocks of neutral atom quantum computer operations and describing some differences from other platforms. I will then highlight some of the key characteristics that make this system particularly well-suited for error-corrected computations and exciting opportunities to further enhance the operation of such error-corrected quantum computers.

Simons Institute


Okay, welcome back everyone. We will continue with the theme of neutral atoms, and Harry Zhou will now go into more depth on how to do QEC in neutral atom computers.

Thank you. Hello everyone, I'm Harry Zhou from Harvard University and QuEra Computing, and I'm excited to continue on the topic of neutral atoms and tell you a little more about some of the considerations when trying to do quantum error correction, with a view towards the future. I do want to emphasize that I tried to make this as computer-science friendly as possible, so if you're a computer scientist and there's some physics you don't understand, please interrupt me; and if you're not a computer scientist, also feel free to interrupt.

We heard in the previous talk that we're starting to witness a generation of new quantum systems in which we can really think about early fault-tolerant logical quantum processors. If you look at the diagram that Dolev showed in his talk, it already reminds you a little of what some early classical computers might have looked like: there's a storage zone that is somewhat reminiscent of the RAM in your system, a computing region, and so on. So it's interesting to think about how, as the field progresses, we should further develop these architectures, and what the considerations are for thinking about them at a more abstract level.
In today's talk I'll first give a quick overview of the high-level neutral atom operations, but described in as minimal and simple a way as possible, one that nonetheless captures a lot of what we have done and the natural operations you might want as you start designing further error-correcting codes. It's not quite Python level, but hopefully it's a high-level enough language. We'll then discuss this in the context of our proposals for implementing quantum LDPC codes: we have a proposal for hypergraph product codes, but there are also other interesting code families that people in the community have been thinking about. After that we'll move from the codes to the algorithms, and think about how we take these same native tools and use them in the context of, for example, the transversal gates that you've already heard about throughout this conference. Then I'll go into the lower-level details of the neutral atom system, which you might think of as more of an assembly-level model: the microscopic operations behind everything, which give you a bit more flexibility than the high-level operations I'll start with. Finally, I was asked by the organizers to give a bit of a perspective, or a peek, at where the field might go. This is something we would also like to know, and it is very much under development. The nice thing about the neutral atom field is that it's extremely vibrant in both academia and industry, so this will be an overview of a few things people are thinking about, and I'm sure there will be many other interesting things that I won't cover here that we'll be hearing about in the next couple of years.
Great. We already heard from Dolev a little about how these neutral atom systems work, so I'll just give a very quick overview of some of the key operations again, focusing at a high, block level. A very simple model you might have in mind, and this is not the most general model, is that you have a rectangular array of sites and qubits. In practice it doesn't have to be rectangular, but for simplicity let's say it is. Some of the sites already have atoms, that is, qubits if you like, and some of the sites are empty. A very natural operation is to take any rectangular subarray of qubits and, with bad PowerPoint animation, move it over to some of the other empty sites. The key things to highlight are, first, that we're working with a rectangular array in this case, but more importantly, that the array we move has this rectangular structure. Fundamentally that comes from the types of optical tools we're currently using, but really this is just what is efficient for us to do: pick up a rectangular subarray, move it somewhere else, and then do operations.

So what operations might we want to do? The most interesting one in many cases is the two-qubit gate. We move some subarray such that its qubits end up neighboring some other qubits, and if they are within the entangling region and next to each other, they will do a two-qubit gate. This also generalizes to other types of gates; if you want to do three-qubit gates you can imagine much the same thing, but the two-qubit gate is probably the most relevant for error correction.

[Audience question] That's also very general; if you really didn't want a rectangular subarray, is it easy to do an arbitrary subset of locations? So these two things are really somewhat decoupled. The gate itself happens as long as the qubits are neighboring, so you just pick them up and put them there. The separate question is whether, for the movement, you can move in a different pattern. I think it's very likely that within a few years people will come up with other clever optical tools or manipulation methods; this constraint is just a feature of the current generation. Another thing to mention is that in principle you could go one atom at a time and move it wherever you want, so in principle you can do anything; it's just that doing so right now might not be the most efficient. And, as I'll return to later, there's a lot of creativity in the community right now, so if you have a good reason for not wanting to do the very structured thing, and doing something else gives you a huge improvement, people will definitely try to find ways to make that a reality.
Okay, so we now know how to do the gates and how to configure things. One of the key things to remember from these two slides of native operations is that, at the moment at least, there are very clear structures in the operations that we can support with efficient native control. I talked about this rectangular subarray; one way to phrase it, in a language that will be evocative of some of the code constructions we'll see later, is that it's like a product: if you have something rectangular, one way to see it is that you have two one-dimensional things and you take their product, and that is how a lot of the current tools are formed. But again, this is just what we have in the current generation.
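To make that "product" structure concrete, here is a minimal sketch of the move primitive in this abstract model. The function name and the representation are my own illustrative assumptions for exposition, not an actual control API.

```python
# Minimal sketch of the abstract move primitive: a parallel move is specified by
# a set of rows, a set of columns, and a displacement, so the picked sub-block is
# the Cartesian product rows x cols -- the "product" structure of the optical tools.

from itertools import product

def parallel_move(occupied, rows, cols, drow, dcol):
    """Move every occupied site in rows x cols by (drow, dcol).

    `occupied` is a set of (row, col) sites that currently hold atoms.
    """
    picked = {(r, c) for r, c in product(rows, cols) if (r, c) in occupied}
    moved = {(r + drow, c + dcol) for r, c in picked}
    # Target sites must currently be empty (a real system also forbids picked
    # rows or columns from crossing one another during the move).
    assert moved.isdisjoint(occupied - picked), "target sites must be empty"
    return (occupied - picked) | moved

# Example: shift the sub-block formed by rows {0, 2} and columns {1, 3}
# ten columns to the right, leaving the atom at (5, 5) untouched.
atoms = {(0, 1), (0, 3), (2, 1), (2, 3), (5, 5)}
atoms = parallel_move(atoms, rows={0, 2}, cols={1, 3}, drow=0, dcol=10)
```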
So how does this relate to QEC codes? Let me give one very quick review of quantum error correction to set the stage. Typically we care about quantum codes characterized by their [[n, k, d]] parameters: the number of physical qubits n (usually people count the data qubits here), the number of logical qubits k, and the code distance d. In terms of code design, the name of the game is to make k and d as large as possible, that is, a large encoding rate and a large distance, up to whatever level of error suppression you want, while keeping n as small as possible, so that you use as few physical qubit resources as you can. Of course, we've heard in quite a few talks about the bounds that apply if you only have a 2D-local system: there you can't do much better than the well-known surface code, which I won't go into here. But there has been a lot of amazing work on better codes, even the so-called good quantum LDPC codes with much better parameters, where instead of being constrained by that bound you can have both the number of encoded qubits and the distance scaling linearly with the system size. However, it's also known, from very nice work by Nouédyn Baspin, Anirudh Krishna, and others that generalizes the earlier bounds, that there are constraints on how much nonlocality such codes require, and it seems you actually need substantial nonlocality.
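For reference, the scalings being contrasted here are roughly the following (the first line is the Bravyi-Poulin-Terhal bound for 2D-local codes; c is an unspecified constant):

```latex
% 2D-locality bound (Bravyi--Poulin--Terhal): any [[n,k,d]] code whose checks are
% geometrically local in two dimensions satisfies
\[
  k\,d^{2} \;\le\; c\,n ,
\]
% which the surface code ($k = 1$, $d = \Theta(\sqrt{n})$) essentially saturates,
% whereas "good" quantum LDPC codes achieve
\[
  k = \Theta(n), \qquad d = \Theta(n),
\]
% and therefore necessarily require substantial long-range connectivity.
```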
In the neutral atom case, there is this flexibility of moving the atoms around. This is an earlier-generation video in which, roughly speaking in quantum error correction language, one round of Z syndrome extraction for the toric code is performed. So nonlocal connectivity is definitely possible with neutral atoms, but the question is whether it is really compatible with the parallel, efficient control that we like. That control is really the key thing that enabled all the cool experiments you heard about in Dolev's talk, and I think it's something quite powerful as we think about going to larger systems. For quantum LDPC codes there are codes with much higher rate, but if you think about what they look like, or ask someone to draw them for you, they can look pretty random. Joschka also had some very nice illustrations in his talk of the jumbled mess you get if you just naively draw the diagrams, and arbitrary connectivity may not be efficient to implement. In some cases clever compilation tricks might get you somewhere, but I think it's fair to say that with these rectangular constraints it's not obvious how to implement something with generic or arbitrary connectivity, at least at this point. So really the key is to identify the right types of structure that allow you to do something more. I think what we heard in Ted's talk earlier in the week is a very nice example of this, and Oscar also mentioned the idea of understanding the structure and seeing how it matches the constraints you might have. One particular example that we found, and that we really like, is the hypergraph product code.
This has been introduced a few times already, so I'll go over it quickly. We have a classical code, illustrated here by rectangular checks and classical bits, and the connections I'm drawing schematically just mean that a given check involves the parity of a given bit; if, say, bit number five gets flipped, then that check will flag. For the hypergraph product construction we take two classical codes and form a product between them. The first thing to specify is where the data qubits are: they lie on the intersections where either a classical check sits on both sides or a classical data bit sits on both sides, so the number of data qubits is roughly of order n squared, where n is the number of classical bits. Then we define the checks: the X stabilizers sit at the intersections of classical checks with classical bits in one direction, and the Z stabilizers in the other direction. Finally, we inherit all of the connections from the classical code. This is very important, because inheriting the connections means the quantum code immediately has a clear product structure, which is exactly the kind of thing we can start to utilize.

In terms of code parameters, if you use good classical codes, whose number of encoded bits and distance both scale linearly with the number of physical bits, then you end up with a quantum code that has a linear number of encoded qubits and a distance that scales as a square root. Compared to the surface code, we still have the square-root distance scaling, with perhaps slightly worse prefactors, but more importantly we win on the encoding rate. I should also mention that this type of idea was really the beginning of the whole sequence of improvements that eventually led to the good codes; in some sense, the good codes are also inspired by these original, simpler product-type structures.
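To spell out the construction in formulas, here is a minimal sketch using the standard hypergraph product check matrices; the repetition-code example at the end is my own illustration, not one of the codes from the talk.

```python
# Standard hypergraph product construction: given two classical parity-check
# matrices H1 (m1 x n1) and H2 (m2 x n2), the CSS check matrices are
#   HX = [ H1 (x) I_n2 | I_m1 (x) H2^T ]
#   HZ = [ I_n1 (x) H2 | H1^T (x) I_m2 ]
# acting on n1*n2 + m1*m2 data qubits, and all connections are inherited
# from the classical codes.

import numpy as np

def hypergraph_product(H1, H2):
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    assert not ((HX @ HZ.T) % 2).any()   # CSS commutation condition
    return HX, HZ

# Example: the product of the [3,1,3] repetition code with itself gives the
# [[13,1,3]] planar surface code.
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)
print(HX.shape, HZ.shape)   # (6, 13) (6, 13): 13 data qubits, 6 X and 6 Z checks
```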
Okay, great. Now we can go back and think about how we implement these codes, and I've essentially already told you: we want to match the product structure of the codes with the product structure of the optical tools. They both have "product" in their name, so probably they're the same thing. Indeed, what we can do is essentially implement parallel row permutations first, which lets us implement all of these connections in parallel, and then do the same thing in the vertical direction. Now, the connectivity within each of the classical codes is quite generic; these are just good classical codes that people typically construct with some random construction. So in 1D we do need more general connectivity: we're using the product structure to gain a lot of mileage and make it work for the quantum code, but we still need to implement a potentially arbitrary connectivity within one dimension. If you think about it, this is actually not too difficult to do fairly efficiently. We can run a one-dimensional sorting network; the constraints are a little different from your usual computer-science sorting networks, because we're trying to move things in parallel, but you still get logarithmic depth. This won't surprise you if you think back to your algorithms class: essentially you do a divide-and-conquer algorithm, where you take the qubits that you want to end up on the right side, move them all out to the right, compact the workspace a little, and recurse. In one round you've done half of the work, then you go down one more layer, so the number of layers you need is logarithmic. This gives us an algorithm that implements an arbitrary sorting in 1D, realizing our desired connectivity in a logarithmic number of layers, and when we take the product we still get a logarithmic number.

[Audience question] As you move through this log-depth network, do you have to transfer atoms between the two AODs at every step, or just once? In this particular construction you do indeed need to transfer; if I remember correctly, one step of this is one transfer in and out. That is something one can definitely try to optimize for a particular instance, and for small instances you can typically cut quite a bit relative to this. It might also be interesting, and I'll mention this in the open-questions part, to design the codes themselves so that you minimize metrics like that, or other ones.
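Here is a minimal sketch of the divide-and-conquer idea behind that log-depth 1D rearrangement. It is purely illustrative (my own toy model): it only records which atoms cross the midpoint at each level and ignores the AOD pick-up, drop-off, and spacing details discussed above.

```python
# Divide-and-conquer 1D routing: at each level, atoms whose target lies in the
# other half are moved across in one parallel, order-preserving step, then the
# two halves are recursed on.  The recursion depth is ~log2(n); sub-problems at
# the same level run in parallel on the hardware.

def route_1d(slots, targets, lo, hi, moves):
    """`slots[p]` is the atom at position p; `targets[a]` is atom a's goal."""
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    go_right = [i for i in range(lo, mid) if targets[slots[i]] >= mid]
    go_left = [i for i in range(mid, hi) if targets[slots[i]] < mid]
    moves.append((go_right, go_left))          # one parallel exchange step
    left = [slots[i] for i in range(lo, mid) if i not in go_right] + \
           [slots[i] for i in go_left]
    right = [slots[i] for i in go_right] + \
            [slots[i] for i in range(mid, hi) if i not in go_left]
    slots[lo:hi] = left + right
    route_1d(slots, targets, lo, mid, moves)
    route_1d(slots, targets, mid, hi, moves)

# Example: route 8 atoms to a random permutation in ~log2(8) levels.
import random
perm = list(range(8)); random.shuffle(perm)
slots, moves = list(range(8)), []
route_1d(slots, {a: t for a, t in enumerate(perm)}, 0, 8, moves)
assert all(perm[atom] == pos for pos, atom in enumerate(slots))
```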
ign the codes such that you minimize metrics like that or like other ones yes so uh the moving time required to measure all the checks for a given hyper code be l l because you doation yeah rly speaking exactly so so I'm I'm going to show you um a movie Next of how this works you've already seen a lot of movies so I decided to add some sound to this [Music] [Laughter] one so you can see that we're kind of doing this log dep sorting Network here um and so that's one one gate then you do the same
thing for the second round it is right now quite a bit of moves um but uh nonetheless it's still a log dep and so if you go to a much larger one it actually will still roughly be like this maybe I'll turn the volume down a bit how big is this uh this particular one I think was so I guess this is like how many um so this is 50 in total so I guess it's it's uh like 200 or so including both data and and Silla um the classical code I think is like a um I don't remember the exact parameters it's been
a while since I made this but yeah um right so uh yeah so you can see very clearly from this that taking the product and the 1D log depth rearrangement you can get kind of these implementations of the syndrome rounds of the hyperr product code and this is kind of like the fundament building block also uh if you want to then go and do other stuff like gate operations and whatnot I think some of the things also to mention here U maybe towards kind of the Practical implementation it's actually a c
Towards the practical implementation, there's a cool observation made in a nice paper from Yale, and I think Chris also had some similar ideas in parallel, that this particular way of doing it, where we only use a single ancilla for the syndrome extraction, is actually fault tolerant. I won't go through the details; it relies a little on the row and column structure. Using this you can still get most of your code distance, which is very nice, and I think that's part of the reason why, both in the numerics that we did and in some of the earlier work from Michael, Nicolas, and Maxime Tremblay, we saw relatively good performance. So we can now go and do the circuit-level simulations and try to compare the resources. I'll just give you the end result without going into too much detail. Essentially, we compare an implementation, in this particular case just of the memory, using different families of quantum LDPC codes: we looked at the hypergraph product codes I just talked about, as well as some lifted product codes at the smaller sizes. We see that even at relatively modest sizes you can get pretty good performance, and around a thousand or so qubits you start to see improvements over the surface code of around an order of magnitude.
[Audience question] In these simulations, did you account for the fact that every time you transfer you get about 0.1% loss? In this one we didn't. I would also say that this 0.1% is a current number, and it can definitely be improved in the future. [Audience question] How would you improve it, how low can you get it, and what are the costs associated with pushing it further? Right now, and Dolev can also chime in, I think a lot of it is really limited by our patience in aligning things well, making sure that the traps overlap perfectly; if they're a little bit offset, then you can imagine the atoms sloshing around when you try to pick them up. I don't know about patience; some of the sites can be essentially perfect, so yes, I agree. [Audience question] Different question: can you give us a feel, for one example size, for how many layers a full extraction takes? What you saw earlier was typical of what we were essentially simulating. For example, if you have an instance of size around 50, you take the log of that and multiply by a prefactor of, I think, two or three. So say three times log of 50; log base 2 of 50 is about six, so that's roughly 20 layers, and 20 layers implements a code of size around 2000. It grows quite slowly as you go to larger sizes.
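Collecting that back-of-the-envelope estimate in one place:

```latex
% Depth per syndrome-extraction round, for a classical block length of about 50
% and a prefactor of roughly 2--3:
\[
  \text{movement layers} \;\approx\; c\,\log_2 n_{\mathrm{classical}}
  \;\approx\; 3 \times \log_2 50 \;\approx\; 3 \times 5.6 \;\approx\; 17\text{--}20 ,
\]
% and this grows only logarithmically as the classical block length increases.
```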
So if you look at the numbers, I think they look very nice. I should also mention that most of these improvements are compatible with other techniques; for example, if you have ytterbium, where you can do erasure conversion as Shruti and collaborators have been pioneering, then a lot of these can be combined and you get even more savings. [Audience question] If it's about 20 rounds of moving, that sounds like about a percent of the coherence time, right? Yes, that sounds about right, maybe a little less. When you're making the comparison, you should think about how long you move between two gates, because that sets the idle error rate that you put in, and that's what we put in here. The simulations definitely used numbers that have been achieved in the community but are not the current numbers for our system. But also, if we're really looking at 10,000 atoms, that's still something in the future, so when we look at these extrapolations we have in mind that some of these improvements can be combined. Also, with ideas like the hierarchical codes, you could imagine combining some of these if you're a little worried about the idle error rates. So this can potentially give you an order of magnitude saving.
But what we would really like to do is logical operations, and we heard some very nice talks about this yesterday as well. The particular scheme we had in mind here, mostly just for simplicity, because once you have the surface code you can do universal computation, was to teleport the logical qubit out through an ancilla into another surface code. This is inspired by the scheme from Cohen et al. that was very nicely proposed in Sydney, and in this particular case we slightly modified the construction so that you save a little on the spatial footprint. We ran the same circuit-level simulations, and I think it was quite encouraging to find that you still get thresholds pretty comparable to the plain memory case. A priori that might not necessarily be what you expect, because these two codes look quite different: the LDPC code typically has a lot of expansion, while the surface code is local and topological. Nonetheless the numerics were encouraging as a starting point, and I think it will be interesting to see future improvements to this.
I also want to mention that this is not the only natural code family you can try to implement. There's the very nice bivariate bicycle code construction from IBM that we heard about from Ted earlier this week. I won't go into the details of the construction again; I'll just illustrate at a slightly higher level what you need to do. You have two different types of data qubits as well as some different types of ancilla qubits, and for the classical coding theory people, you can think of this as a quasi-cyclic code, so there are quasi-cyclic structures that are reminiscent of translational structure. After seeing that paper, we immediately realized that there is a natural matching where you essentially just shift the array, and there is very nice work from Fred Chong's group that further formalizes this and analyzes teleporting in and out in quite a bit of detail. So this is another case where a clear translational structure makes the code well matched to the platform. Here's another movie; I forgot to add the music this time. You can see it doing the translations and moving around.
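As a sketch of the translational structure being exploited, here is the standard bivariate bicycle recipe built from two commuting cyclic shifts; the exponents below are the ones reported for the [[144,12,12]] "gross" code and are included purely as an illustration.

```python
# Bivariate bicycle sketch: checks are polynomials in two commuting cyclic shifts
# x and y, so every check is a translate of a fixed pattern -- which is why the
# code maps naturally onto parallel array translations.

import numpy as np

def cyclic_shift(k):
    return np.roll(np.eye(k, dtype=int), 1, axis=1)

def bivariate_bicycle(l, m, a_terms, b_terms):
    x = np.kron(cyclic_shift(l), np.eye(m, dtype=int))
    y = np.kron(np.eye(l, dtype=int), cyclic_shift(m))
    poly = lambda terms: sum(np.linalg.matrix_power(x, i) @ np.linalg.matrix_power(y, j)
                             for i, j in terms) % 2
    A, B = poly(a_terms), poly(b_terms)
    HX = np.hstack([A, B]) % 2
    HZ = np.hstack([B.T, A.T]) % 2
    assert not ((HX @ HZ.T) % 2).any()   # A and B commute, so the CSS condition holds
    return HX, HZ

# Reported [[144,12,12]] example: l=12, m=6, A = x^3 + y + y^2, B = y^3 + x + x^2.
HX, HZ = bivariate_bicycle(12, 6, a_terms=[(3, 0), (0, 1), (0, 2)],
                           b_terms=[(0, 3), (1, 0), (2, 0)])
print(HX.shape)   # (72, 144): 144 data qubits, 72 X checks (and 72 Z checks)
```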
Okay, so I think there is still a lot of interesting work to be done here, and I'll just mention a couple of challenges and open questions to ponder. One is to better understand fault tolerance in the general setting: we now know that for the simple hypergraph product things are fault tolerant, but for other code families, and even for the IBM codes, I don't know that we have a systematic understanding of why and when they are fault tolerant. Improving decoders is certainly very important, and I'm very excited about Joschka's new library; hopefully it can make things both faster and better. Porting over ideas that we know well in other settings, either from the classical literature or from other quantum codes, to the setting of LDPC codes would also be very interesting. Logical operations definitely still need to be improved: we have these huge blocks and it's very difficult to act on them individually. And as we think about all of this, I think it's also worth keeping in mind the possibility of further co-designing the codes and the native operations that we have on this platform. Any more questions before I move on?
[Audience question] Do I understand correctly that for a hypergraph product code you do sequential Z and X measurements, and each stabilizer measurement takes something like w log w steps, where w is the weight of the operator? Maybe first about the sequential part: in the simulations we were indeed simulating sequential extraction, but if you look at the paper there is also what we call a pipelined circuit. Essentially, there are constraints between the X and Z checks, which is why you often end up doing them sequentially, but what you can do is first do the first round of X, and when you're halfway through, bring in the Z's; that Z round can then run simultaneously with that X round, so you stagger them a little. There are very similar tricks, I think, in Oscar's hyperbolic Floquet paper, a similar way of pipelining these things. So that's the first comment: you don't have to do them fully sequentially if you're doing several rounds. The second comment is about how to count the depth. It does scale, but with the maximal qubit degree, which for a hypergraph product code constructed from a (3,4)-regular classical code is eight; and then on top of that there's a factor of log of the system size, so not log w but log l. [Audience question] But for the bivariate codes from Bravyi et al., you don't have the log factor? Exactly.
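Summarizing that depth accounting per round of syndrome extraction:

```latex
% Depth per round, as discussed above:
\[
  \text{depth}_{\mathrm{HGP}} \;\sim\; \deg_{\max} \cdot O(\log \ell),
  \qquad
  \text{depth}_{\mathrm{BB}} \;\sim\; \deg_{\max} \cdot O(1),
\]
% where deg_max is the maximal qubit degree (8 for a hypergraph product of
% (3,4)-regular classical codes) and \ell is the classical block length; the
% bivariate bicycle codes avoid the log factor because their quasi-cyclic
% checks are reached by constant-depth translations.
```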
[Audience question] When we're implementing these codes, is the dominant time waiting for the movement to occur, with the gate times relatively quick? And does that mean we'll also need to explore trade-offs between movement speed and the fidelity of the overall code? I think that could be interesting to look at, and it really depends a little on what regime you're working in. Right now, as Dolev also mentioned, we're not immediately worried about the movement speed, but certainly as we think about these architectures in the long run this is still somewhat open. [Audience question] And do we have a good sense of the relationship between movement speed and noise; is it linear or quadratic in the speed, for example? I would say that to leading order it's mostly just proportional to the amount of time that you move. And, this is slightly more detailed physics, but in the regimes we operate in the move time roughly scales as the square root of the distance. The way to think about it is that heuristically it's constant acceleration: you accelerate and then decelerate, a parabola, exactly.
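The square-root scaling follows from the constant-acceleration picture just described:

```latex
% Move time vs. distance for a constant-acceleration profile (accelerate for the
% first half of the move, decelerate for the second half):
\[
  d \;=\; 2 \cdot \tfrac{1}{2}\, a \left(\tfrac{t}{2}\right)^{2}
  \quad\Longrightarrow\quad
  t \;=\; 2\sqrt{d/a},
\]
% so to leading order the added idle error, which is proportional to t, grows only
% as the square root of the move distance for a fixed achievable acceleration a.
```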
[Audience question] Is the qubit coherence time something inherent to the qubit, or something that can be improved? It definitely is something that can be improved. Dolev mentioned one or two seconds; if we don't move, at this point I would guess it's closer to 10 seconds or so. And I think we have some Atom Computing people here; they have very nicely shown coherence times that can be as long as maybe 50 seconds, and that was with just simple dynamical decoupling, nothing fancy. So there is definitely still quite a bit of room to improve, especially as we try to do these larger systems. [Audience question] When you say constant acceleration, doesn't that also depend on the trap depth? If it's too shallow you can't keep up the constant acceleration. Yes, there will be some scaling with that, and you definitely need to trade off all of these factors: if you had tons of tweezers, maybe you would make them shallower, but then you can't move as fast. [Audience question] When you do the 1D sorting, how far do you have to detour around the other atoms to avoid their traps, the vertical offset, for example? Typically around one or two microns is already sufficient, especially if they're just moving past each other relatively quickly; say two microns. It's definitely more than enough workspace between our sites.
Great, okay. I'll go a little faster in this next section. Ultimately we're interested in not just implementing quantum codes; the grand target is doing actual computation. So a natural question is whether, when we think about computations, there are ways to improve things using these same native operations. You already heard a lot about this in Dolev's talk, where you can implement transversal gates by directly grabbing an entire logical qubit and doing operations with it. Most of this was in Dolev's talk, so I'll go through it quickly. For transversal gates, the nice thing is that because you're just matching up qubits from the two different code patches, errors don't spread within a code block, and that typically helps with fault tolerance. But one thing that maybe wasn't studied as much in the previous literature is what this actually looks like in an algorithm, and how you decode it. A lot of the early discussion was simply "it's nice, we can do transversal gates," but it's surprisingly difficult to find works on actually decoding them; there was some very nice work from Michael and Alex analyzing this, but it's not something people have really looked at in earnest. Here you might imagine that when you run an algorithm, a physical error occurs in the first logical qubit, and then you do a bunch of CNOTs, so the error spreads to all of the other logical qubits. If we're not careful, and we don't do the decoding well, we might worry about a massive error buildup. But the key point is that there was one error event, and you know that this error event propagates in a deterministic way across the circuit. This is information you actually have, and you should use it to improve your decoding. One comment that I really liked: when Scott Aaronson was discussing this, Craig Gidney left a comment saying that doing correlated decoding is not a bug, it's a feature, or maybe he even called it competence. Since you know what happened, you should be doing it: you're going to run the same circuit, so you improve the decoding accordingly.

So this is something we've been looking at in quite a bit more detail; Maddie is the main theorist leading this, and Chen from QuEra has also been doing a lot of the work. The main challenge is that this decoding problem is inherently a hypergraph decoding problem. I won't go through exactly where that comes from; I'll just say that unlike the usual surface code decoding problem, where you can typically decompose things into weight-two edges so that it becomes a graph, here there are error mechanisms that are not decomposable into weight one or two, at least if you do it naively, and that makes it inherently a hypergraph problem. We took two different approaches to decoding this. One is to just do most-likely-error decoding: essentially you phrase it as an optimization problem and throw it into your favorite integer programming solver.
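As a concrete toy version of that formulation: minimize the total log-likelihood weight of an error pattern subject to reproducing the observed syndrome. The brute-force enumeration below is my own illustration for tiny instances; in practice the same constraints would be handed to an integer-programming solver, as described above.

```python
# Most-likely-error decoding as a constrained minimization, enumerated by brute
# force for a toy instance.  Columns of H with three or more nonzeros are the
# hyperedges that make the correlated-decoding problem a hypergraph problem.

import itertools
import numpy as np

def mle_decode(H, syndrome, priors):
    """Return the most likely error consistent with the syndrome.

    H: (num_checks x num_error_mechanisms) binary matrix.
    priors: per-mechanism error probabilities p_i, weighted by log((1-p)/p).
    """
    weights = np.log((1 - priors) / priors)
    best, best_cost = None, np.inf
    for bits in itertools.product([0, 1], repeat=H.shape[1]):
        e = np.array(bits)
        if np.array_equal((H @ e) % 2, syndrome) and weights @ e < best_cost:
            best, best_cost = e, weights @ e
    return best

# Toy example: three checks, four error mechanisms, the last of which is a
# weight-three hyperedge touching all three checks.
H = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])
priors = np.array([0.01, 0.01, 0.01, 0.02])
print(mle_decode(H, syndrome=np.array([1, 1, 1]), priors=priors))  # -> [0 0 0 1]
```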
The other approach, which I also won't talk about in detail here, is inspired by the union-find algorithm for the surface code; Nicolas, Michael, and Vivien had a very nice paper earlier that analyzed some of the theoretical properties of this kind of decoder. And I should also mention that there are many other possible approaches. [Audience question] When you try that, do you get bad results? What is it about this problem once you go to the hypergraph setting? I would say that we're also not necessarily getting great results with union-find here, and thinking of better ways to improve it is a great research question. You'll see on the next slide that there is still a performance gap relative to the most-likely-error decoder.

Okay, so a simple thing we can do is to study this simple circuit where you choose to put the noise either before or after the CNOT. You can see that if you just do matching on each block independently, then when the errors are mostly before the CNOT you effectively get double the error rate down here, so your threshold is roughly half the usual threshold; whereas if you do the correlated decoding, you recover a threshold that is much closer to the original one.
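The rough intuition behind that factor of two:

```latex
% An error occurring just before the transversal CNOT is copied onto the partner
% block, so a decoder that treats the two blocks independently effectively sees
\[
  p_{\mathrm{eff}} \;\approx\; 2p
  \qquad\Longrightarrow\qquad
  p_{\mathrm{th}}^{\mathrm{independent}} \;\approx\; \tfrac{1}{2}\, p_{\mathrm{th}},
\]
% whereas correlated decoding exploits the fact that the copy is deterministic
% and recovers a threshold close to the original one.
```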
What we can also do now is look at somewhat deeper circuits. Here we have four logical qubits and depth 32; we do one round of transversal gate operations, followed by some number of rounds of syndrome extraction that we vary. If you do d rounds per gate you return to the usual setting, but you can also reduce that and see where things really start to turn around. What we see is that you can still get pretty competitive thresholds even with just one round per CNOT. So we have a pretty deep circuit, we then measure, and with just one round per CNOT in this deep circuit we still have a threshold. I should say that the thresholds we're quoting, as you can see, haven't quite crossed at the number that we quote; we're being a little careful about the precise value. But you can see that the threshold is competitive and doesn't degrade too much. [Audience question] This is only for four logical qubits; can you say anything about bigger circuits across more qubits? I think you can probably come up with arguments that at least tell you something about how it generalizes. I agree that you can't precisely say this immediately tells me that my 20-logical-qubit circuit will have this exact error rate, but I think it still teaches you what to expect. And one thing we did do is vary the depth, and you see that things are all consistent. This is averaged over different random depth-32 circuits, multiple circuits, I think. [Audience question] What was the worst case over those circuits? I don't remember off the top of my head, but the variation was not large; and again, we checked a different depth with another randomly generated circuit and you see results consistent with this. Maybe I'll give another intuition for why you might expect it to be similar: essentially it's a competition between the entropy you remove with your measurements and the noise that gets added. For a different circuit, doing CNOTs and so on, the amount of noise added and the amount of entropy removed, this thermodynamic balance, is roughly the same, so generically you would expect the behavior to be somewhat similar, though of course the details could vary a little. I also think you can actually prove code distance in this setting. [Audience question] I believe the argument for maximum-likelihood decoding, but I don't necessarily believe it for, say, hypergraph union-find or anything like that, because it's very graph-specific. Yes, for a specific decoder that definitely needs to be checked.
I would maybe say that it's not obviously clear that it will be exactly the same, but I think it's not unreasonable to say that heuristically it will likely be similar. Okay, great. We can also look at the actual spacetime volume per logical operation. What you see is that as we vary the number of rounds per CNOT, because you're adding these rounds of syndrome extraction, your spacetime volume goes up. Maybe I should explain what we do: we have a target logical error rate, and then you can optimize the number of rounds per CNOT to minimize the spacetime volume; really we're trying to minimize the resources needed to run the given circuit while hitting the target logical error rate. Because you're adding in these extra rounds of syndrome measurement, you're paying a lot of cost for them, whereas as long as your threshold isn't affected too much, you maybe only need to slightly increase the distance when using fewer rounds, and that's a polylog overhead. [Audience question] Are you following each gate with the same number of rounds? Yes, also one round. So really there's an interesting balance between the amount of information you take out and the amount of error that gets added, and it's interesting that we do indeed find regimes where you are potentially saving these factors of d.
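Schematically, the optimization just described looks like the sketch below. The cost model (the error-suppression ansatz, the threshold penalty for doing fewer rounds, and the per-patch qubit count) is a toy stand-in of my own, only meant to show the shape of the search, not the model used in the actual simulations.

```python
# Toy search over code distance d and syndrome-extraction rounds r per CNOT:
# minimize spacetime volume subject to hitting a target logical error rate.

import itertools

def spacetime_optimize(target_logical_error, num_layers,
                       p_phys=1e-3, p_th=1e-2):
    best = None
    for d, r in itertools.product(range(3, 31, 2), range(1, 11)):
        # Toy heuristic: fewer rounds per CNOT slightly degrades the effective
        # threshold; error per layer then follows the usual exponential ansatz.
        p_th_eff = p_th * (1 - 0.5 / (r + 1))
        p_layer = 0.1 * (p_phys / p_th_eff) ** ((d + 1) // 2)
        if num_layers * p_layer > target_logical_error:
            continue  # does not reach the target accuracy
        volume = num_layers * r * (2 * d * d)   # layers x rounds x qubits per patch
        if best is None or volume < best[0]:
            best = (volume, d, r)
    return best   # (spacetime volume, distance, rounds per CNOT)

print(spacetime_optimize(target_logical_error=1e-6, num_layers=32))
```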
With all of this, I think there are still a lot of questions. The earlier question hit on a very important point, which is that in practice these are going to be pretty complicated decoding problems, so how can we maintain the performance at large scale if we're trying to do this in practice? I think that's still an open question, and it's a very interesting one for people thinking about decoders: whether there are decoders that either improve the practical performance or come with better guarantees. In our case, the most-likely-error decoder in principle probably has exponential runtime, while the union-find-style decoder is polynomial in principle, but then you may suffer in the error rate when you run it in practice. [Audience question] On the plot, do the lines stay parallel, so the effective distance is preserved? I don't remember off the top of my head, but we can take a quick look; these are the same code distances, and at least on this graph I would say the lines look roughly parallel, but that requires more analysis for sure. And this line is the most-likely-error decoder, so I would expect it to still have the full distance, but we can check more carefully. [Audience question] So what you're telling me is that now that I'm doing something with my logical qubits, I don't have to do d rounds? I would say that even if you're doing nothing, maybe you shouldn't necessarily be doing d rounds. [Audience question] But you can always exhibit a time-like error that connects the boundaries. Right; at least in this case you have a deep circuit, and you absorb those d rounds into it, and I think it's an interesting question what this looks like in an actual algorithm or in a different setting. In that case, this is a nice example of shrinking the circuit size, and that is something we're also interested in looking at; it's a great question. So there's this question of speed versus accuracy, which is indeed a little more challenging, and maybe eventually, because your decoders can't keep up, you do choose to do a few more rounds; maybe the reason you're not doing this super fast is not that the error rate can't keep up, but that your decoders need it, or that you want to partition things into smaller blocks. So I think it's definitely still a very open question, and designing the decoders, thinking about real-time decoding, where there's been increasing work, and co-designing all of this with the algorithmic context would be very interesting.
Okay, so I'll now move back to talking about the model you might want to have for these quantum computers, and comment briefly on what future systems might look like. I should emphasize that the technology is still developing very rapidly, and tomorrow's systems might look very different from what you're seeing here, but to give you a working model of what we currently have: you have access to hundreds to maybe low thousands of qubits, pretty high-fidelity gate operations with long coherence times, and these parallel moves in a very natural way. You also have access to some amount of mid-circuit measurement and feed-forward. I think there's an interesting analogy from the CS side with a classical RAM, bringing things in and out; there are also interesting parallels with DRAM, which actually needs periodic refreshes, so it's quite interesting how some of these parallels go.

Here is a slightly more microscopic view of what these parallel operations look like. I'll go through it quickly; since this is recorded you can watch it later if I'm going too fast. Sites can contain zero or one qubit. There are fixed sites, and although I assumed a 2D rectangular grid earlier, these can really be quite general and arbitrary, so that already gives you some degree of freedom. Then there are movable sites; because of current hardware constraints you can only move these in parallel as rectangular subarrays, but a subarray also doesn't have to be completely filled, so some of its sites can be empty, and we would just pick up, say, these three atoms and move them. When we move them, the rows and columns move together, but a row cannot move past another row, and the same for columns, because you might expect them to collide; that may not be true in the future, but at the current stage that's how it is. We've also discussed that you can take atoms and drop them into different types of traps, say into a static trap, and do these transfers, but that currently has the 0.1% cost; we're pretty optimistic, as I already mentioned, about reducing that, but it's something to keep in mind when you're designing things. For the gates, we've seen a lot of great work, both from the community we're in and from other groups: single-qubit gates are nowadays routinely in the three-to-four-nines range, and they can be parallelized quite well across the system with these same techniques. Two-qubit gate fidelities are on the order of 99.5%, but right now there are parallelism constraints where you apply the same gate everywhere; maybe that's not a big deal for error correction, because we're usually doing the same type of CZ or CNOT gate anyway, but it is something to think about.
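Collecting the working model just described in one place (the numbers are the approximate ones quoted in this talk, not a hardware spec):

```python
# Rough snapshot of the abstract hardware model described above.

from dataclasses import dataclass

@dataclass
class NeutralAtomModel:
    num_qubits: int = 1000                 # hundreds to low thousands of qubits
    single_qubit_fidelity: float = 0.999   # roughly three to four nines, parallelizable
    two_qubit_fidelity: float = 0.995      # ~99.5%, applied in parallel (same gate type)
    transfer_loss: float = 1e-3            # ~0.1% per trap-to-trap transfer (improving)
    coherence_time_s: float = 1.0          # ~1-2 s while moving, longer when static
    parallel_move_shape: str = "rectangular subarray; rows/columns move together"
    midcircuit_measurement: bool = True    # some capability, with feed-forward
```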
And I think one nice feature of this platform is that the noise sources are relatively well understood; because it's a relatively clean system, you can really look at it and then design and adapt. There's very nice work from Shruti Puri and Jeff Thompson on tailoring the QEC codes to the noise characteristics you actually have, whether erasures or biased erasures. In our own work, and this is an error breakdown from our simulations in the high-fidelity gates paper, and also in some very nice tables in other work, for example from Manuel Endres's lab, you can really break down the error channels in detail, and these will continue to improve as we push to better error rates and understand the error models better. So programming a neutral atom quantum computer really looks like having all of these different toolboxes; you utilize a lot of control parallelism, and you can do pretty complex circuits nowadays.

But I want to return to the question of what is really enabling these complex experiments, and why, for many people in the community, it perhaps came as a bit of a surprise that people are suddenly doing such complex things. I think this was also mentioned in Dolev's talk: when you think about it, the amount of control you have in these systems is just very different. A classical computer has billions of transistors and maybe on the order of a thousand external controls, whereas for current-generation quantum computers you typically have, say, three wires per qubit, which means that for a 50-qubit experiment you're looking at hundreds of individual high-performance controls to manipulate the system. A large part of why even relatively small, more academic labs with five or so people, like the Harvard experiment, can still do these very complex things is that the control is so simple. We don't need a team of programmers writing tons of software, or hardware engineers; eventually we probably will, and will want to, but right now we're at a stage where there are still scientific discoveries to be made, both on the science side and, to some degree, on the error correction front. Here, one wire controls one AOD, and this entire circuit, because we structured it in a way that utilizes the parallelism, uses in total maybe five AWGs in the lab, compared to hundreds of controls in many other platforms. So with this we have these features of nonlocal connectivity and parallel, efficient classical control, and that combination allows one to do very complex things.
[Audience question] It's not the number of qubits, but the sudden increase in two-qubit gate fidelity; the description of the two-qubit gate didn't seem that new to me, so what was it that made the fidelity go up? I think this is again an example where the community just hadn't had the patience to go and hammer it down. Of course, on the technical front there were some innovations: the high laser power was a big thing, and I think it will be exciting to see some of the other neutral atom qubits also do much better as they increase their laser power; there were better cooling schemes that were implemented; and the gate design actually contributed quite a bit, using this time-optimal gate with a smooth pulse that gets rid of a lot of transients that we hadn't explicitly characterized but that were probably plaguing our previous gate. [Audience question] And with your current smooth pulses, what do you think the fundamental or technical limit is? I would say people typically think three nines is achievable in these systems; going to four nines is probably hard. If ions can go to four nines I would believe it for ions, but for neutral atoms that's a little bit hard. Although maybe there are creative ways: if you have these erasure-qubit tricks, then needing three nines might effectively be more like needing two nines of raw fidelity. [Audience question] In the previous slide, is individual addressability not necessary at all? There definitely are operations where you want individual addressability, and we do have individual single-qubit rotations, which can do single-qubit gates. Most of the time you want the parallelism, but you do also have the freedom to address individual qubits. [Audience question] But for that, don't you need the number of controls to be the number of physical qubits? You're doing those operations at different times and at different sites, so you can potentially parallelize a lot of that too; it is an open question exactly how you parallelize it, but I think it is possible. [Audience question] You increased the power, I think ten times, to get to the 99.5% fidelities; if you want three-nines fidelities, how much more of a power increase would you need? And if you increase the number of qubits to thousands, does the required power also increase proportionately? That's a great question, and it is something that does need to be solved in these systems. Right now I think we're probably more limited by things like transients than by power, and there's still quite a bit of room in optimizing, for example, top-hat beam profiles for the atomic physics; these are all things that can be further optimized. So I think we are optimistic that with these schemes you can get good gates, probably at the three-nines level, at least on small system sizes; at larger system sizes it's an interesting question that still needs to be solved. Dolev, I don't know if you have any more comments than that. I agree with everything you said, and I'll clarify the record on one thing: we got to about 99.2 with the exact same laser power, and then for the 99.5 we're not even utilizing all of the extra power; the main thing was really the gate design.
Of course, we're also interested in what the limitations are; I'm sure you'd also like to hear that. We obviously need better qubits and more qubits; that we've already touched on. One thing that has also been discussed a couple of times is the fact that we're not doing continuous operation here, so our optimization, and what we choose to do, is informed by the spacetime footprint rather than just the space footprint you might typically think about. That being said, all of these issues are being actively tackled in the community. A year ago the same discussion was happening about mid-circuit measurement, and since then we've had five different approaches, all of which seem to work quite well. Here we've already seen two papers starting to address continuous operation, some from Atom Computing and some from MPQ, so I think it's very exciting and I'm sure we'll see more. But really, as we think about large scale, we're still very far away: we've brought the physical error rates down a little, but we're really looking at needing error rates of 10^-12 or 10^-15, so we need all the breakthroughs we can get. This is not something we should view as a static technology; this is just a snapshot, and we need to be creative in thinking about what we can have. I was also asked to give a possible model for a next-generation quantum computer. I don't know what it will actually look like, so I'm just putting up something that might appear, and hopefully will appear, in a few years, I guess based on the QuEra roadmap.
So there are 10,000 qubits or so, hopefully very high gate fidelity operations on them, continuous reloading, and mid-circuit nondestructive measurement. Many of the shortcomings that you see in this snapshot are things people have pretty concrete ideas to address, and it will be interesting to see how it develops. [Audience question] Sorry, would you say what the total number of gate operations would be? The hope would definitely be to get into the regime of a million or so gates, but that really depends on a lot of details, and whether we'll get there or not is a question. [Audience question] I haven't seen the QuEra roadmap, so I'm not sure, but for the mid-circuit measurement, are you going to do just one or two measurements on the 10,000 qubits, or will it be more than that? The hope would be that if you're really operating continuously and trying to do a million gates, then you'll definitely be doing many measurements.
I also want to mention that there are a lot of exciting new hardware functionalities and ideas; the neutral atom community is very vibrant. There are new atomic species that a lot of people are working on; there's new optical hardware, like integrated photonics or optical lattices; there are also cavities for networking, where you can think about how to actually distribute the algorithms. So there are a lot of interesting directions to think about. And what I really want to say, especially to the theorists, is that you should keep the constraints we have in mind, but it's also very important to ask, even regardless of those constraints, where you can get a 10x improvement. If you have ways of getting a 10x improvement, people might be inspired to think about ways that it eventually becomes, say, a 3x improvement in practice, but it will be inspiring. We've seen this example with the LDPC codes: fifteen years ago, when these hypergraph product codes were proposed, I don't think people had any of these considerations in mind, but they eventually inspired work on actually implementing them. And one thing to keep in mind as you're building your own model is that this control parallelism is a key aspect; you can define what you mean by control parallelism, and it doesn't have to be in the sense that we're talking about.
Maybe that inspires someone to make a different type of device that has a different type of control parallelism. We've learned a lot from the community; I think Dolev already showed this slide, but I also want to emphasize that the lessons we've learned often aren't really unique to our system. We've seen that structured nonlocal connectivity can be very powerful, and the same ideas also apply to trapped ions and to quantum dots; I think people are starting to think about these as well, and there are also the very nice works that Ted Yoder talked about. Parallel, efficient classical control doesn't have to be unique to this platform either. For superconducting qubits, I was told by my friends that the per-channel cost is $10,000 per qubit; if we think about a million qubits, that's $10 billion, and I'm not sure the companies are necessarily willing to shell out that amount; maybe, but it remains to be seen. So I think there will definitely be a point where, even in these other systems, you think about how to do this control efficiently, and it will be interesting to see how other communities come up with approaches to address this.
So, in summary, I've told you a little about our perspective on where error correction for this platform is today and where it might be tomorrow. I'll just flash this slide from Dolev again: there's really a lot we can think about in terms of co-designing these systems, thinking about the error-correcting code, but also about the algorithm and the native hardware capabilities of today and tomorrow, and it will be very exciting to see where the field develops in the next few years. With that, I'd like to thank a lot of the great people who have been involved. This is the team for the logical algorithms experiment that Dolev showed; for the correlated decoding, Maddie and Chen mostly took the lead; and the LDPC project was a great collaboration with Liang's group, where in particular Qian and Pablo really spearheaded a lot of the hard numerics. So with that, thank you very much, and I'm happy to take questions.
[Audience question] About the primitives: you said you can move a block of qubits, but in some of the movies I saw that you could move, say, every other row of qubits, or every third row. Can you actually do that; is it part of the primitives? Maybe I should clarify that the rectangular subarray doesn't have to be contiguous, so yes, you can basically pick up whichever rows and columns you want, and this is very easy to program. [Audience question] Could you do a checkerboard pattern? A checkerboard is not a product anymore, so that is not so easy to do, although I also wouldn't rule it out; people are creative with their optical tools. [Audience question] But currently it allows you to do every other row, or every third row, or every fifth row? Yes, exactly. [Audience question] And if you just rotate the array by 45 degrees? Exactly; see, we already have some creative optical designers here. That's probably how I would do it. [Audience question] If I understand correctly, the current scalability of your platform depends on transversal gates and this kind of collective control, but for universal computation you also need non-transversal operations. If scalability really needs this kind of collective control, that might be a bit of a concern; is there some way you have in mind to overcome this for non-transversal gates? I guess in the usual setting people already mostly had, for example, these magic state distillation approaches, where you have just a little bit of the non-transversal part in one place, and the rest of the system still does your regular fault-tolerant operations; we can imagine doing something like that here as well. I don't view it as a particularly strong limitation, but at the same time it would be interesting to think about; you saw the [[8,3,2]] code, and I think people have very interesting ideas about how those constructions generalize, so it would also be interesting to see different approaches to that. Maybe one last question. [Music] Thank you. We'll take another half-hour break.
