Note on Belief Propagation

Weilei Zeng
April 16, 2020
Abstract

This work gives a detailed description of the Belief Propagation (BP) algorithm and several of its variations.

I Description of the binary BP decoder

The Belief Propagation (BP) algorithm can approximate the marginal probability of errors, given the received bits or the syndrome. It is designed for factor graphs. When the graph is a tree, it gives the exact marginal probability. There are several equivalent or approximate versions of it; let us start with the classical one.

I.1 codeword-based

BP decoder for classical binary codes:

A code is defined by a factor graph (here assumed to have node degree $d+1$) with variable nodes $V=\{v_i\}$ and check nodes $C=\{c_i\}$. The message from variable to check is

u_{v\to c}(x) = m(x) = P(y_v \mid x)\prod_{j=1}^{d} M_j(x), \qquad \text{where } \{c_1,\dots,c_d\} = N(v)\setminus c,\; x\in\{0,1\}

The message from check to variable is

u_{c\to v}(x) = M(x) = \sum_{x_1,\dots,x_d}\delta\!\left(\mathbf{h}_c\,\mathbf{x}^T = 0\right)\prod_{j=1}^{d} m_j(x_j), \qquad \text{where } \{v_1,\dots,v_d\} = N(c)\setminus v

The initial condition is $M_j(x)=1,\; x\in\{0,1\}$.

(To generalize to the nonbinary case, one only needs to change the parity-check condition from a matrix multiplication to a symplectic product. Everything else extends smoothly.)
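As a concrete reference, here is a minimal brute-force sketch of the two updates in probability form. All names are illustrative; the check update enumerates every neighbor assignment, so it is exponential in the check degree and is only meant to mirror the formulas above, not to be efficient.

```python
import numpy as np
from itertools import product

def variable_to_check(prior_v, M_in, exclude):
    """u_{v->c}(x) = P(y_v|x) * product of incoming check messages, skipping check c."""
    m = prior_v.copy()                        # length-2 array: P(y_v | x) for x = 0, 1
    for c, M in M_in.items():                 # M_in: check index -> length-2 message
        if c != exclude:
            m = m * M
    return m / m.sum()                        # normalize for numerical stability

def check_to_variable(h_row, m_in, v, syndrome_bit=0):
    """u_{c->v}(x) = sum over neighbor assignments with h_c . x^T = s_c (brute force)."""
    neighbors = [u for u in np.flatnonzero(h_row) if u != v]
    M = np.zeros(2)
    for x_v in (0, 1):
        for assign in product((0, 1), repeat=len(neighbors)):
            if (x_v + sum(assign)) % 2 == syndrome_bit:   # parity constraint
                p = 1.0
                for u, x_u in zip(neighbors, assign):
                    p *= m_in[u][x_u]          # m_in: variable index -> length-2 message
                M[x_v] += p
    return M / M.sum()
```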

I.2 codeword-based, LLR-simplified

By rewriting the probabilities and messages in terms of the log-likelihood ratios

$l_i=\log(m_i(0)/m_i(1))$ and $L_i=\log(M_i(0)/M_i(1))$, one gets the simplified messages:

l_i = l_i^{(0)} + \sum_{j=1}^{d} L_j
L_i = 2\tanh^{-1}\prod_{j=1}^{d}\tanh(l_j/2)
l_i^{(0)} = \log\frac{P(x_i=0\mid y_i)}{P(x_i=1\mid y_i)}, \qquad y_i\in\{0,1\}

Depending on whether $y_i=0$ or $1$, the sign of $l_i^{(0)}$ is flipped.

In this simplified form, only one message needs to be sent per edge, instead of two messages for 0 and 1 respectively. Hence, the complexity is reduced.
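The two LLR updates become one-liners. A minimal sketch (function names are illustrative):

```python
import numpy as np

def check_update_llr(l_in):
    """Tanh rule: L = 2 atanh( prod_j tanh(l_j / 2) )."""
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(l_in, dtype=float) / 2.0)))

def variable_update_llr(l0, L_in):
    """l = l^(0) + sum_j L_j."""
    return l0 + float(np.sum(L_in))
```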

I.3 syndrome-based

In the classical case, the received bits were used, not the syndrome. In the quantum case there are no received bits, only the syndrome. Hence, to obtain the formulas for the quantum case, one needs to change to a syndrome-based decoder.

Ref. (MacKay, 2003, chapter 47.2) shows that the codeword-based BP decoder is equivalent to the following syndrome-based decoder

u_{v\to c}(x) = m(x) = P(x)\prod_{j=1}^{d} M_j(x), \qquad \text{where } \{c_1,\dots,c_d\} = N(v)\setminus c
u_{c\to v}(x) = M(x) = \sum_{x_1,\dots,x_d}\delta\!\left(\mathbf{h}_c\,\mathbf{x}^T = s_c\right)\prod_{j=1}^{d} m_j(x_j), \qquad \text{where } \{v_1,\dots,v_d\} = N(c)\setminus v

The initial condition is still $M_j(x)=1,\; x\in\{0,1\}$.

The logic is as follows: the codeword view calculates $P(x|y)$ (the most likely input codeword $x$ given the received vector $y$), while the syndrome view calculates $P(e|s)$ (the most likely error $e$ given the syndrome $s$). Here $x$ is a valid zero-syndrome codeword; $y$ is the received vector; and $e$ is an error vector matching the syndrome $s$. Literally, these two marginal probabilities describe the same event, and thus should lead to the same result. Mathematically, one has to write it out carefully and show that the two are isomorphic.
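The equivalence can be checked numerically by brute force on a toy code. The following sketch (the specific $H$, $p$, $y$ are illustrative; a binary symmetric channel with flip rate $p$ and a uniform prior over codewords are assumed) enumerates all codewords and all errors and verifies that the two marginals agree after the change of variables $e = x + y$:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])                 # toy parity-check matrix
p, y = 0.1, np.array([1, 0, 0, 0])           # flip rate and received word

def marginal_x(i):                           # P(x_i = 1 | y), codeword view
    num = den = 0.0
    for x in product((0, 1), repeat=4):
        x = np.array(x)
        if not (H @ x % 2).any():            # x must be a codeword
            w = np.prod(np.where(x == y, 1 - p, p))   # P(y | x) on the BSC
            den += w
            num += w * x[i]
    return num / den

def marginal_e(i):                           # P(e_i = 1 | s), syndrome view
    s = H @ y % 2
    num = den = 0.0
    for e in product((0, 1), repeat=4):
        e = np.array(e)
        if ((H @ e % 2) == s).all():         # e must match the syndrome
            w = p ** e.sum() * (1 - p) ** (4 - e.sum())
            den += w
            num += w * e[i]
    return num / den

for i in range(4):                           # e_i = x_i + y_i, so flip when y_i = 1
    expected = marginal_e(i) if y[i] == 0 else 1 - marginal_e(i)
    assert abs(marginal_x(i) - expected) < 1e-12
```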

I.4 syndrome-based, LLR-simplified

In a similar fashion, one can rewrite the above equations in log-likelihood-ratio form and reach the following simple formulas Liu and Poulin (2018)

l_i = l_i^{(0)} + \sum_{j=1}^{d} L_j
L_i = (-1)^{s_i}\, 2\tanh^{-1}\prod_{j=1}^{d}\tanh(l_j/2)
l_i^{(0)} = \log\frac{P(x_i=0)}{P(x_i=1)} = \text{const}

The posterior log-likelihood ratio can be estimated as $l_i = l_i^{(0)} + \sum_{j=1}^{d+1} L_j$.
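Putting the pieces together, here is a compact sketch of the syndrome-based LLR BP decoder with a flooding schedule. It is a minimal illustration, not an optimized implementation: $H$ is the binary parity-check matrix, $s$ the syndrome, and l0 the vector of prior LLRs; the clipping constant only keeps arctanh finite.

```python
import numpy as np

def bp_syndrome_decode(H, s, l0, max_iter=50):
    """Syndrome-based LLR BP, flooding schedule. Returns (e_hat, converged, l_post)."""
    m, n = H.shape
    L = np.zeros((m, n))                              # check -> variable messages
    for _ in range(max_iter):
        # variable -> check: extrinsic LLR l_{v->c} = l0_v + sum_{c' != c} L_{c'->v}
        l_post = l0 + L.sum(axis=0)
        l_msg = np.where(H != 0, l_post - L, 0.0)
        # check -> variable: tanh rule with the syndrome sign
        for c in range(m):
            vs = np.flatnonzero(H[c])
            t = np.tanh(l_msg[c, vs] / 2.0)
            for k, v in enumerate(vs):
                prod = np.clip(np.prod(np.delete(t, k)), -1 + 1e-12, 1 - 1e-12)
                L[c, v] = (-1) ** s[c] * 2.0 * np.arctanh(prod)
        # hard decision on the posterior LLRs and early stopping
        l_post = l0 + L.sum(axis=0)
        e_hat = (l_post < 0).astype(int)
        if ((H @ e_hat) % 2 == s).all():
            return e_hat, True, l_post
    return e_hat, False, l_post
```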

This syndrome-based BP decoder can be used for quantum codes as well. We first discuss the case of CSS codes, then the case of GF(4) codes.

I.5 discussion on quantum case

In CSS codes, one has $GH^T=0$. Say $H$ is the parity-check matrix; then the only difference from a classical code with parity-check matrix $H$ is that one needs to check whether the decoded vector is a trivial error, that is, whether it can be eliminated by rows of $G$. Hence, a CSS code can use the BP decoder directly, with a post check.

In a GF(4) code, the generator matrix $G=(A|B)$ satisfies $G\tilde{G}^T=AB^T+BA^T=0$, where $\tilde{G}=(B|A)$. Here, one can just decode a classical code with parity-check matrix $\tilde{G}$, then check whether the result is a combination of rows of $G$.
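In both cases, the post check asks whether the decoded error is a sum of rows of $G$ over GF(2). A minimal sketch via Gaussian elimination (function names are illustrative): the error is trivial iff appending it to $G$ does not increase the GF(2) rank.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = np.array(M, dtype=int) % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]           # move the pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                       # eliminate the column
        rank += 1
    return rank

def is_trivial_error(G, e):
    """e is a sum of rows of G over GF(2) iff appending it does not raise the rank."""
    return gf2_rank(np.vstack([G, e])) == gf2_rank(G)
```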

Note that, in both CSS codes and GF(4) codes, the correlations between X and Z errors are not considered. One way to take the correlations into account is as follows.

One can change the generator matrix from

G = (A\,|\,B), \qquad H = (B\,|\,A)

to

\tilde{G} = G\begin{pmatrix} I & 0 & I \\ 0 & I & I \end{pmatrix} = (A\,|\,B\,|\,A+B), \qquad \tilde{H} = \begin{pmatrix} B & A & 0 \\ I & I & I \end{pmatrix}

The error changes from $(e_X|e_Z)$ to $(e_X|e_Z|e_Y)$, which satisfies $(I|I|I)(e_X|e_Z|e_Y)^T=0$. (Any single error produces an even number of 1s in this vector. This $e_Y$ is not the Pauli Y error, but simply the superposition of X and Z, $e_Y = e_X + e_Z \bmod 2$.) The extra Y node encodes the information that X and Z errors tend to appear or disappear in pairs rather than alone.

With this construction, we can simply treat the new parity-check matrix as that of a classical binary code and use the basic BP decoder.
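A minimal sketch of the construction (names are illustrative; $A$ and $B$ are the two halves of $G=(A|B)$, each of shape $m \times n$):

```python
import numpy as np

def extended_check_matrix(A, B):
    """H~ = [[B, A, 0], [I, I, I]] acting on (eX | eZ | eY), all mod 2."""
    m, n = A.shape
    I = np.eye(n, dtype=int)
    Z = np.zeros((m, n), dtype=int)
    top = np.hstack([B, A, Z])         # reproduces the original syndrome of (eX | eZ)
    bottom = np.hstack([I, I, I])      # enforces eY = eX + eZ (mod 2)
    return np.vstack([top, bottom]) % 2

def extend_error(eX, eZ):
    """Map (eX | eZ) to (eX | eZ | eY) with eY = eX + eZ (mod 2)."""
    return np.concatenate([eX, eZ, (eX + eZ) % 2])
```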

I.6 variations of BP

I.6.1 Min-Sum

Finally, there are some optimizations of the BP decoder, including the normalized and offset min-sum decoders Chen et al. (2005). Ref. Panteleev and Kalachev (2019) says they always use the normalized offset min-sum decoder with normalization factor $\alpha=0.625$.

Here I use $L^{BP}, L^{MS}, L^{NORM}, L^{OFF}$ to denote the check-to-variable messages for BP, min-sum, normalized min-sum, and offset min-sum respectively. The relations between their signs and magnitudes are

\mathrm{sgn}(L_i^{BP}) = \mathrm{sgn}(L_i^{MS}) = (-1)^{s_c}\prod_{j=1}^{d}\mathrm{sgn}(l_j)
|L_i^{BP}| = 2\tanh^{-1}\prod_{j=1}^{d}\tanh(|l_j|/2)
|L_i^{MS}| = \min_{j\le d}|l_j|
|L_i^{NORM}| = \min_{j\le d}|l_j|/\alpha, \qquad \alpha>1
|L_i^{OFF}| = \max\!\left(\min_{j\le d}|l_j|-\beta,\ 0\right), \qquad \beta>0
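The three magnitude rules differ only in a one-line post-processing of the min-sum magnitude. A minimal sketch of the check-node update (names and the default $\alpha$, $\beta$ values are illustrative):

```python
import numpy as np

def min_sum_check(l, s_c, alpha=1.25, beta=0.1, variant="ms"):
    """Check update for plain ("ms"), normalized ("norm"), or offset ("off") min-sum."""
    l = np.asarray(l, dtype=float)            # incoming extrinsic LLRs on the check
    sign = (-1) ** s_c * np.prod(np.sign(l))
    mag = np.abs(l).min()
    if variant == "norm":
        mag /= alpha                          # alpha > 1 in the convention above
    elif variant == "off":
        mag = max(mag - beta, 0.0)            # beta > 0
    return sign * mag
```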

I.6.2 layered scheduling for the update rule

Ref. Panteleev and Kalachev (2019) claim they used layered scheduling, which helped to eliminate the oscillating errors caused by trapping sets. The criterion for choosing the schedule is not yet clear to me.
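For concreteness, here is a sketch of what one layered iteration could look like on top of the flooding decoder of section I.4. This is only a guess at the general idea, not the specific schedule of the paper: the layers partition the check nodes, and each layer's tanh-rule update immediately sees the messages refreshed by previous layers.

```python
import numpy as np

def layered_iteration(H, s, l0, L, layers):
    """One sweep over the checks, layer by layer, updating L in place."""
    for layer in layers:                      # layers: a partition of the check indices
        for c in layer:
            vs = np.flatnonzero(H[c])
            # extrinsic variable -> check messages, using the *latest* L
            l_msg = l0[vs] + L[:, vs].sum(axis=0) - L[c, vs]
            t = np.tanh(l_msg / 2.0)
            for k, v in enumerate(vs):
                prod = np.clip(np.prod(np.delete(t, k)), -1 + 1e-12, 1 - 1e-12)
                L[c, v] = (-1) ** s[c] * 2.0 * np.arctanh(prod)
    return L
```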

I.6.3 enhanced feedback

Ref. Wang et al. (2012) developed an optimization called the enhanced-feedback iterative BP decoder. In the second round of BP decoding, they locate the frustrated checks and some common qubits connected to them, then use the output probabilities to replace the input probabilities for those qubits. This approach is very similar to what I tried (in the codeword-based, LLR-simplified BP decoder for toric codes). The difference is that I simply use the output probability (the LLR vector) to replace the input probability for all qubits. I saw it fix all double errors on large toric codes (about 35x35), but it gave only a tiny improvement in the numerics at small sizes (5, 7, 9, 11, 13). I am not sure about the reason at small sizes; there may be a bug in the program as well.
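A sketch of the all-qubit feedback variant described above, assuming the bp_syndrome_decode sketch from section I.4 is in scope (Wang et al. instead replace the priors only on selected qubits adjacent to frustrated checks):

```python
import numpy as np

def bp_with_feedback(H, s, l0, rounds=3):
    """If a BP round fails, feed its posterior LLRs back as the next round's priors."""
    prior = np.asarray(l0, dtype=float)
    for _ in range(rounds):
        e_hat, ok, l_post = bp_syndrome_decode(H, s, prior)
        if ok:
            return e_hat, True
        prior = l_post                # replace the input LLRs for *all* qubits
    return e_hat, False
```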

References

  • [1] J. Chen, R. M. Tanner, C. Jones, and Y. Li (2005) Improved min-sum decoding algorithms for irregular LDPC codes. In Proceedings. International Symposium on Information Theory (ISIT 2005), pp. 449–453.
  • [2] Y. Liu and D. Poulin (2018) Neural belief-propagation decoders for quantum error-correcting codes. arXiv preprint arXiv:1811.07835.
  • [3] D. J. C. MacKay (2003) Information theory, inference and learning algorithms. Cambridge University Press.
  • [4] P. Panteleev and G. Kalachev (2019) Degenerate quantum LDPC codes with good finite length performance. arXiv preprint arXiv:1904.02703.
  • [5] Y. Wang, B. C. Sanders, B. Bai, and X. Wang (2012) Enhanced feedback iterative decoding of sparse quantum codes. IEEE Transactions on Information Theory 58 (2), pp. 1231–1241.