Note on Belief Propagation
Weilei Zeng
(Dated: January 21, 2020)
This work gives a detailed description of the Belief Propagation (BP) algorithm and several of its variations.
I. DESCRIPTION OF BINARY BP DECODER
The Belief Propagation (BP) algorithm approximates the marginal probability of errors, given the received bits or the syndrome. It is designed for factor graphs. When the graph is a tree, it gives the exact marginal probability. There are several equivalent or approximate variants, described below.
A. codeword-based
BP decoder for classical binary codes:
A code is defined by a factor graph (with assumed degree $d+1$) with variable nodes $V = \{v_i\}$ and check nodes $C = \{c_i\}$. The message from variable to check is
$$u_{vc}(x) = m(x) = P(y_v \mid x) \prod_{j=1}^{d} M_j(x), \quad \text{where } \{c_1, \ldots, c_d\} = N(v)\setminus c, \ x \in \{0,1\}$$
The message from check to variable is
$$u_{cv}(x) = M(x) = \sum_{x_1,\ldots,x_d} \delta(h_c x^T = 0) \prod_{j=1}^{d} m_j(x_j), \quad \text{where } \{v_1, \ldots, v_d\} = N(c)\setminus v$$
The initial condition is $M_j(x) = 1$, $x \in \{0,1\}$.
(To generalize to the nonbinary case, one only needs to modify the parity-check condition from matrix multiplication to the symplectic product. Everything else extends smoothly.)
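As a concrete illustration, the two message rules above can be sketched as follows. This is a minimal sketch, not the note's own implementation: the function names are mine, and the parity check is assumed to act as an all-ones row on its neighborhood (the usual situation for a sparse binary check).

```python
# Sketch of the probability-domain BP messages for one edge of the factor graph.
from itertools import product

def variable_to_check(prior_v, M_in, x):
    """u_vc(x) = P(y_v|x) * prod_j M_j(x), with messages M_in from N(v) \\ c."""
    out = prior_v[x]
    for M in M_in:
        out *= M[x]
    return out

def check_to_variable(m_in, x):
    """u_cv(x) = sum over x_1..x_d of delta(h_c x^T = 0) * prod_j m_j(x_j)."""
    d = len(m_in)  # messages from N(c) \ v
    total = 0.0
    for xs in product([0, 1], repeat=d):
        if (sum(xs) + x) % 2 == 0:  # parity condition h_c x^T = 0
            p = 1.0
            for m_j, x_j in zip(m_in, xs):
                p *= m_j[x_j]
            total += p
    return total
```

The brute-force sum over neighbor assignments is exponential in the check degree $d$; practical decoders use the LLR simplification of the next subsection instead.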
B. codeword-based, LLR-simpliﬁed
By rewriting the probabilities and messages in terms of the log-likelihood ratios $l_i = \log(m_i(0)/m_i(1))$ and $L_i = \log(M_i(0)/M_i(1))$, one gets the simplified messages:
$$l_i = l_i^{(0)} + \sum_{j=1}^{d} L_j$$
$$L_i = 2 \tanh^{-1} \prod_{j=1}^{d} \tanh(l_j/2)$$
$$l_i^{(0)} = \log\left(P(x_i = 0 \mid y_i)/P(x_i = 1 \mid y_i)\right), \quad y_i \in \{0,1\}$$
Depending on whether $y_i = 0$ or $1$, the sign of $l_i^{(0)}$ is flipped.
In this simplified form, only one message needs to be sent per edge, instead of two messages for $x = 0$ and $x = 1$ respectively. Hence, the complexity is reduced.
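The two LLR update rules can be written as small helper functions. This is a sketch under my own naming, assuming numpy; it is not code from the note.

```python
# LLR-simplified BP messages: one real number per edge.
import numpy as np

def check_message(l_in):
    """L = 2 * atanh( prod_j tanh(l_j / 2) ) over incoming variable LLRs."""
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(l_in) / 2.0)))

def variable_message(l0, L_in):
    """l = l^(0) + sum_j L_j over incoming check LLRs."""
    return l0 + float(np.sum(L_in))
```

With a single incoming message, the check rule is the identity ($2\tanh^{-1}\tanh(l/2) = l$), which is a quick sanity check on the formula.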
C. syndrome-based
In the classical case, the received bits are used instead of the syndrome. In the quantum case, there are no received bits, only the syndrome. Hence, to get the corresponding formulas in the quantum case, one needs to change it to a syndrome-based decoder.
Ref. [MMK03] (Chapter 47.2) shows that the codeword-based BP decoder is equivalent to the following syndrome-based decoder:
$$u_{vc}(x) = m(x) = P(x) \prod_{j=1}^{d} M_j(x), \quad \text{where } \{c_1, \ldots, c_d\} = N(v)\setminus c$$
$$u_{cv}(x) = M(x) = \sum_{x_1,\ldots,x_d} \delta(h_c x^T = s_c) \prod_{j=1}^{d} m_j(x_j), \quad \text{where } \{v_1, \ldots, v_d\} = N(c)\setminus v$$
The initial condition is still $M_j(x) = 1$, $x \in \{0,1\}$.
The logic is that the codeword view calculates $P(x|y)$ (the most likely input codeword $x$ given the received vector $y$), while the syndrome view calculates $P(e|s)$ (the most likely error $e$ given the syndrome $s$). Here $x$ should be a valid zero-syndrome input codeword, $y$ is the received vector, and $e$ is an error vector matching syndrome $s$. Intuitively, these two marginal probabilities describe the same event and thus should lead to the same result; mathematically, one has to write it out carefully and show they are isomorphic.
D. syndrome-based, LLR simpliﬁed
In a similar fashion, one can write the above equations in log-likelihood-ratio form and reach the following simple formulas [LP18]:
$$l_i = l_i^{(0)} + \sum_{j=1}^{d} L_j$$
$$L_i = (-1)^{s_i}\, 2 \tanh^{-1} \prod_{j=1}^{d} \tanh(l_j/2)$$
$$l_i^{(0)} = \log\left(P(x_i = 0)/P(x_i = 1)\right) = \text{const}$$
The posterior log-likelihood ratio can be estimated as $l_i = l_i^{(0)} + \sum_{j=1}^{d+1} L_j$.
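Putting the syndrome-based LLR rules together gives a complete flooding-schedule decoder. The following is a minimal sketch under my own naming, assuming numpy; it follows the formulas above but is not the implementation of [LP18].

```python
# Syndrome-based LLR BP decoder (flooding schedule).
import numpy as np

def syndrome_bp_decode(H, syndrome, l0, max_iter=50):
    """Decode an error estimate e_hat from a syndrome and prior LLRs l0."""
    m, n = H.shape
    edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
    V2C = {e: float(l0[e[1]]) for e in edges}  # variable-to-check messages
    C2V = {e: 0.0 for e in edges}              # check-to-variable messages
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # check update: L = (-1)^s_c * 2 atanh( prod_j tanh(l_j / 2) )
        for (c, v) in edges:
            prod = 1.0
            for v2 in range(n):
                if H[c, v2] and v2 != v:
                    prod *= np.tanh(V2C[(c, v2)] / 2.0)
            prod = min(max(prod, -0.999999), 0.999999)  # keep atanh finite
            C2V[(c, v)] = (-1) ** syndrome[c] * 2.0 * np.arctanh(prod)
        # posterior LLR and extrinsic variable update
        posterior = np.asarray(l0, dtype=float).copy()
        for (c, v) in edges:
            posterior[v] += C2V[(c, v)]
        for (c, v) in edges:
            V2C[(c, v)] = posterior[v] - C2V[(c, v)]
        e_hat = (posterior < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syndrome):
            return e_hat, True
    return e_hat, False
```

On a small tree-shaped code, e.g. the length-3 repetition-code check matrix $H = \begin{pmatrix}1&1&0\\0&1&1\end{pmatrix}$ with syndrome $(1,1)$ and priors favoring no error, the decoder converges to the single-error pattern $(0,1,0)$ in one iteration, as expected for a tree.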
This syndrome-based BP decoder can be used for quantum codes as well. We first discuss the case of CSS codes, then the case of GF(4) codes.
E. discussion on quantum case
In CSS codes, one has $GH^T = 0$. Say $H$ is the parity-check matrix; then the only difference from a classical code with parity-check matrix $H$ is that one needs to check whether the decoded vector is a trivial error, that is, whether it can be eliminated by rows of $G$. Hence, CSS codes can use the BP decoder directly, with a post check.
For a GF(4) code, the generator matrix $G = (A|B)$ satisfies $G\tilde{G}^T = AB^T + BA^T = 0$, where $\tilde{G} = (B|A)$. Here, one can just decode a classical code with parity-check matrix $\tilde{G}$, then check whether the result is a combination of rows of $G$.
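The post check in both cases is a GF(2) row-space membership test: is the decoded vector a combination of rows of $G$? One way to sketch it is via Gaussian elimination over GF(2); the function names here are mine, assuming numpy.

```python
# Post check: is vector e in the GF(2) row space of G?
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]              # eliminate the column
        rank += 1
    return rank

def in_rowspace_gf2(G, e):
    """True iff e is a GF(2) linear combination of the rows of G (trivial error)."""
    A = np.vstack([G % 2, e % 2]).astype(int)
    return gf2_rank(A) == gf2_rank(G % 2)
```

Appending $e$ to $G$ leaves the rank unchanged exactly when $e$ already lies in the row space, so the decoded error is trivial iff the two ranks agree.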
Note that, in both CSS codes and GF(4) codes, the correlations between $X$ and $Z$ errors are not considered. One way to take the correlations into account is as follows. One can change the generator and parity-check matrices from
$$G = (A|B), \quad H = (B|A)$$
to
$$\tilde{G} = G \begin{pmatrix} I & 0 & I \\ 0 & I & I \end{pmatrix} = (A|B|A+B), \quad \tilde{H} = \begin{pmatrix} B & A & 0 \\ I & I & I \end{pmatrix}$$
The error changes from $(e_X|e_Z)$ to $(e_X|e_Z|e_Y)$, which satisfies $(I|I|I)(e_X|e_Z|e_Y)^T = 0$. (Any single error will produce an even number of 1s in the vector. This $e_Y$ is not the Pauli $Y$ error, but simply a superposition of $X$ and $Z$: $e_Y = e_X + e_Z \bmod 2$.) This extra $Y$ node contains the information that $X$ and $Z$ errors tend to appear or disappear in pairs rather than alone. In this construction, we can just take the new parity-check matrix as a classical binary code and use the basic BP decoder.
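The construction can be checked numerically on a toy example. This sketch assumes the top block row of $\tilde{H}$ is $(B|A|0)$, i.e. a zero block acting on $e_Y$; all names are illustrative.

```python
# Build G~ = (A|B|A+B) and H~ = [[B, A, 0], [I, I, I]] from blocks A, B.
import numpy as np

def extended_matrices(A, B):
    n = A.shape[1]
    G_ext = np.hstack([A, B, (A + B) % 2])
    H_top = np.hstack([B, A, np.zeros((B.shape[0], n), dtype=int)])
    H_bot = np.hstack([np.eye(n, dtype=int)] * 3)  # the (I|I|I) constraint
    return G_ext, np.vstack([H_top, H_bot])

# toy example with two qubits and one stabilizer
A = np.array([[1, 0]])
B = np.array([[0, 1]])
G_ext, H_ext = extended_matrices(A, B)

# extend an error (e_X|e_Z) with e_Y = e_X + e_Z mod 2
eX, eZ = np.array([1, 0]), np.array([1, 1])
e_ext = np.concatenate([eX, eZ, (eX + eZ) % 2])
s_ext = H_ext @ e_ext % 2
# the first entry reproduces the usual syndrome (B|A)(e_X|e_Z)^T;
# the (I|I|I) rows vanish because e_Y = e_X + e_Z mod 2
```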
F. variations of BP
1. Min-Sum
Finally, there are some optimizations of the BP decoder, including the normalized and offset min-sum decoders [CTJL05]. Ref. [PK19] reports always using the normalized min-sum decoder with normalization factor $\alpha = 0.625$.
Here I use $L^{BP}$, $L^{MS}$, $L^{NORM}$, $L^{OFF}$ to denote the check-to-variable messages for BP, min-sum, normalized min-sum, and offset min-sum respectively. The relations between their signs and magnitudes are
$$\text{sgn}(L_i^{BP}) = \text{sgn}(L_i^{MS}) = (-1)^{s_c} \prod_{j=1}^{d} \text{sgn}(l_j)$$
$$|L_i^{BP}| = 2 \tanh^{-1} \prod_{j=1}^{d} \tanh(|l_j|/2)$$
$$|L_i^{MS}| = \min_{j=1,\ldots,d} |l_j|$$
$$|L_i^{NORM}| = \min_{j=1,\ldots,d} |l_j| / \alpha, \quad \alpha > 1$$
$$|L_i^{OFF}| = \max\left(\min_{j=1,\ldots,d} |l_j| - \beta,\ 0\right), \quad \beta > 0$$
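The four magnitude rules can be compared side by side. A sketch assuming numpy; the $\alpha$ and $\beta$ values below are illustrative defaults, not the tuned values from the references.

```python
# Check-node magnitude rules for BP and its min-sum variants.
import numpy as np

def mag_bp(l):                 # |L^BP|
    return 2.0 * np.arctanh(np.prod(np.tanh(np.abs(l) / 2.0)))

def mag_ms(l):                 # |L^MS|: min-sum approximation
    return float(np.min(np.abs(l)))

def mag_norm(l, alpha=1.25):   # |L^NORM|, alpha > 1
    return mag_ms(l) / alpha

def mag_off(l, beta=0.1):      # |L^OFF|, beta > 0
    return max(mag_ms(l) - beta, 0.0)
```

Since $\tanh(|l_j|/2) < 1$, min-sum always overestimates the BP magnitude; the normalization by $\alpha$ and the offset $\beta$ are two ways of shrinking it back toward the BP value.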
2. layered scheduling for updating rule
Ref. [PK19] claims they used layered scheduling, which helped eliminate the oscillating errors caused by trapping sets. The criteria for how to choose the schedule are not yet clear to me.
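For concreteness, one common way a layered (serial) schedule is organized is sketched below: checks are processed one at a time and the posterior LLRs are refreshed immediately, instead of the flooding schedule's simultaneous update. This is a generic illustration assuming numpy, not the exact schedule of [PK19].

```python
# One layered (serial) sweep of syndrome-based LLR BP.
import numpy as np

def layered_bp_iteration(H, syndrome, posterior, C2V):
    """Per check: peel off the old message, recompute, add the new one back."""
    m, n = H.shape
    for c in range(m):
        vs = [v for v in range(n) if H[c, v]]
        # variable-to-check inputs with this check's old message removed
        l_in = {v: posterior[v] - C2V.get((c, v), 0.0) for v in vs}
        for v in vs:
            prod = 1.0
            for v2 in vs:
                if v2 != v:
                    prod *= np.tanh(l_in[v2] / 2.0)
            prod = min(max(prod, -0.999999), 0.999999)
            new_msg = (-1) ** syndrome[c] * 2.0 * np.arctanh(prod)
            posterior[v] = l_in[v] + new_msg   # refreshed immediately
            C2V[(c, v)] = new_msg
    return posterior, C2V
```

Because later checks in the sweep already see the messages produced by earlier ones, layered scheduling typically converges in fewer iterations than flooding; on the toy length-3 chain it reaches the single-error solution in one sweep.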
3. enhanced feedback
Ref. [WSBW12] developed an optimization called the enhanced feedback iterative BP decoder. In the second round of BP decoding, they locate the frustrated checks and some common qubits connected with them, then use the output probabilities to replace the input probabilities for those qubits. This approach is very similar to what I tried (in the codeword-based, LLR-simplified BP decoder for toric codes). The difference is that I simply use the output probabilities (the LLR vector) to replace the input probabilities for all qubits. I saw it fix all double errors on large toric codes (about 35x35), but it gave only a tiny improvement in the numerics at small sizes (5, 7, 9, 11, 13). I am not sure about the reason at small sizes; there may be a bug in the program as well.
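The feedback step can be sketched as a prior-update helper: given a vector marking the unsatisfied (frustrated) checks after a failed round, replace the priors of the touched qubits with the previous round's posteriors and decode again. All names here are mine, assuming numpy; [WSBW12] uses a more selective choice of qubits than "every qubit touching a frustrated check", and the `all_qubits` flag is the simpler variant described above.

```python
# Build new prior LLRs for a second BP round from a failed first round.
import numpy as np

def feedback_priors(H, unsatisfied, l0, posterior, all_qubits=False):
    """unsatisfied: binary vector marking frustrated checks after round one."""
    if all_qubits:
        return posterior.copy()                # the simpler all-qubit variant
    l0_new = np.asarray(l0, dtype=float).copy()
    checks = np.flatnonzero(unsatisfied)       # frustrated checks
    qubits = np.flatnonzero(H[checks].sum(axis=0))
    l0_new[qubits] = posterior[qubits]         # feed posteriors back as priors
    return l0_new
```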
[CTJL05] Jinghu Chen, R. Michael Tanner, Christopher Jones, and Yan Li. Improved min-sum decoding algorithms for irregular LDPC codes. In Proceedings of the International Symposium on Information Theory (ISIT 2005), pages 449-453. IEEE, 2005.
[Fil09] Tom Filler. Simplification of the belief propagation algorithm. Online lecture note, 2009.
[FT10] Bill Freeman and Antonio Torralba. Lecture 7: Graphical models and belief propagation, February 2010. MIT EECS course 6.869.
[LP18] Ye-Hua Liu and David Poulin. Neural belief-propagation decoders for quantum error-correcting codes. arXiv preprint arXiv:1811.07835, 2018.
[MMK03] David J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[PK19] Pavel Panteleev and Gleb Kalachev. Degenerate quantum LDPC codes with good finite length performance. arXiv preprint arXiv:1904.02703, 2019.
[ROJ19] Alex Rigby, J. C. Olivier, and Peter Jarvis. Modified belief propagation decoders for quantum low-density parity-check codes. Physical Review A, 100(1):012330, 2019.
[WSBW12] Yun-Jiang Wang, Barry C. Sanders, Bao-Ming Bai, and Xin-Mei Wang. Enhanced feedback iterative decoding of sparse quantum codes. IEEE Transactions on Information Theory, 58(2):1231-1241, 2012.