Journal articles

Neural network approaches to point lattice decoding

Abstract: We characterize the complexity of the lattice decoding problem from a neural network perspective. The notion of Voronoi-reduced basis is introduced to restrict the space of solutions to a binary set. On the one hand, this problem is shown to be equivalent to computing a continuous piecewise linear (CPWL) function restricted to the fundamental parallelotope. On the other hand, it is known that any function computed by a ReLU feed-forward neural network is CPWL. As a result, we count the number of affine pieces in the CPWL decoding function to characterize the complexity of the decoding problem. This number is exponential in the space dimension n, which induces shallow neural networks of exponential size. For structured lattices we show that folding, a technique equivalent to using a deep neural network, reduces this complexity from exponential in n to polynomial in n. Regarding unstructured MIMO lattices, contrary to dense lattices, many pieces in the CPWL decoding function can be neglected for quasi-optimal decoding on the Gaussian channel. This makes the decoding problem easier and explains why shallow neural networks of reasonable size are more efficient with this category of lattices (in low to moderate dimensions).
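For illustration, the folding idea can be made concrete with a standard depth-versus-width construction (a minimal one-dimensional sketch, not code from the paper): composing a two-neuron ReLU "hat" map k times produces a CPWL function with 2^k linear pieces, whereas a single-hidden-layer ReLU network needs roughly one neuron per piece to compute it exactly.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # One "fold": a two-neuron ReLU layer computing the hat map
    # h(x) = 2x on [0, 1/2] and 2 - 2x on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def folded(x, depth):
    # Composing the hat map `depth` times (a deep network with
    # 2 * depth ReLU units) gives a CPWL function with 2**depth
    # linear pieces on [0, 1].
    for _ in range(depth):
        x = hat(x)
    return x

# Count the linear pieces on a dyadic grid so that all breakpoints
# fall exactly on grid points.
xs = np.linspace(0.0, 1.0, 2**10 + 1)
slopes = np.diff(folded(xs, depth=4)) / np.diff(xs)
pieces = 1 + np.count_nonzero(np.abs(np.diff(slopes)) > 1e-6)
print(pieces)  # -> 16 pieces from only 8 ReLU units

This is only a one-dimensional analogue of the folding argument summarized in the abstract, where the same depth-for-width trade-off brings the decoding complexity of structured lattices from exponential to polynomial in n.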

https://hal.archives-ouvertes.fr/hal-03700941
Contributor: Philippe Ciblat
Submitted on: Tuesday, June 21, 2022 - 3:16:01 PM
Last modification on: Sunday, June 26, 2022 - 12:30:35 PM

File

IT_2022_NN_vfinale.pdf
Files produced by the author(s)

Citation

Vincent Corlay, Joseph J. Boutros, Philippe Ciblat, Loïc Brunel. Neural network approaches to point lattice decoding. IEEE Transactions on Information Theory, 2022, 68 (5). ⟨10.1109/TIT.2022.3147834⟩. ⟨hal-03700941⟩
