Click on a publication for abstract and download options.
Most algorithms proposed in my papers have been implemented as proof-of-concept implementations. Some of these have matured and been merged into SageMath; others are available in individual repositories linked under the respective article. For papers before 2015, the algorithm is likely in Codinglib, my now-deprecated Sage library for algebraic coding.
See also my Google Scholar profile.
We propose the first non-trivial generic decoding algorithm for codes in the sum-rank metric. The new method combines ideas of well-known generic decoders in the Hamming and rank metric. For the same code parameters and number of errors, the new generic decoder has a larger expected complexity than the known generic decoders for the Hamming metric and smaller than the known rank-metric decoders.
Furthermore, we give a formal hardness reduction, providing evidence that generic sum-rank decoding is computationally hard.
As a by-product of the above, we solve some fundamental coding problems in the sum-rank metric: we give an algorithm to compute the exact size of a sphere of a given sum-rank radius, and also give an upper bound as a closed formula; and we study erasure decoding with respect to two different notions of support.
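To make the sphere-size computation concrete, here is a short sketch under stated assumptions: it uses the classical count of \(m \times n\) matrices over \(\mathbb F_q\) of rank exactly \(r\), and obtains the size of a sum-rank sphere with \(\ell\) equal-sized blocks by convolving the per-block rank counts. Function names and the equal-block-size restriction are illustrative; this is not the paper's exact algorithm or closed formula.

```python
def num_rank_r(q, m, n, r):
    """Number of m x n matrices over F_q of rank exactly r (classical formula)."""
    if r > min(m, n):
        return 0
    num = den = 1
    for i in range(r):
        num *= (q**n - q**i) * (q**m - q**i)
        den *= q**r - q**i
    return num // den  # the division is always exact

def sum_rank_sphere(q, m, n, ell, t):
    """Number of ell-tuples of m x n matrices over F_q whose ranks sum to t,
    i.e. the size of the sum-rank sphere of radius t (equal block sizes)."""
    counts = [num_rank_r(q, m, n, r) for r in range(min(m, n) + 1)]
    sphere = [1] + [0] * t          # rank-sum distribution after 0 blocks
    for _ in range(ell):            # convolve in the per-block distribution
        sphere = [sum(counts[r] * sphere[s - r]
                      for r in range(min(len(counts) - 1, s) + 1))
                  for s in range(t + 1)]
    return sphere[t]
```

As a sanity check, summing the sphere sizes over all radii recovers \(q^{mn\ell}\), the total number of tuples.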
Interleaved Reed-Solomon codes admit efficient decoding algorithms which correct burst errors far beyond half the minimum distance in the random errors regime, e.g., by computing a common solution to the Key Equation for each Reed-Solomon code, as described by Schmidt et al. This decoder may either fail to return a codeword, or it may miscorrect to an incorrect codeword, and good upper bounds on the fraction of error matrices for which these events occur are known. The decoding algorithm immediately applies to interleaved alternant codes as well, i.e., the subfield subcodes of interleaved Reed-Solomon codes, but the fraction of decodable error matrices differs, since the error is now restricted to a subfield. In this paper, we present new general lower and upper bounds on the fraction of error matrices decodable by Schmidt et al.’s decoding algorithm, thereby making it the only decoding algorithm for interleaved alternant codes for which such bounds are known.
We speed up existing decoding algorithms for three code classes in different metrics: interleaved Gabidulin codes in the rank metric, lifted interleaved Gabidulin codes in the subspace metric, and linearized Reed–Solomon codes in the sum-rank metric. The speed-ups are achieved by reducing the core of the underlying computational problems of the decoders to one common tool: computing left and right approximant bases of matrices over skew polynomial rings. To accomplish this, we describe a skew-analogue of the existing PM-Basis algorithm for matrices over usual polynomials. This captures the bulk of the work in multiplication of skew polynomials, and the complexity benefit comes from existing algorithms performing this faster than in classical quadratic complexity. The new faster algorithms for the various decoding-related computational problems are interesting in their own right and have further applications, in particular in parts of decoders for several other code classes and in foundational problems related to the remainder-evaluation of skew polynomials.
We investigate algorithms for encoding of one-point algebraic geometry (AG) codes over certain plane curves called \(C_{ab}\) curves, as well as algorithms for inverting the encoding map, which we call “unencoding”. Some \(C_{ab}\) curves have many points or are even maximal, e.g. the Hermitian curve.
Our encoding and unencoding algorithms have complexity \(O^{\sim}(n^{3/2})\) and \(O^{\sim}(qn)\), respectively, for AG codes over any \(C_{ab}\) curve satisfying very mild assumptions, where \(n\) is the code length, \(q\) is the base field size, and \(O^{\sim}\) ignores constants and logarithmic factors in the estimate.
For codes over curves whose evaluation points lie on a grid-like structure, notably the Hermitian curve and norm-trace curves, we show that our algorithms have quasi-linear time complexity \(O^{\sim}(n)\) for both operations. For infinite families of curves whose number of points is a constant factor away from the Hasse–Weil bound, our encoding algorithm has complexity \(O^{\sim}(n^{5/4})\) while unencoding has \(O^{\sim}(n^{3/2})\).
We describe how to solve simultaneous Hermite Padé approximations (sometimes known simply as Hermite Padé approximations) over a polynomial ring \(\mathbb{K}[x]\) for a field \(\mathbb{K}\) using \(O^{\sim}(n^{\omega - 1} t d)\) operations in \(\mathbb{K}\), where \(d\) is the sought precision and \(n\) is the number of simultaneous approximations using \(t<n\) polynomials. We develop two algorithms using different approaches. Both algorithms return a reduced sub-basis that generates the complete set of solutions to the input approximation problem that satisfy the given degree constraints. Previously, the cost \(O^{\sim}(n^{\omega-1} t d)\) had only been reached by algorithms finding a single solution for the case \(t < n\). Our results are made possible by recent breakthroughs in fast computation of minimal approximant bases and Hermite Padé approximations for the case \(t \geq n\).
We propose a new partial decoding algorithm for \(h\)-interleaved one-point Hermitian codes that can decode—under certain assumptions—an error of relative weight up to \(1-(\tfrac{k+g}{n})^{\frac{h}{h+1}}\), where \(k\) is the dimension, \(n\) the length, and \(g\) the genus of the code. Simulation results for various parameters indicate that the new decoder achieves this maximal decoding radius with high probability. The algorithm is based on a recent generalization of Rosenkilde’s improved power decoder to interleaved Reed–Solomon codes, does not require an expensive root-finding step, and improves upon the previous best decoding radius by Kampf at all rates. In the special case \(h=1\), we obtain an adaption of the improved power decoding algorithm to one-point Hermitian codes, which for all simulated parameters achieves a similar observed failure probability as the Guruswami–Sudan decoder above the latter’s guaranteed decoding radius.
We design and analyze new protocols to verify the correctness of various computations on matrices over the ring \(\mathbb F[x]\) of univariate polynomials over a field \(\mathbb F\). For the sake of efficiency, and because many of the properties we verify are specific to matrices over a principal ideal domain, we cannot simply rely on previously developed linear algebra protocols for matrices over a field. Our protocols are interactive, often randomized, and feature a constant number of rounds of communication between the Prover and Verifier. We seek to minimize the communication cost so that the amount of data sent during the protocol is significantly smaller than the size of the result being verified, which can be useful when combining protocols or in some multi-party settings. The main tools we use are reductions to existing linear algebra verification protocols and a new protocol to verify that a given vector is in the \(\mathbb F[x]\)-row space of a given matrix.
Power decoding, or “decoding using virtual interleaving”, is a technique for decoding Reed–Solomon codes up to the Sudan radius. Since the method’s inception, it has been an open question whether it is possible to use this approach to decode up to the Johnson radius – the decoding radius of the Guruswami–Sudan algorithm. In this paper we show that this can be done by incorporating a notion of multiplicities. Like the original Power decoding, the proposed algorithm is a one-pass algorithm: decoding follows immediately from solving a shift-register type equation, which we show can be done in quasi-linear time. It is a “partial bounded-distance decoding algorithm”, since it will fail to return a codeword for a few error patterns within its decoding radius; we investigate its failure behaviour theoretically as well as give simulation results.
This is an extended version where we also show how the method can be made faster using the re-encoding technique or a syndrome formulation.
We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds. We find several new improvements upon the MinT minimum distance tables.
For many algebraic codes the main part of decoding can be reduced to row reduction of a module basis, enabling general, flexible and highly efficient algorithms. Inspired by this, we develop an approach for transforming matrices over skew polynomial rings into certain normal forms. We apply this to solve generalised shift register problems, or Padé approximations, over skew polynomial rings, which occur in error and erasure decoding of \(\ell\)-Interleaved Gabidulin codes. We obtain an algorithm with complexity \(O(\ell \mu^2)\), where \(\mu\) measures the size of the input problem. Further, we show how to list-\(\ell\) decode Mahdavifar–Vardy subspace codes in \(O(\ell r^2 m^2)\) time, where \(m\) is a parameter proportional to the dimension of the codewords’ ambient space and \(r\) is the dimension of the received subspace.
We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realisation of the Guruswami–Sudan algorithm by using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimisation. The second is a Power decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimisation algorithms from computer algebra, yielding similar asymptotic complexities.
An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich’s module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list of closest codewords within an intermediate radius can be performed with improved average-case complexity while retaining the worst-case complexity.
We provide a detailed description of the module minimisation, reanalysing the Mulders–Storjohann algorithm and drawing new connections to both Alekhnovich’s algorithm and Lee–O’Sullivan’s. Furthermore, we show how to incorporate the re-encoding technique of Kötter and Vardy into our iterative algorithm.
We derive the Wu list-decoding algorithm for Generalised Reed-Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm (EA) as the initial algorithm instead of the Berlekamp-Massey algorithm (BMA). We present a novel method for constructing the interpolation polynomial fast. We give a new application of the Wu list decoder by decoding irreducible binary Goppa codes up to the binary Johnson radius. Finally, we point out a connection between the governing equations of the Wu algorithm and the Guruswami-Sudan algorithm (GSA), immediately leading to equality in the decoding range and a duality in the choice of parameters needed for decoding, both in the case of GRS codes and in the case of Goppa codes.
Linearized Reed–Solomon (LRS) codes are sum-rank metric codes that fulfill the Singleton bound with equality. In the two extreme cases of the sum-rank metric, they coincide with Reed–Solomon codes (Hamming metric) and Gabidulin codes (rank metric). List decoding in these extreme cases is well-studied, and the two code classes behave very differently in terms of list size, but nothing is known for the general case. In this paper, we derive a lower bound on the list size for LRS codes, which is, for a large class of LRS codes, exponential directly above the Johnson radius. Furthermore, we show that some families of linearized Reed–Solomon codes with constant numbers of blocks cannot be list decoded beyond the unique decoding radius.
Suppose \(\mathbb{K}\) is a large enough field and \(\mathcal{P} \subset \mathbb{K}^2\) is a fixed, generic set of points which is available for precomputation. We introduce a technique called reshaping which allows us to design quasi-linear algorithms for both: computing the evaluations of an input polynomial \(f \in \mathbb{K}[x,y]\) at all points of \(\mathcal{P}\); and computing an interpolant \(f \in \mathbb{K}[x,y]\) which takes prescribed values on \(\mathcal{P}\) and satisfies an input \(y\)-degree bound. Our genericity assumption is explicit, and we prove that it holds for most point sets over a large enough field. If \(\mathcal{P}\) violates the assumption, our algorithms still work and the performance degrades smoothly according to a distance from being generic. To show that the reshaping technique may have an impact on other related problems, we apply it to modular composition: suppose generic polynomials \(M \in \mathbb{K}[x]\) and \(A \in \mathbb{K}[x]\) are available for precomputation; then, given an input \(f \in \mathbb{K}[x,y]\), we show how to compute \(f(x, A(x)) \bmod M(x)\) in quasi-linear time.
We propose the first non-trivial generic decoding algorithm for codes in the sum-rank metric. The new method combines ideas of well-known generic decoders in the Hamming and rank metric. For the same code parameters and number of errors, the new generic decoder has a larger expected complexity than the known generic decoders for the Hamming metric and smaller than the known rank-metric decoders.
We show that the root-finding step in interpolation-based decoding of interleaved Gabidulin codes can be solved by finding a so-called minimal approximant basis of a matrix over a linearized polynomial ring. Based on existing fast algorithms for computing such bases over ordinary polynomial rings, we develop fast algorithms for computing them over linearized polynomials. As a result, root finding costs \(O^\sim(\ell^{\omega}\mathcal{M}(n))\) operations in \(\mathbb F_{q^m}\), where \(\ell\) is the interleaving degree, \(n\) the code length, \(\mathbb F_{q^m}\) the base field of the code, \(2 \leq \omega \leq 3\) the matrix multiplication exponent, and \(\mathcal{M}(n) \in O(n^{1.635})\) is the complexity of multiplying two linearized polynomials of degree at most \(n\). This is an asymptotic improvement upon the previously fastest algorithm of complexity \(O(\ell^3 n^2)\), in some cases \(O(\ell^2 n^2)\).
We consider the computation of two normal forms for matrices over the univariate polynomials: the Popov form and the Hermite form. For matrices which are square and nonsingular, deterministic algorithms with satisfactory cost bounds are known. Here, we present deterministic, fast algorithms for rectangular input matrices. The obtained cost bound for the Popov form matches the previous best known randomized algorithm, while the cost bound for the Hermite form improves on the previous best known ones by a factor which is at least the largest dimension of the input matrix.
We present a generalisation of Twisted Reed–Solomon codes containing a new large class of MDS codes. We prove that the code class contains a large subfamily that is closed under duality. Furthermore, we study the Schur squares of the new codes and show that their dimension is often large. Using these structural properties, we single out a subfamily of the new codes which could be considered for code-based cryptography: we show that these codes resist existing structural attacks for Reed–Solomon-like codes, i.e. methods for retrieving the code parameters from an obfuscated generator matrix.
We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.
We propose a new partial decoding algorithm for one-point Hermitian codes that can decode up to the same number of errors as the Guruswami–Sudan decoder. Simulations suggest that it has a similar failure probability as the latter one. The algorithm is based on a recent generalization of the power decoding algorithm for Reed–Solomon codes and does not require an expensive root-finding step. In addition, it promises improvements for decoding interleaved Hermitian codes.
Note: this publication has been superseded.
Let \(\mathsf{F}[\partial; \sigma, \delta]\) be a ring of Ore polynomials over a field. We give a new deterministic algorithm for computing the Popov form \(P\) of a nonsingular matrix \(A \in \mathsf{F}[\partial; \sigma, \delta]^{n \times n}\). Our main focus is to ensure controlled growth in the size of coefficients from \(\mathsf{F}\) in the case \(\mathsf{F} = \mathsf{K}(z)\), and even \(\mathsf{K} = \mathbb{Q}\). Our algorithms are based on constructing from \(A\) a linear system over \(\mathsf{F}\) and performing a structured fraction-free Gaussian elimination. The algorithm is output sensitive, with a cost that depends on the orthogonality defect of the input matrix: the sum of the row degrees in \(A\) minus the sum of the row degrees in \(P\). The resulting bit-complexity for the differential and shift polynomial case over \(\mathbb{Q}(z)\) improves upon the previous best.
We give an algorithm for computing all roots of polynomials over a univariate power series ring over an exact field \(\mathbb{K}\). More precisely, given a precision \(d\), and a polynomial \(Q\) whose coefficients are power series in \(x\), the algorithm computes a representation of all power series \(f(x)\) such that \(Q(f(x)) = 0 \bmod x^d\). The algorithm works unconditionally, in particular also with multiple roots, where Newton iteration fails. Our main motivation comes from coding theory where instances of this problem arise and multiple roots must be handled.
The cost bound for our algorithm matches the worst-case input and output size \(d \deg(Q)\), up to logarithmic factors. This improves upon previous algorithms which were quadratic in at least one of \(d\) and \(\deg(Q)\). Our algorithm is a refinement of a divide & conquer algorithm by Alekhnovich (2005), where the cost of recursive steps is better controlled via the computation of a factor of \(Q\) which has a smaller degree while preserving the roots.
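To make the problem statement concrete, here is a naive brute-force sketch over a small prime field: it enumerates all \(p^d\) truncated power series and keeps those annihilating \(Q\) modulo \(x^d\). This is exponential, in contrast to the paper's quasi-linear algorithm, but it illustrates why multiple roots matter. The names and the example \(Q(y) = y^2\) are mine, not from the paper.

```python
from itertools import product

p, d = 2, 4   # work over F_2, precision x^4

def mul_trunc(a, b):
    """Product of two truncated power series (length-d coefficient lists mod p)."""
    c = [0] * d
    for i in range(d):
        for j in range(d - i):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    return c

def q_eval(Q, f):
    """Evaluate Q(y) = sum_i Q[i](x) * y^i at the truncated series f, mod x^d."""
    acc, ypow = [0] * d, [1] + [0] * (d - 1)
    for qi in Q:
        term = mul_trunc(qi, ypow)
        acc = [(u + v) % p for u, v in zip(acc, term)]
        ypow = mul_trunc(ypow, f)
    return acc

# Q(y) = y^2: a double root at y = 0, exactly where plain Newton iteration fails
Q = [[0] * d, [0] * d, [1] + [0] * (d - 1)]
roots = [list(f) for f in product(range(p), repeat=d)
         if q_eval(Q, list(f)) == [0] * d]
```

Here every \(f = a_2 x^2 + a_3 x^3\) satisfies \(f^2 \equiv 0 \bmod x^4\) over \(\mathbb F_2\), so there are four roots rather than one, and they must be returned as a set with free trailing coefficients.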
We propose a new partial decoding algorithm for \(m\)-interleaved Reed–Solomon (IRS) codes that can decode, with high probability, a random error of relative weight \(1-R^{\frac m {m+1}}\) at all code rates \(R\), in time polynomial in the code length \(n\). This is an asymptotic improvement over the previous state-of-the-art for all rates, and the first improvement for \(R > 1/3\) in the last 20 years. The method combines collaborative decoding of IRS codes with power decoding up to the Johnson radius.
We present a new general construction of MDS codes over a finite field \(\mathbb{F}_q\). We describe two explicit subclasses which contain new MDS codes of length at least \(q/2\) for all values of \(q \ge 11\). Moreover, we show that most of the new codes are not equivalent to a Reed–Solomon code.
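The MDS property itself is easy to check by brute force on small examples: an \([n,k]\) code is MDS exactly when every \(k\) columns of a generator matrix are linearly independent, i.e. when all \(k \times k\) minors are nonzero. Below is a hedged sketch over \(\mathbb F_{13}\), using a Reed–Solomon (Vandermonde) generator as the known-MDS example; all names are illustrative and this is not the paper's construction.

```python
from itertools import combinations, permutations

def det_mod(M, p):
    """Determinant of a small square matrix mod p (Leibniz expansion)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        term = -1 if inv % 2 else 1   # sign of the permutation
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total % p

def is_mds(G, p):
    """An [n, k] code is MDS iff every k columns of a generator matrix are
    linearly independent, i.e. every k x k minor is nonzero mod p."""
    k, n = len(G), len(G[0])
    return all(det_mod([[row[j] for j in cols] for row in G], p) != 0
               for cols in combinations(range(n), k))

p, k, n = 13, 3, 6
G = [[pow(a, i, p) for a in range(n)] for i in range(k)]   # Reed-Solomon / Vandermonde
G_bad = [row[:1] + row for row in G]                       # duplicated column: not MDS
```

For the Vandermonde matrix every \(k \times k\) minor is a product of differences of distinct evaluation points, hence nonzero, while the duplicated column in `G_bad` forces a zero minor.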
We describe how to solve simultaneous Padé approximations over a power series ring \(K[[x]]\) for a field \(K\) using \(O^{\sim}(n^{\omega - 1} d)\) operations in \(K\), where \(d\) is the sought precision and \(n\) is the number of power series to approximate. We develop two algorithms using different approaches and with different advantages. Both algorithms return reduced bases for the complete set of solutions to the input approximation problem. Our results are made possible by recent breakthroughs in fast computations of minimal approximant bases and Hermite Padé approximations.
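For intuition, the degree-constrained problem can be phrased as a kernel computation: requiring \(q f_i \equiv p_i \bmod x^d\) with \(\deg p_i \le d_p\) means the coefficients of \(q f_i\) in degrees \(d_p+1,\dots,d-1\) give linear conditions on the coefficients of \(q\). The sketch below solves this naively by Gaussian elimination over \(\mathbb F_p\), costing cubic time rather than the paper's \(O^{\sim}(n^{\omega-1}d)\); all names and the toy instance are mine.

```python
def mul_trunc(a, b, d, p):
    """First d coefficients of the product of coefficient lists a and b, mod p."""
    return [sum(a[j] * b[k - j] for j in range(k + 1)
                if j < len(a) and k - j < len(b)) % p
            for k in range(d)]

def series_inv(a, d, p):
    """Inverse of a power series with a[0] != 0, mod x^d (to build the example)."""
    b = [pow(a[0], p - 2, p)]
    for k in range(1, d):
        s = sum(a[j] * b[k - j] for j in range(1, min(k, len(a) - 1) + 1)) % p
        b.append(-b[0] * s % p)
    return b

def kernel_vector(rows, ncols, p):
    """A nonzero vector in the right kernel of `rows` over F_p, or None."""
    A = [r[:] for r in rows]
    pivots, r = {}, 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(A)) if A[i][c] % p), None)
        if pr is None:
            continue                       # c is a free column
        A[r], A[pr] = A[pr], A[r]
        inv = pow(A[r][c], p - 2, p)
        A[r] = [x * inv % p for x in A[r]]
        for i in range(len(A)):            # eliminate above and below the pivot
            if i != r and A[i][c] % p:
                f = A[i][c]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        pivots[c] = r
        r += 1
    free = next((c for c in range(ncols) if c not in pivots), None)
    if free is None:
        return None
    v = [0] * ncols
    v[free] = 1
    for c, row in pivots.items():
        v[c] = -A[row][free] % p
    return v

def simultaneous_pade(fs, d, dq, dp, p):
    """Nonzero q, deg q <= dq, with q*f_i = p_i mod x^d for some p_i, deg p_i <= dp."""
    rows = [[f[k - j] if k >= j else 0 for j in range(dq + 1)]
            for f in fs for k in range(dp + 1, d)]   # coefficients that must vanish
    q = kernel_vector(rows, dq + 1, p)
    if q is None:
        return None
    return q, [mul_trunc(q, f, dp + 1, p) for f in fs]

# toy instance over F_101: f_i = p_i / q_true mod x^8 for a hidden q_true = 1 + 2x + 5x^2
P, d, dq, dp = 101, 8, 2, 2
inv_q = series_inv([1, 2, 5], d, P)
fs = [mul_trunc([3, 0, 7], inv_q, d, P), mul_trunc([1, 4], inv_q, d, P)]
q, ps = simultaneous_pade(fs, d, dq, dp, P)
```

Since \(q_{\mathrm{true}}\) is a solution by construction, the kernel is nonempty and the returned \(q\) satisfies the vanishing conditions for both series.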
We prove that Alekhnovich’s algorithm can be used for row reduction of skew polynomial matrices. This yields an \(O(\ell^3 n^{(\omega+1)/2}\log(n))\) decoding algorithm for \(\ell\)-Interleaved Gabidulin codes of length \(n\), where \(\omega\) is the matrix multiplication exponent, improving in the exponent of \(n\) compared to previous results.
For many algebraic codes the main part of decoding can be reduced to a shift register synthesis problem. In this paper we present an approach for solving generalised shift register problems over skew polynomial rings which occur in error and erasure decoding of \(\ell\)-Interleaved Gabidulin codes. The algorithm is based on module minimisation and has time complexity \(O(\ell \mu^2)\) where \(\mu\) measures the size of the input problem.
Note: this publication has been superseded.
Power decoding, or “decoding by virtual interleaving”, of Reed–Solomon codes is a method for unique decoding beyond half the minimum distance. We give a new variant of the Power decoding scheme, building upon the key equation of Gao. We show various interesting properties such as behavioural equivalence to the classical scheme using syndromes, as well as a new bound on the failure probability when the powering degree is 3.
Power decoding, or “decoding using virtual interleaving”, is a technique for decoding Reed–Solomon codes up to the Sudan radius. Since the method’s inception, it has been a tantalising goal to incorporate multiplicities, the parameter allowing the Guruswami–Sudan algorithm to decode up to the Johnson radius. In this paper we show how this can be done, and describe how to efficiently solve the resulting key equations.
Note: this publication has been superseded.
The Kötter–Nielsen–Høholdt algorithm is a popular way to construct the bivariate interpolation polynomial in the Guruswami–Sudan decoding algorithm for Reed–Solomon codes. In this paper, we show how one can use Divide & Conquer techniques to provide an asymptotic speed-up of the algorithm, rendering its complexity quasi-linear in \(n\). Several of our observations can also provide a practical speed-up to the classical version of the algorithm.
We decode Reed-Solomon codes using soft information provided at the receiver. The Extended Euclidean Algorithm (EEA) is used as an initial step to obtain an intermediate result. The final decoding result is obtained by interpolating the output of the EEA at the least reliable positions of the received word. We refer to this decoding method as reduced list decoding, since, unlike other list-decoding methods such as the Guruswami–Sudan and Wu algorithms, not all received positions are used in the interpolation. Consequently, the complexity of the interpolation step is reduced considerably. The probability of failure can be minimised by adjusting certain parameters, making it comparable with the Kötter–Vardy algorithm while having a much lower complexity.
We show how to solve a generalised version of the Multi-sequence Linear Feedback Shift-Register (MLFSR) problem using minimisation of free modules over \(\mathbb F[x]\). We show that two existing algorithms for minimising such modules run particularly fast on these instances. Furthermore, we show how one of them can be made even faster for our use case. With our modelling of the problem, classical algebraic results greatly simplify reasoning about the algorithms. For the non-generalised MLFSR problem, these algorithms are as fast as the best currently known. We then use our generalised MLFSR to give a new fast decoding algorithm for Reed–Solomon codes.
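The non-generalised, single-sequence case is solved by the classical Berlekamp–Massey algorithm, which the module-minimisation view generalises. A compact sketch over \(\mathbb F_p\) (illustrative only; this is not the paper's module-based algorithm):

```python
def berlekamp_massey(s, p):
    """Shortest LFSR for sequence s over F_p: returns (C, L) with C[0] = 1 and
    sum_{j=0..L} C[j] * s[i-j] == 0 (mod p) for all i >= L."""
    C, B = [1], [1]        # current / previous connection polynomials
    L, m, b = 0, 1, 1      # LFSR length, steps since last update, last discrepancy
    for i, s_i in enumerate(s):
        # discrepancy: how far the current LFSR is from predicting s_i
        delta = s_i
        for j in range(1, L + 1):
            delta = (delta + C[j] * s[i - j]) % p
        if delta == 0:
            m += 1
            continue
        coef = delta * pow(b, p - 2, p) % p
        T = C[:]
        if len(B) + m > len(C):
            C = C + [0] * (len(B) + m - len(C))
        for j, Bj in enumerate(B):
            C[j + m] = (C[j + m] - coef * Bj) % p
        if 2 * L <= i:
            L, B, b, m = i + 1 - L, T, delta, 1
        else:
            m += 1
    return C, L

# the Fibonacci sequence satisfies s_i = s_{i-1} + s_{i-2}, i.e. C = 1 - x - x^2
C, L = berlekamp_massey([0, 1, 1, 2, 3, 5, 8, 13], 101)
```

The module formulation replaces this iteration by row reduction of a small \(\mathbb F[x]\)-matrix, which is what extends cleanly to multiple sequences.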
We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a \(\mathbb Z\)-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail with a probability dependent on the number of errors, and we give an upper bound for this. Simulation results indicate that the bound is close to the truth. We apply the proposed decoding algorithm for decoding a single CR code using the idea of “Power” decoding, suggested for Reed–Solomon codes. A combination of these two methods can be used to decode low-rate Interleaved Chinese Remainder codes.
An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich’s module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list of closest codewords within an intermediate radius can be performed with improved average-case complexity while retaining the worst-case complexity.
Note: this publication has been superseded.
This paper presents a complete design tool which performs automatic behavioral synthesis of asynchronous circuits (resource sharing, scheduling and binding).
The tool targets a traditional control-datapath-style template architecture. Within the limitations set by this template architecture it is possible to optimize for area (which is our main focus) or for speed. This is done by simply using different cost functions.
Input to the tool is a behavioral description in the Haste language, and output from the tool is a Haste program describing the synthesized implementation consisting of a datapath and a controller. The tool may be seen as an add-on to the Haste/TiDE tool flow, and it can be used to automatically optimize parts of a design and to quickly explore alternative optimizations. The paper outlines the design flow, explains key elements of the design tool, and presents a number of benchmark results.
We consider a type of Padé approximations over polynomial rings over fields, dubbed “2D Padé approximations”, which generalise both simultaneous and Hermite Padé approximations. This form is related to, but distinct from, both matrix Padé approximations and order bases. These approximations occur in e.g. decoding of algebraic codes.
We demonstrate how to solve such approximations using row reduction of polynomial matrices, and we review the achieved asymptotic complexity when using various fast, known methods for this.
We take a closer look at the Mulders–Storjohann algorithm for row reduction and show that its complexity is linked to the orthogonality defect of the input matrix, implying that it is fast when applied to 2D Padé approximations. We design a new algorithm, which essentially carries out the computations performed by Mulders–Storjohann in a demand-driven manner, yielding an algorithm which is often even faster. This algorithm can be considered a broad generalisation of the classical Berlekamp–Massey algorithm.
We also consider “weighted” variants of 2D Padé approximations, and it is shown how the considered algorithms can handle this with very little speed penalty.
We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods into a problem solvable by module minimisation; by applying the fastest known algorithms for this general problem, we then obtain realisations of each paradigm which are as fast as or faster than all previously known methods. An element of this is the “2D key equation”, a heavily generalised form of the classical key equation, and we show how to solve these using module minimisation, or using our new Demand-Driven algorithm, which is also based on module minimisation.
The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we give: a fast maximum-likelihood list decoder based on the Guruswami–Sudan algorithm; a new variant of Power decoding, Power Gao, along with some new insights into Power decoding; and a new, module based method for performing rational interpolation for the Wu algorithm. We also show how to decode Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes.
Recently, Wu proposed in [Wu08] a new approach to list decoding Reed-Solomon codes, quite different from the Guruswami-Sudan algorithm and its derivatives. It promises lower running time and offers new perspectives on the codes. Sadly, the impact of this idea has hitherto been minimal; the contribution is indisputable, so the reason might be found in the idea’s sole exposition: a convoluted article, presenting and proving a long range of intricate results under a strict page limit.
In this thesis, we give a complete, self-contained and novel presentation of Wu’s decoding algorithm. The key mechanics of the algorithm build on a combination of several previous decoders, and we begin with a full presentation of these and their background theory. We then derive Wu’s algorithm itself, with a clear division between the results in general algebraic theory and their application to the decoding problem.