My talks are usually drawn by hand in Inkscape using a Wacom digital pen. They are brought to presentation life by the Inkscape extension Sozi, which riddles the SVG with JavaScript. I used to use JessyInk for this. See also my Sozi hack for making objects appear and disappear.
To watch a presentation, simply open the SVG in a modern browser (except Internet Explorer) and navigate using Space and Backspace. Chrome seems to have the best performance. Unfortunately, the file sizes are quite large, so depending on your connection, it might take a moment to finish loading.
Ore polynomials, also known as skew polynomials, are non-commutative polynomials which can algebraically model differential equations, time-dependent systems, linear maps over finite fields, and more. Matrices over Ore polynomial rings can model systems of these objects and have found applications in diverse areas. Computing normal forms of such matrices can be useful for checking system equivalence or finding special elements in the space, e.g. shortest vectors.
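To make the non-commutativity concrete, here is a minimal plain-Python sketch of one classical example: the ring of differential operators over \(\mathbb{Q}[x]\) with the commutation rule \(\partial a = a\partial + a'\). The list-based representation and all function names are my own choices for this illustration, not part of the talk.

```python
from math import comb

# Polynomials over Q as coefficient lists, lowest degree first.
def p_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def p_mul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def p_diff(a):
    return [i * c for i, c in enumerate(a)][1:]

# An Ore (differential) operator sum_i a_i(x) * D^i is stored as the
# list of its coefficient polynomials [a_0, a_1, ...].
def ore_mul(A, B):
    """Multiply operators using the rule D^i b = sum_k C(i,k) b^(k) D^(i-k)."""
    res = [[] for _ in range(len(A) + len(B) - 1)]
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            deriv = b
            for k in range(i + 1):
                term = p_mul(a, [comb(i, k) * c for c in deriv])
                res[i + j - k] = p_add(res[i + j - k], term)
                deriv = p_diff(deriv)
    return res
```

Multiplying \(\partial\) and \(x\) in the two orders gives \(\partial x = x\partial + 1\) versus \(x\partial\), so the ring is visibly non-commutative.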
In this talk I will introduce Ore polynomial rings and some important examples thereof, and describe some recent work on computing certain reduced normal forms of matrices of Ore polynomials.
Similar talk also given at:
We describe how to solve simultaneous Padé approximations over a power series ring \(K[[x]]\) for a field \(K\) using \(\tilde{O}(n^{\omega - 1} d)\) operations in \(K\), where \(d\) is the sought precision and \(n\) is the number of power series to approximate. We develop two algorithms using different approaches and with different advantages. Both algorithms return reduced bases for the complete set of solutions to the input approximation problem. Our results are made possible by recent breakthroughs in fast computations of minimal approximant bases and Hermite Padé approximations.
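For intuition, here is a brute-force linear-algebra sketch of the problem itself, not of the fast algorithms of the abstract: a simultaneous Padé approximation is a nonzero kernel vector of an explicit linear system over \(K\). The field choice, degree bounds and all names below are assumptions of the example.

```python
P = 101  # example prime field GF(P)

def kernel_vector(rows, ncols):
    """Gaussian elimination mod P; return one nonzero kernel vector, or None."""
    M = [r[:] for r in rows]
    pivots = {}
    r = 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(M)) if M[i][c] % P), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], P - 2, P)
        M[r] = [x * inv % P for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % P:
                f = M[i][c]
                M[i] = [(x - f * y) % P for x, y in zip(M[i], M[r])]
        pivots[c] = r
        r += 1
    free = [c for c in range(ncols) if c not in pivots]
    if not free:
        return None
    v = [0] * ncols
    v[free[0]] = 1
    for c, row in pivots.items():
        v[c] = -M[row][free[0]] % P
    return v

def simultaneous_pade(series, d, dq, dp):
    """Find q (deg <= dq) and p_i (deg <= dp) with q*f_i = p_i mod x^d."""
    n = len(series)
    ncols = (dq + 1) + n * (dp + 1)
    rows = []
    for i, f in enumerate(series):
        for k in range(d):
            row = [0] * ncols
            for j in range(min(k, dq) + 1):
                row[j] = f[k - j] % P
            if k <= dp:
                row[dq + 1 + i * (dp + 1) + k] = P - 1  # coefficient -1
            rows.append(row)
    v = kernel_vector(rows, ncols)
    if v is None:
        return None
    q = v[:dq + 1]
    ps = [v[dq + 1 + i * (dp + 1): dq + 1 + (i + 1) * (dp + 1)]
          for i in range(n)]
    return q, ps
```

With more unknowns than equations a nonzero solution always exists, and any nonzero kernel vector has \(q \neq 0\); for instance the truncations of \(1/(1-x)\) and \(1/(1-2x)\) admit the common denominator \((1-x)(1-2x)\).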
Similar talk also given at:
Padé approximations over \(F[x]\) for \(F\) a finite field, as well as their Hermitian and Simultaneous generalisations, have numerous applications in coding theory, and have recently been identified as the computational bottleneck of many decoding algorithms.
We describe this purely symbolic-algebraic problem, exemplify the applications in coding theory and describe a recent algorithm which solves this problem faster than previously possible.
SageMath is a powerful open-source computer algebra system. Beyond being open-source – a clear advantage and “ethical” feature for researchers – SageMath has user-friendly interfaces that support experimentation, such as the Jupyter Notebook and SageMathCloud. For coding theory, SageMath has in recent years become vastly more powerful, mainly due to the ACTIS project, which employed David Lucas full-time for two years as a software developer. We give a succinct demonstration of selected capabilities that SageMath now offers to the working and teaching coding theorist.
The presentation is a Jupyter Notebook file, which can be opened by Sage, either installed on your own system or in SageMathCloud.
Error-correcting codes are low-dimensional vector spaces embedded in larger-dimensional ones such that vectors, or codewords, are spaced far apart. This enables one to recover codewords by knowing noisy versions of them. Algebraic Coding Theory deals with constructing such spaces using algebraic relations. Recovering a codeword then becomes solving certain structured equations.
Reed-Solomon codes are the most famous, and arguably the simplest, algebraic codes. Since the work of Berlekamp in the 60s, we have known how to correct errors by solving a Padé approximation over \(GF(q)[x]\). Since a breakthrough result of Guruswami and Sudan in 1999, interest in generalising this method has been reinvigorated. In Power decoding, introduced by Schmidt, Sidorenko and Bossert in 2006, one solves a Simultaneous Padé approximation with the aim of correcting more errors.
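To make the setting concrete, here is a toy sketch with parameters chosen purely for illustration, using brute force rather than any of the decoders above: a \([6,2]\) Reed-Solomon code over \(GF(7)\), whose codewords all lie at Hamming distance at least \(n-k+1=5\) from one another.

```python
from itertools import product

P, n, k = 7, 6, 2
points = list(range(1, n + 1))  # evaluation points in GF(7)

def encode(msg):
    """Evaluate the degree < k message polynomial at the n points."""
    return tuple(sum(c * pt**i for i, c in enumerate(msg)) % P
                 for pt in points)

codewords = [encode(msg) for msg in product(range(P), repeat=k)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Brute-force minimum distance over all pairs of codewords.
dmin = min(hamming(a, b)
           for i, a in enumerate(codewords)
           for b in codewords[i + 1:])
```

Since a nonzero polynomial of degree below \(k\) has fewer than \(k\) roots, any two distinct codewords agree in fewer than \(k\) positions, which is exactly why noisy codewords can be recovered.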
In this talk, I will introduce the problem of error-correction, and describe in modern and simple terms how these approximation problems naturally arise for Reed-Solomon codes. I will go on to describe a very recent extension of Power decoding, where one solves a type of Simultaneous Hermite Padé approximation.
Similar talk also given at:
Power decoding, also known as “decoding using virtual interleaving” is a simple technique for decoding Reed-Solomon codes up to the Sudan radius. It is based on the classical approach of solving a “key equation” involving a special polynomial whose roots reveal the error positions. Since the inception of Power decoding, it has been an open question if it is possible to incorporate “multiplicities”: the parameter allowing the Guruswami-Sudan algorithm (G-S) to decode up to the Johnson radius.
In this talk I will describe how that can be done. I’ll start with classical key equation decoding, and show how simple extensions lead first to Power decoding and then to the new decoder. I go on to describe how the resulting equations can be solved in an efficient manner using lattice basis reduction. This gives a new, simple decoding algorithm with the same decoding radius and speed as the G-S algorithm, but which does not need the additional root-finding step of the G-S. It has the drawback that it is a partial bounded-distance decoding algorithm, as it will fail for a few received words.
Reed–Solomon codes have optimal minimum distance and we know efficient encoding and decoding algorithms of quasi-linear complexity in the length. Their main drawback is that their lengths are bounded by the size of the alphabet, i.e. the field over which they are defined.
Algebraic geometry codes are a generalisation allowing longer codes on the same alphabet, and one of the most interesting sub-families of these are the Hermitian codes. The price for the greater length is more complicated computations: so far, no decoding algorithm with sub-quadratic complexity in the length of the code has been known.
We show how to achieve this by building the decoder around the problem of finding a short vector in an \(\mathbb{F}[x]\)-module, and performing this step using state-of-the-art algorithms from computer algebra. This approach follows recent trends in decoding of Reed–Solomon codes. Furthermore, our decoder is a “Power decoder”, probabilistically capable of decoding errors beyond half-the-minimum distance for low-rate codes.
Joint work with Peter Beelen.
Similar talk also given at:
Joint work with Peter Beelen.
Hermitian codes are a sub-family of Algebraic Geometry codes - a huge family of evaluation codes defined using function fields over (usually) plane curves. Hermitian codes have very good minimum distance and can be significantly longer than Reed-Solomon codes over the same alphabet. Unfortunately, decoding becomes more complicated, and the best algorithms we know are still asymptotically slower than those we know for Reed-Solomon codes.
In this talk, I will present the first method which brings the complexity of decoding Hermitian codes down to \(\tilde{O}(n^{5/3})\), where \(n\) is the length of the code. We achieve this by embedding the core function field equation into multiple equations over \(\mathbb{F}[x]\), \(\mathbb{F}\) being the base field of the code. The resulting problem becomes a generalised Padé approximation over \(\mathbb{F}[x]\), which can be solved by finding a “short” vector in an explicitly given \(\mathbb{F}[x]\) lattice. This in turn can be found using recent methods from computer algebra, though the notion of “short” involves some extra difficulties.
Our method applies to two approaches of decoding beyond half the minimum distance: Guruswami-Sudan and Power decoding. In this talk, however, we will focus on how we embed the function field problem into \(\mathbb{F}[x]\), and therefore we discuss only decoding up to half the minimum distance (minus half the genus).
Power decoding, or “decoding by virtual interleaving”, of Reed–Solomon codes is a method for unique decoding beyond half the minimum distance. We give a new variant of the Power decoding scheme, building upon the key equation of Gao. We show various interesting properties such as behavioural equivalence to the classical scheme using syndromes, as well as a new bound on the failure probability when the powering degree is 3.
Power decoding, or “decoding using virtual interleaving” is a technique for decoding Reed–Solomon codes up to the Sudan radius. Since the method’s inception, it has been a tantalising goal to incorporate multiplicities, the parameter allowing the Guruswami–Sudan algorithm to decode up to the Johnson radius. We show how this can be done, and describe how to efficiently solve the resulting key equations.
The Kötter–Nielsen–Høholdt algorithm is a popular way to construct the bivariate interpolation polynomial in the Guruswami–Sudan decoding algorithm for Reed–Solomon codes. In this paper, we show how one can use Divide & Conquer techniques to provide an asymptotic speed-up of the algorithm, rendering its complexity quasi-linear in \(n\). Several of our observations can also provide a practical speed-up to the classical version of the algorithm.
Computer algebra is concerned with finding efficient algorithms for carrying out exact algebraic manipulations, such as multiplying very large integers or computing standard forms of matrices over the rational numbers. In this talk, I will demonstrate some common considerations and techniques used for obtaining asymptotically fast algorithms from simpler, more iterative procedures. The problem considered is that of finding a bivariate polynomial over a (finite) field which has a list of prescribed zeroes, each with high multiplicity. This is an important sub-problem in modern decoding techniques of various algebraic error-correcting codes.
We consider a type of Padé approximations over polynomial rings over fields, dubbed “2D Padé approximations”, which generalise several known Padé approximations. These approximations occur frequently in decoding of algebraic codes (most recently in a preprint coauthored by several members of the AriC team).
We’ll see how such approximations can be solved using row reduction of polynomial matrices. We take a closer look at the Mulders–Storjohann algorithm for doing this and show that its complexity is linked to the orthogonality defect of the input matrix, implying that it is faster than initially guessed when applied to 2D Padé approximations.
We then present a new algorithm, which essentially carries out the computations performed by Mulders–Storjohann in a demand-driven manner, yielding an algorithm which is often even faster. This algorithm has a very Berlekamp–Massey-like quality, but is derived entirely in the module language.
We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module minimisation; by applying the fastest known algorithms for this general problem, we then obtain realisations of each paradigm which are as fast or faster than all previously known methods. An element of this is the “2D key equation”, a heavily generalised form of the classical key equation, and we show how to solve these using module minimisation, or using our new demand-driven algorithm, which is also based on module minimisation.
The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we give: a fast maximum-likelihood list decoder based on the Guruswami–Sudan algorithm; a new variant of Power decoding, Power Gao, along with some new insights into Power decoding; and a new, module based method for performing rational interpolation for the Wu algorithm. We also show how to decode Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes.
We show how to solve a generalised version of the Multi-sequence Linear Feedback Shift-Register (MLFSR) problem using minimisation of free modules over \(\mathbb{F}[x]\). We show how two existing algorithms for minimising such modules run particularly fast on these instances. Furthermore, we show how one of them can be made even faster for our use. With our modelling of the problem, classical algebraic results tremendously simplify arguing about the algorithms. For the non-generalised MLFSR, these algorithms are as fast as what is currently known. We then use our generalised MLFSR to give a new fast decoding algorithm for Reed–Solomon codes.
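For reference, the non-generalised single-sequence case is the one solved by the classical Berlekamp–Massey algorithm. Below is a plain-Python sketch of it over a prime field; it is only the baseline the abstract compares against, and the function name and conventions are my own.

```python
def berlekamp_massey(s, p):
    """Shortest LFSR for sequence s over GF(p).

    Returns (L, C) with C = [1, c1, ..., cL] such that
    s[n] + c1*s[n-1] + ... + cL*s[n-L] == 0 (mod p) for all n >= L.
    """
    C, B = [1], [1]      # current and previous connection polynomials
    L, m, b = 0, 1, 1    # LFSR length, shift since last update, last discrepancy
    for n in range(len(s)):
        # Discrepancy: how far the current LFSR is from producing s[n].
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        T = C[:]
        C = C + [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C
```

On the Fibonacci sequence mod 101 it recovers the length-2 recurrence \(s_n = s_{n-1} + s_{n-2}\), i.e. the connection polynomial \(1 - x - x^2\).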
An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich’s module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list of closest codewords within an intermediate radius can be performed with improved average-case complexity while retaining the worst-case complexity.
Similar talk also given at:
We derive the Wu list-decoding algorithm for Generalised Reed-Solomon codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp-Massey algorithm. We give a new application of the Wu list decoder by decoding irreducible binary Goppa codes up to the binary Johnson radius.
In 2007 Wu published a new list decoder for Reed–Solomon codes. It has similarities with the famous Guruswami–Sudan algorithm, most notably the decoding radius, but works in a significantly different manner. Importantly, for high-rate codes the Wu list-decoder has lower complexity than the Guruswami-Sudan. At the core, it is an extension of the minimum-distance decoder using the Berlekamp-Massey algorithm (BMA). The derivation, however, is somewhat involved and requires extensive arguments on the precise workings of the BMA. I will describe how the Wu list-decoder works, but using the Extended Euclidean algorithm as its main engine. This reveals more clearly the algebra underlying the decoder.
We derive the Wu list-decoding algorithm for Generalised Reed-Solomon codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp-Massey algorithm.
Similar talk also given at:
Subjective Advice on Going from PhD to Postdoc (and Beyond).
Short and light presentation of paper-art, especially Origami, with examples of the mathematical side and engineering applications. Following the presentation, the participants were to gather in groups and fold a mathematical yet beautiful paper star – without the use of scissors or glue.
Computing a fraction of polynomials, a rational expression, which approximates a power series up to a given precision is a well-known tool in many computational domains, known as Padé approximation. A natural generalisation is to approximate multiple power series simultaneously, with multiple rational expressions sharing the same denominator.
We describe an easily understandable and quite efficient method for solving this problem when the polynomials are over a finite field.
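The abstract does not spell out the method, but a classical and easily described approach is to run the extended Euclidean algorithm on \(x^d\) and the truncated series, stopping early. The following plain-Python sketch (the names and the stopping threshold \(k\) are my choices) computes a Padé approximant with numerator degree below \(k\) and denominator degree at most \(d - k\), over a small prime field.

```python
P = 101  # example prime field GF(P)

def trim(a):
    """Reduce mod P and strip leading zero coefficients."""
    a = [c % P for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def p_mul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def p_sub(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
                 for i in range(n)])

def p_divmod(a, b):
    a, b = trim(a), trim(b)
    inv = pow(b[-1], P - 2, P)
    q = [0] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b):
        c = a[-1] * inv % P
        sh = len(a) - len(b)
        q[sh] = c
        a = trim([(ai - c * b[i - sh] if sh <= i < sh + len(b) else ai)
                  for i, ai in enumerate(a)])
    return trim(q), a

def pade(S, d, k):
    """Find (num, den) with den * S = num mod x^d and deg num < k."""
    r0, r1 = [0] * d + [1], trim(S[:d])   # x^d and the truncated series
    t0, t1 = [], [1]
    while r1 and len(r1) - 1 >= k:
        q, r = p_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, p_sub(t0, p_mul(q, t1))
    return r1, t1
```

With \(S = 1 + x + x^2 + x^3\), \(d = 4\) and \(k = 2\), the sketch returns numerator \(1\) and denominator \(1 - x\) (stored as \([1, 100]\) mod \(101\)), recovering \(S \approx 1/(1-x)\).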
A short introduction to being a PhD student, targeted at high-school students.
No abstract available