Not human, but inhabited by humans: writing mathematics

Mathematics can be written in many ways. One approach, very popular with professional pure mathematicians, is to write as little as possible. Often the best proof of a mathematical theorem is the shortest and most elegant. This fact, combined with some of the history and culture of mathematics, leads to the classic terse mathematical style: theorem-proof, theorem-proof, lemma-theorem-proof, definition-proposition-theorem-proof, and so on.

(The fact that most mathematicians dislike writing may also have something to do with this!)

I think that, on every mathematical subject, there ought to be texts which are written in this way: short, crisp, elegant, minimalist. But there should also be others.

The standard terse style is, however, imperfect for learning mathematics — especially for anyone below PhD level. Perhaps this style is tolerable for learning how the proofs go. It’s useful for understanding the exact steps in rigorous proofs of theorems. And it often works well with a highly motivated or sophisticated reader — one who understands that reading such books is not actually about reading, but about knowing when and how to ask oneself questions, filling in the details which have been omitted. The standard style is hard, both in the sense of “not easy” to read, and in the sense of “not soft”, with no surrounding story or context.

Such an approach is fine for insiders: those who already understand the culture and the conventions of mathematical literature. But for learners — particularly those with a weak background, as is increasingly the case — it is a different matter.

What tends to be left out in the standard terse style? Everything that makes mathematics human: history and context; motivation; commentary; connections within and beyond mathematics. And even a mathematician may not appreciate every book being so hard to read.

Other mathematicians may disagree, but given a choice between terse text, and a gentle version which is twice as long but twice as easy to read — and full of interesting details and tidbits — my preference is clear: genial is better than brutal and terse.

Why are we interested in the topic we are talking about? What are its implications and connections? Why do we cover the parts of the subject that we do? Why do we use the arguments we do, and why not others? Where did this proof come from? How could we use similar ideas to prove other results? These questions are often as important as the mathematical content itself.

More generally, contemporary curriculum and culture, at least in Australia, leads to the situation that students may know little about the background of their subject — even when they are studying at an advanced level.

Further, the classic terse approach can descend into a combination of intimidation and disrespect. Proofs and arguments are routinely omitted as “obvious” or “trivial”. Steps are skipped. Some readers may be fine with one or two gaps to fill in themselves, though no harm would have been done had the author included them; but every skipped step is a potential hazard, and a successful reader must navigate them all.

In extreme cases, authors leave a trail of breadcrumbs which the reader may be able to pick up and follow along, if they have enough knowledge or curiosity or insight or gumption or tenacity or luck. Mathematical writing then becomes a set of puzzles, where every sentence must be solved by the reader to progress to the next. Mathematicians in certain fields will recognise “classic” texts in the mathematical literature that are precisely of this type. All this in the supposed pursuit of communicating mathematics as fast and efficiently as possible!

Such an approach makes reading mathematics, in its terse classic style, a completely different affair from reading almost any other subject.

Why erect walls of unexplained argumentation, and browbeat those who cannot scale them with cries of “obvious” and “trivial”?

For students first arriving upon the abstract world of pure mathematics, it can seem a harsh, even brutal subject. That is because it is a harsh, brutal subject. Mathematics does not forgive your one mistaken observation: your proof will come crashing down despite your pleadings. Most of your thoughts on mathematics will be wrong — to the extent they are even precise enough to be wrong. To do mathematics is to work through all the wrong thoughts to make them right.

Mathematical arguments are true independent of what humans think of them: in this sense, the truths of mathematics live in their own world, a world that has no feelings and is not human. The independence of mathematics from the human world is the source of an austere beauty, but it can also make the subject seem cold and desolate.

It is a cold world, it is a harsh world, but it is a beautiful world, and its statements are pure, honest, and beautiful. And while it is not human, it is a world inhabited by humans. It also provides the language of science and the universe.

Some can brave entry to this world themselves. But why should we not provide some guidance as to the nature of this world, as we enter it?

A-polynomials, Ptolemy varieties, and Dehn filling, Melbourne June 2020

On 15 June 2020 I gave a talk in the topology seminar at the University of Melbourne.

Title: A-polynomials, Ptolemy varieties, and Dehn filling

Abstract: The A-polynomial is a 2-variable knot polynomial which encodes topological and hyperbolic geometric information about a knot complement. In recent times it has been shown that the A-polynomial can be calculated from Ptolemy equations. Historically reaching back to antiquity, Ptolemy equations arise all across mathematics, often alongside cluster algebras.

In recent work with Howie and Purcell, we showed how to compute A-polynomials by starting with a triangulation of a manifold, then using symplectic properties of the Neumann-Zagier matrix to change basis, eventually arriving at a set of Ptolemy equations. This work refines methods of Dimofte, and the result is similar to certain varieties studied by Zickert and others. Applying this method to families of manifolds obtained by Dehn filling, we find relations between their A-polynomials and the cluster algebra of the cusp torus.

20-06_unimelb_talk_web

Monash topology talk on Circle packings, Lagrangian Grassmannians, and Scattering Diagrams, April 2020

On 1 April 2020 I gave a talk in the Monash topology seminar.

Title: Circle packings, Lagrangian Grassmannians, and scattering diagrams

Abstract: I’ll discuss some recent work, in progress, relating the theory of circle packing to various ideas in geometry and physics. In particular, we’ll show how ideas of Penrose and Rindler can shed light on circle packings, describing them by spinors or by Lagrangian planes satisfying various conditions. We’ll also touch on how the resulting spinor equations are related to on-shell diagrams in scattering theory.

20-04_circle_packing_talk

A-polynomials, Ptolemy varieties and Dehn filling

(45 pages) – on the arXiv

Abstract: The A-polynomial encodes hyperbolic geometric information on knots and related manifolds. Historically, it has been difficult to compute, and particularly difficult to determine A-polynomials of infinite families of knots. Here, we show how to compute A-polynomials by starting with a triangulation of a manifold, similar to Champanerkar, then using symplectic properties of the Neumann-Zagier matrix encoding the gluings to change the basis of the computation. The result is a simplification of the defining equations. Our methods are a refined version of Dimofte’s symplectic reduction, and we conjecture that the result is equivalent to equations arising from the enhanced Ptolemy variety of Zickert, which would connect these different approaches to the A-polynomial.

We apply this method to families of manifolds obtained by Dehn filling, and show that the defining equations of their A-polynomials are Ptolemy equations which, up to signs, are equations between cluster variables in the cluster algebra of the cusp torus. Thus the change in A-polynomial under Dehn filling is given by an explicit twisted cluster algebra. We compute the equations for Dehn fillings of the Whitehead link.

APolysDehn_arxiv_v2

The sensitivity conjecture, induced subgraphs of cubes, and Clifford algebras

(4 pages) – on the arXiv

Abstract: We give another version of Huang’s proof that an induced subgraph of the n-dimensional cube graph containing over half the vertices has maximal degree at least \(\sqrt{n}\), which implies the Sensitivity Conjecture. This argument uses Clifford algebras of positive definite signature in a natural way. We also prove a weighted version of the result.

sensitivity_conjecture_induced_subgraphs_clifford_algebras

Talk in Monash discrete mathematics seminar, September 2019

On 16 September 2019 I gave a talk in the Monash discrete mathematics seminar.

Title: The sensitivity conjecture, induced subgraphs of cubes, and Clifford algebras

Abstract: Recently, Hao Huang gave an ingenious short proof of a longstanding conjecture in computer science, the Sensitivity Conjecture, about the complexity of boolean functions. Huang proved this conjecture by establishing a result about the maximal degree of induced subgraphs of cube graphs. In recent work, we gave a new version of this result, and slightly generalised it, by connecting it to the theory of Clifford algebras, algebraic structures which arise all across mathematics.

Monash topology talk on sensitivity conjecture and Clifford algebras, July 2019

On 31 July 2019 I gave a talk at Monash University in the topology seminar.

Title: The sensitivity conjecture, induced subgraphs of cubes, and Clifford algebras

Abstract: Recently, Hao Huang gave an ingenious short proof of a longstanding conjecture in computer science, the Sensitivity Conjecture. Huang proved this conjecture by establishing a result about the maximal degree of induced subgraphs of cube graphs. In very recent work, we gave a new version of this result, and slightly generalised it, by connecting it to the theory of Clifford algebras, algebraic structures which arise naturally in geometry, topology and physics.

Breakthroughs in primary school arithmetic

Humans have known how to multiply natural numbers for a long time. In primary school you learn how to multiply numbers using an algorithm which is often called long multiplication, and it was known to the ancient Babylonians. But it’s called “long” for a reason — you have to write a lot of lines! If you’re multiplying two numbers which both have length n, then you have to multiply every digit of the first number by every digit of the second number, so there are \(n^2\) multiplication operations. Then there are several additions. But addition is much easier than multiplication, as you learn in primary school: you can just go down column by column and work it out, and adding up two numbers of length n only takes roughly n operations.

In 1960, the great mathematician Andrey Kolmogorov was teaching a seminar in Soviet Russia. He conjectured that the ancient Babylonian method was best possible, in the sense that any algorithm to multiply natural numbers of length n must involve at least \(n^2\) single-digit multiplication operations. One of the students in that seminar was Anatoly Karatsuba. One week later, Karatsuba came back with an improved method, which requires only about \(n^{\log_2 3} \sim n^{1.58}\) multiplications. (Strictly speaking it’s \(O(n^{\log_2 3})\), if you know “big-O notation”.)

Karatsuba’s method to multiply two 2-digit numbers involves multiplying the units digits, multiplying the tens digits, and then multiplying the sum of the digits of one number by the sum of the digits of the other number. With some judicious addition, subtraction and placement of extra zeroes, the required product can be found. Karatsuba’s method in general repeats this method, in a recursive fashion, on larger numbers.
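The recursive idea can be sketched in a few lines of code. The sketch below (my own illustration, not Karatsuba’s original presentation) splits each number into high and low halves and uses three recursive multiplications — of the low parts, the high parts, and the sums of parts — instead of the four that long multiplication would need:

```python
def karatsuba(x, y):
    """Multiply nonnegative integers x and y by Karatsuba's recursive method."""
    # Base case: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y

    # Split each number into a high and a low half at position m.
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)

    # Three recursive products instead of four:
    low = karatsuba(low_x, low_y)
    high = karatsuba(high_x, high_y)
    # (low_x + high_x)(low_y + high_y) - low - high = low_x*high_y + high_x*low_y
    mid = karatsuba(low_x + high_x, low_y + high_y) - low - high

    # Reassemble with the appropriate powers of 10.
    return high * 10 ** (2 * m) + mid * 10 ** m + low
```

Since each level of recursion replaces one multiplication of length-n numbers by three of roughly half the length, the total count of single-digit multiplications is about \(n^{\log_2 3}\).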

This was all in the news recently. Since Karatsuba’s breakthrough, there have been several further advances in the multiplication of natural numbers. But in the last few weeks, a paper was posted by two mathematicians, including David Harvey, an Australian number theorist at UNSW. It purports to give an algorithm to multiply natural numbers in time \(O(n \log n)\). A good article about all this recently appeared in Quanta Magazine, which is worth a read: here’s a link.