# Young Researchers in Mathematics

There are now some videos available from the Beyond Part III / Young Researchers in Mathematics conference I attended earlier this year. Of particular note is David Spiegelhalter’s plenary lecture on probability and uncertainty. I summarised one of the ideas from that talk – the micromort – on Everything2, mentioning a comparison between the risks of Ecstasy and horse riding by “the chairman of the Advisory Council on the Misuse of Drugs”, which had led to calls for his resignation as early as January. The expert in question was Professor David Nutt, whose sacking in October has sparked controversy and debate over the role of science in policy making. Spiegelhalter’s presentation was highly accessible (and amusing!), so anyone interested in learning a bit more about these often-unintuitive subjects should check it out.

There is also video from the panel discussion and some of the accessible talks in the various themed sessions, all of which should help convince you to sign up for next year’s Young Researchers in Mathematics conference, running 25–27 March, again at Cambridge.

# Workshop on Discovery and Experimentation in Number Theory

I’m back from the Fields Institute in Toronto, where I spoke at the above workshop, on my usual topic of cyclotomic and 4-cyclotomic matrices/graphs. During the talk I described my conjecture that a graph is maximal cyclotomic if and only if it’s 4-cyclotomic, and after an hour at the blackboards with James McKee I now have a potential approach for proving that. So although I don’t think my talk went especially well, it’s had the desired effect!

You can find my slides here, and an audio recording may become available in the future – the conference was held in Toronto and Vancouver via videoconference (which worked well), so hopefully all the talks will be archived online.

Update 24/x/09: Audio of all talks at Fields (hence, including mine) can now be found on their website.

# Rule of Succession

Noders – users of Everything2 – often meet up in the real world in what are imaginatively known as nodermeets. Sometimes they even brave the British outdoors, and the two London nodermeets in parks have had an unexpected side effect: at each, a couple met and ended up getting married! Next month there will be another such meet, and (as one of the more mathematically-inclined britnoders) I was asked what the odds were of it being third time’s a charm, marriage-wise.

It’s easy to cook up a dodgy mathematical formula in support of a cause, and that particular flavour of bad science seems fairly popular with the media, so I wanted to set things on a vaguely valid theoretical basis for a change. Plus I knew I’d recently seen a similar question – what was the probability of the 44th President of the United States being a white male? – and its solution at a lecture during Beyond Part III; I just couldn’t remember the result or its originator.

Much googling of half-remembered formulae and likely candidate long-dead French mathematicians later, I’d recovered the answer. The desired theorem is the rule of succession, due to Laplace, and it can be described as follows:

If a trial can only succeed or fail, but nothing is known about the probability of either outcome except that there have been s successful trials out of n in total, then the probability of the next trial being a success is (s+1)/(n+2).

As an immediate corollary, if you know nothing about an event except that so far it has happened n times in a row, then the probability it will happen next time is (n+1)/(n+2). (This more specific version is also sometimes referred to as the rule of succession.) Laplace was trying to solve the sunrise problem: as the sun has risen every day, what is the probability of it rising tomorrow? Armed with the rule, we still require an estimate of how many successful sunrises there have been; Laplace, working in the 18th century, took a literal reading of the Bible for this, a practice which still appeals to young earth creationists. But although a more modern figure gives a probability much closer to 1, it still admits a 1/(n+2) chance of the sun not rising tomorrow.

This has often been used as a criticism of the rule of succession, but as often occurs the problem is more one of inappropriate application of a model than a flaw in the model itself: Laplace himself immediately cautioned that “…[the probability of the sun rising tomorrow] is far greater for him who, seeing in the totality of phenomena the principle regulating the days and seasons, realizes that nothing at present moment can arrest the course of it.”

In other words, our astronomical knowledge means that we have more to go on than just observed sunrises in estimating the chance of another, and we should defer to that. The rule of succession is to be used when you have little or no knowledge of the underlying processes or probability of an event. It’s particularly useful when there have only been a few trials, or no successes have been observed at all – the rule of succession provides a non-zero estimate in that case, which is desirable by Cromwell’s rule.

With the small sample space of a pair of nodermeets, and noder romance being infinitely more mysterious than celestial mechanics, I was thus happy to apply the rule of succession and declare the probability of a third marriage to be 3/4.
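The arithmetic behind that declaration fits in a couple of lines; here’s a minimal sketch (the helper name is my own, not anything from the post):

```python
from fractions import Fraction

def rule_of_succession(s, n):
    """Laplace's rule: probability of success on the next trial,
    given s successes observed in n trials so far."""
    return Fraction(s + 1, n + 2)

# Two nodermeets in parks, two marriages: s = n = 2.
print(rule_of_succession(2, 2))  # 3/4
```

With no trials at all (s = n = 0) the rule gives 1/2, reflecting total ignorance; and with no successes it still gives a non-zero 1/(n+2), in line with Cromwell’s rule.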

## Proof of the Rule of Succession

This proof is lifted from here, which is easier to read anyway…

Laplace’s assumptions were:

• The event has some chance of happening, between 0 and 1.
• All possible values of this chance, from 0 to 1, are equally probable a priori.
• His sixth principle of probability: for E an event and C_1, …, C_n possible causes of E,

$P(C_i|E)=\frac{P(E|C_i)P(C_i)}{\sum_{k=1}^{n}P(E|C_k)P(C_k)}$

(this is just Bayes’ Theorem.)
• His seventh principle of probability: for E an event, F a possible future event and C_1, …, C_n possible causes,

$P(F|E)=\sum_{k=1}^{n}P(F|C_k)P(C_k|E)$

We may then derive the special case of the rule of succession. Let E indicate that the event has occurred n times in a row; F that the event will occur next time; and C_x that the chance of the event occurring is x. The C_x are then considered as the possible causes of the event, so P(E|C_x) = x^n and P(F|C_x) is just x. Since there are infinitely many x in [0,1], we pass from summations to integrals in the sixth and seventh principles (the uniform prior means the P(C_x) terms cancel) and thus find

$P(C_x|E)=\frac{P(E|C_x)}{\int_{y=0}^{y=1} P(E|C_y)\,dy}=\frac{x^n}{1/(n+1)}=(n+1)x^n$

and so

$P(F|E)=\int_{x=0}^{x=1} P(F|C_x)P(C_x|E)dx =\int_{x=0}^{x=1} x(n+1)x^n dx=\frac{n+1}{n+2}$

as claimed.
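The derivation can also be checked empirically: draw a chance p uniformly, keep only the runs where all n trials succeed, and see how often the next trial succeeds. A quick simulation sketch (a sanity check of the result, not part of the original proof):

```python
import random

random.seed(1)

def simulate(n, draws=200_000):
    """Estimate P(next success | n successes in a row) under a uniform prior on p."""
    next_successes = 0
    kept = 0
    for _ in range(draws):
        p = random.random()  # uniform prior on the chance of the event
        if all(random.random() < p for _ in range(n)):  # condition on n successes
            kept += 1
            if random.random() < p:  # outcome of the (n+1)th trial
                next_successes += 1
    return next_successes / kept

est = simulate(3)
print(est)  # should be close to (3+1)/(3+2) = 0.8
```

The conditioning step is just rejection sampling, so only a fraction of the draws survive; with n = 3 about a quarter do, which is plenty for a two-decimal estimate.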

# Geometry Club Talk: Why Cryptography Doesn’t Guarantee Security

Tomorrow, in what seems to be becoming an annual tradition, I’ll be giving a talk on cryptography at the Geometry Club. Since it’ll be a talk-and-chalk style seminar, I don’t have any slides to make available, but many of the topics I’ll be discussing have appeared earlier on this blog or everything2. In particular, I’m planning to include:

• A brief overview of public-key cryptography: my notes from last year’s talk cover technical details in the context of (EC)DLP, or there’s this friendlier article on Diffie-Hellman key exchange and man-in-the-middle attacks.
• The problems of Authentication and Mutual Authentication at a protocol level, and the undecidability of the secrecy problem via Post’s Correspondence Problem.
• Lattice-based cryptography – which hopefully I can summarise in a later post.
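For readers following along, the key exchange mentioned in the first bullet fits in a few lines; here’s a toy sketch (parameters chosen for illustration only, nothing like real-world key sizes, and the base g is simply a small fixed value):

```python
import random

# Toy public parameters: a prime p (the Mersenne prime 2^61 - 1) and a base g.
p, g = 2**61 - 1, 5

random.seed(0)
a = random.randrange(2, p - 1)  # Alice's secret exponent
b = random.randrange(2, p - 1)  # Bob's secret exponent

A = pow(g, a, p)  # Alice publishes A = g^a mod p
B = pow(g, b, p)  # Bob publishes B = g^b mod p

# Both sides derive the same shared secret: B^a = g^(ab) = A^b (mod p)...
assert pow(B, a, p) == pow(A, b, p)

# ...but a man-in-the-middle who intercepts A and B and substitutes his own
# value ends up sharing a (different) key with each party instead, which is
# why the protocol alone doesn't authenticate anyone.
m = random.randrange(2, p - 1)
key_with_alice = pow(A, m, p)
key_with_bob = pow(B, m, p)
```

The man-in-the-middle lines are the point of the talk title: the mathematics of the exchange is sound, yet without authentication Mallory can transparently re-encrypt traffic in both directions.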

# An infinite family of n-hypercubes

Having hit a bit of a wall trying to prove that a maximal cyclotomic matrix necessarily squares to 4I, I’ve been exploring related questions instead. For instance, it only took a couple of tweaks to my code to search for matrices that square to 3I instead of 4I; there turn out to be only finitely many, which isn’t particularly interesting. However, one of them is an 8-vertex cube which I recognised as ‘half’ the maximal cyclotomic graph S16.

This got me thinking about the properties of graphs obtained by stitching together two graphs, and led to an interesting construction. If M squares to nI, then by taking a second copy of its graph, negating it, and joining each vertex in the original to the corresponding vertex in the copy, you get a new matrix which squares to (n+1)I.

By iterating this process, many of the maximal cyclotomic graphs can be recovered; and since there are infinite families of maximal cyclotomic graphs, I can demonstrate infinitely many non-trivial integer matrices with all eigenvalues of absolute value sqrt(n) for any integer n≥5 too.
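In matrix terms I read the construction as the block form N = [[M, I], [I, −M]] (my own way of writing it, so treat this as a sketch): then N² = [[M²+I, M−M], [M−M, M²+I]] = (n+1)I. A plain-Python check, starting from a single edge (for which M² = I):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

def double(M):
    """Given M with M^2 = nI, build the block matrix [[M, I], [I, -M]]:
    two signed copies of the graph joined by a perfect matching.
    The result squares to (n+1)I."""
    size = len(M)
    top = [row + [1 if i == j else 0 for j in range(size)]
           for i, row in enumerate(M)]
    bottom = [[1 if i == j else 0 for j in range(size)] + [-x for x in M[i]]
              for i in range(size)]
    return top + bottom

# Start from a single edge: a 2x2 M with M^2 = 1*I.
M = [[0, 1], [1, 0]]
for n in range(1, 5):
    size = len(M)
    assert matmul(M, M) == [[n if i == j else 0 for j in range(size)]
                            for i in range(size)]
    M = double(M)
# After four doublings M is a 32x32 signed matrix with M^2 = 5I.
```

The cross terms M·I − I·M vanish because M commutes with the identity, which is why the joining edges can be unsigned while only the copy of the graph is negated.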

A particularly nice example is the family of signed n-hypercubes generated by running this procedure on the ‘1-cyclotomic’ graph consisting of just a line. The picture at the top of this post is of the 5th step, and I’ve put together an animation illustrating its construction:

There’s no mathematical reason to stop at 5 – it just gets harder and harder to draw these things!

# A dangerous assumption

I’ve often said that trying to explain your work to others is the best way to check you understand it, and whilst preparing for this week’s talk at Cambridge (which was apparently well received!) I started to have some doubts about an algorithm I’d been using. In the end I didn’t go into enough technical detail during the talk for the issue to come up, but those concerns nagged at me during the commutes and lunchbreaks: eventually, I confirmed that there’s quite a big flaw in my proof as it stands.

For cyclotomic matrices over the rational integers, it turns out that all maximal examples satisfy the equation M² = 4I. However, this is a side effect of the classification, rather than an independent result which can be used to arrive at that classification. Which is a problem, as (despite attempts to guard against it) my approach implicitly assumes this condition…

Thus what I currently have is a classification not necessarily of all the maximal cyclotomic graphs, but of all the graphs whose matrix representation squares to 4I; no less substantial a piece of work, but a less significant result.

It’s still my belief that this is the full classification, but to prove that, I’ll need to demonstrate the equivalence of these two conditions (the reverse direction I already have, but that’s no help); or at least the weaker result that any cyclotomic graph with a vertex of weighted degree less than four admits a cyclotomic supermatrix, which is the hidden assumption in my existing algorithm. Suggestions welcome!

# MAGIC Talk

This week I’ve been at the rather swish Alan Turing building in Manchester for the MAGIC Postgraduate Conference and LMS Northern Regional Meeting.

I spoke at the former, on the subject of Integer Matrices with Constrained Eigenvalues. Here are my slides: it’s a fairly breezy 15-minute overview of the problem (which integer symmetric matrices have all eigenvalues in [-2,2]?) and its solution, covering Mahler measure, cyclotomic matrices, interlacing, and charged signed graphs. For further reading, here is the paper by McKee and Smyth (my supervisor) with their proof of the presented classification; also by Smyth is a survey on Mahler measure of 1-variable polynomials.
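To give a feel for the question in brackets, the 2×2 case can be brute-forced from the closed-form eigenvalues (an illustrative check of my own, not material from the talk):

```python
import math
from itertools import product

def is_cyclotomic_2x2(a, b, c):
    """The symmetric matrix [[a, b], [b, c]] has eigenvalues
    ((a+c) +/- sqrt((a-c)^2 + 4b^2)) / 2; cyclotomic means both lie in [-2, 2]."""
    d = math.sqrt((a - c) ** 2 + 4 * b * b)
    lo = ((a + c) - d) / 2
    hi = ((a + c) + d) / 2
    return -2 <= lo and hi <= 2

# Count the symmetric integer 2x2 matrices with entries in [-2, 2] that are
# cyclotomic (larger entries force an eigenvalue outside [-2, 2] anyway).
count = sum(is_cyclotomic_2x2(a, b, c)
            for a, b, c in product(range(-2, 3), repeat=3))
print(count)
```

For the general n×n problem this kind of enumeration explodes, which is where the interlacing and charged-signed-graph machinery from the talk comes in.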

In my own work I’ve generalised the idea of cyclotomicity (all eigenvalues in [-2,2]) to Hermitian matrices with algebraic integer entries from imaginary quadratic extension fields. I think I have a complete classification of these, with an alternative proof of the above rational integer case as a subcase. The results at least will hopefully appear here at some point, although for the proofs you’ll probably have to wait for my thesis.

# Genetic Algorithms with Processing

Genetic Algorithms are something I’ve found intriguing for a long time, but I’d never got around to actually implementing one. A recent flurry of blog posts and forum chatter about them – such as this program for evolving the Mona Lisa as a vector image, or this one which breeds vehicles – rekindled my interest. I quickly found a tutorial outlining the basics, and posing a sample problem: given a series of disks in a given area, find the largest disk which fits within the area without overlapping any of the others. I’d been planning to work, as usual, with SAGE, but the graphical nature of this problem gave me an opportunity to dive once again into Processing, which as it happens has recently left beta.

You can play with the results here; for an afternoon’s coding, I’m quite pleased with it as a proof of concept even though it doesn’t always do a fantastic job of solving the problem! As implemented, each run generates a fixed obstacle set of a dozen disks (in black) and an initial population of 25 candidate solutions. The ‘genome’ is 28 bits, split 10/9/9 between x co-ordinate, y co-ordinate and radius. Disks that overlap an obstacle or go off screen are awarded a fitness of zero and painted red; otherwise the fitness function is simply the radius. In line with the tutorial, I pick pairs by roulette selection based on this fitness (so red disks shouldn’t survive into the next generation), breed by crossover at bit 20, and mutate about 1 in 200 bits.

However, for this particular setup the crossover approach doesn’t seem ideal – offspring will inherit almost exactly the co-ordinates of one parent and the radius of the other. This often means large areas of the screen are missed unless a chance mutation moves a disk, which then has to be lucky enough to produce a few offspring in the next generation. It might work better if I generated offspring bit by bit (say, taking bits from each parent a certain percentage of the time), rather than the simple cut currently in use. Fortunately, it shouldn’t be hard to experiment in this way; ideally, I’d build several algorithms into the applet and offer a choice at runtime, as well as control over the assorted variables. If nothing else, it’s teaching me a lot about Processing, which I continue to be impressed with.
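The selection and breeding steps described above are generic, so here is a minimal sketch of the operators (my own re-implementation with genomes as bit lists, not the applet’s Processing code):

```python
import random

GENOME_BITS = 28       # 10 bits x, 9 bits y, 9 bits radius, as in the applet
CROSSOVER_POINT = 20   # fixed cut: the offspring takes x/y from one parent
MUTATION_RATE = 1 / 200

def roulette_pick(population, fitnesses):
    """Pick one genome with probability proportional to its fitness,
    so zero-fitness (red) disks are never selected."""
    r = random.uniform(0, sum(fitnesses))
    for genome, f in zip(population, fitnesses):
        r -= f
        if r <= 0:
            return genome
    return population[-1]

def crossover(mum, dad):
    """Single-point crossover at a fixed bit position."""
    return mum[:CROSSOVER_POINT] + dad[CROSSOVER_POINT:]

def mutate(genome):
    """Flip each bit independently with probability MUTATION_RATE."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]
```

The proposed bit-by-bit alternative would just replace `crossover` with a per-bit coin flip between the two parents (uniform crossover), leaving the rest of the loop untouched.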

# iSquared: The limits of Computation

The winter issue of iSquared magazine is now available, with the cover article ‘The limits of Computation’ being by, well, me! I discuss Turing Machines, undecidability, intractability, and (briefly) Artificial Intelligence.

I’ve not received my own copy so can’t comment on the other articles yet, although I’ve attended a talk by interviewee Keith Briggs before which was very good, so I expect the interview will be interesting too. So I hope you’ll pick up a copy- it’s a venture well worth supporting!