Explaining Mathematics

Source: Michael Nielsen blog, Feb 2016

Exploration and discovery require a logic that is different to, and at least as valuable as, conventionally “correct” reasoning. The idea of semi-concrete reasoning is a step toward media to support such exploration and discovery, and perhaps toward new ways of thinking about mathematics.

Alan Kay has asked “what is the carrying capacity for ideas of the computer?” We may also ask the closely related question: “what is the carrying capacity for discovery of the computer?” In this essay we’ve made progress on that question using a simple strategy: develop a prototype medium to represent mathematics in a new way, and carefully investigate what we can learn when we use the prototype to attack a serious mathematical problem.

In future, it’d be of interest to pursue a similar strategy with other problems, and with more adventurous interface ideas. And, of course, it would be of interest to build out a working system that develops the best ideas fully, not merely prototypes.

A visual proof that neural nets can compute any function

Source: Michael Nielsen blog, Jan 2016

Universality tells us that neural networks can compute any function; and empirical evidence suggests that deep networks are the networks best adapted to learn the functions useful in solving many real-world problems.

One of the most striking facts about neural networks is that they can compute any function at all. That is, suppose someone hands you some complicated, wiggly function, f(x):



No matter what the function, there is guaranteed to be a neural network so that for every possible input, x, the value f(x) (or some close approximation) is output from the network, e.g.:
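The construction behind this visual proof builds the approximation out of "bumps": pairs of hidden neurons with very large weights act as step functions, and two steps with opposite output weights make a bump whose height samples f. A minimal sketch (the particular wiggly f and all parameter values are illustrative assumptions, not from the essay):

```python
import math

def sigmoid(z):
    # Clamp to avoid overflow in math.exp for large |z|.
    if z > 35.0:
        return 1.0
    if z < -35.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def step(x, s, w=1000.0):
    # A hidden neuron with a large weight w behaves as a step at position s.
    return sigmoid(w * (x - s))

def bump(x, a, b, h):
    # Two step neurons with opposite output weights form a bump of height h on [a, b).
    return h * (step(x, a) - step(x, b))

def f(x):
    # An arbitrary "complicated, wiggly" target function on [0, 1].
    return 0.2 + 0.4 * x**2 + 0.3 * x * math.sin(15 * x)

def net(x, n=200):
    # A one-hidden-layer network: n bumps whose heights sample f at bin midpoints.
    return sum(bump(x, i / n, (i + 1) / n, f((i + 0.5) / n)) for i in range(n))

max_err = max(abs(net(t / 1000) - f(t / 1000)) for t in range(10, 991))
print(f"max |net - f| on [0.01, 0.99]: {max_err:.4f}")
```

Increasing the number of bumps n tightens the approximation, which is the intuition behind the universality claim: any continuous function on a bounded interval can be matched to any desired accuracy.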


Understanding Creativity

Source: DeGruyter.com, 2015

On the more cognitive, system- and problem-solving-oriented side, progress has been slower, and many questions concerning the cognitive nature and computational modelling of creativity, for instance in concept invention, idea generation, or inventive problem-solving, remain unanswered.
This delay in development is due in part to one of the fundamental questions in creativity research and computational creativity, namely the question of a general definition of creativity as a cognitive capacity. While it usually seems straightforward for humans to recognise (or at least judge) the presence or absence of creativity in different forms of artistic performance, or in a solution to a problem or task, no explicit characterisation of creativity, nor reasonably general criteria for deciding when an artefact, behaviour, or idea is to be acknowledged as creative, has hitherto been achieved.

… the lack of process models or mechanistically informative theories which could serve as a basis for a computational (re-)implementation of creativity.

How Computational Creativity Began

Source: Springer, 2015

Computational creativity (CC, for short) is the use of computers to generate results that would be regarded as creative if produced by humans alone. Strictly speaking, this includes not only art, but also innovative scientific theories, mathematical concepts, and engineering designs. But the term is often used—as I shall do, here— to apply mainly to results having artistic interest.

CC was glimpsed on the horizon over 170 years ago, when Ada Lovelace said of Charles Babbage’s Analytical Engine that it “might compose elaborate and scientific pieces of music of any degree of complexity or extent” [41, p. 270].
A century later, Alan Turing was producing (as a joke) programmed love-letters on Manchester’s MADM computer [37]; and haikus would soon be generated on Cambridge’s EDSAC machine (see below). Even more to the point (or so it might seem), “creativity” was identified as one of the chief goals in the document planning the Dartmouth Summer School of 1956 [42]. That meeting was where artificial intelligence was officially named, and where hopes for computational modeling first reached beyond a tiny coterie.

Collaborative Theorem Proving

Source: DeGruyter.com, 2015

Another project of distributed cooperative proving was initiated in 2009 by Timothy Gowers [G], a Fields Medal-winning Cambridge mathematician. He called on the community in his blog to find a new, more intelligible solution of a special case of the density Hales-Jewett theorem [DHJ] (Hales and Jewett 1963).

This marks the beginning of a proof-event, i.e. the system starts from the state ⟨G, DHJ⟩. Each agent Ai who joined the system used the comment function of Gowers's blog to communicate insights, ideas, approaches, pieces of proof, etc. (whenever acting as prover), or to comment on, correct, or reject ideas proposed by other agents (whenever acting as interpreter). Each attempt has the features of a brainstorming session and is by itself a proof-event in a sequence of proof-events. The evolution of the sequence in time is represented in the "history" kept by the medium (the blog). The attempts (proof-events) developed in time along two main sequences of proof-events in the respective blogs of Gowers and Terence Tao, another winner of the Fields Medal.
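The proof-event model described here can be sketched as a simple data structure: a time-ordered "history" of events, each holding contributions by agents acting as prover or interpreter. All names below (Contribution, ProofEvent, the role strings) are illustrative assumptions, not the paper's notation:

```python
from dataclasses import dataclass, field

@dataclass
class Contribution:
    agent: str   # e.g. "A1"
    role: str    # "prover" proposes ideas; "interpreter" comments, corrects, rejects
    text: str

@dataclass
class ProofEvent:
    contributions: list = field(default_factory=list)

# The blog's "history": a sequence of proof-events evolving in time.
history = []

event = ProofEvent()
event.contributions.append(Contribution("A1", "prover", "try a density-increment approach"))
event.contributions.append(Contribution("A2", "interpreter", "the increment step needs a bound"))
history.append(event)

# Agents who acted as provers anywhere in the history so far.
provers = [c.agent for e in history for c in e.contributions if c.role == "prover"]
```

The same agent may appear in both roles across events, which matches the description above: roles attach to contributions, not to agents.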

Both sequences produced outputs that were finally evaluated as actual proofs of the problem and were published under the pseudonym "Polymath" (2009, 2010a, 2010b), which denotes a collective author.

The Polymath project was the first real experiment in collective Web-based proving carried out by agents with varied knowledge backgrounds and expertise, who worked as a goal-directed system. A blog was used to create an interest-based community of agents whose exchange and cross-fertilization of ideas amplified collective creative thinking and led to a fast solution of the problem.

Related Resource: Polymath Timeline, Apr 2009


Amazon: Taking Big Bets for Outsized Returns

Source: SEC database, Apr 2016

Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten.
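The arithmetic in the quote can be checked with a quick simulation: a 10% chance of a 100x payoff returns ten times the stake on average, even though nine bets in ten lose everything. This is a hypothetical illustration of the expected-value claim, not anything from the source:

```python
import random

random.seed(0)  # deterministic for reproducibility
p_win, payoff, trials = 0.10, 100.0, 100_000

# Each bet stakes one unit: win pays 100x, loss pays nothing.
outcomes = [payoff if random.random() < p_win else 0.0 for _ in range(trials)]

avg_return = sum(outcomes) / trials       # ~ p_win * payoff = 10x the stake
loss_rate = outcomes.count(0.0) / trials  # ~ 0.9: wrong nine times out of ten
print(f"average return: {avg_return:.1f}x, losing bets: {loss_rate:.0%}")
```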