The Wrong Classroom

This is my response to Rich Sutton’s The Bitter Lesson, Rodney Brooks’ A Better Lesson, and Andy Kitchen’s A Meta Lesson. The bitter lesson from Sutton was that we should stop trying to incorporate human knowledge and human ways of thinking into our AI systems, because these approaches always lose out to those using massive-scale search and learning. Examples provided include chess, Go and computer vision. Brooks’ better lesson was that the human ingenuity required to get search- and learning-based approaches to succeed at all gives the lie to the idea that we are ever talking about pure search and learning when looking at these examples of success. Read On →

AI and the Future of Work

Disclaimer: the views expressed in this post are my own and do not necessarily reflect the view of my employer, Accenture. I participated in a debate on Artificial Intelligence and the future of work last month. The motion was that “A.I. will create more jobs than it destroys” and my teammate, Dr Augustin Chevez, and I were arguing in favour of the motion. I’m happy to report that we won the debate, which was decided based on the number of audience members we managed to persuade. Read On →

Can Machine Learning Answer Your Question?

(This piece was originally posted on Medium) This post aims to make it easier for stakeholders looking to enhance processes with Machine Learning capabilities to formulate their question as a Machine Learning question, in order to get the conversation with data scientists off to the right start. More and more business functions are looking to Machine Learning (ML) to solve problems. Sometimes the motivation can be questionable: “We should figure out a way to use ML for this because every business these days should be using ML,” or “I want to use TensorFlow to solve this problem because TensorFlow is cool.” Read On →

You Don’t Know What A.I. Is (an afternoon with Elon Musk)

Inspired by the Raymond Carver flavor of this post by Tom Davenport on Artificial Intelligence, here’s my take on Elon Musk’s warnings to a room full of governors about A.I. and the future of humanity. It is based on Raymond Carver’s You Don’t Know What Love Is (an evening with Charles Bukowski). You don't know what A.I. is Musk said I'm 46 years old look at me There isn't one of you in this room would recognize A.I. Read On →

Was AlphaGo's Move 37 Inevitable?

This question is interesting to me both because of the way this particular move was reported on at the time, and because it works as a starting point for me to understand the inner workings of AlphaGo. I’m talking, of course, about the AI that beat the human Go champion, Lee Sedol, last March. The famous “move 37” happened in the second game of the 5-game match, and was described by commentators, once they got over their initial shock, with words like “beautiful” and “creative”. Read On →

Put away your Machine Learning hammer, criminality is not a nail

(This piece was originally posted on Medium) Earlier this month, researchers claimed to have found evidence that criminality can be predicted from facial features. In “Automated Inference on Criminality using Face Images,” Xiaolin Wu and Xi Zhang describe how they trained classifiers using various machine learning techniques that were able to distinguish photos of criminals from photos of non-criminals with a high level of accuracy. The result these researchers found can be interpreted differently depending on what assumptions you bring to interpreting it, and what question you’re interested in answering. Read On →

Reframing the "AI Effect"

(This piece was originally posted on Medium) There’s a phenomenon known as the AI effect, whereby as soon as Artificial Intelligence (AI) researchers achieve a milestone long thought to signify the achievement of true artificial intelligence, e.g., beating a human at chess, it suddenly gets downgraded to not true AI. Kevin Kelly wrote in a Wired article in October 2014: In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy! Read On →

Why Machine Learning is not a path to the Singularity

(This piece was originally posted on Medium) All of the advances in Artificial Intelligence that you ever hear about — whether it’s machines beating humans at Go, great advances in Machine Translation, self-driving cars, or anything else — are examples of weak AI. They are each focused on a single, narrowly defined task. Some would have you believe, even fear, that these advances will inevitably lead to strong AI (human-level intelligence), which in turn will lead to a Superintelligence we’ll no longer control. Read On →

Thoughts on data-driven language learning

I used to be a language pedant. I would bemoan the use of the word “presently” to mean “currently”, shudder at “between you and I”, gasp at the use of “literally” to mean… “not literally” (“I literally peed my pants laughing.” “Orly?”) I would get particularly exasperated when I heard people use phrases that were clearly (to me) nonsensical but that sounded almost correct. A classic example of this is when people say “The reason being is…” or start a sentence with “As such, …” when the word “such” does not refer to anything. Read On →

Gaussian Processes for Dummies

(Image source: The Kernel Cookbook by David Duvenaud)

It always amazes me how I can hear a statement uttered in the space of a few seconds about some aspect of machine learning that then takes me countless hours to understand. I first heard about Gaussian Processes on an episode of the Talking Machines podcast and thought it sounded like a really neat idea. I promptly procured myself a copy of the classic text on the subject, Gaussian Processes for Machine Learning by Rasmussen and Williams, but my tenuous grasp on the Bayesian approach to machine learning meant I got stumped pretty quickly. Read On →
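Since this post introduces Gaussian Processes, here is a minimal, hypothetical sketch (not taken from the post itself) of the core idea: drawing sample functions from a zero-mean GP prior defined by a squared-exponential (RBF) kernel, of the kind catalogued in the Kernel Cookbook. The grid, length scale, and variance values are arbitrary choices for illustration.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D sets of points."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / length_scale**2)

# Evaluate the kernel on a grid to get the GP's covariance matrix.
x = np.linspace(-5, 5, 100)
K = rbf_kernel(x, x)

# A tiny jitter on the diagonal keeps the matrix numerically positive definite.
jitter = 1e-8 * np.eye(len(x))

# Each draw from this multivariate normal is one sample function from the GP prior.
samples = np.random.multivariate_normal(np.zeros(len(x)), K + jitter, size=3)
print(samples.shape)  # (3, 100): three sampled functions over 100 grid points
```

Plotting each row of `samples` against `x` gives the smooth random curves typically shown in GP tutorials; changing `length_scale` controls how wiggly those curves are.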