Reframing the "AI Effect"

(This piece was originally posted on Medium) There’s a phenomenon known as the AI effect, whereby as soon as Artificial Intelligence (AI) researchers achieve a milestone long thought to signify the achievement of true artificial intelligence, e.g., beating a human at chess, it suddenly gets downgraded to not true AI. Kevin Kelly wrote in a Wired article in October 2014: “In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy!” Read On →

Why Machine Learning is not a path to the Singularity

(This piece was originally posted on Medium) All of the advances in Artificial Intelligence that you ever hear about — whether it’s machines beating humans at Go, great advances in Machine Translation, self-driving cars, or anything else — are examples of weak AI. They are each focused on a single, narrowly defined task. Some would have you believe, even fear, that these advances will inevitably lead to strong AI (human-level intelligence), which in turn will lead to a Superintelligence we’ll no longer control. Read On →

Thoughts on data-driven language learning

I used to be a language pedant. I would bemoan the use of the word “presently” to mean “currently”, shudder at “between you and I”, gasp at the use of “literally” to mean… “not literally” (“I literally peed my pants laughing.” “Orly?”) I would get particularly exasperated when I heard people use phrases that were clearly (to me) nonsensical but that sounded almost correct. A classic example of this is when people say “The reason being is…” or start a sentence with “As such, …” when the word “such” does not refer to anything. Read On →

Gaussian Processes for Dummies

Source: The Kernel Cookbook by David Duvenaud It always amazes me how I can hear a statement uttered in the space of a few seconds about some aspect of machine learning that then takes me countless hours to understand. I first heard about Gaussian Processes on an episode of the Talking Machines podcast and thought it sounded like a really neat idea. I promptly procured myself a copy of the classic text on the subject, Gaussian Processes for Machine Learning by Rasmussen and Williams, but my tenuous grasp on the Bayesian approach to machine learning meant I got stumped pretty quickly. Read On →

From both sides now: the math of linear regression

Linear regression is the most basic and the most widely used technique in machine learning; yet for all its simplicity, studying it can unlock some of the most important concepts in statistics. If you have a basic understanding of linear regression expressed as $\hat{Y} = \theta_0 + \theta_1 X$, but don’t have a background in statistics and find statements like “ridge regression is equivalent to the maximum a posteriori (MAP) estimate with a zero-mean Gaussian prior” bewildering, then this post is for you. Read On →
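The two estimates mentioned above can be sketched in a few lines. The snippet below (my own illustration with synthetic data, not code from the post) computes the closed-form ordinary least squares solution and the ridge solution, where the ridge penalty term plays the role of the zero-mean Gaussian prior:

```python
# Minimal sketch: OLS vs. ridge regression via their closed-form solutions.
# The data, true parameters, and lambda value here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # design matrix with intercept column
theta_true = np.array([2.0, 3.0])                        # "true" theta_0 and theta_1
y = X @ theta_true + rng.normal(scale=0.5, size=50)      # noisy observations

# OLS: theta = (X^T X)^{-1} X^T y
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: theta = (X^T X + lambda * I)^{-1} X^T y
# -- the MAP estimate under a zero-mean Gaussian prior on theta
lam = 1.0
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
```

Note how ridge differs from OLS only by the `lam * np.eye(...)` term, which shrinks the coefficients toward zero, exactly what a zero-mean Gaussian prior does to the MAP estimate.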

AI and Music

I originally wrote the post below in May 2010 as a guest blogger on a now-defunct blog called high-c.com. I’m re-posting it here because I recently had cause to dig it up and was pleasantly surprised at how well it still reflects my views on this topic. The only change I made is the recording of Bach’s Chaconne that I link to… On one stave, for a small instrument, the man writes a whole world of the deepest thoughts and most powerful feelings. Read On →

Tay and the Dangers of Artificial Stupidity

This is an imagined conversation between Tay, Microsoft’s AI chatbot, and me. Tay was let loose on Twitter a couple of weeks ago to pretty disastrous effect. It was trained by a bunch of racists to say racist things. It could just as easily have been trained by a bunch of sexists to say sexist things, hence my imagined conversation above. The conversation is completely unrealistic though - I would never take career advice from an AI! Read On →

Matrix Factorization with TensorFlow

I’ve been working on building a content recommender in TensorFlow using matrix factorization, following the approach described in the article Matrix Factorization Techniques for Recommender Systems (MFTRS). I haven’t come across any discussion of this particular use case in TensorFlow but it seems like an ideal job for it. I’ll explain briefly here what matrix factorization is in the context of recommender systems (although I highly cough recommend reading the MFTRS article) and how things needed to be set up to do this in TensorFlow. Read On →
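For readers unfamiliar with the technique, here is a rough sketch of matrix factorization for recommendations in plain NumPy, with a made-up ratings matrix; this is my own illustration, not the TensorFlow setup the post describes:

```python
# Sketch of matrix factorization for recommender systems:
# approximate a sparse ratings matrix R as the product of two low-rank
# factor matrices U (users) and V (items), fit by gradient descent on
# the observed entries only. All values here are illustrative.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)   # 0 = unobserved rating
mask = R > 0

k, lr, reg = 2, 0.01, 0.02                  # latent dims, learning rate, regularization
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(5000):
    E = mask * (R - U @ V.T)          # error on observed entries only
    U += lr * (E @ V - reg * U)       # gradient step on user factors
    V += lr * (E.T @ U - reg * V)     # gradient step on item factors

pred = U @ V.T   # predicted ratings, including the previously missing entries
```

The filled-in zeros of `pred` are the recommendations: each missing rating is predicted from the learned user and item factors.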

My Favourite Data Science Podcasts

There’s a pretty impressive selection of high-quality podcasts out there these days on topics in data science. Here are four that I am really enjoying right now, along with my take on what is good about each of them. Not So Standard Deviations NSSD podcast on SoundCloud Roger and Hilary, two very smart people with PhDs in biostatistics (one still in academia, the other working as a data scientist for Etsy), sure do ramble on, but the ramblings are great :) They cover all sorts of topics, always at least loosely related to data science, and my favourite things about the podcast are 1. Read On →

Machine Learning talk

Here are the “slides” from a talk I gave on machine learning last week. The idea is to give an overview of the different topics and how they fit together. I may end up building on it as I learn about more facets of ML. In case you’re wondering, the slides were created using Hovercraft, a Python tool for creating impress.js slides while authoring them in reStructuredText instead of HTML.
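To give a flavour of the workflow, a hypothetical minimal Hovercraft source file might look something like this (the slide content is invented, and the field names and `----` slide separator reflect my understanding of Hovercraft’s format; check the Hovercraft documentation for the exact syntax):

```rst
:title: Machine Learning Overview

Supervised Learning
===================

Regression, classification, and friends.

----

:data-x: 1600

Unsupervised Learning
=====================

Clustering and dimensionality reduction.
```

Each slide is plain reStructuredText, and the `:data-x:`-style fields control where impress.js positions it on the canvas.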