Jun 2, 2016
Linear regression is the most basic and the most widely used technique in machine learning; yet for all its simplicity, studying it can unlock some of the most important concepts in statistics.
If you have a basic understanding of linear regression expressed as $\hat{Y} = \theta_0 + \theta_1 X$, but don’t have a background in statistics and find statements like “ridge regression is equivalent to the maximum a posteriori (MAP) estimate with a zero-mean Gaussian prior” bewildering, then this post is for you.
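As a taste of where the post ends up, here is the standard version of that equivalence (a sketch in my own notation, not necessarily the post’s, with $\lambda$ the ridge penalty, $\sigma^2$ the noise variance and $\tau^2$ the prior variance). Ridge regression minimises

$$\sum_{i=1}^{n}\left(y_i - \theta^T x_i\right)^2 + \lambda \lVert\theta\rVert_2^2,$$

while the MAP estimate under Gaussian noise $y_i \sim \mathcal{N}(\theta^T x_i, \sigma^2)$ and a zero-mean Gaussian prior $\theta \sim \mathcal{N}(0, \tau^2 I)$ maximises

$$\log p(\theta \mid y) = -\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \theta^T x_i\right)^2 - \frac{1}{2\tau^2}\lVert\theta\rVert_2^2 + \text{const},$$

which is the same optimisation problem with $\lambda = \sigma^2 / \tau^2$.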
Read On →
May 6, 2016
I originally wrote the post below in May 2010 as a guest blogger on a now-defunct blog called high-c.com. I’m re-posting it here because I recently had cause to dig it up and was pleasantly surprised at how well it still reflects my views on this topic. The only change I made is the recording of Bach’s Chaconne that I link to…
On one stave, for a small instrument, the man writes a whole world of the deepest thoughts and most powerful feelings.
Read On →
Apr 12, 2016
This is an imagined conversation between Tay, Microsoft’s AI chatbot, and me. Tay was let loose on Twitter a couple of weeks ago to pretty disastrous effect. It was trained by a bunch of racists to say racist things; it could just as easily have been trained by a bunch of sexists to say sexist things, hence my imagined conversation above. The conversation is completely unrealistic, though: I would never take career advice from an AI!
Read On →
Mar 11, 2016
I’ve been working on building a content recommender in TensorFlow using matrix factorization, following the approach described in the article Matrix Factorization Techniques for Recommender Systems (MFTRS). I haven’t come across any discussion of this particular use case in TensorFlow, but it seems like an ideal job for it. I’ll explain briefly here what matrix factorization is in the context of recommender systems (although I highly *cough* recommend reading the MFTRS article) and how things needed to be set up to do this in TensorFlow.
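To make the setup concrete, here is a minimal sketch of the MFTRS-style objective in TensorFlow 1.x (toy data and made-up dimensions, not the code from the post): factor a ratings matrix $R \approx UV^T$ and minimise squared error over the observed entries only, with L2 regularisation on the factors.

```python
# A minimal sketch (TensorFlow 1.x style) of matrix factorization for
# recommendations: learn user factors U and item factors V such that
# U @ V^T approximates the ratings we have actually observed.
import numpy as np
import tensorflow as tf

n_users, n_items, k = 100, 50, 10
R = np.random.rand(n_users, n_items).astype(np.float32)             # toy ratings
mask = (np.random.rand(n_users, n_items) > 0.8).astype(np.float32)  # observed entries

U = tf.Variable(tf.random_normal([n_users, k], stddev=0.1))  # user factors
V = tf.Variable(tf.random_normal([n_items, k], stddev=0.1))  # item factors
R_hat = tf.matmul(U, V, transpose_b=True)                    # predicted ratings

lam = 0.01  # regularisation strength, as in the MFTRS objective
loss = (tf.reduce_sum(mask * tf.square(R - R_hat))
        + lam * (tf.reduce_sum(tf.square(U)) + tf.reduce_sum(tf.square(V))))

train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train)
```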
Read On →
Feb 6, 2016
There’s a pretty impressive selection of high-quality podcasts out there these days on topics in data science. Here are four that I am really enjoying right now, along with my take on what is good about each of them.
Not So Standard Deviations (NSSD podcast on SoundCloud). Roger and Hilary are two very smart people with PhDs in biostatistics, one still in academia and the other working as a data scientist for Etsy. They sure do ramble on, but the ramblings are great :) They cover all sorts of topics, always at least loosely related to data science, and my favourite things about the podcast are 1.
Read On →
Jan 26, 2016
Here are the “slides” from a talk I gave on machine learning last week. The idea is to give an overview of the different topics and how they fit together. I may end up building on it as I learn about more facets of ML.
In case you’re wondering, the slides were created using Hovercraft which is a python tool for creating impress.js slides but authoring them in reStructuredText instead of HTML.
Read On →
Dec 21, 2015
This documents my efforts to learn both neural networks and, to a certain extent, the Python programming language. I say “to a certain extent” because, far from feeling all “yay! I know Python now!”, I feel more like “I can use Python 2.7 in certain ways to do certain things… yay?”
And what of my understanding of neural nets as a result of this exercise? After battling with my naïve implementation of a multi-layer perceptron as described below, I felt I had a pretty visceral understanding of them.
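For a flavour of what such a naïve implementation involves (a rough NumPy sketch, not the post’s actual code, which was written for Python 2.7), one hidden layer with sigmoid activations and hand-rolled backprop looks something like this:

```python
# A rough sketch of a naive multi-layer perceptron (one hidden layer,
# no biases): forward pass with sigmoid activations, squared-error
# loss, and backpropagation written out by hand.
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 2)                                   # toy inputs
y = (X.sum(axis=1) > 1).astype(float).reshape(-1, 1)   # toy targets

W1 = rng.randn(2, 8) * 0.1    # input -> hidden weights
W2 = rng.randn(8, 1) * 0.1    # hidden -> output weights
lr = 0.5                      # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X.dot(W1))      # hidden activations
    out = sigmoid(h.dot(W2))    # network output
    # Gradients of the squared error, chained back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out.dot(W2.T) * h * (1 - h)
    W2 -= lr * h.T.dot(d_out) / len(X)
    W1 -= lr * X.T.dot(d_h) / len(X)
```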
Read On →
Oct 15, 2015
I wrote some code for doing a Welch’s t-test in Go. You can read up on what a Welch’s t-test is here, but in short it’s a significance test for the difference between two treatments (like in an A/B test) where the distributions may have unequal variances.
[ASCII plot from the original post: sample points from two treatments scattered along a number line, with the two treatment means marked.]

So if you are doing an A/B test and you have the mean and variance of each treatment, you can get a confidence measure for whether the mean of one is truly higher than the mean of the other.
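For reference, the statistic itself is standard (this is the textbook formula, not a transcription of the Go code):

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}},$$

with the Welch–Satterthwaite approximation for the degrees of freedom:

$$\nu \approx \frac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}{\frac{\left(s_1^2/n_1\right)^2}{n_1 - 1} + \frac{\left(s_2^2/n_2\right)^2}{n_2 - 1}}.$$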
Read On →