May 22, 2023

A Bayesian beer tasting

My mates and I are big fans of craft beer, and from time to time we organise our own beer tastings. Each participant gives each beer a single score between 1 and 5 (half-point increments are allowed), and afterwards we compare scores to see which beer came out on top. Sour beers and imperial stouts are some of our favourite styles, but this time we decided to do something different: a blind tasting with the cheapest lager we could find. Read more
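The post works through the full analysis, but as a taste of the idea, here is a minimal sketch of comparing two beers' latent mean scores. The scores and the conjugate Normal-Normal model are illustrative assumptions, not the post's actual data or choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores (1-5 in half-point steps) from five tasters; made up.
scores = {
    "Lager A": np.array([2.5, 3.0, 2.0, 3.5, 2.5]),
    "Lager B": np.array([3.0, 3.5, 4.0, 3.0, 3.5]),
}

# Assumed model: Normal likelihood with known noise sd (an approximation,
# since scores are discrete) and a Normal(3, 1) prior on each beer's mean.
prior_mu, prior_sd, noise_sd = 3.0, 1.0, 0.75

def posterior_mean_samples(x, n=10_000):
    """Draw samples from the Normal-Normal conjugate posterior over the mean."""
    prec = 1 / prior_sd**2 + len(x) / noise_sd**2
    mu = (prior_mu / prior_sd**2 + x.sum() / noise_sd**2) / prec
    return rng.normal(mu, np.sqrt(1 / prec), size=n)

post = {beer: posterior_mean_samples(x) for beer, x in scores.items()}
print(f"P(B beats A) ≈ {(post['Lager B'] > post['Lager A']).mean():.2f}")
```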

November 9, 2022

Split-Apply-Combine Is More Flexible Than You Think

The data frame is the workhorse of anything related to data and is an incredibly flexible tool. It can be used for almost everything: organising, filtering, transforming, plotting, etc. In this article we are going to see that it is probably even more flexible than people think! Good old data frames. When you think of data frames you might be thinking of something square with numbers in it. Perhaps something like Read more
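As a refresher before the post's examples, here is the classic split-apply-combine pattern sketched in Python with pandas (the post itself may use a different language, and the data here is purely illustrative):

```python
import pandas as pd

# Hypothetical tasting data; made up for illustration.
df = pd.DataFrame({
    "style": ["sour", "stout", "sour", "stout", "lager"],
    "score": [4.0, 4.5, 3.5, 5.0, 2.5],
})

# Split the rows by style, apply an aggregation to each group,
# and combine the results into a new data frame.
summary = df.groupby("style")["score"].agg(["mean", "count"])
print(summary)
```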

October 13, 2022

When Neural ODEs fail

Over the years I have received a lot of emails in response to my post about neural ODEs, where people ask for advice on a particular pitfall when applying neural ODEs to regression-style problems. So here is a (long overdue) blog post to address that! Code can be found here. The first lesson of machine learning. If you are a machine learning practitioner I’m sure you’ve been told to “start simple”. Read more
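To make "regression-style problems" concrete, here is a minimal sketch (not the post's code) of fitting a neural ODE to one-dimensional observations, assuming the torchdiffeq package and a toy decaying target:

```python
import torch
from torchdiffeq import odeint  # assumes the torchdiffeq package is installed

class ODEFunc(torch.nn.Module):
    """Learned vector field f(t, y) parameterising dy/dt."""
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim),
        )

    def forward(self, t, y):
        return self.net(y)

# Toy regression target: noiseless observations of a decaying process.
t = torch.linspace(0, 5, 50)
y_true = torch.exp(-0.5 * t).unsqueeze(-1)

func, y0 = ODEFunc(), y_true[0]
opt = torch.optim.Adam(func.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    y_pred = odeint(func, y0, t)  # integrate the learned dynamics from y0
    loss = torch.mean((y_pred - y_true) ** 2)
    loss.backward()
    opt.step()
```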

May 9, 2021

VAEs as a framework for probabilistic inference

VAEs frequently get compared to GANs, and then dismissed because “GANs produce better samples”. While this might be true for specific VAEs, I think it sells VAEs short. Do I claim that VAEs generate better samples of imaginary celebrities? No (but they are also pretty good). What I mean is that they are qualitatively different and much more general than people give them credit for. In this article we are going to consider VAEs as a family of latent variable models and discover that they offer a unified black-box inference framework for probabilistic modelling. Read more
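To make the latent-variable-model framing concrete, here is a minimal Gaussian-encoder VAE sketch in PyTorch. The dimensions and the Bernoulli likelihood are assumptions for illustration, not the post's model:

```python
import torch

class VAE(torch.nn.Module):
    """Minimal VAE: amortised inference network plus generative network."""
    def __init__(self, x_dim=784, z_dim=8, hidden=128):
        super().__init__()
        self.enc = torch.nn.Sequential(
            torch.nn.Linear(x_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2 * z_dim),  # mean and log-var of q(z|x)
        )
        self.dec = torch.nn.Sequential(
            torch.nn.Linear(z_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, x_dim),  # logits of p(x|z)
        )

    def elbo(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        # Single-sample estimate of the reconstruction term E_q[log p(x|z)].
        log_px = -torch.nn.functional.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction="none").sum(-1)
        # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
        kl = 0.5 * (mu**2 + logvar.exp() - 1 - logvar).sum(-1)
        return (log_px - kl).mean()
```

Training then just maximises the ELBO by gradient ascent on a batch of inputs in [0, 1], e.g. `loss = -model.elbo(x)`.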

February 2, 2021

Time series forecasting with Spectral Mixture Kernels

Time series modelling is a fundamental yet difficult problem. Forecasting in particular is incredibly challenging and requires strong inductive biases to give good predictions. One powerful framework for encoding inductive biases is kernel functions used with Gaussian Processes (GPs); however, kernels require manual work to embed domain knowledge, which might not always be desirable. One might ask if we can learn kernel structures directly from the data, and indeed the answer is yes! Read more
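For the curious, an exact GP with a Spectral Mixture kernel looks roughly like this in GPyTorch (a sketch with toy data; the post may set things up differently):

```python
import torch
import gpytorch

class SMKernelGP(gpytorch.models.ExactGP):
    """Exact GP whose covariance is a learned Spectral Mixture kernel."""
    def __init__(self, train_x, train_y, likelihood, num_mixtures=4):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.SpectralMixtureKernel(
            num_mixtures=num_mixtures)
        # Heuristic initialisation of the mixture parameters from the data.
        self.covar_module.initialize_from_data(train_x, train_y)

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Toy periodic series; purely illustrative.
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 12) + 0.1 * torch.randn(100)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = SMKernelGP(train_x, train_y, likelihood)
# Kernel hyperparameters are then fit by maximising the marginal
# log-likelihood, as with any exact GP in GPyTorch.
```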

© Sebastian Callh 2020