August 22, 2023

Bayesian Ordinal Regression for Wine Data

A while ago I wanted to explore my career options, so I did a bit of interviewing with various companies. In one of the technical interviews, I was tasked with analysing a dataset and building a predictive model. Noticing that the target variable was ordinal, I decided to build an ordinal regression model using a Bayesian approach. Now, I’m guessing ordinal regression and Bayesian methods aren’t that well known, because the interviewers were completely unfamiliar with them and somewhat sceptical. Read more
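
To give a flavour of the approach: a Bayesian ordinal regression can be written down in a few lines. Here is a minimal sketch using PyMC with made-up stand-in data; it illustrates the technique, not the actual model from the post.

```python
import numpy as np
import pymc as pm

# Made-up stand-in data: three features, ordinal labels 0..4
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 5, size=100)
K = 5  # number of ordinal categories

with pm.Model():
    beta = pm.Normal("beta", 0.0, 1.0, shape=X.shape[1])
    # K - 1 ordered cutpoints partition the latent scale into K categories
    cutpoints = pm.Normal(
        "cutpoints", mu=0.0, sigma=5.0, shape=K - 1,
        transform=pm.distributions.transforms.ordered,
        initval=np.linspace(-2.0, 2.0, K - 1),
    )
    eta = pm.math.dot(X, beta)
    pm.OrderedLogistic("y_obs", eta=eta, cutpoints=cutpoints, observed=y)
    idata = pm.sample()
```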

May 22, 2023

A Bayesian beer tasting

My mates and I are big fans of craft beer, and from time to time we organise our own beer tastings. Each participant gives a single score between 1 and 5 (half-point increments are allowed) to each beer, and afterwards we compare scores to see which beer came out on top. Sour beers and imperial stouts are some of our favourite styles, but this time we decided to do something different: a blind tasting with the cheapest lager we could find. Read more

November 9, 2022

Split-Apply-Combine Is More Flexible Than You Think

The data frame is the workhorse of anything related to data and is an incredibly flexible tool. It can be used for almost everything: organising, filtering, transforming, plotting, etc. In this article we are going to see that it is probably even more flexible than people think! Good old data frames: when you think of data frames you might be thinking of something square with numbers in it. Perhaps something like… Read more
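
As a quick illustration of the split-apply-combine pattern the title refers to, here is a made-up pandas example (not the one from the post):

```python
import pandas as pd

# A small made-up data frame of beer scores
df = pd.DataFrame({
    "style": ["lager", "stout", "lager", "sour", "stout"],
    "score": [2.5, 4.0, 3.0, 4.5, 3.5],
})

# Split by style, apply a mean to each group, combine into a new frame
summary = df.groupby("style")["score"].mean().reset_index()
print(summary)
```

The same pattern extends well beyond numeric summaries, which is where the flexibility comes in.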

October 13, 2022

When Neural ODEs fail

Over the years I have received a lot of emails in response to my post about neural ODEs, in which people ask for advice on a particular pitfall when applying neural ODEs to regression-style problems. So here is a (long overdue) blog post to address that! Code can be found here. The first lesson of machine learning: if you are a machine learning practitioner, I’m sure you’ve been told to “start simple”. Read more
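
For context, a neural ODE parameterises the right-hand side of an ODE with a neural network and integrates it with a solver. A minimal PyTorch sketch, assuming the torchdiffeq package (illustrative only, not the code linked from the post):

```python
import torch
from torchdiffeq import odeint  # assumes the torchdiffeq package is installed

class ODEFunc(torch.nn.Module):
    """A small network standing in for the unknown dynamics dy/dt = f(t, y)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
        )

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
y0 = torch.tensor([[0.5]])        # initial state
t = torch.linspace(0.0, 1.0, 20)  # time points to evaluate at
trajectory = odeint(func, y0, t)  # integrate the learned dynamics
```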

May 9, 2021

VAEs as a framework for probabilistic inference

VAEs frequently get compared to GANs, and then dismissed because “GANs produce better samples”. While this might be true for specific VAEs, I think this sells VAEs short. Do I claim that VAEs generate better samples of imaginary celebrities? No (but they are also pretty good). What I mean is that they are qualitatively different and much more general than people give them credit for. In this article we are going to consider VAEs as a family of latent variable models and discover that they offer a unified black-box inference framework for probabilistic modelling. Read more
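
The unifying idea is the evidence lower bound (ELBO) that VAEs maximise. For a latent variable model $p(x, z) = p(x \mid z)\,p(z)$ and an approximate posterior $q(z \mid x)$, the standard bound is

$$\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\left[\log p(x \mid z)\right] - \mathrm{KL}\left(q(z \mid x) \,\|\, p(z)\right),$$

and any model whose joint distribution factorises this way can reuse the same machinery, which is what makes the framework black-box.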

© Sebastian Callh 2020