When I started working as a Data Scientist nearly ten years ago, the data science team I joined did something I found really strange at first: They had a single GitHub repo where they put all their “throwaway” code. An R script to produce some plots for a presentation, a Python notebook with a machine learning proof-of-concept, a bash script for cleaning some logs. It all went into the same repo. Initially, this felt sloppy to me, and sure, there are better ways to organize code, but I’ve come to learn that not having a single place for throwaway code in a team is far worse. Without a place for throwaway code, what’s going to happen is:
Some ambitious person on the team will create a new GitHub repo for every single analysis/POC/thing they do, “swamping” the GitHub namespace.
Some others will stow their code on the company wiki or drop it in the team Slack channel.
But most people aren’t going to put it anywhere, and we all know that code “available on request” often isn’t available at all.
So, in all teams I’ve worked in, I’ve set up a GitHub repo that looks something like this:
If you’ve ever looked at a Makefile in a Python or R repository, chances are it contained a collection of useful shell commands (make test -> runs all the unit tests, make lint -> runs automatic formatting and linting, etc.). That’s a perfectly good use of make, and if that’s what you’re after then
here’s a good guide for how to set that up. However, make was originally made to run shell commands that might depend on other commands having been run first, in the right order. In
1976, when Stuart Feldman created make, those shell commands were compiling C programs, but nothing is stopping you from using make to set up simple data pipelines instead. And there are a couple of good reasons why you would want to use make for this purpose:
make is everywhere. Well, maybe not on Windows (but it’s
easy to install), but on Linux and macOS make comes installed out of the box.
make allows you to define pipelines that have multiple steps and complex dependencies (this needs to run before that, but after this, etc.), and it figures out which steps need to be rerun and executes them in the correct order.
make is language agnostic and lets you build pipelines that mix Python code, Jupyter notebooks, R code, shell scripts, etc. (see the sketch right after this list).
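To make these reasons concrete, here is a minimal sketch of what such a Makefile could look like. The file and script names are made up for illustration, and this is a sketch rather than the template from the post (note that recipe lines must be indented with a tab):

```make
# Final target: the plot that goes into the report
all: plots/report.png

# Download the raw data with a shell script
data/raw_logs.csv: scripts/download_logs.sh
	mkdir -p data
	./scripts/download_logs.sh > data/raw_logs.csv

# Clean the data in Python; reruns only if the raw data or the script changed
data/clean_logs.csv: data/raw_logs.csv scripts/clean_logs.py
	python scripts/clean_logs.py data/raw_logs.csv data/clean_logs.csv

# Produce the plot in R
plots/report.png: data/clean_logs.csv scripts/plot_logs.R
	mkdir -p plots
	Rscript scripts/plot_logs.R data/clean_logs.csv plots/report.png

.PHONY: all
```

Running make builds plots/report.png, and because each target declares its dependencies, make only reruns the steps whose inputs have changed.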
Here I’ll give you a handy template and some tips for building a data pipeline using Python and make. But first, let’s look at an example data pipeline without make.
The first thing I thought when I tried all the cool tools of the Year of the AI Revolution (aka 2022) was: OMG this is amazing, it’s the AI future that I never thought I would see. The second thing I thought was: OMG this is going to be used to spam the internet with so much bland auto-generated content.
I hate bland auto-generated content as much as the next person, but I was tempted by the forbidden fruit, I irresponsibly took a bite, and two short R scripts and a weekend later I’m now the not-so-proud owner of
officialcocktails.com: A completely auto-generated website with recipes, descriptions, tips, images, etc., covering all the official International Bartenders Association cocktails.
Here’s the quick recipe for how I whipped this up.
Yesterday I put up
a post where I described how I scraped the International Bartenders Association (IBA) cocktails into CSV and JSON format.
Timothy Wolodzko had a reasonable question regarding this on
Mastodon:
Two reasons:
Data that’s not sitting in a CSV file makes me a bit nervous.
With data snugly in a CSV file, there are so many things you can do with it! 😁
I find it fascinating that the International Bartenders Association (IBA) keeps a list of “official” cocktails. Like, it’s not like the
World Association of Chefs’ Societies
keeps a list of official dishes. Yet the IBA keeps a list of official cocktails and keeps it up to date (!), too. For example, I have sad news for all you vodka and orange juice fans out there: As of 2020
the Screwdriver is not an official cocktail anymore.
While a list of official cocktails is a bit silly, it’s also a nice dataset that I’ve now scraped and put into an
iba-cocktails repo. This includes all the International Bartenders Association (IBA) Official Cocktails in CSV and JSON format as of 2023, from two different sources:
The IBA website and
Wikipedia’s list of IBA cocktails. My take on the difference between these sources is that the IBA website is more “official” (it’s their list, after all), but the Wikipedia recipes are easier to follow.
It’s March 2023 and right now
ChatGPT, the amazing AI chatbot tool from OpenAI, is all the rage. But when OpenAI released their public web API for ChatGPT on the 1st of March, you might have been a bit disappointed. If you’re an R user, that is. Because, when scrolling through
the release announcement, you find that there is a Python package to use this new API, but no R package.
I’m here to say: Don’t be disappointed! As long as there is a web API for a service, it’s going to be easy to use that service from R, no specialized package needed. So here’s an example of how to use the new (as of March 2023) ChatGPT API from R. But know that when the next AI API hotness comes out (likely April 2023, or so), it’s going to be easy to interface with that from R as well.
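For instance, here is a minimal sketch of a ChatGPT call using the httr package. It assumes your API key is stored in the OPENAI_API_KEY environment variable, and it’s a sketch rather than necessarily the exact code from the post:

```r
library(httr)

response <- POST(
  "https://api.openai.com/v1/chat/completions",
  add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
  content_type_json(),
  encode = "json",
  body = list(
    model = "gpt-3.5-turbo",
    messages = list(
      list(role = "user", content = "Tell me a fun fact about the R language.")
    )
  )
)

# Dig the reply text out of the parsed JSON response
content(response)$choices[[1]]$message$content
```

The same pattern, POST a JSON body and parse the JSON response, works for pretty much any web API, which is the whole point.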
I recently went bowling, and you know those weird 3D bowling animations that all bowling alleys seemed to show whenever you made a strike? They are still alive and well! (At least at my local bowling place.) And then I thought: Can I get animations like that into my daily data science workflow? With
RStudio’s built-in Viewer tab, I absolutely could! Below you find the code for a much improved t.test function that gives you different animations when you hit a strike ($p < 0.01$), a spare ($p < 0.05$), a “near miss” ($p < 0.1$), and a complete miss ($p > 0.1$).
(If you think this is silly, then I agree. Roughly as silly as using ritualized p-value cutoffs to decide whether an experiment is a “success” or not.)
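Here is a minimal sketch of the mechanism, not necessarily the full code from the post: wrap t.test, bucket the p-value, and show the matching GIF in the RStudio Viewer. The GIF file names are made-up placeholders; swap in real animation files or URLs.

```r
# Sketch: a t.test wrapper that shows a bowling GIF in the RStudio Viewer.
bowling_t_test <- function(...) {
  result <- t.test(...)
  p <- result$p.value
  # Pick an animation based on the (ritualized) p-value cutoffs
  gif <- if (p < 0.01) {
    "strike.gif"
  } else if (p < 0.05) {
    "spare.gif"
  } else if (p < 0.1) {
    "near_miss.gif"
  } else {
    "miss.gif"
  }
  # The Viewer can only show files from the session temp directory,
  # so write a tiny HTML page there and point the Viewer at it
  html_file <- file.path(tempdir(), "bowling.html")
  writeLines(sprintf('<img src="%s">', gif), html_file)
  rstudioapi::viewer(html_file)
  result
}
```

You call it just like t.test, for example bowling_t_test(rnorm(30), rnorm(30, mean = 1)).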
While Big Data™ might not be a buzzword anymore, data that’s uncomfortably large is not going anywhere. In this 30-minute screencast, I go through three strategies you can use to tackle big data in R and Python. I also briefly cover three tools:
duckDB,
Apache Spark, and
SnowflakeDB.
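As a small taste of the first of those tools, here is a minimal sketch of querying a large CSV with DuckDB from R, so that only the aggregated result is pulled into memory. The file and column names are made up, and this is not necessarily how it’s done in the screencast:

```r
library(DBI)

# Connect to an in-memory DuckDB database
con <- dbConnect(duckdb::duckdb())

# DuckDB scans the (hypothetical) large CSV on disk and aggregates it,
# so only the ten-row result ever reaches R's memory
top_users <- dbGetQuery(con, "
  SELECT user_id, COUNT(*) AS n_events
  FROM read_csv_auto('events.csv')
  GROUP BY user_id
  ORDER BY n_events DESC
  LIMIT 10
")

dbDisconnect(con, shutdown = TRUE)
```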
You can say what you want about Twitter, but the way animated GIFs are presented on that platform is pretty nice. It’s not so surprising that they play and loop, as one would expect them to do, but the nice thing is that if you click them, they pause. This tiny change in GIF behavior has resulted in a small cottage industry of GIF games (like
here or
here) and click-the-GIF-and-see-what-you-get animations (like
Mario roulette). Here I’ll go through how I made one of the latter in R with
gganimate, showing the top 100 most downloaded R packages. But first, the actual GIF! Click to pause it and learn more about a popular R package:
I’ve dug up an old, never-published dataset that I collected back in 2013. This dataset fairly cleanly shows that it’s harder to remember words correctly if you also have to remember the case of the letters. That is, if the shown word is Banana and the subject recalls it as Banana, then it’s correct, but banana is as wrong as if the subject had recalled bapple. It’s not very surprising that it’s harder to correctly remember words when case matters, but the result and the dataset are fairly “clean”: Two groups, simple-to-understand experimental conditions, plenty of participants (200+), and the data could even be analyzed with a t-test (but then please look at the confidence interval, and not the p-value!). So maybe a dataset that could be used when teaching statistics, who knows? Well, here it is, released by me to the public domain:
In the rest of this post, I’ll explain what’s in this dataset and how it was collected, and I’ll end with a short example analysis of the data. First up, here’s how the memory task was presented to the participants (click here if you want to try it out yourself):
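And, as a preview of the example analysis, here is a minimal sketch of the suggested t-test with the focus on the confidence interval. The file name and column names are assumptions, not the dataset’s actual schema:

```r
# Sketch only: assumed file and column names, not the dataset's actual schema
d <- read.csv("case_sensitive_recall.csv")

# Welch t-test comparing the number of correctly recalled words between the two groups
fit <- t.test(n_correct ~ group, data = d)

# Look at the confidence interval for the group difference rather than the p-value
fit$conf.int
```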