Rimma Boshernitsan (whom I do not know) provides a very (overly?) optimistic view of how we can leverage technology to better manage natural resources.

I found out that Annie Atkins has or had a Substack.

If I ever wanna try very expensive Japanese clothing, I’ll try Kaptain Sunshine.

Here is a great collection of commercial illustration from 1950–1975.

Last weekend I was at the Museum of Military History in Vienna and happened to learn about the artist Karl Pippich (only in German).

I do not know what this website is, but it turns out that I really like its museum reviews.

Lemony Snicket and Mike Mignola team up to reimagine PINOCCHIO. WOOOOOOO 🤯

Science

Gauthier et al. (2026) claim that the Twitter (now X) algorithm “promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects”. They do not show the data (and I am too lazy to check), but the results are certainly not encouraging.

Zhou et al. (2026) introduce a time series foundation model that is pre-trained with latent state prediction (similar to JEPA, but different).
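To make that concrete, here is a minimal PyTorch sketch of what JEPA-style latent-state prediction pre-training could look like. The module choices (a GRU encoder, a frozen target encoder standing in for the usual EMA copy, the dimensions) are my assumptions for illustration, not the actual architecture of Zhou et al.

```python
import torch
import torch.nn as nn

# Sketch of latent-state prediction pre-training (JEPA-style).
# All modules and shapes here are illustrative assumptions.
class LatentPredictionModel(nn.Module):
    def __init__(self, input_dim=1, latent_dim=64):
        super().__init__()
        # Context encoder: embeds the visible part of the series.
        self.encoder = nn.GRU(input_dim, latent_dim, batch_first=True)
        # Predictor: maps the context latent to the latent of the masked part.
        self.predictor = nn.Linear(latent_dim, latent_dim)
        # Target encoder: embeds the masked part; usually an EMA copy
        # of the encoder, kept frozen here for simplicity.
        self.target_encoder = nn.GRU(input_dim, latent_dim, batch_first=True)
        for p in self.target_encoder.parameters():
            p.requires_grad = False

    def forward(self, context, target):
        # Encode the context and predict the target's latent state ...
        _, h_ctx = self.encoder(context)          # (1, B, D)
        pred = self.predictor(h_ctx.squeeze(0))   # (B, D)
        with torch.no_grad():
            _, h_tgt = self.target_encoder(target)
        # ... and regress in latent space instead of on raw values.
        return nn.functional.mse_loss(pred, h_tgt.squeeze(0))

# Usage: split each series into a visible context and a masked chunk.
model = LatentPredictionModel()
context = torch.randn(8, 48, 1)  # batch of 8 series, 48 visible steps
target = torch.randn(8, 12, 1)   # 12 masked steps to predict in latent space
loss = model(context, target)
loss.backward()
```

The appeal of this setup is that the loss lives entirely in latent space: the model is never asked to reconstruct raw future values, only to predict the target encoder’s summary of them.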

Munro et al. (2026) use a meta-meta-analysis (whatever that is) to tell us that exercise seems to be as effective as antidepressants. Not sure if I buy that, but great if that is the case (and: this would be another reason why pupils should do more sport in school, not less). This article summarizes the results.

For different reasons, I was recently reminded of Gaier and Ha (2019), who use an evolutionary approach to create neural networks that work well for a given task irrespective of the weights. As so often with Ha’s papers from this time, the contribution comes with its own little website.
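The trick, as I remember it, is that candidate topologies are evaluated with a single shared weight value swept over a small range instead of with trained weights, so the search rewards structures whose behavior does not depend on weight tuning. A toy sketch of that evaluation step (my own illustration with made-up interfaces, not the paper’s code):

```python
import numpy as np

def evaluate_weight_agnostic(forward_fn, env_rollout,
                             shared_weights=(-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)):
    """Score a topology by its mean return when every connection
    uses the same shared weight value (no per-weight training)."""
    returns = []
    for w in shared_weights:
        # Freeze the whole network at one scalar weight and roll it out.
        policy = lambda x, w=w: forward_fn(x, w)
        returns.append(env_rollout(policy))
    # Topologies that work across the whole sweep score highest;
    # the evolutionary outer loop mutates and selects on this score.
    return float(np.mean(returns))

# Dummy stand-ins, just to show the interface:
def forward_fn(x, w):
    return np.tanh(w * np.sum(x))   # "topology" = a single tanh node

def env_rollout(policy):
    x = np.ones(4)
    return -abs(policy(x) - 0.5)    # reward closeness to 0.5

print(evaluate_weight_agnostic(forward_fn, env_rollout))
```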

I always liked taking a multi-objective angle on multi-task learning, but never worked on it. Yu et al. seems to be a seminal paper in that direction, since their approach provides the baseline for many follow-up studies. Essentially, they adapt the per-task gradients so they do not interfere with each other, projecting away the components that conflict (or at least that is what I think they do).
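For what it’s worth, here is a tiny NumPy sketch of that gradient-surgery idea as I understand it: when two task gradients conflict (negative dot product), one is projected onto the normal plane of the other, removing the conflicting component. The function name is mine; check the paper for the actual procedure.

```python
import numpy as np

def project_conflicting(g_i, g_j):
    """If g_i conflicts with g_j (negative dot product), remove from g_i
    the component pointing against g_j; otherwise leave g_i unchanged."""
    dot = g_i @ g_j
    if dot < 0:
        g_i = g_i - (dot / (g_j @ g_j)) * g_j
    return g_i

# Two conflicting task gradients (their dot product is negative):
g1 = np.array([1.0, 0.5])
g2 = np.array([-1.0, 0.5])

g1_adj = project_conflicting(g1, g2)
print(g1_adj @ g2)  # ~0: the conflicting component is gone
```

After the projection the adjusted gradient no longer pushes against the other task, which is presumably why the method works so well as a baseline.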