Michał Oleszak

2K Followers


Pinned

About me

Welcome to my Medium page! — Hello there! I’m Michał, a machine learning engineer with a statistics background based in Zurich, Switzerland. On Medium, I usually write about topics I’m most interested in: AI, machine learning, computer vision, self-supervised representation learning, MLOps, Bayesian methods, and old-school statistics. You can find out more about me on…

2 min read


Published in Towards AI · Sep 4

Building a Recommender for Implicit Feedback Data

Provide personalized recommendations without knowing your users. — Each recommendation system is different, and some of them are much easier to build than others. Think about Netflix. They know all about each of their movies, have rich personal user data and an abundance of user-produced data: plays, ratings, watch time, and so on. In this data-rich environment, one…

Machine Learning · 16 min read

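The post above deals with implicit feedback, where you only observe interactions such as plays or clicks rather than explicit ratings. As a quick, generic illustration of the idea (not necessarily the article's exact approach), here is a sketch using alternating least squares from the open-source `implicit` library on a made-up interaction matrix:

```python
# A minimal sketch of an implicit-feedback recommender with the `implicit`
# library (illustrative only; the interaction matrix below is made up).
import numpy as np
import scipy.sparse as sparse
from implicit.als import AlternatingLeastSquares

# Rows = users, columns = items; values are interaction counts (e.g. plays),
# not explicit ratings -- that is what makes the feedback "implicit".
interactions = sparse.csr_matrix(np.array([
    [3, 0, 1, 0],
    [0, 5, 0, 2],
    [1, 0, 4, 0],
], dtype=np.float32))

# Matrix factorization via alternating least squares.
model = AlternatingLeastSquares(factors=8, regularization=0.05, iterations=20)
model.fit(interactions)  # recent `implicit` versions expect a user-item matrix

# Top-2 items for user 0, excluding ones they already interacted with.
recommendations = model.recommend(0, interactions[0], N=2)
print(recommendations)
```

The key departure from explicit-feedback recommenders is that zeros are not treated as negative ratings but as missing, low-confidence interactions.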


Published in Towards Data Science · Aug 18

Organizing a Machine Learning Monorepo with Pants

Streamline your ML workflow management — Have you ever copy-pasted chunks of utility code between projects, resulting in multiple versions of the same code living in different repositories? Or perhaps you had to open pull requests in dozens of projects after the GCP bucket where you store your data was renamed? Situations…

Machine Learning · 20 min read

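For a taste of what such a setup looks like in practice (a generic sketch, not the article's actual configuration): Pants is driven by a `pants.toml` at the repository root plus small `BUILD` files next to the code, which declare targets that Pants can build, test, and lint with fine-grained caching. A hypothetical `BUILD` file for one project, assuming the `pants.backend.python` backend is enabled:

```python
# Hypothetical BUILD file for one project in a Pants-managed Python monorepo
# (illustrative only; assumes pants.toml enables pants.backend.python).

# Turn every .py file in this directory into a dependency-tracked target.
python_sources(name="lib")

# Tests live next to the code; Pants caches results and re-runs only
# what is affected by a change.
python_tests(name="tests")
```

Shared utility code then lives in one place and is depended on explicitly, so a command like `./pants test ::` can exercise the whole repository while rebuilding only what changed.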


Published in Towards AI · Jul 11

AI Pulse #2: Meta’s Human-Like AI & Small Language Models

Meta’s two new models learn more like humans do, and what do small language models actually learn? — In this edition: ImageBind from Meta, a model learning from six modalities; I-JEPA, the first step towards Yann LeCun’s dream of human-like AI; Will Microsoft’s Orca set the trend for small language models? AI Pulse is also available as a free newsletter on Substack.

Artificial Intelligence · 6 min read



Published in Towards Data Science · May 17

How to Detect Data Drift with Hypothesis Testing

Hint: forget about p-values — Data drift is a concern for anyone with a machine learning model serving live predictions. The world changes, and as consumers’ tastes or demographics shift, the model starts receiving feature values different from what it has seen in training, which may result in unexpected outputs. Detecting feature drift appears…

Data Science · 18 min read

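As a minimal illustration of the idea (the article goes further and, as the subtitle hints, questions leaning on p-values alone), a two-sample Kolmogorov-Smirnov test can compare a feature's training and serving distributions; the data below is synthetic:

```python
# A minimal sketch of feature-drift detection with a two-sample
# Kolmogorov-Smirnov test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training distribution
serving_feature = rng.normal(loc=0.3, scale=1.0, size=5_000)  # slightly shifted in production

statistic, p_value = stats.ks_2samp(train_feature, serving_feature)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")

# With a conventional 5% significance level, a small p-value flags the shift.
if p_value < 0.05:
    print("Drift detected for this feature.")
```

In practice such a test runs per feature, and the test statistic (an effect size) is usually more informative than the p-value threshold alone.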


Published in Towards Data Science · May 7

Unboxing DINOv2, Meta’s new all-purpose computer vision backbone

Are vision foundation models catching up with LLMs? — Self-supervised training methods continue to deliver breakthrough after breakthrough. Last week, Meta AI released the second version of their self-DIstillation with NO labels (DINO) model. The model can supposedly be used as a backbone to solve virtually any computer vision task without fine-tuning! Have the foundation models in computer…

Data Science · 8 min read

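For readers who want to try the model rather than just read about it, DINOv2 backbones are published on PyTorch Hub; a minimal sketch (assuming a working torch install and the `facebookresearch/dinov2` hub entry) of extracting frozen image features:

```python
# A minimal sketch of using a DINOv2 backbone as a frozen feature extractor
# (assumes the facebookresearch/dinov2 PyTorch Hub entry point is available;
# not code from the article).
import torch

# Smallest DINOv2 variant: a ViT-S/14 backbone.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# A dummy batch of one 224x224 RGB image (sides must be multiples of 14).
image = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    features = model(image)  # one embedding vector per image

print(features.shape)  # e.g. torch.Size([1, 384]) for the ViT-S/14 backbone
```

Those features can then feed a lightweight linear head or a k-NN classifier, which is what "backbone without fine-tuning" means in practice.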


Published in Towards AI · May 1

AI Pulse #1: DINOv2, All The LLMs & Open-Source AI

A new foundation model for computer vision, making sense of the spree of open-source LLMs, and should AI be open-source? — AI Pulse is also available at pulseofai.substack.com. In this edition: DINOv2, a universal computer vision backbone; A spree of open-source LLMs emerges following LLaMA’s leak; Should AI models be open-sourced?

Artificial Intelligence · 8 min read



Published in Towards Data Science · Apr 17

Model Optimization with TensorFlow

Reduce your models' latency, storage, and inference costs with quantization and pruning — Over the last few years, machine learning models have seen two seemingly opposing trends. On the one hand, the models tend to get bigger and bigger, culminating in what’s all the rage these days: the large language models. Nvidia’s Megatron-Turing Natural Language Generation model has 530 billion parameters! On the…

Machine Learning · 9 min read

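As a flavor of one of the techniques the post covers, here is a generic sketch (not the article's exact code) of post-training dynamic-range quantization with the TensorFlow Lite converter:

```python
# A minimal sketch of post-training quantization with the TFLite converter
# (a generic example on a tiny stand-in model; not the article's exact code).
import tensorflow as tf

# In practice this would be your trained Keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

# The serialized model is typically several times smaller than the float32 original.
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```

Pruning is handled separately via the tensorflow-model-optimization toolkit, and both techniques typically trade a small accuracy hit for lower latency and storage.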


Published in Towards AI · Feb 21

Forget About ChatGPT

Bard, Sparrow, and multimodal chatbots will soon render it obsolete, and here is why. — ChatGPT, the OpenAI chatbot released last autumn, has taken the Internet by storm. Arguably, no other machine learning model has ever made so many headlines outside of the AI community. It provides a near-human conversational experience and can help many of us do our work faster: from SEO…

Artificial Intelligence · 11 min read



Published in Towards Data Science · Feb 13

Bad machine learning models can still be well-calibrated

You don’t need a perfect oracle to get your probabilities right. — Machine learning models are often evaluated based on their performance, measured by how close some metric is to zero or one (depending on the metric), but this is not the only factor that determines their usefulness. In some cases, a model that is not very accurate overall can still be…

Machine Learning · 11 min read

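To make the article's thesis concrete with a toy example (synthetic data, not the article's code): a model can have mediocre accuracy yet be perfectly calibrated, and scikit-learn lets you check the two properties separately.

```python
# A minimal sketch contrasting accuracy with calibration on synthetic data
# (illustrative only, not the article's code).
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(0)

# True probabilities of the positive class are drawn at random, so even a
# perfectly calibrated model cannot be very accurate on this data.
true_probs = rng.uniform(0.3, 0.7, size=10_000)
y_true = rng.binomial(1, true_probs)

# A "bad but well-calibrated" model: it predicts the true probabilities exactly.
y_prob = true_probs
y_pred = (y_prob >= 0.5).astype(int)

print("accuracy:", accuracy_score(y_true, y_pred))        # mediocre by design
print("Brier score:", brier_score_loss(y_true, y_prob))   # lower is better

# Reliability curve: predicted probabilities vs. observed frequencies per bin.
frac_positives, mean_predicted = calibration_curve(y_true, y_prob, n_bins=10)
print(np.round(mean_predicted, 2))
print(np.round(frac_positives, 2))  # close to mean_predicted => well-calibrated
```

Because the labels are genuinely random given the predicted probabilities, accuracy tops out around 60%, yet the reliability curve sits essentially on the diagonal.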

Michał Oleszak

ML Engineer & Manager | Top Writer in AI & Statistics | michaloleszak.com | Book 1:1 @ topmate.io/michaloleszak
