Every day, interesting things happen in the world of Data Science. People uncover new secrets of machine learning, so we can learn more about the wonders of technology. What has happened in machine learning over the last week? We’ve gathered the most interesting stories.
Here goes a dose of the latest news, discoveries, and inspiring stories. There is something for everyone. Enjoy your read!
Weekly Roundup: March 3rd – 8th
> Doing machine learning the right way by Rob Matheson | MIT News Office, March 7, 2020
A fascinating story on how professor Aleksander Madry strives to build machine-learning models that are more reliable, understandable, and robust.
> Best Machine Learning Books (Updated for 2020) by Alessio Gozzoli | March 3
If you enjoy reading, you will find this article helpful: a curated list of the best books on machine learning.
> Step-By-Step Framework for Imbalanced Classification Projects by Jason Brownlee, March 9
A detailed tutorial on a systematic framework for working through an imbalanced classification dataset. An interesting read for those who want to learn techniques specifically designed for imbalanced classification.
- You can also check out Jason Brownlee’s book Imbalanced Classification with Python: Better Metrics, Balance Skewed Classes, Cost-Sensitive Learning for a longer read on imbalanced classification.
> 20 women doing fascinating work in AI, machine learning and data science by Elaine Burke, March 9
Here’s a little inspiration for all the ladies (and gentlemen) working in the ML field.
> Best 13 Machine Learning Methods And Techniques For Newbies by Nasir Hawlader, March 7
If you’re new to the world of machine learning and not sure which way to go, this article will be your guide.
> KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs
> Don’t forget about the reliable Reddit thread on ML for more news on machine learning!
That’s all folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.
👉 Came across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, we’ll spread the news in our weekly roundup!
ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It
10 min read | Author Jakub Czakon | Updated July 14th, 2021
Let me share a story that I’ve heard too many times.
“… We were developing an ML model with my team, we ran a lot of experiments and got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…
…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything.”
– an unfortunate ML researcher
And the truth is, when you develop ML models you will run a lot of experiments.
Those experiments may:
- use different models and model hyperparameters,
- use different training or evaluation data,
- run different code (including that small change you wanted to test quickly),
- run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).
And as a result, they can produce completely different evaluation metrics.
Keeping track of all that information quickly becomes hard, especially if you want to organize and compare those experiments and be confident that you know which setup produced the best result.
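As a minimal sketch of what tracking that information means in practice (not any particular tool’s API; the function and file names here are illustrative), you could log each run’s parameters and metrics to a JSON file and later query for the best setup:

```python
import json
import time
from pathlib import Path

def log_experiment(params, metrics, run_dir="runs"):
    """Save one run's parameters and metrics to a timestamped JSON file."""
    Path(run_dir).mkdir(exist_ok=True)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,    # hyperparameters, data version, code version...
        "metrics": metrics,  # evaluation results for this run
    }
    out = Path(run_dir) / f"run_{time.time_ns()}.json"
    out.write_text(json.dumps(record, indent=2))
    return out

# Record two runs, then find the setup with the best accuracy
log_experiment({"lr": 0.01, "data_version": "v1"}, {"accuracy": 0.91})
log_experiment({"lr": 0.001, "data_version": "v1"}, {"accuracy": 0.94})

runs = [json.loads(p.read_text()) for p in Path("runs").glob("run_*.json")]
best = max(runs, key=lambda r: r["metrics"]["accuracy"])
print(best["params"])
```

Even this toy version shows why hand-rolled tracking breaks down: you must remember to call it for every run, in every environment, with every detail you might later care about.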
This is where ML experiment tracking comes in. Continue reading ->