This Week in Machine Learning: Algorithms to Know in 2021, Underspecification, Saving Food, and Removing Bias in ML

Welcome back, Neptuners! It’s been a week, so it’s time for our weekly roundup. Here’s a dose of the latest news from the world of machine learning and AI. We hope you find something of interest.

Check out our picks and learn something new. Enjoy the read!

Weekly Roundup: November 17 – 23

» Neptune.ai blog – as always, make sure to visit our blog to find interesting, in-depth articles on machine learning from the last week.

We’ve published a lot of new content recently, so don’t wait: go through our blog!

» All Machine Learning Algorithms You Should Know in 2021 by Terence Shin on Towards Data Science | November 22

A nice list of machine learning algorithms that may prove helpful for beginners or anyone who wants to experiment with different approaches.

» The way we train AI is fundamentally flawed by Will Douglas Heaven on MIT Technology Review | November 18

Data shift is already a problem for training AI. But now a group of 40 researchers across seven different teams at Google has identified another major cause of the common failure of machine-learning models: underspecification. “We are asking more of machine-learning models than we are able to guarantee with our current approach,” says Alex D’Amour, who led the study.

Read more in this interesting article!

👉 Here’s the original paper: Underspecification Presents Challenges for Credibility in Modern Machine Learning

» Know-How to Learn Machine Learning Algorithms Effectively by Shareef Shaik on KDnuggets | November 23

The author shares his approach to learning algorithms beyond the surface level, because machine learning is more than just the fit and predict methods.

» Food Tech Stories: Solving Food Waste Problem With The Help Of AI by Arthur Tkachenko on Hacker Noon | November 20

The author writes that, internationally, between 33% and 50% of all food is never eaten, and its worth is over $1 trillion. At the same time, 800 million people struggle to get a meal on a daily basis.

Is there hope for saving food with the use of AI? Read for yourself!

» Practical strategies to minimize bias in machine learning by Charna Parkey on VentureBeat | November 21

Quite a concise article on how to instrument, monitor, and mitigate bias through a disparate impact measure, with helpful strategies.
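
For context, disparate impact is commonly computed as the ratio of positive-outcome rates between an unprivileged and a privileged group, with 0.8 (the “80% rule”) often used as a threshold. Here’s a minimal sketch in Python; the function name, the 0/1 group encoding, and the toy data are our own illustration, not taken from the article:

    import numpy as np

    def disparate_impact(y_pred, group):
        """Ratio of positive-outcome rates between an unprivileged (0)
        and a privileged (1) group; the '80% rule' compares this to 0.8."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_unpriv = y_pred[group == 0].mean()  # positive rate, unprivileged group
        rate_priv = y_pred[group == 1].mean()    # positive rate, privileged group
        return rate_unpriv / rate_priv

    # Toy example: a model approves 4/10 unprivileged vs 6/10 privileged applicants
    y_pred = [1] * 4 + [0] * 6 + [1] * 6 + [0] * 4
    group = [0] * 10 + [1] * 10
    print(disparate_impact(y_pred, group))  # ~0.667, below the 0.8 threshold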

» KDnuggets™ News with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.


That’s all, folks! Did you find something of interest in this weekly roundup? We hope you got inspired! Don’t forget to check our blog for more great articles.

And if you came across an interesting ML article, or maybe wrote one yourself and would like to share it with other people, let us know and we’ll spread the news in our weekly roundup! See you next week!


READ NEXT

ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

10 min read | Author Jakub Czakon | Updated July 14th, 2021

Let me share a story that I’ve heard too many times.

“… We were developing an ML model with my team, we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried and we needed to re-run pretty much everything”

– unfortunate ML researcher.

And the truth is, when you develop ML models you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters,
  • use different training or evaluation data,
  • run different code (including that small change you wanted to test quickly),
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information can quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.

This is where ML experiment tracking comes in. 
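
To make that concrete, here’s a minimal, hand-rolled sketch of what tracking can look like: persisting each run’s parameters and metrics to a JSON file so runs stay comparable weeks later. This is plain Python under our own assumptions, and every name in it is illustrative; dedicated tools do far more:

    import json
    import time
    import uuid
    from pathlib import Path

    def log_experiment(params: dict, metrics: dict, log_dir: str = "experiments"):
        """Persist one experiment run (hyperparameters + results) to a JSON file."""
        run = {
            "id": uuid.uuid4().hex[:8],  # unique run identifier
            "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
            "params": params,    # model / training configuration
            "metrics": metrics,  # evaluation results
        }
        Path(log_dir).mkdir(exist_ok=True)
        path = Path(log_dir) / f"run_{run['id']}.json"
        path.write_text(json.dumps(run, indent=2))
        return path

    # Usage: record what was run, so you can tell later which setup won
    log_experiment(
        params={"model": "resnet18", "lr": 3e-4, "dataset_version": "v2"},
        metrics={"val_accuracy": 0.912},
    )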

Continue reading ->
Read more:

» Cross-Validation in Machine Learning: How to Do It Right

» How to Track and Organize ML Experiments That You Run in Google Colab

» Essential PIL (Pillow) Image Tutorial (for Machine Learning People)

» TensorBoard vs Neptune: How Are They ACTUALLY Different