
This Week in Machine Learning: SEO, History, Product Management, & Top Startups

The world of machine learning is changing every second, and if you want to stay on top of the latest innovations, you need to follow the stories and discoveries. We've got your back!

Here are some of the most insightful stories, helpful resources, and news. Only the top picks. Enjoy the read!

Weekly Roundup: May 5-11

Neptune.ai blog – make sure to visit our blog for interesting, in-depth articles on machine learning.

Over the past week, we've published a series of posts on papers from the ICLR 2020 conference.

Also, we’ve recently launched a podcast so tune in and enjoy! 🎧

> Top 10 Machine Learning Startups of 2020 by Priya Dialani on Analytics Insight | May 10

A list of the most innovative machine learning companies in 2020. Hopefully, you'll find some inspiration here.

> Unexpected Scientific Insights into COVID-19 From AI Machine Learning Tool on SciTechDaily | May 2020

A team of materials scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) – scientists who normally research things like high-performance materials for thermoelectrics or battery cathodes – has built a text-mining tool in record time to help the global scientific community synthesize the mountain of scientific literature on COVID-19 being generated every day.

An interesting read, make sure to check it out!

> Millions of historic newspaper images get the machine learning treatment at the Library of Congress by Devin Coldewey at Techcrunch | May 7

A new effort from the Library of Congress has digitized and organized photos and illustrations from centuries of news using state-of-the-art machine learning. (The run lasted for 19 days!)

> A Product Manager’s Guide to Machine Learning: Core Ideas by Vijay Patha on Towards Data Science blog | May 6

A must-read for all product managers working in the ML field.

> From Google AI Blog – these folks always have something interesting on their plate. Two posts this week:

> A Practical Introduction to Machine Learning for SEO Professionals by Hamlet Batista on SEJ | May 8, 2020

This guide to machine learning will teach you how to build a model to predict whether adding keywords in title tags can increase organic search clicks. It may be nothing spectacular, but if you’re interested in SEO and ML, check it out.

> KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

> Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.


That’s all folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.

👉 Came across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, we’ll spread the news in our weekly roundup!


READ NEXT

ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

10 mins read | Author Jakub Czakon | Updated July 14th, 2021

Let me share a story that I’ve heard too many times.

“… We were developing an ML model with my team, we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything”

– unfortunate ML researcher.

And the truth is, when you develop ML models you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters,
  • use different training or evaluation data,
  • run different code (including that small change you wanted to test quickly),
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information can very quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.

This is where ML experiment tracking comes in. 
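To make the idea concrete, here is a minimal sketch of what a tracked run might capture, using nothing but the Python standard library. All names here (`log_experiment`, the record fields, the file layout) are illustrative assumptions, not the API of Neptune or any other tracking tool — real tools record much more and do it automatically.

```python
import json
import platform
import time
from pathlib import Path


def log_experiment(run_dir, params, metrics, data_version, code_version):
    """Save the pieces needed to reproduce and compare a run.

    Field names are illustrative; dedicated experiment trackers
    capture this metadata (and more) through their own APIs.
    """
    record = {
        "timestamp": time.time(),
        "params": params,              # model hyperparameters
        "metrics": metrics,            # evaluation results
        "data_version": data_version,  # e.g. a dataset hash or tag
        "code_version": code_version,  # e.g. a git commit SHA
        "environment": {               # enough to re-create the setup
            "python": platform.python_version(),
        },
    }
    path = Path(run_dir) / "run.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record, indent=2))
    return record


# Usage: one call at the end of each training run
rec = log_experiment(
    "runs/exp-001",
    params={"lr": 0.001, "batch_size": 32},
    metrics={"val_accuracy": 0.91},
    data_version="dataset-v2",
    code_version="abc1234",
)
```

Even a simple record like this answers the questions from the story above: which parameters, which data, which code, and which environment produced which metrics.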

Continue reading ->

24 Evaluation Metrics for Binary Classification (And When to Use Them)


How to Set Up Continuous Integration for Machine Learning with Github Actions and Neptune: Step by Step Guide


The Best MLOps Tools You Need to Know as a Data Scientist


Data Science & Machine Learning in Containers
