
This Week in Machine Learning: Visualizing Neurons, AI, Resources for Data Scientists, and More

The last week has been hectic in the world of ML. It was hard to choose from all the fascinating stories, so we included them all in our weekly roundup.

Here goes a dose of the latest news, discoveries, and inspiring stories from the world of Machine Learning. There is something for everyone. Enjoy your read!

Weekly Roundup: April 14 – 20

> Neptune.ai blog – make sure to visit our blog to find out interesting and in-depth articles on machine learning.

Also, we’ve recently launched a podcast so tune in and enjoy! 🎧

> 20 Best Machine Learning Resources for Data Scientists by Limarc Ambalina on Hackernoon blog | April 17th

The author presents guides, papers, tools and datasets for both computer vision and natural language processing.

A list definitely worth checking out!

> OpenAI launches Microscope to visualize the neurons in popular machine learning models by Khari Johnson on Venture Beat blog | April 14

OpenAI launched Microscope, a library of neuron visualizations covering nine popular or heavily studied neural networks, with millions of images in the collection. Like a microscope in a laboratory, Microscope is meant to help AI researchers better understand the architecture and behavior of neural networks with tens of thousands of neurons.

Read more in the article!

> Google launches Cloud Healthcare API in general availability by Kyle Wiggers on Venture Beat | April 20

Google announced the general availability of Cloud Healthcare API, a service that facilitates the exchange of data between healthcare applications and solutions built on Google Cloud.

👉 You can read the original news release here.

> Artificial Intelligence that can evolve on its own is being tested by Google scientists by Jason Murdock on Newsweek | April 14

Computer scientists working for a high-tech division of Google are testing how machine learning algorithms can be created from scratch, then evolve naturally, based on simple math.

👉 Read the original research paper here.

> Model quantifies the impact of quarantine measures on Covid-19’s spread by Mary Beth Gallagher | Department of Mechanical Engineering | April 16

A machine learning algorithm combines data on the disease’s spread with a neural network, to help predict when infections will slow down in each country.

> This Doctor From Kashmir Uses Machine Learning To Crunch Coronavirus Data by Andrew Wight on Forbes | April 19

An inspiring story about a physician-turned-entrepreneur raised in Kashmir who is now part of a team using big data and machine learning to detect useful patterns in the tsunami of public health data generated worldwide by the COVID-19 crisis, while doing what he can for those back home.

> KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

> Old but gold, the reliable Reddit thread on ML for more news on machine learning

That’s all folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.

👉 Came across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, we’ll spread the news in our weekly roundup!


ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

10 mins read | Author Jakub Czakon | Updated July 14th, 2021

Let me share a story that I’ve heard too many times.

”… We were developing an ML model with my team, we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried and we needed to re-run pretty much everything”

– unfortunate ML researcher.

And the truth is, when you develop ML models you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters,
  • use different training or evaluation data,
  • run different code (including that small change you wanted to test quickly),
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information quickly becomes hard, especially if you want to organize and compare those experiments and be confident you know which setup produced the best result.

This is where ML experiment tracking comes in. 
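At its simplest, experiment tracking just means recording each run's parameters and metrics somewhere durable so you can compare runs later. Here is a minimal sketch of that idea in plain Python; the `log_run` function and the JSON-per-run layout are illustrative assumptions, not any particular tool's API.

```python
import json
import time
from pathlib import Path

def log_run(params: dict, metrics: dict, log_dir: str = "runs") -> Path:
    """Save one experiment's parameters and metrics to a timestamped JSON file."""
    Path(log_dir).mkdir(exist_ok=True)
    run_id = time.strftime("%Y%m%d-%H%M%S")
    record = {"run_id": run_id, "params": params, "metrics": metrics}
    path = Path(log_dir) / f"{run_id}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

# Example: record one training run's setup and result
log_run(
    params={"model": "resnet18", "lr": 0.001, "dataset_version": "v2"},
    metrics={"val_accuracy": 0.91},
)
```

This works for a handful of runs, but once you add dataset versions, code diffs, and environment details, dedicated tracking tools become far more practical than hand-rolled files.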

Continue reading ->