Spring is in the air. And even though we’re all stuck at home due to the coronavirus pandemic, let’s not waste the spring positivity but treat ourselves to something new.
Here goes a dose of the latest news, discoveries, and inspiring stories from the world of Machine Learning. There is something for everyone. Enjoy your read!
Weekly Roundup: March 30th – April 5th
> Neptune.ai blog – make sure to visit our blog to find interesting and in-depth articles on machine learning.
Also, we’ve recently launched a podcast so tune in and enjoy! 🎧
> Exploring Nature-Inspired Robot Agility by Xue Bin (Jason) Peng, Student Researcher and Sehoon Ha, Research Scientist, Robotics at Google, Google AI Blog | April 3
The authors present a framework that takes a reference motion clip recorded from an animal (a dog, in this case) and uses RL to train a control policy that enables a robot to imitate the motion in the real world.
> Machine translation of cortical activity to text with an encoder–decoder framework by Makin, J.G., Moses, D.A. & Chang, E.F. in Nature Neuroscience | April 1
The authors of the study show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent advances in machine translation, they train a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence.
In other words: AI systems can translate our brain activity into fully formed text, without hearing a single word uttered.
Here’s the article summarizing the study: New AI System Translates Human Brain Signals Into Text With Up to 97% Accuracy by Peter Dockrill | April 1
> Should you really use machine learning for that? by Caleb Kaiser on Towards Data Science | April 1
A checklist to help you decide whether machine learning is the right tool for your problem.
> New Ways To Optimize Machine Learning by Bryon Moyer | April 2
A quick read on the different approaches for improving performance and lowering power in ML systems.
> KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.
> Old but gold: the reliable Reddit thread on ML for more machine learning news.
That’s all, folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.
👉 Came across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, we’ll spread the news in our weekly roundup!
ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It
10 mins read | Author Jakub Czakon | Updated July 14th, 2021
Let me share a story that I’ve heard too many times.
“… We were developing an ML model with my team, we ran a lot of experiments and got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…
…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything”
– unfortunate ML researcher.
And the truth is, when you develop ML models you will run a lot of experiments.
Those experiments may:
- use different models and model hyperparameters,
- use different training or evaluation data,
- run different code (including that small change you wanted to test quickly),
- run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).
And as a result, they can produce completely different evaluation metrics.
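To make this concrete, here is a minimal sketch of what tracking those experiment attributes could look like, using only the Python standard library. The function names (`log_experiment`, `best_run`) and the JSON-file storage scheme are illustrative assumptions, not the API of any particular tracking tool:

```python
import json
import uuid
from pathlib import Path

def log_experiment(run_dir, params, metrics, tags=None):
    """Save one run's configuration and results as a JSON record on disk."""
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    run_id = f"run_{uuid.uuid4().hex[:8]}"
    record = {
        "run_id": run_id,
        "params": params,    # model type, hyperparameters, data version, code/env info...
        "metrics": metrics,  # evaluation results for this run
        "tags": tags or [],
    }
    (run_dir / f"{run_id}.json").write_text(json.dumps(record, indent=2))
    return run_id

def best_run(run_dir, metric, maximize=True):
    """Scan all saved runs and return the record with the best value of `metric`."""
    records = [json.loads(p.read_text()) for p in Path(run_dir).glob("run_*.json")]
    return max(records, key=lambda r: r["metrics"][metric] * (1 if maximize else -1))
```

With every run logged this way, answering “which setup produced the best result?” becomes a lookup instead of guesswork, e.g. `best_run("runs", "accuracy")["params"]`. Dedicated experiment-tracking tools add much more on top (UI, comparisons, artifact storage), but the core idea is the same.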
Keeping track of all that information quickly becomes hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.
This is where ML experiment tracking comes in. Continue reading ->