Machine learning is fascinating. New things happen every second while we’re busy performing our daily tasks. If you want to know what big things have happened over the last week, make sure to check this weekly roundup.
Here are the best picks from the last week from the world of machine learning. Enjoy the read!
Weekly Roundup: July 21-27
» Neptune.ai blog – as always, make sure to visit our blog to find out interesting and in-depth articles on machine learning from the last week. 🙂
» Will The Latest AI Kill Coding? by Frederik Bussler on Towards Data Science | July 21
Is superintelligence closer than we think? 🤖 “OpenAI’s GPT-3, now in use by beta testers, can already code in any language. Machine-dominated coding is almost at our doorstep.” Find out more in this interesting article!
» Machine can learn unsupervised ‘at speed of light’ after AI breakthrough, scientists say by Anthony Cuthbertson on Independent | July 22
Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations. 💡 Should we be afraid? Check it out yourself!
» Speech Emotion Recognition (SER) through machine learning on Analytics Insight | July 25
Scientists can leverage machine learning to extract the underlying emotion from speech audio data and gain insights into how humans express emotion through voice. A must-read for enthusiasts of SER.
» Disney Research Studios demonstrates automatic face swapping with faster, cheaper AI by Brittany Hillen on DPReview | July 21
A little something for the fans of face swapping. Disney Research Studios and ETH Zurich have published a study detailing a new algorithm that can swap faces from one subject to another in high-resolution photos and videos. 👩🏿👩 Video included.
» KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.
» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.
That’s all folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.
👉 Came across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, we’ll spread the news in our weekly roundup!
ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It
10 mins read | Author Jakub Czakon | Updated July 14th, 2021
Let me share a story that I’ve heard too many times.
”… We were developing an ML model with my team, we ran a lot of experiments and got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…
…after a few weeks, we weren’t even sure what we had actually tried and needed to re-run pretty much everything”
– unfortunate ML researcher.
And the truth is, when you develop ML models you will run a lot of experiments.
Those experiments may:
- use different models and model hyperparameters,
- use different training or evaluation data,
- run different code (including that small change you wanted to test quickly),
- run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).
And as a result, they can produce completely different evaluation metrics.
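To make the idea concrete, here is a minimal sketch of what tracking those details might look like by hand: a hypothetical `log_experiment` helper (not part of any library mentioned here) that appends each run's hyperparameters, data version, metrics, and environment info to a JSON Lines file.

```python
import json
import platform
import sys
import uuid
from datetime import datetime, timezone

def log_experiment(params, metrics, data_version, log_file="experiments.jsonl"):
    """Append one experiment run (config + results) to a JSON Lines file."""
    record = {
        "run_id": uuid.uuid4().hex[:8],
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "params": params,              # model hyperparameters
        "data_version": data_version,  # which dataset snapshot was used
        "metrics": metrics,            # evaluation results
        "environment": {               # so you know which versions were installed
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Usage: log two runs that differ only in hyperparameters
log_experiment({"lr": 0.01, "layers": 3}, {"accuracy": 0.91}, "v1")
log_experiment({"lr": 0.001, "layers": 5}, {"accuracy": 0.94}, "v1")
```

Even this toy version captures the things the researcher above forgot to save; dedicated experiment tracking tools do the same, but automatically and at scale.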
Keeping track of all that information can very quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.
This is where ML experiment tracking comes in. Continue reading ->