There are many things in deep learning that we don’t understand (yet). There are limitations, hidden secrets, and obstacles that scientists are not yet ready to overcome. These fascinating aspects make us want to dive deeper into the ML world. This week, we’re focusing on the limitations and the human side of AI.
What has happened in ML over the last week? Here are the best picks. Enjoy the read!
Weekly Roundup: July 14-20
» Neptune.ai blog – as always, make sure to visit our blog for interesting and in-depth articles on machine learning from the last week. 🙂
» How an AI graphic designer convinced clients it was human by Thomas Macaulay on The Next Web | July 16
A funny but extremely interesting story about an AI graphic designer and its clients. Doesn’t it make you wonder whether AI will actually replace us in the future? Make sure to check out this interesting read! 👨🎨
» Researchers develop AI algorithm that can generate images by Bryan Walsh on Axios | July 18
Short but informative. Check out the article if you’re interested in algorithms that generate images from text.
» MIT researchers warn that deep learning is approaching computational limits by Kyle Wiggers on VentureBeat | July 15
Have we reached the computational limits in deep learning? What’s ahead of us? Read more in this article.
Also, check out the original study, The Computational Limits of Deep Learning, by MIT researchers.
» Weird AI illustrates why algorithms still need people by Ben Dickson on VentureBeat | July 18
There’s a wide debate on whether AI will ever become independent and stop needing humans. Will it? If you’re interested, check out this article.
» Could super Artificial Intelligence be, in some sense, alive? on Mind Matters | July 18
A short read for enthusiasts of ethics and humanism in AI.
» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.
That’s all folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.
👉 Have you come across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, and we’ll spread the news in our weekly roundup!
ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It
10 min read | Author Jakub Czakon | Updated July 14th, 2021
Let me share a story that I’ve heard too many times.
“… We were developing an ML model with my team, we ran a lot of experiments and got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…
…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything.”
– an unfortunate ML researcher.
And the truth is, when you develop ML models you will run a lot of experiments.
Those experiments may:
- use different models and model hyperparameters,
- use different training or evaluation data,
- run different code (including that small change you wanted to test quickly),
- run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).
And as a result, they can produce completely different evaluation metrics.
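To make runs like these comparable later, one minimal approach is to write each run’s parameters, metrics, and basic environment info to its own folder. Here’s a rough sketch in plain Python; the `log_run` helper is hypothetical, not the API of any particular tracking tool:

```python
import json
import sys
import time
import uuid
from pathlib import Path

def log_run(params: dict, metrics: dict, root: str = "runs") -> Path:
    """Save what you'd need to compare runs later: the parameters,
    the evaluation metrics, and the Python version used."""
    # Unique run id: timestamp plus a short random suffix
    run_id = time.strftime("%Y%m%d-%H%M%S") + "-" + uuid.uuid4().hex[:6]
    run_dir = Path(root) / run_id
    run_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "params": params,      # model and training hyperparameters
        "metrics": metrics,    # evaluation results for this run
        "python": sys.version, # coarse environment fingerprint
    }
    (run_dir / "run.json").write_text(json.dumps(record, indent=2))
    return run_dir

# Two runs that differ only in learning rate, each saved separately
dir_a = log_run({"lr": 0.01, "layers": 2}, {"accuracy": 0.87})
dir_b = log_run({"lr": 0.001, "layers": 2}, {"accuracy": 0.91})
```

Even a throwaway helper like this beats nothing: once every run leaves a `run.json` behind, you can diff setups and find the best result instead of re-running everything from memory. A real tracking tool adds the parts this sketch skips, such as code versions, dataset hashes, and a UI for comparison.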
Keeping track of all that information quickly becomes hard, especially if you want to organize and compare those experiments and be confident you know which setup produced the best result.
This is where ML experiment tracking comes in. Continue reading ->