The last two weeks have brought plenty of news, from scientific and medical discoveries to interesting stories and practical guides.
What’s been happening in the world of data, robots, and numbers? Check out the latest news in our weekly roundup!
Weekly Roundup: September 22 – October 5
» Neptune.ai blog – as always, make sure to visit our blog to find interesting, in-depth articles on machine learning from the last week. We’ve published a lot of new content recently, so don’t wait, go through our blog!
» How close is AI to decoding our emotions? on MIT Technology Review | September 24
Here’s an intriguing interview (podcast) with leading researchers on Emotion AI, which is already a huge business. If you’re familiar with Replika, an AI friend, you’ll find this conversation interesting. There’s more to it, so just tune in and listen 🎧
» These weird, unsettling photos show that AI is getting smarter by Karen Hao on MIT Technology Review | September 25
Don’t get scared by the title! It’s an interesting read about applying GPT-3-style methods to both images and text. Researchers from the Allen Institute for Artificial Intelligence (AI2) have developed a new text-and-image model, aka a visual-language model, that can generate images given a caption. AI is getting better and better! 😉
» The state of AI in 2020: Democratization, industrialization, and the way to artificial general intelligence by George Anadiotis on ZDNet | October 1
To quote the author: “from fit for purpose development to pie in the sky research, this is what AI looks like in 2020.” It’s a great summary of what the AI industry looks like in 2020.
» Here’s what happened when neural networks took on the Game of Life by Ben Dickson on The Next Web | September 29
AI researchers at Swarthmore College and the Los Alamos National Laboratory investigate how neural networks learn the Game of Life and why they often fail to find the right solution.
Their findings highlight some of the key issues with deep learning models and give some interesting hints at what could be the next direction of research for the AI community. A read worth checking out!
» Anticipating heart failure with machine learning by Adam Conner-Simons on MIT News | October 1
Machine learning is known for its great role in helping to save human lives. Here’s news about an algorithm that can detect whether excess fluid in the lungs may lead to heart failure, all by looking at a single X-ray.
» Neural Hallucinations by Rishab Sharma on Towards Data Science | October 5
Can AI hallucinate? However abstract it may sound, it turns out AI can (sort of 😉 ) hallucinate. In this article, the author discusses how the phenomenon of hallucination in neural networks can be used for image inpainting.
» KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.
» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.
That’s all folks! Have you found something of interest in this weekly roundup? We hope you got inspired! Don’t forget to check our blog for more inspiring articles.
And if you came across an interesting ML article, or maybe wrote one yourself and would like to share it with other people, let us know, we’ll spread the news in our weekly roundup! See you next week!
ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It
10 mins read | Author Jakub Czakon | Updated July 14th, 2021
Let me share a story that I’ve heard too many times.
“… We were developing an ML model with my team, we ran a lot of experiments and got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…
…after a few weeks, we weren’t even sure what we had actually tried and we needed to re-run pretty much everything.”
– unfortunate ML researcher.
And the truth is, when you develop ML models you will run a lot of experiments.
Those experiments may:
- use different models and model hyperparameters,
- use different training or evaluation data,
- run different code (including that small change you wanted to test quickly),
- run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).
And as a result, they can produce completely different evaluation metrics.
Keeping track of all that information can very quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.
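Even before adopting a dedicated tool, the idea can be sketched in a few lines: for every run, save the hyperparameters, the dataset version, the environment, and the resulting metrics somewhere you can compare later. The snippet below is a minimal, hypothetical illustration (the `log_experiment` helper and its field names are made up for this example, not any particular tracking tool's API):

```python
import json
import platform
import time
from pathlib import Path

def log_experiment(run_dir, params, data_version, metrics):
    """Save the information needed to reproduce and compare one run.

    Hypothetical minimal sketch: real experiment trackers record far
    more (code/git version, hardware, artifacts, live metric charts).
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,              # model hyperparameters
        "data_version": data_version,  # which dataset the run used
        "environment": {               # which environment ran the code
            "python": platform.python_version(),
        },
        "metrics": metrics,            # evaluation results
    }
    path = Path(run_dir)
    path.mkdir(parents=True, exist_ok=True)
    out = path / f"run_{int(time.time() * 1000)}.json"
    out.write_text(json.dumps(record, indent=2))
    return out

# Example: log one (fabricated) training run
log_file = log_experiment(
    "experiments",
    params={"lr": 0.01, "batch_size": 32},
    data_version="train-2020-09-22",
    metrics={"accuracy": 0.91},
)
```

With one JSON file per run you can at least grep or load them into a dataframe to compare setups, though doing this by hand is exactly the kind of bookkeeping that tends to get forgotten.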
This is where ML experiment tracking comes in. Continue reading ->