Machine learning and artificial intelligence help us discover new things and push the boundaries of science. Every week brings tons of news, opinions, and discoveries in the world of machine learning and AI. But with our Weekly Roundup, you can easily catch up on the most important information and learn what’s happened in the industry.
If you’re interested in what has happened in the machine learning realm over the last two weeks, see what we’ve gathered for you. Here are the best articles from that period. Enjoy the read!
Weekly Roundup: January 12-25
Every week on our blog, you can find professional articles on ML written by experts. We publish regularly, so visit our blog for all the latest articles!
» Jumbled-up sentences show that AIs still don’t really understand language by Will Douglas Heaven on MIT Technology Review | January 12
NLP systems are well-trained, but do they really understand human language?
Researchers at Auburn University in Alabama and Adobe Research discovered the flaw when they tried to get an NLP system to generate explanations for its behavior, such as why it claimed different sentences meant the same thing. When they tested their approach, they realized that shuffling words in a sentence made no difference to the explanations.
» AI Models from Google and Microsoft Exceed Human Performance on Language Understanding Benchmark by Anthony Alford on InfoQ | January 12
Research teams from Google and Microsoft have recently developed natural language processing (NLP) AI models which have scored higher than the human baseline score on the SuperGLUE benchmark. SuperGLUE measures a model’s score on several natural language understanding (NLU) tasks, including question answering and reading comprehension.
Check out more in this story!
» Artificial intelligence researchers rank the top A.I. labs worldwide by Sam Shead on CNBC | January 21
Here’s a short article ranking the top AI labs worldwide. Who’s included and who’s not? Make sure to check it out if you’re interested in which AI labs are the best! 🥇
» Google trained a trillion-parameter AI language model by Kyle Wiggers on VentureBeat | January 12
Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters. They say their 1.6-trillion-parameter model, which appears to be the largest to date, achieved an up to 4 times speedup over the previously largest Google-developed language model (T5-XXL).
Here’s the original paper: Switch transformers: scaling to trillion parameter models with simple and efficient sparsity
» Developing Algorithms That Might One Day Be Used Against You by Ryan F. Mandelbaum on Gizmodo | January 24
Machine learning algorithms are extremely helpful, there’s no doubt about that. But there might be a dark side to them. Brian Nord is a researcher weighing his own work against the potential of AI algorithms to cause harm.
Nord is a cosmologist at Fermilab and the University of Chicago, where he uses artificial intelligence to study the cosmos. At the same time, he’s struggling with the idea that the algorithms he’s writing may one day be biased against him—and even used against him—and is working to build a coalition of physicists and computer scientists to fight for more oversight in AI algorithm development.
Read this interesting interview to learn more.
» NASA Is Training an AI to Detect Fresh Craters on Mars by Daniel Oberhaus on Wired | January 19
An algorithm discovered dozens of Martian craters. It’s a promising remote method for exploring our solar system and understanding planetary history. Maybe, with the aid of AI and machine learning, we’ll take our first steps on Mars sooner than we think. 🪐
» KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.
» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.
That’s all folks! Have you found something of interest in this weekly roundup? We hope you got inspired! Don’t forget to check our blog for more inspiring articles.
And if you came across an interesting ML article, or maybe wrote one yourself and would like to share it, let us know, and we’ll spread the news in our weekly roundup! See you next time!
ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It
10 mins read | Author Jakub Czakon | Updated July 14th, 2021
Let me share a story that I’ve heard too many times.
”… We were developing an ML model with my team, we ran a lot of experiments and got promising results…
…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…
…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything”
– unfortunate ML researcher.
And the truth is, when you develop ML models, you will run a lot of experiments.
Those experiments may:
- use different models and model hyperparameters,
- use different training or evaluation data,
- run different code (including that small change you wanted to test quickly),
- run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).
And as a result, they can produce completely different evaluation metrics.
Keeping track of all that information quickly becomes really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.
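Even without a dedicated tool, the idea behind tracking is simple: save the parameters, metrics, and environment details of every run somewhere you can compare them later. Here’s a minimal, hypothetical sketch in Python (the function name and file layout are our own illustration, not any particular library’s API):

```python
import hashlib
import json
import platform
import time
from pathlib import Path

def log_run(params, metrics, out_dir="runs"):
    """Save one experiment's parameters, metrics, and environment
    info to a JSON file so runs can be found and compared later.
    This is an illustrative sketch, not a specific tracking tool."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,    # model and training hyperparameters
        "metrics": metrics,  # evaluation results for this run
        "python": platform.python_version(),  # environment details
    }
    # Derive a short, stable run id from the record contents
    run_id = hashlib.md5(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:8]
    Path(out_dir).mkdir(exist_ok=True)
    path = Path(out_dir) / f"run_{run_id}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

# Example: record one training run's setup and result
run_file = log_run(
    params={"model": "resnet18", "lr": 0.01, "epochs": 10},
    metrics={"val_accuracy": 0.91},
)
```

A real setup would also record the dataset version and the code commit hash, but even this much is enough to answer “what did we actually try?” weeks later.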
This is where ML experiment tracking comes in.