This Week in Machine Learning: Language & Robotics, 10 Underappreciated Python Packages, Avocado Armchair, and More

Posted January 12, 2021

The new year brings new opportunities, news, and discoveries. Today, we’re bringing you the first weekly roundup of 2021. What’s new? What’s waiting ahead? And what has the past year changed? Check out our short summary of the latest news.

Here are the best articles from the first one and a half weeks of January. Enjoy the read and stay on top of the latest AI trends!

Weekly Roundup: January 1-11

» Neptune.ai blog

The Christmas season and New Year’s Eve might have slowed things down, but not for us! On our blog, you can find professional articles on ML written by experts. We publish regularly, so visit our blog for all the latest articles.

» OpenAI GPT-3 Wrote This Article About Webpack by… OpenAI GPT-3 model on HackerNoon | January 3

It’s widely known that GPT-3 can generate coherent and, more or less, logical texts. This year, we’ll probably see a rise in machine-generated texts, as they’re nothing spectacular these days.

So how about an article written entirely by the ML model? Can you spot the differences between this and a human-written article? Check for yourself.

» Leading computer scientists debate the next steps for AI in 2021 by Ben Dickson on VentureBeat | January 2

What do we need to push AI to the next level? More data and larger neural networks? New deep learning algorithms? Approaches other than deep learning?

This is a topic that has been hotly debated in the AI community and was the focus of an online discussion Montreal.AI held two weeks ago. Titled “AI debate 2: Moving AI forward: An interdisciplinary approach,” the debate was attended by scientists from a range of backgrounds and disciplines.

Read this article for interesting insights into what the machine learning world needs in order to achieve even more.

» Why robotics and language need each other by Matthew Hutson on The Week | January 3

We already know machine learning and language have a lot in common (and it’s not only about the programming language 😉 ). But are we any closer to teaching robots how to talk? Will it ever be possible?

In this article, you’ll find several stories from ML experts who share their insights on language and algorithms.

» This avocado armchair could be the future of AI by Will Douglas Heaven on MIT Technology Review | January 5

What does the avocado chair have to do with AI? DALL·E and CLIP are two new models built by OpenAI that combine language and images to give its AI a better understanding of everyday concepts.

» 10 Underappreciated Python Packages for Machine Learning Practitioners by Vinay Uday Prabhu on KDnuggets

Here’s a more technical piece in which you’ll find 10 underappreciated Python packages covering neural architecture design, calibration, UI creation, and dissemination.

» Five ways to make AI a greater force for good in 2021 by Karen Hao on MIT Technology Review | January 8

Here are the five hopes the author has for 2021 and machine learning. They could bring a lot of good to the world of technology, and beyond. Will they become reality? Let’s hope so! Check it out yourself!

» Best Machine Learning and Artificial Intelligence Books by Stella Sebastian on Reconshell | January 9

Books are always a great source of knowledge. If you’re looking for new titles to add to your reading list, you can find some here. There are 20 books on the list, so make sure to check it out if you’re interested!

» KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

» Old but gold, the reliable Reddit thread on ML for more machine learning news. There’s always something for everyone – tips, tricks, hacks, and more.


That’s all, folks! Have you found something of interest in this weekly roundup? We hope you got inspired! Don’t forget to check our blog for more inspiring articles.

And if you came across an interesting ML article, or maybe wrote one yourself and would like to share it with other people, let us know and we’ll spread the news in our weekly roundup! See you next time!


READ NEXT

ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

Jakub Czakon | Posted November 26, 2020

Let me share a story that I’ve heard too many times.

”… We were developing an ML model with my team, we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything”

– unfortunate ML researcher.

And the truth is, when you develop ML models you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters
  • use different training or evaluation data
  • run different code (including this small change that you wanted to test quickly)
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed)

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information can very quickly become really hard. Especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.  
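To make that concrete, here’s a minimal sketch of what manually capturing this metadata for each run could look like. Everything in it (the `log_experiment` helper, the parameter names, and the file paths) is a hypothetical illustration, not a tool or setup from the story above.

```python
import json
import hashlib
import platform
from datetime import datetime, timezone
from pathlib import Path

try:
    import torch  # only used to record which framework version was installed
    TORCH_VERSION = torch.__version__
except ImportError:
    TORCH_VERSION = "not installed"


def log_experiment(params, metrics, data_path, log_dir="experiment_logs"):
    """Dump everything needed to compare runs later into a JSON file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "params": params,      # model and hyperparameters
        "metrics": metrics,    # evaluation results
        # a hash of the data file stands in for the dataset version
        "dataset_md5": hashlib.md5(Path(data_path).read_bytes()).hexdigest(),
        "environment": {       # which Python / PyTorch was installed
            "python": platform.python_version(),
            "torch": TORCH_VERSION,
        },
    }
    Path(log_dir).mkdir(exist_ok=True)
    out_file = Path(log_dir) / f"run_{record['timestamp'].replace(':', '-')}.json"
    out_file.write_text(json.dumps(record, indent=2))
    return out_file


# Hypothetical usage:
# log_experiment(
#     params={"model": "resnet18", "lr": 3e-4, "batch_size": 64},
#     metrics={"val_accuracy": 0.91},
#     data_path="data/train.csv",
# )
```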

This is where ML experiment tracking comes in. 

Continue reading ->