This Week in Machine Learning: ML & Human Brain, Project Amber, South Park & Deep Fake, and More

Posted November 4, 2020

Algorithms and numbers are all around us; we often don’t realize how much they affect our daily lives, or even our actions. What has happened in the machine learning realm over the last week? If you’re interested in how machine learning is changing our world, make sure to check out our weekly roundup of news and interesting stories. Read, learn, and get inspired!

Weekly Roundup: October 27 – November 2

» Neptune.ai blog – as always, make sure to visit our blog to find interesting, in-depth articles on machine learning from the last week.

We’ve published a lot of new content recently so don’t wait, go through our blog!

» Deep Neural Networks Help to Explain Living Brains by Anil Ananthaswamy on Quanta Magazine | October 28

There’s a coterie of neuroscientists using deep neural networks to make sense of the brain’s architecture. In particular, scientists have struggled to understand the reasons behind the specializations within the brain for various tasks. They have wondered not just why different parts of the brain do different things, but also why the differences can be so specific.

Read more to find out how deep neural networks, often criticized as “black boxes,” are helping neuroscientists understand the organization of living brains.

» The 10 Commandments of Self-Taught Machine Learning Engineers by Daniel Bourke on Towards Data Science | October 29

A short and to-the-point article. Make sure to check it out if you don’t want to be stagnant in your career!

» Alphabet’s Project Amber uses AI to try to diagnose depression from brain waves by Kyle Wiggers on VentureBeat | November 2

Project Amber is a new set of open source resources to help researchers collect and interpret electroencephalography (EEG) data for mental health measurement.

There’s still a lot of work to do, but perhaps some day it’ll be possible to bring this project to a successful end.

» The creators of South Park have a new weekly deepfake satire show by Karen Hao on MIT Technology Review | October 28

Nobody said that machine learning can’t be used for fun to entertain people. 😉 A new weekly satire show from the creators of South Park is using deepfakes, or AI-synthesized media, to poke fun at some of the most important topics of our time. Called Sassy Justice, the show is hosted by the character Fred Sassy, a reporter for the local news station in Cheyenne, Wyoming, who sports a deepfaked face of President Trump, though with a completely different voice, hairstyle, and persona.

Just remember, don’t take anything on this show seriously!

» AI has cracked a key mathematical puzzle for understanding our world by Karen Hao on MIT Technology Review | October 30

Researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is much more accurate than previous deep-learning methods. It’s also much more generalizable, capable of solving entire families of PDEs, and it’s 1,000 times faster than traditional mathematical formulas, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems.

» Insights Into AI Adoption In The Federal Government by Ron Schmelzer on Forbes | October 21

In this interesting interview, Ellery Taylor, Acting Director of the Office of Acquisition Management and Innovation Division at the US General Services Administration (GSA), provides his insights into how the government plans to continue to accelerate its adoption of AI.

» KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.


That’s all folks! Have you found something of interest in this weekly roundup? We hope you got inspired! Don’t forget to check our blog for more inspiring articles.

And if you came across an interesting ML article, or maybe wrote one yourself and would like to share it with other people, let us know, we’ll spread the news in our weekly roundup! See you next week!


READ NEXT

ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

Jakub Czakon | Posted November 26, 2020

Let me share a story that I’ve heard too many times.

“…We were developing an ML model with my team, we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything.”

– unfortunate ML researcher.

And the truth is, when you develop ML models you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters
  • use different training or evaluation data
  • run different code (including that small change you wanted to test quickly)
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed)

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information can very quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.

This is where ML experiment tracking comes in. 
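As a rough illustration of the idea (not any particular tool’s API — the `log_experiment` helper and its field names below are made up for this sketch), even a few lines of standard-library Python can capture the parameters, metrics, and dataset version of each run so nothing gets lost:

```python
import hashlib
import json
import time
from pathlib import Path


def log_experiment(params, metrics, data_path, log_dir="experiments"):
    """Append one experiment record to a JSON-lines log.

    Stores the hyperparameters, the evaluation metrics, and an MD5
    fingerprint of the dataset file so runs stay distinguishable later.
    """
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,
        "metrics": metrics,
        # Hashing the data file lets you detect dataset changes between runs
        "data_hash": hashlib.md5(Path(data_path).read_bytes()).hexdigest(),
    }
    log_path = Path(log_dir)
    log_path.mkdir(parents=True, exist_ok=True)
    with (log_path / "runs.jsonl").open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record


def best_run(log_dir="experiments", metric="acc"):
    """Return the logged run with the highest value of the given metric."""
    lines = (Path(log_dir) / "runs.jsonl").read_text().splitlines()
    runs = [json.loads(line) for line in lines]
    return max(runs, key=lambda r: r["metrics"][metric])
```

Of course, a real tracking tool adds much more on top — code versions, environment capture, comparison UIs — but the core habit is the same: record every run’s setup and results the moment it finishes.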

Continue reading ->