This Week in Machine Learning: AI and Google Search, LO-shot Learning, Dangers of AI, New Deep Learning Models

It’s been two weeks since our last roundup. Interesting things have happened and new stories have emerged, so let’s get right to it!

Here are all the best articles, research papers, and news from the last two weeks. Enjoy the read!

Weekly Roundup: October 13-26

» Neptune.ai blog – as always, make sure to visit our blog for interesting, in-depth articles on machine learning from the last week. 🙂 We’ve published a lot of new content recently, so don’t wait – go through our blog!

» How AI is powering a more helpful Google by Prabhakar Raghavan on Google Blog | October 15

If you’re wondering how Google is using AI to make online search faster and more helpful, read this short but interesting article!

» A radical new technique lets AI learn with practically no data by Karen Hao on MIT Technology Review | October 16

“Less than one”-shot or LO-shot learning can teach a model to identify more objects than the number of examples it is trained on. That could be a big deal for a field that has grown increasingly expensive and inaccessible as the data sets used become ever larger.

👉 Here you can find the original paper.
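To make the idea concrete, here is a rough sketch of the soft-label prototype approach behind LO-shot learning. The specific prototypes, soft-label values, and the `predict` function are illustrative assumptions, not taken from the paper: two training points carry probability distributions over three classes, and a distance-weighted blend of those distributions lets a simple classifier recognize a third class it never saw a dedicated example of.

```python
import numpy as np

# Illustrative sketch of soft-label prototype classification:
# M = 2 prototypes, N = 3 classes (N > M).
prototypes = np.array([0.0, 1.0])   # two 1-D training points
soft_labels = np.array([
    [0.6, 0.4, 0.0],                # prototype at 0 leans toward class 0
    [0.0, 0.4, 0.6],                # prototype at 1 leans toward class 2
])

def predict(x, eps=1e-9):
    """Classify x via a distance-weighted blend of the soft labels."""
    d = np.abs(prototypes - x) + eps      # distances to each prototype
    w = (1.0 / d) / np.sum(1.0 / d)       # normalized inverse-distance weights
    return int(np.argmax(w @ soft_labels))  # most probable blended class

print([predict(x) for x in (0.1, 0.5, 0.9)])  # → [0, 1, 2]
```

Points near either prototype take that prototype’s dominant class, while the region in between is claimed by the third class, which only ever appeared as probability mass in the soft labels – three classes learned from two examples.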

» The true dangers of AI are closer than we think by Karen Hao on MIT Technology Review | October 21

An interesting conversation with William Isaac, a senior research scientist on the ethics and society team at DeepMind. He talks about the current and potential challenges facing AI development, as well as the solutions.

» New deep learning models: Fewer neurons, more intelligence by Institute of Science and Technology Austria on TechXplore | October 13

An international research team from TU Wien (Vienna), IST Austria, and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons.

The team says that the system has decisive advantages over previous deep learning models: it copes much better with noisy input, and, because of its simplicity, its mode of operation can be explained in detail. It does not have to be regarded as a complex “black box”; it can be understood by humans.

👉 Here’s the original paper published in the journal Nature Machine Intelligence.

» AI, Health Insurance, And Data Harmonization: Interview With Shiv Misra, CVS Health by Kathleen Walch on Forbes | October 24

An interesting and insightful interview with Shiv Misra, Head of Medicare Retention Analytics at CVS Health, about why he compares data to oxygen, how companies can turn their data into actionable insights, and how data has been used successfully throughout his diverse and impressive previous positions at Fortune 500 organizations.

» IBM and Pfizer claim AI can predict Alzheimer’s onset with 71% accuracy by Kyle Wiggers on VentureBeat | October 22

Predicting Alzheimer’s disease early may be closer than ever. Pfizer and IBM researchers claim to have developed a machine learning technique that can predict Alzheimer’s disease years before symptoms develop. By analyzing small samples of language data obtained from clinical verbal tests, the team says their approach achieved 71% accuracy when tested against a group of cognitively healthy people.

» KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

» Old but gold, the reliable Reddit thread on ML for more news on machine learning. There’s always something for everyone – tips, tricks, hacks, and more news.


That’s all folks! Have you found something of interest in this weekly roundup? We hope you got inspired! Don’t forget to check our blog for more inspiring articles.

👉 And if you came across an interesting ML article, or maybe wrote one yourself and would like to share it with other people, let us know, we’ll spread the news in our weekly roundup! See you next week!


READ NEXT

ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

10 mins read | Author Jakub Czakon | Updated July 14th, 2021

Let me share a story that I’ve heard too many times.

”… We were developing an ML model with my team, we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried and we needed to re-run pretty much everything”

– unfortunate ML researcher.

And the truth is, when you develop ML models you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters,
  • use different training or evaluation data,
  • run different code (including that small change you wanted to test quickly),
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed).

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information can very quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.

This is where ML experiment tracking comes in. 
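At its simplest, experiment tracking just means recording every run’s configuration and results somewhere you can find them later. Here is a minimal, hypothetical sketch (not Neptune’s API – the `log_run` function and file layout are invented for illustration) that writes each run’s hyperparameters, dataset version, and metrics to its own JSON file:

```python
import hashlib
import json
import time
from pathlib import Path

def log_run(params, data_version, metrics, root="runs"):
    """Persist one experiment run so it can be compared later."""
    run = {
        # A short deterministic id derived from the hyperparameters.
        "id": hashlib.md5(json.dumps(params, sort_keys=True).encode()).hexdigest()[:8],
        "timestamp": time.time(),
        "params": params,              # model and training hyperparameters
        "data_version": data_version,  # which dataset snapshot was used
        "metrics": metrics,            # evaluation results for this run
    }
    Path(root).mkdir(exist_ok=True)
    (Path(root) / f"run_{run['id']}.json").write_text(json.dumps(run, indent=2))
    return run["id"]

run_id = log_run({"lr": 0.01, "layers": 3}, "v2.1", {"accuracy": 0.93})
```

A dedicated tracking tool adds a UI, search, and comparison on top, but even a sketch like this captures the parameters and dataset versions the researcher in the story above wished they had saved.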

Continue reading ->