This Week in Machine Learning: DGM Models, Yeast, and Customer Retention

To stay on top of the latest news in Data Science, AI, tech, and ML, you need to follow the trends and changes. To help you, we’ve picked the top stories of the week.

Here’s a dose of the latest news, discoveries, and inspiring stories from the world of Machine Learning. There is something for everyone. Enjoy your read!

Weekly Roundup: April 27 – May 4

> Neptune.ai blog – make sure to visit our blog for interesting and in-depth articles on machine learning.

Also, we’ve recently launched a podcast so tune in and enjoy! 🎧

> Microsoft Research Unveils Three Efforts to Advance Deep Generative Models by Jesus Rodriguez | April 27

Microsoft Research’s efforts with Optimus, FQ-GAN, and Prevalent present new ideas that can be incorporated into the next generation of deep generative models (DGMs). Microsoft Research has open-sourced the code related to these efforts together with the research papers.

> Applying Machine Learning to… Yeast? on the Google AI Blog | April 29

In collaboration with Calico Life Sciences, Google AI presents “Learning causal networks using inducible transcription factors and transcriptome-wide time series”, published in Molecular Systems Biology. Based on exhaustive experiments, they built a genome-wide model for the regulation of gene expression in S. cerevisiae and verified some of the results experimentally, enabling future investigations into less well understood biological systems. The Induction Dynamics gene Expression Atlas is available from Calico in a format that is easy to manipulate in Python, with open-sourced code on the Google Research GitHub. The data is hosted in a standard format at the Gene Expression Omnibus.

> How Machine Learning Can Help with Customer Retention by Euge Inzaugarat | April 30

In the article, the author walks through building a churn model to understand why customers are leaving.

> How A.I. may help solve science’s ‘reproducibility’ crisis by Jonathan Vanian on Fortune | May 4

Researchers often have trouble reproducing, or verifying, supposedly groundbreaking work described in scientific papers, raising questions about whether the findings in studies are genuine. Read about how AI can help.

> KDnuggets™ News of the week with top stories and tweets of the past week, plus opinions, tutorials, events, webinars, meetings, and jobs.

> Old but gold, the reliable Reddit thread on ML for more news on machine learning

That’s all folks! I hope you found something of interest in this weekly roundup. Don’t forget to check our blog for more inspiring articles.

👉 Came across an interesting ML article? Or maybe you wrote one yourself and would like to share it with other people? Let us know, we’ll spread the news in our weekly roundup!


READ NEXT

ML Experiment Tracking: What It Is, Why It Matters, and How to Implement It

10 min read | Author Jakub Czakon | Updated July 14th, 2021

Let me share a story that I’ve heard too many times.

“…We were developing an ML model with my team; we ran a lot of experiments and got promising results…

…unfortunately, we couldn’t tell exactly what performed best because we forgot to save some model parameters and dataset versions…

…after a few weeks, we weren’t even sure what we had actually tried, and we needed to re-run pretty much everything.”

– unfortunate ML researcher.

And the truth is, when you develop ML models, you will run a lot of experiments.

Those experiments may:

  • use different models and model hyperparameters
  • use different training or evaluation data
  • run different code (including that small change you wanted to test quickly)
  • run the same code in a different environment (not knowing which PyTorch or TensorFlow version was installed)

And as a result, they can produce completely different evaluation metrics. 

Keeping track of all that information can very quickly become really hard, especially if you want to organize and compare those experiments and feel confident that you know which setup produced the best result.

This is where ML experiment tracking comes in. 
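To make this concrete, here is a minimal, tool-agnostic sketch of the core idea: persist everything that defines a run (parameters, data version, environment) next to its results, so runs can be found and compared later. All names below (log_run, the runs/ directory, the placeholder parameters) are made up for illustration; this is not the API of Neptune or any other tracking tool.

```python
# Illustrative sketch only: a hand-rolled experiment log, not a real tool's API.
import json
import platform
import time
from pathlib import Path

def log_run(params: dict, metrics: dict, run_dir: str = "runs") -> Path:
    """Save everything needed to identify, reproduce, and compare a run."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "params": params,        # model and training hyperparameters
        "metrics": metrics,      # evaluation results for this setup
        "environment": {         # enough detail to re-create the setup
            "python": platform.python_version(),
        },
    }
    out_dir = Path(run_dir)
    out_dir.mkdir(exist_ok=True)
    out_file = out_dir / f"run_{int(time.time())}.json"
    out_file.write_text(json.dumps(record, indent=2))
    return out_file

# Example: record one hypothetical training run (all values are placeholders).
log_run(
    params={"model": "resnet18", "lr": 3e-4, "batch_size": 64,
            "dataset_version": "v2.1", "git_commit": "abc1234"},
    metrics={"val_accuracy": 0.91, "val_loss": 0.27},
)
```

A dedicated experiment tracker does this bookkeeping for you automatically and at scale, adding search, comparison dashboards, and artifact storage on top.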

Continue reading ->

Best Practices for Dealing with Concept Drift

Read more

Deep Dive into TensorBoard: Tutorial With Examples

Read more

Exploratory Data Analysis for Natural Language Processing: A Complete Guide to Python Tools

Read more

How to Track and Organize ML Experiments That You Run in Google Colab

Read more