The availability of tremendous computing power in the cloud was one of the factors behind the machine learning revolution. So it is not surprising that cloud-based services aimed at machine learning specialists are emerging. But which one should you pick?
Cloud-based services are, in fact, nothing new. Nearly every internet user has at least one email account, which makes them a cloud user – one who has no idea where the emails are stored or how much computing power was used to send a message. And usually, nobody cares.
What is Machine Learning as a Service (MLaaS)?
When it comes to machine learning, access to computing power was essential in making the technique popular – a company no longer needs its own server farm to develop a model and use it to automate daily tasks. Still, there are multiple tools to use and processes to keep an eye on when developing a model. Business reality is never as straightforward as development theory.
Machine Learning as a Service (MLaaS) is basically an umbrella term for a set of cloud-based tools. These tools aim to support the daily work of data scientists and data engineers in the way cloud-based office suites have revolutionized the office environment. The MLaaS tools support collaboration, version control, parallelization, and other processes that would otherwise be troublesome. Also, larger vendors deliver easy ways to integrate their MLaaS services with the rest of their portfolio, automating the deployment process or enabling users to enrich daily tasks with machine learning-based tools.
Also, there is a growing need for MLaaS tools. The AI market worldwide keeps getting bigger – according to IDC estimates, it will reach $156.5 billion by the end of the year. So there are many companies willing to pay for data scientists' work, and many others just waiting to grab their share by delivering great tools of the trade. A cherry-picked list of the latter follows.
Amazon Web Services is a Jack-of-all-trades when it comes to cloud services. It allows companies to leverage nearly infinite amounts of computing power and storage. It also provides more sophisticated tools, MLaaS among them.
AWS Machine Learning (ML) provides six machine learning solutions:
1. Amazon Polly
It is a service that turns text into lifelike speech. Leveraging the power of deep learning, it helps developers build applications that talk and create entirely new categories of speech-enabled products. It is also a huge step forward in building inclusive apps for people with sight disabilities.
Polly supports English, Brazilian Portuguese, Danish, French, Japanese, Korean, Mandarin Chinese, and Spanish, among others. The full list of supported languages is available in the Polly documentation.
Polly’s Neural Text-to-Speech (TTS) supports two speaking styles:
- Newscaster Style – For news narration use cases.
- Conversational Style – Ideal for two-way communication like telephony applications.
It also provides Amazon Polly Brand Voice, which can create a custom voice for an organization.
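As a minimal sketch, here is how a Polly call can look with boto3 (AWS's Python SDK). The request-building helper is plain Python; the actual `synthesize_speech` call assumes configured AWS credentials, and the voice name "Joanna" is just one of Polly's English voices.

```python
# Sketch: synthesizing speech with Amazon Polly via boto3.
# The helper only assembles request parameters and runs offline.

def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble keyword arguments for polly.synthesize_speech()."""
    return {
        "Text": text,
        "VoiceId": voice_id,
        "OutputFormat": output_format,
        "Engine": "neural",  # required for the newscaster/conversational styles
    }

def synthesize(text, out_path="speech.mp3"):
    # Requires the boto3 package and configured AWS credentials.
    import boto3
    polly = boto3.client("polly")
    response = polly.synthesize_speech(**build_polly_request(text))
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())

# synthesize("Hello from Amazon Polly!")  # uncomment with AWS credentials
```

The separation between payload building and the network call keeps the interesting part testable without an AWS account.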
2. Amazon SageMaker

SageMaker gives developers and data scientists the services to quickly build, train, and deploy machine learning models without having to worry about the underlying infrastructure.
All machine learning model development steps, including notebooks, experiment management, automatic model creation, and debugging, can be performed in SageMaker’s visual interface. Machine learning involves a lot of repetitive and relatively standardized tasks, and that’s where Amazon SageMaker comes into play.
Benefits of using Amazon SageMaker:
- Fully integrated development environment (IDE) for machine learning tasks
- Automatically build, train, and tune models.
- Reduce data labeling costs by up to 70%.
- Supports all leading deep learning frameworks, such as TensorFlow, PyTorch, Apache MXNet, Chainer, Keras, Gluon, and scikit-learn.
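To make the "automatically build and train" point concrete, the sketch below assembles the request body for boto3's `create_training_job` call, which is how a SageMaker training run is started programmatically. The bucket URIs, role ARN, and image URI are placeholders; a real job needs valid values for all of them.

```python
# Sketch: assembling a SageMaker training-job request for
# boto3's create_training_job(). All identifiers are placeholders.

def build_training_job(name, image_uri, role_arn, s3_train, s3_output):
    """Build the request body for sagemaker.create_training_job()."""
    return {
        "TrainingJobName": name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,          # training container
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                     # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_train,
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "ResourceConfig": {
            "InstanceType": "ml.m5.large",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# With credentials configured:
# import boto3
# boto3.client("sagemaker").create_training_job(
#     **build_training_job("demo-job", "<image-uri>", "<role-arn>",
#                          "s3://my-bucket/train/", "s3://my-bucket/out/"))
```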
3. Amazon Lex
Amazon Lex is a conversational AI service for building “conversational interfaces” into any application, using voice and text, powered by advanced deep learning techniques such as automatic speech recognition (ASR).
Amazon Lex can be considered Polly à rebours – where Polly speaks to the user, Lex listens and understands what the user says or types.
Use cases for Amazon Lex:
- Call Center Chatbots and Voice Assistants
- Q&A bots and informational bots
- Application Bots
- Enterprise Productivity bots
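The use cases above all boil down to sending a user utterance to a bot and acting on the matched intent. The sketch below shows that with boto3's Lex (V1) runtime client; the bot name and alias are hypothetical placeholders.

```python
# Sketch: one conversational turn with an Amazon Lex (V1) bot.
# The request-building helper is plain Python; the post_text call
# requires AWS credentials and a deployed bot.

def build_lex_request(bot_name, bot_alias, user_id, text):
    """Assemble keyword arguments for lex-runtime's post_text()."""
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,     # identifies the conversation session
        "inputText": text,
    }

# With AWS credentials and a deployed bot (names are placeholders):
# import boto3
# lex = boto3.client("lex-runtime")
# reply = lex.post_text(
#     **build_lex_request("OrderFlowers", "prod", "user-1", "I want roses"))
# print(reply["intentName"], reply["message"])
```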
4. Amazon Rekognition

Amazon Rekognition helps identify objects, people, scenes, text, and activities in images and videos, and detects inappropriate content. It also provides accurate facial analysis and facial search capabilities to detect, analyze, and compare faces for user verification tasks.
Benefits of using Amazon Rekognition:
- It provides labels to identify objects such as bikes, telephones, and buildings, and scenes such as a parking lot, beach, or city.
- Custom labels for extending the ability to detect more objects.
- Content moderation
- Text Detection
- Face detection and analysis
- Face search and verification
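A quick sketch of the label-detection capability: the `detect_labels` call is boto3's Rekognition API, while the filtering helper and the hand-written example response are plain Python, so the interesting part runs without an AWS account.

```python
# Sketch: detecting image labels with Amazon Rekognition and keeping
# only labels above a confidence threshold.

def confident_labels(response, min_confidence=80.0):
    """Extract (name, confidence) pairs from a detect_labels response."""
    return [(label["Name"], label["Confidence"])
            for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

# With AWS credentials:
# import boto3
# rekognition = boto3.client("rekognition")
# with open("photo.jpg", "rb") as f:
#     response = rekognition.detect_labels(Image={"Bytes": f.read()},
#                                          MaxLabels=10)
# print(confident_labels(response))

# Hand-written example response in the same shape the API returns:
example = {"Labels": [{"Name": "Bicycle", "Confidence": 96.1},
                      {"Name": "Beach", "Confidence": 54.2}]}
print(confident_labels(example))  # only "Bicycle" passes the 80% threshold
```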
5. Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text.

The service identifies the language of the text and extracts key phrases, places, people, brands, and events from unstructured data.
Amazon Comprehend Medical is widely used to extract information from medical text, helping identify medical conditions, medications, and dosages.
Use cases of Amazon Comprehend:
- Call center analytics
- Index and search product reviews
- Personalized content on the website
- Customer support ticket handling
- Clinical trial recruitment
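For use cases such as call center analytics, the typical pattern is to run `detect_sentiment` and `detect_entities` (both real boto3 Comprehend calls) and then aggregate the results. The tallying helper below is plain Python and is demonstrated on a hand-written example response.

```python
# Sketch: sentiment and entity extraction with Amazon Comprehend,
# plus a pure helper that tallies entity types from a response.

from collections import Counter

def entity_type_counts(response):
    """Count each entity type in a detect_entities response."""
    return Counter(e["Type"] for e in response.get("Entities", []))

# With AWS credentials:
# import boto3
# comprehend = boto3.client("comprehend")
# text = "Jeff Bezos founded Amazon in Seattle."
# sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
# entities = comprehend.detect_entities(Text=text, LanguageCode="en")
# print(sentiment["Sentiment"], entity_type_counts(entities))

# Hand-written example response in the shape the API returns:
example = {"Entities": [{"Type": "PERSON"}, {"Type": "ORGANIZATION"},
                        {"Type": "LOCATION"}, {"Type": "PERSON"}]}
print(entity_type_counts(example))
```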
6. Amazon Transcribe

Amazon Transcribe makes it easy for developers to add speech-to-text capabilities to their applications, leveraging a deep learning technique called automatic speech recognition (ASR).
Additionally, AWS provides Amazon Transcribe Medical, which adds medical speech-to-text capabilities for clinical documentation applications.
Benefits of Amazon Transcribe:
- Create easy-to-read transcriptions
- Filter specific words
- Increase accuracy with customized transcriptions
Use cases of Amazon Transcribe:
- Customer experience analysis
- Post call analytics
- Clinical conversation documentation
- Captioning & Subtitling workflows
- Cataloging audio archives
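Transcription jobs run asynchronously against audio stored in S3. The sketch below builds the request for boto3's `start_transcription_job` call; the job name and bucket URI are placeholders.

```python
# Sketch: starting an Amazon Transcribe job. The helper only assembles
# the request; the actual call needs AWS credentials and an S3 file.

def build_transcription_job(job_name, media_uri, language="en-US"):
    """Assemble keyword arguments for start_transcription_job()."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": media_uri.rsplit(".", 1)[-1],  # e.g. "mp3", "wav"
        "LanguageCode": language,
    }

# With AWS credentials (identifiers are placeholders):
# import boto3
# transcribe = boto3.client("transcribe")
# transcribe.start_transcription_job(
#     **build_transcription_job("call-0001", "s3://my-bucket/call.mp3"))
```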
Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google.
GCP also provides developers and data scientists an AI platform to build, deploy, and manage machine learning models. What makes this offer distinctive is access to the Tensor Processing Unit (TPU), a Google-designed chip built specifically for machine learning workloads.
Apart from this, GCP offers MLOps services that help manage machine learning models, experiments, and end-to-end workflows by deploying robust, repeatable pipelines.
GCP provides the following AI and machine learning services (with a free trial).
This machine learning service provides the following four suites for building machine learning models:
I. AI platform
This platform enables developers, data scientists, and data engineers to streamline their ML workflows. It helps prepare and store datasets with BigQuery and label data for tasks such as classification, object detection, and entity extraction.
II. Cloud AutoML
Cloud AutoML helps developers with limited machine learning knowledge and expertise train high-quality models specific to their business needs. The tool gives developers access to Google’s research work and lets them tune the results to their own needs. Thus, a data scientist or software developer with basic knowledge but little hands-on experience can fine-tune a model using this service.
Products AutoML offers:
- Sight: AutoML Vision derives insights from images, and AutoML Video Intelligence (beta) enables powerful content discovery in videos.
- Language: AutoML Natural Language enables us to build and deploy custom machine learning models that analyze documents, categorize them, identify entities within them, or assess attitudes within them.
- Structured Data: AutoML Tables (beta) builds and deploys machine learning models on structured data.
III. AI building blocks
There are two types of AI building blocks:
- Custom Models – Helps in object identification and classification by leveraging state-of-the-art transfer learning and neural architecture search to build domain-specific custom machine learning models with better accuracy.
- Pretrained Models – Cut down the hassle of training a model and enable rapid development of ML-powered applications.
IV. AI infrastructure

Google Cloud Platform provides the infrastructure to train deep learning models cost-effectively with high-performance Cloud GPUs and Cloud TPUs. This offer is aimed at seasoned AI-developing teams who can take advantage of the powerful hardware on offer. It also enables an ML-using company to save money by renting specialized hardware for minutes of training instead of running commodity hardware for hours.
The simplest example of conversational AI is a chatbot. A chatbot is an application that interacts with humans in the form of text or text-to-speech.
So what does conversational AI offer?
Speech-to-text is the process of converting voice commands and spoken audio into text.
Google’s speech-to-text provides the following features:
- Speech Adaptation – Customize speech recognition to transcribe domain-specific terms and rare words.
- Domain-specific models – Speech-to-text can use pre-trained machine learning models for specific audio types and sources.
- Streaming speech recognition – Real-time speech recognition.
- Speech-to-Text On-Prem – Organizations can protect speech data by running Google’s speech recognition technology on-premises, while retaining full control over the infrastructure.
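The speech adaptation feature above maps to the `speechContexts` field of the Speech-to-Text REST API (`speech.googleapis.com/v1/speech:recognize`). The sketch below builds the JSON request body; sending it requires an authorized HTTP client, and the phrase hints shown are arbitrary examples.

```python
# Sketch: JSON body for Google's Speech-to-Text recognize endpoint,
# with speech adaptation supplied as phrase hints.

import base64

def build_recognize_body(audio_bytes, language="en-US", phrases=()):
    """Build the request body for POST /v1/speech:recognize."""
    return {
        "config": {
            "languageCode": language,
            # Speech adaptation: bias recognition toward domain terms.
            "speechContexts": [{"phrases": list(phrases)}],
        },
        # Audio content is sent base64-encoded in the JSON body.
        "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
    }

body = build_recognize_body(b"\x00\x01", phrases=["MLaaS", "SageMaker"])
# POST this body to the endpoint with an authorized client
# (e.g. google-auth plus requests) to get back transcripts.
```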
Text-to-Speech can convert text into natural-sounding human speech.
Google’s text-to-speech provides the following features.
- WaveNet Voices – WaveNet generates speech that sounds more natural than other text-to-speech systems. It creates raw audio waveforms from scratch, leveraging a neural network trained on a large volume of speech samples.
- Voice tuning – Personalize the selected voice by shifting the pitch by up to 20 semitones and adjusting the speaking rate to be up to 4x faster or slower.
- Text and SSML support – Customize speech with SSML (Speech Synthesis Markup Language) tags that allow you to add pauses, date and time formatting, and pronunciation instructions.
- Custom Voice (beta) – Train a custom speech synthesis model using your audio recordings to create a unique and more natural-sounding voice.
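The voice-tuning and SSML features above correspond to fields of the Text-to-Speech REST API (`texttospeech.googleapis.com/v1/text:synthesize`). A sketch of the request body, with the pitch and rate limits from the feature list enforced as sanity checks; the voice name is one example WaveNet voice:

```python
# Sketch: JSON body for Google's Text-to-Speech synthesize endpoint,
# exercising SSML input and the voice-tuning knobs.

def build_synthesize_body(ssml, voice_name="en-US-Wavenet-D",
                          pitch=0.0, speaking_rate=1.0):
    """Build the request body for POST /v1/text:synthesize."""
    assert -20.0 <= pitch <= 20.0, "pitch is limited to +/-20 semitones"
    assert 0.25 <= speaking_rate <= 4.0, "rate is limited to 0.25x-4x"
    return {
        "input": {"ssml": ssml},
        "voice": {"languageCode": voice_name[:5], "name": voice_name},
        "audioConfig": {
            "audioEncoding": "MP3",
            "pitch": pitch,
            "speakingRate": speaking_rate,
        },
    }

body = build_synthesize_body(
    "<speak>Hello <break time='300ms'/> world.</speak>",
    pitch=2.0, speaking_rate=1.25)
# The response carries base64-encoded MP3 audio in its audioContent field.
```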
Dialogflow is a linguistic and visual bot-building platform for designing conversational user interfaces and integrating them into mobile applications, web applications, and interactive voice response systems. The tool can analyze multiple types of input, such as text or audio.
Dialogflow uses the following terms in its environment:
- Agents – A virtual agent that handles conversations with end-users.
- Intents – An intent is the end-user’s intention for a conversational turn. An agent can define many intents that together make up a conversation. Dialogflow matches the end-user expression to the best intent in the agent by performing intent classification.
- Entities – Each intent parameter has a type, called an entity, that determines how data is extracted from the end-user expression.
- Contexts – Contexts let Dialogflow control the flow of the conversation.
- Follow-up intents – A follow-up intent is a child of its associated parent intent. When a follow-up intent gets created, an output context is automatically added to the parent intent, and an input context of the same name is added to the follow-up intent.
- Dialogflow Console – A web user interface for managing Dialogflow agents.
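To make the intent-matching idea tangible, here is a deliberately naive illustration. This is not Dialogflow itself: a real agent performs ML-based intent classification, while this toy stands in with keyword overlap scoring, and the intent names are invented.

```python
# Toy illustration of intent matching (NOT Dialogflow's algorithm):
# score each intent by keyword overlap with the utterance.

import re

INTENTS = {
    "book.appointment": {"keywords": {"book", "appointment", "schedule"}},
    "order.status": {"keywords": {"order", "status", "track"}},
}

def match_intent(utterance):
    """Return the intent whose keywords best overlap the utterance."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    best = max(INTENTS, key=lambda i: len(INTENTS[i]["keywords"] & words))
    score = len(INTENTS[best]["keywords"] & words)
    return best if score > 0 else None  # None mirrors a fallback intent

print(match_intent("I want to book an appointment"))  # book.appointment
print(match_intent("where is my order?"))             # order.status
```

In a real agent, entities would additionally extract parameters (a date, an order number) from the matched expression.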
Use cases of Dialogflow:
- Chatbots – Interfaces can be programmed to answer questions, access orders, make appointments, and take requests.
- Internet of things (IoT) – It can be applied to IoT devices to make them better at understanding context and responding with precision.
Microsoft Azure ML Studio is a web interface for developers and data scientists that provides a wide range of services for building, training, and deploying machine learning models faster. Microsoft does its best to keep up with the biggest web players despite the company’s offline beginnings.
Azure provides a no-code UI that accelerates the development and deployment of machine learning models with the help of a drag-and-drop interface.
It has built-in modules that help preprocess data and build and train ML/DL models for tasks such as computer vision, text analytics, recommendation systems, and anomaly detection.
Azure also provides Automated Machine Learning to rapidly build highly accurate models by automating iterative tasks with smarter prototyping and development.
2. Azure MLOps
Azure now supports managing the end-to-end machine learning lifecycle. It helps data scientists and developers to track datasets, code, experiments, and environments.
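A minimal sketch of what this tracking looks like in code. The commented calls use the Azure ML Python SDK's v1-style `azureml-core` API (`Workspace`, `Experiment`, `run.log`), and assume a workspace config file on disk; the plain-Python helper just illustrates the kind of per-run metric record such tracking maintains.

```python
# Sketch: experiment tracking. The azureml-core calls are commented
# out (they need a real workspace); the helper is an illustrative
# stand-in for the metric history a tracked run accumulates.

def record_metric(store, run_id, name, value):
    """Append a metric value to a run's history, mirroring run.log()."""
    store.setdefault(run_id, {}).setdefault(name, []).append(value)
    return store

# With an Azure ML workspace config on disk:
# from azureml.core import Workspace, Experiment
# ws = Workspace.from_config()
# run = Experiment(ws, "churn-model").start_logging()
# run.log("accuracy", 0.91)   # tracked and visualized in the studio UI
# run.complete()

store = {}
record_metric(store, "run-1", "accuracy", 0.89)
record_metric(store, "run-1", "accuracy", 0.91)
```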
If an organization works for regulated industries such as government, finance, and healthcare, a Private IP is necessary.
Workspace Private Link is a network isolation feature that enables access to Azure Machine Learning over a private IP within a virtual network (VNet).
Azure Machine Learning Studio supports all major frameworks, such as scikit-learn, TensorFlow, Keras, MXNet, and PyTorch.
Key features of Azure Machine Learning Studio:
- Collaborative Notebooks
- Automated Machine Learning
- Drag and Drop Model Development and Management
- Data Labelling
IBM Watson Machine Learning provides a wide range of tools and services so anyone can build, train, and deploy Machine Learning models.
With the help of graphical tools, we can build a model in minutes and automate hyperparameter optimization with popular frameworks such as TensorFlow, Caffe, PyTorch, and Keras.
IBM Watson is famous for its pioneering role in ML marketing. The machine took part in the Jeopardy! TV game show, beating its champions in the same way the IBM-designed Deep Blue had beaten Garry Kasparov in a chess match years before.
Watson Studio AI suite has three toolkits.
This category is specifically for visual recognition. It has built-in models to analyze images for scenes, objects, and many other categories.
A graphical visual recognition modeler requires no technical proficiency to automatically train a model to classify images by scenes, objects, or custom content.
This category is specifically for natural language classification. It supports multiple languages, including English, Arabic, French, German, Italian, Japanese, Korean, Portuguese (Brazilian), and Spanish. Like the tool above, it is accessible to people with little to no tech skills.
Watson Machine Learning enables us to build, train, and deploy analytical models and neural networks.
IBM Watson Studio tools
- AutoAI experiments – AutoAI automatically preprocesses data, selects the best estimator, and then generates model candidate pipelines for review and comparison.
- SPSS Modeler – It presents a graphical view of the model while developing it.
- Notebooks – An interactive programming environment for working with data, testing models, and rapid prototyping.
BigML is a comprehensive machine learning platform that provides a wide range of algorithms for developing and managing machine learning models.
The tool facilitates predictive applications across industries such as aerospace, automotive, energy, entertainment, financial services, food, healthcare, IoT.
BigML provides the following modes of services:
- Web Interface: A responsive web interface that helps upload data and develop predictive models.
- Command Line Interface: The command-line tool, called bigmler, is built on the Python API for the service and allows more flexibility than the web interface, such as developing models locally or remotely and performing cross-validation tasks.
- API: A REST API with wrappers available for many programming languages, including Python, Ruby, PHP, C#, Java, Bash, Clojure, and Objective-C.
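As a sketch of the REST workflow (source → dataset → model → prediction), the helpers below assemble the JSON bodies such calls carry. The field names (`dataset`, `objective_field`, `model`, `input_data`) reflect my reading of BigML's API and should be checked against the current documentation; the resource ids shown are placeholders in BigML's `type/id` format, and real calls must carry `username` and `api_key` credentials.

```python
# Sketch: request payloads for BigML's REST API. Resource ids are
# placeholders; field names are assumptions to verify against the docs.

def model_payload(dataset_id, objective_field=None):
    """Body for creating a model from an existing dataset resource."""
    body = {"dataset": dataset_id}
    if objective_field is not None:
        body["objective_field"] = objective_field  # column to predict
    return body

def prediction_payload(model_id, input_data):
    """Body for requesting a prediction from an existing model."""
    return {"model": model_id, "input_data": input_data}

# Payloads would be POSTed to https://bigml.io/model and
# https://bigml.io/prediction with credentials appended, or passed
# through the bigml Python bindings instead.
p = prediction_payload("model/abc123", {"petal length": 4.2})
```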
BigML’s products include:
WhizzML is a domain-specific language for automating Machine Learning workflows, implementing high-level Machine Learning algorithms, and easily sharing them with others.
WhizzML provides the infrastructure for creating and sharing machine learning scripts and libraries with others.
BigML.io is a machine learning REST API for easily building and running predictive models and bringing them into real-world projects. With BigML.io, anyone can perform basic supervised and unsupervised machine learning tasks.
BigMLer is a command-line tool for BigML’s API. It wraps BigML’s API Python bindings to offer a high-level command-line script that can create and publish datasets and models, develop ensembles, make local predictions from multiple models and clusters, and simplify many other machine learning tasks.
The BigML PredictServer keeps models in memory and is optimized to make predictions quickly. It is highly scalable, and you can integrate the BigML PredictServer with existing applications and data centers.
Flatline is a Lisp-like language for specifying values extracted or generated from an input dataset, using a finite sliding window of input rows.
It enables us to programmatically perform an array of data transformations, including filtering and new field generation.
Flatliner is a handy code editor for testing Flatline expressions.
The tools listed above are only the tip of the iceberg. There are dozens, if not hundreds, of tools that support ML development in multiple ways: tools designed to handle automation, support versioning, or deliver an end-to-end environment for machine learning development.
The key is to find the tool that delivers the best result at the moment – every data scientist or engineer has his or her own way of working and finds some elements of the process more wearing than others. So if you hate some aspect of your otherwise excellent workflow – there is an app for that!
15 Best Tools for Tracking Machine Learning Experiments
Pawel Kijko | Posted February 17, 2020
While working on a machine learning project, getting good results from a single model-training run is one thing, but keeping all of your machine learning experiments organized and having a process that lets you draw valid conclusions from them is quite another. That’s what machine learning experiment management helps with.
In this article, I will explain why you, as a data scientist or machine learning engineer, need a tool for tracking machine learning experiments, and what the best software for that is.
Tools for tracking machine learning experiments – who needs them and why?
- Data Scientists: In many organizations, machine learning engineers and data scientists tend to work alone. That makes some people think that keeping track of their experimentation process is not that important as long as they can deliver that one last model. This is true to an extent, but when you want to come back to an idea, re-run a model from a couple of months ago or simply compare and visualize the differences between runs, the need for a system or tool for tracking ML experiments becomes (painfully) apparent.
- Teams of Data Scientists: A specialized tool for tracking ML experiments is even more useful for the whole team of data scientists. It allows them to see what others are doing, share the ideas and insights, store experiment metadata, retrieve it at any time and analyze it whenever they need to. It makes the teamwork much more efficient, prevents situations where several people work on the same task, and makes onboarding of new members way easier.
- Managers/Business people: Tracking software creates an opportunity to involve other team members, like managers or business stakeholders, in your machine learning projects. Thanks to the ability to prepare visualizations, add comments, and share work, managers and co-workers can easily track progress and cooperate with the machine learning team.
Here is an in-depth article about experiment management for those of you who want to learn more.