Machine learning and deep learning are the backbone of present-era artificial intelligence. While researchers are still finding their way to Artificial General Intelligence (AGI), weak or narrow artificial intelligence has already secured a place in all major industries; by 2021, almost 37 percent of organizations were using some form of narrow AI. Artificial intelligence has become even more significant in the wake of the Internet of Things (IoT). There are mainly two areas where AI is highly applicable at present. One is cloud platforms and services, where servers are loaded with high volumes of Big Data and must perform detail-oriented, repetitive tasks that would be too tedious or error-prone for a human worker. The other is edge devices, where AI is implemented in a small footprint to perform particular cognitive tasks. Just as embedded systems capture some 98 percent of the electronics industry thanks to their task-specific applications, artificial intelligence is set to follow the same trend: most AI will run on edge devices, including IoT devices, smartphone applications, desktop applications, wearables, and industrial equipment.
All the narrow AI available today is implemented through machine learning and deep learning. Machine learning is a sub-field of artificial intelligence, and deep learning is in turn a subset of machine learning. While AI encompasses many different fields and technologies under development toward Artificial General Intelligence, machine learning focuses on implementing one of the most important human cognitive functions: learning. In deep learning, machine learning is implemented in the form of Artificial Neural Networks (ANN), so that the machine can learn from the data itself rather than only identifying data patterns or correlations between data and outcomes. In this article, we introduce the concept of machine learning and discuss how to get started with it.
The DATA-FIRST approach
The 21st century commenced with the beginning of an era of connectivity. The internet, social media, e-commerce, electronic trading, IoT, IIoT, and online business analytics have made data so important that it is now called the new oil, and data science has been dubbed the sexiest job of the century. Businesses and organizations have enormous data at their disposal because almost everyone carries at least one internet-connected computing device at all times, be it a computer/laptop, smartphone, wearable, or IoT device. Businesses and organizations need to extract valuable insights quickly from large volumes of real-time data.
“So, data is everywhere”
Secondly, one area of artificial intelligence is concerned with replicating human senses. For example, human vision is reproduced in the form of computer vision, and human speech is handled through natural language processing and natural language understanding. Beyond the natural senses, machines must deal with many other types of information, like business data, user data, user behavior, sensor data, and navigation data. All that computers and machines natively do is process data.
“So, everything around is data”
Machine learning evolved as a technology for making data-driven decisions, where machines learn from data themselves and apply human-like reasoning independently. It aims at enabling machines to derive their own program for processing data, based on their ingestion and understanding of the incoming data. Unlike traditional computer theory, where computers are dull machines executing user-defined programs, machine learning envisions computers and machines as natural receptors of data, using that data to make data-driven decisions on their own without any user-defined program or human intervention.
This way, machines can derive valuable insights and perform actions independently, making them autonomous agents with human-like cognitive abilities. Infusing human-like intelligence into autonomous data processing makes it possible to take data-driven decisions at a large scale. With dependence on user-defined programs and the involvement of humanware, scalability in detail-oriented and data-heavy tasks can never be achieved.
Machine learning vs. computer programming
Machine learning is an entirely different approach to computing itself. Traditional computers are digital circuits with a defined set of instructions and operations. A computer needs to be programmed by a user to perform any useful computing task. A program is natively a sequence of instructions operating on incoming data, and it is the program that decides the outcome of the data. Overall, the computer acts as a data processor in traditional computing.
Machine learning takes a very different approach. In machine learning, the computer is not a data processor; it is instead a data observer. The machine is given access to data and its outcomes, and it tries to infer inherent patterns in the incoming data and possible correlations between data and outcomes. By deducing data patterns and the relationship between data and outcomes, the machine builds a model with the help of a machine learning algorithm. The model is equivalent to a program that predicts possible new data values or outcomes from incoming data based on its experience of past observations. The machine learning algorithm is user-defined software that derives the model, in effect deducing a data-processing program on its own. With its help, the computer can process future data without any explicit programming or human intervention. Here, the computer itself becomes a data programmer.
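The contrast can be sketched in a few lines of Python. Below, the "traditional" function hard-codes the rule y = 2x + 1, while the "learned" model recovers the same rule from example (data, outcome) pairs using a minimal least-squares fit. The data and the rule are illustrative assumptions chosen only to show the idea:

```python
# Traditional programming: a human writes the rule explicitly.
def program(x):
    return 2 * x + 1  # user-defined program

# Machine learning: the rule is inferred from observed (data, outcome) pairs.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # outcomes produced by the (unknown) underlying rule

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Ordinary least squares: slope and intercept that best fit the observations.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def model(x):
    return slope * x + intercept  # rule derived from data, not hand-written

print(model(10))  # → 21.0, matching program(10): the machine "deduced" the program
```

Here the least-squares computation plays the role of the machine learning algorithm, and the fitted `model` is the derived program that can process future data.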
Machine learning is usually applied to imitate specific cognitive functions. These functions in the programming paradigm are known as the Machine Learning Tasks. With a given task, a machine learning algorithm requires some historical data or real-time data for learning or training the model.
The data could be fed from a database, flat files, logs, or real-time inputs/streams. The data could be labeled, i.e., its attributes or properties are already defined, or unlabeled, i.e., not predefined or tabulated with attributes or properties. Usually, the task depends upon the nature of the data itself. If the incoming data is factual, the task is typically related to identification, recognition, classification, or feature extraction. Classic examples of such data occur in computer vision and natural language processing: a machine is trained to identify, recognize, classify, or extract features from an image or video in computer vision, and to identify words, sentences, and their implications in natural language processing.
If the incoming data is variable or dynamic, the task is typically related to predicting future values, finding anomalies, or predicting data outcomes. For example, in algorithmic trading, a machine learning model predicts the future value of a stock, cryptocurrency, commodity, or exchange rate.
For machine learning, either the data itself is sufficient, or sometimes both the data and its outcomes are required; it all depends upon the task to be implemented by the machine learning algorithm. The outcomes may even be entirely unknown and need to be predicted by the model itself. For example, an algorithm that identifies a customer's shopping preferences can never know in advance what items the customer will buy this time; it can only predict a shopping list based on the customer's past shopping transactions.
What is machine learning?
The idea of machine learning dates back to Alan Turing's Universal Turing Machine, conceptualized in 1935. Since then, there have been many feats and accomplishments in the field, and the definition of machine learning has also evolved over the years. A generally accepted definition of machine learning is the one given by Arthur Samuel:
“Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.”
In the last two decades, machine learning has seen widespread practical implementation in the form of narrow AI, so a more specific definition from Tom Mitchell is now widely accepted. It defines machine learning as follows:
“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”
Machine learning is implemented through a learning algorithm that trains a machine learning model. As is clear from the definition itself:
1. The model is intended to execute some specific task T that requires human-like reasoning or analysis.
2. The model trains itself with experience E, where the experience is gained from the incoming data to identify data patterns or correlate data and its outcomes.
3. The model improves its performance P as its experience grows since it can better predict data patterns and possible outcomes by studying the data and/or results.
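Mitchell's definition can be made concrete with a toy experiment. The sketch below uses a nearest-centroid classifier on hypothetical one-dimensional data (chosen purely for illustration): the task T is to classify numbers, the experience E is the number of labeled training samples consumed, and the performance P is test accuracy, which improves as E grows:

```python
# Task T: classify a number as class 0 (values near 0) or class 1 (near 5).
# Experience E: labeled training samples. Performance P: accuracy on test data.
# All values are illustrative; the first sample of each class is an outlier.
train0 = [4.0, 0.2, -0.1, 0.1, -0.2, 0.0]   # class-0 training samples
train1 = [1.0, 5.2, 4.9, 5.1, 4.8, 5.0]     # class-1 training samples
test = [(0.3, 0), (-0.3, 0), (4.7, 1), (5.3, 1)]  # (value, true class)

def accuracy(k):
    """Train a nearest-centroid model on the first k samples per class."""
    c0 = sum(train0[:k]) / k                 # centroid of class 0 so far
    c1 = sum(train1[:k]) / k                 # centroid of class 1 so far
    # Predict class 1 when the sample lies closer to centroid c1.
    correct = sum((abs(x - c0) > abs(x - c1)) == bool(y) for x, y in test)
    return correct / len(test)

print(accuracy(1))  # little experience: outliers dominate → 0.0
print(accuracy(6))  # more experience E improves performance P → 1.0
```

With only one sample per class, the outliers mislead the centroids and every test point is misclassified; with all six, the centroids settle near the true class centers and accuracy reaches 1.0.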
Where is machine learning useful?
At present, machine learning is most useful in narrow AI applications, where it is applied to specific tasks requiring human cognitive abilities. Machine learning is applicable in several scenarios where developing and maintaining user-defined programs for the same tasks may not be feasible or cost-effective. First of all, machine learning is best suited to data-heavy tasks that involve large-scale, repetitive, detail-oriented data processing. Humans cannot process data at the speed of computers, and computers cannot apply natural intelligence to data processing with traditional user-defined programs.
Secondly, machine learning is best suited to data and systems that change dynamically over time. Human users cannot monitor such systems and applications within accepted time constraints, so the computer must apply some intelligence to the data and its possible outcomes. For example, an internet service provider can use machine learning to track and ensure network connectivity for its customers, instead of having a human worker manually test every connection.
Third, machine learning is valuable where human expertise or intervention is unavailable or impossible. For example, a rover sent to another planet cannot depend on human intervention to navigate its path; it essentially needs to learn by observing its environment. Machine learning is also helpful where human expertise cannot be directly translated into computational tasks, or where computing explicitly requires domain-specific knowledge, as in computer vision, speech recognition, and automatic language translation.
Understanding machine learning tasks
Machine learning algorithms are designed to train a model that performs a specific cognitive task. Machine learning techniques fall under three broad categories as follows.
1. Supervised machine learning
2. Unsupervised machine learning
3. Reinforcement machine learning
In supervised learning, the machine is provided labeled data where attributes or properties are explicitly defined for the data points. Some examples of machine learning tasks in supervised learning are the following.
1. Classification: In this task, the machine is required to classify or categorize data points based on their attached attributes. It needs to identify common and differing attributes and assign classes or categories to the available data points accordingly. For example, a machine learning model may be trained to classify vehicles at a toll booth or to classify items in a grocery store. Common machine learning algorithms applied for classification are Logistic Regression, K-Nearest Neighbors, Decision Tree, Naive Bayes, and Support Vector Machine.
2. Regression: In this task, the machine is required to predict a numerical outcome for given labeled data points. For example, a machine learning model may need to predict the price of houses based on their area, location, and architecture. Common machine learning algorithms applied for regression are Multiple Linear Regression, Decision Tree, and Support Vector Machine.
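K-Nearest Neighbors, listed above for classification, is simple enough to sketch in pure Python. The toy dataset below (vehicle weight and wheel count, labeled "car" or "truck") is a hypothetical example invented only to illustrate supervised learning from labeled data:

```python
from collections import Counter

# Labeled training data: (feature vector, class). Hypothetical toy dataset:
# (weight in kg, number of wheels) labeled as "car" or "truck".
train = [((1200, 4), "car"), ((1500, 4), "car"), ((900, 4), "car"),
         ((9000, 6), "truck"), ((12000, 8), "truck"), ((15000, 10), "truck")]

def knn_classify(sample, k=3):
    """Predict the majority class among the k nearest training points."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], sample))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_classify((1300, 4)))   # → car
print(knn_classify((11000, 8)))  # → truck
```

Because the labels are given in advance, the model never has to invent categories; it only has to generalize them to unseen samples, which is the defining trait of supervised learning.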
In unsupervised learning, the data is not labeled with attributes or properties. The machine itself has to predict labels for the data points. Some examples of machine learning tasks in unsupervised learning are the following.
1. Anomaly Detection: In this task, the machine has to identify unusual data patterns or events based on its experience of historical data. For example, a machine learning model may be applied to detect email spam, identify fraudulent transactions, or spot hacking attacks in a cyber-security system. Common techniques used for anomaly detection are Local Outlier Factor, K-Nearest Neighbors, Elliptic Envelope, Boxplot, DBSCAN, Isolation Forest, Support Vector Machines, and Z-score.
2. Structured annotation: In this task, the machine adds structured metadata as annotations to the given data points to derive extra information or relationships between data samples. For example, a machine learning model may be applied to add meta tags to parts of some images indicating extracted features like places, people, or objects within the images. Popular data annotation tools include Annotell, V7 Lab Darwin, Lighttag, Hasty, Dataloop AI, Hivemind, Deepen AI, and Datasaur AI.
3. Translation: In this task, a machine is required to identify the language of a given text and translate it into another language. Both supervised and unsupervised machine learning algorithms are used for natural language processing. Standard supervised algorithms applied to NLP include Maximum Entropy, Bayesian Networks, Conditional Random Fields, Support Vector Machines, and Neural Networks; Recurrent Neural Networks are often used for language translation. Common unsupervised techniques used for NLP are Latent Semantic Indexing and Matrix Factorization.
4. Clustering: In this task, the machine needs to arrange data samples into clusters/groups by observing their inherent latent patterns, similarities, dissimilarities, and relationships. For example, a machine learning model may be required to group products by identifying their features and specifications. Common machine learning algorithms applied for clustering include Spectral Clustering, K-Means, Affinity Propagation, DBSCAN, OPTICS, Mean Shift, BIRCH, Agglomerative Clustering, and Mini-Batch K-Means.
5. Computer vision: In this task, a machine may be required to classify images, perform segmentation, extract and annotate features, and detect motion. Many machine learning algorithms are applied to different aspects of computer vision; common ones include SURF, SIFT, the Viola-Jones algorithm, the Lucas-Kanade algorithm, Mean Shift, KNN, and Naive Bayes.
6. Transcriptions: In this task, a machine must segregate continuous unstructured data into discrete structured data. Some of the examples of transcription include optical character recognition, text extraction from images, and speech-to-text engines.
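The K-Means algorithm mentioned under clustering can be sketched in a few lines. Note that no labels appear anywhere in the code: the algorithm alternates between assigning points to their nearest centroid and moving each centroid to the mean of its cluster. The one-dimensional data values and starting centroids are illustrative assumptions:

```python
# K-Means on one-dimensional data: no labels are given; the algorithm
# discovers two groups on its own. The data values are illustrative.
points = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(points, centroids=[0.0, 5.0])
print(centroids)  # → [1.0, 9.0]: the two latent groups were found unsupervised
```

Production implementations add smarter initialization (e.g. k-means++) and convergence checks, but the assign/update loop above is the core of the algorithm.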
In reinforcement learning, the machine acts as an agent that learns from feedback, in the form of rewards or penalties, received from its environment. The machine continuously observes the environment and the outcomes of its actions to improve its future decisions. Popular reinforcement learning approaches include the Markov Decision Process framework and Q-Learning.
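Q-Learning can be demonstrated on a tiny, hypothetical "corridor" environment invented for this sketch: five states in a row, a reward at the rightmost state, and two actions (step left or right). The agent learns purely from reward feedback which action is best in each state:

```python
import random

# Tabular Q-Learning on a toy corridor: states 0..4, reward at state 4.
# Actions: 0 = step left, 1 = step right. The environment and reward
# scheme are illustrative assumptions, not from any standard benchmark.
N_STATES, ACTIONS = 5, (0, 1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
rng = random.Random(0)                   # fixed seed for reproducibility

def step(state, action):
    """One environment transition: returns (next_state, reward)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy selection: mostly exploit, occasionally explore.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Q-Learning update: nudge Q toward reward + discounted best future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned policy steps right in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

No one tells the agent to "go right"; the reward signal alone, propagated backward through the Q-table, shapes that behavior, which is the essence of reinforcement learning.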
Understanding machine learning experience
In machine learning, experience refers to the ingestion of data points or data samples. The model observes the data samples to identify patterns in the incoming data or to recognize the relationship between data and its outcomes. The model may consume the dataset all at once as historical data, or as real-time data acquired over a period of time. The model can consume data samples at any time; it is an iterative process that ideally never ends. Feeding data samples to a machine learning model is called training the model, and how the model learns from the experience is determined by the machine learning algorithm.
Understanding machine learning performance
A model learns from experience to improve its performance at the given task. The performance can be measured using indicators like accuracy, sensitivity, precision, specificity, recall, misclassification rate, error rate, and F1 score. The particular performance indicators applicable to a task depend upon the applied machine learning algorithm. The indicators may be evaluated on the training data set or on fresh data samples held aside for validation and testing.
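Several of the indicators listed above can be computed directly from the counts of correct and incorrect predictions. The sketch below uses a hypothetical set of test labels and model predictions (1 = positive class, 0 = negative class), chosen only to illustrate the formulas:

```python
# Computing common performance indicators from predictions on a test set.
# These labels are illustrative; 1 = positive class, 0 = negative class.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # overall fraction correct
precision = tp / (tp + fp)                     # predicted positives that are real
recall    = tp / (tp + fn)                     # real positives that were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy, precision, recall, f1)  # → 0.7 0.6 0.75 0.666...
```

Note how the indicators disagree: accuracy looks acceptable at 0.7 while precision is only 0.6, which is why the choice of indicator should match the task rather than defaulting to accuracy alone.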
Getting started with machine learning
Machine learning is one of the hottest technologies at this point, and beginners are often confused about how to start. It is easy to begin with machine learning in the following three steps.
1. Choose a programming language: Machine learning can be implemented in several languages; Python is the best programming language for beginners to start exploring machine learning.
2. Learn to use tools and packages of chosen programming language: Each programming language has its tools and libraries for implementing machine learning algorithms. After selecting a programming language, you need to learn tools and libraries applicable to machine learning.
3. Explore machine learning algorithms: Once acquainted with language-specific tools and packages, explore different machine learning algorithms to train models for real-life applications.
Machine learning is fascinating. It can be implemented in several programming languages, where a language may be preferred depending upon the intended application; there is no single best programming language for machine learning. Python is the most popular language to start with, and it can be easily used on any desktop computer or even on single-board computers. For applying machine learning to microcontroller applications, C/C++ is preferred, using TinyML frameworks such as TensorFlow Lite for Microcontrollers.
Filed Under: Tutorials, What Is