Introduction to Artificial Intelligence: What is AI?

liu, tempo · 2021-09-17 09:28:09

This introduction to artificial intelligence explains what AI, machine learning, and deep learning are, how they relate to each other, and which algorithms are common, so that when someone asks you about these concepts, you can explain them in an easy-to-understand way. The key point of this article: the relationship between artificial intelligence (AI), machine learning (ML), and deep learning (DL) is DL ⊆ ML ⊆ AI.


Artificial intelligence can be likened to a child’s brain: machine learning is the process by which the child acquires cognitive abilities, and deep learning is a particularly effective teaching method within that process. Artificial intelligence is the purpose and the result; machine learning and deep learning are the method and the tool. The concept of artificial intelligence was proposed in the mid-1950s; the term machine learning was coined in 1959; deep learning rose to prominence in the 2000s. Deep learning long existed within machine learning as “neural network algorithms”; with the explosion of big data, it was singled out and became a school of learning in its own right.


The leaders of the world’s most influential technology companies, including Amazon, Facebook, Google, and Microsoft, are emphasizing their focus on artificial intelligence (AI).


But what is AI, and why is it important? Why is now a good time for AI? Interest in AI is growing, but the field is still understood mainly by experts. The purpose of this article is to explain AI in plain language.


Let’s start by explaining what AI means and defining the key terms. This article will then explain how one of the most effective areas of AI, deep learning, works; explore the problems AI solves and why they matter; and review the history of AI, including why the concept existed in the 1950s but has only exploded today.


Venture capitalists, who are always trying to find new trends that create value for consumers and companies, believe AI is a more important evolution in computing than the shift to mobile or cloud. “It’s hard to overstate,” writes Amazon CEO Jeff Bezos, “the enormous impact AI will have on society over the next 20 years.” Whether you are a consumer, a public servant, an entrepreneur or an investor, this emerging trend matters to us all.


What is AI? Artificial intelligence, a term coined by John McCarthy in 1956, is a generic label for hardware or software that exhibits intelligent behavior. In Professor McCarthy’s words, it is “the science and engineering of making intelligent machines, especially intelligent computer programs”. The term has been around for decades, yet progress was long limited because the algorithms needed to solve many real-world problems are too complex to specify by hand.


Such complex activities include performing medical diagnoses, predicting when a machine will fail, and estimating the market value of certain assets; they involve thousands of data points and non-linear relationships between variables.


In these cases, it is difficult to use the data we have to ‘optimize’ our predictions. In other cases, including recognizing objects in images and translating between languages, we cannot even formulate rules that describe our goal. For example: how could we write a set of rules that completely describes the appearance of a dog?


What if we could shift the hard parts of complex prediction, data optimization and feature specification, from the programmer to the program? That is the key idea of modern artificial intelligence.


Machine Learning


Machine Learning: Reasoning – Knowledge – Learning. Machine learning (ML) is a subset of AI: all machine learning is AI, but not all AI is machine learning. Today’s enthusiasm for “AI” largely reflects enthusiasm for machine learning, where progress is rapid and visible. Machine learning lets us solve some complex problems through algorithms. As AI pioneer Arthur Samuel wrote in 1959, machine learning is a field of study that gives computers the ability to learn without being explicitly programmed.


The goal of most machine learning is to develop a prediction engine for a specific scenario. An algorithm receives information about a domain (say, the movies a person has watched in the past) and weighs those inputs to make a useful prediction (the probability that the person will want to watch a different movie in the future). By giving the computer the ability to learn, we let the algorithm optimize how it weighs the available data so that its predictions about the future become accurate.


The machine learns through training. The algorithm is initially given examples whose correct outputs are known; it notes the differences between its predictions and the correct outputs and tunes the weights of its inputs to improve the accuracy of its predictions until they are optimized. The defining characteristic of machine learning algorithms, then, is that the quality of their predictions improves with experience.
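The training loop described above can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers, not a production algorithm: a one-weight linear model learns the rule y = 2x from known examples by repeatedly nudging its weight in the direction that shrinks the prediction error (gradient descent).

```python
# (input, known correct output) pairs that follow the rule y = 2 * x
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # the weight starts out wrong
learning_rate = 0.05

for epoch in range(200):
    for x, target in examples:
        prediction = w * x
        error = prediction - target     # difference from the correct output
        w -= learning_rate * error * x  # adjust the weight to shrink the error

print(round(w, 3))  # close to 2.0 after training
```

The same loop, with many more weights and much more data, is what most machine learning training amounts to.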


The more data we can provide (usually up to a point), the better the prediction engine we can create. There are more than 15 common machine learning methods, each using a different algorithmic structure to optimize predictions based on the data it receives. Deep learning is currently the most popular; the others receive less attention but are very valuable and applicable to a wide range of use cases.


Specific machine learning algorithms include:

Constructing margin distributions (cluster analysis and pattern recognition):

Artificial neural networks

Decision trees

Support vector machines

Ensemble learning (e.g., AdaBoost)

Dimensionality reduction and metric learning

Bayesian classifiers

Constructing conditional probabilities (regression analysis and statistical classification):

Gaussian process regression

Linear discriminant analysis

Nearest neighbor methods

Radial basis function kernels

Constructing probability density functions via generative models:

Expectation–maximization (EM) algorithm

Probabilistic graphical models, including Bayesian networks and Markov random fields

Generative topographic mapping

Approximate inference techniques:

Markov chains

Monte Carlo methods

Variational methods

Optimization: most of the above methods use optimization algorithms, directly or indirectly.

Each method has its advantages and disadvantages, and methods can be combined. Which algorithm best solves a particular problem depends on factors including the nature of the available data set; in practice, developers tend to experiment to find the approach that works.
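To make one entry in the list above concrete, here is a minimal sketch of the nearest neighbor method: classify a new point by the label of the closest training point. The points and labels are invented purely for illustration.

```python
import math

# Tiny made-up training set: 2-D points with labels.
train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((4.0, 4.2), "dog"), ((3.8, 4.0), "dog")]

def nearest_neighbor(point):
    """Return the label of the training example closest to `point`."""
    def distance(example):
        (x, y), _label = example
        return math.dist(point, (x, y))  # Euclidean distance
    _, label = min(train, key=distance)
    return label

print(nearest_neighbor((1.1, 0.9)))  # "cat"
print(nearest_neighbor((4.1, 3.9)))  # "dog"
```

Note the contrast with the training loop shown earlier: nearest neighbor does no explicit training at all, it simply memorizes the data, which is why different algorithms suit different data sets.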


The use cases for machine learning vary with our needs and imagination. Given the right data, we can build algorithms for many different purposes, including:

Recommending products based on a person’s previous purchases.

Predicting when machinery on a production line will behave abnormally.

Predicting whether an email will be misunderstood.

Estimating the probability that a credit card transaction is fraudulent.
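The fraud use case above can be sketched with a hand-weighted model: score a transaction by summing weights for a few risk signals, then squash the score into a probability with the logistic function. The signals, weights, and bias here are invented for illustration; a real model would learn them from data.

```python
import math

# Invented risk signals and weights, for illustration only.
weights = {"amount_unusual": 2.0, "foreign_country": 1.5, "night_time": 0.8}
bias = -3.0  # most transactions are legitimate, so the baseline score is low

def fraud_probability(transaction):
    """Map boolean risk signals to a probability in (0, 1)."""
    score = bias + sum(weights[f] for f in transaction if transaction[f])
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash

normal = {"amount_unusual": False, "foreign_country": False, "night_time": False}
risky  = {"amount_unusual": True,  "foreign_country": True,  "night_time": True}

print(round(fraud_probability(normal), 3))  # a low probability
print(round(fraud_probability(risky), 3))   # a high probability
```

Machine learning's contribution is to choose the weights and bias automatically from labeled examples, rather than leaving them to a programmer's guesses as here.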


Deep Learning

Even with machine learning in general, writing programs that perform certain tasks, such as understanding speech and recognizing objects in images, is difficult.


For example, if we want to write a computer program that recognizes images of cars, we cannot specify an algorithm that processes the features of a car well enough to recognize one correctly in every case. Cars come in a variety of shapes, sizes and colors; their position, orientation and pose can vary; and background, lighting and many other factors influence an object’s appearance. There are so many variables that even if we wrote the rules out the hard way, the result would not scale: we would need a separate program for every kind of object we wanted to recognize.


But deep learning (DL) revolutionized the world of AI. Deep learning is a subset of machine learning: all deep learning is machine learning, but not all machine learning is deep learning.


Deep learning is useful because it frees programmers from feature specification (defining which features of the data to analyze) and optimization (deciding how to weigh the data to produce accurate predictions). Simply put, with deep learning, programmers no longer have to hand-craft features or weighting schemes. So how is deep learning implemented?


Deep learning is inspired by the brain. The human brain learns to handle hard tasks, including understanding speech and recognizing objects, not by applying exhaustive rules but through practice and feedback. A child who sees a car knows it is a car, and who sees a picture knows what it depicts; children are not handed a detailed set of rules, they master these skills through training.


Deep learning uses the same approach. Software-based computational units that roughly approximate neurons are connected together, as neurons are in the brain.


They form a ‘neural network’ that receives an input (to continue our example, a picture of a car), analyzes it, makes a judgment, and is told whether its judgment is correct; this is how it is trained. If the output is wrong, the algorithm adjusts the connections between the neurons, which changes future predictions. Initially the network will be wrong many times.
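This judge-compare-adjust loop can be shown with a single artificial neuron, the building block of a neural network. In this deliberately tiny sketch the neuron learns the logical AND function: it makes a judgment, is told the correct answer, and adjusts its connection weights whenever it is wrong.

```python
import math

def sigmoid(x):
    """Squash a raw score into a judgment between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

w1, w2, bias = 0.0, 0.0, 0.0   # the connections start untrained
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

for _ in range(5000):           # initially wrong many times
    for (x1, x2), target in examples:
        y = sigmoid(w1 * x1 + w2 * x2 + bias)  # the neuron's judgment
        error = y - target                      # told whether it was right
        # adjust the connections to make future judgments better
        w1 -= 0.5 * error * x1
        w2 -= 0.5 * error * x2
        bias -= 0.5 * error

for (x1, x2), target in examples:
    y = sigmoid(w1 * x1 + w2 * x2 + bias)
    print((x1, x2), round(y))  # matches logical AND
```

A real deep network stacks many such neurons in layers and propagates the error adjustments backwards through all of them; one neuron is used here only to keep the sketch short.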


Over millions of examples, the connections between neurons are adjusted and practice makes the network progressively better, moving step by step toward perfection. With deep learning we can now:


Recognize elements in pictures;

Translate languages in real time;

Control devices by voice (via Apple’s Siri, Google Now, Amazon Alexa and Microsoft Cortana);

Predict how genetic variation affects DNA transcription;

Analyze sentiment in customer comments;

Detect tumors in medical images;

And much more.
