They sound like the latest buzzwords. Thanks in no small part to Google and its conference ‘The Magic in the Machine’, you hear about machine learning more often than ever. But what exactly is this much-talked-about machine learning?
To begin with, we can say that it is a set of techniques applied to artificial intelligence that allows machines to learn, where the learning is obtained from examples they are given. Or, what amounts to the same thing: a process of induction of knowledge.
A matter of algorithms
‘For the system to know that this is a dog, we show it pictures of dogs. If it says this is a dog but it really isn’t, it has to understand that it has been wrong.’ These are the words of Jeremiah Harmsen, leader of the Google Research department in Europe.
This is machine learning: categorization algorithms. Suppose we have a bag of data: colors, shapes, and sizes. And each color is associated with a vehicle manufacturer. By means of an ML classification algorithm, we can teach the machine which data corresponds to which brand.
Technically, this is known as linear regression: modeling the relationship between a dependent variable Y and one or more independent variables X. The difference between traditional programming (by explicit commands) and ML is that, given a basic set of rules in the algorithm, the machine can keep learning without anyone needing to “teach” it each case.
That is, the training algorithm is responsible for generating the predictive model, and the prediction algorithm is responsible for classifying future entries based on that model. Obviously, the more entries and labels, the more accurate the algorithm will be.
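As a minimal sketch of that train-then-predict split, here is a toy linear regression in Python (the language the course mentioned at the end of this article uses); all the numbers below are invented, purely for illustration:

```python
import numpy as np

# Toy training data: X (independent variable) and Y (dependent variable).
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])  # roughly Y = 2X

# "Training": fit a line Y = slope * X + intercept by least squares.
slope, intercept = np.polyfit(X, Y, deg=1)

# "Prediction": score a future entry with the learned model.
new_x = 6.0
print(f"Predicted Y for X={new_x}: {slope * new_x + intercept:.2f}")
```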
The possibilities of training and labeling
Thanks to this system of training and labeling we can, for example, detect whether a tumor is malignant or benign judging by its specific characteristics, without needing a medical analysis.
Before being used in earnest, these types of models are “trained” to check whether their reactions to the patterns in the information are correct. Once trained, their predictions can save time and money for the companies that apply the different ML algorithms. The “quality” of an algorithm is determined by how frequently and how precisely it gets its predictions right. But let’s not forget that even an expert in their field has some margin of error.
These types of algorithms can also be set to operate independently, for example in vehicle and home sales portals. If the X variables are the characteristics (geographical position with respect to the center, number of rooms, materials, size, age), Y would be the resulting price. With this tool, we can predict at what price to sell or buy a home.
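A hedged sketch of that idea with scikit-learn; the feature values and prices below are invented, far too few for a real model, and only show the shape of the code:

```python
from sklearn.linear_model import LinearRegression

# Each row: [distance to center (km), rooms, size (m2), age (years)].
X = [
    [1.0, 3, 80, 30],
    [5.0, 2, 60, 10],
    [0.5, 4, 120, 50],
    [8.0, 3, 90, 5],
]
# y: sale price in thousands (illustrative values).
y = [320, 180, 450, 210]

model = LinearRegression().fit(X, y)

# Predict the price of a new home from its characteristics.
new_home = [[2.0, 3, 85, 20]]
print(f"Estimated price: {model.predict(new_home)[0]:.0f}k")
```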
Solving Logistic Regression Problems
ML goes beyond these simple predictions. It can also be used to classify groups. Suppose we have to assign each piece of data to one of three possible categories.
Using a sigmoid function – the formula that traces the logistic curve – we can create the groups: it receives any input number and returns a real number between 0 and 1, which we interpret as a probability. Depending on where each case falls on the curve, we will know which group it belongs to.
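A minimal sketch of that sigmoid in Python; note that the three-category case above would be handled by running several such binary models (one per group), but the binary version shows the idea:

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number to a value in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Interpret the output as a probability and threshold it at 0.5.
for z in (-4.0, 0.0, 4.0):
    p = sigmoid(z)
    group = "group A" if p >= 0.5 else "group B"
    print(f"input {z:+.1f} -> probability {p:.3f} -> {group}")
```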
Obviously, to apply this logistic regression, we need a previously classified database to train the algorithm. Let’s not forget that it is like a child: first it observes, then it acts, and if it acts badly we “reprimand” it until we correct its behavior.
Logistic regression is one of the most popular tools in the world. It is used both for valuing investment risks and for detecting spam in our email accounts. That explains why some words are interpreted directly as spam while, even if you used the same words in an email between friends, the system would not rule out your message.
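Here is a hedged sketch of a toy spam filter built on logistic regression with scikit-learn; the example messages are invented and far too few for a real filter, but the pipeline shape is the standard one:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = spam, 0 = legitimate.
texts = [
    "win a free prize now", "cheap loans click here",
    "lunch tomorrow?", "see you at the meeting",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a logistic regression classifier.
spam_filter = make_pipeline(CountVectorizer(), LogisticRegression())
spam_filter.fit(texts, labels)

print(spam_filter.predict(["free prize inside"]))  # likely [1] (spam)
print(spam_filter.predict(["meeting at lunch"]))   # likely [0] (legit)
```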
K-means, SVM, and APIs
All these strange names refer to different algorithms and their possible applications. K-means clustering has the function of finding clusters, or relationships, among the data we have, without prior training. How? By randomness. We pick reference points (centroids) at random and, from there, see which data points are closest to each centroid, to assign them to its cluster. Something like mayors for neighborhoods.
From here, we just have to move the centroids iteratively until we find the “perfect shape”. Widely used in search engines over large databases, this algorithm is simple and ideal for finding convergences among thousands of references.
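To make the “move the centroids iteratively” step concrete, here is a minimal k-means sketch in plain NumPy (random initialization and a fixed number of iterations; the data is invented, and real implementations add smarter seeding, convergence checks, and handling of empty clusters):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two invented blobs of 2-D points, centered at (0, 0) and (5, 5).
data = np.concatenate([
    rng.normal(0.0, 1.0, size=(50, 2)),
    rng.normal(5.0, 1.0, size=(50, 2)),
])

k = 2
# Random start: pick k data points as the initial centroids.
centroids = data[rng.choice(len(data), size=k, replace=False)]

for _ in range(10):
    # 1. Assign each point to its nearest centroid.
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # 2. Move each centroid to the mean of the points assigned to it.
    centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print("Final centroids (one per cluster):\n", centroids)
```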
SVM stands for Support Vector Machine, which is also an algorithm for solving classification problems. And APIs are the different application programming interfaces through which these tools are exposed.
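For completeness, a quick hedged sketch of an SVM classifier with scikit-learn, again on invented toy data:

```python
from sklearn.svm import SVC

# Toy 2-D points and their classes (invented for illustration).
X = [[0, 0], [1, 1], [8, 8], [9, 9]]
y = [0, 0, 1, 1]

# A linear-kernel SVM separates the two groups with a maximum-margin line.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.5, 0.5], [8.5, 8.5]]))  # expected: [0 1]
```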
If we look at the common use of big data, we can understand it in a simple way: by emulating a black-box system – the classified data stays inside, with nobody from the “outside” intervening – these services can interpret data in the cloud, through machine learning, without us needing any infrastructure of our own.
Or, what amounts to the same thing, MLaaS (machine learning as a service): services that offer machine learning and data visualization tools. Whether it’s the smile recognition algorithm included in millions of cameras, voice and language processing, or WhatsApp’s predictive text, all these tools are born in data centers that a provider manages. The rest is highly optimized real-time computation.
Of course, this can be applied beyond the obvious. Imagine the predictive text not only recognizes or suggests the words you want to say, but associates those words with an emotion. Imagine that from a simple piece of text, a tweet, the API interprets how you feel and, consequently, suggests images or GIFs to suit the occasion. Well: that has existed for years.
The random forest
This is the “random forest”, another of the most common and powerful algorithms. It consists of combining decision trees that together form a forest. An isolated piece of data is nothing, but it can be part of something.
If we have an input value, we classify it according to its condition. Different branches are created which, in turn, belong to a different root category. Think of the classification scheme of the animal kingdom that you studied as a child and you have it.
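A hedged sketch of those branching decisions with scikit-learn’s random forest; the animal-style features and labels below are invented for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [has_fur, lays_eggs, number_of_legs] (invented toy data).
X = [
    [1, 0, 4],  # dog
    [1, 0, 4],  # cat
    [0, 1, 2],  # hen
    [0, 1, 2],  # duck
]
y = ["mammal", "mammal", "bird", "bird"]

# A forest of decision trees, each voting on the final category.
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(forest.predict([[1, 0, 4], [0, 1, 2]]))  # expected: mammal, bird
```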
Surely you have heard of the popular big.LITTLE, a processor architecture in which the “big” cores are the most powerful ones, the ones that consume the most energy and give that extra push when it is needed, while the “LITTLE” cores handle the lighter work and otherwise stay at rest to save energy and consume less of the phone’s battery. Well, a random forest can be the tool that defines the criteria for distributing the computing load.
We will all live in the cloud
So much positive talk about big data and cloud computing points directly at this type of tool. A large database is nothing by itself, just raw information.
Google has its own services, like so many other companies, and could not operate manually with so much information: they need automatic sorting and analysis tools. But, above all, they need to have that data.
Free knowledge
Geoffrey Hinton, a professor of computer science at the University of Toronto, published on YouTube a series of great lectures to delve into the subject: how it applies in object recognition, image segmentation, language modeling, and so on. It is a course full of tricks for implementing different algorithms, based on the Python programming language.
Of course, there are many more algorithms, and the possibilities for combining and configuring each one are enormous. You only have to look at the different internet search engines: each one performs a different analysis of the same data we type in.
In short, machine learning, like real, natural learning, will always have room for improvement, and it helps define what we mean by artificial intelligence. It also saves large amounts of resources: Google applies these advances to make its data centers more efficient, reducing the energy used to cool them by 40%.
There is still a long way to go before a virtual brain behaves with the efficiency of ours. But thanks to these “shortcuts” we can understand the different ways machines learn. And how to make them, in the end, more intelligent and intuitive.