
Software

What does the AI software do overall?

With the AI software, we can perform facial recognition, object recognition and sound recognition. The software analyzes the output of the camera or of the microphone and interprets it. It uses two well-known approaches, machine learning and deep learning, which we explain below.
The software is based on Python and Java, and you can download it in the "Download App" tab. It can be used not only with our robot but also on your own computer, with your own camera and microphone.

Machine learning

First and foremost, machine learning carries with it the connotation that it is extremely complex. While it is mathematically rigorous, it is really quite simple when you break it down into mathematical terms, and even simpler to grasp once you see a real-world example of how it is used all the time on people like you and me. My goal is to teach anyone with a basic understanding of algebra how this works under the hood.
The seed of AI is rooted in something called a neural network. Neural networks are mathematical models that translate the first principles of human thought (neurons) into mathematics, and then into a computer's interpretation of that math.
Modern computers are different from humans, though: they process binary data and have to be given a very specific set of instructions in order to operate, whereas a newborn child is not given instructions to move, eat, or cry. Babies are "pre-programmed" to do these things on their own. A computer just isn't a computer without a basic set of instructions. Reasoning by analogy, the out-of-the-womb behavior that babies have is similar to a computer's BIOS, the most basic set of instructions upon "life" (in a computer's case, turning on). The only difference is that the BIOS had to be written by someone else, and we humans are not 100% certain of where our pre-programming came from, but that is a whole other point.
Supervised learning relies on a supervisor (the human) and is used to solve two types of problems: classification (grouping by similarity) and regression (formulating a quantitative output based on a series of independent inputs).
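The two problem types above can be sketched in a few lines of plain Python. This is a minimal illustration, not our software's actual code: the animal measurements and labels are invented, classification is shown with a nearest-neighbour rule, and regression with an ordinary least-squares line fit.

```python
import math

# Hypothetical labelled dataset: (height_cm, weight_kg) -> species label.
# The numbers and labels are made up purely for illustration.
training_data = [
    ((20.0, 4.0), "cat"),
    ((22.0, 5.0), "cat"),
    ((60.0, 25.0), "dog"),
    ((65.0, 30.0), "dog"),
]

def classify_1nn(point):
    """Classification: give the new point the label of its nearest
    training example (grouping by similarity)."""
    _, label = min(training_data, key=lambda item: math.dist(item[0], point))
    return label

def fit_line(xs, ys):
    """Regression: least-squares fit of y = slope * x + intercept,
    a quantitative output from a series of inputs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

print(classify_1nn((58.0, 27.0)))            # -> dog
print(fit_line([1, 2, 3, 4], [2, 4, 6, 8]))  # -> (2.0, 0.0)
```

In both cases the "supervisor" is whoever provided the correct answers (the labels, or the known y values) for the training data.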

There are four main parts to the process of supervised machine learning (SML) that you need to understand. They are similar to the previous section, where we talked about how computers learn, but here we will go into further detail on a few key parts of the mathematics.


Much as its name suggests, unsupervised machine learning involves minimal human involvement. The data still needs to be cleaned up and made presentable, but these models are useful precisely because they can tell humans things that do not meet the eye.
One way this is used is in e-commerce. Companies have enormous amounts of data on customers, both prospective and current, and every company wants to find new customers for its products. By applying unsupervised learning to these massive datasets, companies can find new ways to sell to their customers, along with new products to sell them.
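A classic unsupervised technique for this kind of customer segmentation is k-means clustering, which groups records without any labels. The sketch below uses invented customer records and a minimal pure-Python k-means; it is illustrative only, not the segmentation a real e-commerce system would run.

```python
import random

# Hypothetical customer records: (number of orders, average basket in euros).
# Values are invented for illustration only.
customers = [
    (1, 10), (2, 12), (1, 9),        # occasional small buyers
    (20, 150), (22, 160), (19, 140), # frequent big spenders
]

def kmeans(points, k, iterations=10, seed=0):
    """Minimal k-means: alternately assign each point to its nearest
    centroid, then recompute each centroid as the mean of its cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(c) / len(cluster) for c in zip(*cluster)) if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(customers, k=2)
print(clusters)  # the two customer segments, found without any labels
```

No human told the algorithm which customers belong together; the grouping emerges from the data itself, which is exactly the point of unsupervised learning.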

Deep learning

Deep learning is a specific kind of machine learning. To understand deep learning well, one must have a solid understanding of the basic principles of machine learning.
Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analogue.
The adjective "deep" in deep learning refers to the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial activation function and one hidden layer of unbounded width can be. Deep learning is a modern variation concerned with an unbounded number of layers of bounded size, which permits practical application and optimized implementation while retaining theoretical universality under mild conditions. In deep learning, the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and understandability.
Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNNs), although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.
In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own. (Of course, this does not completely eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.)
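The idea of each layer producing a more abstract representation can be sketched with two stacked fully connected layers in plain Python. The weights below are hand-picked toy values, not learned, and the "edge" and "shape" interpretations are purely illustrative; a real network would learn its weights from data.

```python
def relu(v):
    """Common nonlinearity: keep positive values, zero out the rest."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: a weighted sum of the inputs plus a bias."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

# Raw "pixels" in; an edge-like representation after layer 1;
# a single, more abstract "shape" score after layer 2.
pixels = [0.0, 1.0, 1.0, 0.0]
layer1 = relu(dense(pixels,
                    weights=[[1, -1, 0, 0], [0, 0, -1, 1], [1, 1, 1, 1]],
                    biases=[0.0, 0.0, 0.0]))
layer2 = relu(dense(layer1, weights=[[0.5, 0.5, 1.0]], biases=[-1.0]))

print(layer1)  # -> [0.0, 0.0, 2.0]  (intermediate representation)
print(layer2)  # -> [1.0]            (final, most abstract output)
```

Each layer only ever sees the output of the layer below it, which is what makes the representations progressively more composite and abstract.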
The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than 2. CAP of depth 2 has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > 2) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively.
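The CAP-depth count for a feedforward network follows directly from the definition above. The layer sizes here are made-up example values, purely to show the arithmetic.

```python
# Hypothetical feedforward architecture: input, two hidden layers, output.
layer_sizes = [784, 128, 64, 10]

# CAP depth for a feedforward net = number of hidden layers + 1,
# the +1 counting the parameterized output layer.
hidden_layers = len(layer_sizes) - 2
cap_depth = hidden_layers + 1

print(cap_depth)  # -> 3, so this counts as "deep" (CAP > 2)
```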
Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance.
For supervised learning tasks, deep learning methods eliminate feature engineering, by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation.
Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.

