New AI study may explain why deep learning works

Credit: Ryzhi


The resurgence of artificial intelligence (AI) is largely driven by advances in deep learning for pattern recognition, a form of machine learning that does not require explicit hand-coded rules.

The design of deep neural networks is loosely inspired by the biological brain and by neuroscience. Like the biological brain, the inner workings of deep networks are largely unexplained, and no single unifying theory exists.

Researchers at the Massachusetts Institute of Technology (MIT) recently revealed new insights into how deep learning networks work, helping to open the black box of AI machine learning.

The MIT research trio of Tomaso Poggio, Andrzej Banburski, and Qianli Liao at the Center for Brains, Minds and Machines developed a new theory of why deep networks work and published their study in the Proceedings of the National Academy of Sciences (PNAS) on June 9, 2020.

The researchers focused their study on deep networks approximating certain classes of multivariate functions that avoid the curse of dimensionality: the exponential growth, with input dimension, in the number of parameters required to reach a given accuracy. In machine learning practice, data are frequently high-dimensional. Examples of high-dimensional data include facial recognition images, customer purchase histories, patient health care records, and financial market analyses.
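The scale of the problem is easy to see with a quick count. The sketch below, in Python, shows how many samples a naive grid tabulation of a function on the unit cube would need as the dimension grows; the resolution and dimensions are illustrative choices, not figures from the study.

```python
# A minimal sketch of the curse of dimensionality: tabulating a generic
# function on the unit cube [0, 1]**d with k grid points per axis requires
# k**d samples, so the naive cost grows exponentially with the dimension d.
k = 10  # grid resolution per axis (illustrative)

for d in (1, 2, 3, 10, 100):
    print(f"dimension {d:>3}: {k**d:.3e} grid points")
```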

Depth refers to the number of layers in a deep network: the more layers of computation a network has, the deeper it is. To formulate their theory, the team examined deep learning's approximation power, the dynamics of optimization, and out-of-sample performance.

In the study, the researchers compared deep and shallow networks, both of which were built from the same set of procedures: pooling, convolution, linear combinations, pointwise nonlinear functions of a single variable, and dot products. Given that both are universal approximators, why do deep networks have greater approximation power and achieve better results than shallow networks?
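To make the comparison concrete, here is a hedged NumPy sketch of both kinds of network built from the same primitives (linear combinations, a pointwise nonlinearity, and a final dot product); the widths, depth, and random weights are illustrative choices, not the configurations analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Pointwise (one-variable) nonlinearity applied elementwise
    return np.maximum(x, 0.0)

def shallow_net(x, W, v):
    # One hidden layer: linear combination, nonlinearity, final dot product
    return v @ relu(W @ x)

def deep_net(x, weights, v):
    # Several stacked layers built from the same primitives
    h = x
    for W in weights:
        h = relu(W @ h)
    return v @ h

x = rng.normal(size=8)
W_shallow = rng.normal(size=(32, 8))                   # one wide hidden layer
v_shallow = rng.normal(size=32)
Ws_deep = [rng.normal(size=(8, 8)) for _ in range(4)]  # four narrow layers
v_deep = rng.normal(size=8)

print(shallow_net(x, W_shallow, v_shallow))
print(deep_net(x, Ws_deep, v_deep))
```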

The scientists observed that when a deep neural network's architecture is matched to a hierarchically local target function, this exponential cost disappears and becomes manageable again. They demonstrated that the curse of dimensionality can be avoided by deep networks of the convolutional type for certain classes of compositional functions. For problems with hierarchical locality, such as image classification, deep networks are exponentially more powerful than shallow networks.
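As an illustration of what hierarchical locality means, the sketch below builds a function of eight variables as a binary tree of two-variable constituents. The constituent g is invented purely for illustration; only the tree structure, in which each node sees a small number of inputs, reflects the class of functions the paper studies.

```python
# Hypothetical two-variable constituent; the paper's argument depends only
# on the hierarchical structure, not on this particular choice of g.
def g(a, b):
    return (a * b + a + b) / 3.0

def compositional_f(x1, x2, x3, x4, x5, x6, x7, x8):
    # Depth-3 binary tree: each level composes local two-variable functions,
    # mirroring the layer structure of a deep (convolutional-type) network.
    return g(g(g(x1, x2), g(x3, x4)), g(g(x5, x6), g(x7, x8)))

print(compositional_f(*range(1, 9)))
```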

The researchers wrote: "In approximation theory, both shallow and deep networks approximate continuous functions at an exponential cost. However, for certain types of compositional functions, we have proven that deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality."
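The quote compresses a pair of complexity bounds. The display below paraphrases them; the exact statements, constants, and smoothness assumptions are in the PNAS article.

```latex
% Paraphrase of the approximation bounds (see the paper for exact forms):
% N is the number of units needed to reach accuracy \epsilon on a function
% of n variables.
\[
  N_{\text{shallow}} = O\!\left(\epsilon^{-n}\right)
  \qquad\text{vs.}\qquad
  N_{\text{deep}} = O\!\left((n-1)\,\epsilon^{-2}\right),
\]
% where the deep bound applies to hierarchically compositional functions
% built from two-variable constituents, matched by the network's structure.
```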

The team also explained why heavily over-parameterized deep networks can still perform well on data outside the training set. For classification problems, the researchers proved that for a standard deep network trained with gradient descent, what matters is the direction in parameter space rather than the norms or the size of the weights.
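One way to see why direction rather than size matters: a bias-free ReLU network is positively homogeneous in its weights, so rescaling every weight by a positive constant rescales the outputs but cannot change which class scores highest. The NumPy sketch below checks this directly; the sizes and the scaling factor are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0.0)

def net(x, W1, W2):
    # Bias-free two-layer ReLU network: positively homogeneous in (W1, W2)
    return W2 @ relu(W1 @ x)

x = rng.normal(size=8)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(3, 16))

c = 7.5  # any positive rescaling of the weights
base = net(x, W1, W2)
scaled = net(x, c * W1, c * W2)

# Outputs scale by c**2, but the predicted class (the argmax) is unchanged:
# the classification depends on the weights' direction, not their norm.
assert np.allclose(scaled, (c ** 2) * base)
assert np.argmax(scaled) == np.argmax(base)
print(np.argmax(base), np.argmax(scaled))
```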

Through an interdisciplinary combination of applied mathematics, statistics, engineering, cognitive science, and computer science, the MIT researchers have developed a theory of why deep learning works, one that may accelerate the development of novel machine learning methods and future advances in artificial intelligence.


