Learning Theory

Systems, that is, functioning entities that take an input and produce an output, arise in many scientific studies. Understanding how a specific system performs means being able to predict its output for a given input. For example, one may be interested in predicting the glucose concentration in blood at a given time of the day, or in predicting whether an individual customer would appreciate a specific product, such as a movie, book, or song. The internal structure of many systems is often unavailable, which makes it difficult to describe the mechanisms by which the system produces an output from a given input. The only available information about the system is then the knowledge of certain input-output pairs. These pairs are obtained from observations of the system and are often called the empirical data.

In Machine Learning [Alp04, Bis06, Mit97], one is concerned with the design and development of algorithms, called (machine) learning algorithms, that allow computers (machines) to predict, or to make a decision about, the system output based on the empirical data from the system observations. The analysis of learning algorithms is done in the framework of (Computational) Learning Theories. One such theory is the so-called Statistical Learning Theory [PogSma03, Vap98]. According to this theory, the learning algorithm should construct a function, called an estimator, that approximates the relationship between the system input and the system output well. In our research, we construct learning algorithms using so-called multi-penalty regularization (MPR) techniques, which are known from the field of regularization of inverse problems. We also analyze the quality of the corresponding estimators. In particular, in [LPS12], we showed that MPR produces estimators with better extrapolation properties than the conventional approaches. In [HNP14], we proposed and analyzed a new method based on the MPR approach for detecting relevant variables in input-output relations. Applying this method to the reconstruction of gene regulatory networks, we obtained results that clearly demonstrate the competitiveness of the proposed method with respect to previously known approaches.
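To illustrate the general idea of multi-penalty regularization, the following sketch constructs a linear estimator from empirical data by minimizing a least-squares data-fit term plus two penalty terms with separate regularization parameters. This is a minimal, hypothetical example for intuition only (the function name `multi_penalty_estimator`, the choice of penalties, and the first-difference operator `L` are illustrative assumptions, not the specific methods of [LPS12] or [HNP14]).

```python
import numpy as np

def multi_penalty_estimator(A, y, lam1, lam2, L):
    """Minimize ||A x - y||^2 + lam1 * ||x||^2 + lam2 * ||L x||^2.

    A    : (m, n) design matrix built from the observed inputs
    y    : (m,)   observed outputs
    lam1 : weight of the ridge-type penalty on x
    lam2 : weight of the penalty on L x (e.g. a smoothness penalty)
    L    : penalty operator, e.g. a first-difference matrix

    Returns the minimizer of the multi-penalty functional, obtained
    from the normal equations (A^T A + lam1*I + lam2*L^T L) x = A^T y.
    """
    n = A.shape[1]
    M = A.T @ A + lam1 * np.eye(n) + lam2 * (L.T @ L)
    return np.linalg.solve(M, A.T @ y)

# Example: recover coefficients from noiseless empirical data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))          # 50 observed inputs, 5 features
x_true = np.array([1.0, -2.0, 0.0, 3.0, 0.5])
y = A @ x_true                            # corresponding outputs
L = np.diff(np.eye(5), axis=0)            # first-difference penalty operator
x_hat = multi_penalty_estimator(A, y, 1e-3, 1e-3, L)
```

With several penalty terms, the relative weights `lam1`, `lam2` can be tuned independently, which gives the estimator more flexibility than single-penalty (e.g. plain ridge) regularization; choosing these parameters well is a central question in the MPR literature.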


[Alp04] E. Alpaydin. Introduction to Machine Learning (Adaptive Computation and Machine Learning). MIT Press, 2004.

[Bis06] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

[HNP14] K. Hlavackova-Schindler, V. Naumova, S. Pereverzyev Jr. Multi-penalty regularization for detecting relevant variables. Applied Mathematics Preprint Nr. 11, University of Innsbruck, 2014, submitted.

[LPS12] S. Lu, S. Pereverzyev Jr., S. Sampath. Multi-parameter regularization for construction of extrapolating estimators in statistical learning theory. In "Multiscale Signal Analysis and Modeling" (Eds.: X. Shen and A. I. Zayed). Springer Lecture Notes in Electrical Engineering, 2012.

[Mit97] T. M. Mitchell. Machine Learning. McGraw Hill, 1997.

[PogSma03] T. Poggio and S. Smale. The mathematics of learning: dealing with data. Notices Am. Math. Soc., 50(5):537–544, 2003.

[Vap98] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.