Find Research Output

Selected filters: School of Advanced Technology; Book

1. Fast graph-based semi-supervised learning and its applications

Author: Zhang, Yan Ming; Huang, Kaizhu; Geng, Guang Gang; Liu, Cheng Lin

Source: Semi-Supervised Learning: Background, Applications and Future Directions, 2018, Vol.

Abstract: Despite the great success of graph-based transductive learning methods, most of them have serious problems with scalability and robustness. In this chapter, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate a graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm that labels the tree so that the cut size of the tree is minimized. This significantly improves on graph-based methods, which typically have polynomial time complexity. Moreover, we show theoretically and empirically that the performance of MTC is robust to the graph construction, overcoming another major problem of traditional graph-based methods. Extensive experiments on public data sets and applications to text extraction from images demonstrate our method's advantages in terms of accuracy, speed, and robustness.
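
The core idea described in the abstract is easy to prototype: replace the full similarity graph with a spanning tree and propagate labels along tree edges. The Python sketch below is a minimal stand-in, not the authors' linear-time minimum-cut labelling: it builds a kNN graph, takes its minimum spanning tree, and assigns each unlabeled point the label of its nearest labeled point measured along tree edges. The dataset, neighbour count, and labelling rule are illustrative assumptions.

# Minimal sketch (assumed setup, not the authors' MTC algorithm): approximate a
# kNN similarity graph by its minimum spanning tree, then give each unlabeled
# node the label of the nearest labeled node measured along tree edges.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

X, y = make_moons(n_samples=400, noise=0.08, random_state=0)
labeled = np.random.RandomState(0).choice(len(X), size=10, replace=False)

knn = kneighbors_graph(X, n_neighbors=10, mode="distance")  # sparse similarity graph
tree = minimum_spanning_tree(knn)                           # spanning-tree approximation

# Tree-path distance from every labeled node to all nodes; nearest labeled node wins.
dist = dijkstra(tree, directed=False, indices=labeled)
pred = y[labeled][dist.argmin(axis=0)]
print("transductive accuracy:", (pred == y).mean())

The authors' linear-time dynamic program on the tree would replace the Dijkstra step here; the sketch only illustrates why a spanning tree keeps the graph sparse enough for large-scale data.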

2. Semi-supervised learning: Background, applications and future directions

Author: Zhong, Guoqiang; Huang, Kaizhu

Source: Semi-Supervised Learning: Background, Applications and Future Directions, 2018, Vol.

Abstract: Semi-supervised learning is an important area of machine learning. It deals with problems that involve a lot of unlabeled data and very scarce labeled data. The book focuses on state-of-the-art research on semi-supervised learning. In the first chapter, Weng, Dornaika and Jin introduce a graph construction algorithm named constrained data self-representative graph construction (CSRGC). In the second chapter, to reduce the complexity of graph construction, Zhang et al. use anchors, a special subset chosen from the original data, to construct the full graph, while injecting randomness into the graphs to improve classification accuracy and deal with the high-dimensionality issue. In the third chapter, Dornaika et al. introduce a kernel version of the Flexible Manifold Embedding (KFME) algorithm. In the fourth chapter, Zhang et al. present an efficient and robust graph-based transductive classification method, known as the minimum tree cut (MTC), for large-scale applications. In the fifth chapter, Salazar, Safont and Vergara investigate the performance of semi-supervised learning methods in two-class classification problems with a scarce population of one of the classes. In the sixth chapter, by breaking the assumption that samples are identically and independently distributed (i.i.d.), a novel framework called the field support vector machine (F-SVM), serving both classification (F-SVC) and regression (F-SVR) purposes, is introduced. In the seventh chapter, Gong employs the curriculum learning methodology by investigating the difficulty of classifying every unlabeled example; as a result, an optimized classification sequence is generated during the iterative propagations, and the unlabeled examples are logically classified from simple to difficult. In the eighth chapter, Tang combines semi-supervised learning with geo-tagged photo streams and concept detection to explore situation recognition. This book is suitable for university students (undergraduate or graduate) in computer science, statistics, electrical engineering, and anyone else who would potentially use machine learning algorithms; for professors who research artificial intelligence, pattern recognition, machine learning, data mining and related fields; and for engineers who apply machine learning models in their products.

3. Self-training field pattern prediction based on kernel methods

Author: Jiang, Haochuan; Huang, Kaizhu; Zhang, Xu Yao; Zhang, Rui

Source: Semi-Supervised Learning: Background, Applications and Future Directions, 2018, Vol.

Abstract: Conventional predictors often regard input samples as identically and independently distributed (i.i.d.). Such an assumption does not always hold in many real scenarios, especially when patterns occur in groups, where each group shares a homogeneous style. These tasks are referred to as field prediction, which can be divided into field classification and field regression. Traditional i.i.d.-based machine learning models face degraded performance on them. By breaking the i.i.d. assumption, a novel framework called the Field Support Vector Machine (F-SVM), serving both classification (F-SVC) and regression (F-SVR) purposes, is introduced in this chapter. Specifically, the proposed F-SVM predictor learns simultaneously both the predictor and a Style Normalization Transformation (SNT) for each group of data (called a field). Such joint learning is proved to be feasible even in the high-dimensional kernel space. An efficient alternating optimization algorithm is further designed, with its convergence guaranteed theoretically and verified experimentally. More importantly, a self-training-based kernelized algorithm is also developed to extend the F-SVM model to fields unknown during the training phase, by learning a transductive SNT that transfers the trained field information to the data of unknown style. A series of experiments on both classification and regression tasks verifies the effectiveness of the F-SVM model, improving classification accuracy and reducing regression error. Empirical results demonstrate that the proposed F-SVM achieves the best performance so far on several benchmark datasets, significantly better than state-of-the-art predictors.
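
As a rough illustration of the field setting, the Python snippet below is a hedged sketch, not the F-SVM joint optimization described above: it normalizes each field separately before fitting one shared SVM, with the per-field standardization acting as a crude stand-in for the learned Style Normalization Transformation. The synthetic fields and style shifts are assumptions made for the example.

# Sketch only: per-field normalization as a surrogate for the SNT, followed by
# a single shared SVM. The fields and their style shifts are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
fields = rng.randint(0, 6, size=len(X))          # 6 fields, each sharing a style
X = X + 3.0 * rng.randn(6, X.shape[1])[fields]   # per-field style shift

def normalize_per_field(X, fields):
    """Center and scale each field separately (a simple SNT surrogate)."""
    Xn = X.copy()
    for f in np.unique(fields):
        idx = fields == f
        Xn[idx] = (X[idx] - X[idx].mean(axis=0)) / (X[idx].std(axis=0) + 1e-8)
    return Xn

Xn = normalize_per_field(X, fields)
Xtr, Xte, ytr, yte = train_test_split(Xn, y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("accuracy with per-field normalization:", clf.score(Xte, yte))

In the chapter's method the per-field transformation and the predictor are learned jointly (and in kernel space); the fixed standardization here only shows why removing a shared group style before prediction can help.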
Total 3 results found